Your first AI application
GLBNXT Platform is designed to make building your first AI application straightforward. The infrastructure is already in place, models are available immediately, and the platform components needed to connect data, logic, and interfaces are ready to use from day one. This section walks you through going from a blank canvas to a working AI application running in your platform environment.
Before You Start
Before building your first application, confirm the following:
You have logged in to the platform console and can access your environment
Your team roles and permissions have been assigned correctly
You know which models are available in your Model Hub
You have a clear idea of the type of application you want to build
If you are unsure which application type fits your use case, the Solution Architecture Patterns section provides an overview of the three flagship categories available on the platform: AI Assistants and Chat Interfaces, RAG and Knowledge Systems, and Multi-Agent Workflows and Automation.
Choosing Your Approach
GLBNXT Platform supports both low-code and full-code development paths. Before you start building, it is worth deciding which approach is right for your team and your use case.
Low-code is the fastest path to a working application. It uses visual builders, pre-built templates, and drag-and-drop configuration to connect models, data sources, and interfaces without writing significant amounts of code. This approach is well suited for teams who want to demonstrate value quickly or who are building solutions that follow established patterns.
Full-code gives engineers direct access to model APIs, database connections, and platform primitives. It is suited for custom architectures, complex integrations, or solutions that require precise control over logic and data flow.
Both paths are available within the same platform environment, and most production applications combine elements of both. Start with the approach that matches your team's strengths and the complexity of your first use case.
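As a sketch of what the full-code path looks like, the snippet below assembles a chat request payload for a model endpoint. The endpoint URL, payload shape, and field names are illustrative assumptions, not the documented GLBNXT API schema; confirm the actual request format in your Model Hub's model details.

```python
import json


def build_chat_request(endpoint_url: str, system_prompt: str, user_message: str):
    """Assemble a chat request for a model endpoint.

    The message-list payload shape used here is a common convention,
    not the confirmed GLBNXT schema -- check your Model Hub's model
    details page for the actual request format.
    """
    payload = {
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ]
    }
    return endpoint_url, json.dumps(payload)


# Hypothetical endpoint URL, as it might appear on a model details page.
url, body = build_chat_request(
    "https://models.glbnxt.example/v1/chat",
    "You are a concise internal assistant.",
    "What can you help me with?",
)
```

From here, a full-code application would send this payload to the endpoint over HTTPS and parse the response, with the same system-prompt and scoping decisions the low-code path makes through the console.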
Building a Simple AI Assistant
The quickest way to understand how the platform works end-to-end is to build a simple AI assistant. The following steps guide you through connecting a model, configuring a chat interface, and running your first conversation.
Step 1: Select a Model
Navigate to the Model Hub in your platform console. You will see a list of models available in your environment. Select a model appropriate for conversational use. If you are unsure which model to choose, your GLBNXT onboarding contact can advise based on your specific requirements.
Note the model endpoint URL displayed in the model details. You will need this in the next step.
Step 2: Configure a Chat Interface
Navigate to the Applications area of the console. Depending on your environment configuration, you will have access to a hosted chat interface that can be connected to any model available in your Model Hub. Open the configuration settings for the chat interface and set the model endpoint to the URL you noted in Step 1. Set a system prompt that defines the assistant's behaviour and scope. Save your configuration.
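The configuration from this step can be thought of as two values: the model endpoint and the system prompt. The sketch below shows them as a plain dictionary with a small sanity check; the field names are illustrative placeholders, since the console's configuration form may label these settings differently.

```python
# Hypothetical chat-interface configuration. The keys are illustrative;
# your console's configuration form may use different labels.
chat_config = {
    "model_endpoint": "https://models.glbnxt.example/v1/chat",  # noted in Step 1
    "system_prompt": (
        "You are an internal assistant. Answer questions about onboarding. "
        "If you do not know the answer, say so."
    ),
}


def validate_config(config: dict) -> list:
    """Return a list of configuration problems, empty if the config looks sane."""
    problems = []
    if not config.get("model_endpoint", "").startswith("https://"):
        problems.append("model_endpoint must be an HTTPS URL")
    if not config.get("system_prompt"):
        problems.append("system_prompt must not be empty")
    return problems
```

A check like this is useful even in the low-code path: an empty system prompt or a mistyped endpoint URL are the two most common reasons a first test conversation fails.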
Step 3: Test Your Application
Launch the chat interface from the Applications area. Send a test message and confirm that the model responds correctly. Review the response quality against your system prompt configuration and adjust as needed. Once you are satisfied with the behaviour, your first AI application is running.
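A quick way to make "responds correctly" concrete is a smoke check on the response before moving on. The response shape below (a dict with a text field and an optional error flag) is an assumption for illustration; adapt the keys to whatever your model's actual response schema is.

```python
def smoke_check(response: dict) -> bool:
    """Basic checks on a chat response: no error flag, non-empty text.

    The response shape (a dict with "text" and an optional "error" key)
    is an assumption for illustration -- map these keys to your model's
    actual response schema.
    """
    if response.get("error"):
        return False
    text = response.get("text", "")
    return bool(text.strip())


# Example: a stand-in response, as if returned by the chat interface.
sample = {"text": "Hello! I can help with onboarding questions."}
ok = smoke_check(sample)
```

Beyond this mechanical check, review the response against your system prompt: an assistant scoped to onboarding questions should decline unrelated requests, and that behaviour is worth testing explicitly.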
Step 4: Review Your Observability Output
Navigate to the Monitoring and Observability area of the console. You should see logs and usage data generated by the test conversation you just ran. Confirm that logging is active and that your interactions are being captured in the audit trail. This is an important step before any production deployment: GLBNXT Platform captures a complete record of model interactions, and confirming that this capture is working correctly ensures your compliance posture is in place from day one.
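One way to make this confirmation systematic is to check that each logged interaction carries the fields your audit process needs. The field names below are illustrative placeholders, not the platform's documented audit schema; substitute whatever fields your observability output actually contains.

```python
# Fields an audit record might be expected to carry. These names are
# illustrative placeholders -- the actual audit-trail schema is defined
# by the platform, so substitute the real field names from your logs.
REQUIRED_FIELDS = {"timestamp", "model", "prompt", "response", "user_id"}


def audit_gaps(record: dict) -> set:
    """Return the expected fields missing from a single log record."""
    return REQUIRED_FIELDS - record.keys()
```

Running a check like this over the records from your test conversation gives you a concrete pass/fail answer, rather than an eyeball scan of the log view.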
Going Further
A simple chat assistant is the starting point, not the end goal. From here, your team can extend the application in several directions depending on your use case:
Add a knowledge base: connect a vector database and a document ingestion pipeline to give your assistant access to your organisation's own data through RAG
Add workflow automation: connect your assistant to external systems, APIs, and data sources using workflow automation tools available in your environment
Build a multi-agent system: extend your application with additional agents that handle specific tasks, reason over tools, and collaborate to complete complex processes
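To illustrate the shape of the first direction, the sketch below shows the core RAG idea: retrieve the most relevant document for a question, then prepend it to the prompt as context. The word-overlap scoring is a toy stand-in for the vector-similarity search a real pipeline would use, and the documents and prompt format are invented for illustration.

```python
def retrieve(query: str, documents: list, k: int = 1) -> list:
    """Rank documents by word overlap with the query -- a toy stand-in
    for the vector-similarity search a real RAG pipeline would use."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_rag_prompt(query: str, documents: list) -> str:
    """Prepend the retrieved context to the user's question."""
    context = "\n".join(retrieve(query, documents, k=1))
    return f"Context:\n{context}\n\nQuestion: {query}"


# Invented example documents standing in for an ingested knowledge base.
docs = [
    "Expense reports are due on the first Monday of each month.",
    "The office wifi password rotates quarterly.",
]
prompt = build_rag_prompt("When are expense reports due?", docs)
```

In a production RAG system the retrieval step would query a vector database over embeddings of your ingested documents, but the overall flow (retrieve, assemble context, prompt the model) is the same.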
Each of these directions is covered in detail in the Building AI Solutions section of this documentation.
Getting Help
If you encounter issues during your first build, your GLBNXT onboarding contact is your first point of support during the onboarding period. For questions about specific platform components, refer to the relevant sections of this documentation. For issues with the environment itself, use the support channels described in the Support and Escalation section.