Architecture patterns

GLBNXT Platform is designed to support a wide range of AI application architectures. Rather than prescribing a single way to build, the platform provides the components and managed services that underpin every common AI solution pattern, and gives your team the flexibility to apply them in the way that best fits your use case.

This section introduces the three flagship solution categories available on the platform, describes the architectural patterns that underpin each one, and helps your team identify the right starting point for the solution you are building.

The Three Solution Categories

Every AI application built on GLBNXT Platform falls into one or more of the following categories. Understanding which category your use case belongs to shapes the architecture decisions, component choices, and development approach you should take.

AI Assistants and Chat Interfaces

AI assistants are conversational applications that give users access to AI capabilities through a natural language interface. They range from simple single-turn question and answer tools to sophisticated multi-turn agents that can reason, retrieve information, use tools, and maintain context across a conversation.

Common use cases in this category include internal knowledge assistants, customer support automation, compliance and legal Q&A tools, and sector-specific assistants for public sector, healthcare, and financial services organisations.

The core architectural pattern for this category involves a language model connected to a managed chat interface, with optional extensions for knowledge retrieval, tool use, and memory. The complexity of the architecture scales with the capabilities the assistant needs to deliver.
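The shape of this pattern can be sketched in a few lines. The snippet below is illustrative only: `call_model` is a stub standing in for a request to the platform's managed inference endpoint, and none of the names are actual platform APIs. It shows the essential loop of a multi-turn assistant: accumulate the conversation as context and pass the full history to the model on each turn.

```python
def call_model(messages):
    # Stub: a real implementation would send `messages` to a managed
    # model endpoint and return the generated reply.
    last = messages[-1]["content"]
    return f"Echoing your question: {last}"

class Assistant:
    """Maintains conversation context across turns (session memory)."""

    def __init__(self, system_prompt):
        self.messages = [{"role": "system", "content": system_prompt}]

    def ask(self, user_input):
        self.messages.append({"role": "user", "content": user_input})
        reply = call_model(self.messages)
        self.messages.append({"role": "assistant", "content": reply})
        return reply

bot = Assistant("You are a helpful internal knowledge assistant.")
print(bot.ask("What is our leave policy?"))
```

Extensions such as retrieval, tool use, and long-term memory slot into this loop: retrieved context is appended to the message list before the model call, and tool results flow back in as additional messages.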

RAG and Knowledge Systems

Retrieval-augmented generation systems give AI models access to your organisation's own data at inference time, enabling them to answer questions, analyse documents, and surface insights from information they were not trained on. RAG is the foundational pattern for any AI application that needs to work with proprietary, domain-specific, or frequently updated information.

Common use cases in this category include contract review and analysis, financial document search, internal knowledge base search, policy analysis, and any application where the quality of the AI output depends on access to accurate, up-to-date organisational data.

The core architectural pattern for this category involves a document ingestion pipeline that processes source content into embeddings stored in a vector database, a retrieval layer that identifies the most relevant content for a given query, and a language model that generates responses grounded in the retrieved context.
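The end-to-end flow can be illustrated with a deliberately simplified sketch. Here a bag-of-words counter stands in for a real embedding model and an in-memory list stands in for the vector database; every name is hypothetical. The structure, though, mirrors the pattern described above: ingest and index, retrieve by similarity, then ground the prompt in the retrieved context.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real pipeline would call an
    # embedding model and store dense vectors in a vector database.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Ingestion: process source chunks into indexed embeddings.
chunks = [
    "Contracts must be reviewed by legal before signature.",
    "Quarterly reports are due on the first Friday of the quarter.",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

def retrieve(query, k=1):
    # Retrieval layer: rank indexed chunks by similarity to the query.
    qv = embed(query)
    ranked = sorted(index, key=lambda item: cosine(qv, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

def answer(query):
    # Generation: a real system sends this grounded prompt to a model.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(answer("When are quarterly reports due?"))
```

In production the embedding and generation steps are model calls, and the index lives in a managed vector database, but the division of responsibilities is the same.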

Multi-Agent Workflows and Automation

Multi-agent systems coordinate multiple AI models and automated processes to complete complex tasks that require reasoning, decision-making, and action across multiple steps. Unlike single-model applications, multi-agent architectures decompose complex problems into subtasks handled by specialised agents, with orchestration logic managing how agents interact and how outputs flow between them.

Common use cases in this category include automated reporting and analytics, document routing and processing pipelines, case management workflows, and financial or technical analysis tasks that involve multiple stages of reasoning and data access.

The core architectural pattern for this category involves an orchestration layer that manages agent coordination, individual agents equipped with specific tools and capabilities, workflow automation connecting agents to external systems and data sources, and memory and state management that allows agents to maintain context across a multi-step process.
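A minimal sketch of this orchestration pattern, with stubbed agents in place of real model-backed ones (agent names and the workflow shape are illustrative, not platform APIs): each agent handles one subtask, and a shared state dictionary carries outputs from one agent to the next.

```python
# Each "agent" is a function specialised for one subtask; in a real
# system each would wrap a model call with its own tools and prompt.

def research_agent(state):
    # Stub: would gather data via tools or a RAG layer.
    state["findings"] = ["revenue up 4%", "costs flat"]
    return state

def analysis_agent(state):
    # Stub: would reason over the findings with a language model.
    state["summary"] = "; ".join(state["findings"])
    return state

def report_agent(state):
    state["report"] = f"Quarterly summary: {state['summary']}"
    return state

def run_workflow(agents, state=None):
    # Orchestration layer: sequences agents and threads state between them.
    state = state or {}
    for agent in agents:
        state = agent(state)
    return state

result = run_workflow([research_agent, analysis_agent, report_agent])
print(result["report"])
```

Real orchestration adds branching, retries, and parallelism on top of this core loop, and the shared state becomes the memory layer that lets agents maintain context across steps.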

Architectural Building Blocks

Regardless of which solution category your use case falls into, every AI application on GLBNXT Platform is built from a consistent set of architectural building blocks. Understanding these components and how they relate to each other is the foundation of effective solution design on the platform.

Language Models

The core reasoning and generation capability in any AI application. Models are served through the platform's managed inference layer and accessed via stable API endpoints. Your architecture connects models to the context, tools, and interfaces appropriate for your use case.

Vector Databases

The storage and retrieval layer for semantic search and RAG. Vector databases hold the embeddings that enable your application to find relevant information at query time. Depending on your configuration, Weaviate or Qdrant is available in your environment.

Data Pipelines

The processes that move information from source systems into a form your AI application can use. For RAG systems, this means document ingestion, chunking, embedding generation, and indexing. For workflow automation, this means connecting data sources, transforming outputs, and routing information between steps.
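The chunking step is often the least familiar part of this pipeline, so here is a minimal sketch of one common approach: fixed-size word windows with overlap, so that context spanning a chunk boundary is not lost. The parameters and function name are illustrative; real pipelines often chunk on semantic boundaries (headings, paragraphs) instead.

```python
def chunk_text(text, max_words=50, overlap=10):
    """Split a document into overlapping word-window chunks for embedding."""
    words = text.split()
    step = max_words - overlap  # advance by window size minus overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break  # final window already covers the end of the document
    return chunks

doc = " ".join(f"word{i}" for i in range(120))
pieces = chunk_text(doc)
print(len(pieces))  # 120 words -> 3 overlapping windows of up to 50 words
```

Each chunk then flows to embedding generation and indexing, the next stages of the ingestion pipeline described above.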

Workflow Automation

The orchestration layer that connects models, agents, data sources, APIs, and interfaces into coherent end-to-end processes. Workflow automation handles triggers, conditional logic, error handling, and the movement of data between components in a way that does not require custom integration code for every connection.
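Stripped to its essentials, a workflow of this kind is a sequence of steps with conditional routing and error handling around each one. The sketch below is purely illustrative (the step names and retry policy are invented for the example, not platform features): a classifier step routes a payload to one of two handlers, and every step runs under a simple retry wrapper.

```python
def run_step(step, payload, retries=2):
    # Error handling: retry a failing step before giving up.
    for attempt in range(retries + 1):
        try:
            return step(payload)
        except Exception:
            if attempt == retries:
                raise

def classify(payload):
    # Conditional logic: decide which branch handles this payload.
    payload["route"] = "invoice" if "invoice" in payload["text"].lower() else "general"
    return payload

def handle_invoice(payload):
    payload["handled_by"] = "invoice-pipeline"
    return payload

def handle_general(payload):
    payload["handled_by"] = "general-queue"
    return payload

ROUTES = {"invoice": handle_invoice, "general": handle_general}

def process(payload):
    payload = run_step(classify, payload)
    return run_step(ROUTES[payload["route"]], payload)

print(process({"text": "Invoice #1042 attached"})["handled_by"])
```

A managed workflow automation layer provides the same primitives (triggers, branches, retries, data passing) declaratively, so you configure this structure rather than coding it for every connection.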

Chat Interfaces and Frontends

The user-facing layer of conversational and assistant applications. Chat interfaces connect to model endpoints and present AI capabilities to end users through a managed, configurable interface. For applications that require a custom frontend, the platform provides the API layer that custom interfaces connect to.

APIs and Functions

The programmable layer that exposes AI capabilities to downstream systems and external integrations. APIs and function endpoints allow your AI application to be called from other services, embedded in existing workflows, or consumed by client applications outside the platform environment.

Memory and State

The layer that enables AI applications to maintain context across multiple interactions or workflow steps. Memory can be implemented at the session level for conversational applications or at the process level for long-running agent workflows.
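A session-level memory store can be sketched in a few lines. The class below is illustrative only, not a platform API: it keeps recent turns per conversation id and trims the oldest ones so the context sent to the model stays bounded.

```python
class SessionMemory:
    """Session-level memory: recent turns per conversation id."""

    def __init__(self, max_turns=20):
        self.sessions = {}
        self.max_turns = max_turns

    def append(self, session_id, role, content):
        turns = self.sessions.setdefault(session_id, [])
        turns.append({"role": role, "content": content})
        # Trim oldest turns so the model context stays within budget.
        del turns[:-self.max_turns]

    def context(self, session_id):
        return self.sessions.get(session_id, [])

mem = SessionMemory(max_turns=2)
mem.append("s1", "user", "hello")
mem.append("s1", "assistant", "hi")
mem.append("s1", "user", "next question")
print(len(mem.context("s1")))  # bounded at 2
```

Process-level memory for agent workflows follows the same idea, but the state is typically persisted durably and keyed by workflow run rather than by conversation.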

Choosing the Right Pattern

Most production AI applications on GLBNXT Platform combine elements from more than one solution category. A knowledge assistant, for example, is an AI assistant pattern extended with a RAG architecture to ground responses in organisational data. An automated reporting system might combine a RAG layer for data retrieval with a multi-agent workflow for analysis and output generation.

When approaching a new use case, a useful starting point is to ask three questions:

Where does the knowledge come from? If the application needs to work with your organisation's own data, a RAG layer is almost always part of the architecture. If the application relies entirely on the model's trained capabilities, a simpler assistant pattern may be sufficient.

How complex is the process? If the task can be completed in a single model interaction, a straightforward assistant architecture is appropriate. If the task requires multiple steps, external tool use, or coordination between different capabilities, a multi-agent or workflow architecture is likely needed.

Who is the end user? If the application is user-facing and conversational, the architecture needs a chat interface layer. If the application is process-facing and automated, the architecture needs workflow automation and API endpoints rather than a conversational frontend.
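The three questions above can be expressed as a simple selection helper. This is a rough heuristic for orientation, not a platform tool, and real designs combine patterns more flexibly than a lookup can capture; the component names here are shorthand for the building blocks described earlier.

```python
def recommend_components(uses_own_data, multi_step, user_facing):
    """Map the three screening questions to a starting set of building blocks."""
    components = ["language model"]
    if uses_own_data:
        # Organisational data implies a RAG layer.
        components += ["vector database", "data pipeline"]
    if multi_step:
        # Multi-step tasks imply orchestration and durable state.
        components += ["workflow automation", "memory and state"]
    # User-facing applications need a conversational frontend;
    # process-facing ones need programmable endpoints instead.
    components.append("chat interface" if user_facing else "API endpoints")
    return components

print(recommend_components(uses_own_data=True, multi_step=False, user_facing=True))
```

For example, a knowledge assistant (own data, single-step, user-facing) maps to a model, a RAG layer, and a chat interface, exactly the combination discussed earlier in this section.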

The answers to these questions will point you toward the right combination of building blocks for your solution. The Building AI Solutions section of this documentation covers each solution category in detail, with guidance on component selection, data architecture, and implementation patterns for each use case type.
