Integration Patterns
AI applications built on GLBNXT Platform rarely operate in isolation. They connect to existing enterprise systems, consume data from sources outside the platform, push outputs to downstream processes, and interact with the broader technology landscape of your organisation or your clients. Integration patterns are the established approaches for making these connections reliably, securely, and in ways that are maintainable as both the platform and the systems it connects to evolve over time.
This section describes the most common integration patterns used in GLBNXT Platform deployments, the considerations that apply to each, and guidance on choosing the right pattern for your use case.
Inbound Data Integration
Inbound data integration covers the patterns by which data from systems outside the platform is brought into your environment for use by AI applications. The right pattern depends on the volume and velocity of data involved, the nature of the source system, and whether data needs to be available in real time or can be processed in batches.
Batch Ingestion
Batch ingestion processes data from a source system at defined intervals rather than continuously. A scheduled process retrieves data from the source, transforms it into the format required by the platform component that will consume it, and loads it into the appropriate storage layer, whether that is MinIO for unstructured documents, Postgres for structured records, or a vector database for content that will be retrieved semantically.
Batch ingestion is the most straightforward integration pattern and is appropriate for use cases where data does not need to be available immediately after it is created or updated in the source system. Document-based knowledge bases, reference datasets, and reporting pipelines that run on a defined cadence are all well suited to batch ingestion.
Workflow automation is the primary tool for implementing batch ingestion on GLBNXT Platform. Scheduled triggers initiate the ingestion workflow, which retrieves data from the source, applies any required transformation, and loads it into the target storage layer. Error handling and retry logic can be configured within the workflow to manage source system unavailability or data quality issues without manual intervention.
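The retrieve-transform-load shape of a batch ingestion workflow can be sketched in plain Python. Everything here is illustrative: `fetch_batch`, `transform`, and `load` are hypothetical stand-ins for the source-system call, the format conversion, and the write into MinIO, Postgres, or a vector store.

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass
class Record:
    id: str
    body: str

def fetch_batch(since: str) -> Iterable[dict]:
    # Placeholder for the source-system retrieval (REST API, SFTP pull, DB export).
    return [{"id": "42", "text": "  Quarterly report  "}]

def transform(raw: dict) -> Record:
    # Normalise the source schema into the shape the target store expects.
    return Record(id=raw["id"], body=raw["text"].strip())

def load(records: list[Record]) -> int:
    # Placeholder for the write into the target storage layer.
    return len(records)

def run_batch(since: str) -> int:
    records = [transform(r) for r in fetch_batch(since)]
    return load(records)
```

In a real deployment, a scheduled workflow trigger would call the equivalent of `run_batch` on the defined cadence, with retry logic wrapped around the fetch and load steps.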
Event-Driven Ingestion
Event-driven ingestion processes data as soon as it is created or updated in the source system, rather than waiting for a scheduled batch run. The source system emits an event when data changes, a webhook or message queue listener in the platform receives the event, and an ingestion workflow processes the new or updated data immediately.
Event-driven ingestion is appropriate for use cases where data freshness is important, such as RAG systems that need to reflect recent document updates, real-time classification pipelines, and AI-assisted processes where decisions are made shortly after data is created.
Implementing event-driven ingestion requires the source system to support webhook delivery or message queue publishing. Where source systems do not support these mechanisms natively, a polling approach, in which the platform periodically checks for new or updated records, can approximate event-driven behaviour with a defined latency trade-off.
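The polling approximation typically tracks a high-water mark so that each cycle processes only records changed since the last one. A minimal sketch, assuming a source that can be queried by last-update timestamp (`fetch_updated_since` is a hypothetical wrapper around that query):

```python
def poll_for_changes(fetch_updated_since, process, state: dict) -> int:
    """One polling cycle: fetch records updated after the stored
    high-water mark, process them, and advance the mark."""
    cursor = state.get("cursor", "")
    new_records = fetch_updated_since(cursor)
    for rec in new_records:
        process(rec)
        if rec["updated_at"] > state.get("cursor", ""):
            state["cursor"] = rec["updated_at"]
    return len(new_records)
```

The polling interval sets the latency trade-off: shorter intervals approximate event-driven freshness at the cost of more load on the source system.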
API-Based Data Retrieval
For AI applications that need access to live data at query time rather than indexed data retrieved from a platform-managed store, API-based data retrieval calls the source system directly as part of processing a user request or agent task. A model or workflow step makes an authenticated API call to the source system, receives the current data, and incorporates it into the AI processing step.
This pattern is suited for data that changes frequently enough that ingesting it into a platform store would create unacceptable staleness, or for data that is large or sensitive enough that storing it in the platform is undesirable. API-based retrieval introduces a dependency on the source system's availability and latency at query time, which should be accounted for in the application design.
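One way to account for that query-time dependency is to degrade gracefully when the source is unavailable rather than failing the whole request. A sketch under assumed interfaces (`fetch_live`, `build_prompt`, and `call_model` are hypothetical stand-ins, not platform APIs):

```python
def answer_with_live_data(question, fetch_live, build_prompt, call_model,
                          timeout_s: float = 5.0):
    """Incorporate live source-system data into an AI request, falling
    back to answering without it if the source call fails or times out."""
    try:
        live = fetch_live(timeout=timeout_s)
    except Exception:
        # Source unavailable: proceed without live context rather than erroring.
        return call_model(build_prompt(question, context=None))
    return call_model(build_prompt(question, context=live))
```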
Outbound Data Integration
Outbound data integration covers the patterns by which AI-generated outputs, decisions, or actions are delivered to systems outside the platform. These patterns determine how AI capability moves from the platform into the downstream processes and systems that act on it.
Webhook Delivery
Webhook delivery pushes AI-generated outputs to an external system as soon as they are produced. When a workflow completes, an agent finishes a task, or an application produces an output, the platform makes an HTTP POST request to a configured endpoint in the receiving system, delivering the output payload.
Webhook delivery is suited for real-time notification use cases, for systems that need to react immediately to AI outputs, and for integration with external platforms that support inbound webhook handling. Most modern SaaS platforms and enterprise applications support webhook ingestion, making this a broadly applicable outbound integration pattern.
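A webhook delivery is, at its core, an HTTP POST of the output payload to the configured endpoint. A common additional practice, shown here as a hedged sketch using only the Python standard library, is to sign the payload so the receiver can verify its origin; the header name and signing scheme are illustrative, not a platform specification.

```python
import hashlib
import hmac
import json
import urllib.request

def build_webhook_request(endpoint: str, payload: dict,
                          secret: bytes) -> urllib.request.Request:
    """Construct the POST that delivers an output payload to a receiving
    system, with an HMAC-SHA256 signature header for verification."""
    body = json.dumps(payload, sort_keys=True).encode()
    signature = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={"Content-Type": "application/json",
                 "X-Signature-SHA256": signature},
        method="POST",
    )
```

Sending the request (for example with `urllib.request.urlopen`) would then be wrapped in the workflow's error handling.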
API Push
API push sends AI-generated outputs to an external system by calling that system's API directly from a workflow or function within the platform. Unlike webhook delivery, which is triggered by the platform and received passively by the external system, API push is an active call from the platform to a defined external endpoint.
API push is appropriate when the external system does not support inbound webhooks but does expose an API for writing data, when the output needs to be delivered to a specific record or endpoint within the external system that is determined at runtime, or when the integration requires error handling and retry logic beyond what simple webhook delivery provides.
Credentials for external API calls made from platform workflows are managed through the secrets vault, ensuring that integration credentials are not exposed in workflow configuration.
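The retry logic mentioned above is commonly implemented as exponential backoff around the outbound call. A minimal sketch, where `call_api` is a hypothetical wrapper around the external system's write endpoint (with credentials injected from the vault rather than hard-coded):

```python
import time

def push_with_retry(call_api, payload, attempts: int = 3,
                    base_delay: float = 1.0):
    """Active push to an external API, retrying transient connection
    failures with exponential backoff before giving up."""
    for attempt in range(attempts):
        try:
            return call_api(payload)
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # Exhausted retries: surface the failure to the workflow.
            time.sleep(base_delay * 2 ** attempt)
```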
Database Write-Back
Database write-back stores AI-generated outputs directly in an external database or data warehouse as part of a workflow execution. This pattern is suited for use cases where AI outputs need to be combined with existing enterprise data for reporting, analysis, or downstream processing, and where the receiving system is a database rather than an application with an API.
Write-back integrations require careful attention to data schema alignment between the AI output format and the target database structure, and to access control for the credentials used to write to the external database.
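One way to keep that schema alignment explicit is a declared field-to-column mapping, so the translation from AI output to target table is visible and auditable. A sketch using SQLite as a stand-in for the external database; the table and mapping are hypothetical examples.

```python
import sqlite3

# Explicit mapping from AI-output fields to target table columns.
COLUMN_MAP = {"doc_id": "document_id", "label": "classification", "score": "confidence"}

def write_back(conn: sqlite3.Connection, output: dict) -> None:
    """Insert one AI output into the external results table, translating
    field names through the declared column map."""
    row = {col: output[field] for field, col in COLUMN_MAP.items()}
    cols = ", ".join(row)
    placeholders = ", ".join("?" for _ in row)
    conn.execute(f"INSERT INTO ai_results ({cols}) VALUES ({placeholders})",
                 tuple(row.values()))
```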
File and Document Delivery
File and document delivery places AI-generated outputs into a shared file system, document management platform, or collaboration tool for consumption by users or downstream processes. This pattern is suited for document generation workflows, report automation, and any use case where the output of AI processing is a file that needs to be placed in an existing document workflow.
Common destinations include SharePoint, Google Drive, S3-compatible storage, and enterprise content management platforms. File delivery is typically implemented through workflow automation using the appropriate connector for the target platform, with credentials managed through the secrets vault.
Internal Platform Integration
Beyond external system integration, GLBNXT Platform components integrate with each other to form coherent end-to-end AI architectures. Understanding how internal integration works is as important as understanding external integration patterns.
Component Chaining
Component chaining connects platform services in sequence so that the output of one component becomes the input to the next. A document ingestion pipeline chains MinIO object storage, an embedding model, and a vector database. A RAG assistant chains a vector database retrieval step, a prompt assembly step, and a language model inference call. A reporting workflow chains a database query, a summarisation model call, and a document generation step.
Component chaining on GLBNXT Platform is implemented through workflow automation for multi-step processes, through direct API calls between platform services for tightly coupled application components, and through agent tool use for dynamic chaining within a reasoning loop.
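In its simplest form, chaining is function composition: each step receives the previous step's output. A generic sketch (the individual steps would be platform service calls in practice):

```python
from functools import reduce

def chain(*steps):
    """Connect components in sequence so that the output of one
    step becomes the input to the next."""
    return lambda value: reduce(lambda acc, step: step(acc), steps, value)

# Hypothetical RAG chain: pipeline = chain(retrieve, assemble_prompt, infer)
```

Workflow automation adds what plain composition lacks: per-step error handling, retries, and observability across the chain.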
Event-Driven Internal Integration
Platform components can communicate through events as well as direct calls. A document uploaded to MinIO can trigger an ingestion workflow. A workflow completion can trigger a notification or a downstream process. An agent task result can trigger a quality evaluation step. Event-driven internal integration allows platform components to react to each other without tight coupling, making architectures more resilient and easier to extend.
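The decoupling this provides can be illustrated with a minimal in-process publish/subscribe dispatcher; it is a conceptual sketch, not the platform's eventing mechanism.

```python
from collections import defaultdict

class EventBus:
    """Minimal publish/subscribe dispatcher: components subscribe to
    event types and react to them without holding references to the
    components that emit them."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type: str, handler):
        self._handlers[event_type].append(handler)

    def publish(self, event_type: str, payload):
        for handler in self._handlers[event_type]:
            handler(payload)
```

A new consumer of an event (say, a quality evaluation step reacting to agent task results) can be added by subscribing a handler, without modifying the producer.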
Shared Data Services
Multiple applications and workflows within a platform environment can share access to common data services including Postgres databases, vector database collections, and MinIO storage buckets. Shared data services allow different AI applications to draw on the same underlying data without duplicating it, and enable outputs from one application to be consumed as inputs by another.
Access control policies applied to shared data services determine which applications and workloads can read from and write to each resource, ensuring that data sharing does not compromise the access boundaries appropriate for your environment.
Identity and Authentication Integration
SSO and Identity Provider Integration
GLBNXT Platform integrates with enterprise identity providers through SAML and OAuth 2.0, allowing platform access to be governed by your organisation's existing identity and access management policies. For AI applications that include user-facing frontends, identity provider integration can be extended to authenticate end users of those applications through the same organisational identity, providing a consistent single sign-on experience across the platform and the applications built on it.
User Context Propagation
For AI applications where the identity of the requesting user should influence the behaviour of the application, such as personalised assistants or applications that enforce data access policies based on user role, user identity can be propagated from the authentication layer through the application stack to the AI processing layer. This allows models, retrieval systems, and workflow steps to receive user context and apply appropriate access and personalisation logic.
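As a sketch of role-based filtering at the retrieval layer, assuming a propagated user context object and documents tagged with a required role (both structures are illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UserContext:
    user_id: str
    roles: frozenset

def retrieve(query: str, documents: list[dict], ctx: UserContext) -> list[dict]:
    """Return only documents the requesting user's roles permit.
    The query is unused in this sketch; a real retriever would also
    rank candidates by relevance before applying the role filter."""
    return [d for d in documents if d["required_role"] in ctx.roles]
```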
Enterprise System Integration
CRM and ERP Integration
AI workflows on GLBNXT Platform can integrate with enterprise CRM and ERP systems to retrieve customer, transaction, and operational data at processing time, and to write AI-generated insights, recommendations, or actions back to those systems. Integration is implemented through the API connectors and workflow automation tools available in your environment, with credentials managed through the secrets vault.
Document Management Integration
Enterprise document management platforms including SharePoint, Confluence, and Notion can be connected to GLBNXT Platform as document sources for RAG pipelines or as destinations for AI-generated outputs. Document management integration typically involves a combination of batch or event-driven ingestion for indexing document content, and file delivery for writing outputs back to the document management platform.
Communication Platform Integration
AI workflows can integrate with communication platforms including email, Slack, Microsoft Teams, and others to deliver AI-generated notifications, summaries, and outputs to users through the channels they already use. Communication platform integration is commonly used in workflow automation for escalation notifications, scheduled report delivery, and event-driven alerts.
Security Considerations for Integrations
Every integration point is a potential security boundary that requires attention. The following considerations apply across all integration patterns on GLBNXT Platform.
Credential management: all credentials used by integrations are managed through the platform secrets vault. Integration credentials are never stored in workflow configuration, application code, or any location accessible to developers. Credentials are rotated through the vault without requiring changes to integration logic.
Least privilege for integration accounts: service accounts and API keys used by platform integrations to access external systems should be scoped to the minimum permissions required for the integration to function. Integration accounts should not hold broad administrative access to external systems simply for convenience.
Data validation at integration boundaries: data received from external systems at inbound integration points should be validated before being processed by AI components. Unvalidated external data introduces risks ranging from data quality issues to prompt injection attacks in AI applications that incorporate external content into model inputs.
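A minimal boundary check might enforce expected fields, types, and size limits before external data reaches a model. This sketch is deliberately naive: the schema is a hypothetical example, and real deployments would add content screening for prompt injection on top of structural validation.

```python
def validate_inbound(record: dict, max_len: int = 10_000) -> dict:
    """Structural validation of externally sourced data before it is
    passed to AI components: required fields, expected types, size cap."""
    required = {"id": str, "text": str}
    for field, ftype in required.items():
        if not isinstance(record.get(field), ftype):
            raise ValueError(f"missing or mistyped field: {field}")
    if len(record["text"]) > max_len:
        raise ValueError("text exceeds size limit")
    return record
```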
Audit coverage for integration traffic: integration traffic flowing through workflow automation and API endpoints is logged as part of the platform audit trail. Ensure that integration logging is configured to capture the information required to trace data flows across integration boundaries for compliance and investigation purposes.
For guidance on building the API and function layer that supports outbound integration patterns, see the APIs and Functions section. For guidance on workflow automation tools used to implement integration patterns, see the Workflow Automation section.