APIs & Functions
APIs and functions are the layer of GLBNXT Platform that makes AI capabilities programmable and composable. They allow the AI applications and model capabilities running in your platform environment to be called from external systems, embedded in existing products, consumed by client applications, and integrated into the broader technology landscape of your organisation or your clients.
Where chat interfaces expose AI to human users through a conversational frontend, APIs and functions expose AI to other systems through programmable endpoints. This distinction is fundamental to how AI capabilities move from isolated applications into the fabric of how an organisation operates.
On GLBNXT Platform, APIs and functions are built on the same managed infrastructure that underpins every other solution component. Your team designs and deploys the logic. GLBNXT handles the infrastructure that runs it, the networking that exposes it securely, and the observability that gives you visibility into how it performs.
What APIs and Functions Are For
APIs and functions are the right architectural choice when an AI capability needs to be consumed by a system rather than a user, when an AI application needs to be embedded in a product that already exists, or when the output of an AI process needs to flow automatically into another part of your technology stack.
Common use cases in this category include:
Exposing a model inference capability as a REST endpoint that an existing web application or mobile app can call to enrich its functionality with AI
Building a document processing API that accepts file uploads, runs AI extraction or summarisation, and returns structured outputs to the calling system
Creating a classification endpoint that external systems can call to categorise incoming data, requests, or content in real time
Developing a generation API that produces text, code, or structured content on demand for downstream consumption by other services
Wrapping a complex RAG pipeline in a simple API endpoint so that consuming applications do not need to understand the retrieval architecture behind it
Providing webhook endpoints that external systems can call to trigger agent workflows, data processing pipelines, or AI-powered automation sequences
If your use case involves making AI capabilities available to systems rather than directly to users, APIs and functions are the appropriate solution layer.
API Design on GLBNXT Platform
APIs built on GLBNXT Platform follow standard REST principles and are accessed over HTTPS with authentication required on every request. Endpoints are exposed through the platform's managed networking layer, which handles routing, SSL termination, and access control at the infrastructure level. Your team implements the business logic of the API. The platform handles the infrastructure concerns that make it reachable and secure.
When designing an API that exposes AI capabilities, the following principles lead to endpoints that are more reliable, easier to maintain, and easier to consume.
Keep Endpoints Focused
Each API endpoint should do one thing well. An endpoint that accepts a document and returns a summary should not also classify the document, extract entities, and trigger a downstream workflow in the same call. Composing focused endpoints gives consuming applications the flexibility to use each capability independently and makes individual endpoints easier to test, monitor, and improve over time.
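The contrast can be sketched as two focused handlers that a consumer composes itself. This is an illustrative sketch only: the framework wiring is elided, and the handler names and placeholder logic are hypothetical, not platform APIs.

```python
# Sketch: two focused handlers, one capability each (routing/framework elided).
# The handler names and the stand-in "model" logic are illustrative only.

def summarize_document(text: str) -> dict:
    """POST /v1/summaries -- accepts a document, returns only a summary."""
    summary = text[:100]  # placeholder for a real model call
    return {"summary": summary}

def classify_document(text: str) -> dict:
    """POST /v1/classifications -- accepts a document, returns only a label."""
    label = "invoice" if "invoice" in text.lower() else "other"
    return {"label": label}

# A consumer that needs both capabilities composes the two calls itself,
# rather than relying on one endpoint that does everything:
doc = "Invoice #1234 for services rendered."
result = {**summarize_document(doc), **classify_document(doc)}
```

Because each handler does one thing, either can be tested, monitored, or improved without touching the other.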
Design for the Consumer
The interface your API exposes should be designed from the perspective of the system that will call it, not the perspective of the AI pipeline that runs behind it. A consuming application cares about the inputs it needs to provide and the outputs it will receive. It should not need to understand how the model is called, how retrieval is configured, or how the prompt is constructed. These are implementation details that belong inside the API, not in its interface.
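One way to make that boundary explicit is to define the request and response shapes the consumer sees, and keep prompt construction inside the handler. A minimal sketch, with all names and the stand-in model call assumed for illustration:

```python
from dataclasses import dataclass

# Sketch: the public interface carries only what the consumer needs.
# Prompt construction and model selection stay inside the handler.

@dataclass
class SummaryRequest:          # what the consumer sends
    document_text: str
    max_words: int = 50

@dataclass
class SummaryResponse:         # what the consumer gets back
    summary: str
    word_count: int

def handle_summary(req: SummaryRequest) -> SummaryResponse:
    # Implementation detail, invisible to the consumer:
    prompt = f"Summarise in {req.max_words} words:\n{req.document_text}"
    model_output = prompt.split("\n", 1)[1][:80]  # stand-in for a model call
    return SummaryResponse(summary=model_output,
                           word_count=len(model_output.split()))

resp = handle_summary(SummaryRequest(document_text="A long report about Q3 results."))
```

If the prompt or model changes later, the `SummaryRequest`/`SummaryResponse` contract, and therefore every consumer, stays untouched.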
Handle Asynchronous Processing
AI operations, particularly those involving complex reasoning, multi-step agent execution, or large document processing, often take longer than the timeout limits of synchronous HTTP requests. For operations that may take more than a few seconds to complete, design the API as an asynchronous endpoint. The initial request returns a job identifier immediately. The consuming application polls a status endpoint or receives a webhook notification when processing is complete and the result is available.
This pattern prevents timeout failures in consuming applications, makes the API more robust under variable processing times, and allows long-running AI tasks to run to completion without forcing the calling system to maintain an open connection.
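The submit-then-poll shape can be sketched as follows. The in-memory job store and endpoint names are illustrative stand-ins; a production implementation would use a durable queue or database rather than a dict, and could notify via webhook instead of polling.

```python
import threading
import time
import uuid

# Sketch of the asynchronous request pattern: submission returns a job id
# immediately; the consumer polls a status endpoint for the result.

JOBS: dict[str, dict] = {}

def submit_job(payload: str) -> str:
    """POST /v1/jobs -- returns a job id without waiting for the result."""
    job_id = uuid.uuid4().hex
    JOBS[job_id] = {"status": "pending", "result": None}
    threading.Thread(target=_run_job, args=(job_id, payload)).start()
    return job_id

def _run_job(job_id: str, payload: str) -> None:
    time.sleep(0.1)                      # stand-in for slow AI processing
    JOBS[job_id] = {"status": "done", "result": payload.upper()}

def get_job(job_id: str) -> dict:
    """GET /v1/jobs/{id} -- the polling endpoint."""
    return JOBS[job_id]

job = submit_job("summarise this")
while get_job(job)["status"] != "done":   # the consumer polls
    time.sleep(0.05)
```

No connection stays open while the job runs, so the pattern is unaffected by HTTP timeout limits regardless of how long processing takes.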
Version Your APIs
API interfaces change as your AI applications evolve. A consuming application built against a specific API version should not break when you update the underlying AI logic or change the output format in a newer version. Versioning your APIs from the start, using path-based versioning such as /v1/ and /v2/ prefixes, allows you to evolve your AI capabilities without breaking existing integrations.
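Path-based versioning can be sketched as a route table that pins each version prefix to its own handler; the paths and response fields below are hypothetical examples.

```python
# Sketch: path-based versioning with a simple route table. Consumers pinned
# to /v1/ keep their original response shape while /v2/ evolves.

def summarize_v1(text: str) -> dict:
    return {"summary": text[:40]}                      # original shape

def summarize_v2(text: str) -> dict:
    return {"summary": text[:40], "model": "demo-2"}   # field added in v2

ROUTES = {
    "/v1/summaries": summarize_v1,
    "/v2/summaries": summarize_v2,
}

def dispatch(path: str, text: str) -> dict:
    return ROUTES[path](text)

old = dispatch("/v1/summaries", "quarterly report")
new = dispatch("/v2/summaries", "quarterly report")
```

Existing integrations keep calling `/v1/` unchanged while new consumers adopt `/v2/`, and `/v1/` can be retired on its own schedule once its last consumer migrates.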
Functions
Functions are lightweight, single-purpose code units that can be triggered by events, called by workflows, or invoked by agents as tools. They are the building blocks of custom logic within the GLBNXT Platform environment, handling the specific data transformations, integrations, and processing steps that do not map to pre-built workflow nodes or platform components.
Functions on GLBNXT Platform are containerised and managed by the platform infrastructure. Your team writes the function logic. GLBNXT handles deployment, scaling, and execution. Functions can be invoked synchronously, returning a result to the caller immediately, or asynchronously, executing in the background and writing their output to a storage location or triggering a subsequent workflow step.
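The two invocation modes can be sketched around a single function body. The in-memory store stands in for a real storage location, and all names here are illustrative rather than platform APIs.

```python
import threading

# Sketch: one function body, invoked synchronously (result returned to the
# caller) or asynchronously (result written to a store in the background).

OUTPUT_STORE: dict[str, str] = {}

def normalise(record: str) -> str:
    """The function body: a single-purpose transformation."""
    return record.strip().lower()

def invoke_sync(record: str) -> str:
    return normalise(record)

def invoke_async(record: str, output_key: str) -> threading.Thread:
    def run() -> None:
        OUTPUT_STORE[output_key] = normalise(record)  # stand-in for storage
    t = threading.Thread(target=run)
    t.start()
    return t

direct = invoke_sync("  Hello World  ")
invoke_async("  Hello Again  ", "job-1").join()
```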
Common Function Patterns
Data transformation functions accept a data payload in one format and return it in another. They are used at integration points between systems that represent the same information differently, or between pipeline steps that expect different input structures.
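A minimal transformation function might look like the following; the field names and source system are hypothetical examples of two systems representing the same contact differently.

```python
# Sketch of a data transformation function: maps one system's record shape
# onto the shape the next pipeline step expects. Field names are illustrative.

def crm_to_pipeline(record: dict) -> dict:
    """Translate a CRM-style contact into an ingestion-ready record."""
    return {
        "full_name": f"{record['first_name']} {record['last_name']}",
        "email": record["email_address"].lower(),
        "source_system": "crm",
    }

out = crm_to_pipeline({
    "first_name": "Ada",
    "last_name": "Lovelace",
    "email_address": "Ada@Example.com",
})
```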
Validation and enrichment functions accept a record or document, apply business logic or AI processing to validate or enrich it, and return the result. These functions are frequently used as steps in ingestion pipelines, workflow automation sequences, and agent tool sets.
Connector functions wrap the authentication and request logic required to call a specific external API, presenting a clean interface to the workflow or agent that calls them without exposing the complexity of the underlying integration.
Evaluation functions assess the quality, safety, or compliance of AI-generated outputs against defined criteria. They are used as quality gates in automated pipelines and as tool calls within agent reasoning loops.
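An evaluation function used as a quality gate can be sketched as below. The criteria shown are simple placeholders for whatever quality, safety, or compliance rules apply in your environment.

```python
# Sketch of an evaluation function acting as a quality gate: it checks an
# AI-generated answer against defined criteria and returns a verdict with
# reasons. The criteria here are illustrative placeholders.

BANNED_PHRASES = {"as an ai language model"}

def evaluate_output(answer: str, max_chars: int = 500) -> dict:
    reasons = []
    if not answer.strip():
        reasons.append("empty answer")
    if len(answer) > max_chars:
        reasons.append("answer too long")
    if any(p in answer.lower() for p in BANNED_PHRASES):
        reasons.append("contains banned phrase")
    return {"passed": not reasons, "reasons": reasons}

verdict = evaluate_output("The invoice total is 1,200 EUR.")
```

A pipeline step or agent can branch on `passed`, retrying or escalating when the gate fails, with `reasons` providing the diagnostic detail.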
Authentication and Access Control
Every API endpoint and function on GLBNXT Platform requires authentication. Unauthenticated requests are rejected at the networking layer before they reach your application logic. Authentication uses API keys or token-based credentials, depending on the consuming application's requirements and the sensitivity of the endpoint.
API keys for external consumers are managed through the platform secrets vault. Keys can be scoped to specific endpoints or sets of capabilities, ensuring that a consuming application can only call the endpoints it is explicitly authorised to use. Key rotation is handled through the vault without requiring changes to your API implementation.
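The effect of endpoint scoping can be sketched as a lookup from key to permitted paths. In production the keys and their scopes live in the secrets vault, not in code; the table, key values, and paths below are illustrative only.

```python
# Sketch of endpoint-scoped API keys: each key may only call the endpoints
# it is explicitly authorised for. Keys and paths are illustrative.

KEY_SCOPES = {
    "key-mobile-app": {"/v1/summaries"},
    "key-backoffice": {"/v1/summaries", "/v1/classifications"},
}

def authorise(api_key: str, path: str) -> bool:
    """Reject unknown keys and keys not scoped to the requested endpoint."""
    return path in KEY_SCOPES.get(api_key, set())

mobile_ok = authorise("key-mobile-app", "/v1/summaries")
mobile_blocked = authorise("key-mobile-app", "/v1/classifications")
```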
For internal consumers within the platform environment, service-to-service authentication uses platform-managed tokens that are injected at runtime through the same secrets management mechanism used by other platform workloads.
Rate Limiting and Quota Management
APIs that expose AI model calls need to account for the cost and performance implications of model inference. Unconstrained API usage can consume compute resources unexpectedly and affect the performance of other workloads in your environment. Implementing rate limiting on your API endpoints protects against both unintentional overuse and deliberate abuse.
Rate limiting can be applied at the platform networking layer for coarse-grained protection, and within your application logic for fine-grained per-consumer or per-endpoint control. Your GLBNXT contact can advise on the appropriate rate limiting configuration for your environment based on your expected usage patterns and compute allocation.
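Fine-grained, in-application limiting is often implemented as a token bucket per consumer. A minimal sketch, with the rate and burst capacity chosen purely for illustration:

```python
import time

# Sketch of per-consumer rate limiting with a token bucket: a steady refill
# rate plus a burst capacity. Limits shown are illustrative; coarse-grained
# limits would sit at the networking layer instead.

class TokenBucket:
    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec          # tokens added per second
        self.capacity = capacity          # maximum burst size
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        """Spend one token if available; refill based on elapsed time."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_sec=1, capacity=2)
results = [bucket.allow() for _ in range(3)]  # third call exceeds the burst
```

Keeping one bucket per API key gives per-consumer control; one bucket per endpoint protects an expensive model call regardless of who invokes it.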
Observability for APIs and Functions
All API calls and function invocations on GLBNXT Platform are logged as part of the platform audit trail. Request volumes, response times, error rates, and authentication events are captured automatically and are visible through the Monitoring and Observability area of the platform console.
For APIs that expose AI model calls, request and response payloads can be included in the audit log depending on the compliance requirements of your environment. This supports use cases where a complete record of what was requested and what was returned is required for regulatory or governance purposes.
Function execution logs capture runtime output including errors, warnings, and diagnostic information generated by your function code. These logs are searchable through the platform console and can be exported for integration with your organisation's existing log management tooling.
Connecting APIs and Functions to the Platform
APIs and functions on GLBNXT Platform are not standalone components. They connect to and draw on all of the other managed services available in your environment.
An API endpoint can call a model in the Model Hub for inference, query a vector database for retrieval, read from and write to Postgres for structured data, retrieve files from MinIO, trigger a workflow automation sequence, or invoke an agent. The platform handles authentication and connectivity between these components. Your API logic assembles them into the capability it exposes.
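The assembly role of the endpoint can be sketched as below. The real Model Hub, vector database, and storage clients are platform-specific, so every component call here is a hypothetical stub standing in for the managed service it names.

```python
# Sketch: one endpoint assembling several platform components into a single
# capability. The two helper functions are stubs standing in for the real
# retrieval and Model Hub clients; all names are illustrative.

def search_vectors(query: str) -> list[str]:
    """Stand-in for a vector database retrieval call."""
    return [f"context snippet about {query}"]

def call_model(prompt: str) -> str:
    """Stand-in for a Model Hub inference call."""
    return f"answer based on: {prompt[:60]}"

def handle_question(question: str) -> dict:
    context = search_vectors(question)               # 1. retrieve
    prompt = f"{' '.join(context)}\n\nQ: {question}"
    answer = call_model(prompt)                      # 2. infer
    return {"answer": answer, "sources": context}    # 3. assemble response

resp = handle_question("refund policy")
```

The consumer sees only `handle_question`'s inputs and outputs; the retrieval step, prompt assembly, and model choice can all change behind that interface.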
This connectivity means that a well-designed API endpoint on GLBNXT Platform can represent a complete AI capability rather than a thin wrapper around a single model call. The complexity of the underlying architecture is hidden behind the API interface, and consuming applications benefit from the full power of the platform stack through a simple, stable endpoint.
Getting Started
The recommended starting point for building APIs and functions on GLBNXT Platform is to identify an AI capability that already exists in your environment and expose it as a stable endpoint that a consuming application can call.
A practical first API build follows this sequence:
Identify the AI capability you want to expose and define the interface from the consumer's perspective, specifying the inputs the endpoint accepts and the outputs it returns
Implement the endpoint logic that accepts the request, calls the appropriate platform components, and assembles the response
Configure authentication for the endpoint using the platform secrets vault
Test the endpoint with representative inputs, validating that outputs are correct and that error cases are handled appropriately
Review the observability output in the platform console to confirm that logging and monitoring are capturing the information you need
Share the endpoint with the consuming application and monitor usage during the initial integration period
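The testing step above amounts to exercising the handler with representative inputs, including error cases, before any consumer integrates. A minimal sketch, with the handler and its validation rule assumed for illustration:

```python
# Sketch of testing an endpoint handler with a representative input and an
# error case. The handler and its validation rule are illustrative stand-ins
# for your real endpoint logic.

def handle_extract(payload: dict) -> dict:
    if "document" not in payload:
        return {"error": "missing 'document' field", "status": 400}
    # Stand-in for AI extraction: take the first three tokens as "entities".
    return {"entities": payload["document"].split()[:3], "status": 200}

ok = handle_extract({"document": "Acme Corp invoice 2024"})
bad = handle_extract({})
```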
For APIs that expose agent capabilities or multi-step AI processing, implement the asynchronous request pattern from the start rather than adding it later. The effort required to retrofit asynchronous handling into a synchronous API design after consumers have already integrated against it is considerably greater than designing for it upfront.
For guidance on using APIs as integration points within workflow automation, see the Workflow Automation section. For guidance on exposing agent capabilities through API endpoints, see the Agents and Memory section.