Managed Infrastructure Layer

The Managed Infrastructure Layer is the foundation that every solution built on GLBNXT Platform runs on. It encompasses all compute, networking, orchestration, storage, security, and compliance capabilities that GLBNXT operates on your behalf. Your development team never needs to provision, configure, or maintain any part of this layer. GLBNXT handles it entirely so your engineers can focus on building AI products rather than managing the infrastructure beneath them.

This section explains what the Managed Infrastructure Layer includes, how each component works, and what it means for your team in practice.

Compute

GLBNXT Platform provides on-demand GPU and CPU compute resources optimised for AI workloads. GPU resources are available immediately for inference, fine-tuning, and training tasks without any provisioning delay. CPU resources handle supporting services, orchestration workloads, and non-GPU compute tasks across the environment.

Compute resources are scaled automatically based on workload demand. Your team does not need to manage resource allocation, scaling policies, or capacity planning. GLBNXT monitors compute usage continuously and ensures that resources are available when your applications need them.
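To make the idea of demand-based scaling concrete, the sketch below shows the kind of target-utilisation calculation an autoscaler performs. It is modelled on the well-known Kubernetes Horizontal Pod Autoscaler formula and is purely illustrative: GLBNXT's actual scaling policy is managed internally and is not exposed to your team.

```python
import math

def desired_replicas(current_replicas: int,
                     current_utilisation: float,
                     target_utilisation: float) -> int:
    """Illustrative target-utilisation scaling rule (HPA-style).

    desired = ceil(current_replicas * current_utilisation / target_utilisation)

    This is not GLBNXT's implementation; it only shows how an
    autoscaler derives a replica count from observed load.
    """
    raw = current_replicas * current_utilisation / target_utilisation
    return max(1, math.ceil(raw))
```

For example, four replicas running at 90% utilisation against a 60% target would scale out to six replicas, while the same four replicas at 30% utilisation would scale in to two. On GLBNXT Platform this calculation, and the capacity behind it, is a platform responsibility.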

Kubernetes Orchestration

All workloads on GLBNXT Platform run on a managed Kubernetes environment. Container scheduling, deployment management, scaling, health monitoring, and failover are handled automatically at the orchestration layer. Your team deploys applications and services without needing to interact with Kubernetes directly unless you choose to.

The orchestration layer ensures that every component in your stack runs reliably, recovers automatically from failures, and scales in response to load without manual intervention. GLBNXT manages the Kubernetes control plane, node health, and cluster configuration as part of the platform service.

Networking and Security

Your platform environment runs within an isolated network boundary. All internal service communication is encrypted, and external access is controlled through managed ingress points with configurable rules. GLBNXT configures and maintains firewall policies, network segmentation, and traffic routing for your environment.

Network isolation ensures that your workloads are separated from other platform tenants at the infrastructure level. Outbound connectivity to approved external services is available and configurable, while all inbound access is controlled and audited.

Secrets and Credential Vault

All credentials, API keys, tokens, and sensitive configuration values are managed through the platform's built-in secrets vault. Secrets are never stored in code, configuration files, or environment variables that are accessible to application developers. Instead, they are injected securely into application workloads at runtime by the platform.

This approach eliminates a common source of security incidents in AI development environments, where credentials for model APIs, databases, and external services are frequently handled insecurely. On GLBNXT Platform, credential management is a platform responsibility rather than a developer responsibility.
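The sketch below shows what runtime secret injection looks like from the application side, assuming the platform mounts secrets as read-only files inside the workload container. The mount path and helper function are assumptions for illustration, not a documented GLBNXT contract; consult your platform console for the actual injection mechanism configured in your environment.

```python
from pathlib import Path

# Assumed mount point for platform-injected secrets; the real path is
# determined by the platform, not by application code.
SECRETS_DIR = Path("/var/run/secrets")

def read_secret(name: str, secrets_dir: Path = SECRETS_DIR) -> str:
    """Return the value of a runtime-injected secret.

    The secret value never appears in source code, configuration files,
    or developer-visible environment variables; the platform writes it
    to a file when the workload starts.
    """
    path = secrets_dir / name
    if not path.is_file():
        raise FileNotFoundError(f"secret {name!r} was not injected at {path}")
    return path.read_text().strip()
```

The application code holds only the secret's name, never its value; rotating a credential becomes a platform operation with no code change or redeployment of application logic.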

Storage and Backups

GLBNXT Platform provides multiple storage types to support the data needs of AI applications. Object storage is available for large unstructured data such as documents, media files, and model artefacts. Relational database storage supports structured data and application state. Vector storage is provisioned alongside the vector database services available in your environment.

All data stored on the platform is backed up automatically according to the backup policies configured for your environment. Backup frequency, retention periods, and recovery procedures are defined at the platform level. Your team does not need to manage backup infrastructure or monitor backup health.

Observability and Monitoring

GLBNXT Platform provides full-stack observability across all infrastructure components and application workloads in your environment. Infrastructure metrics, application health, model usage, and performance data are collected continuously and made available through the monitoring dashboards in your platform console.

Alerting is configured at the platform level to detect and respond to infrastructure issues before they affect your applications. GLBNXT monitors the health of every component in the stack and manages incident response for infrastructure-level events. Your team receives visibility into platform health without carrying the operational burden of managing the monitoring infrastructure itself.

Compliance and Audit Trails

Every action taken within the platform environment is logged. This includes user access events, API calls, model inference requests, data access operations, and administrative changes. Audit logs are immutable, timestamped, and retained according to the compliance requirements agreed for your environment.

Audit data is available through the Monitoring and Observability area of the platform console and can be exported for integration with your organisation's existing compliance and SIEM tooling. GLBNXT maintains the audit infrastructure and ensures that log completeness and integrity are preserved at all times.
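As a rough sketch of what SIEM integration involves on the consuming side, the function below normalises one exported audit record into a single JSON line. The field names (`timestamp`, `actor`, `action`, `resource`) are assumptions about what an audit record might contain, not the documented GLBNXT export schema.

```python
import json
from datetime import datetime, timezone

def to_siem_line(record: dict) -> str:
    """Normalise one audit record into a JSON line for SIEM ingestion.

    Field names here are illustrative assumptions; map them to the
    actual export schema provided in your environment.
    """
    ts = datetime.fromtimestamp(record["timestamp"], tz=timezone.utc)
    return json.dumps(
        {
            "time": ts.isoformat(),          # ISO 8601, UTC
            "actor": record.get("actor", "unknown"),
            "action": record["action"],
            "resource": record.get("resource"),
        },
        sort_keys=True,
    )
```

One record per line (JSON Lines) is a common interchange shape because most SIEM tooling can tail and parse it without custom connectors.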

Model Routing

Model routing manages how inference requests are directed to the appropriate model and compute resources within your environment. When your application makes an inference request, the routing layer handles model selection, load balancing across available compute, and failover if a serving instance becomes unavailable.

Model routing is transparent to application developers. Your application calls a model endpoint and the routing layer handles everything behind it. As your environment grows and more models are added, routing configuration can be updated centrally without changes to your application code.
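The sketch below illustrates the behaviour described above, selecting a healthy serving instance for a requested model, balancing across replicas, and failing over when an instance is unhealthy, as a minimal round-robin router. The class and its structure are assumptions for illustration only; GLBNXT's routing layer runs inside the platform and your application never implements this logic itself.

```python
import itertools

class ModelRouter:
    """Illustrative round-robin router with failover (not GLBNXT's code).

    registry maps a model name to the list of serving instance URLs
    currently backing it.
    """

    def __init__(self, registry: dict[str, list[str]]):
        self._cycles = {m: itertools.cycle(urls) for m, urls in registry.items()}
        self._replicas = {m: len(urls) for m, urls in registry.items()}
        self._unhealthy: set[str] = set()

    def mark_unhealthy(self, url: str) -> None:
        # In a real system this would be driven by health checks.
        self._unhealthy.add(url)

    def route(self, model: str) -> str:
        """Return a healthy instance URL for the model, failing over
        past unhealthy replicas."""
        cycle = self._cycles.get(model)
        if cycle is None:
            raise KeyError(f"no serving instances registered for {model!r}")
        # Try each replica at most once before giving up.
        for _ in range(self._replicas[model]):
            url = next(cycle)
            if url not in self._unhealthy:
                return url
        raise RuntimeError(f"all instances for {model!r} are unhealthy")
```

Because the registry lives in the router rather than in application code, adding replicas or swapping a model's backing instances is a central configuration change, which is exactly why routing updates on the platform require no application changes.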

What This Means for Your Team

The Managed Infrastructure Layer means that your development team starts every project on a fully operational, enterprise-grade foundation. There are no infrastructure tickets to raise before beginning development, no waiting periods for environment provisioning, and no ongoing operational work required to keep the platform running.

The practical outcome is that engineering effort on GLBNXT Platform goes almost entirely into building AI solutions. The infrastructure concerns that typically consume significant DevOps and platform engineering time are a platform responsibility, not a team responsibility. Your engineers build. GLBNXT operates.
