Storage & data layer

The storage and data layer on GLBNXT Platform provides every data service your AI applications need, managed and operated by GLBNXT as part of the platform foundation. Relational databases, object storage, and vector databases are all provisioned within your environment, configured to work together, and maintained without any operational overhead falling to your development team.

This section explains what storage services are available, when to use each one, and how they integrate with the AI applications and pipelines your team builds on the platform.

Storage Services Overview

GLBNXT Platform provides three categories of storage, each serving a distinct role in AI application architectures.

Relational database storage handles structured data, application state, user data, and transactional workloads. Postgres is the relational database available on the platform, suitable for any use case that requires structured querying, relational data models, or ACID-compliant transactions.

Object storage handles large volumes of unstructured data such as documents, images, audio files, model artefacts, and other binary content. MinIO provides S3-compatible object storage within your environment, giving your applications a familiar interface for storing and retrieving large files without any dependency on external cloud storage services.

Vector storage handles high-dimensional embeddings used for semantic search, retrieval-augmented generation, and similarity matching. GLBNXT Platform supports Postgres, Weaviate and Qdrant as vector database options, with the appropriate choice configured for your environment based on your use case requirements.

All three categories of storage run entirely within your platform environment, within EU infrastructure, and with no data leaving your sovereign boundary.

Postgres

Postgres is the relational database service available on GLBNXT Platform. It is suited for structured data that requires querying with SQL, relational modelling, or transactional consistency. Common uses in AI application architectures include storing user profiles and conversation history, managing application configuration and metadata, logging structured outputs from AI workflows, and maintaining reference data that models or agents query at runtime.

GLBNXT manages Postgres provisioning, configuration, patching, and backup. Your team connects to Postgres using standard database connection libraries and tools. Connection credentials are managed through the platform secrets vault and injected into your applications securely at runtime.
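As a sketch of what runtime connection looks like, the snippet below builds a libpq-style connection string from environment variables. The variable names (PGHOST, PGPORT, and so on) are illustrative assumptions; the actual names injected by the platform secrets vault may differ in your environment.

```python
import os

def postgres_dsn() -> str:
    """Build a libpq-style connection string from environment variables.

    The variable names used here (PGHOST, PGPORT, ...) are illustrative;
    the platform secrets vault may inject credentials under different names.
    """
    host = os.environ.get("PGHOST", "localhost")
    port = os.environ.get("PGPORT", "5432")
    dbname = os.environ.get("PGDATABASE", "app")
    user = os.environ.get("PGUSER", "app")
    password = os.environ.get("PGPASSWORD", "")
    return f"host={host} port={port} dbname={dbname} user={user} password={password}"

# With a standard driver such as psycopg, the string is used directly:
# import psycopg
# with psycopg.connect(postgres_dsn()) as conn:
#     with conn.cursor() as cur:
#         cur.execute("SELECT version()")
```

Because credentials arrive through the environment rather than being hard-coded, the same application code works unchanged across environments.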

MinIO

MinIO provides S3-compatible object storage within your GLBNXT Platform environment. It is designed for storing and retrieving large unstructured files and is the primary storage layer for document ingestion pipelines, RAG systems, and any application that works with file-based content.

Common uses include storing source documents before and after ingestion into a RAG pipeline, holding model artefacts and fine-tuned weights, archiving conversation logs and output files, and staging data between workflow steps. Because MinIO uses the S3 API, any tool or library that works with S3 will work with MinIO on GLBNXT Platform without modification.
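To illustrate the S3 compatibility, the sketch below assembles the client settings an S3 library needs to talk to an in-environment MinIO endpoint. The endpoint URL and credential variable names are assumptions for illustration, not platform-defined values.

```python
import os

def minio_client_config() -> dict:
    """S3 client settings pointing at the in-environment MinIO endpoint.

    The endpoint URL and credential variable names shown here are
    illustrative assumptions; check your environment's injected values.
    """
    return {
        "endpoint_url": os.environ.get("MINIO_ENDPOINT", "http://minio:9000"),
        "aws_access_key_id": os.environ.get("MINIO_ACCESS_KEY", ""),
        "aws_secret_access_key": os.environ.get("MINIO_SECRET_KEY", ""),
    }

# Any S3 library accepts these settings unchanged, e.g. with boto3:
# import boto3
# s3 = boto3.client("s3", **minio_client_config())
# s3.upload_file("report.pdf", "ingestion-source", "docs/report.pdf")
```

The only difference from talking to S3 itself is the `endpoint_url`; everything else, including bucket and object semantics, carries over.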

GLBNXT manages MinIO provisioning, access controls, and storage policies. Bucket configuration and access permissions for individual applications are managed through the platform console or via the MinIO API depending on your team's preferred approach.

Vector Databases

Vector databases are a core component of AI applications that involve semantic search, document retrieval, or retrieval-augmented generation. They store embeddings, which are numerical representations of text, images, or other content, and enable fast similarity search across large datasets that relational databases are not designed to handle efficiently.
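The core operation a vector database performs is similarity search over embeddings. A minimal, self-contained sketch of the idea, using cosine similarity over toy three-dimensional vectors (real embeddings have hundreds or thousands of dimensions, and vector databases use indexing structures rather than this brute-force scan):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: the query vector points mostly along the first axis.
query = [0.9, 0.1, 0.0]
documents = {
    "doc-a": [1.0, 0.0, 0.0],
    "doc-b": [0.0, 1.0, 0.0],
}

# Brute-force nearest neighbour: pick the document most similar to the query.
best = max(documents, key=lambda d: cosine_similarity(query, documents[d]))
# best is "doc-a", whose vector points in nearly the same direction as the query
```

A vector database does exactly this comparison, but with approximate nearest-neighbour indexes that keep retrieval fast across millions of embeddings.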

GLBNXT Platform supports multiple vector database options, including Postgres, Weaviate, and Qdrant. All are available within your platform environment. The choice between them is typically made during onboarding based on your specific use case requirements, and your GLBNXT contact can advise on which is the better fit for your architecture.

Weaviate

Weaviate is a vector database well suited to use cases that require rich semantic search capabilities, multi-modal data handling, and tight integration with AI models for automatic vectorisation at ingestion time. It supports hybrid search combining vector similarity with structured filtering, making it a strong choice for knowledge bases and document retrieval systems that need to combine semantic and keyword-based search.

Qdrant

Qdrant is a high-performance vector database optimised for speed and efficiency at scale. It is well suited to applications that process large volumes of embeddings and require low-latency retrieval under high query loads. Qdrant is a strong choice for production RAG pipelines and real-time AI applications where retrieval performance is a critical requirement.
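As one illustration of what a retrieval call looks like, the sketch below builds a request body for Qdrant's points search REST endpoint. The collection name and the shape of any payload filtering are assumptions; consult the Qdrant API reference for the full set of supported options.

```python
def qdrant_search_body(vector: list[float], limit: int = 5) -> dict:
    """Request body for Qdrant's POST /collections/{name}/points/search
    endpoint. Returns the `limit` nearest points to `vector`, including
    their stored payloads. Collection name is supplied in the URL.
    """
    return {
        "vector": vector,
        "limit": limit,
        "with_payload": True,
    }

# Sent as JSON to, e.g.:
#   POST {QDRANT_URL}/collections/documents/points/search
# where "documents" is an assumed collection name.
```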

Elasticsearch

For use cases that require full-text search, log analytics, or hybrid retrieval combining keyword and vector search, GLBNXT Platform includes Elasticsearch. Elasticsearch is particularly suited to applications that need to search across large document sets using both structured queries and full-text matching, and to observability use cases where log data needs to be indexed and queried efficiently.

Elasticsearch on GLBNXT Platform is managed as part of the platform stack. Your team can connect to Elasticsearch for application-level search requirements without managing the underlying cluster, index configuration, or scaling.
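As a sketch of application-level search, the function below builds an Elasticsearch query body that combines full-text matching with a structured filter. The field names (`content`, `status`) and index name are illustrative assumptions about your document schema.

```python
def hybrid_query(text: str, field: str = "content") -> dict:
    """Elasticsearch query body combining full-text matching on `field`
    with a structured term filter. Field names here are illustrative
    assumptions about the index mapping.
    """
    return {
        "query": {
            "bool": {
                "must": [{"match": {field: text}}],
                "filter": [{"term": {"status": "published"}}],
            }
        }
    }

# With the official Python client (elasticsearch-py), assuming the
# endpoint is injected as an environment variable:
# import os
# from elasticsearch import Elasticsearch
# es = Elasticsearch(os.environ["ELASTICSEARCH_URL"])
# hits = es.search(index="documents", body=hybrid_query("quarterly report"))
```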

Backup and Data Retention

All data stored on GLBNXT Platform is backed up automatically. Backup policies are configured at the platform level for each storage service and are aligned with the data retention requirements agreed for your environment during onboarding. GLBNXT monitors backup health and manages recovery procedures. Your team does not carry any operational responsibility for backup infrastructure.

If your organisation has specific data retention or deletion requirements driven by GDPR or other regulatory obligations, these are implemented at the platform level and documented in your data processing agreement with GLBNXT.

Data Sovereignty and Residency

All storage services on GLBNXT Platform run within EU infrastructure. No data stored in Postgres, MinIO, Weaviate, Qdrant, or Elasticsearch leaves the European Union at any point. This applies equally to data at rest and data in transit between platform components.

Data residency guarantees are documented in your GLBNXT service agreement and data processing agreement. If your organisation operates under specific data localisation requirements, your GLBNXT contact can confirm the exact infrastructure region used for your environment.

Choosing the Right Storage for Your Use Case

Different parts of an AI application typically require different storage types, and many production solutions on GLBNXT Platform use all three categories together. A common pattern for a RAG-based AI assistant would use MinIO to store source documents, Weaviate or Qdrant to store the embeddings generated from those documents, and Postgres to store conversation history, user data, and application metadata.
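The flow of that pattern can be sketched as a single function where each storage service appears as an injected client. The callables and the `object_key` field are placeholders standing in for the real MinIO, vector database, and Postgres clients, and the model call is elided; this shows only how the three storage layers divide the work.

```python
def answer_question(question, embed, vector_search, fetch_document, log_turn):
    """Sketch of a RAG request: vector DB for retrieval, object storage
    for source passages, relational DB for the conversation record.
    All five callables are placeholders for real service clients.
    """
    query_vec = embed(question)                    # embedding model
    hits = vector_search(query_vec, top_k=3)       # vector DB lookup
    passages = [fetch_document(h["object_key"])    # fetch sources from
                for h in hits]                     # object storage
    answer = f"Answer grounded in {len(passages)} passages"  # model call elided
    log_turn(question, answer)                     # persist to relational DB
    return answer
```

The separation keeps each concern in the storage type built for it: similarity search in the vector database, bulk file retrieval in object storage, and the transactional conversation record in Postgres.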

If you are unsure which storage services are appropriate for your use case, the Building AI Solutions section covers common data architecture patterns for each solution category available on the platform.
