Supported open-source tools
GLBNXT Platform is built on open-source technology. Every component in the platform stack is drawn from the open-source ecosystem, made enterprise-ready, and delivered as a managed service within your environment. Your team works with the best open tools available without carrying the operational burden of deploying, configuring, patching, and maintaining them. GLBNXT handles that entirely.
This section provides an overview of the open-source tools available on GLBNXT Platform, organised by the role each plays in the platform stack. Not every tool listed here is available in every environment by default. The specific stack configured for your organisation is agreed during onboarding based on your use case requirements. If you need a tool that is not currently available in your environment, raise it with your GLBNXT contact.
Open-Source Tools by Category
| Tool | Category | Description |
| --- | --- | --- |
| Open WebUI | AI Assistants and Chat Interfaces | Self-hosted chat interface for interacting with language models, supporting multi-model conversations and RAG-based chat |
| LibreChat | AI Assistants and Chat Interfaces | Flexible open-source chat interface supporting multiple AI backends, conversation management, and agent capabilities |
| Jupyter Notebook | Development and Experimentation | Interactive computing environment for data science, model experimentation, and AI development workflows |
| n8n | Workflow Automation and Orchestration | Visual workflow automation platform connecting AI models, APIs, databases, and external services through a node-based interface |
| Langflow | Workflow Automation and Orchestration | Visual builder for AI pipelines and RAG workflows, suited for prototyping and iterating on language model chains |
| LangChain | AI Development Framework | Open-source framework for building applications with language models, supporting chains, agents, and retrieval workflows |
| LiteLLM Proxy | Model Serving and Routing | Unified API proxy that standardises access to multiple language model providers behind a single consistent interface |
| Langfuse | Observability and Monitoring | LLM tracing, evaluation scoring, and dataset management for tracking model quality and performance across AI applications |
| PostgreSQL | Data Storage and Management | Relational database for structured data storage, application state, conversation history, and transactional workloads |
| pgvector | Vector Storage and Retrieval | PostgreSQL extension enabling vector similarity search directly within a relational database, suited for lightweight RAG use cases |
| Weaviate | Vector Storage and Retrieval | Dedicated vector database designed for semantic search, hybrid retrieval, and multi-modal data handling at scale |
| Qdrant | Vector Storage and Retrieval | High-performance vector database optimised for low-latency retrieval under high query loads in production RAG pipelines |
| Elasticsearch | Search and Analytics | Full-text search and analytics engine supporting hybrid retrieval combining keyword and vector search across large document sets |
| MinIO | Data Storage and Management | S3-compatible object storage for documents, model artefacts, binary files, and unstructured content within the platform environment |
Model Serving and Inference
Ollama is the primary inference runtime for open-source language and embedding models on the platform. It supports a wide range of models from the open-source ecosystem and exposes them through a consistent API compatible with standard AI development frameworks. GLBNXT manages Ollama deployment, model loading, and version management within your environment.
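As a sketch of what working with this API looks like, the following uses only the Python standard library to call Ollama's chat endpoint. It assumes Ollama's default address (`http://localhost:11434`) and an illustrative model name; both will differ in a managed GLBNXT environment.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default endpoint; adjust for your environment


def build_chat_request(model: str, prompt: str) -> dict:
    """Build a request body for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # return one JSON response instead of a token stream
    }


def chat(model: str, prompt: str) -> str:
    """Send a non-streaming chat request and return the assistant's reply."""
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/chat",
        data=json.dumps(build_chat_request(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]


# Usage (requires a running Ollama instance; model name is illustrative):
#   reply = chat("llama3.2", "Summarise what a vector database does.")
```

Because Ollama also exposes an OpenAI-compatible endpoint, most standard AI frameworks can point at it without code changes.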
NVIDIA NIM provides hardware-optimised model serving for production inference workloads that require high throughput and low latency. NIM runs on NVIDIA GPU infrastructure and is suited for user-facing applications and high-volume API services where inference performance is a critical requirement.
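NIM serves models behind an OpenAI-compatible `/v1/chat/completions` endpoint, which is the same request shape LiteLLM Proxy standardises on. A minimal sketch, assuming a placeholder base URL and model name for your deployment:

```python
import json
import urllib.request


def build_openai_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Request body in the OpenAI chat-completions format used by NIM and LiteLLM Proxy."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


def chat_completion(base_url: str, model: str, prompt: str, api_key: str = "") -> str:
    """POST to an OpenAI-compatible /v1/chat/completions endpoint and return the reply text."""
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(build_openai_chat_request(model, prompt)).encode(),
        headers=headers,
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]


# Usage (placeholder URL and model; point at your NIM or LiteLLM Proxy endpoint):
#   reply = chat_completion("http://nim.internal:8000", "meta/llama-3.1-8b-instruct", "Hello")
```

The practical consequence is that application code written against this one interface can be repointed between serving backends without modification.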
The Hugging Face Hub is the open model registry from which models can be pulled and deployed into your environment. Models sourced from Hugging Face are validated and served through the platform's managed inference layer rather than called externally, keeping all inference within your sovereign boundary.
AI Assistants and Chat Interfaces
Open WebUI is a feature-rich, self-hosted chat interface that provides a conversational frontend for interacting with language models. It supports multi-model conversations, document uploads, retrieval-augmented chat, and a range of configuration options for assistant behaviour. Open WebUI is available as a managed application within your environment and is the primary chat interface for GLBNXT Workspace deployments.
LibreChat is an open-source chat interface that supports multi-model interaction, conversation management, and integration with a range of AI backends. It is suited for deployments requiring a flexible, configurable chat experience with support for multiple models and agent capabilities within the same interface.
Workflow Automation and Orchestration
n8n is a workflow automation platform that enables teams to build complex automated processes connecting AI models, APIs, databases, and external services through a visual node-based interface. It supports several hundred service integrations and is the primary low-code workflow automation tool on GLBNXT Platform.
Langflow is a visual builder for AI pipelines and workflows with a focus on language model chains, RAG systems, and agent configurations. It is particularly suited for prototyping and iterating on AI pipeline designs before committing to a production architecture, and for teams who prefer a visual approach to constructing model chains and retrieval workflows.
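Although both tools are configured visually, the resulting workflows are typically invoked programmatically. For example, an n8n workflow that starts with a Webhook trigger node can be kicked off with a plain HTTP POST; the webhook URL and payload fields below are hypothetical.

```python
import json
import urllib.request


def build_webhook_payload(document_id: str, action: str) -> dict:
    """Example payload for an n8n Webhook trigger node (field names are illustrative)."""
    return {"documentId": document_id, "action": action}


def trigger_workflow(webhook_url: str, payload: dict) -> int:
    """POST JSON to an n8n webhook URL and return the HTTP status code."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status


# Usage (hypothetical webhook URL copied from an n8n Webhook trigger node):
#   trigger_workflow("https://n8n.example.internal/webhook/ingest-document",
#                    build_webhook_payload("doc-123", "index"))
```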
Vector Databases
Weaviate is a vector database designed for semantic search, multi-modal data handling, and hybrid retrieval combining vector similarity with structured metadata filtering. It supports automatic vectorisation at ingestion time and is well suited for knowledge base applications and document retrieval systems that require both semantic and keyword-based search.
Qdrant is a high-performance vector database optimised for low-latency retrieval at scale. It is suited for production RAG pipelines and real-time AI applications where retrieval speed under load is a critical requirement.
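To make the retrieval workflow concrete, here is a sketch against Qdrant's REST API using only the standard library. It assumes Qdrant's default REST port (6333); the collection name and vector size are placeholders that must match your embedding model.

```python
import json
import urllib.request

QDRANT_URL = "http://localhost:6333"  # Qdrant's default REST port; adjust for your environment


def collection_config(vector_size: int) -> dict:
    """Body for PUT /collections/{name}; size must match your embedding dimensionality."""
    return {"vectors": {"size": vector_size, "distance": "Cosine"}}


def search_body(query_vector: list, limit: int = 5) -> dict:
    """Body for POST /collections/{name}/points/search."""
    return {"vector": query_vector, "limit": limit, "with_payload": True}


def qdrant_request(method: str, path: str, body: dict) -> dict:
    """Send a JSON request to the Qdrant REST API and return the parsed response."""
    req = urllib.request.Request(
        f"{QDRANT_URL}{path}",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
        method=method,
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Usage (requires a running Qdrant instance; "docs" and 384 are placeholders):
#   qdrant_request("PUT", "/collections/docs", collection_config(384))
#   hits = qdrant_request("POST", "/collections/docs/points/search", search_body([0.1] * 384))
```

The same operations are available through Qdrant's official client libraries, which most teams would use in practice.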
Data Storage and Management
PostgreSQL (Postgres) is the relational database service available on GLBNXT Platform, used for structured data storage, application state management, conversation history, and any use case requiring SQL querying, relational data models, or transactional consistency. Postgres is a foundational component of most AI application architectures on the platform.
Supabase provides a developer-friendly layer on top of Postgres, adding authentication, real-time data subscriptions, and a REST API that makes it straightforward to build data-backed applications without writing custom database access layers. It is suited for teams who want the power of Postgres with a more accessible developer interface.
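As an illustration of that REST layer, the sketch below queries a table through Supabase's PostgREST-style interface using the standard library. The project URL, table name, and filter column are placeholders for your deployment.

```python
import json
import urllib.parse
import urllib.request


def build_rest_url(project_url: str, table: str, filters: dict) -> str:
    """Build a PostgREST-style query URL: /rest/v1/{table}?col=eq.value&select=*."""
    params = {"select": "*", **{col: f"eq.{val}" for col, val in filters.items()}}
    return f"{project_url}/rest/v1/{table}?{urllib.parse.urlencode(params)}"


def fetch_rows(project_url: str, table: str, filters: dict, api_key: str) -> list:
    """Query a table through the Supabase REST API and return matching rows."""
    req = urllib.request.Request(
        build_rest_url(project_url, table, filters),
        headers={"apikey": api_key, "Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Usage (placeholder project URL, table, and key):
#   rows = fetch_rows("https://supabase.internal", "conversations", {"user_id": "42"}, api_key="...")
```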
MinIO provides S3-compatible object storage within your platform environment for storing large unstructured data including documents, images, model artefacts, and binary files. Any tool or library that works with the S3 API works with MinIO on GLBNXT Platform without modification.
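For instance, the widely used `boto3` S3 client only needs its endpoint redirected to MinIO. The endpoint, credentials, and bucket name below are placeholders; `boto3` is imported inside the function so the configuration helper can be read and reused independently.

```python
def minio_client_config(endpoint: str, access_key: str, secret_key: str) -> dict:
    """Connection settings for an S3 client pointed at MinIO instead of AWS."""
    return {
        "endpoint_url": endpoint,           # MinIO endpoint inside your environment
        "aws_access_key_id": access_key,
        "aws_secret_access_key": secret_key,
    }


def upload_file(endpoint: str, access_key: str, secret_key: str,
                bucket: str, key: str, path: str) -> None:
    """Upload a local file to a MinIO bucket through the standard S3 API."""
    import boto3  # imported here so the config helper above works without boto3 installed

    s3 = boto3.client("s3", **minio_client_config(endpoint, access_key, secret_key))
    s3.upload_file(path, bucket, key)


# Usage (placeholder endpoint, credentials, and bucket):
#   upload_file("http://minio.internal:9000", "ACCESS", "SECRET",
#               "model-artefacts", "embeddings/v1.bin", "./embeddings.bin")
```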
Elasticsearch provides full-text search, log analytics, and hybrid retrieval capabilities combining keyword and vector search. It is suited for applications that need to search across large document sets using structured queries and full-text matching, and for observability use cases where log data needs to be indexed and queried efficiently.
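A hybrid query in Elasticsearch 8.x combines a keyword `match` clause with an approximate-kNN `knn` clause in one request body, as sketched below. The field names (`content`, `embedding`) are assumptions and must match your index mapping.

```python
def hybrid_search_body(text_query: str, query_vector: list,
                       text_field: str = "content", vector_field: str = "embedding",
                       k: int = 10) -> dict:
    """Elasticsearch 8.x search body combining keyword (BM25) and vector retrieval."""
    return {
        "query": {"match": {text_field: text_query}},  # keyword side
        "knn": {                                       # vector side
            "field": vector_field,
            "query_vector": query_vector,
            "k": k,
            "num_candidates": 5 * k,  # candidate pool per shard for the ANN search
        },
        "size": k,
    }


# Usage: POST this body to /{index}/_search on your Elasticsearch endpoint,
# e.g. with urllib or the official elasticsearch Python client.
```

Scores from the two clauses are combined by Elasticsearch, so a document matching both the keywords and the semantic intent ranks above one matching only either side.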
Observability and Monitoring
Langfuse provides LLM tracing, evaluation scoring, and dataset management for AI applications and pipelines. It captures traces of model interactions, supports automated and human evaluation workflows, and provides dashboards for tracking quality and performance metrics over time. Langfuse is the primary LLM observability tool on GLBNXT Platform.
Opik provides evaluation and observability capabilities focused on pipeline quality and output assessment. It supports evaluation dataset construction, automated scoring against defined criteria, and comparison of model versions across evaluation runs, making it well suited for structured model evaluation and selection processes.
Kubernetes Orchestration and Infrastructure
Rancher is the Kubernetes management platform used by GLBNXT to orchestrate containerised workloads across the platform infrastructure. It provides the control plane for container scheduling, deployment management, scaling, and cluster health monitoring across your environment. Rancher is a platform-managed component that operates transparently beneath your applications.
Open Standards and No Lock-In
Every tool in the GLBNXT Platform stack is open-source and built on open standards. There are no proprietary data formats, no black-box components, and no lock-in to GLBNXT as a vendor at the tool level. If your organisation's requirements change, the underlying technology remains fully accessible and portable.
GLBNXT's role is to take these tools, make them enterprise-ready, integrate them into a coherent managed platform, and operate them reliably within a sovereign, compliant environment. The value GLBNXT adds is in the orchestration, the operational management, and the security and compliance layer that sits across the full stack. The tools themselves remain open.
As the open-source AI ecosystem evolves rapidly, GLBNXT continuously evaluates new tools and updates the platform stack to include components that meet the quality, security, and reliability standards required for enterprise production environments. If your team is working with an open-source tool not listed here that you would like to see available in your environment, speak to your GLBNXT contact about its suitability for inclusion.