Model Hub
The Model Hub is your central place to discover and explore all AI models available on the GLBNXT Platform. Whether you are looking for a model for text generation, code assistance, or multimodal tasks, the Model Hub helps you compare models and find the right fit for your use case.
Overview
The Model Hub presents all available models in a searchable, filterable table. Each model entry gives you key information at a glance: the model's name, its creator, a technical model ID for use in API calls, a description of what the model can do, and its supported use cases.

Finding the Right Model
To help you navigate the model library, the Model Hub offers several filters:
Model Creator: Filter by the organization behind the model (e.g., Anthropic, Mistral).
Region: Filter by where the model is hosted. This is important for compliance and data residency requirements (see "Regional Availability" below).
Use Case: Filter by what the model is designed for, helping you quickly find models that match your needs.
You can also search by model name or model ID using the search field.

Regional Availability
Each model shows one or more flag icons indicating where it is hosted. Models may be configured in redundant setups across multiple regions, which is why you may see more than one flag. The flags have the following meanings:
Dutch flag: The model runs on physical hardware located in the Netherlands.
European flag (EU): The model runs on physical hardware in Europe, operated by a European company.
US flag: The model runs on physical hardware in the United States, or the hosting company is a US entity.
Understanding regional availability helps you make informed decisions about data residency and regulatory compliance.
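If you need to enforce a data-residency policy programmatically, the region flags above can drive a simple client-side filter. The sketch below is illustrative only: the record layout and the "NL"/"EU"/"US" region codes are assumptions for the example, not the platform's actual data format.

```python
# Hypothetical sketch: keep only models whose every hosting region is in an
# allowed set, e.g. for an EU-only data-residency policy. The model records
# and region codes are illustrative assumptions.

def filter_by_allowed_regions(models, allowed_regions):
    """Return models hosted exclusively in the allowed regions."""
    allowed = set(allowed_regions)
    return [m for m in models if set(m["regions"]) <= allowed]

models = [
    {"id": "glbnxt/example-model", "regions": ["NL"]},
    {"id": "anthropic/claude-opus-4.6", "regions": ["EU", "US"]},
]

# A model with any US hosting is excluded under an EU/NL-only policy.
eu_only = filter_by_allowed_regions(models, ["NL", "EU"])
```

A model that is redundantly hosted across regions passes the filter only if all of its regions are allowed, which matches the compliance reading of the flags: one non-compliant region is enough to disqualify it.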
GLBNXT-Hosted Models
Models with the glbnxt/ prefix in their model ID run exclusively on dedicated GLBNXT infrastructure. This means the model is hosted and managed entirely by GLBNXT, giving you full control over where your data is processed.
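In code, the prefix convention described above makes GLBNXT-hosted models easy to detect. Only the glbnxt/ prefix comes from the documentation; the helper name is our own.

```python
# Sketch: identifying GLBNXT-hosted models by the "glbnxt/" prefix in their
# model ID, as described above. The function name is illustrative.

def is_glbnxt_hosted(model_id: str) -> bool:
    """True if the model runs on dedicated GLBNXT infrastructure."""
    return model_id.startswith("glbnxt/")
```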
Model ID
Each model has a unique technical identifier (e.g., anthropic/claude-opus-4.6) that you use to reference it in API calls. You can copy a model ID to your clipboard by clicking the copy icon next to it. A confirmation notification appears when the ID is copied.
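A copied model ID is typically pasted into the request body of an API call. The sketch below shows one plausible shape for such a request; the field names ("model", "messages") are assumptions modeled on common chat-completion APIs, not a documented GLBNXT request contract.

```python
import json

# Illustrative sketch: referencing a model ID copied from the Model Hub in an
# API request body. Field names are assumptions, not a documented contract.

payload = {
    "model": "anthropic/claude-opus-4.6",  # model ID from the Model Hub
    "messages": [
        {"role": "user", "content": "Summarize this document."},
    ],
}

body = json.dumps(payload)  # serialized request body
```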
Model Card
Clicking on a model row opens its model card, a detailed view with additional information:
Overview: The model's full name, model ID (both copyable), creator, and regional availability.
Supported Modalities: Shows which input types (e.g., Text, Image, Audio, Video, Code, Document) and output types the model supports, with clear indicators for supported and unsupported modalities.
Capabilities: Describes what the model can do, when available.
Technical Specifications: Key technical details such as context window size and maximum completion tokens.
The model card helps you evaluate whether a model meets your technical requirements before integrating it.
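The context window and maximum completion tokens from the Technical Specifications determine how long your prompts and responses can be. The sketch below shows one way to budget tokens from those two values; the numbers are placeholders, so read the real figures from the model card.

```python
# Sketch: checking that a prompt fits a model's context window and computing
# the completion-token budget. The numeric values are placeholders; take the
# real limits from the model card's Technical Specifications.

def completion_budget(context_window: int,
                      max_completion: int,
                      prompt_tokens: int) -> int:
    """Tokens available for the completion, capped by the model's maximum."""
    remaining = context_window - prompt_tokens
    if remaining <= 0:
        raise ValueError("Prompt exceeds the model's context window")
    return min(remaining, max_completion)

# e.g. a 200k-token window, 8k max completion, 150k-token prompt
budget = completion_budget(200_000, 8_000, 150_000)  # -> 8000
```

The budget is the smaller of the space left in the context window and the model's own completion cap, so a very long prompt shrinks the usable response length even when the cap is generous.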

Benchmarks
When choosing a model, benchmarks can help you understand how it performs on standardized tasks. Benchmarks measure a model's abilities across areas like reasoning, coding, mathematics, and instruction following.
The Model Hub displays benchmark scores when available. For a broader view of how models compare, the following independent sources provide up-to-date benchmark data:
Artificial Analysis: Compares hundreds of models on quality, speed, and pricing. Covers both open-source and proprietary models. A good starting point for a general performance overview.
Hugging Face Model Cards: Each model's page on Hugging Face includes detailed benchmark scores reported by the model creator. Useful for in-depth technical evaluation of a specific model.
Chatbot Arena: Ranks models based on anonymous head-to-head evaluations by real users. The Elo-based leaderboard reflects how models perform in practice, beyond standardized tests.
These sources are publicly accessible and regularly updated.