Model transparency controls
Model transparency is a core principle of GLBNXT Platform. Every AI model serving inference requests in your environment is clearly attributed, its provenance is documented, and your organisation retains full visibility into which model produced which output. There are no black-box AI layers operating behind anonymous endpoints, no undisclosed model substitutions, and no scenario in which your applications are routing requests to a model your team has not explicitly selected and approved.
For organisations deploying AI in regulated sectors, for teams building products that must meet emerging AI governance requirements, and for any organisation that takes its accountability obligations seriously, model transparency is not an optional extra. It is a prerequisite for responsible AI deployment. GLBNXT Platform is designed to make it straightforward to achieve.
Model Attribution
Every model available in your environment through the Model Hub is clearly identified. For each model, the following information is available:
Model identity: the name, version, and family of the model, identifying precisely which model is serving requests on a given endpoint. There is no scenario in which a request you believe is being processed by one model is silently routed to a different one.
Model provenance: the origin of the model, including its developer, training approach, and the source from which it was obtained, whether that is the open-source community, a model provider, or a custom training run conducted by your team.
Serving runtime: the inference runtime through which the model is served, either Ollama for open-source models or NVIDIA NIM for optimised production inference, along with the version of the serving runtime in use.
Version history: a record of previous model versions deployed on each endpoint in your environment, supporting traceability for any output produced during a prior version's deployment period.
Model attribution information is available in the Model Hub area of the platform console and is captured in the audit trail for every inference request.
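As a concrete illustration, the attribution fields listed above can be modelled as a single structured record per model. This is a sketch only: the class, field names, and example values below are our own assumptions for illustration, not the platform's actual Model Hub schema or API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelAttribution:
    # Illustrative record; field names are assumptions, not the
    # platform's actual Model Hub schema.
    name: str                   # model identity: name
    version: str                # model identity: version
    family: str                 # model identity: family
    provenance: str             # developer, training approach, source
    serving_runtime: str        # "ollama" or "nvidia-nim"
    runtime_version: str        # version of the serving runtime
    version_history: tuple = () # prior versions served on this endpoint

attribution = ModelAttribution(
    name="llama-3.1-8b-instruct",
    version="2.1.0",
    family="llama-3.1",
    provenance="open source (Meta); obtained from the Ollama library",
    serving_runtime="ollama",
    runtime_version="0.3.12",
    version_history=("2.0.0", "1.4.1"),
)
print(f"{attribution.name}@{attribution.version} via {attribution.serving_runtime}")
```

The point of the structure is that identity, provenance, runtime, and version history travel together: any output can be tied back to exactly one such record.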
No Training on Your Data
GLBNXT Platform does not use data processed through your environment to train, fine-tune, evaluate, or improve any AI model. Inference requests submitted by your applications and users are processed within your environment and returned to the requesting application. They do not flow to any external model provider, do not contribute to any training dataset, and do not result in any modification to model weights.
This guarantee applies to all models served through the platform, including open-source models served through Ollama, models served through NVIDIA NIM, and any custom models deployed into your environment. The data your organisation processes through AI models on GLBNXT Platform remains yours and is used exclusively to serve the requests you submit.
For organisations with contractual obligations to clients about how client data is handled in AI systems, this guarantee provides the foundation for those commitments. It is documented in the GLBNXT data processing agreement and service contract.
Explainability and Output Traceability
Transparent AI deployment requires not only knowing which model produced an output but being able to trace how that output was produced. GLBNXT Platform supports output traceability through the audit logging and LLM tracing capabilities available in your environment.
For every model interaction, the audit trail captures the model endpoint called, the input submitted, and the output returned. This creates a complete, immutable record of every AI-generated output produced in your environment, linked to the specific model version that produced it. If a question is raised about a specific output, your team can retrieve the exact input, model, and context that produced it.
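To make the shape of such a record concrete, the sketch below builds a minimal audit entry linking an output to the model version and input that produced it. The key names and the use of content hashes are illustrative assumptions, not the platform's actual log schema.

```python
import hashlib
import json

def audit_entry(model, model_version, endpoint, prompt, output):
    """Build an illustrative audit record; the key names here are
    assumptions about what such a record could contain, not the
    platform's actual audit-log schema."""
    return {
        "endpoint": endpoint,
        "model": model,
        "model_version": model_version,
        # Content hashes let the record prove which input and output
        # it refers to, even if the raw text is stored separately.
        "input_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }

entry = audit_entry(
    model="llama-3.1-8b-instruct",
    model_version="2.1.0",
    endpoint="/v1/chat/completions",
    prompt="Summarise the attached contract.",
    output="The contract covers supply terms and termination rights.",
)
print(json.dumps(entry, indent=2))
```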
For applications built on RAG pipelines or agent workflows, LLM tracing provides a deeper level of output traceability. Traces capture not only the final model output but every intermediate step: the retrieval results that were used as context, the tool calls the model made, the intermediate outputs produced at each reasoning step, and the full chain of events that led to the final response. This level of traceability is particularly important for regulated use cases where the basis for an AI-assisted decision may need to be explained and justified.
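A trace of this kind can be pictured as a nested record of steps leading to the final response. The step types, field names, and values below are illustrative assumptions about what a RAG trace could capture, not the platform's actual tracing schema.

```python
# Illustrative RAG/agent trace; step types and field names are
# assumptions, not the platform's actual tracing schema.
trace = {
    "trace_id": "trace-0042",
    "model_version": "2.1.0",
    "steps": [
        {"type": "retrieval",
         "query": "termination clauses in supplier contracts",
         "documents": ["doc-17", "doc-23"]},
        {"type": "tool_call",
         "tool": "clause_lookup",
         "result": "Clause 9.2 governs termination."},
        {"type": "llm_call",
         "output": "Termination is governed by clause 9.2."},
    ],
    "final_output": "Termination is governed by clause 9.2.",
}

# Walking the steps reconstructs the chain of events behind the final
# response, e.g. which retrieved documents served as context.
context_docs = [d for s in trace["steps"]
                if s["type"] == "retrieval" for d in s["documents"]]
print(context_docs)  # ['doc-17', 'doc-23']
```

Because every intermediate step is recorded, a reviewer can answer not only "which model said this?" but "on what basis?".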
Model Governance
Model transparency controls extend to how models enter and change within your environment. GLBNXT Platform applies a managed model governance process that ensures your team always knows what is running in your environment and has approved it before it goes into production.
Model onboarding review: before a new model is made available in your Model Hub, GLBNXT validates the model against defined criteria covering its provenance, capability, and suitability for the platform environment. Models are not added to production environments without a review and deployment process that your team participates in.
Version change notification: when a model version in your environment is being updated, your team is notified in advance. Version changes are not applied silently. Your team has the opportunity to evaluate the new version against your quality criteria before it is promoted to the production endpoint.
Custom model governance: custom or fine-tuned models developed by your team and deployed into the Model Hub are subject to the same versioning and promotion process as other models. Each version is distinct, tracked, and deployed through a controlled process rather than pushed directly to production endpoints.
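The evaluation step that precedes a version promotion can be sketched as a simple gate: a candidate version is only promoted once it meets your quality criteria. The function, threshold, and scores below are hypothetical, intended only to show the shape of such a check, not any actual platform mechanism.

```python
def approve_promotion(candidate_score, baseline_score, min_score=0.80):
    """Hypothetical promotion gate: a new model version is promoted
    only if it meets an absolute quality bar and does not regress
    against the version currently serving production traffic."""
    return candidate_score >= min_score and candidate_score >= baseline_score

# Evaluate the candidate against your own quality criteria first,
# then gate the promotion on the result.
print(approve_promotion(candidate_score=0.86, baseline_score=0.84))  # True
print(approve_promotion(candidate_score=0.79, baseline_score=0.84))  # False
```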
AI Act Alignment
The EU AI Act introduces a risk-based regulatory framework for AI systems, with transparency and documentation requirements that apply to providers and deployers of AI across different risk categories. GLBNXT Platform's model transparency controls are designed to support compliance with AI Act obligations as the regulation comes into effect.
Relevant capabilities that support AI Act alignment include:
Technical documentation: model attribution, provenance documentation, and version history provide the basis for the technical documentation requirements that apply to AI systems under the AI Act.
Logging and auditability: the platform's comprehensive audit trail and LLM tracing capabilities support the record-keeping requirements for high-risk AI systems, including the ability to demonstrate that AI outputs are traceable to specific model versions and inputs.
Human oversight: the platform architecture supports the implementation of human oversight mechanisms for AI systems where the AI Act requires that human review is possible and that outputs are not acted upon automatically in high-risk contexts.
Transparency to users: for AI systems deployed to end users, the platform's frontend and API components support the implementation of transparency measures required by the AI Act, including disclosure that users are interacting with an AI system.
The AI Act's requirements are phased in over time and apply differently depending on the risk category of the AI system in question. Your legal and compliance teams should assess the specific obligations applicable to each AI application your organisation deploys and ensure that application-level controls address those obligations in addition to the platform-level transparency controls described in this section.
Responsible AI Practices
Model transparency controls are one dimension of responsible AI deployment. GLBNXT Platform provides the technical foundation for transparent, accountable AI. Your organisation is responsible for the governance practices that make effective use of that foundation.
Practices that complement the platform's transparency controls and contribute to responsible AI deployment include:
Maintaining an AI system register: documenting each AI application deployed on the platform, the model or models it uses, the purpose for which it is deployed, the data it processes, and the users or processes it affects. This register provides the organisational visibility that complements the technical audit trail.
Defining acceptable use policies: establishing clear guidelines for how AI applications in your environment should and should not be used, communicated to users and enforced through system prompt configuration and access controls.
Conducting pre-deployment risk assessments: evaluating the potential impacts of a new AI application before it is deployed to users, particularly for applications used in high-stakes contexts such as hiring, lending, healthcare, or legal processes.
Reviewing AI outputs in high-stakes contexts: implementing human review checkpoints for AI-assisted decisions that have significant consequences for individuals, ensuring that the efficiency benefits of AI do not come at the cost of appropriate human accountability.
Monitoring for bias and quality drift: using the evaluation and monitoring capabilities of the platform to track model output quality over time and identify patterns that may indicate bias, degradation, or unexpected behaviour in production.
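Of the practices above, the AI system register is the most straightforward to start: one structured entry per deployed application. A minimal sketch follows; the field names and example values are our own assumptions, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class AISystemRegisterEntry:
    # Illustrative register entry; fields mirror the practice
    # described above and are not a prescribed schema.
    application: str
    models: list        # model(s) the application uses
    purpose: str        # purpose for which it is deployed
    data_categories: list  # data it processes
    affected_parties: str  # users or processes it affects

register = [
    AISystemRegisterEntry(
        application="contract-review-assistant",
        models=["llama-3.1-8b-instruct@2.1.0"],
        purpose="Summarise and flag clauses in supplier contracts",
        data_categories=["contract text", "counterparty names"],
        affected_parties="legal team reviewers",
    ),
]
print(len(register), register[0].application)
```

Kept alongside the platform's technical audit trail, a register like this gives compliance teams a single organisational view of what is deployed, why, and for whom.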
For guidance on the audit and logging capabilities that underpin model transparency on GLBNXT Platform, see the Audit Logs and Query History section. For guidance on evaluation tooling that supports ongoing model quality monitoring, see the Model Evaluation and Versioning section.