Frontend & Webchat Components
Frontend and webchat components are the user-facing layer of AI applications built on GLBNXT Platform. They provide the interfaces through which end users interact with the AI capabilities your team has built, whether that is a conversational assistant embedded in a web application, a standalone chat interface deployed for an internal team, or a custom-branded AI product delivered to a client.
On GLBNXT Platform, frontend and webchat components are available as managed, configurable services that can be deployed and customised without requiring significant frontend development effort. For teams that need a fully custom interface, the platform provides the API layer that custom frontends connect to, giving engineering teams complete control over the user experience while GLBNXT manages the infrastructure behind it.
This section explains what frontend and webchat components are available on the platform, how they are configured and deployed, and how to approach the decision between managed components and custom frontend development.
What Frontend and Webchat Components Are For
Frontend and webchat components are the right choice when an AI capability needs to be accessible to users through a visual interface rather than through an API consumed by another system. They bridge the gap between the AI capabilities running in your platform environment and the people who benefit from those capabilities in their day-to-day work.
Common use cases in this category include:
Internal AI assistants deployed for employees to interact with organisational knowledge, policies, or data through a conversational interface
Client-facing AI tools embedded in a web application or portal, providing AI-powered support, guidance, or analysis to external users
Sector-specific AI interfaces for regulated industries such as healthcare, legal, and financial services, where the interface needs to reflect compliance requirements and professional context
Standalone AI workspaces deployed for teams that need a governed, managed environment for AI-assisted work without requiring integration with an existing product
Embedded webchat widgets integrated into existing websites or applications, adding AI capability to a product that was not originally built around AI
If your use case involves putting AI capability directly in front of human users, frontend and webchat components are the appropriate solution layer.
Managed Chat Interface
GLBNXT Platform provides a hosted, configurable chat interface available within your environment. It connects directly to any model endpoint in your Model Hub and provides a complete conversational frontend that can be configured and deployed without writing custom frontend code.
The managed chat interface is the fastest path to a working user-facing AI application. It is well suited for internal deployments, proof of concept demonstrations, and use cases where the conversational experience itself is the primary requirement and custom branding or interface design is not a priority.
Configuration
The managed chat interface is configured through the platform console. Key configuration options include:
Model selection: the model endpoint the interface connects to, selected from the models available in your Model Hub
System prompt: the instruction set that defines the assistant's role, scope, tone, and behavioural boundaries for every conversation conducted through the interface
Conversation history: whether the interface retains conversation history within a session, how many prior turns are included in the model context, and whether history is persisted across sessions
Access controls: which users or user groups are permitted to access the interface, managed through the role-based access controls configured for your environment
Interface customisation: basic visual customisation options including the assistant name, introductory message, and input placeholder text
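As a concrete illustration, the console options above might map onto a configuration structure like the following. The field names and values here are illustrative assumptions for this sketch, not the platform's actual configuration schema:

```typescript
// Illustrative configuration shape for the managed chat interface.
// All field names and values are assumptions, not the platform's schema.
interface ChatInterfaceConfig {
  modelEndpoint: string;                 // selected from your Model Hub
  systemPrompt: string;                  // role, scope, tone, boundaries
  history: {
    retainInSession: boolean;
    maxTurns: number;                    // prior turns included in context
    persistAcrossSessions: boolean;
  };
  allowedGroups: string[];               // role-based access control groups
  branding: {
    assistantName: string;
    introMessage: string;
    inputPlaceholder: string;
  };
}

const config: ChatInterfaceConfig = {
  modelEndpoint: "model-hub/internal-assistant-v1", // hypothetical endpoint id
  systemPrompt:
    "You are the internal HR policy assistant. Answer only from approved policy documents.",
  history: { retainInSession: true, maxTurns: 10, persistAcrossSessions: false },
  allowedGroups: ["hr-team", "people-ops"],
  branding: {
    assistantName: "Policy Assistant",
    introMessage: "Hi! Ask me about HR policies.",
    inputPlaceholder: "Type your question...",
  },
};
```

In practice these values are entered through the platform console rather than written by hand; the structure above is only meant to show how the options relate to one another.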
Extending the Managed Interface
The managed chat interface can be extended with additional platform capabilities by connecting it to other components in your environment. Connecting a vector database and retrieval pipeline transforms the interface into a RAG-based assistant that grounds responses in your organisational data. Connecting workflow automation allows the interface to trigger processes, retrieve live data, or interact with external systems as part of handling user requests. These extensions are configured at the platform level and take effect without changes to the interface itself.
Webchat Widgets
For teams that want to embed AI conversational capability within an existing website or web application, GLBNXT Platform provides webchat widget components that can be integrated into external web properties with minimal development effort.
A webchat widget is a self-contained conversational interface delivered as an embeddable component. It is placed on an existing web page using a script tag or component integration, and it communicates with the AI backend in your platform environment through the platform API layer. The external website or application does not need to implement any AI logic itself. It simply hosts the widget.
Webchat widgets are suited for use cases where AI capability needs to be added to an existing product or website without a full redesign of the frontend, or where a client wants to offer AI-powered support or assistance to their own users through their existing web presence.
Integration
Embedding a webchat widget involves three steps. First, configure the widget in your platform environment, specifying the model endpoint, system prompt, access controls, and any visual customisation options. Second, generate the widget embed code from the platform console. Third, place the embed code in the target website or application at the point where the widget should appear.
The widget handles session management, conversation history, and communication with the platform API automatically. The host website is responsible only for placing the widget and, if authentication is required, passing the user identity to the widget through the integration configuration.
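From the host page's perspective, the embed step above might look something like the following. The initialisation function, option names, and URL shapes are assumptions for this sketch, not the platform's published widget API; the real embed code is generated from the platform console:

```typescript
// Hypothetical widget initialisation options. Names and URL shapes are
// assumptions for illustration, not the platform's actual widget API.
type WidgetOptions = {
  widgetId: string;     // identifies the widget configured in the console
  apiBase: string;      // platform API base URL for your environment
  container: string;    // CSS selector for the mount point on the host page
  userToken?: string;   // optional: authenticated user identity, if required
};

// Build the initialisation object the embed script would consume.
function buildWidgetInit(opts: WidgetOptions) {
  return {
    container: opts.container,
    endpoint: `${opts.apiBase}/widgets/${opts.widgetId}/chat`,
    authenticated: Boolean(opts.userToken),
  };
}

const init = buildWidgetInit({
  widgetId: "support-widget-01",
  apiBase: "https://api.example-platform.test", // placeholder, not a real URL
  container: "#ai-chat",
});
```

The key point is the division of responsibility: the host page supplies only a mount point and, where needed, a user identity, while everything conversational happens between the widget and the platform API.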
Authentication and User Context
For widgets deployed in authenticated web applications, user identity can be passed to the widget at initialisation time. This allows the widget to apply access controls based on the authenticated user, personalise the assistant experience based on user context, and attribute conversation activity in the audit trail to specific users rather than anonymous sessions.
For public-facing widgets deployed on unauthenticated web pages, the widget operates without user identity. In these configurations, rate limiting and abuse prevention should be considered to protect against unintended high-volume usage of the underlying model endpoint.
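One common abuse-prevention technique for unauthenticated deployments is per-session rate limiting, for example with a token bucket. The sketch below is a generic illustration of the technique, independent of any platform API; the capacity and refill numbers are arbitrary:

```typescript
// Minimal token-bucket rate limiter, keyed per widget session in practice.
// Capacity and refill rate are illustrative, not recommended values.
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(private capacity: number, private refillPerSec: number) {
    this.tokens = capacity;
    this.last = Date.now();
  }

  // Returns true if the request may proceed, consuming one token.
  allow(): boolean {
    const now = Date.now();
    const elapsedSec = (now - this.last) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

Whether rate limiting is enforced in the widget, at the API gateway, or both depends on your deployment; enforcing it server-side is the safer default, since client-side checks can be bypassed.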
Custom Frontend Development
For use cases that require a fully customised user interface, specific interaction patterns not supported by the managed components, deep integration with an existing product design system, or complete control over the user experience, GLBNXT Platform provides the API layer that custom frontends connect to.
Custom frontend development allows your team to build any interface experience while relying on GLBNXT Platform for everything behind it: model serving, retrieval, agent execution, secrets management, observability, and compliance. The frontend communicates with the platform through standard REST API calls, and the platform returns responses in consistent, documented formats.
When to Choose Custom Frontend Development
Custom frontend development is appropriate when:
The user experience requirements are specific enough that the managed chat interface cannot meet them through configuration alone
The AI capability needs to be deeply integrated into an existing product interface where introducing a separate chat component would create a disjointed user experience
The interface needs to present AI outputs in formats other than conversational text, such as structured data visualisations, document annotations, comparison views, or interactive outputs
Branding, accessibility, or design system requirements mandate that the interface is built within the organisation's existing frontend architecture
Custom frontend development is not necessary for most internal deployments or proof of concept builds. The managed chat interface and webchat widgets cover the majority of common use cases without requiring frontend engineering effort.
Connecting a Custom Frontend to the Platform
A custom frontend communicates with GLBNXT Platform through API endpoints that your team defines and deploys within the platform environment. The custom frontend calls these endpoints over HTTPS with the appropriate authentication credentials, and the platform API handles all backend AI processing, retrieval, and data management.
This architecture keeps the AI logic and the frontend logic cleanly separated. Changes to the AI backend, such as updating a model, adjusting a system prompt, or modifying retrieval configuration, do not require frontend changes. Changes to the interface, such as redesigning the layout or adding new interaction patterns, do not require changes to the AI backend. Each layer can evolve independently.
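A custom frontend call to one of these endpoints might be assembled as below. The route, payload shape, and bearer-token scheme are assumptions for this sketch; the real contract is whatever your team defines and documents in the API layer:

```typescript
// Sketch of building a chat request from a custom frontend.
// Route and payload shape are hypothetical, not a documented platform contract.
interface ChatRequest {
  message: string;
  conversationId?: string; // continue an existing conversation, if supported
}

function buildChatCall(apiBase: string, apiKey: string, req: ChatRequest) {
  return {
    url: `${apiBase}/v1/assistant/chat`, // hypothetical route
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`, // credential for your environment
      },
      body: JSON.stringify(req),
    },
  };
}

// At call time, the frontend would issue:
//   const call = buildChatCall(apiBase, apiKey, { message: "..." });
//   const res = await fetch(call.url, call.init);
```

Separating request construction from the `fetch` call itself, as above, also makes the frontend's API layer easy to unit test without a live backend.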
For guidance on designing and building the API layer that a custom frontend connects to, see the APIs and Functions section.
Compliance and Governance for User Interfaces
User-facing interfaces for AI applications in enterprise environments require specific compliance and governance considerations, particularly in regulated sectors.
Data Handling in the Interface
Conversational interfaces frequently receive sensitive information from users, either because users deliberately provide it as part of their query or because the nature of the use case involves sensitive subject matter. The interface configuration should reflect the data handling requirements that apply to the content passing through it. For deployments subject to GDPR or sector-specific data protection requirements, conversation data retention policies should be defined explicitly and aligned with your organisation's data processing agreements.
User Transparency
Users interacting with AI interfaces should understand that they are interacting with an AI system. Interface design and configuration should make this clear through appropriate labelling, introductory messaging, and any required disclosures that apply in your sector or jurisdiction. For interfaces operating in regulated sectors such as healthcare or financial services, the specific transparency requirements may be defined by applicable regulation or professional standards.
Conversation History and Data Retention
Conversation history retained by the interface is subject to data protection obligations in the same way as any other personal data processed by your organisation. Define retention periods for conversation history that are appropriate for the use case and compliant with applicable data protection requirements. Configure the interface to apply those retention periods automatically rather than retaining conversation data indefinitely.
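Applying a retention period automatically amounts to selecting conversations whose last activity predates a cutoff and purging them on a schedule. A minimal sketch of the selection logic, with an illustrative record shape:

```typescript
// Illustrative conversation record; the real stored shape depends on
// how your deployment persists history.
interface ConversationRecord {
  id: string;
  lastActiveMs: number; // epoch milliseconds of last activity
}

// Return the ids of conversations older than the retention period,
// ready for a scheduled purge job to delete.
function expiredConversations(
  records: ConversationRecord[],
  retentionDays: number,
  nowMs: number,
): string[] {
  const cutoff = nowMs - retentionDays * 24 * 60 * 60 * 1000;
  return records.filter((r) => r.lastActiveMs < cutoff).map((r) => r.id);
}
```

Passing `nowMs` in explicitly, rather than reading the clock inside the function, keeps the retention logic deterministic and testable; the retention period itself should come from your documented data protection policy, not from code.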
Access Controls
Every interface deployed through GLBNXT Platform should have access controls configured that restrict usage to authorised users. For internal deployments this typically means authentication through your organisation's identity provider. For client-facing deployments this means authentication through the client's own identity management or through access controls configured for the specific deployment.
Unrestricted access to AI interfaces that connect to sensitive data sources, execute agent workflows, or consume significant compute resources creates both security and operational risks. Access controls are not optional for production deployments.
Getting Started
The recommended starting point for frontend and webchat development on GLBNXT Platform is to deploy the managed chat interface first, validating the AI capability and user experience before investing in custom frontend development.
A practical first frontend deployment follows this sequence:
Configure the managed chat interface in the platform console, connecting it to the appropriate model endpoint and defining the system prompt for your use case
Extend the interface with retrieval or tool capabilities if your use case requires them
Test the interface with representative users from your target audience, gathering feedback on the conversational experience and the quality of AI outputs
Identify any interface requirements that cannot be met through configuration of the managed component
If custom frontend development is justified by the requirements identified, design the API layer that the custom frontend will connect to before beginning frontend implementation
Deploy the custom frontend, connecting it to the platform API layer and validating end-to-end behaviour before releasing to users
For guidance on connecting frontend interfaces to AI assistant architectures, see the AI Assistants and Chat Interfaces section. For guidance on the API layer that custom frontends connect to, see the APIs and Functions section.