Connecting n8n Workflows to OpenWebUI via Function

This guide walks you through connecting any n8n AI workflow to OpenWebUI, so your team can chat with it like a regular AI assistant. No coding experience is required - you will copy a ready-made script and change one setting.


Why would I do this?

n8n lets you build powerful AI workflows - for example, a document search agent that answers questions based on your company files. By connecting that workflow to OpenWebUI, your team gets a familiar chat interface to interact with it. They simply type a question and get an answer, without ever needing to open n8n.


Before you start

Make sure you have the following in place:

  • A working n8n workflow with a Webhook node and a Respond to Webhook node. This is the workflow your users will chat with.

  • Access to OpenWebUI with admin permissions (you need to be able to add Functions).

  • The Webhook URL from your n8n workflow. You can find this by opening the Webhook node and clicking "Production URL."


Step 1 - Prepare your n8n workflow

Your n8n workflow needs two things to accept messages from OpenWebUI:

A Webhook node

This is the "front door" of your workflow. It receives the chat message from OpenWebUI. Make sure the HTTP Method is set to POST.

A Respond to Webhook node

This is what sends the answer back. It should be placed at the end of your workflow, after your AI Agent has generated a response.

The critical setting

Open your Webhook node and find the Respond dropdown. Set it to:

Using 'Respond to Webhook' Node

If this is set to "Immediately," OpenWebUI will get an empty response before your AI Agent has had time to process the question. This is the most common mistake.

Copy your Webhook URL

Click Production URL at the top of the Webhook node and copy the full URL. It will look something like `https://your-n8n-instance.com/webhook/abc123` (your domain and path will differ).

You will need this in the next step.
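Before wiring anything into OpenWebUI, it can save time to confirm the webhook itself responds. The sketch below assumes the payload field names used by the template (`chatInput` and `sessionId` — confirm against your own script), and the URL is a placeholder you must replace:

```python
import json
import urllib.request

# Placeholder - replace with your own Production URL from the Webhook node.
WEBHOOK_URL = "https://your-n8n-instance.example/webhook/your-path"

def build_payload(question: str, session_id: str) -> bytes:
    """Package a chat message the way the pipe template is assumed to."""
    return json.dumps({"chatInput": question, "sessionId": session_id}).encode()

def ask_workflow(question: str, session_id: str = "smoke-test") -> str:
    """Send one question to the n8n webhook and return the raw response body."""
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=build_payload(question, session_id),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return resp.read().decode()

# Example (requires a live, active workflow):
# print(ask_workflow("What is our refund policy?"))
```

If this returns an answer, the workflow side is set up correctly and any later problem is on the OpenWebUI side.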


Step 2 - Download the Pipe script

A "Pipe" is a small script that tells OpenWebUI how to talk to your n8n workflow. GLBNXT provides a ready-made template that works out of the box.

  1. Go to the GLBNXT Sandbox.

  2. Find the file called n8n OpenWebUI Pipe (or a similar name in the templates section).

  3. Download it to your computer.

The file is a .py (Python) file. You do not need to understand Python to use it - you only need to change one setting inside it.

💡 Tip: You can also ask any basic LLM to generate a pipe script for you. Simply describe what your n8n workflow does, share the webhook URL, and ask it to write an OpenWebUI pipe function. GLBNXT agents can do this as well.


Step 3 - Update the Webhook URL in the script

Open the downloaded pipe script in any text editor (Notepad, TextEdit, or VS Code all work fine). Look for the line near the top that defines the webhook URL - it contains a placeholder URL between quotes.

Replace the URL between the quotes with the Production URL you copied in Step 1. Save the file.
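If you are unsure what you are looking at, the template will be structured roughly like the sketch below. The names (`n8n_webhook_url`, `timeout`) are illustrative and may differ in your copy; real OpenWebUI templates define the Valves with pydantic, replaced here with a plain class so the sketch runs standalone:

```python
import json
import urllib.request

class Pipe:
    """Minimal sketch of an OpenWebUI pipe (real templates use pydantic Valves)."""

    class Valves:
        def __init__(self):
            # This is the line you edit in Step 3 - paste your Production URL here.
            self.n8n_webhook_url = "https://your-n8n-instance.example/webhook/your-path"
            self.timeout = 120  # seconds to wait for n8n before giving up

    def __init__(self):
        self.valves = self.Valves()

    def pipe(self, body: dict) -> str:
        # The last user message becomes chatInput; the chat ID becomes sessionId.
        messages = body.get("messages", [])
        question = messages[-1]["content"] if messages else ""
        payload = json.dumps(
            {"chatInput": question, "sessionId": body.get("chat_id", "default")}
        ).encode()
        req = urllib.request.Request(
            self.valves.n8n_webhook_url,
            data=payload,
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req, timeout=self.valves.timeout) as resp:
            return resp.read().decode()
```

Whatever your template looks like, the only edit you need to make in this step is the URL string.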


Step 4 - Add the Pipe to OpenWebUI

  • Open OpenWebUI and log in as an admin.

  • Go to Admin Panel → Functions.

  • Click the "+" button to add a new function.

  • Paste the entire contents of the pipe script into the editor.

  • Click Save.

The function will now appear in your list with the name defined in the script (for example, "Breda RAG" or whatever you have named it).


Step 5 - Create a model that uses the Pipe

Now you need to tell OpenWebUI to use your new pipe as a "model" that people can chat with.

  • Go to Workspace → Models.

  • Click Create a model.

  • Give it a clear name, for example: "Document Search" or "Company Knowledge Base."

  • Under the "Select a base model" configuration, select your pipe function as the backend.

  • Save.

Your team can now select this model from the model dropdown in any new chat and start asking questions.


Step 6 - Test it

  1. Open a new chat in OpenWebUI.

  2. Select the model you just created from the dropdown.

  3. Type a question and press Enter.

  4. You should see the response from your n8n AI workflow appear in the chat.

If it works, you are done. Your n8n workflow is now available as a chat assistant in OpenWebUI.


Adjusting settings after setup

After saving the pipe function, you can adjust settings without editing the script directly:

  1. Go to Admin Panel → Functions.

  2. Click the gear icon next to your pipe function.

  3. Here you can change the Webhook URL and the Timeout (how long OpenWebUI waits for a response before giving up).

The default timeout is 120 seconds. If your workflow processes large documents, you may want to increase this.


Troubleshooting

"The workflow returned an empty response"

Your Webhook node's Respond setting is probably set to "Immediately." Change it to "Using 'Respond to Webhook' Node" as described in Step 1.

"Could not reach the n8n webhook"

  • Check that your n8n instance is running.

  • Verify the Webhook URL is correct (Production URL, not Test URL).

  • Make sure the workflow is active (toggled on) in n8n.

"n8n returned an error: 404"

The webhook path does not exist. Double-check the URL and make sure the workflow is active.

"n8n returned an error: 500"

Something went wrong inside the workflow itself. Open n8n, go to the workflow's Executions tab, and check the most recent execution for error details.

The response takes very long

AI workflows that search through many documents can take time. You can increase the timeout in the pipe settings (the gear icon in Admin → Functions). You can also optimize your n8n workflow by reducing the number of documents retrieved or adjusting chunk sizes.


How it works (optional background)

For those who want to understand what happens behind the scenes:

  1. A user types a message in OpenWebUI.

  2. The pipe script packages the message as chatInput along with a sessionId (derived from the chat ID, so conversation history is maintained).

  3. It sends this as a POST request to your n8n webhook.

  4. n8n receives the message, runs it through your AI Agent (with tools like vector search, reranking, or document retrieval), and generates an answer.

  5. The Respond to Webhook node sends the answer back to the pipe.

  6. The pipe displays the answer in the OpenWebUI chat.

The user never interacts with n8n directly. To them, it feels like chatting with any other AI model.
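Concretely, assuming the template's default field names, the bodies exchanged in steps 2 to 5 look like this (all values are illustrative, and the response shape is defined by whatever your Respond to Webhook node returns):

```python
import json

# Steps 2-3: what the pipe POSTs to the n8n webhook.
request_body = {
    "chatInput": "What does our leave policy say about parental leave?",
    "sessionId": "chat-7f3a",  # derived from the OpenWebUI chat ID
}

# Step 5: a typical Respond to Webhook body. n8n workflows often wrap the
# agent's answer in an "output" field, but your workflow controls the shape.
response_body = {"output": "Parental leave is covered in section 4.2 ..."}

# Step 6: the pipe extracts the answer field and shows it in the chat,
# falling back to the raw JSON if the expected field is missing.
answer = response_body.get("output", json.dumps(response_body))
```

Because the `sessionId` stays constant for a given chat, an n8n memory node keyed on it can keep conversation history across turns.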


Building your own pipe scripts

The template provided in the Sandbox covers the most common setup. However, if your workflow has a different input/output format, you may need a customised version.

You have several options:

  • Ask a basic LLM (like ChatGPT, Claude, or any model available on your GLBNXT platform) to write or modify a pipe script for you. Simply describe your webhook URL, what fields your workflow expects, and what format the response comes in.

  • Use a GLBNXT agent to generate the script. Agents on the GLBNXT platform are familiar with n8n and OpenWebUI patterns and can produce working pipe scripts quickly.

  • Modify the template yourself. The key parts to change are the webhook URL, the field names in the payload (chatInput, sessionId), and the response parsing logic.
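For the response-parsing part, a defensive parser along these lines (the `output` field name is an assumption, not taken from the template) copes with the shapes n8n commonly returns - a plain object, a list of objects, or raw text:

```python
import json

def parse_n8n_response(raw: str) -> str:
    """Best-effort extraction of the answer from an n8n webhook response.

    Handles three common shapes: {"output": ...}, [{"output": ...}], or
    plain text. Change the field name to match your Respond to Webhook node.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return raw  # not JSON - treat the whole body as the answer
    if isinstance(data, list) and data:
        data = data[0]  # n8n sometimes returns a one-element list of items
    if isinstance(data, dict):
        return str(data.get("output", data))
    return str(data)
```

Dropping a function like this into your pipe means a small change in the workflow's output format degrades gracefully instead of producing an empty chat message.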


Summary

  1. Set your n8n Webhook to respond using the Respond to Webhook node.

  2. Download the pipe script template from the GLBNXT Sandbox.

  3. Paste your n8n Production Webhook URL into the script.

  4. Add the script as a Function in OpenWebUI (Admin → Functions).

  5. Create a model in OpenWebUI that uses the pipe.

  6. Test by chatting with the new model.


Need help? Contact the GLBNXT support team or ask a GLBNXT agent to walk you through the setup.
