Run the n8n Demo
Spin up a fully configured demo on your machine — n8n with OpenBox governance pre-wired, a sample workflow, and a local LLM via Ollama.
Prerequisites
- Docker Desktop running
- Ollama installed with at least one model pulled:
ollama pull llama3.2
ollama serve
- An OpenBox API key. Register an agent in the OpenBox Dashboard to obtain one.
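If you want to confirm these prerequisites from a terminal first, a quick check can help. This is an optional sketch, not part of the demo: port 11434 is Ollama's default API port, and the model name matches the pull command above.

```shell
#!/bin/sh
# Optional prerequisite check. Prints one status line per dependency
# and always exits 0, so it never blocks the demo.

check() {
  name=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "[ok] $name"
  else
    echo "[missing] $name"
  fi
}

check "Docker daemon"  docker info
check "Ollama CLI"     command -v ollama
check "Ollama API"     curl -sf --max-time 2 http://localhost:11434/api/tags
check "llama3.2 model" sh -c "ollama list | grep -q llama3.2"

echo "prerequisite check complete"
```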
Clone and Configure
git clone https://github.com/OpenBox-AI/n8n-openbox-poc.git
cd n8n-openbox-poc
Copy .env.example to .env and set your API key:
.env
OPENBOX_API_KEY=obx_live_your_key_here
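The copy-and-edit step can also be scripted. A minimal sketch, assuming keys start with the obx_ prefix shown above (the real key format is not documented here):

```shell
#!/bin/sh
# Sketch: create .env from the example file, then warn if the
# API key still looks unset. The obx_ prefix check is an assumption
# based on the example key above.

setup_env() {
  # Don't overwrite an existing .env
  [ -f .env ] || cp .env.example .env 2>/dev/null
}

key_looks_valid() {
  # Reads OPENBOX_API_KEY from the given env file; succeeds if it starts with obx_
  key=$(sed -n 's/^OPENBOX_API_KEY=//p' "$1" 2>/dev/null)
  case "$key" in
    obx_*) return 0 ;;
    *)     return 1 ;;
  esac
}

setup_env
if key_looks_valid .env; then
  echo "OPENBOX_API_KEY looks set"
else
  echo "warning: set OPENBOX_API_KEY in .env before starting"
fi
```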
Start
macOS / Linux:
./start.sh
Windows:
start.bat
This starts n8n and Postgres via Docker, installs the OpenBox node, creates a demo account, and activates the showcase workflow. The first run takes about a minute.
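Once the start script finishes, a small smoke test can confirm the stack is reachable. This is an optional sketch; it only probes the n8n URL used below and the Docker Compose services, and always exits cleanly.

```shell
#!/bin/sh
# Optional smoke test after start.sh / start.bat. Prints one line
# per probe and always exits 0.

probe() {
  label=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "[up] $label"
  else
    echo "[down] $label"
  fi
}

probe "n8n UI (http://localhost:5678)" curl -sf --max-time 3 http://localhost:5678
probe "compose services"               docker compose ps
echo "smoke test done"
```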
Open n8n and Chat
Navigate to http://localhost:5678 and log in:
- Email: admin@openbox.ai
- Password: Openbox123
Open the OpenBox SDK Showcase workflow and click Chat. Each message passes through full OpenBox governance before reaching the LLM.
See It in the Dashboard
Open the OpenBox Dashboard:
- Navigate to Agents → click your agent
- On the Overview tab, find the session that corresponds to your workflow run
- Click Details to open the Event Log Timeline
- Scroll through the timeline — you'll see every governance stage:
- Pre-call governance decision on your input
- The LLM call with request and response
- Post-call governance decision on the output
- The final verdict returned to n8n
What Just Happened?
When you chatted with the agent, the OpenBox govern() function:
- Checked the input before the LLM call — your message was evaluated against configured guardrails and policies (PII detection, toxicity filtering, banned terms)
- Ran your LLM call — the governed input was passed to the Ollama model
- Checked the output after the LLM call — the LLM response was evaluated against post-call guardrails and policies
- Recorded a governance decision for every stage — giving you a complete audit trail in the dashboard
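As a rough illustration of that sequence only, not the actual OpenBox implementation: in the sketch below a banned-term grep stands in for the server-side guardrails, and the LLM call uses Ollama's documented /api/generate endpoint.

```shell
#!/bin/sh
# Illustrative pre-call / call / post-call sequence. The real policy
# checks and audit logging happen in the OpenBox platform; this grep
# is only a stand-in so the control flow is visible.

BANNED='password|ssn|credit card'

pre_check() {
  # Block (return 1) if the user input matches a banned term.
  printf '%s' "$1" | grep -qiE "$BANNED" && return 1
  return 0
}

call_llm() {
  # Ollama's documented non-streaming generate endpoint.
  curl -sf http://localhost:11434/api/generate \
    -d "{\"model\":\"llama3.2\",\"prompt\":\"$1\",\"stream\":false}"
}

post_check() {
  # Block (return 1) if the model output matches a banned term.
  printf '%s' "$1" | grep -qiE "$BANNED" && return 1
  return 0
}

govern_demo() {
  input=$1
  pre_check "$input"   || { echo "verdict: blocked (pre-call)"; return 1; }
  output=$(call_llm "$input") || { echo "verdict: llm error"; return 1; }
  post_check "$output" || { echo "verdict: blocked (post-call)"; return 1; }
  echo "verdict: allowed"
}
```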
Reset
To tear down and restart from scratch:
docker compose down -v --rmi local
- -v removes volumes (database data, etc.)
- --rmi local removes images that were built locally, not pulled from a registry
Then run ./start.sh (macOS/Linux) or start.bat (Windows) again.
Next Steps
- Wrap an Existing n8n Workflow — Add governance to your own n8n workflow
- n8n Integration Guide — Deep dive into configuration, error handling, and the governance pipeline
- Configure Guardrails — Set up PII detection, toxicity filtering, and content classification