Troubleshooting
Common issues and solutions when integrating with OpenBox.
Worker Not Connecting to OpenBox
Check that your environment variables are set:
```bash
[ -n "$OPENBOX_URL" ] && echo "OPENBOX_URL is set" || echo "OPENBOX_URL is NOT set"
[ -n "$OPENBOX_API_KEY" ] && echo "OPENBOX_API_KEY is set" || echo "OPENBOX_API_KEY is NOT set"
```
Verify step by step:
- Confirm `OPENBOX_URL` and `OPENBOX_API_KEY` are set in the worker environment
- Start the worker and check logs for OpenBox initialization errors
- Trigger a workflow and confirm a session appears in the OpenBox dashboard
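The environment check above can also be run from Python before starting the worker; a minimal sketch (the variable names are the two from the checklist):

```python
import os

def check_env(required=("OPENBOX_URL", "OPENBOX_API_KEY")):
    """Return the names of required variables that are missing or empty."""
    return [name for name in required if not os.getenv(name)]

missing = check_env()
print("Missing:", ", ".join(missing) if missing else "none -- all set")
```

This catches the common case where variables are set in your shell but not exported to the worker's environment.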
No Sessions in Dashboard
If sessions don't appear after running a workflow:
- Ensure the worker is running and connected to OpenBox (check for `OpenBox SDK initialized successfully` in the worker logs)
- Confirm the workflow completed by checking the Temporal UI at http://localhost:8233
- Verify the API key matches the agent registered in OpenBox
Governance Blocks or Stops Your Agent
When a behavioral rule or policy triggers, the SDK raises a `GovernanceStop` exception. This is expected: it means governance is working.
To investigate:
- Open the OpenBox Dashboard
- Go to your agent → Overview tab
- Open the session to see which rule triggered the block
See Error Handling for how to handle GovernanceStop and other trust exceptions in your code.
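Handling the stop in a worker can be sketched as below. The `GovernanceStop` class here is a stand-in so the snippet is self-contained; in real code, import the actual exception from the OpenBox SDK (see Error Handling for the import path and exception hierarchy):

```python
# Stand-in for the SDK's GovernanceStop exception (illustrative only;
# import the real one from the OpenBox SDK in your worker).
class GovernanceStop(Exception):
    def __init__(self, rule: str = "unknown"):
        super().__init__(f"blocked by rule: {rule}")
        self.rule = rule

def run_step(action):
    """Run one agent step; treat a governance stop as an expected outcome."""
    try:
        return action()
    except GovernanceStop as stop:
        # Expected when a behavioral rule triggers: log it and end the step
        # cleanly rather than treating it as a crash.
        print(f"Governance stopped the agent ({stop.rule}); check the session in the dashboard")
        return None
```

The key point is that a governance stop is a normal control-flow outcome, not an error to retry.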
Approval Requests Not Appearing
If your agent is paused waiting for approval but nothing shows in the Approvals page:
- Confirm the behavioral rule is set to Require Approval (not Block)
- Check that the agent's trust tier matches the rule conditions
- Verify the approval timeout hasn't already expired
See Approvals for how the approval queue works.
LLM API Errors
The demo uses LiteLLM for model routing. The `LLM_MODEL` format is `provider/model-name`.
Common models:
| Provider | Example LLM_MODEL value |
|---|---|
| OpenAI | openai/gpt-4o |
| Anthropic | anthropic/claude-sonnet-4-5-20250929 |
| Google AI | gemini/gemini-2.0-flash |
If you're seeing LLM errors, check that `LLM_MODEL` and `LLM_KEY` are set correctly in your `.env`.
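A quick way to catch a malformed model string before it reaches LiteLLM is to validate the `provider/model-name` shape up front (the function name here is illustrative):

```python
def split_model(llm_model: str) -> tuple[str, str]:
    """Split a LiteLLM-style 'provider/model-name' string; raise if malformed."""
    provider, sep, model = llm_model.partition("/")
    if not sep or not provider or not model:
        raise ValueError(f"expected 'provider/model-name', got {llm_model!r}")
    return provider, model

print(split_model("openai/gpt-4o"))  # ('openai', 'gpt-4o')
```

A bare model name like `gpt-4o` fails this check, which is the most common cause of provider-routing errors.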
To test your LLM configuration, run this from your project directory:
Using uv:

```bash
uv run python3 -c "
import os
from dotenv import load_dotenv
load_dotenv()
from litellm import completion
response = completion(
    model=os.getenv('LLM_MODEL'),
    api_key=os.getenv('LLM_KEY'),
    messages=[{'role': 'user', 'content': 'test'}]
)
print(response.choices[0].message.content)
"
```

Using pip (venv):

```bash
# Activate your virtual environment first
# source .venv/bin/activate
python3 -c "
import os
from dotenv import load_dotenv
load_dotenv()
from litellm import completion
response = completion(
    model=os.getenv('LLM_MODEL'),
    api_key=os.getenv('LLM_KEY'),
    messages=[{'role': 'user', 'content': 'test'}]
)
print(response.choices[0].message.content)
"
```
Use the `LLM_MODEL` and `LLM_KEY` values from your `.env`. See the LiteLLM providers list for supported models and formats.
Temporal Server Not Running
If the worker can't connect to Temporal:
```
Connection refused: localhost:7233
```

Start the Temporal dev server:

```bash
temporal server start-dev
```
The Temporal UI will be available at http://localhost:8233.
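If you want to check reachability from Python (for example, in a worker startup script), a plain TCP probe against the Temporal frontend port is enough; a minimal sketch, assuming the default `localhost:7233` from above:

```python
import socket

def temporal_reachable(host="localhost", port=7233, timeout=2.0):
    """Return True if a TCP connection to the Temporal frontend succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("Temporal reachable:", temporal_reachable())
```

A `False` here with the dev server supposedly running usually means it was started on a non-default port or bound to a different interface.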