Troubleshooting

Common issues and solutions when integrating with OpenBox.


Worker Not Connecting to OpenBox

Check that your environment variables are set:

[ -n "$OPENBOX_URL" ] && echo "OPENBOX_URL is set" || echo "OPENBOX_URL is NOT set"
[ -n "$OPENBOX_API_KEY" ] && echo "OPENBOX_API_KEY is set" || echo "OPENBOX_API_KEY is NOT set"

Verify step by step:

  1. Confirm OPENBOX_URL and OPENBOX_API_KEY are set in the worker environment
  2. Start the worker and check logs for OpenBox initialization errors
  3. Trigger a workflow and confirm a session appears in the OpenBox dashboard
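Step 1 above can be turned into a small preflight check that fails fast before the worker starts. This is an illustrative sketch (the variable names come from this guide; the exit behavior is a suggestion, not part of the SDK):

```python
import os
import sys

REQUIRED = ("OPENBOX_URL", "OPENBOX_API_KEY")

def missing_vars(env=os.environ):
    # Return the names of required variables that are unset or empty.
    return [name for name in REQUIRED if not env.get(name)]

if __name__ == "__main__":
    missing = missing_vars()
    if missing:
        print("Missing:", ", ".join(missing))
        sys.exit(1)
    print("Worker environment looks good")
```

Running this before the worker makes a missing key an immediate, obvious failure instead of a silent connection problem later.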

No Sessions in Dashboard

If sessions don't appear after running a workflow:

  1. Ensure the worker is running and connected to OpenBox (look for "OpenBox SDK initialized successfully" in the worker logs)
  2. Confirm the workflow completed — check the Temporal UI at http://localhost:8233
  3. Verify the API key matches the agent registered in OpenBox

Governance Blocks or Stops Your Agent

When a behavioral rule or policy triggers, the SDK raises a GovernanceStop exception. This is expected — it means governance is working.

To investigate:

  1. Open the OpenBox Dashboard
  2. Go to your agent → Overview tab
  3. Open the session to see which rule triggered the block

See Error Handling for how to handle GovernanceStop and other trust exceptions in your code.
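A minimal handling pattern looks like the following. The real import path for GovernanceStop is not shown here (see the Error Handling page), so this sketch uses a stand-in exception class to stay self-contained; `run_step` is likewise a hypothetical agent action:

```python
# Stand-in for the SDK's GovernanceStop exception; replace this class
# with the real import from the OpenBox SDK (path not assumed here).
class GovernanceStop(Exception):
    """Raised when a behavioral rule or policy blocks the agent."""

def run_step(action):
    # Hypothetical agent step that governance may block.
    if action == "forbidden":
        raise GovernanceStop("blocked by rule: no-forbidden-actions")
    return f"did {action}"

def run_safely(action):
    # Treat a governance stop as an expected outcome: record it and continue,
    # rather than letting it crash the worker.
    try:
        return run_step(action)
    except GovernanceStop as stop:
        return f"stopped: {stop}"
```

The key point is that GovernanceStop is control flow, not a crash: catch it, log which rule fired, and decide whether to retry, skip, or escalate.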


Approval Requests Not Appearing

If your agent is paused waiting for approval but nothing shows up on the Approvals page:

  1. Confirm the behavioral rule is set to Require Approval (not Block)
  2. Check that the agent's trust tier matches the rule conditions
  3. Verify the approval timeout hasn't already expired

See Approvals for how the approval queue works.


LLM API Errors

The demo uses LiteLLM for model routing. The LLM_MODEL format is provider/model-name.

Common models:

Provider     Example LLM_MODEL value
OpenAI       openai/gpt-4o
Anthropic    anthropic/claude-sonnet-4-5-20250929
Google AI    gemini/gemini-2.0-flash

If you're seeing LLM errors, check that LLM_MODEL and LLM_KEY are correct in your .env.
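A common source of LLM errors is a malformed LLM_MODEL value. A quick format check like the one below catches obvious typos; it is illustrative only, since LiteLLM accepts more formats than the strict provider/model-name shape shown here:

```python
def looks_like_litellm_model(value):
    # Expect "provider/model-name": a non-empty provider, a slash,
    # and a non-empty model name after it.
    if not value or "/" not in value:
        return False
    provider, _, model = value.partition("/")
    return bool(provider) and bool(model)

if __name__ == "__main__":
    import os
    print("LLM_MODEL ok:", looks_like_litellm_model(os.getenv("LLM_MODEL", "")))
```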

To test your LLM configuration, run this from your project directory:

uv run python3 -c "
import os
from dotenv import load_dotenv
load_dotenv()

from litellm import completion

response = completion(
    model=os.getenv('LLM_MODEL'),
    api_key=os.getenv('LLM_KEY'),
    messages=[{'role': 'user', 'content': 'test'}],
)
print(response.choices[0].message.content)
"

Use the LLM_MODEL and LLM_KEY values from your .env. See the LiteLLM providers list for supported models and formats.


Temporal Server Not Running

If the worker can't connect to Temporal, you'll see an error like:

Connection refused: localhost:7233

Start the Temporal dev server:

temporal server start-dev

The Temporal UI will be available at http://localhost:8233.
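To confirm the server is actually up, you can check whether anything is listening on the Temporal gRPC port (7233 is the dev-server default used in this guide):

```python
import socket

def port_open(host, port, timeout=1.0):
    # True if a TCP connection to host:port succeeds within the timeout.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("Temporal reachable:", port_open("localhost", 7233))
```

If this prints False after starting the dev server, check for another process holding the port or a non-default port in your configuration.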