This guide walks you through starting the Agent Manager backend, executing your first agent run, and launching the UI. By the end you will have a working local environment and a real API response in hand.

## Documentation Index
Fetch the complete documentation index at: https://operativusai.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
## Prerequisites
Before you start, make sure you have the following:

- Java 21 or later — confirm with `java -version`. `JAVA_HOME` must be set.
- Docker Desktop — running and accessible. It is required for PostgreSQL and the Python code-execution sandbox.
- An LLM API key — at least one of `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, or `GOOGLE_API_KEY`.
- Node.js 20 or later — only needed for the UI.
## Start the backend
### Export your API key
Set at least one provider key in your shell. Agent Manager activates whichever providers have valid keys on startup.
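For example, in a Bash or Zsh shell (the key values below are placeholders — substitute your own):

```shell
# Set at least one provider key; Agent Manager activates any provider
# that has a valid key at startup. Replace the placeholder value.
export OPENAI_API_KEY="sk-your-key-here"

# Optional alternatives:
# export ANTHROPIC_API_KEY="sk-ant-your-key-here"
# export GOOGLE_API_KEY="your-key-here"
```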
### Start the database
From the backend directory, bring up PostgreSQL. This starts PostgreSQL on port `5432`; wait a few seconds for it to be ready before proceeding.

## Make your first API call
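A typical invocation, assuming a Docker Compose file in the backend directory with a service named `postgres` (the service name is an assumption — check your compose file):

```shell
# Start only the PostgreSQL service in the background.
docker compose up -d postgres

# Optionally confirm the database is accepting connections:
docker compose exec postgres pg_isready -U postgres
```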
Once the server is running, send a synchronous run request to one of the seeded agents. The endpoint is `POST /api/agents/{agentId}/runs`.

The `session_id` field is optional. If you omit it, the server generates a new UUID for you and includes it in the response — use it in subsequent requests to continue the same conversation.
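A minimal request might look like this. The agent id `research-agent` and the request body shape are assumptions for illustration — check your seeded agents and the API reference for the exact schema:

```shell
# POST a synchronous run to a seeded agent (agent id is a placeholder).
PAYLOAD='{"message": "Hello! What can you help me with?"}'

curl -s -X POST "http://localhost:8080/api/agents/research-agent/runs" \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" \
  || echo "Request failed -- is the backend running on port 8080?"
```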
## Understand the response
A successful run returns a `RunResponse` object:
| Field | Description |
|---|---|
| `runId` | Unique identifier for this execution. |
| `sessionId` | Conversation context. Pass this back in the next request to continue the session. |
| `content` | The agent’s final answer in Markdown. |
| `status` | `COMPLETED`, `FAILED`, `PAUSED`, or `CANCELLED`. |
| `tools` | Ordered list of tool calls the agent made, with inputs and outputs. |
| `reasoningSteps` | The agent’s internal reasoning trace, captured before the final answer. |
| `metadata` | Model name and token usage for this run. |
| `metrics` | Execution timing and other per-run telemetry. |
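To continue a conversation, you can pull `sessionId` out of the response and send it back as `session_id` on the next request. A sketch, using illustrative response values and Python's stdlib JSON parser:

```shell
# A trimmed-down sample RunResponse (values are illustrative, not real output).
RESPONSE='{"runId":"r-123","sessionId":"s-456","content":"Hi there!","status":"COMPLETED"}'

# Extract the session id from the JSON body.
SESSION_ID=$(echo "$RESPONSE" | python3 -c 'import sys, json; print(json.load(sys.stdin)["sessionId"])')
echo "$SESSION_ID"

# Reuse it to continue the same conversation, e.g.:
# curl ... -d "{\"message\": \"Follow-up question\", \"session_id\": \"$SESSION_ID\"}"
```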
## Start the UI
The Agent Manager UI runs as a separate Node.js development server. Open a new terminal, navigate to the UI directory, and start the dev server; it serves the UI at http://localhost:5173. The UI connects to the backend at http://localhost:8080/api by default.
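Assuming a standard npm setup (the script name is an assumption — check the UI directory's `package.json`):

```shell
# Install dependencies, then start the development server.
npm install
npm run dev
```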
From the UI you can:
- Chat with any registered agent and see streaming tokens and reasoning steps in real time.
- Browse and configure agents in the registry.
- Upload PDFs or trigger URL ingestion for the knowledge base.
- Inspect session history and long-term user memories.
- Approve or reject paused HITL runs with a single click.
## Next steps
- **Key concepts**: Learn what agents, runs, sessions, and memory mean in Agent Manager.
- **Multi-agent teams**: Coordinate multiple agents using Coordinator or Router orchestration.
- **Knowledge base**: Ingest documents and let agents search them with RAG.
- **API reference**: Full request and response documentation for every endpoint.