
Agent Manager is built around a small set of concepts that compose into complex agentic systems. Understanding them makes it easier to design agents, debug runs, and build reliable workflows. This page defines each concept and explains how they relate to one another.

Agent

An agent is an LLM instance configured with a model, a system prompt, and a set of tools and knowledge sources. Each agent has a unique agentId that you use in API calls. Agents are registered in the agent registry and persisted in the database. On startup, Agent Manager seeds three default agents:
  • procurator_assistant: Expert on the Operativus framework. Knows the Agno documentation.
  • finance_agent: Retrieves live stock prices.
  • web_agent: Performs general web searches.
You can define additional agents through the UI or the API. An agent can be a standard single-agent or a team (see below).
Agents cannot reach the host file system or network directly. Code execution happens inside ephemeral Docker containers with zero network access by default.

Run

A run is a single execution of an agent against a message. When you call POST /api/agents/{agentId}/runs, Agent Manager creates a run, processes the message through the agent’s advisor chain, calls any required tools, and returns a RunResponse. Every run has a runId and a status:
  • RUNNING: The agent is actively processing.
  • COMPLETED: The agent finished and returned a response.
  • FAILED: An unrecoverable error occurred during execution.
  • PAUSED: The run hit a HITL checkpoint and is waiting for human input.
  • CANCELLED: The run was cancelled before completion.
Runs can be executed three ways:
  • Synchronous (POST /api/agents/{agentId}/runs) — blocks until complete, returns the full RunResponse.
  • Streaming (POST /api/agents/{agentId}/runs/stream) — returns tokens as Server-Sent Events in real time.
  • Background (POST /api/agents/{agentId}/runs/background) — queues the run and returns a runId immediately. Poll GET /api/agents/{agentId}/runs/{runId}/status to check progress.
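The background mode implies a simple poll loop on the status endpoint. A minimal sketch in Python, assuming the status endpoint returns JSON with a status field; the fetch_status callable here is a stand-in for a real HTTP GET, so the example runs without a live server:

```python
import time

TERMINAL = {"COMPLETED", "FAILED", "CANCELLED"}

def wait_for_run(fetch_status, run_id, interval=2.0, timeout=120.0):
    """Poll GET /api/agents/{agentId}/runs/{runId}/status until the run
    reaches a terminal state or pauses for human input."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status(run_id)["status"]
        if status in TERMINAL or status == "PAUSED":
            return status
        time.sleep(interval)
    raise TimeoutError(f"run {run_id} still RUNNING after {timeout}s")

# Stand-in for the real HTTP call: reports RUNNING twice, then COMPLETED.
_statuses = iter(["RUNNING", "RUNNING", "COMPLETED"])
final = wait_for_run(lambda run_id: {"status": next(_statuses)},
                     "run-123", interval=0.01)
print(final)  # COMPLETED
```

A run that pauses at a HITL checkpoint is returned as PAUSED rather than treated as terminal, since it needs a human response before it can finish.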

Session

A session groups multiple runs into a conversation. It provides short-term episodic memory: the agent receives the message history from the session as context, so it can answer follow-up questions and refer back to earlier parts of the conversation. Pass a session_id in your run request to continue an existing session. If you omit it, a new session is created automatically and its ID is returned in the response.
{
  "message": "What was the stock price you just looked up?",
  "session_id": "550e8400-e29b-41d4-a716-446655440000"
}
Sessions are stored in the database and can be listed, inspected, and deleted via the Sessions API (/api/sessions).
Session memory is short-term and scoped to a single conversation thread. For facts that should persist across separate sessions, see Memory below.
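The two-turn pattern above can be sketched as payload construction: the first request omits session_id, and the follow-up reuses the ID returned by the server. The helper function is illustrative, not part of any official client:

```python
import json

def run_payload(message, session_id=None):
    """Build the body for POST /api/agents/{agentId}/runs. Omitting
    session_id makes the server create a fresh session."""
    body = {"message": message}
    if session_id is not None:
        body["session_id"] = session_id
    return json.dumps(body)

# Turn 1: no session_id, so the response will carry a newly created one.
first = run_payload("Look up the current AAPL price")
# Turn 2: reuse the session ID from the first response to keep context.
followup = run_payload("What was the stock price you just looked up?",
                       session_id="550e8400-e29b-41d4-a716-446655440000")
```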

Knowledge base

A knowledge base is a collection of documents (PDFs, plain text files, or scraped URLs) that have been processed into vector embeddings. Agents search the knowledge base using the search_knowledge_base tool when they need factual grounding — this pattern is called Retrieval-Augmented Generation (RAG). You can manage documents through the Knowledge API (/api/knowledge):
  • Upload a file: POST /api/knowledge/upload
  • Ingest a URL for a specific agent: POST /api/agents/{agentId}/knowledge/load
  • Search the vector store: GET /api/knowledge/search?query=...
  • Delete a document: DELETE /api/knowledge/{id}
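The endpoints above compose naturally into a thin client. A sketch, assuming nothing beyond the paths listed: the http parameter is any callable (method, path, **kwargs) -> dict, so a real HTTP library or a test stub can be plugged in:

```python
class KnowledgeClient:
    """Illustrative wrapper over the Knowledge API; not an official client."""

    def __init__(self, http, base="/api/knowledge"):
        self.http = http
        self.base = base

    def upload(self, filename, data):
        return self.http("POST", f"{self.base}/upload", files={filename: data})

    def search(self, query):
        return self.http("GET", f"{self.base}/search", params={"query": query})

    def delete(self, doc_id):
        return self.http("DELETE", f"{self.base}/{doc_id}")

# Record calls instead of hitting a live server.
calls = []
def fake_http(method, path, **kwargs):
    calls.append((method, path))
    return {"ok": True}

kb = KnowledgeClient(fake_http)
kb.search("what does the coordinator strategy do?")
kb.delete("doc-42")
```

Injecting the transport this way keeps the URL-building logic testable without a running Agent Manager instance.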

Memory

Memory is long-term semantic storage for user-specific facts. Unlike session history, memory persists across sessions. When a user mentions something important (“I prefer Python over JavaScript”), the agent can store that fact as a memory and retrieve it in future conversations, even in a different session. Memory is stored per-user in vector storage. You can manage it via the Memory API (/api/memories):
  • Add a fact manually: POST /api/memories
  • Search semantically: GET /api/memories?query=...
  • Optimize (consolidate and deduplicate): POST /api/memories/optimize
  • Inspect statistics: GET /api/memories/stats
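"Search semantically" means matching by meaning via vector similarity rather than by keywords. A toy illustration of what the vector store does under the hood; the three-number vectors are made up for the example, whereas the real system derives them from an embedding model:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "embeddings" for two stored memories.
memories = {
    "Prefers Python over JavaScript": [0.9, 0.1, 0.0],
    "Lives in Berlin":                [0.0, 0.2, 0.9],
}
# Toy embedding for the query "favourite programming language?".
query_vec = [0.8, 0.2, 0.1]

best = max(memories, key=lambda m: cosine(memories[m], query_vec))
print(best)  # Prefers Python over JavaScript
```

Because matching is done on vectors, the query never needs to share words with the stored fact.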

Team

A team is a group of agents that collaborate to answer a request. Teams are configured with one of two orchestration strategies:

Coordinator

A leader agent decomposes the request into subtasks and delegates each one to a specialist worker agent. Results are aggregated into a final answer.

Router

The router classifies the intent of the incoming message and forwards the entire request to the single most appropriate specialist agent.
Teams expose the same run API as standard agents. From the caller’s perspective, a team behaves like a single agent — you send a message to the team’s agentId and receive a unified response.
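The router strategy can be pictured as classify-then-dispatch. A toy sketch, with a keyword check standing in for the LLM-based intent classification:

```python
def route(message, classify, specialists):
    """Router strategy: classify the intent, then forward the entire
    message to the single most appropriate specialist agent."""
    return specialists[classify(message)](message)

# Stand-ins for real agent runs.
specialists = {
    "finance": lambda m: f"finance_agent answered: {m}",
    "web":     lambda m: f"web_agent answered: {m}",
}

# Toy classifier; the real router uses the LLM to classify intent.
classify = lambda m: "finance" if "price" in m else "web"

reply = route("What is the price of AAPL?", classify, specialists)
```

A coordinator team differs in that the leader would split the message into subtasks and call several specialists, then merge their answers.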

Workflow

A workflow is a multi-step pipeline of agent actions defined in advance. Each step in a workflow can call an agent, transform data, or apply conditions. Workflows are useful for repeatable, structured processes that go beyond a single prompt-and-response cycle. Workflows are managed via the Workflows API and can be triggered manually or on a schedule.
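The step semantics described above (call an agent, transform data, apply conditions) can be sketched as a conditional pipeline. Everything here is illustrative, not the actual workflow engine:

```python
def run_workflow(steps, payload):
    """Toy pipeline: each step is a (condition, action) pair. A step runs
    only when its condition holds; actions stand in for agent calls or
    data transformations."""
    for condition, action in steps:
        if condition(payload):
            payload = action(payload)
    return payload

steps = [
    # Transform step: pull a ticker symbol out of the message.
    (lambda p: True,          lambda p: p | {"ticker": p["message"].split()[-1]}),
    # Conditional agent step: only runs once a ticker was extracted
    # (the fixed price stands in for a finance_agent call).
    (lambda p: "ticker" in p, lambda p: p | {"price": 123.45}),
]

result = run_workflow(steps, {"message": "price of AAPL"})
```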

Human-in-the-Loop (HITL)

Human-in-the-Loop is a safety mechanism that pauses a run before the agent executes a sensitive tool call. When an agent attempts to call a tool marked as requiring approval (such as a database deletion), the framework raises an internal signal that transitions the run to PAUSED status. The run stays paused until you respond:
curl -X POST http://localhost:8080/api/agents/{agentId}/runs/{runId}/continue \
  -H "Content-Type: application/json" \
  -d '{"action": "APPROVE"}'
Send "action": "REJECT" to cancel the tool call and let the agent continue without executing it. The Agent Manager UI surfaces paused runs with Approve and Reject buttons so non-technical users can handle approvals without using the API directly.
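Programmatically, handling a paused run comes down to checking the status and posting the chosen action back. A sketch, where send_continue stands in for the POST .../continue call shown above:

```python
def resolve_checkpoint(status, send_continue, approved):
    """If the run is PAUSED at a HITL checkpoint, post APPROVE or REJECT
    to /api/agents/{agentId}/runs/{runId}/continue; otherwise do nothing."""
    if status != "PAUSED":
        return None
    action = "APPROVE" if approved else "REJECT"
    send_continue({"action": action})
    return action

# Record the outgoing body instead of making a real HTTP call.
sent = []
action = resolve_checkpoint("PAUSED", sent.append, approved=False)
print(action)  # REJECT
```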

MCP

Model Context Protocol (MCP) is an open standard for connecting external tools to LLM agents. Agent Manager includes a built-in MCP server that other tools and platforms can connect to.
  • GET /mcp/sse: MCP handshake; returns an SSE stream.
  • POST /mcp/messages: JSON-RPC message handler; follows the MCP specification.
MCP lets you expose Agent Manager’s agents as callable tools to external orchestrators, or integrate third-party tool servers into your agents without writing custom integrations.
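MCP traffic is plain JSON-RPC 2.0. As a sketch, the initialize request an external client might POST to /mcp/messages could look like the following; the params shape follows the MCP specification rather than anything specific to Agent Manager:

```python
import json

# JSON-RPC 2.0 envelope for an MCP initialize request.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        # Protocol version and capability negotiation per the MCP spec.
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}
payload = json.dumps(request)
```

The server's responses arrive on the SSE stream opened via GET /mcp/sse, which is why the handshake endpoint comes first.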