

Agent Manager is a backend runtime and UI for orchestrating LLM-based agents inside your own infrastructure. It connects to OpenAI, Anthropic, and Google models while keeping all conversation history, memory, and knowledge embeddings in a PostgreSQL instance you control. Nothing leaves your environment unless you configure it to.

Key capabilities

Streaming responses

Agents stream tokens back to clients in real time over Server-Sent Events. Internal reasoning steps (“thinking”) are surfaced separately from the final answer.
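A client separates the two channels by routing on the SSE event type. The following is a minimal sketch, assuming (hypothetically) the server emits `event: thinking` for reasoning steps and `event: token` for answer tokens, each with a JSON `data:` payload — the actual event names and payload shape depend on your deployment:

```python
import json

def parse_sse(stream_lines):
    """Split an SSE stream into (thinking, answer) text, routing by event type."""
    thinking, answer = [], []
    event = "message"  # SSE default event type
    for line in stream_lines:
        if line.startswith("event:"):
            event = line.split(":", 1)[1].strip()
        elif line.startswith("data:"):
            payload = json.loads(line.split(":", 1)[1].strip())
            target = thinking if event == "thinking" else answer
            target.append(payload.get("text", ""))
        elif line == "":
            event = "message"  # a blank line terminates the SSE event
    return "".join(thinking), "".join(answer)
```

In a real client the lines would come from an open HTTP response rather than a list, but the routing logic is the same.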

Human-in-the-Loop (HITL)

Sensitive tool calls pause a run automatically. You approve or reject the action before the agent continues — with a full audit trail.
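The pause-approve-resume flow can be pictured as a small state machine. This is a toy model, not the platform's implementation — the tool names and states here are illustrative:

```python
from enum import Enum

class RunState(Enum):
    RUNNING = "running"
    AWAITING_APPROVAL = "awaiting_approval"
    REJECTED = "rejected"

class Run:
    """Toy model of a run that pauses on sensitive tool calls."""
    SENSITIVE_TOOLS = {"send_email", "delete_record"}  # hypothetical tool names

    def __init__(self):
        self.state = RunState.RUNNING
        self.audit_log = []      # every decision is recorded
        self.pending_tool = None

    def request_tool(self, tool_name):
        if tool_name in self.SENSITIVE_TOOLS:
            self.state = RunState.AWAITING_APPROVAL
            self.pending_tool = tool_name
            self.audit_log.append(("paused", tool_name))
        else:
            self.audit_log.append(("executed", tool_name))

    def approve(self):
        self.audit_log.append(("approved", self.pending_tool))
        self.audit_log.append(("executed", self.pending_tool))
        self.pending_tool = None
        self.state = RunState.RUNNING

    def reject(self):
        self.audit_log.append(("rejected", self.pending_tool))
        self.pending_tool = None
        self.state = RunState.REJECTED
```

A non-sensitive tool executes immediately; a sensitive one parks the run until a human calls `approve()` or `reject()`, and every transition lands in the audit log.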

Retrieval-Augmented Generation

Upload PDFs and URLs into a pgvector knowledge base. Agents search it on demand using the search_knowledge_base tool, so answers stay grounded in your documents.
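Conceptually, the tool ranks stored chunks by vector similarity to the query embedding — the same idea pgvector's distance operators implement in SQL. A toy in-memory sketch (the real tool queries PostgreSQL, and embeddings have hundreds of dimensions, not two):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search_knowledge_base(query_vec, chunks, top_k=2):
    """Return the texts of the top_k (text, embedding) chunks closest to the query."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[1]), reverse=True)
    return [text for text, _ in ranked[:top_k]]
```

The retrieved chunks are then injected into the agent's context, which is what keeps answers grounded in your documents.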

Multi-agent teams

Compose agents into teams using Coordinator or Router orchestration. A coordinator delegates subtasks to worker agents; a router classifies intent and forwards each query to the best specialist.
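To make the Router pattern concrete, here is a deliberately naive sketch. A real router classifies intent with an LLM; this version uses keyword matching only so the control flow is visible, and the agent names are invented:

```python
def route(query, specialists):
    """Forward the query to the first specialist whose keywords match, else a fallback."""
    for keywords, agent in specialists:
        if any(kw in query.lower() for kw in keywords):
            return agent
    return "general-agent"  # fallback when no specialist matches

# Hypothetical specialist registry: (trigger keywords, agent name)
specialists = [
    ({"invoice", "refund"}, "billing-agent"),
    ({"error", "crash"}, "support-agent"),
]
```

A Coordinator team inverts this shape: instead of picking one specialist per query, the coordinator splits the task and fans subtasks out to several workers, then merges their results.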

Model Context Protocol (MCP)

Connect external tools to any agent using the MCP standard. The built-in MCP server exposes an SSE handshake and JSON-RPC message handler.
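MCP messages are JSON-RPC 2.0 envelopes. The sketch below builds a `tools/call` request (a real MCP method); the tool name and arguments are illustrative, and a real client would send this over the SSE-established channel rather than just serializing it:

```python
import json
from itertools import count

_ids = count(1)  # JSON-RPC requests need unique ids to correlate responses

def jsonrpc_request(method, params):
    """Build a JSON-RPC 2.0 request envelope as used by MCP."""
    return {"jsonrpc": "2.0", "id": next(_ids), "method": method, "params": params}

req = jsonrpc_request("tools/call", {
    "name": "search_knowledge_base",          # tool to invoke
    "arguments": {"query": "refund policy"},  # tool-specific arguments
})
wire = json.dumps(req)  # what actually goes over the message channel
```

The server replies with a response object carrying the same `id`, which is how the client matches results to in-flight tool calls.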

FinOps and observability

Track token spend per agent and per user. Prometheus metrics at /actuator/prometheus expose run counts, tool call rates, and model usage for your monitoring stack.
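Per-agent, per-user cost accounting boils down to multiplying token counts by model prices. A minimal sketch — the price table below is hypothetical, and real prices vary by model and provider:

```python
from collections import defaultdict

# Hypothetical (input, output) USD prices per 1K tokens — NOT real pricing.
PRICES = {"gpt-4o": (0.005, 0.015)}

class SpendTracker:
    """Accumulate token cost per (agent, user), as a FinOps dashboard would."""
    def __init__(self):
        self.spend = defaultdict(float)

    def record(self, agent, user, model, in_tokens, out_tokens):
        p_in, p_out = PRICES[model]
        cost = in_tokens / 1000 * p_in + out_tokens / 1000 * p_out
        self.spend[(agent, user)] += cost
        return cost
```

The Prometheus endpoint exposes the same kind of counters (runs, tool calls, model usage) so you can alert on spend from your existing monitoring stack.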

LLM providers

Agent Manager supports OpenAI, Anthropic, and Google out of the box. Set the corresponding API key and the platform activates that provider automatically on startup — no code changes required.
Provider     Environment variable
OpenAI       OPENAI_API_KEY
Anthropic    ANTHROPIC_API_KEY
Google       GOOGLE_API_KEY
You can have multiple providers active simultaneously. Each agent is configured with a specific model; the runtime resolves which provider to call based on that model name.
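That resolution step can be sketched as a prefix lookup guarded by the corresponding API key. The prefix table below is illustrative — the real mapping lives inside the runtime:

```python
import os

# Illustrative model-name prefixes; the runtime's actual table may differ.
PREFIXES = {"gpt-": "openai", "claude-": "anthropic", "gemini-": "google"}
ENV_KEYS = {"openai": "OPENAI_API_KEY",
            "anthropic": "ANTHROPIC_API_KEY",
            "google": "GOOGLE_API_KEY"}

def resolve_provider(model, env=None):
    """Pick the provider for a model name, requiring its API key to be set."""
    env = os.environ if env is None else env
    for prefix, provider in PREFIXES.items():
        if model.startswith(prefix):
            if ENV_KEYS[provider] not in env:
                raise RuntimeError(f"{ENV_KEYS[provider]} not set for model {model}")
            return provider
    raise ValueError(f"unknown model: {model}")
```

Because the lookup is per-model, agents on different providers coexist in one deployment without any routing configuration.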

Private by design

All conversation history, session memory, long-term facts, and vector embeddings are stored exclusively in your own PostgreSQL database. Agent Manager does not transmit user data to any external observability or analytics platform by default.
Code execution requested by agents runs inside ephemeral Docker containers with no network access and no access to the host file system. PII (emails, phone numbers) is redacted from all prompts before they are forwarded to an LLM. Every run and tool call is logged to the database for compliance and audit purposes.
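The redaction step is pattern-based substitution over the prompt text. A simplified sketch — the patterns below are assumptions for illustration, narrower than what a production redactor would use:

```python
import re

# Simplified patterns: real PII detection covers more formats and locales.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE = re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?(?:\(\d{3}\)|\d{3})[\s.-]?\d{3}[\s.-]?\d{4}\b")

def redact(text):
    """Replace emails and phone numbers with placeholders before prompting an LLM."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text
```

Redacting before the provider call means raw PII never leaves your environment, even though the model still sees enough context to answer.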

Components

Agent Manager ships as two components:
  • Backend — handles agent execution, session management, knowledge ingestion, memory storage, and the REST API. Start it with ./mvnw spring-boot:run.
  • UI — a single-page application at http://localhost:5173. Provides a chat interface, agent registry browser, knowledge upload center, and session/memory inspector. Start it with npm run dev.
Both components are required for the full experience, though the backend can be used independently via its REST API.