Agent Manager gives you a complete platform for running AI agents in production. Connect to OpenAI, Anthropic, or Google models; equip agents with tools and knowledge; orchestrate multi-agent teams; and monitor every run — all within your own infrastructure, with no data leaving to third-party platforms.
Documentation Index
Fetch the complete documentation index at: https://operativusai.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
Quick Start
Make your first agent run in under five minutes
Core Concepts
Understand agents, runs, sessions, and memory
API Reference
Full REST API with request and response examples
Multi-Agent Teams
Coordinate and route work across multiple agents
What you can build
Streaming Chat
Real-time token streaming with reasoning traces visible to users
Knowledge Base (RAG)
Upload PDFs and URLs; agents search your docs automatically
Automated Workflows
Chain agent steps into repeatable, durable pipelines
Human-in-the-Loop
Pause sensitive operations and require human approval to continue
Get started
Configure your LLM provider
Set your OPENAI_API_KEY, ANTHROPIC_API_KEY, or GOOGLE_API_KEY. Agent Manager automatically activates the providers with valid keys.
Start the server
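The provider keys from the step above can be set as environment variables before booting the server. The key values below are placeholders; substitute your own:

```shell
# Placeholder key; replace with a real value. Agent Manager activates
# each provider that has a valid key set.
export OPENAI_API_KEY="sk-your-openai-key"

# Optionally enable additional providers:
# export ANTHROPIC_API_KEY="sk-ant-your-key"
# export GOOGLE_API_KEY="your-google-key"
```

Only one key is required; set more to make multiple providers available to your agents.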
Boot the backend with ./mvnw spring-boot:run. It seeds default agents and connects to PostgreSQL on startup.
Run your first agent
Send a POST request to /api/agents/{agentId}/runs with a message. Get back a complete response with tool traces.
Agent Manager is private by design — all conversation history, memory, and knowledge embeddings are stored exclusively in your PostgreSQL instance. No user data is sent to external observability platforms unless you explicitly configure it.
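Putting the steps above together, a minimal first run might look like the sketch below. The port (8080) is the Spring Boot default, and both the AGENT_ID value and the `{"message": ...}` body shape are assumptions — consult the API Reference for the exact request schema:

```shell
# Start the backend (seeds default agents, connects to PostgreSQL).
# Requires a provider API key in the environment and PostgreSQL running.
./mvnw spring-boot:run &

# Once the server is up, send a message to an agent.
# "default-agent" is a hypothetical id; list or create agents via the API.
AGENT_ID="default-agent"
curl -s -X POST "http://localhost:8080/api/agents/${AGENT_ID}/runs" \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello, what can you do?"}'
```

The response should include the agent's final answer along with the tool traces described above.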