Documentation Index

Fetch the complete documentation index at: https://operativusai.mintlify.app/llms.txt

Use this file to discover all available pages before exploring further.

Agent Manager gives you a complete platform for running AI agents in production. Connect to OpenAI, Anthropic, or Google models; equip agents with tools and knowledge; orchestrate multi-agent teams; and monitor every run — all within your own infrastructure, with no data sent to third-party platforms.

Quick Start

Make your first agent run in under five minutes

Core Concepts

Understand agents, runs, sessions, and memory

API Reference

Full REST API with request and response examples

Multi-Agent Teams

Coordinate and route work across multiple agents

What you can build

Streaming Chat

Real-time token streaming with reasoning traces visible to users

Knowledge Base (RAG)

Upload PDFs and URLs; agents search your docs automatically

Automated Workflows

Chain agent steps into repeatable, durable pipelines

Human-in-the-Loop

Pause sensitive operations and require human approval to continue

Get started

1. Configure your LLM provider

Set your OPENAI_API_KEY, ANTHROPIC_API_KEY, or GOOGLE_API_KEY. Agent Manager automatically activates the providers with valid keys.
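The activation rule above can be sketched as follows. The environment variable names come from this page; the detection logic itself is an assumption about how key-based activation typically works, not the platform's actual implementation:

```python
import os

# Variable names per the docs; a provider is treated as active
# when its key is set and non-empty (assumed behavior).
PROVIDER_KEYS = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "google": "GOOGLE_API_KEY",
}

def active_providers(env=os.environ) -> list[str]:
    """Return the providers whose API key is present in the environment."""
    return [name for name, var in PROVIDER_KEYS.items() if env.get(var)]
```

Export the keys in the shell that launches the backend so they are visible to the server process.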
2. Start the server

Boot the backend with ./mvnw spring-boot:run. It seeds default agents and connects to PostgreSQL on startup.
3. Run your first agent

Send a POST request to /api/agents/{agentId}/runs with a message. Get back a complete response with tool traces.
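A minimal sketch of that request, using only the Python standard library. The endpoint path is from this page; the host/port (a default local backend) and the payload shape (`{"message": ...}`) are assumptions:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8080"  # assumed local backend address

def build_run_request(agent_id: str, message: str) -> urllib.request.Request:
    """Build a POST to /api/agents/{agentId}/runs (payload shape assumed)."""
    url = f"{BASE_URL}/api/agents/{agent_id}/runs"
    body = json.dumps({"message": message}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Example: a request for a hypothetical agent id.
req = build_run_request("demo-agent", "Summarize our refund policy.")
```

With the server from step 2 running, sending the request via `urllib.request.urlopen(req)` returns the complete run response, including tool traces.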
4. Explore the UI

Open the Agent Manager UI at http://localhost:5173 to chat, manage knowledge, and inspect runs visually.

Agent Manager is private by design — all conversation history, memory, and knowledge embeddings are stored exclusively in your PostgreSQL instance. No user data is sent to external observability platforms unless you explicitly configure it.