

Agent Manager supports multiple LLM providers simultaneously. Provider activation is automatic — set the corresponding API key in your environment and the provider becomes available. Each agent can use a different model, and you can switch models per run without changing agent configuration.

Supported providers

| Provider | Environment variable | Example models |
| --- | --- | --- |
| OpenAI | `OPENAI_API_KEY` | GPT-4, GPT-4o |
| Anthropic | `ANTHROPIC_API_KEY` | Claude 3.5 Sonnet |
| Google | `GOOGLE_API_KEY` | Gemini Pro |
Providers without a valid key are automatically disabled at startup. Providers with a valid key are available for selection immediately — no restart or additional configuration is required.
You should set at least one provider key before starting Agent Manager: the application will start without one, but all agent runs will fail until a valid key is configured.

Activating a provider

Set the API key as an environment variable before starting Agent Manager:
```bash
export OPENAI_API_KEY=sk-...
```
When Agent Manager starts, it detects which keys are present and prints a provider initialization report to the console. Providers with keys will show as active; those without will be disabled.
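Conceptually, the startup detection works like the following sketch (illustrative Python, not Agent Manager's actual code; the environment-variable names come from the table above):

```python
import os

# Env vars that activate each provider (see the supported-providers table).
PROVIDER_KEYS = {
    "OPENAI": "OPENAI_API_KEY",
    "ANTHROPIC": "ANTHROPIC_API_KEY",
    "GOOGLE": "GOOGLE_API_KEY",
}

def detect_active_providers(environ=os.environ):
    """Return a provider -> active mapping based on which keys are set."""
    return {
        provider: bool(environ.get(var))
        for provider, var in PROVIDER_KEYS.items()
    }
```

A provider is active exactly when its key is present and non-empty; everything else is disabled until a key is supplied.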

Viewing available models

To list all configured model definitions:

```
GET /api/models
```

To retrieve a specific model by ID:

```
GET /api/models/{id}
```

To check the health status of each provider:

```
GET /api/models/providers/status
```
The provider status endpoint returns one row per provider with counts of available, unavailable, and never-probed models, along with the last probe timestamp. It reads from cached probe results — it does not re-ping providers on each call.
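A client can roll one status row up into a simple health summary. The sketch below is illustrative Python; the field names (`available`, `unavailable`, `neverProbed`, `lastProbedAt`) are assumptions based on the counts described above, so check them against the actual response shape:

```python
def summarize_provider_status(row):
    """Summarize one row from GET /api/models/providers/status.

    Field names are illustrative; verify against the real payload.
    """
    total = row["available"] + row["unavailable"] + row["neverProbed"]
    return {
        "provider": row["provider"],
        "total": total,
        # Healthy only if every known model probed as available.
        "healthy": total > 0 and row["available"] == total,
        "lastProbe": row.get("lastProbedAt"),
    }
```

Because the endpoint serves cached probe results, a summary like this reflects the last probe, not the provider's live state.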

Registering a model

You can register a new model configuration via the API:
```bash
curl -X POST http://your-host/api/models \
  -H "Authorization: Bearer {token}" \
  -H "Content-Type: application/json" \
  -d '{
    "provider": "OPENAI",
    "name": "GPT-4o Production",
    "modelId": "gpt-4o",
    "apiKey": "sk-..."
  }'
```
To test connectivity for a model configuration before saving it:

```
POST /api/models/test
```

To test an already-saved model by ID:

```
POST /api/models/{id}/test
```
The test endpoint always returns 200 OK; pass or fail is encoded in the response body's `available` field along with any error message, so your client can display rich diagnostics without parsing exception responses.
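Since pass/fail lives in the body rather than the status code, a client helper might look like this (an illustrative Python sketch; the `error` field name is an assumption, only `available` is documented above):

```python
def interpret_test_response(status_code, body):
    """Interpret a response from the model test endpoints.

    The endpoints return 200 OK even when the test fails; the real
    outcome is the body's `available` flag.
    """
    if status_code != 200:
        raise RuntimeError(f"unexpected status {status_code}")
    if body.get("available"):
        return "model reachable"
    # `error` is a hypothetical field name for the diagnostic message.
    return f"model unavailable: {body.get('error', 'no error message')}"
```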

Selecting a model per run

Pass model configuration in the RunRequest body when submitting a run:
```json
{
  "message": "Generate a weekly summary",
  "modelOptions": {
    "provider": "ANTHROPIC",
    "modelId": "claude-3-5-sonnet-20241022"
  }
}
```
If no model is specified, the agent uses its default model as configured in its definition.
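Building that request body programmatically is straightforward; this is an illustrative Python sketch (the helper name is hypothetical, the field names match the RunRequest example above):

```python
def build_run_request(message, provider=None, model_id=None):
    """Build a RunRequest body.

    modelOptions is omitted unless both provider and model_id are given,
    so the agent falls back to the default model in its definition.
    """
    body = {"message": message}
    if provider and model_id:
        body["modelOptions"] = {"provider": provider, "modelId": model_id}
    return body
```

Omitting `modelOptions` entirely, rather than sending nulls, keeps the default-model fallback unambiguous.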

Managing model configuration

To update a model configuration:

```
PATCH /api/models/{id}
```

Send only the fields you want to change. The update is applied immediately.

To create a copy of a model configuration under a new ID:

```
POST /api/models/{id}/clone?newName=My+Clone
```

The clone inherits provider settings and capability flags but starts unprobed and without a default-slot assignment.

To delete a model configuration:

```
DELETE /api/models/{id}
```

Returns 204 No Content on success, or 409 Conflict if the model cannot be deleted because it is in use.

If you've changed model configuration and want the change reflected immediately without waiting for the cache TTL to expire:

```
POST /api/models/cache/invalidate
```

Requires ROLE_ADMIN. The cache is rebuilt lazily on the next read.
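Clients should branch on the delete status codes explicitly; a minimal illustrative Python sketch (helper name hypothetical, status codes as documented above):

```python
def interpret_delete_response(status_code):
    """Map DELETE /api/models/{id} status codes to outcomes."""
    if status_code == 204:
        return "deleted"
    if status_code == 409:
        # The model is in use; remove references to it before retrying.
        return "in use; cannot delete"
    raise RuntimeError(f"unexpected status {status_code}")
```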
You can have multiple providers active simultaneously. This lets you run different agents on different providers — for example, using Claude 3.5 Sonnet for reasoning-heavy tasks and GPT-4o for code generation — within the same Agent Manager deployment.