Agent Manager streams responses using Server-Sent Events (SSE), so your application can render the agent’s answer incrementally as tokens arrive rather than waiting for the full response. The same stream also exposes the agent’s internal reasoning and tool activity, giving you full visibility into how the agent is working.
How streaming works
When you call the stream endpoint, the server holds the connection open and pushes a sequence of AgentStreamEvent objects as newline-delimited SSE messages. Each event has a discriminator field (event) that tells you what kind of data it carries.
Send your request with an Accept: text/event-stream header to receive the stream.
Event structure
Every SSE message contains a single AgentStreamEvent:
- event: The event type. See the table below for all possible values.
- data: The payload for this event. For delta events this is a text fragment; for tool events it is a JSON string.
- A Unix timestamp (milliseconds) recording when the event was emitted by the server.
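Taken together, these fields suggest a TypeScript shape along the following lines. This is a sketch: `event` and `data` are the field names used elsewhere on this page, while `timestamp` is an assumed name for the third field.

```typescript
// Sketch of the event shape described above. `event` and `data` are named
// on this page; `timestamp` is an assumed name for the third field.
type AgentStreamEventType =
  | "START"
  | "REASONING_DELTA"
  | "CONTENT_DELTA"
  | "TOOL_START"
  | "TOOL_END"
  | "STOP"
  | "ERROR";

interface AgentStreamEvent {
  event: AgentStreamEventType; // discriminator: what kind of data this event carries
  data: string;                // text fragment for deltas, JSON string for tool events
  timestamp: number;           // Unix milliseconds when the server emitted the event
}

// Small helper: a stream ends with either STOP or ERROR.
function isTerminal(e: AgentStreamEvent): boolean {
  return e.event === "STOP" || e.event === "ERROR";
}
```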
Event types
| Event | Description |
|---|---|
| START | The stream has been initialized. No user-visible data. |
| REASONING_DELTA | A fragment of the agent’s inner reasoning — what it is thinking before calling a tool. |
| CONTENT_DELTA | A fragment of the final answer text. Concatenate these to build the full response. |
| TOOL_START | The agent is about to call a tool. data contains the tool name and input as JSON. |
| TOOL_END | A tool call has completed. data contains the tool output as JSON. |
| STOP | The stream is complete. No more events will follow. |
| ERROR | An error occurred. data contains an error message. |
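One way to handle these is a single dispatch function that switches on the discriminator. The sketch below covers each event type in the table above; the accumulator shape and the logging choices are illustrative, not part of the API.

```typescript
interface AgentStreamEvent {
  event: string;     // one of the event types in the table above
  data: string;
  timestamp: number; // assumed field name
}

// Feed one event into an accumulator; returns false once the stream is done.
function handleEvent(
  e: AgentStreamEvent,
  out: { reasoning: string; content: string }
): boolean {
  switch (e.event) {
    case "START":
      return true; // no user-visible data
    case "REASONING_DELTA":
      out.reasoning += e.data; // build up the "Thinking..." text
      return true;
    case "CONTENT_DELTA":
      out.content += e.data; // build up the final answer
      return true;
    case "TOOL_START":
      console.log("tool call:", e.data); // JSON: tool name and input
      return true;
    case "TOOL_END":
      console.log("tool result:", e.data); // JSON: tool output
      return true;
    case "STOP":
      return false; // stream complete, no more events will follow
    case "ERROR":
      throw new Error(e.data);
    default:
      return true; // ignore unknown event types for forward compatibility
  }
}
```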
REASONING_DELTA events expose the agent’s “inner thoughts” — the chain-of-thought reasoning it produces before deciding which tool to call. You can render these separately (for example, in a collapsible “Thinking…” section) to show users how the agent reached its conclusion.
Consuming the stream in TypeScript
The example below uses the EventSource API (or a polyfill that supports POST with a body) to consume the stream and separate reasoning from content:
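Because the native EventSource API cannot send a POST body, a common alternative is a plain fetch with manual SSE parsing. The following is a minimal sketch, not a confirmed client: the endpoint URL is passed in by the caller, and the { message } request body shape is an assumption.

```typescript
// Minimal SSE consumer using fetch + ReadableStream. The { message } request
// body shape is an assumption -- adapt it to your endpoint's schema.
interface AgentStreamEvent {
  event: string;
  data: string;
  timestamp: number; // assumed field name
}

// Parse one SSE line of the form `data: {...}` into an event, or null.
function parseSseLine(line: string): AgentStreamEvent | null {
  if (!line.startsWith("data:")) return null;
  const payload = line.slice(5).trim();
  return payload ? (JSON.parse(payload) as AgentStreamEvent) : null;
}

async function consumeStream(
  url: string,
  message: string
): Promise<{ reasoning: string; content: string }> {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json", Accept: "text/event-stream" },
    body: JSON.stringify({ message }),
  });
  if (!res.ok || !res.body) throw new Error(`stream request failed: ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  const out = { reasoning: "", content: "" };

  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep any partial line for the next chunk
    for (const line of lines) {
      const evt = parseSseLine(line);
      if (!evt) continue;
      if (evt.event === "REASONING_DELTA") out.reasoning += evt.data;
      else if (evt.event === "CONTENT_DELTA") out.content += evt.data;
      else if (evt.event === "ERROR") throw new Error(evt.data);
      // START, TOOL_START, TOOL_END, and STOP carry no answer text.
    }
  }
  return out;
}
```

Buffering the trailing partial line matters: a network chunk boundary can split one SSE message in two, so only complete lines are parsed on each read.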
Multimodal input
For agents that support vision, you can include image attachments in the request body alongside your message. Images must be base64-encoded or referenced by a publicly accessible URL.
Request body with media
Each attachment in the request specifies:
- The MIME type of the attachment, such as image/png or image/jpeg.
- The base64-encoded binary content of the file, or an HTTP/HTTPS URL pointing to the image.
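A request body might then look like the sketch below. The field names message, attachments, mimeType, and content are illustrative assumptions only, not confirmed API names; check your endpoint's schema.

```typescript
// Illustrative request body for a vision-capable agent. The field names
// (message, attachments, mimeType, content) are assumptions, not confirmed
// API names -- verify them against your endpoint's schema.
const requestBody = {
  message: "What is shown in this image?",
  attachments: [
    {
      mimeType: "image/png", // MIME type of the attachment
      // Base64-encoded bytes, or an HTTP/HTTPS URL such as
      // "https://example.com/photo.png"
      content: "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJ",
    },
  ],
};

console.log(JSON.stringify(requestBody, null, 2));
```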
Full streaming example
The following end-to-end example opens a stream, collects reasoning and content separately, and prints them to the console:
Built-in chat UI
The Agent Manager UI (http://localhost:5173) provides a production-ready chat interface with:
- Live streaming: Tokens render incrementally as they arrive. Reasoning steps appear in a collapsible “Thinking…” panel above the final answer.
- Markdown rendering: Responses are rendered with full Markdown support: tables, code blocks with syntax highlighting, and lists.
- Multimodal input: Drag and drop image files directly into the chat input. The UI handles base64 encoding automatically.
- HITL controls: When a run is paused for approval, the UI displays a visual indicator with “Approve” and “Reject” buttons.