Prompt to MCP Execution
Tracing the lifecycle of a request from client-side submission to deterministic tool execution and final response.
Optimized Pipeline
A thin system prompt, bounded history, and persisted memory are assembled before the request is routed into model-only or tool-enabled execution.
Pipeline overview
How the request is prepared and routed.
Platform context (Supabase + Gateway + MCP)
Drawn from the same source as the Architecture page: where auth, quotas, and chat persistence sit relative to model and tool execution.
Context retention
How incident threads stay coherent when transcripts are trimmed: a JSON envelope in chat_sessions, server-side memory injection, and CAN grounding. See also Architecture → Conversation context retention.
Retention flow
Load envelope, merge facts, optional CAN short-circuit, trim, then generate with memory in the system prompt.
Memory sequence
Supabase read/update of envelope.memory around the model call.
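The retention flow described above can be sketched as one pipeline. Everything here is illustrative: the envelope shape, the helper names (`loadEnvelope`, `mergeFacts`, `canShortCircuit`, `handleTurn`), the required CAN fields, and the in-memory map standing in for the `chat_sessions` table are all assumptions, not the project's actual code.

```typescript
// Illustrative envelope shape persisted in chat_sessions (assumed fields).
type Envelope = {
  memory: { incidentSummary: string; keyFacts: Record<string, string> };
};

// In-memory stand-in for the Supabase chat_sessions table.
const sessions = new Map<string, Envelope>();

function loadEnvelope(conversationId: string): Envelope {
  return (
    sessions.get(conversationId) ?? {
      memory: { incidentSummary: "", keyFacts: {} },
    }
  );
}

// Merge facts extracted from the latest user text into the envelope.
function mergeFacts(env: Envelope, extracted: Record<string, string>): Envelope {
  return {
    memory: {
      incidentSummary: env.memory.incidentSummary,
      keyFacts: { ...env.memory.keyFacts, ...extracted },
    },
  };
}

// Hypothetical required fields for a CAN request; if any is missing, the
// route returns early and asks for it instead of calling the model.
const CAN_REQUIRED = ["assetId", "failureMode"];

function canShortCircuit(env: Envelope): string | null {
  const missing = CAN_REQUIRED.filter((k) => !(k in env.memory.keyFacts));
  return missing.length ? `Please provide: ${missing.join(", ")}` : null;
}

function handleTurn(
  conversationId: string,
  extracted: Record<string, string>,
  isCan: boolean,
): string {
  let env = loadEnvelope(conversationId);
  env = mergeFacts(env, extracted);
  sessions.set(conversationId, env); // persist updated memory around the model call
  if (isCan) {
    const early = canShortCircuit(env);
    if (early) return early; // short-circuit: no model call
  }
  // ...trim history, then generate with env.memory injected into the system prompt.
  return "model-response";
}
```

On this sketch, an incomplete CAN turn returns the missing-fact prompt without a model call, while the merged facts survive in the envelope so the next turn can complete the request.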
Runtime Sequence
Detailed sequence from chat submit through auth, optional envelope read, quota, trim, memory persistence, decision gates, optional MCP calls, and final JSON response.
End-to-end runtime sequence
The complete lifecycle of a single user interaction.
Tool Selection Loop
How tool schemas are supplied, when a tool is chosen, and how tool results are fed back into generation.
Tool-selection loop
Recursive tool execution within the AI SDK loop.
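The recursive loop can be reduced to a small testable shape. The types and names here (`ToolCall`, `ModelStep`, `runLoop`) are assumptions standing in for the AI SDK's internal loop; a plain function plays the model so no real inference is needed.

```typescript
// Minimal shapes standing in for AI SDK tool calls (names are assumptions).
type ToolCall = { name: string; args: Record<string, unknown> };
type ModelStep =
  | { type: "tool-call"; call: ToolCall }
  | { type: "text"; text: string };

type Tool = (args: Record<string, unknown>) => string;

// The model decides each step given the transcript so far; here it is a
// plain function so the loop is testable without a real model.
type Model = (transcript: string[]) => ModelStep;

// Recursive tool loop: run the model, execute any tool it selects, feed the
// result back into the transcript, and repeat until it emits final text
// (bounded by maxSteps so a misbehaving model cannot loop forever).
function runLoop(
  model: Model,
  tools: Record<string, Tool>,
  transcript: string[],
  maxSteps = 5,
): string {
  if (maxSteps === 0) return "[aborted: step limit reached]";
  const step = model(transcript);
  if (step.type === "text") return step.text;
  const tool = tools[step.call.name];
  const result = tool ? tool(step.call.args) : `[unknown tool ${step.call.name}]`;
  return runLoop(model, tools, [...transcript, `tool:${step.call.name}=${result}`], maxSteps - 1);
}
```

A stub model that requests one `search` call and then answers from the tool result exercises the full call-result-generate cycle.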
Decision logic explained
Prompt submission
The client posts the new message, a window of conversation history, and a conversationId when the session is saved.
Envelope + memory
With conversationId, the API loads the JSON envelope from Postgres, updates incident summary and key facts from your text, and may return early for CAN requests with incomplete facts.
Context extraction
The route runs intent/entity extraction and reads prompt runtime settings.
History bounding
The latest user turn is deduplicated when it already appears at the end of history; LangChain trimMessages enforces the token budget; memory is re-injected into the system string.
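The bounding step can be sketched with simplified stand-ins: a crude character-based token estimate replaces LangChain's real counter, and `dedupeLatest`, `trimToBudget`, and `buildSystem` are hypothetical helper names, not the route's actual exports.

```typescript
type Msg = { role: "user" | "assistant" | "system"; content: string };

// Crude token estimate (~4 chars per token); the real route uses LangChain's
// trimMessages with a proper token counter.
const estimateTokens = (m: Msg) => Math.ceil(m.content.length / 4);

// Drop the incoming turn when the client already included it at the end of history.
function dedupeLatest(history: Msg[], latest: Msg): Msg[] {
  const last = history[history.length - 1];
  return last && last.role === "user" && last.content === latest.content
    ? history
    : [...history, latest];
}

// Keep the most recent messages that fit the budget (oldest dropped first).
function trimToBudget(history: Msg[], budget: number): Msg[] {
  const kept: Msg[] = [];
  let used = 0;
  for (let i = history.length - 1; i >= 0; i--) {
    const cost = estimateTokens(history[i]);
    if (used + cost > budget) break;
    kept.unshift(history[i]);
    used += cost;
  }
  return kept;
}

// Memory survives trimming because it is re-injected into the system string,
// not carried in the trimmed message list.
function buildSystem(base: string, memory: string): string {
  return memory ? `${base}\n\n[Memory]\n${memory}` : base;
}
```

This is why trimming old turns does not lose incident facts: the facts ride in the system string, which is rebuilt every turn.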
Conversational vs actionable
Regex routing and action entity detection classify the request.
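A minimal classifier illustrates the gate. The patterns and entity list below are hypothetical; the route's actual regexes and entity extraction differ.

```typescript
// Hypothetical patterns; the real route's regexes are more extensive.
const ACTIONABLE = [
  /\b(create|open|raise)\b.*\b(ticket|work order|notification)\b/i,
  /\bupdate\b/i,
  /\bsearch\b/i,
];
const ACTION_ENTITIES = new Set(["create", "update", "search"]);

// Either signal is enough: a regex hit on the raw message, or an action
// entity surfaced by the extraction step.
function classify(
  message: string,
  entities: string[],
): "actionable" | "conversational" {
  const byRegex = ACTIONABLE.some((re) => re.test(message));
  const byEntity = entities.some((e) => ACTION_ENTITIES.has(e));
  return byRegex || byEntity ? "actionable" : "conversational";
}
```

Only "actionable" results enter the tool-enabled path; everything else stays model-only.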
MCP execution path
The route ensures MCP connectivity and exposes tools to the AI SDK.
Response delivery
Final assistant text plus metadata are returned; the client debounces saving the full transcript under RLS.
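The client-side debounce can be sketched as below. The `flush` method is an addition for determinism (e.g. before unload) and, like `makeDebouncedSaver`, is a hypothetical name rather than the app's actual helper; the RLS enforcement itself lives in Postgres, not here.

```typescript
type Save = (transcript: string[]) => void;

// Debounced saver: rapid transcript updates collapse into one write of the
// latest snapshot. flush() forces any pending write immediately.
function makeDebouncedSaver(save: Save, delayMs: number) {
  let pending: string[] | null = null;
  let timer: ReturnType<typeof setTimeout> | null = null;
  const fire = () => {
    if (pending) save(pending);
    pending = null;
    timer = null;
  };
  return {
    schedule(transcript: string[]) {
      pending = transcript; // always keep only the newest snapshot
      if (timer) clearTimeout(timer);
      timer = setTimeout(fire, delayMs);
    },
    flush() {
      if (timer) clearTimeout(timer);
      fire();
    },
  };
}
```

Saving the full transcript (rather than appending deltas) makes the write idempotent, so dropping intermediate snapshots is safe.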
lib helper relationships
Logical map of config, engine, and client-side helpers.
Helper Modules
Prompt assembly, intent hints, and editable config are implemented as small TypeScript modules—not separate services.
Common Questions
Does every request hit MCP?
No. Only actionable requests enter the tool-enabled path.
What counts as actionable?
Messages matching actionable regexes or containing action entities like create/update/search.
Who selects the tool?
The model selects tools from registered MCP schemas during generation.
What happens if MCP is down?
The API returns a 503 with remediation guidance; the failure is explicit.
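The explicit-failure behavior can be sketched as a connectivity gate. The response shape, the `ping` check, and the `ensureMcp` name are assumptions for illustration; the real route's health check and error body may differ.

```typescript
type McpClient = { ping(): boolean };

type ApiResponse = { status: number; body: Record<string, unknown> };

// Gate tool-enabled requests on MCP connectivity; fail loudly with
// remediation guidance rather than silently degrading to model-only output.
function ensureMcp(client: McpClient): ApiResponse | null {
  if (client.ping()) return null; // connected: proceed to tool execution
  return {
    status: 503,
    body: {
      error: "MCP server unreachable",
      remediation:
        "Verify the MCP endpoint is running and credentials are valid, then retry.",
    },
  };
}
```

Returning `null` on success keeps the gate composable: the caller proceeds only when no error response was produced.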