- The Responses API does not use 'response.item.delta' for tool calls.
- It uses 'response.output_item.added' to initialize the function call.
- It uses 'response.function_call_arguments.delta' for the payload stream.
- Updated the streaming parser to catch these events and correctly yield ToolCallDelta objects.
- This restores proper streaming of tool calls back to the client.
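A sketch of the event dispatch described above, assuming serde_json payloads and a ToolCallDelta shape like the one this parser yields; the field access mirrors the public Responses API event payloads but is illustrative, not the exact implementation:

```rust
use serde_json::Value;

#[derive(Debug)]
struct ToolCallDelta {
    index: usize,
    id: Option<String>,
    name: Option<String>,
    arguments: Option<String>,
}

fn parse_responses_event(event_type: &str, data: &Value) -> Option<ToolCallDelta> {
    match event_type {
        // Initializes the function call: carries the call id and function name.
        "response.output_item.added" => {
            let item = data.get("item")?;
            if item.get("type")?.as_str()? != "function_call" {
                return None;
            }
            Some(ToolCallDelta {
                index: data.get("output_index")?.as_u64()? as usize,
                id: item.get("call_id").and_then(|v| v.as_str()).map(String::from),
                name: item.get("name").and_then(|v| v.as_str()).map(String::from),
                arguments: None,
            })
        }
        // Streams the JSON argument payload in incremental fragments.
        "response.function_call_arguments.delta" => Some(ToolCallDelta {
            index: data.get("output_index")?.as_u64()? as usize,
            id: None,
            name: None,
            arguments: data.get("delta").and_then(|v| v.as_str()).map(String::from),
        }),
        _ => None,
    }
}
```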
- The Responses API ends streams without a final '[DONE]' message.
- This causes reqwest_eventsource to return Error::StreamEnded.
- Previously, this was treated as a premature termination, triggering an error probe.
- We now explicitly match and break on Err(StreamEnded) for normal completion.
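A sketch of the loop using reqwest_eventsource's public types; other error variants still fall through to the existing error path:

```rust
use futures_util::StreamExt;
use reqwest_eventsource::{Error as EsError, Event, EventSource};

async fn drain(mut es: EventSource) -> Result<(), EsError> {
    while let Some(event) = es.next().await {
        match event {
            Ok(Event::Open) => {} // connection established
            Ok(Event::Message(msg)) => {
                // ... forward msg.data to the streaming parser ...
                let _ = msg.data;
            }
            // The Responses API closes the connection without a '[DONE]'
            // sentinel; reqwest_eventsource surfaces that as StreamEnded,
            // which we now treat as normal completion.
            Err(EsError::StreamEnded) => break,
            Err(e) => return Err(e),
        }
    }
    Ok(())
}
```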
- The OpenAI Responses API actually requires the 'stream: true'
parameter in the JSON body, contrary to some documentation summaries.
- Omitting it caused the API to return a full application/json
response instead of SSE text/event-stream, leading to stream failures
and probe warnings in the proxy logs.
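Illustrative body construction; only the 'stream' flag is the point here, the rest of the fields are placeholders:

```rust
use serde_json::{json, Value};

fn build_streaming_body(model: &str, input_items: Value) -> Value {
    json!({
        "model": model,
        "input": input_items,
        // Without this flag the Responses API returns a buffered
        // application/json document instead of SSE text/event-stream.
        "stream": true,
    })
}
```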
- Implement final buffer flush in streaming path to prevent data loss
- Increase probe response body logging to 500 characters
- Ensure internal metadata is stripped even on final flush
- Fix potential hang when stream ends without explicit [DONE] event
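A sketch of the end-of-stream handling, under the assumption that the streaming path pushes chunks through a tokio mpsc channel; the names are illustrative:

```rust
use tokio::sync::mpsc::Sender;

// Hypothetical end-of-stream hook: emit whatever is still buffered, then
// let the channel close so the client response terminates even when the
// upstream never sends '[DONE]'.
async fn finish_stream(pending: String, tx: Sender<String>) {
    if !pending.is_empty() {
        // The final flush goes through the same metadata stripping as
        // mid-stream deltas (see strip_internal_metadata below).
        let _ = tx.send(pending).await;
    }
    // `tx` is dropped here; the receiver sees the stream end instead of
    // hanging on a channel that never closes.
}
```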
- Add strip_internal_metadata helper to remove prefixes like 'to=multi_tool_use.parallel'
- Clean up Thai text preambles reported in the journal
- Apply metadata stripping to both synchronous and streaming response paths
- Improve the readability of proxied model responses by removing leaked metadata
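A minimal sketch of such a helper; the prefix table is an assumption and would carry the real offenders (the 'to=multi_tool_use.parallel' prefix and the reported locale preambles):

```rust
// Hypothetical strip_internal_metadata sketch. Both the synchronous path
// and every streaming delta (including the final flush) route content
// through this before it reaches the client.
fn strip_internal_metadata(content: &str) -> String {
    const PREFIXES: &[&str] = &["to=multi_tool_use.parallel"];
    let mut out = content.trim_start();
    for p in PREFIXES {
        if let Some(rest) = out.strip_prefix(p) {
            out = rest.trim_start();
        }
    }
    out.to_string()
}
```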
- Implement cross-delta content buffering in streaming Responses API
- Wait for full 'tool_uses' JSON block before yielding to client
- Handle 'to=multi_tool_use.parallel' preamble by buffering
- Fix stream error probe to not request a new stream
- Remove raw JSON leakage from streaming content
- Add static parse_tool_uses_json helper to extract embedded tool calls
- Update synchronous and streaming Responses API parsers to detect tool_uses blocks
- Strip tool_uses JSON from content to prevent raw JSON leakage to client
- Resolve lifetime issues by avoiding &self capture in streaming closure
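A sketch of what a parse_tool_uses_json-style helper can look like; the brace-walking and the field names inside the block are assumptions, and a production version would also need to skip braces inside JSON strings:

```rust
use serde_json::Value;

// Scan buffered content for an embedded object with a "tool_uses" array;
// return the parsed calls plus the content with the raw JSON removed.
fn parse_tool_uses_json(content: &str) -> Option<(Vec<Value>, String)> {
    let start = content.find('{')?;
    let mut depth = 0usize;
    let mut end = None;
    for (i, c) in content[start..].char_indices() {
        match c {
            '{' => depth += 1,
            '}' => {
                depth -= 1;
                if depth == 0 {
                    end = Some(start + i + 1);
                    break;
                }
            }
            _ => {}
        }
    }
    let end = end?;
    let candidate: Value = serde_json::from_str(&content[start..end]).ok()?;
    let tool_uses = candidate.get("tool_uses")?.as_array()?.clone();
    // Strip the raw JSON block so it never reaches the client as text.
    let cleaned = format!("{}{}", &content[..start], &content[end..]);
    Some((tool_uses, cleaned))
}
```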
- Add 'tools' and 'tool_choice' parameters to streaming Responses API
- Include 'name' field in message items for Responses API input
- Use string content for text-only messages to improve instruction following
- Fix subagents not triggering and files not being created
Updated the OpenAI Responses API requests to use a structured input format (an array of message objects) for better compatibility. Added a proactive error probe to chat_responses_stream to capture and log API error bodies on failure.
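An illustrative request body combining the structured input with the tool parameters from the entries above; the 'name' value and the user content are placeholders:

```rust
use serde_json::{json, Value};

fn build_responses_request(model: &str, tools: Value) -> Value {
    json!({
        "model": model,
        // Structured input: an array of message objects rather than a
        // single prompt string. Text-only messages keep string content.
        "input": [
            {
                "role": "user",
                "name": "operator", // 'name' passed through when present
                "content": "List the repo files."
            }
        ],
        "tools": tools,
        "tool_choice": "auto",
        "stream": true
    })
}
```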
This commit adds support for the OpenAI Responses API in both streaming and non-streaming modes. It also implements proactive routing for gpt-5 and codex models and cleans up unused 'session' variable warnings across the dashboard source files.
This commit introduces:
- AES-256-GCM encryption for LLM provider API keys in the database.
- HMAC-SHA256 signed session tokens with activity-based refresh logic.
- Standardized frontend XSS protection using a global escapeHtml utility.
- Hardened security headers and request body size limits.
- Improved database integrity with foreign key enforcement and atomic transactions.
- Integration tests for the full encrypted key storage and proxy usage lifecycle.
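A minimal sketch of the key encryption, assuming the aes-gcm crate; key management, the nonce storage layout, and error handling are simplified:

```rust
use aes_gcm::{
    aead::{Aead, AeadCore, KeyInit, OsRng},
    Aes256Gcm, Key,
};

fn encrypt_api_key(master_key: &[u8; 32], plaintext: &str) -> Vec<u8> {
    let cipher = Aes256Gcm::new(Key::<Aes256Gcm>::from_slice(master_key));
    // A fresh random 96-bit nonce per key, stored alongside the ciphertext.
    let nonce = Aes256Gcm::generate_nonce(&mut OsRng);
    let ciphertext = cipher
        .encrypt(&nonce, plaintext.as_bytes())
        .expect("AES-GCM encryption failure");
    // Prepend the nonce so decryption can recover it.
    let mut out = nonce.to_vec();
    out.extend_from_slice(&ciphertext);
    out
}
```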
This commit fixes the Gemini API 'Invalid value at thought_signature' error by ensuring synthetic 'call_' IDs are not passed into the TYPE_BYTES field. It also adds a pre-pass to correctly resolve function names from tool call IDs for tool responses.
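A sketch of such a pre-pass over serde_json messages; the structure and helper name are assumptions:

```rust
use std::collections::HashMap;

// Build a map from tool call id -> function name so tool responses can be
// attributed to the right function even when the client omits the name.
fn build_call_name_map(messages: &[serde_json::Value]) -> HashMap<String, String> {
    let mut map = HashMap::new();
    for msg in messages {
        if let Some(calls) = msg.get("tool_calls").and_then(|v| v.as_array()) {
            for call in calls {
                if let (Some(id), Some(name)) = (
                    call.get("id").and_then(|v| v.as_str()),
                    call.pointer("/function/name").and_then(|v| v.as_str()),
                ) {
                    map.insert(id.to_string(), name.to_string());
                }
            }
        }
    }
    map
}
```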
- Ensure ALL assistant messages in history have reasoning_content and string content.
- Use a single space as a minimal placeholder for missing reasoning.
- Log full offending request bodies at ERROR level for detailed debugging.
- Fix DeepSeek R1 (reasoner) 400 errors by ensuring assistant messages with
tool_calls in history always have non-null 'content' and 'reasoning_content'.
- Implement deterministic tool call ID truncation (max 40 chars) for OpenAI
compatibility (fixes errors when history contains long Gemini signatures).
- Automatic transition from 'max_tokens' to 'max_completion_tokens' for newer
OpenAI models (o1, o3, gpt-5-nano).
- Added 'reasoning' and 'thought' aliases to reasoning_content for robust
deserialization from various clients.
Newer OpenAI models (o1, o3, gpt-5) have deprecated 'max_tokens' in favor of
'max_completion_tokens'. The provider now automatically maps this parameter
to ensure compatibility and avoid 400 errors.
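A sketch of the mapping, assuming a serde_json request body; the model-prefix check mirrors the models named above:

```rust
fn map_max_tokens(body: &mut serde_json::Value, model: &str) {
    let newer = ["o1", "o3", "gpt-5"].iter().any(|p| model.starts_with(p));
    if !newer {
        return;
    }
    if let Some(obj) = body.as_object_mut() {
        if let Some(v) = obj.remove("max_tokens") {
            // Newer models reject 'max_tokens' with a 400; carry the same
            // budget over to 'max_completion_tokens'.
            obj.insert("max_completion_tokens".to_string(), v);
        }
    }
}
```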
OpenAI has a strict 40-character limit for tool call IDs. Long IDs (like
Gemini's 56-char thought signatures) are now deterministically truncated
to 40 characters when sending history back to OpenAI-compatible providers.
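A sketch of the truncation; plain prefix truncation is an assumption about how "deterministic" is achieved here:

```rust
const MAX_TOOL_CALL_ID_LEN: usize = 40;

// Same input always yields the same output, so tool_call references in
// assistant messages and tool messages stay consistent after truncation.
fn truncate_tool_call_id(id: &str) -> String {
    id.chars().take(MAX_TOOL_CALL_ID_LEN).collect()
}
```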
- Move DeepSeek R1 reasoning_content and content:"" fixes to DeepSeekProvider only.
- Restore OpenAI-standard null content for tool calls in helpers.rs.
- Fixes 400 Bad Request for gpt-5-nano and other strict OpenAI models.
- Ensure assistant tool calls always have content: "" instead of null.
- Add debug-level logging of the full request body when DeepSeek returns 400.
- Improved R1 placeholder injection to include content sanitization.
- Added aliases 'reasoning' and 'thought' to reasoning_content field for better client compatibility.
- Implemented automatic 'Thinking...' placeholder injection for deepseek-reasoner history messages that contain tool_calls but lack reasoning_content.
- Restored strict parameter sanitization for deepseek-reasoner.
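A sketch of the combined history sanitization, assuming serde_json message values; the 'Thinking...' placeholder follows the entry above:

```rust
// deepseek-reasoner rejects assistant messages that carry tool_calls but
// have null 'content' or missing 'reasoning_content'; patch both in place.
fn sanitize_r1_history(messages: &mut [serde_json::Value]) {
    for msg in messages.iter_mut() {
        let is_assistant = msg.get("role").and_then(|r| r.as_str()) == Some("assistant");
        let has_tool_calls = msg.get("tool_calls").map_or(false, |t| !t.is_null());
        if !(is_assistant && has_tool_calls) {
            continue;
        }
        if let Some(obj) = msg.as_object_mut() {
            // Non-null string content instead of null.
            if obj.get("content").map_or(true, |c| c.is_null()) {
                obj.insert("content".into(), serde_json::json!(""));
            }
            // Minimal placeholder when the client dropped the reasoning.
            if obj.get("reasoning_content").map_or(true, |c| c.is_null()) {
                obj.insert("reasoning_content".into(), serde_json::json!("Thinking..."));
            }
        }
    }
}
```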
Gemini often sends tool calls in one chunk and then 'STOP' in a final chunk.
If we pass the raw 'stop' at the end, clients stop and ignore the previously
received tool calls. We now track if any tools were seen and override the
final 'stop' to 'tool_calls'.
Gemini often reports 'STOP' even when tool calls are generated. To remain
OpenAI-compatible and ensure clients execute tools and continue, we must
report 'tool_calls' as the finish_reason when tools are present.
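A minimal sketch of the override:

```rust
// `saw_tool_calls` is flipped whenever a tool-call delta passes through
// the stream; applied to the last chunk's finish_reason before forwarding.
fn final_finish_reason(raw: String, saw_tool_calls: bool) -> String {
    if saw_tool_calls && raw == "stop" {
        "tool_calls".to_string()
    } else {
        raw
    }
}
```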
- Remove from as it is rejected by the API inside .
- Ensure the functionResponse payload is always a JSON object (google.protobuf.Struct), wrapping non-object tool results in a wrapper object.
- Update extraction logic to only look for thoughtSignature in sibling fields.
- Ensure thought_signature is preserved in conversation history for Gemini 3 models.
- Support multiple naming conventions (snake_case and camelCase).
- Implement stable tool call ID tracking during streaming using a stateful map.
- Improve extraction from both Gemini parts and function calls.
- Fix incorrect tool call indices during streaming.
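A sketch of the stateful map, assuming the uuid crate for minted ids; the 'call_' prefix matters for the signature guard described further below:

```rust
use std::collections::HashMap;

#[derive(Default)]
struct ToolCallTracker {
    ids: HashMap<usize, String>,
}

impl ToolCallTracker {
    // The first chunk for an index mints an id; later argument deltas for
    // the same index reuse it, keeping indices and ids stable across chunks.
    fn id_for(&mut self, index: usize) -> String {
        self.ids
            .entry(index)
            .or_insert_with(|| format!("call_{}", uuid::Uuid::new_v4()))
            .clone()
    }
}
```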
- The native Gemini REST API requires camelCase 'thoughtSignature' as a sibling to functionCall.
- Explicitly rename the field to match this requirement, resolving the 'missing thought_signature' 400 error.
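One way to enforce the rename with serde; the struct shape is an assumption:

```rust
use serde::Serialize;
use serde_json::Value;

// rename_all = "camelCase" turns thought_signature into "thoughtSignature",
// emitted as a sibling of "functionCall" inside the same part.
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
struct GeminiPart {
    function_call: Value,
    #[serde(skip_serializing_if = "Option::is_none")]
    thought_signature: Option<String>,
}
```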
- Only restore thought_signature if the tool call ID doesn't start with 'call_'.
- This ensures proxy-generated UUIDs are never sent back to Gemini as signatures, which was causing base64 decoding failures.
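A sketch of the guard, assuming the signature travels in the tool call id (as the 56-char ids above suggest) and that proxy-minted ids use the 'call_' prefix from the tracker sketch:

```rust
// Anything with the 'call_' prefix is a proxy-generated UUID, not valid
// base64 signature data, so it must never be restored as a signature.
fn restorable_signature(tool_call_id: &str) -> Option<&str> {
    if tool_call_id.starts_with("call_") {
        None
    } else {
        Some(tool_call_id)
    }
}
```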