- Fix DeepSeek R1 (reasoner) 400 errors by ensuring assistant messages with
tool_calls in history always have non-null 'content' and 'reasoning_content'.
- Implement deterministic tool call ID truncation (max 40 chars) for OpenAI
compatibility (fixes errors when history contains long Gemini signatures).
- Switch automatically from 'max_tokens' to 'max_completion_tokens' for newer
  OpenAI models (o1, o3, gpt-5-nano).
- Add 'reasoning' and 'thought' aliases to reasoning_content for robust
  deserialization from various clients. Both behaviors are sketched below.
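A minimal sketch of the last two items, assuming serde-based deserialization
and a serde_json request body (AssistantDelta and apply_max_completion_tokens
are illustrative names, not the actual code):

    use serde::Deserialize;
    use serde_json::Value;

    // Accept "reasoning_content", "reasoning", or "thought" from clients.
    #[derive(Deserialize)]
    struct AssistantDelta {
        #[serde(default, alias = "reasoning", alias = "thought")]
        reasoning_content: Option<String>,
    }

    // Newer OpenAI models reject "max_tokens"; rename the field in place.
    fn apply_max_completion_tokens(body: &mut Value, model: &str) {
        let needs_rename = ["o1", "o3", "gpt-5"].iter().any(|p| model.starts_with(p));
        if !needs_rename {
            return;
        }
        if let Some(obj) = body.as_object_mut() {
            if let Some(v) = obj.remove("max_tokens") {
                obj.insert("max_completion_tokens".into(), v);
            }
        }
    }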
OpenAI has a strict 40-character limit for tool call IDs. Long IDs (like
Gemini's 56-char thought signatures) are now deterministically truncated
to 40 characters when sending history back to OpenAI-compatible providers.
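A sketch of the truncation, assuming plain prefix slicing; the real code may
instead hash the tail to avoid collisions (truncate_tool_call_id is an
illustrative name):

    /// Deterministically shorten a tool call ID to OpenAI's 40-char limit.
    /// Determinism matters: the assistant's tool_calls entry and the later
    /// tool-role message must truncate to the same ID.
    fn truncate_tool_call_id(id: &str) -> &str {
        if id.len() <= 40 {
            return id;
        }
        // IDs are ASCII in practice; back up to a safe UTF-8 boundary anyway.
        let mut end = 40;
        while !id.is_char_boundary(end) {
            end -= 1;
        }
        &id[..end]
    }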
- Move the DeepSeek R1 reasoning_content and content:"" fixes into DeepSeekProvider only.
- Restore OpenAI-standard null content for tool calls in helpers.rs.
- Fix 400 Bad Request for gpt-5-nano and other strict OpenAI models.
- Ensure assistant tool calls always have content: "" instead of null (DeepSeek only).
- Add debug-level logging of the full request body when DeepSeek returns 400.
- Improve R1 placeholder injection to include content sanitization; see the sketch below.
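A sketch of the DeepSeek-only sanitization pass, assuming the outgoing body is
a serde_json::Value at this point (sanitize_r1_history is an illustrative name):

    use serde_json::{json, Value};

    // DeepSeek R1 rejects assistant tool-call messages whose "content" or
    // "reasoning_content" is null; substitute empty strings before sending.
    fn sanitize_r1_history(body: &mut Value) {
        let Some(messages) = body.get_mut("messages").and_then(Value::as_array_mut) else {
            return;
        };
        for msg in messages {
            let has_tool_calls = msg.get("role").and_then(Value::as_str) == Some("assistant")
                && msg.get("tool_calls").is_some();
            if !has_tool_calls {
                continue;
            }
            for field in ["content", "reasoning_content"] {
                if msg.get(field).map_or(true, Value::is_null) {
                    msg[field] = json!("");
                }
            }
        }
    }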
Track cache_read_tokens and cache_write_tokens end-to-end: parse from
provider responses (OpenAI, DeepSeek, Grok, Gemini), persist to SQLite,
apply cache-aware pricing from the model registry, and surface in API
responses and the dashboard.
- Add cache fields to ProviderResponse, StreamUsage, RequestLog structs
- Parse cached_tokens (OpenAI/Grok), prompt_cache_hit_tokens and
  prompt_cache_miss_tokens (DeepSeek), and cachedContentTokenCount (Gemini)
  from provider responses
- Send stream_options.include_usage for streaming; capture real usage
from final SSE chunk in AggregatingStream
- Add an ALTER TABLE migration for the cache_read_tokens/cache_write_tokens columns
- Apply a cache-aware cost formula using registry cache_read/cache_write rates
  (sketched after this list)
- Update Provider trait calculate_cost signature across all providers
- Add cache_read_tokens/cache_write_tokens to Usage API response
- Dashboard: cache hit rate card, cache columns in pricing and usage
tables, cache token aggregation in SQL queries
- Remove API debug panel and verbose console logging from api.js
- Bump static asset cache-bust to v5
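The cost formula referenced above, sketched under the assumption that cached
reads are already counted inside prompt_tokens (as OpenAI reports them); the
Rates struct is illustrative, not the actual registry type:

    struct Rates {
        input: f64,       // $ per 1M uncached prompt tokens
        output: f64,      // $ per 1M completion tokens
        cache_read: f64,  // $ per 1M prompt tokens served from cache
        cache_write: f64, // $ per 1M prompt tokens written to cache
    }

    fn calculate_cost(
        rates: &Rates,
        prompt_tokens: u64,
        completion_tokens: u64,
        cache_read_tokens: u64,
        cache_write_tokens: u64,
    ) -> f64 {
        // Bill cached reads at the discounted rate and only the
        // remainder of the prompt at the full input rate.
        let uncached_prompt = prompt_tokens.saturating_sub(cache_read_tokens);
        (uncached_prompt as f64 * rates.input
            + cache_read_tokens as f64 * rates.cache_read
            + cache_write_tokens as f64 * rates.cache_write
            + completion_tokens as f64 * rates.output)
            / 1_000_000.0
    }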
messages_to_openai_json_text_only() was missing tool-calling support,
causing DeepSeek 400 errors when conversations included tool turns.
Now mirrors messages_to_openai_json() logic for tool-role messages
(tool_call_id, name) and assistant tool_calls, with images replaced
by "[Image]" text.
Implement full OpenAI-compatible tool-calling support across the proxy,
enabling OpenCode to use llm-proxy as its sole LLM backend.
- Add 9 tool-calling types (Tool, FunctionDef, ToolChoice, ToolCall, etc.)
- Update ChatCompletionRequest/ChatMessage/ChatStreamDelta with tool fields
- Update UnifiedRequest/UnifiedMessage to carry tool data through the pipeline
- Shared helpers: messages_to_openai_json handles tool messages; build_openai_body
  includes tools/tool_choice; the parse and stream paths extract tool_calls from
  responses
- Gemini: full OpenAI<->Gemini format translation (functionDeclarations,
functionCall/functionResponse, synthetic call IDs, tool_config mapping)
- Gemini: extract duplicated message-conversion into shared convert_messages()
- Server: SSE streams include tool_calls deltas, finish_reason='tool_calls'
- AggregatingStream: accumulate tool call deltas across stream chunks (sketched
  after this list)
- OpenAI provider: add o4- prefix to supports_model()
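A sketch of the AggregatingStream accumulation, with the SSE chunk shape
reduced to its tool-call delta fields (all type names here are illustrative):

    use std::collections::BTreeMap;

    // One streamed tool-call delta: the first chunk for an index carries
    // the id and function name; later chunks append argument fragments.
    struct ToolCallDelta {
        index: usize,
        id: Option<String>,
        name: Option<String>,
        arguments: Option<String>, // JSON text, arrives piecewise
    }

    #[derive(Default)]
    struct PartialCall {
        id: String,
        name: String,
        arguments: String,
    }

    #[derive(Default)]
    struct ToolCallAccumulator {
        calls: BTreeMap<usize, PartialCall>, // keyed by delta index
    }

    impl ToolCallAccumulator {
        // Merge one chunk's delta into the partial call for its index.
        fn push(&mut self, delta: ToolCallDelta) {
            let call = self.calls.entry(delta.index).or_default();
            if let Some(id) = delta.id {
                call.id = id;
            }
            if let Some(name) = delta.name {
                call.name = name;
            }
            if let Some(fragment) = delta.arguments {
                call.arguments.push_str(&fragment);
            }
        }
    }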