Implement full OpenAI-compatible tool-calling support across the proxy,
enabling OpenCode to use llm-proxy as its sole LLM backend.
- Add 9 tool-calling types (Tool, FunctionDef, ToolChoice, ToolCall, etc.; see
  the type sketch after this list)
- Update ChatCompletionRequest/ChatMessage/ChatStreamDelta with tool fields
- Update UnifiedRequest/UnifiedMessage to carry tool data through the pipeline
- Shared helpers: messages_to_openai_json handles tool messages, build_openai_body
  includes tools/tool_choice, and the parse and stream paths extract tool_calls
  from responses
- Gemini: full OpenAI<->Gemini format translation (functionDeclarations,
  functionCall/functionResponse, synthetic call IDs, tool_config mapping); see
  the translation sketch below
- Gemini: extract duplicated message-conversion into shared convert_messages()
- Server: SSE streams include tool_calls deltas, finish_reason='tool_calls'
- AggregatingStream: accumulate tool-call deltas across stream chunks (see the
  accumulator sketch below)
- OpenAI provider: add o4- prefix to supports_model()
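
A minimal sketch of how the tool-calling types might be shaped as serde structs. The type names come from the list above and the field layout follows the OpenAI chat-completions wire format; the Rust-level details (field renames, Option usage) are illustrative assumptions, not the proxy's actual definitions.

```rust
use serde::{Deserialize, Serialize};
use serde_json::Value;

/// A tool the model may call; OpenAI currently defines only the "function" kind.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Tool {
    #[serde(rename = "type")]
    pub kind: String, // "function"
    pub function: FunctionDef,
}

/// JSON-Schema description of a callable function.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct FunctionDef {
    pub name: String,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub description: Option<String>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub parameters: Option<Value>, // a JSON Schema object
}

/// Either "none" | "auto" | "required", or a forced
/// {"type":"function","function":{"name":...}} object.
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum ToolChoice {
    Mode(String),
    Forced {
        #[serde(rename = "type")]
        kind: String,
        function: ForcedFunction,
    },
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ForcedFunction {
    pub name: String,
}

/// A concrete call emitted by the model inside an assistant message.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ToolCall {
    pub id: String,
    #[serde(rename = "type")]
    pub kind: String, // "function"
    pub function: FunctionCall,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct FunctionCall {
    pub name: String,
    /// OpenAI carries arguments as a JSON-encoded string, not a JSON object.
    pub arguments: String,
}
```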
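The Gemini translation bridges two asymmetries: Gemini expects tools as functionDeclarations and tool results as functionResponse parts, and its responses carry no call IDs, so the proxy must mint synthetic ones. A sketch of that mapping, reusing the Tool/ToolCall shapes from the sketch above; the helper names and the synthetic-ID scheme are hypothetical.

```rust
use serde_json::{json, Value};

/// OpenAI-style tools -> Gemini's tools array with functionDeclarations.
fn tools_to_gemini(tools: &[Tool]) -> Value {
    json!([{
        "functionDeclarations": tools.iter().map(|t| json!({
            "name": t.function.name,
            "description": t.function.description,
            "parameters": t.function.parameters,
        })).collect::<Vec<_>>()
    }])
}

/// OpenAI tool_choice mode -> Gemini tool_config; "required" maps to "ANY".
fn tool_choice_to_gemini(choice: &str) -> Value {
    let mode = match choice {
        "none" => "NONE",
        "required" => "ANY",
        _ => "AUTO",
    };
    json!({ "functionCallingConfig": { "mode": mode } })
}

/// Gemini returns functionCall parts without IDs; mint a synthetic one so an
/// OpenAI-style client can correlate the later "tool" message with this call.
fn gemini_call_to_openai(name: &str, args: &Value, index: usize) -> ToolCall {
    ToolCall {
        id: format!("call_{}_{}", name, index), // synthetic-ID scheme (assumption)
        kind: "function".to_string(),
        function: FunctionCall {
            name: name.to_string(),
            // OpenAI carries arguments as a JSON string, Gemini as an object.
            arguments: args.to_string(),
        },
    }
}

/// A "tool" role message going back to Gemini becomes a functionResponse part.
fn tool_result_to_gemini(function_name: &str, content: &str) -> Value {
    json!({
        "functionResponse": {
            "name": function_name,
            "response": { "content": content },
        }
    })
}
```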
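Streamed tool calls arrive as fragments: each chunk's delta carries an index, the first fragment for an index typically holds the id and name, and later fragments append pieces of the arguments string. A sketch of the accumulation AggregatingStream performs; the ToolCallDelta shape is assumed from the OpenAI streaming format, not taken from the proxy's code.

```rust
/// Fragment of a tool call as it appears in a streamed delta (assumed shape).
#[derive(Debug, Clone)]
pub struct ToolCallDelta {
    pub index: usize,
    pub id: Option<String>,
    pub name: Option<String>,
    pub arguments: Option<String>,
}

#[derive(Default)]
struct PartialToolCall {
    id: String,
    name: String,
    arguments: String,
}

/// Accumulates tool-call fragments across chunks, keyed by delta index.
#[derive(Default)]
pub struct ToolCallAccumulator {
    partial: Vec<PartialToolCall>,
}

impl ToolCallAccumulator {
    pub fn push(&mut self, delta: &ToolCallDelta) {
        // Grow the vector lazily so a new index always has a slot.
        while self.partial.len() <= delta.index {
            self.partial.push(PartialToolCall::default());
        }
        let slot = &mut self.partial[delta.index];
        if let Some(id) = &delta.id {
            slot.id = id.clone(); // the id arrives once, on the first fragment
        }
        if let Some(name) = &delta.name {
            slot.name.push_str(name);
        }
        if let Some(args) = &delta.arguments {
            slot.arguments.push_str(args); // arguments stream as string pieces
        }
    }

    /// Drain once the stream ends with finish_reason == "tool_calls".
    pub fn finish(self) -> Vec<(String, String, String)> {
        self.partial
            .into_iter()
            .map(|p| (p.id, p.name, p.arguments))
            .collect()
    }
}
```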
- Add a 'users' table to the database, storing bcrypt-hashed passwords (see the
  hashing sketch below)
- Refactor login to verify credentials against the database
- Implement a 'Security' section in settings for changing the admin password
- Seed the system with a default user 'admin' and password 'admin'
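
The password flow reduces to two calls. A minimal sketch assuming the bcrypt crate; the seeded 'admin'/'admin' row would be hashed the same way at startup.

```rust
use bcrypt::{hash, verify, DEFAULT_COST};

/// Hash a new password before inserting it into the users table.
fn hash_password(plain: &str) -> Result<String, bcrypt::BcryptError> {
    hash(plain, DEFAULT_COST)
}

/// Compare a login attempt against the stored bcrypt hash.
fn check_password(plain: &str, stored_hash: &str) -> bool {
    verify(plain, stored_hash).unwrap_or(false)
}
```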
- Add 'credit_balance' and 'low_credit_threshold' columns to the
  'provider_configs' table
- Update the dashboard backend to support reading and updating provider credits
- Implement real-time credit deduction from provider balances on successful
  requests (see the deduction sketch below)
- Add visual balance indicators and a configuration modal to the 'Providers'
  dashboard tab
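
The deduction itself can be a single atomic UPDATE driven by the request's computed cost, so concurrent requests cannot lose writes. A sketch assuming sqlx over SQLite; the table and column names come from the list above, while the function and query shape are illustrative.

```rust
use sqlx::SqlitePool;

/// Deduct the computed cost of a successful request from the provider's balance.
/// A single UPDATE keeps the read-modify-write atomic under concurrency.
async fn deduct_credits(
    pool: &SqlitePool,
    provider: &str,
    cost: f64,
) -> Result<(), sqlx::Error> {
    sqlx::query(
        "UPDATE provider_configs
         SET credit_balance = credit_balance - ?
         WHERE provider = ?",
    )
    .bind(cost)
    .bind(provider)
    .execute(pool)
    .await?;
    Ok(())
}
```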
- Add 'provider_configs' and 'model_configs' tables to the database
- Refactor ProviderManager to support thread-safe dynamic updates and database
  overrides (see the override sketch below)
- Implement a 'Models' tab in the dashboard to manage model visibility, mapping,
  and pricing
- Add a provider configuration modal to the 'Providers' tab
- Integrate database overrides into the chat completion logic (enabled state,
  mapping, and cost)
- Enable the Grok provider by default (enabled: true)
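
Thread-safe dynamic updates typically mean keeping the database overrides behind a shared lock that the dashboard writes and the request path reads. A minimal sketch under that assumption; the override fields mirror the list above, but the types and merge rules are illustrative.

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

/// Per-provider override persisted in provider_configs.
#[derive(Clone, Default)]
struct ProviderOverride {
    enabled: Option<bool>,
    model_mapping: Option<HashMap<String, String>>, // requested -> upstream model
    cost_per_1k_tokens: Option<f64>,
}

/// Shared by ProviderManager and the dashboard's update handlers.
#[derive(Clone, Default)]
struct Overrides {
    inner: Arc<RwLock<HashMap<String, ProviderOverride>>>,
}

impl Overrides {
    /// Dashboard writes replace the whole entry under a short write lock.
    fn update(&self, provider: &str, ov: ProviderOverride) {
        self.inner.write().unwrap().insert(provider.to_string(), ov);
    }

    /// Chat-completion path: is this provider enabled, after overrides?
    fn is_enabled(&self, provider: &str, default: bool) -> bool {
        self.inner
            .read()
            .unwrap()
            .get(provider)
            .and_then(|ov| ov.enabled)
            .unwrap_or(default)
    }
}
```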
- Update AppState to include the raw AppConfig
- Refactor the dashboard to show all supported providers, including their
  configuration and initialization status: online, disabled, or error (see the
  status sketch below)
- Switch from mock data to real backend APIs
- Implement a unified ApiClient for consistent frontend data fetching
- Refactor the dashboard structure and styles for a modern SaaS aesthetic
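
The three initialization states map naturally onto a small enum the dashboard API can serialize. A sketch under that assumption; the names are hypothetical.

```rust
use serde::Serialize;

/// Initialization status reported per provider on the dashboard.
#[derive(Debug, Clone, Serialize)]
#[serde(tag = "status", rename_all = "lowercase")]
pub enum ProviderStatus {
    /// Configured, initialized, and responding to health checks.
    Online,
    /// Present in config but turned off (enabled: false or a database override).
    Disabled,
    /// Initialization or a health check failed; carry the message for the UI.
    Error { message: String },
}
```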
- Fix Axum 0.8+ routing issues: path parameters now use the '/{id}' syntax
  rather than the old '/:id' (see the middleware sketch below)
- Implement real client creation/deletion and provider health monitoring
- Synchronize WebSocket event structures between the backend and frontend
- Add strict token validation against LLM_PROXY__SERVER__AUTH_TOKENS (also
  covered in the sketch below)
- Integrate 'reasoning_content' support into providers and server responses
- Update AppState to carry valid auth tokens for request-time validation
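
Two items above are worth pinning down: Axum 0.8 renamed path captures from '/:id' to '/{id}', and the token check is a set-membership test on the Bearer token. A sketch combining both, assuming the tokens are parsed from LLM_PROXY__SERVER__AUTH_TOKENS into AppState at startup; the routes and state shape are illustrative.

```rust
use std::collections::HashSet;
use std::sync::Arc;

use axum::{
    extract::{Path, Request, State},
    http::{header::AUTHORIZATION, StatusCode},
    middleware::{self, Next},
    response::Response,
    routing::{delete, post},
    Router,
};

#[derive(Clone)]
struct AppState {
    /// Parsed from LLM_PROXY__SERVER__AUTH_TOKENS at startup.
    auth_tokens: Arc<HashSet<String>>,
}

/// Reject any request whose Bearer token is not in the configured set.
async fn require_token(
    State(state): State<AppState>,
    req: Request,
    next: Next,
) -> Result<Response, StatusCode> {
    let token = req
        .headers()
        .get(AUTHORIZATION)
        .and_then(|v| v.to_str().ok())
        .and_then(|v| v.strip_prefix("Bearer "));
    match token {
        Some(t) if state.auth_tokens.contains(t) => Ok(next.run(req).await),
        _ => Err(StatusCode::UNAUTHORIZED),
    }
}

async fn delete_client(Path(id): Path<String>) -> StatusCode {
    // ... remove the client with this id ...
    let _ = id;
    StatusCode::NO_CONTENT
}

fn router(state: AppState) -> Router {
    Router::new()
        // Axum 0.8+: path captures use {id}, not the old :id syntax.
        .route("/v1/clients/{id}", delete(delete_client))
        .route("/v1/chat/completions", post(|| async { "..." }))
        .route_layer(middleware::from_fn_with_state(state.clone(), require_token))
        .with_state(state)
}
```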