Compare commits

...

46 Commits

Author SHA1 Message Date
9375448087 fix(moonshot): resolve 401 Unauthorized errors by trimming API keys and improving request compatibility
Some checks failed
CI / Lint (push) Has been cancelled
CI / Test (push) Has been cancelled
CI / Build (push) Has been cancelled
2026-03-26 17:09:27 +00:00
5be2f6f7aa fix: use Moonshot test model
2026-03-26 10:12:44 -04:00
eebcadcba1 fix: surface moonshot on providers page
2026-03-25 09:35:41 -04:00
6b2bd13903 chore: remove tracked binary gophergate
2026-03-25 13:32:51 +00:00
5dfda0a10c merge: resolve conflicts in server.go and integrate moonshot support 2026-03-25 13:32:40 +00:00
a8a02d9e1c feat: add moonshot kimi k2.5 support
2026-03-25 09:28:52 -04:00
bd1d17cc4d feat: add moonshot kimi k2.5 support
2026-03-25 09:27:46 -04:00
9207a7231c chore: update all grok-2 references to grok-4
2026-03-25 13:17:06 +00:00
c6efff9034 fix: update grok test model to grok-4-1-fast-non-reasoning
2026-03-25 13:14:31 +00:00
27fbd8ed15 chore: cleanup repository and update gitignore
- Removed binary 'gophergate'
- Removed 'data/llm_proxy.db' from source control (kept locally)
- Removed old database backups in 'data/backups/'
- Updated .gitignore to exclude data directory and gophergate binary
2026-03-25 13:08:33 +00:00
348341f304 fix: prioritize database provider configs and implement API key encryption
- Added AES-GCM encryption/decryption for provider API keys in the database.
- Implemented RefreshProviders to load provider configs from the database with precedence over environment variables.
- Updated dashboard handlers to encrypt keys on save and trigger in-memory provider refresh.
- Updated Grok test model to grok-3-mini for better compatibility.
2026-03-25 13:04:26 +00:00
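The AES-GCM encryption described in this commit can be sketched as follows. This is a minimal illustration of the technique, not the project's actual code: the function names and the key source are assumptions, and a real deployment would load the 32-byte key from `LLM_PROXY__ENCRYPTION_KEY`.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"encoding/base64"
	"fmt"
	"io"
)

// encryptKey seals a provider API key with AES-256-GCM. The random nonce is
// prepended to the ciphertext so decryptKey can recover it later.
func encryptKey(plaintext string, key []byte) (string, error) {
	block, err := aes.NewCipher(key) // key must be 16, 24, or 32 bytes
	if err != nil {
		return "", err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return "", err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return "", err
	}
	sealed := gcm.Seal(nonce, nonce, []byte(plaintext), nil)
	return base64.StdEncoding.EncodeToString(sealed), nil
}

// decryptKey reverses encryptKey: split off the nonce, then open the box.
func decryptKey(encoded string, key []byte) (string, error) {
	raw, err := base64.StdEncoding.DecodeString(encoded)
	if err != nil {
		return "", err
	}
	block, err := aes.NewCipher(key)
	if err != nil {
		return "", err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return "", err
	}
	if len(raw) < gcm.NonceSize() {
		return "", fmt.Errorf("ciphertext too short")
	}
	nonce, ciphertext := raw[:gcm.NonceSize()], raw[gcm.NonceSize():]
	plain, err := gcm.Open(nil, nonce, ciphertext, nil)
	return string(plain), err
}

func main() {
	key := make([]byte, 32) // placeholder; the real key comes from config
	enc, _ := encryptKey("sk-example", key)
	dec, _ := decryptKey(enc, key)
	fmt.Println(dec == "sk-example")
}
```

Because GCM authenticates the ciphertext, a tampered database row fails decryption outright rather than yielding a garbage key.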
9380580504 fix: resolve dashboard websocket 'disconnected' status
- Fixed status indicator UI mapping in websocket.js and index.html.
- Added missing CSS for connection status indicator and pulse animation.
- Made initial model registry fetch asynchronous to prevent blocking server startup.
- Improved configuration loading to correctly handle LLM_PROXY__SERVER__PORT from environment.
2026-03-19 14:32:34 -04:00
08cf5cc1d9 fix: improve cost tracking accuracy for modern models
- Added support for reasoning tokens in cost calculations.
- Fixed DeepSeek cache-write token mapping (PromptCacheMissTokens).
- Improved CalculateCost debug logging to trace all pricing variables.
2026-03-19 14:14:54 -04:00
0f0486d8d4 fix: resolve user dashboard field mapping and session consistency
Added JSON tags to the User struct to match frontend expectations and excluded sensitive fields.
Updated session management to include and persist DisplayName.
Unified user field names (using display_name) across backend, sessions, and frontend UI.
2026-03-19 14:01:59 -04:00
0ea2a3a985 fix: improve provider and model data accuracy in dashboard
Updated handleGetProviders to include available models and last-used timestamps. Refined Model Pricing table to strictly filter by core providers and actual usage.
2026-03-19 13:51:46 -04:00
21e5908c35 fix: resolve sidebar overlap and top-bar layout
Added padding-left to main-content and implemented missing top-bar and content-body styles to ensure correct layout with fixed sidebar.
2026-03-19 13:48:24 -04:00
6f0a159245 fix: resolve login visibility issues and improve sidebar layout
Corrected element ID mismatches between index.html and auth.js. Improved sidebar CSS to handle collapsed state and logo visibility correctly.
2026-03-19 13:45:55 -04:00
4120a83b67 fix: correct login button selector in auth.js
Changed querySelector('.login-btn') to getElementById('login-btn') to match index.html.
2026-03-19 13:43:02 -04:00
742cd9e921 fix: resolve login button TypeError and add favicon
Added id='login-btn' to index.html and created a placeholder favicon.ico.
2026-03-19 13:41:58 -04:00
593971ecb5 fix: resolve TypeError in login error display
Added missing span element to login-error div to ensure compatibility with auth.js.
2026-03-19 13:39:51 -04:00
03dca998df chore: rebrand project to GopherGate
Updated all naming from LLM Proxy to GopherGate. Implemented new CSS-based branding and updated Go module/binary naming.
2026-03-19 13:37:05 -04:00
0ce5f4f490 docs: finalize documentation for Go migration
Updated README, architecture, and TODO to reflect full feature parity, system metrics, and registry integration.
2026-03-19 13:26:31 -04:00
dec4b927dc feat: implement system metrics and fix monitoring charts
Added /api/system/metrics with CPU/Mem/Disk/Load data using gopsutil. Updated Hub to track active WebSocket listeners. Verified log format for monitoring charts.
2026-03-19 13:15:48 -04:00
3f1e6d3407 fix: restrict Model Pricing table to core providers and actual usage
Filtered registry iteration to only include openai, gemini, deepseek, and grok. Improved used_only logic to match specific (model, provider) pairs from logs.
2026-03-19 13:10:50 -04:00
f02fd6c249 fix: normalize provider names in model pricing table
Mapped registry provider IDs (google, xai) to proxy-internal names (gemini, grok) for better dashboard consistency.
2026-03-19 13:06:52 -04:00
f23796f0cc fix: restrict Model Pricing table to used models and fix cost stats
Implemented used_only filter for /api/models. Added missing cache token and cost fields to usage summary and provider usage endpoints.
2026-03-19 13:02:45 -04:00
3f76a544e0 fix: improve analytics accuracy and cost calculation
Refined CalculateCost to correctly handle cached token discounts. Added fuzzy matching to model lookup. Robustified SQL date extraction using SUBSTR and LIKE for better SQLite compatibility.
2026-03-19 12:58:08 -04:00
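Fuzzy model lookup of the kind this commit adds can be sketched as below: exact match first, then with any trailing `-YYYY-MM-DD` version suffix stripped, then the longest registry key the ID starts with. The function name and the registry contents are illustrative, not the project's actual code.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

var dateSuffix = regexp.MustCompile(`-\d{4}-\d{2}-\d{2}$`)

// lookupPricing resolves a possibly versioned model ID (e.g.
// "gpt-4o-2024-08-06") against a registry keyed by base model IDs.
func lookupPricing(model string, registry map[string]float64) (float64, bool) {
	if p, ok := registry[model]; ok {
		return p, true // exact match
	}
	if p, ok := registry[dateSuffix.ReplaceAllString(model, "")]; ok {
		return p, true // match after stripping a trailing date
	}
	// Fall back to the longest registry key that prefixes the model ID.
	bestKey := ""
	for k := range registry {
		if strings.HasPrefix(model, k) && len(k) > len(bestKey) {
			bestKey = k
		}
	}
	if bestKey != "" {
		return registry[bestKey], true
	}
	return 0, false
}

func main() {
	reg := map[string]float64{"gpt-4o": 2.5, "gpt-4o-mini": 0.15}
	p, ok := lookupPricing("gpt-4o-2024-08-06", reg)
	fmt.Println(p, ok) // 2.5 true
}
```

Taking the longest prefix matters: without it, `gpt-4o-mini-…` would wrongly resolve to `gpt-4o`.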
e474549940 fix: resolve zero-time dashboard display and improve SQL robustness
Fixed '2025 years ago' issue by correctly handling zero-value timestamps. Improved SQL scanning logic to handle NULL values more safely across all analytics handlers.
2026-03-19 12:42:41 -04:00
b7e37b0399 fix: resolve dashboard SQL scan errors and 401 noise
Robustified all analytics queries to handle empty datasets and NULL values. Restricted AuthMiddleware to /v1 group only.
2026-03-19 12:39:48 -04:00
263c0f0dc9 fix: resolve dashboard 401 and 500 errors
Restricted AuthMiddleware to /v1 group to prevent dashboard session interference. Robustified analytics SQL queries with COALESCE to handle empty datasets.
2026-03-19 12:35:14 -04:00
26d8431998 feat: implement /api/usage/clients endpoint
Added client-specific usage aggregation for the analytics dashboard.
2026-03-19 12:31:11 -04:00
1f3adceda4 fix: robustify analytics handlers and fix auth middleware scope
Moved AuthMiddleware to /v1 group only. Added COALESCE and empty result handling to analytics SQL queries to prevent 500 errors on empty databases.
2026-03-19 12:28:56 -04:00
9c64a8fe42 fix: restore analytics page functionality
Implemented missing /api/usage/detailed endpoint and ensured analytics breakdown and time-series return data in the expected format.
2026-03-19 12:24:58 -04:00
b04b794705 fix: restore clients page functionality
Updated handleGetClients to return UI-compatible data format and implemented handleGetClient/handleUpdateClient endpoints.
2026-03-19 12:06:52 -04:00
0f3c5b6eb4 feat: enhance usage and cost tracking accuracy
Improved extraction of reasoning and cached tokens from OpenAI and DeepSeek responses (including streams). Ensured accurate cost calculation using registry metadata.
2026-03-19 11:56:26 -04:00
66a1643bca chore: filter /v1/models to allowed providers
Restricted model listing to OpenAI, Google (Gemini), DeepSeek, and xAI (Grok) to match available access.
2026-03-19 11:33:47 -04:00
edc6445d70 feat: implement /v1/models endpoint
Added OpenAI-compatible model listing endpoint using the registry data.
2026-03-19 11:31:26 -04:00
2d8f1a1fd0 chore: use newest cheap models for provider tests
Updated OpenAI test model to gpt-4o-mini and verified Gemini is using gemini-2.0-flash.
2026-03-19 11:27:12 -04:00
cd1a1b45aa fix: restore models page functionality
Updated handleGetModels to merge registry data with DB overrides and implemented handleUpdateModel. Verified API response format matches frontend requirements.
2026-03-19 11:26:13 -04:00
246a6d88f0 fix: update grok default model to grok-2
Changed grok-beta to grok-2 across backend config, dashboard tests, and frontend monitoring.
2026-03-19 11:23:56 -04:00
7d43b2c31b fix: restore default admin password and add reset flag
Restored 'admin123' as the default password in db init and added a -reset-admin flag to main.go.
2026-03-19 11:22:11 -04:00
45c2d5e643 fix: implement provider test endpoint and fix static asset routing
Added handleTestProvider to dashboard and verified static file mapping for /css, /js, and /img.
2026-03-19 11:19:20 -04:00
1d032c6732 feat: complete dashboard API migration
Implemented missing system, analytics, and auth endpoints. Verified parity with frontend expectations.
2026-03-19 11:14:28 -04:00
2245cca67a fix: correct static file routing for dashboard assets
Mapped /css, /js, and /img to their respective subdirectories in ./static to resolve 404 errors.
2026-03-19 11:07:29 -04:00
c7c244992a fix: ensure LLM_PROXY__ENCRYPTION_KEY is correctly loaded from env
Explicitly bound the encryption_key to handle the double underscore convention in Viper.
2026-03-19 11:04:57 -04:00
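Viper maps nested config keys to environment variables via a key replacer (the real fix presumably uses `viper.SetEnvKeyReplacer` and an explicit `viper.BindEnv`); the double-underscore convention itself can be illustrated with the standard library alone — `envKey` here is a hypothetical helper, not project code:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// envKey converts a nested config path like "providers.moonshot.enabled" into
// the LLM_PROXY__PROVIDERS__MOONSHOT__ENABLED form this project uses:
// add the prefix, upper-case, and replace "." with a double underscore.
func envKey(path string) string {
	replacer := strings.NewReplacer(".", "__")
	return "LLM_PROXY__" + strings.ToUpper(replacer.Replace(path))
}

func main() {
	os.Setenv(envKey("encryption_key"), "example-32-byte-key")
	fmt.Println(envKey("server.port")) // LLM_PROXY__SERVER__PORT
	fmt.Println(os.Getenv("LLM_PROXY__ENCRYPTION_KEY"))
}
```

Top-level keys such as `encryption_key` contain no dot, which is exactly why they can need an explicit binding: the automatic replacer never fires for them.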
4f5b55d40f chore: remove obsolete files and update CI to Go
Removed old Rust-era documentation, scripts, and migrations. Updated GitHub Actions workflow to use Go 1.22.
2026-03-19 10:46:23 -04:00
49 changed files with 1866 additions and 2296 deletions

View File

@@ -1,4 +1,4 @@
-# LLM Proxy Gateway Configuration Example
+# GopherGate Configuration Example
 # Copy this file to .env and fill in your values
 # ==============================================================================
@@ -15,6 +15,7 @@ LLM_PROXY__ENCRYPTION_KEY=your_secure_32_byte_key_here
 OPENAI_API_KEY=sk-...
 GEMINI_API_KEY=AIza...
 DEEPSEEK_API_KEY=sk-...
+MOONSHOT_API_KEY=sk-...
 GROK_API_KEY=xai-...
 # ==============================================================================
@@ -38,6 +39,9 @@ LLM_PROXY__DATABASE__MAX_CONNECTIONS=10
 # ==============================================================================
 # LLM_PROXY__PROVIDERS__OPENAI__BASE_URL=https://api.openai.com/v1
 # LLM_PROXY__PROVIDERS__GEMINI__ENABLED=true
+# LLM_PROXY__PROVIDERS__MOONSHOT__BASE_URL=https://api.moonshot.ai/v1
+# LLM_PROXY__PROVIDERS__MOONSHOT__ENABLED=true
+# LLM_PROXY__PROVIDERS__MOONSHOT__DEFAULT_MODEL=kimi-k2.5
 # LLM_PROXY__PROVIDERS__OLLAMA__BASE_URL=http://localhost:11434/v1
 # LLM_PROXY__PROVIDERS__OLLAMA__ENABLED=true
 # LLM_PROXY__PROVIDERS__OLLAMA__MODELS=llama3,mistral,llava

View File

@@ -6,56 +6,44 @@ on:
   pull_request:
     branches: [main]
-env:
-  CARGO_TERM_COLOR: always
-  RUST_BACKTRACE: 1
 jobs:
-  check:
-    name: Check
+  lint:
+    name: Lint
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v4
-      - uses: dtolnay/rust-toolchain@stable
-      - uses: Swatinem/rust-cache@v2
-      - run: cargo check --all-targets
-  clippy:
-    name: Clippy
-    runs-on: ubuntu-latest
-    steps:
-      - uses: actions/checkout@v4
-      - uses: dtolnay/rust-toolchain@stable
+      - name: Set up Go
+        uses: actions/setup-go@v5
         with:
-          components: clippy
-      - uses: Swatinem/rust-cache@v2
-      - run: cargo clippy --all-targets -- -D warnings
-  fmt:
-    name: Formatting
-    runs-on: ubuntu-latest
-    steps:
-      - uses: actions/checkout@v4
-      - uses: dtolnay/rust-toolchain@stable
+          go-version: '1.22'
+          cache: true
+      - name: golangci-lint
+        uses: golangci/golangci-lint-action@v4
         with:
-          components: rustfmt
-      - run: cargo fmt --all -- --check
+          version: latest
   test:
     name: Test
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v4
-      - uses: dtolnay/rust-toolchain@stable
-      - uses: Swatinem/rust-cache@v2
-      - run: cargo test --all-targets
+      - name: Set up Go
+        uses: actions/setup-go@v5
+        with:
+          go-version: '1.22'
+          cache: true
+      - name: Run Tests
+        run: go test -v ./...
-  build-release:
-    name: Release Build
+  build:
+    name: Build
     runs-on: ubuntu-latest
-    needs: [check, clippy, test]
     steps:
       - uses: actions/checkout@v4
-      - uses: dtolnay/rust-toolchain@stable
-      - uses: Swatinem/rust-cache@v2
-      - run: cargo build --release
+      - name: Set up Go
+        uses: actions/setup-go@v5
+        with:
+          go-version: '1.22'
+          cache: true
+      - name: Build
+        run: go build -v -o gophergate ./cmd/gophergate

.gitignore
View File

@@ -4,6 +4,8 @@
 /target
 /llm-proxy
 /llm-proxy-go
+/gophergate
+/data/
 *.db
 *.db-shm
 *.db-wal

View File

@@ -1,6 +1,6 @@
 # Backend Architecture (Go)
-The LLM Proxy backend is implemented in Go, focusing on high performance, clear concurrency patterns, and maintainability.
+The GopherGate backend is implemented in Go, focusing on high performance, clear concurrency patterns, and maintainability.
 ## Core Technologies
@@ -9,12 +9,13 @@
 - **Database:** [sqlx](https://github.com/jmoiron/sqlx) - Lightweight wrapper for standard `database/sql`.
 - **SQLite Driver:** [modernc.org/sqlite](https://modernc.org/sqlite) - CGO-free SQLite implementation for ease of cross-compilation.
 - **Config:** [Viper](https://github.com/spf13/viper) - Robust configuration management supporting environment variables and files.
+- **Metrics:** [gopsutil](https://github.com/shirou/gopsutil) - System-level resource monitoring.
 ## Project Structure
 ```text
 ├── cmd/
-│   └── llm-proxy/   # Entry point (main.go)
+│   └── gophergate/  # Entry point (main.go)
 ├── internal/
 │   ├── config/      # Configuration loading and validation
 │   ├── db/          # Database schema, migrations, and models
@@ -22,40 +23,40 @@
 │   ├── models/      # Unified request/response structs
 │   ├── providers/   # LLM provider implementations (OpenAI, Gemini, etc.)
 │   ├── server/      # HTTP server, dashboard handlers, and WebSocket hub
-│   └── utils/       # Common utilities (multimodal, etc.)
+│   └── utils/       # Common utilities (registry, pricing, etc.)
 └── static/          # Frontend assets (served by the backend)
 ```
 ## Key Components
 ### 1. Provider Interface (`internal/providers/provider.go`)
-Standardized interface for all LLM backends:
-```go
-type Provider interface {
-    Name() string
-    ChatCompletion(ctx context.Context, req *models.UnifiedRequest) (*models.ChatCompletionResponse, error)
-    ChatCompletionStream(ctx context.Context, req *models.UnifiedRequest) (<-chan *models.ChatCompletionStreamResponse, error)
-}
-```
+Standardized interface for all LLM backends. Implementations handle mapping between the unified format and provider-specific APIs (OpenAI, Gemini, DeepSeek, Grok).
-### 2. Asynchronous Logging (`internal/server/logging.go`)
+### 2. Model Registry & Pricing (`internal/utils/registry.go`)
+Integrates with `models.dev/api.json` to provide real-time model metadata and pricing.
+- **Fuzzy Matching:** Supports matching versioned model IDs (e.g., `gpt-4o-2024-08-06`) to base registry entries.
+- **Automatic Refreshes:** The registry is fetched at startup and refreshed every 24 hours via a background goroutine.
+### 3. Asynchronous Logging (`internal/server/logging.go`)
 Uses a buffered channel and background worker to log every request to SQLite without blocking the client response. It also broadcasts logs to the WebSocket hub for real-time dashboard updates.
-### 3. Session Management (`internal/server/sessions.go`)
-Implements HMAC-SHA256 signed tokens for dashboard authentication. Sessions are stored in-memory with configurable TTL.
+### 4. Session Management (`internal/server/sessions.go`)
+Implements HMAC-SHA256 signed tokens for dashboard authentication. Tokens secure the management interface while standard Bearer tokens are used for LLM API access.
-### 4. WebSocket Hub (`internal/server/websocket.go`)
-A centralized hub for managing WebSocket connections, allowing real-time broadcast of system events and request logs to the dashboard.
+### 5. WebSocket Hub (`internal/server/websocket.go`)
+A centralized hub for managing WebSocket connections, allowing real-time broadcast of system events, system metrics, and request logs to the dashboard.
 ## Concurrency Model
 Go's goroutines and channels are used extensively:
-- **Streaming:** Each streaming request uses a goroutine to read and parse the provider's response, feeding chunks into a channel.
-- **Logging:** A single background worker processes the `logChan` to perform database writes.
+- **Streaming:** Each streaming request uses a goroutine to read and parse the provider's response, feeding chunks into a channel for SSE delivery.
+- **Logging:** A single background worker processes the `logChan` to perform serial database writes.
 - **WebSocket:** The `Hub` runs in a dedicated goroutine, handling registration and broadcasting.
+- **Maintenance:** Background tasks handle registry refreshes and status monitoring.
 ## Security
-- **Encryption Key:** A mandatory 32-byte key is used for both session signing and encryption of sensitive data in the database.
-- **Auth Middleware:** Verifies client API keys against the database before proxying requests to LLM providers.
+- **Encryption Key:** A mandatory 32-byte key is used for both session signing and encryption of sensitive data.
+- **Auth Middleware:** Scoped to `/v1` routes to verify client API keys against the database.
 - **Bcrypt:** Passwords for dashboard users are hashed using Bcrypt with a work factor of 12.
+- **Database Hardening:** Automatic migrations ensure the schema is always current with the code.
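The HMAC-SHA256 session tokens the architecture document mentions can be sketched as below. The token layout (`payload.signature`) and function names are assumptions; a real session token would also embed an expiry that verification checks.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"strings"
)

// signToken produces "payload.signature", where the signature is an
// HMAC-SHA256 of the payload under the server's secret key.
func signToken(payload string, key []byte) string {
	mac := hmac.New(sha256.New, key)
	mac.Write([]byte(payload))
	sig := base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
	return payload + "." + sig
}

// verifyToken recomputes the MAC and compares it in constant time via
// hmac.Equal, rejecting any tampered payload or signature.
func verifyToken(token string, key []byte) (string, bool) {
	payload, sig, ok := strings.Cut(token, ".")
	if !ok {
		return "", false
	}
	mac := hmac.New(sha256.New, key)
	mac.Write([]byte(payload))
	want := base64.RawURLEncoding.EncodeToString(mac.Sum(nil))
	if !hmac.Equal([]byte(sig), []byte(want)) {
		return "", false
	}
	return payload, true
}

func main() {
	key := []byte("a-32-byte-session-signing-key!!!") // placeholder key
	tok := signToken("user=admin", key)
	_, ok := verifyToken(tok, key)
	fmt.Println(ok) // true
	_, ok = verifyToken(tok+"x", key)
	fmt.Println(ok) // false
}
```

Because the server only stores the signing key, tokens are stateless to validate; revocation is what the in-memory session store adds on top.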

View File

@@ -1,65 +0,0 @@
# LLM Proxy Code Review Plan
## Overview
The **LLM Proxy** project is a Rust-based middleware designed to provide a unified interface for multiple Large Language Models (LLMs). Based on the repository structure, the project aims to implement a high-performance proxy server (`src/`) that handles request routing, usage tracking, and billing logic. A static dashboard (`static/`) provides a management interface for monitoring consumption and managing API keys. The architecture leverages Rust's async capabilities for efficient request handling and SQLite for persistent state management.
## Review Phases
### Phase 1: Backend Architecture & Rust Logic (@code-reviewer)
- **Focus on:**
- **Core Proxy Logic:** Efficiency of the request/response pipeline and streaming support.
- **State Management:** Thread-safety and shared state patterns using `Arc` and `Mutex`/`RwLock`.
- **Error Handling:** Use of idiomatic Rust error types and propagation.
- **Async Performance:** Proper use of `tokio` or similar runtimes to avoid blocking the executor.
- **Rust Idioms:** Adherence to Clippy suggestions and standard Rust naming conventions.
### Phase 2: Security & Authentication Audit (@security-auditor)
- **Focus on:**
- **API Key Management:** Secure storage, masking in logs, and rotation mechanisms.
- **JWT Handling:** Validation logic, signature verification, and expiration checks.
- **Input Validation:** Sanitization of prompts and configuration parameters to prevent injection.
- **Dependency Audit:** Scanning for known vulnerabilities in the `Cargo.lock` using `cargo-audit`.
### Phase 3: Database & Data Integrity Review (@database-optimizer)
- **Focus on:**
- **Schema Design:** Efficiency of the SQLite schema for usage tracking and billing.
- **Migration Strategy:** Robustness of the migration scripts to prevent data loss.
- **Usage Tracking:** Accuracy of token counting and concurrency handling during increments.
- **Query Optimization:** Identifying potential bottlenecks in reporting queries.
### Phase 4: Frontend & Dashboard Review (@frontend-developer)
- **Focus on:**
- **Vanilla JS Patterns:** Review of Web Components and modular JS in `static/js`.
- **Security:** Protection against XSS in the dashboard and secure handling of local storage.
- **UI/UX Consistency:** Ensuring the management interface is intuitive and responsive.
- **API Integration:** Robustness of the frontend's communication with the Rust backend.
### Phase 5: Infrastructure & Deployment Review (@devops-engineer)
- **Focus on:**
- **Dockerfile Optimization:** Multi-stage builds to minimize image size and attack surface.
- **Resource Limits:** Configuration of CPU/Memory limits for the proxy container.
- **Deployment Docs:** Clarity of the setup process and environment variable documentation.
## Timeline (Gantt)
```mermaid
gantt
title LLM Proxy Code Review Timeline (March 2026)
dateFormat YYYY-MM-DD
section Backend & Security
Architecture & Rust Logic (Phase 1) :active, p1, 2026-03-06, 1d
Security & Auth Audit (Phase 2) :p2, 2026-03-07, 1d
section Data & Frontend
Database & Integrity (Phase 3) :p3, 2026-03-07, 1d
Frontend & Dashboard (Phase 4) :p4, 2026-03-08, 1d
section DevOps
Infra & Deployment (Phase 5) :p5, 2026-03-08, 1d
Final Review & Sign-off :2026-03-08, 4h
```
## Success Criteria
- **Security:** Zero high-priority vulnerabilities identified; all API keys masked in logs.
- **Performance:** Proxy overhead is minimal (<10ms latency addition); queries are indexed.
- **Maintainability:** Code passes all linting (`cargo clippy`) and formatting (`cargo fmt`) checks.
- **Documentation:** README and deployment guides are up-to-date and accurate.
- **Reliability:** Usage tracking matches actual API consumption with 99.9% accuracy.

View File

@@ -1,220 +0,0 @@
# LLM Proxy Gateway - Admin Dashboard
## Overview
This is a comprehensive admin dashboard for the LLM Proxy Gateway, providing real-time monitoring, analytics, and management capabilities for the proxy service.
## Features
### 1. Dashboard Overview
- Real-time request counters and statistics
- System health indicators
- Provider status monitoring
- Recent requests stream
### 2. Usage Analytics
- Time series charts for requests, tokens, and costs
- Filter by date range, client, provider, and model
- Top clients and models analysis
- Export functionality to CSV/JSON
### 3. Cost Management
- Cost breakdown by provider, client, and model
- Budget tracking with alerts
- Cost projections
- Pricing configuration management
### 4. Client Management
- List, create, revoke, and rotate API tokens
- Client-specific rate limits
- Usage statistics per client
- Token management interface
### 5. Provider Configuration
- Enable/disable LLM providers
- Configure API keys (masked display)
- Test provider connections
- Model availability management
### 6. User Management (RBAC)
- **Admin Role:** Full access to all dashboard features, user management, system configuration
- **Viewer Role:** Read-only access to usage analytics, costs, and monitoring
- Create/manage dashboard users with role assignment
- Secure password management
### 7. Real-time Monitoring
- Live request stream via WebSocket
- System metrics dashboard
- Response time and error rate tracking
- Live system logs
### 8. System Settings
- General configuration
- Database management
- Logging settings
- Security settings
## Technology Stack
### Frontend
- **HTML5/CSS3**: Modern, responsive design with CSS Grid/Flexbox
- **JavaScript (ES6+)**: Vanilla JavaScript with modular architecture
- **Chart.js**: Interactive data visualizations
- **Luxon**: Date/time manipulation
- **WebSocket API**: Real-time updates
### Backend (Rust/Axum)
- **Axum**: Web framework with WebSocket support
- **Tokio**: Async runtime
- **Serde**: JSON serialization/deserialization
- **Broadcast channels**: Real-time event distribution
## Installation & Setup
### 1. Build and Run the Server
```bash
# Build the project
cargo build --release
# Run the server
cargo run --release
```
### 2. Access the Dashboard
Once the server is running, access the dashboard at:
```
http://localhost:8080
```
### 3. Default Login Credentials
- **Username**: `admin`
- **Password**: `admin123`
## API Endpoints
### Authentication
- `POST /api/auth/login` - Dashboard login
- `GET /api/auth/status` - Authentication status
### Analytics
- `GET /api/usage/summary` - Overall usage summary
- `GET /api/usage/time-series` - Time series data
- `GET /api/usage/clients` - Client breakdown
- `GET /api/usage/providers` - Provider breakdown
### Clients
- `GET /api/clients` - List all clients
- `POST /api/clients` - Create new client
- `PUT /api/clients/{id}` - Update client
- `DELETE /api/clients/{id}` - Revoke client
- `GET /api/clients/{id}/usage` - Client-specific usage
### Users (RBAC)
- `GET /api/users` - List all dashboard users
- `POST /api/users` - Create new user
- `PUT /api/users/{id}` - Update user (admin only)
- `DELETE /api/users/{id}` - Delete user (admin only)
### Providers
- `GET /api/providers` - List providers and status
- `PUT /api/providers/{name}` - Update provider config
- `POST /api/providers/{name}/test` - Test provider connection
### System
- `GET /api/system/health` - System health
- `GET /api/system/logs` - Recent logs
- `POST /api/system/backup` - Trigger backup
### WebSocket
- `GET /ws` - WebSocket endpoint for real-time updates
## Project Structure
```
llm-proxy/
├── src/
│ ├── dashboard/ # Dashboard backend module
│ │ └── mod.rs # Dashboard routes and handlers
│ ├── server/ # Main proxy server
│ ├── providers/ # LLM provider implementations
│ └── ... # Other modules
├── static/ # Frontend dashboard files
│ ├── index.html # Main dashboard HTML
│ ├── css/
│ │ └── dashboard.css # Dashboard styles
│ ├── js/
│ │ ├── auth.js # Authentication module
│ │ ├── dashboard.js # Main dashboard controller
│ │ ├── websocket.js # WebSocket manager
│ │ ├── charts.js # Chart.js utilities
│ │ └── pages/ # Page-specific modules
│ │ ├── overview.js
│ │ ├── analytics.js
│ │ ├── costs.js
│ │ ├── clients.js
│ │ ├── providers.js
│ │ ├── monitoring.js
│ │ ├── settings.js
│ │ └── logs.js
│ ├── img/ # Images and icons
│ └── fonts/ # Font files
└── Cargo.toml # Rust dependencies
```
## Development
### Adding New Pages
1. Create a new JavaScript module in `static/js/pages/`
2. Implement the page class with `init()` method
3. Register the page in `dashboard.js`
4. Add menu item in `index.html`
### Adding New API Endpoints
1. Add route in `src/dashboard/mod.rs`
2. Implement handler function
3. Update frontend JavaScript to call the endpoint
### Styling Guidelines
- Use CSS custom properties (variables) from `:root`
- Follow mobile-first responsive design
- Use BEM-like naming convention for CSS classes
- Maintain consistent spacing with CSS variables
## Security Considerations
1. **Authentication**: Simple password-based auth for demo; replace with proper auth in production
2. **API Keys**: Tokens are masked in the UI (only last 4 characters shown)
3. **CORS**: Configure appropriate CORS headers for production
4. **Rate Limiting**: Implement rate limiting for API endpoints
5. **HTTPS**: Always use HTTPS in production
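Point 4 (rate limiting) can be prototyped with a small in-memory token bucket kept per client. This is a minimal sketch under stated assumptions — a production limiter would also need a concurrent per-client map and eviction of idle buckets:

```rust
use std::time::Instant;

// Minimal token-bucket rate limiter (illustrative sketch, not the
// proxy's actual implementation).
struct TokenBucket {
    capacity: f64,       // maximum burst size
    tokens: f64,         // tokens currently available
    refill_per_sec: f64, // steady-state refill rate
    last: Instant,       // last refill timestamp
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { capacity, tokens: capacity, refill_per_sec, last: Instant::now() }
    }

    // Returns true if the request is admitted, false if rate-limited.
    fn try_acquire(&mut self) -> bool {
        let now = Instant::now();
        let elapsed = now.duration_since(self.last).as_secs_f64();
        self.last = now;
        // Refill proportionally to elapsed time, capped at capacity.
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}
```

With `capacity = 10.0` and `refill_per_sec = 1.0`, this approximates 60 requests per minute with a burst of 10.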
## Performance Optimizations
1. **Code Splitting**: JavaScript modules are loaded on-demand
2. **Caching**: Static assets are served with cache headers
3. **WebSocket**: Real-time updates reduce polling overhead
4. **Lazy Loading**: Charts and tables load data as needed
5. **Compression**: Enable gzip/brotli compression for static files
## Browser Support
- Chrome 60+
- Firefox 55+
- Safari 11+
- Edge 79+
## License
MIT License - See LICENSE file for details.
## Contributing
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests if applicable
5. Submit a pull request
## Support
For issues and feature requests, please use the GitHub issue tracker.


@@ -1,480 +0,0 @@
# Database Review Report for LLM-Proxy Repository
**Review Date:** 2025-03-06
**Reviewer:** Database Optimization Expert
**Repository:** llm-proxy
**Focus Areas:** Schema Design, Query Optimization, Migration Strategy, Data Integrity, Usage Tracking Accuracy
## Executive Summary
The llm-proxy database implementation demonstrates a solid foundation with appropriate table structures and clear separation of concerns. However, several areas require improvement to ensure scalability, data consistency, and performance as usage grows. Key findings include:
1. **Schema Design**: Generally normalized but missing foreign key enforcement and some critical indexes.
2. **Query Optimization**: Well-optimized for most queries but missing composite indexes for common filtering patterns.
3. **Migration Strategy**: Ad-hoc migration approach that may cause issues with schema evolution.
4. **Data Integrity**: Potential race conditions in usage tracking and missing transaction boundaries.
5. **Usage Tracking**: Generally accurate but risk of inconsistent state between related tables.
This report provides detailed analysis and actionable recommendations for each area.
## 1. Schema Design Review
### Tables Overview
The database consists of 6 main tables:
1. **clients**: Client management with usage aggregates
2. **llm_requests**: Request logging with token counts and costs
3. **provider_configs**: Provider configuration and credit balances
4. **model_configs**: Model-specific configuration and cost overrides
5. **users**: Dashboard user authentication
6. **client_tokens**: API token storage for client authentication
### Normalization Assessment
**Strengths:**
- Tables follow 3rd Normal Form (3NF) with appropriate separation
- Foreign key relationships properly defined
- No obvious data duplication across tables
**Areas for Improvement:**
- **Denormalized aggregates**: `clients.total_requests`, `total_tokens`, `total_cost` are derived from `llm_requests`. This introduces risk of inconsistency.
- **Provider credit balance**: Stored in `provider_configs` but also updated based on `llm_requests`. No audit trail for balance changes.
### Data Type Analysis
**Appropriate Choices:**
- INTEGER for token counts (cast from u32 to i64)
- REAL for monetary values
- DATETIME for timestamps using SQLite's CURRENT_TIMESTAMP
- TEXT for identifiers with appropriate length
**Potential Issues:**
- `llm_requests.request_body` and `response_body` are defined as TEXT but always set to NULL; consider removing these columns or making them genuinely optional.
- `provider_configs.billing_mode` added via migration but default value not consistently applied to existing rows.
### Constraints and Foreign Keys
**Current Constraints:**
- Primary keys defined for all tables
- UNIQUE constraints on `clients.client_id`, `users.username`, `client_tokens.token`
- Foreign key definitions present but **not enforced** (SQLite default)
**Missing Constraints:**
- NOT NULL constraints missing on several columns where nullability not intended
- CHECK constraints for positive values (`credit_balance >= 0`)
- Foreign key enforcement not enabled
## 2. Query Optimization Analysis
### Indexing Strategy
**Existing Indexes:**
- `idx_clients_client_id` - Essential for client lookups
- `idx_clients_created_at` - Useful for chronological listing
- `idx_llm_requests_timestamp` - Critical for time-based queries
- `idx_llm_requests_client_id` - Supports client-specific queries
- `idx_llm_requests_provider` - Good for provider breakdowns
- `idx_llm_requests_status` - Low cardinality but acceptable
- `idx_client_tokens_token` UNIQUE - Essential for authentication
- `idx_client_tokens_client_id` - Supports token management
**Missing Critical Indexes:**
1. `model_configs.provider_id` - Foreign key column used in JOINs
2. `llm_requests(client_id, timestamp)` - Composite index for client time-series queries
3. `llm_requests(provider, timestamp)` - For provider performance analysis
4. `llm_requests(status, timestamp)` - For error trend analysis
### N+1 Query Detection
**Well-Optimized Areas:**
- Model configuration caching prevents repeated database hits
- Provider configs loaded in batch for dashboard display
- Client listing uses single efficient query
**Potential N+1 Patterns:**
- The `list_models` function in `server/mod.rs` performs a cache lookup per model, but the cache is in-memory, so no extra database round-trips occur
- No significant database N+1 issues were identified
### Inefficient Query Patterns
**Query 1: Time-series aggregation with strftime()**
```sql
SELECT strftime('%Y-%m-%d', timestamp) as date, ...
FROM llm_requests
WHERE 1=1 {}
GROUP BY date, client_id, provider, model
ORDER BY date DESC
LIMIT 200
```
**Issue:** Function on indexed column prevents index utilization for the WHERE clause when filtering by timestamp range.
**Recommendation:** Store computed date column or use range queries on timestamp directly.
**Query 2: Today's stats using strftime()**
```sql
WHERE strftime('%Y-%m-%d', timestamp) = ?
```
**Issue:** Non-sargable query prevents index usage.
**Recommendation:** Use range query:
```sql
WHERE timestamp >= date(?) AND timestamp < date(?, '+1 day')
```
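If the bounds are computed in application code instead of with SQLite's `date()` function, the only nontrivial part is the next-day arithmetic. A std-only sketch (a hypothetical helper, not code from the repository), for building a half-open `[day, next_day)` range over the indexed timestamp column:

```rust
// Days in each month, accounting for leap years.
fn days_in_month(y: i32, m: u32) -> u32 {
    match m {
        1 | 3 | 5 | 7 | 8 | 10 | 12 => 31,
        4 | 6 | 9 | 11 => 30,
        2 => {
            let leap = (y % 4 == 0 && y % 100 != 0) || y % 400 == 0;
            if leap { 29 } else { 28 }
        }
        _ => unreachable!("invalid month"),
    }
}

// Next calendar day, used to build a half-open [day, next_day) range
// so the timestamp index remains usable.
fn next_day(y: i32, m: u32, d: u32) -> (i32, u32, u32) {
    if d < days_in_month(y, m) {
        (y, m, d + 1)
    } else if m < 12 {
        (y, m + 1, 1)
    } else {
        (y + 1, 1, 1)
    }
}
```

The resulting bounds bind directly into `WHERE timestamp >= ? AND timestamp < ?`.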
### Recommended Index Additions
```sql
-- Composite indexes for common query patterns
CREATE INDEX idx_llm_requests_client_timestamp ON llm_requests(client_id, timestamp);
CREATE INDEX idx_llm_requests_provider_timestamp ON llm_requests(provider, timestamp);
CREATE INDEX idx_llm_requests_status_timestamp ON llm_requests(status, timestamp);
-- Foreign key index
CREATE INDEX idx_model_configs_provider_id ON model_configs(provider_id);
-- Optional: Covering index for client usage queries
CREATE INDEX idx_clients_usage ON clients(client_id, total_requests, total_tokens, total_cost);
```
## 3. Migration Strategy Assessment
### Current Approach
The migration system uses a hybrid approach:
1. **Schema synchronization**: `CREATE TABLE IF NOT EXISTS` on startup
2. **Ad-hoc migrations**: `ALTER TABLE` statements with error suppression
3. **Single migration file**: `migrations/001-add-billing-mode.sql` with transaction wrapper
**Pros:**
- Simple to understand and maintain
- Automatic schema creation for new deployments
- Error suppression prevents crashes when a column already exists
**Cons:**
- No version tracking of applied migrations
- Potential for inconsistent schema across deployments
- `ALTER TABLE` error suppression hides genuine schema issues
- No rollback capability
### Risks and Limitations
1. **Schema Drift**: Different instances may have different schemas if migrations are applied out of order
2. **Data Loss Risk**: No backup/verification before schema changes
3. **Production Issues**: Error suppression could mask migration failures until runtime
### Recommendations
1. **Implement Proper Migration Tooling**: Use `sqlx migrate` or similar versioned migration system
2. **Add Migration Version Table**: Track applied migrations and checksum verification
3. **Separate Migration Scripts**: One file per migration with up/down directions
4. **Pre-deployment Validation**: Schema checks in CI/CD pipeline
5. **Backup Strategy**: Automatic backups before migration execution
## 4. Data Integrity Evaluation
### Foreign Key Enforcement
**Critical Issue:** Foreign key constraints are defined but **not enforced** in SQLite.
**Impact:** Orphaned records, inconsistent referential integrity.
**Solution:** Enable foreign key support in connection string:
```rust
let options = SqliteConnectOptions::from_str(&format!("sqlite:{}", database_path))?
    .create_if_missing(true)
    .pragma("foreign_keys", "ON");
```
### Transaction Usage
**Good Patterns:**
- Request logging uses transactions for insert + provider balance update
- Atomic UPDATE for client usage statistics
**Problematic Areas:**
1. **Split Transactions**: Client usage update and request logging are in separate transactions
- In `logging/mod.rs`: `insert_log` transaction includes provider balance update
- In `utils/streaming.rs`: Client usage updated separately after logging
- **Risk**: Partial updates if one transaction fails
2. **No Transaction for Client Creation**: Client and token creation not atomic
**Recommendations:**
- Wrap client usage update within the same transaction as request logging
- Use transaction for client + token creation
- Consider using savepoints for complex operations
### Race Conditions and Consistency
**Potential Race Conditions:**
1. **Provider credit balance**: Concurrent requests may cause lost updates
- Current: `UPDATE provider_configs SET credit_balance = credit_balance - ?`
- SQLite provides serializable isolation, but negative balances not prevented
2. **Client usage aggregates**: Concurrent updates to `total_requests`, `total_tokens`, `total_cost`
- Similar UPDATE pattern, generally safe but consider idempotency
**Recommendations:**
- Add check constraint: `CHECK (credit_balance >= 0)`
- Implement idempotent request logging with unique request IDs
- Consider optimistic concurrency control for critical balances
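The guarded-update idea can be illustrated in memory: a balance held as integer micro-dollars in an atomic, debited through a compare-and-swap loop that refuses to go negative. This is a sketch of the invariant only — in SQLite the equivalent is adding a guard such as `AND credit_balance >= ?` to the UPDATE's WHERE clause and checking the affected-row count:

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Debit `cost` micro-dollars from `balance`, never allowing it to go
// negative. Returns false (and leaves the balance untouched) if funds
// are insufficient.
fn try_debit(balance: &AtomicU64, cost: u64) -> bool {
    let mut cur = balance.load(Ordering::Acquire);
    loop {
        if cur < cost {
            return false; // would go negative: reject
        }
        match balance.compare_exchange_weak(
            cur,
            cur - cost,
            Ordering::AcqRel,
            Ordering::Acquire,
        ) {
            Ok(_) => return true,
            // Another writer got in first; retry with the fresh value.
            Err(actual) => cur = actual,
        }
    }
}
```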
## 5. Usage Tracking Accuracy
### Token Counting Methodology
**Current Approach:**
- Prompt tokens: Estimated using provider-specific estimators
- Completion tokens: Estimated or from provider real usage data
- Cache tokens: Separately tracked for cache-aware pricing
**Strengths:**
- Fallback to estimation when provider doesn't report usage
- Cache token differentiation for accurate pricing
**Weaknesses:**
- Estimation may differ from actual provider counts
- No validation of provider-reported token counts
### Cost Calculation
**Well Implemented:**
- Model-specific cost overrides via `model_configs`
- Cache-aware pricing when supported by registry
- Provider fallback calculations
**Potential Issues:**
- Floating-point precision for monetary calculations
- No rounding strategy for fractional cents
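One possible policy (an assumption, not current behavior) is to round at a fixed precision at the calculation boundary, or to store monetary values as integer micro-dollars and avoid float drift entirely:

```rust
// Hypothetical rounding policy: 6 decimal places (micro-dollar precision).
fn round_cost(cost: f64) -> f64 {
    (cost * 1_000_000.0).round() / 1_000_000.0
}

// Alternative: convert to integer micro-dollars for storage and
// aggregation, sidestepping floating-point accumulation error.
fn cost_micros(cost: f64) -> i64 {
    (cost * 1_000_000.0).round() as i64
}
```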
### Update Consistency
**Inconsistency Risk:** Client aggregates updated separately from request logging.
**Example Flow:**
1. Request log inserted and provider balance updated (transaction)
2. Client usage updated (separate operation)
3. If step 2 fails, client stats undercount usage
**Solution:** Include client update in the same transaction:
```sql
-- In the insert_log function, inside the same transaction:
UPDATE clients
SET total_requests = total_requests + 1,
    total_tokens = total_tokens + ?,
    total_cost = total_cost + ?
WHERE client_id = ?;
```
### Financial Accuracy
**Good Practices:**
- Token-level granularity for cost calculation
- Separation of prompt/completion/cache pricing
- Database persistence for audit trail
**Recommendations:**
1. **Audit Trail**: Add `balance_transactions` table for provider credit changes
2. **Rounding Policy**: Define rounding strategy (e.g., to 6 decimal places)
3. **Validation**: Periodic reconciliation of aggregates vs. detail records
## 6. Performance Recommendations
### Schema Improvements
1. **Partitioning Strategy**: For high-volume `llm_requests`, consider:
- Monthly partitioning by timestamp
- Archive old data to separate tables
2. **Data Retention Policy**: Implement automatic cleanup of old request logs
```sql
DELETE FROM llm_requests WHERE timestamp < date('now', '-90 days');
```
3. **Column Optimization**: Remove unused `request_body`, `response_body` columns or implement compression
### Query Optimizations
1. **Avoid Functions on Indexed Columns**: Rewrite date queries as range queries
2. **Batch Updates**: Consider batch updates for client usage instead of per-request
3. **Read Replicas**: For dashboard queries, consider separate read connection
### Connection Pooling
**Current:** SQLx connection pool with default settings
**Recommendations:**
- Configure pool size based on expected concurrency
- Implement connection health checks
- Monitor pool utilization metrics
### Monitoring Setup
**Essential Metrics:**
- Query execution times (slow query logging)
- Index usage statistics
- Table growth trends
- Connection pool utilization
**Implementation:**
- Add query timing metrics (e.g. instrument sqlx calls with a timing wrapper or `tracing` spans)
- Regular `ANALYZE` execution for query planner
- Dashboard for database health monitoring
## 7. Security Considerations
### Data Protection
**Sensitive Data:**
- `provider_configs.api_key` - Should be encrypted at rest
- `users.password_hash` - Already hashed with bcrypt
- `client_tokens.token` - Plain text storage
**Recommendations:**
- Encrypt API keys using libsodium or similar
- Implement token hashing (similar to password hashing)
- Regular security audits of authentication flows
### SQL Injection Prevention
**Good Practices:**
- Use sqlx query builder with parameter binding
- No raw SQL concatenation observed in code review
**Verification Needed:** Ensure all dynamic SQL uses parameterized queries
### Access Controls
**Database Level:**
- SQLite lacks built-in user management
- Consider file system permissions for database file
- Application-level authentication is primary control
## 8. Summary of Critical Issues
**Priority 1 (Critical):**
1. Foreign key constraints not enabled
2. Split transactions risking data inconsistency
3. Missing composite indexes for common queries
**Priority 2 (High):**
1. No proper migration versioning system
2. Potential race conditions in balance updates
3. Non-sargable date queries impacting performance
**Priority 3 (Medium):**
1. Denormalized aggregates without consistency guarantees
2. No data retention policy for request logs
3. Missing check constraints for data validation
## 9. Recommended Action Plan
### Phase 1: Immediate Fixes (1-2 weeks)
1. Enable foreign key constraints in database connection
2. Add composite indexes for common query patterns
3. Fix transaction boundaries for client usage updates
4. Rewrite non-sargable date queries
### Phase 2: Short-term Improvements (3-4 weeks)
1. Implement proper migration system with version tracking
2. Add check constraints for data validation
3. Implement connection pooling configuration
4. Create database monitoring dashboard
### Phase 3: Long-term Enhancements (2-3 months)
1. Implement data retention and archiving strategy
2. Add audit trail for provider balance changes
3. Consider partitioning for high-volume tables
4. Implement encryption for sensitive data
### Phase 4: Ongoing Maintenance
1. Regular index maintenance and query plan analysis
2. Periodic reconciliation of aggregate vs. detail data
3. Security audits and dependency updates
4. Performance benchmarking and optimization
---
## Appendices
### A. Sample Migration Implementation
```sql
-- Note: PRAGMA foreign_keys is per-connection in SQLite and cannot be
-- applied as a one-time migration; enable it in the connection options
-- instead (see Section 4).
-- migrations/002-add-composite-indexes.sql
CREATE INDEX idx_llm_requests_client_timestamp ON llm_requests(client_id, timestamp);
CREATE INDEX idx_llm_requests_provider_timestamp ON llm_requests(provider, timestamp);
CREATE INDEX idx_model_configs_provider_id ON model_configs(provider_id);
```
### B. Transaction Fix Example
```rust
async fn insert_log(pool: &SqlitePool, log: RequestLog) -> Result<(), sqlx::Error> {
    let mut tx = pool.begin().await?;

    // Insert or ignore client
    sqlx::query("INSERT OR IGNORE INTO clients (client_id, name, description) VALUES (?, ?, 'Auto-created from request')")
        .bind(&log.client_id)
        .bind(&log.client_id)
        .execute(&mut *tx)
        .await?;

    // Insert request log
    sqlx::query("INSERT INTO llm_requests ...")
        .execute(&mut *tx)
        .await?;

    // Update provider balance
    if log.cost > 0.0 {
        sqlx::query("UPDATE provider_configs SET credit_balance = credit_balance - ? WHERE id = ? AND (billing_mode IS NULL OR billing_mode != 'postpaid')")
            .bind(log.cost)
            .bind(&log.provider)
            .execute(&mut *tx)
            .await?;
    }

    // Update client aggregates within same transaction
    sqlx::query("UPDATE clients SET total_requests = total_requests + 1, total_tokens = total_tokens + ?, total_cost = total_cost + ? WHERE client_id = ?")
        .bind(log.total_tokens as i64)
        .bind(log.cost)
        .bind(&log.client_id)
        .execute(&mut *tx)
        .await?;

    tx.commit().await?;
    Ok(())
}
```
### C. Monitoring Query Examples
```sql
-- Indexes with no recorded statistics (run ANALYZE first)
SELECT name
FROM sqlite_master
WHERE type = 'index'
  AND name NOT IN (SELECT DISTINCT idx FROM sqlite_stat1);
-- Table size analysis (requires SQLITE_ENABLE_DBSTAT_VTAB)
SELECT name, SUM(pgsize) / 1024.0 / 1024.0 AS size_mb
FROM dbstat
WHERE name = 'llm_requests'
GROUP BY name;
-- Query performance analysis (requires EXPLAIN QUERY PLAN)
EXPLAIN QUERY PLAN
SELECT * FROM llm_requests
WHERE client_id = ? AND timestamp >= ?;
```
---
*This report provides a comprehensive analysis of the current database implementation and actionable recommendations for improvement. Regular review and iteration will ensure the database continues to meet performance, consistency, and scalability requirements as the application grows.*


@@ -11,7 +11,7 @@ RUN go mod download
COPY . .
# Build the application
RUN CGO_ENABLED=0 GOOS=linux go build -o llm-proxy ./cmd/llm-proxy
RUN CGO_ENABLED=0 GOOS=linux go build -o gophergate ./cmd/gophergate
# Final stage
FROM alpine:latest
@@ -21,7 +21,7 @@ RUN apk --no-cache add ca-certificates tzdata
WORKDIR /app
# Copy the binary from the builder stage
COPY --from=builder /app/llm-proxy .
COPY --from=builder /app/gophergate .
COPY --from=builder /app/static ./static
# Create data directory
@@ -31,4 +31,4 @@ RUN mkdir -p /app/data
EXPOSE 8080
# Run the application
CMD ["./llm-proxy"]
CMD ["./gophergate"]


@@ -1,232 +0,0 @@
# Optimization for 512MB RAM Environment
This document provides guidance for optimizing the LLM Proxy Gateway for deployment in resource-constrained environments (512MB RAM).
## Memory Optimization Strategies
### 1. Build Optimization
The project is already configured with optimized build settings in `Cargo.toml`:
```toml
[profile.release]
opt-level = 3 # Maximum optimization
lto = true # Link-time optimization
codegen-units = 1 # Single codegen unit for better optimization
strip = true # Strip debug symbols
```
**Additional optimizations you can apply:**
```bash
# Build with specific target for better optimization
cargo build --release --target x86_64-unknown-linux-musl
# Or for ARM (Raspberry Pi, etc.)
cargo build --release --target aarch64-unknown-linux-musl
```
### 2. Runtime Memory Management
#### Database Connection Pool
- Default: 10 connections
- Recommended for 512MB: 5 connections
Update `config.toml`:
```toml
[database]
max_connections = 5
```
#### Rate Limiting Memory Usage
- Client rate limit buckets: Store in memory
- Circuit breakers: Minimal memory usage
- Consider reducing burst capacity if memory is critical
#### Provider Management
- Only enable providers you actually use
- Disable unused providers in configuration
### 3. Configuration for Low Memory
Create a `config-low-memory.toml`:
```toml
[server]
port = 8080
host = "0.0.0.0"
[database]
path = "./data/llm_proxy.db"
max_connections = 3 # Reduced from default 10
[providers]
# Only enable providers you need
openai.enabled = true
gemini.enabled = false # Disable if not used
deepseek.enabled = false # Disable if not used
grok.enabled = false # Disable if not used
[rate_limiting]
# Reduce memory usage for rate limiting
client_requests_per_minute = 30 # Reduced from 60
client_burst_size = 5 # Reduced from 10
global_requests_per_minute = 300 # Reduced from 600
```
### 4. System-Level Optimizations
#### Linux Kernel Parameters
Add to `/etc/sysctl.conf`:
```bash
# Reduce TCP buffer sizes
net.ipv4.tcp_rmem = 4096 87380 174760
net.ipv4.tcp_wmem = 4096 65536 131072
# Reduce connection tracking
net.netfilter.nf_conntrack_max = 65536
net.netfilter.nf_conntrack_tcp_timeout_established = 1200
# Reduce socket buffer sizes
net.core.rmem_max = 131072
net.core.wmem_max = 131072
net.core.rmem_default = 65536
net.core.wmem_default = 65536
```
#### Systemd Service Configuration
Create `/etc/systemd/system/llm-proxy.service`:
```ini
[Unit]
Description=LLM Proxy Gateway
After=network.target
[Service]
Type=simple
User=llmproxy
Group=llmproxy
WorkingDirectory=/opt/llm-proxy
ExecStart=/opt/llm-proxy/llm-proxy
Restart=on-failure
RestartSec=5
# Memory limits
MemoryMax=400M
MemorySwapMax=100M
# CPU limits
CPUQuota=50%
# Process limits
LimitNOFILE=65536
LimitNPROC=512
Environment="RUST_LOG=info"
Environment="LLM_PROXY__DATABASE__MAX_CONNECTIONS=3"
[Install]
WantedBy=multi-user.target
```
### 5. Application-Specific Optimizations
#### Disable Unused Features
- **Multimodal support**: If not using images, disable image processing dependencies
- **Dashboard**: The dashboard uses WebSockets and additional memory. Consider disabling if not needed.
- **Detailed logging**: Reduce log verbosity in production
#### Memory Pool Sizes
The application uses several memory pools:
1. **Database connection pool**: Configured via `max_connections`
2. **HTTP client pool**: Reqwest client pool (defaults to reasonable values)
3. **Async runtime**: Tokio worker threads
Reduce Tokio worker threads for low-core systems:
```rust
// In main.rs: a single-threaded runtime minimizes memory overhead
#[tokio::main(flavor = "current_thread")]
async fn main() -> Result<()> {
    // ...
    Ok(())
}
// Or cap the worker threads on the default multi-threaded runtime:
// #[tokio::main(flavor = "multi_thread", worker_threads = 2)]
```
### 6. Monitoring and Profiling
#### Memory Usage Monitoring
```bash
# Install heaptrack via your system package manager (it is a C++ profiler, not a crate)
# e.g. Debian/Ubuntu: sudo apt install heaptrack
# Profile memory usage
heaptrack ./target/release/llm-proxy
# Monitor with ps
ps aux --sort=-%mem | head -10
# Monitor with top
top -p $(pgrep llm-proxy)
```
#### Performance Benchmarks
Test with different configurations:
```bash
# Test with 100 concurrent connections
wrk -t4 -c100 -d30s http://localhost:8080/health
# Test chat completion endpoint
ab -n 1000 -c 10 -p test_request.json -T application/json http://localhost:8080/v1/chat/completions
```
### 7. Deployment Checklist for 512MB RAM
- [ ] Build with release profile: `cargo build --release`
- [ ] Configure database with `max_connections = 3`
- [ ] Disable unused providers in configuration
- [ ] Set appropriate rate limiting limits
- [ ] Configure systemd with memory limits
- [ ] Set up log rotation to prevent disk space issues
- [ ] Monitor memory usage during initial deployment
- [ ] Consider using swap space (512MB-1GB) for safety
### 8. Troubleshooting High Memory Usage
#### Common Issues and Solutions:
1. **Database connection leaks**: Ensure connections are properly closed
2. **Memory fragmentation**: Use jemalloc or mimalloc as allocator
3. **Unbounded queues**: Check WebSocket message queues
4. **Cache growth**: Implement cache limits or TTL
#### Add to Cargo.toml for alternative allocator:
```toml
[dependencies]
mimalloc = { version = "0.1", default-features = false }
```
#### In main.rs:
```rust
#[global_allocator]
static GLOBAL: mimalloc::MiMalloc = mimalloc::MiMalloc;
```
### 9. Expected Memory Usage
| Component | Baseline | With 10 clients | With 100 clients |
|-----------|----------|-----------------|------------------|
| Base executable | 15MB | 15MB | 15MB |
| Database connections | 5MB | 8MB | 15MB |
| Rate limiting | 2MB | 5MB | 20MB |
| HTTP clients | 3MB | 5MB | 10MB |
| **Total** | **25MB** | **33MB** | **60MB** |
**Note**: These are estimates. Actual usage depends on request volume, payload sizes, and configuration.
### 10. Further Reading
- [Tokio performance guide](https://tokio.rs/tokio/topics/performance)
- [Rust performance book](https://nnethercote.github.io/perf-book/)
- [Linux memory management](https://www.kernel.org/doc/html/latest/admin-guide/mm/)
- [SQLite performance tips](https://www.sqlite.org/faq.html#q19)

PLAN.md

@@ -1,99 +0,0 @@
# Project Plan: LLM Proxy Enhancements & Security Upgrade
This document outlines the roadmap for standardizing frontend security, cleaning up the codebase, upgrading session management to HMAC-signed tokens, and extending integration testing.
## Phase 1: Frontend Security Standardization
**Primary Agent:** `frontend-developer`
- [x] Audit `static/js/pages/users.js` for manual HTML string concatenation.
- [x] Replace custom escaping or unescaped injections with `window.api.escapeHtml`.
- [x] Verify user list and user detail rendering for XSS vulnerabilities.
## Phase 2: Codebase Cleanup
**Primary Agent:** `backend-developer`
- [x] Identify and remove unused imports in `src/config/mod.rs`.
- [x] Identify and remove unused imports in `src/providers/mod.rs`.
- [x] Run `cargo clippy` and `cargo fmt` to ensure adherence to standards.
## Phase 3: HMAC Architectural Upgrade
**Primary Agents:** `fullstack-developer`, `security-auditor`, `backend-developer`
### 3.1 Design (Security Auditor)
- [x] Define Token Structure: `base64(payload).signature`.
- Payload: `{ "session_id": "...", "username": "...", "role": "...", "exp": ... }`
- [x] Select HMAC algorithm (HMAC-SHA256).
- [x] Define environment variable for secret key: `SESSION_SECRET`.
### 3.2 Implementation (Backend Developer)
- [x] Refactor `src/dashboard/sessions.rs`:
- Integrate `hmac` and `sha2` crates (or similar).
- Update `create_session` to return signed tokens.
- Update `validate_session` to verify signature before checking store.
- [x] Implement activity-based session refresh:
- If session is valid and >50% through its TTL, extend `expires_at` and issue new signed token.
### 3.3 Integration (Fullstack Developer)
- [x] Update dashboard API handlers to handle new token format.
- [x] Update frontend session storage/retrieval if necessary.
## Phase 4: Extended Integration Testing
**Primary Agent:** `qa-automation`
- [ ] Setup test environment with encrypted key storage enabled.
- [ ] Implement end-to-end flow:
1. Store encrypted provider key via API.
2. Authenticate through Proxy.
3. Make proxied LLM request (verifying decryption and usage).
- [ ] Validate HMAC token expiration and refresh logic in automated tests.
## Phase 5: Code Quality & Refactoring
**Primary Agent:** `fullstack-developer`
- [x] Refactor dashboard monolith into modular sub-modules (`auth.rs`, `usage.rs`, etc.).
- [x] Standardize error handling and remove `unwrap()` in production paths.
- [x] Implement system health metrics and backup functionality.
---
# Phase 6: Cache Cost & Provider Audit (ACTIVE)
**Primary Agents:** `frontend-developer`, `backend-developer`, `database-optimizer`, `lab-assistant`
## 6.1 Dashboard UI Updates (@frontend-developer)
- [ ] **Update Models Page Modal:** Add input fields for `Cache Read Cost` and `Cache Write Cost` in `static/js/pages/models.js`.
- [ ] **API Integration:** Ensure `window.api.put` includes these new cost fields in the request body.
- [ ] **Verify Costs Page:** Confirm `static/js/pages/costs.js` displays these rates correctly in the pricing table.
## 6.2 Provider Audit & Stream Fixes (@backend-developer)
- [ ] **Standard DeepSeek Fix:** Modify `src/providers/deepseek.rs` to stop stripping `stream_options` for `deepseek-chat`.
- [ ] **Grok Audit:** Verify if Grok correctly returns usage in streaming; it uses `build_openai_body` and doesn't seem to strip it.
- [ ] **Gemini Audit:** Confirm Gemini returns `usage_metadata` reliably in the final chunk.
- [ ] **Anthropic Audit:** Check if Anthropic streaming requires `include_usage` or similar flags.
## 6.3 Database & Migration Validation (@database-optimizer)
- [ ] **Test Migrations:** Run the server to ensure `ALTER TABLE` logic in `src/database/mod.rs` applies the new columns correctly.
- [ ] **Schema Verification:** Verify `model_configs` has `cache_read_cost_per_m` and `cache_write_cost_per_m` columns.
## 6.4 Token Estimation Refinement (@lab-assistant)
- [ ] **Analyze Heuristic:** Review `chars / 4` in `src/utils/tokens.rs`.
- [ ] **Background Precise Recount:** Propose a mechanism for a precise token count (using Tiktoken) after the response is finalized.
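The heuristic under review can be stated as a one-liner. A sketch assuming ceiling division — the actual `src/utils/tokens.rs` may round differently, which is exactly what the analysis task should confirm:

```rust
// Assumed chars / 4 heuristic with ceiling division; verify against the
// real implementation in src/utils/tokens.rs before relying on it.
fn estimate_tokens(text: &str) -> usize {
    (text.chars().count() + 3) / 4
}
```

Counting `chars()` rather than bytes matters for multi-byte UTF-8 input; that choice is also an assumption to verify.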
## Critical Path
Migration Validation → UI Fields → Provider Stream Usage Reporting.
```mermaid
gantt
title Phase 6 Timeline
dateFormat YYYY-MM-DD
section Frontend
Models Page UI :2026-03-06, 1d
Costs Table Update:after Models Page UI, 1d
section Backend
DeepSeek Fix :2026-03-06, 1d
Provider Audit (Grok/Gemini):after DeepSeek Fix, 2d
section Database
Migration Test :2026-03-06, 1d
section Optimization
Token Heuristic Review :2026-03-06, 1d
```


@@ -1,6 +1,6 @@
# LLM Proxy Gateway
# GopherGate
A unified, high-performance LLM proxy gateway built in Go. It provides a single OpenAI-compatible API to access multiple providers (OpenAI, Gemini, DeepSeek, Grok, Ollama) with built-in token tracking, real-time cost calculation, multi-user authentication, and a management dashboard.
A unified, high-performance LLM proxy gateway built in Go. It provides a single OpenAI-compatible API to access multiple providers (OpenAI, Gemini, DeepSeek, Moonshot, Grok, Ollama) with built-in token tracking, real-time cost calculation, multi-user authentication, and a management dashboard.
## Features
@@ -9,7 +9,8 @@ A unified, high-performance LLM proxy gateway built in Go. It provides a single
- **OpenAI:** GPT-4o, GPT-4o Mini, o1, o3 reasoning models.
- **Google Gemini:** Gemini 2.0 Flash, Pro, and vision models (with native CoT support).
- **DeepSeek:** DeepSeek Chat and Reasoner (R1) models.
- **xAI Grok:** Grok-beta models.
- **Moonshot:** Kimi K2.5 and other Kimi models.
- **xAI Grok:** Grok-4 models.
- **Ollama:** Local LLMs running on your network.
- **Observability & Tracking:**
- **Asynchronous Logging:** Non-blocking request logging to SQLite using background workers.
@@ -27,7 +28,7 @@ A unified, high-performance LLM proxy gateway built in Go. It provides a single
## Security
LLM Proxy is designed with security in mind:
GopherGate is designed with security in mind:
- **Signed Session Tokens:** Management dashboard sessions are secured using HMAC-SHA256 signed tokens.
- **Encrypted Storage:** Support for encrypted provider API keys in the database.
@@ -55,8 +56,8 @@ LLM Proxy is designed with security in mind:
1. Clone and build:
```bash
git clone <repository-url>
cd llm-proxy
go build -o llm-proxy ./cmd/llm-proxy
cd gophergate
go build -o gophergate ./cmd/gophergate
```
2. Configure environment:
@@ -66,11 +67,12 @@ LLM Proxy is designed with security in mind:
# LLM_PROXY__ENCRYPTION_KEY=... (32-byte hex or base64 string)
# OPENAI_API_KEY=sk-...
# GEMINI_API_KEY=AIza...
# MOONSHOT_API_KEY=...
```
3. Run the proxy:
```bash
./llm-proxy
./gophergate
```
The server starts on `http://0.0.0.0:8080` by default.
@@ -79,13 +81,13 @@ The server starts on `http://0.0.0.0:8080` by default.
```bash
# Build the container
docker build -t llm-proxy .
docker build -t gophergate .
# Run the container
docker run -p 8080:8080 \
-e LLM_PROXY__ENCRYPTION_KEY=your-secure-key \
-v ./data:/app/data \
llm-proxy
gophergate
```
## Management Dashboard
@@ -102,12 +104,22 @@ Access the dashboard at `http://localhost:8080`.
### Default Credentials
- **Username:** `admin`
- **Password:** `admin` (You will be prompted to change this or should change it manually in the dashboard)
- **Password:** `admin123` (You will be prompted to change this on first login)
**Forgot Password?**
You can reset the admin password to default by running:
```bash
./gophergate -reset-admin
```
## API Usage
The proxy is a drop-in replacement for OpenAI. Configure your client:
Moonshot models are available through the same OpenAI-compatible endpoint. For
example, use `kimi-k2.5` as the model name after setting `MOONSHOT_API_KEY` in
your environment.
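A minimal request sketch, assuming the proxy is running locally on its default port and `your-token` stands in for a configured client token:

```bash
# Chat completion routed to Moonshot through the OpenAI-compatible endpoint.
# "your-token" is a placeholder for a client token you have configured.
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-token" \
  -d '{
    "model": "kimi-k2.5",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```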
### Python
```python
from openai import OpenAI


@@ -1,58 +0,0 @@
# LLM Proxy Security Audit Report
## Executive Summary
A comprehensive security audit of the `llm-proxy` repository was conducted. The audit identified **2 critical vulnerabilities**, **3 high-risk issues**, **4 medium-risk issues**, and **3 low-risk issues**. The most severe findings are Cross-Site Scripting (XSS) in the dashboard interface and insecure storage of provider API keys in the database.
## Detailed Findings
### Critical Risk Vulnerabilities
#### **CRITICAL-01: Cross-Site Scripting (XSS) in Dashboard Interface**
- **Location**: `static/js/pages/clients.js` (multiple locations).
- **Description**: User-controlled data (e.g., `client.id`) inserted directly into HTML or `onclick` handlers without escaping.
- **Impact**: Arbitrary JavaScript execution in admin context, potentially stealing session tokens.
#### **CRITICAL-02: Insecure API Key Storage in Database**
- **Location**: `src/database/mod.rs`, `src/providers/mod.rs`, `src/dashboard/providers.rs`.
- **Description**: Provider API keys are stored in **plaintext** in the SQLite database.
- **Impact**: Compromised database file exposes all provider API keys.
### High Risk Vulnerabilities
#### **HIGH-01: Missing Input Validation and Size Limits**
- **Location**: `src/server/mod.rs`, `src/models/mod.rs`.
- **Impact**: Denial of Service via large payloads.
#### **HIGH-02: Sensitive Data Logging Without Encryption**
- **Location**: `src/database/mod.rs`, `src/logging/mod.rs`.
- **Description**: Full request and response bodies stored in `llm_requests` table without encryption or redaction.
#### **HIGH-03: Weak Default Credentials and Password Policy**
- **Description**: The default admin password is 'admin', and the minimum password length is only 4 characters.
### Medium Risk Vulnerabilities
#### **MEDIUM-01: Missing CSRF Protection**
- No CSRF tokens or SameSite cookie attributes for state-changing dashboard endpoints.
#### **MEDIUM-02: Insecure Session Management**
- Session tokens stored in localStorage without HttpOnly flag.
- Tokens use simple `session-{uuid}` format.
#### **MEDIUM-03: Error Information Leakage**
- Internal error details exposed to clients in some cases.
#### **MEDIUM-04: Outdated Dependencies**
- Outdated versions of `chrono`, `tokio`, and `reqwest`.
### Low Risk Vulnerabilities
- Missing security headers (CSP, HSTS, X-Frame-Options).
- Insufficient rate limiting on dashboard authentication.
- No database encryption at rest.
## Recommendations
### Immediate Actions
1. **Fix XSS Vulnerabilities:** Implement proper HTML escaping for all user-controlled data.
2. **Secure API Key Storage:** Encrypt API keys in database using a library like `ring`.
3. **Implement Input Validation:** Add maximum payload size limits (e.g., 10MB).
4. **Improve Data Protection:** Add option to disable request/response body logging.
---
*Report generated by Security Auditor Agent on March 6, 2026*

TODO.md

@@ -2,9 +2,9 @@
## Completed Tasks
- [x] Initial Go project setup
-- [x] Database schema & migrations
+- [x] Database schema & migrations (hardcoded in `db.go`)
- [x] Configuration loader (Viper)
-- [x] Auth Middleware
+- [x] Auth Middleware (scoped to `/v1`)
- [x] Basic Provider implementations (OpenAI, Gemini, DeepSeek, Grok)
- [x] Streaming Support (SSE & Gemini custom streaming)
- [x] Archive Rust files to `rust` branch
@@ -12,16 +12,21 @@
- [x] Enhanced `helpers.go` for Multimodal & Tool Calling (OpenAI compatible)
- [x] Enhanced `server.go` for robust request conversion
- [x] Dashboard Management APIs (Clients, Tokens, Users, Providers)
-- [x] Dashboard Analytics & Usage Summary
-- [x] WebSocket for real-time dashboard updates
+- [x] Dashboard Analytics & Usage Summary (Fixed SQL robustness)
+- [x] WebSocket for real-time dashboard updates (Hub with client counting)
- [x] Asynchronous Request Logging to SQLite
- [x] Update documentation (README, deployment, architecture)
- [x] Cost Tracking accuracy (Registry integration with `models.dev`)
- [x] Model Listing endpoint (`/v1/models`) with provider filtering
- [x] System Metrics endpoint (`/api/system/metrics` using `gopsutil`)
- [x] Fixed dashboard 404s and 500s
## Feature Parity Checklist (High Priority)
### OpenAI Provider
- [x] Tool Calling
- [x] Multimodal (Images) support
- [x] Accurate usage parsing (cached & reasoning tokens)
- [ ] Reasoning Content (CoT) support for `o1`, `o3` (need to ensure it's parsed in responses)
- [ ] Support for `/v1/responses` API (required for some gpt-5/o1 models)
@@ -35,15 +40,16 @@
- [x] Reasoning Content (CoT) support
- [x] Parameter sanitization for `deepseek-reasoner`
- [x] Tool Calling support
- [x] Accurate usage parsing (cache hits & reasoning)
### Grok Provider
- [x] Tool Calling support
- [x] Multimodal support
- [x] Accurate usage parsing (via OpenAI helper)
## Infrastructure & Middleware
- [ ] Implement Rate Limiting (`golang.org/x/time/rate`)
- [ ] Implement Circuit Breaker (`github.com/sony/gobreaker`)
- [ ] Implement Model Cost Calculation logic (needs registry/pricing integration)
## Verification
- [ ] Unit tests for feature-specific mapping (CoT, Tools, Images)

cmd/gophergate/main.go Normal file

@@ -0,0 +1,55 @@
package main

import (
	"flag"
	"log"
	"os"

	"gophergate/internal/config"
	"gophergate/internal/db"
	"gophergate/internal/server"

	"github.com/joho/godotenv"
	"golang.org/x/crypto/bcrypt"
)

func main() {
	resetAdmin := flag.Bool("reset-admin", false, "Reset admin password to admin123")
	flag.Parse()

	// Load environment variables
	if err := godotenv.Load(); err != nil {
		log.Println("No .env file found")
	}

	// Load configuration
	cfg, err := config.Load()
	if err != nil {
		log.Fatalf("Failed to load configuration: %v", err)
	}

	// Initialize database
	database, err := db.Init(cfg.Database.Path)
	if err != nil {
		log.Fatalf("Failed to initialize database: %v", err)
	}

	if *resetAdmin {
		hash, _ := bcrypt.GenerateFromPassword([]byte("admin123"), 12)
		_, err = database.Exec("UPDATE users SET password_hash = ?, must_change_password = 1 WHERE username = 'admin'", string(hash))
		if err != nil {
			log.Fatalf("Failed to reset admin password: %v", err)
		}
		log.Println("Admin password has been reset to 'admin123'")
		os.Exit(0)
	}

	// Initialize server
	s := server.NewServer(cfg, database)

	// Run server
	log.Printf("Starting GopherGate on %s:%d", cfg.Server.Host, cfg.Server.Port)
	if err := s.Run(); err != nil {
		log.Fatalf("Server failed: %v", err)
	}
}


@@ -1,39 +0,0 @@
package main

import (
	"log"

	"llm-proxy/internal/config"
	"llm-proxy/internal/db"
	"llm-proxy/internal/server"

	"github.com/joho/godotenv"
)

func main() {
	// Load environment variables
	if err := godotenv.Load(); err != nil {
		log.Println("No .env file found")
	}

	// Load configuration
	cfg, err := config.Load()
	if err != nil {
		log.Fatalf("Failed to load configuration: %v", err)
	}

	// Initialize database
	database, err := db.Init(cfg.Database.Path)
	if err != nil {
		log.Fatalf("Failed to initialize database: %v", err)
	}

	// Initialize server
	s := server.NewServer(cfg, database)

	// Run server
	log.Printf("Starting LLM Proxy on %s:%d", cfg.Server.Host, cfg.Server.Port)
	if err := s.Run(); err != nil {
		log.Fatalf("Server failed: %v", err)
	}
}

Binary file not shown.

deploy.sh

@@ -1,667 +0,0 @@
#!/bin/bash
# LLM Proxy Gateway Deployment Script
# This script automates the deployment of the LLM Proxy Gateway on a Linux server
set -e # Exit on error
set -u # Exit on undefined variable
# Configuration
APP_NAME="llm-proxy"
APP_USER="llmproxy"
APP_GROUP="llmproxy"
GIT_REPO="ssh://git.dustin.coffee:2222/hobokenchicken/llm-proxy.git"
INSTALL_DIR="/opt/$APP_NAME"
CONFIG_DIR="/etc/$APP_NAME"
DATA_DIR="/var/lib/$APP_NAME"
LOG_DIR="/var/log/$APP_NAME"
SERVICE_FILE="/etc/systemd/system/$APP_NAME.service"
ENV_FILE="$CONFIG_DIR/.env"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Logging functions
log_info() {
echo -e "${GREEN}[INFO]${NC} $1"
}
log_warn() {
echo -e "${YELLOW}[WARN]${NC} $1"
}
log_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
# Check if running as root
check_root() {
if [[ $EUID -ne 0 ]]; then
log_error "This script must be run as root"
exit 1
fi
}
# Install system dependencies
install_dependencies() {
log_info "Installing system dependencies..."
# Detect package manager
if command -v apt-get &> /dev/null; then
# Debian/Ubuntu
apt-get update
apt-get install -y \
build-essential \
pkg-config \
libssl-dev \
sqlite3 \
curl \
git
elif command -v yum &> /dev/null; then
# RHEL/CentOS
yum groupinstall -y "Development Tools"
yum install -y \
openssl-devel \
sqlite \
curl \
git
elif command -v dnf &> /dev/null; then
# Fedora
dnf groupinstall -y "Development Tools"
dnf install -y \
openssl-devel \
sqlite \
curl \
git
elif command -v pacman &> /dev/null; then
# Arch Linux
pacman -Syu --noconfirm \
base-devel \
openssl \
sqlite \
curl \
git
else
log_warn "Could not detect package manager. Please install dependencies manually."
fi
}
# Install Rust if not present
install_rust() {
log_info "Checking for Rust installation..."
if ! command -v rustc &> /dev/null; then
log_info "Installing Rust..."
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
source "$HOME/.cargo/env"
else
log_info "Rust is already installed"
fi
# Verify installation
rustc --version
cargo --version
}
# Create system user and directories
setup_directories() {
log_info "Creating system user and directories..."
# Create user and group if they don't exist
if ! id "$APP_USER" &>/dev/null; then
# Arch uses /usr/bin/nologin, Debian/Ubuntu use /usr/sbin/nologin
NOLOGIN=$(command -v nologin 2>/dev/null || echo "/usr/bin/nologin")
useradd -r -s "$NOLOGIN" -M "$APP_USER"
fi
# Create directories
mkdir -p "$INSTALL_DIR"
mkdir -p "$CONFIG_DIR"
mkdir -p "$DATA_DIR"
mkdir -p "$LOG_DIR"
# Set permissions
chown -R "$APP_USER:$APP_GROUP" "$INSTALL_DIR"
chown -R "$APP_USER:$APP_GROUP" "$CONFIG_DIR"
chown -R "$APP_USER:$APP_GROUP" "$DATA_DIR"
chown -R "$APP_USER:$APP_GROUP" "$LOG_DIR"
chmod 750 "$INSTALL_DIR"
chmod 750 "$CONFIG_DIR"
chmod 750 "$DATA_DIR"
chmod 750 "$LOG_DIR"
}
# Build the application
build_application() {
log_info "Building the application..."
# Clone or update repository
if [[ ! -d "$INSTALL_DIR/.git" ]]; then
log_info "Cloning repository..."
git clone "$GIT_REPO" "$INSTALL_DIR"
else
log_info "Updating repository..."
cd "$INSTALL_DIR"
git pull
fi
# Build in release mode
cd "$INSTALL_DIR"
log_info "Building release binary..."
cargo build --release
# Verify build
if [[ -f "target/release/$APP_NAME" ]]; then
log_info "Build successful"
else
log_error "Build failed"
exit 1
fi
}
# Create configuration files
create_configuration() {
log_info "Creating configuration files..."
# Create .env file with API keys
cat > "$ENV_FILE" << EOF
# LLM Proxy Gateway Environment Variables
# Add your API keys here
# OpenAI API Key
# OPENAI_API_KEY=sk-your-key-here
# Google Gemini API Key
# GEMINI_API_KEY=AIza-your-key-here
# DeepSeek API Key
# DEEPSEEK_API_KEY=sk-your-key-here
# xAI Grok API Key
# GROK_API_KEY=gk-your-key-here
# Authentication tokens (comma-separated)
# LLM_PROXY__SERVER__AUTH_TOKENS=token1,token2,token3
EOF
# Create config.toml
cat > "$CONFIG_DIR/config.toml" << EOF
# LLM Proxy Gateway Configuration
[server]
port = 8080
host = "0.0.0.0"
# auth_tokens = ["token1", "token2", "token3"] # Uncomment to enable authentication
[database]
path = "$DATA_DIR/llm_proxy.db"
max_connections = 5
[providers.openai]
enabled = true
api_key_env = "OPENAI_API_KEY"
base_url = "https://api.openai.com/v1"
default_model = "gpt-4o"
[providers.gemini]
enabled = true
api_key_env = "GEMINI_API_KEY"
base_url = "https://generativelanguage.googleapis.com/v1"
default_model = "gemini-2.0-flash"
[providers.deepseek]
enabled = true
api_key_env = "DEEPSEEK_API_KEY"
base_url = "https://api.deepseek.com"
default_model = "deepseek-reasoner"
[providers.grok]
enabled = false # Disabled by default until API is researched
api_key_env = "GROK_API_KEY"
base_url = "https://api.x.ai/v1"
default_model = "grok-beta"
[model_mapping]
"gpt-*" = "openai"
"gemini-*" = "gemini"
"deepseek-*" = "deepseek"
"grok-*" = "grok"
[pricing]
openai = { input = 0.01, output = 0.03 }
gemini = { input = 0.0005, output = 0.0015 }
deepseek = { input = 0.00014, output = 0.00028 }
grok = { input = 0.001, output = 0.003 }
EOF
# Set permissions
chown "$APP_USER:$APP_GROUP" "$ENV_FILE"
chown "$APP_USER:$APP_GROUP" "$CONFIG_DIR/config.toml"
chmod 640 "$ENV_FILE"
chmod 640 "$CONFIG_DIR/config.toml"
}
# Create systemd service
create_systemd_service() {
log_info "Creating systemd service..."
cat > "$SERVICE_FILE" << EOF
[Unit]
Description=LLM Proxy Gateway
Documentation=https://git.dustin.coffee/hobokenchicken/llm-proxy
After=network.target
Wants=network.target
[Service]
Type=simple
User=$APP_USER
Group=$APP_GROUP
WorkingDirectory=$INSTALL_DIR
EnvironmentFile=$ENV_FILE
Environment="RUST_LOG=info"
Environment="LLM_PROXY__CONFIG_PATH=$CONFIG_DIR/config.toml"
Environment="LLM_PROXY__DATABASE__PATH=$DATA_DIR/llm_proxy.db"
ExecStart=$INSTALL_DIR/target/release/$APP_NAME
Restart=on-failure
RestartSec=5
# Security hardening
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=$DATA_DIR $LOG_DIR
# Resource limits (adjust based on your server)
MemoryMax=400M
MemorySwapMax=100M
CPUQuota=50%
LimitNOFILE=65536
# Logging
StandardOutput=journal
StandardError=journal
SyslogIdentifier=$APP_NAME
[Install]
WantedBy=multi-user.target
EOF
# Reload systemd
systemctl daemon-reload
}
# Setup nginx reverse proxy (optional)
setup_nginx_proxy() {
if ! command -v nginx &> /dev/null; then
log_warn "nginx not installed. Skipping reverse proxy setup."
return
fi
log_info "Setting up nginx reverse proxy..."
cat > "/etc/nginx/sites-available/$APP_NAME" << EOF
server {
listen 80;
server_name your-domain.com; # Change to your domain
# Redirect to HTTPS (recommended)
return 301 https://\$server_name\$request_uri;
}
server {
listen 443 ssl http2;
server_name your-domain.com; # Change to your domain
# SSL certificates (adjust paths)
ssl_certificate /etc/letsencrypt/live/your-domain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/your-domain.com/privkey.pem;
# SSL configuration
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers off;
# Proxy to LLM Proxy Gateway
location / {
proxy_pass http://127.0.0.1:8080;
proxy_http_version 1.1;
proxy_set_header Upgrade \$http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
# Timeouts
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
}
# Health check endpoint
location /health {
proxy_pass http://127.0.0.1:8080/health;
access_log off;
}
# Dashboard
location /dashboard {
proxy_pass http://127.0.0.1:8080/dashboard;
}
}
EOF
# Enable site
ln -sf "/etc/nginx/sites-available/$APP_NAME" "/etc/nginx/sites-enabled/"
# Test nginx configuration
nginx -t
log_info "nginx configuration created. Please update the domain and SSL certificate paths."
}
# Setup firewall
setup_firewall() {
log_info "Configuring firewall..."
# Check for ufw (Ubuntu)
if command -v ufw &> /dev/null; then
ufw allow 22/tcp # SSH
ufw allow 80/tcp # HTTP
ufw allow 443/tcp # HTTPS
ufw --force enable
log_info "UFW firewall configured"
fi
# Check for firewalld (RHEL/CentOS)
if command -v firewall-cmd &> /dev/null; then
firewall-cmd --permanent --add-service=ssh
firewall-cmd --permanent --add-service=http
firewall-cmd --permanent --add-service=https
firewall-cmd --reload
log_info "Firewalld configured"
fi
}
# Initialize database
initialize_database() {
log_info "Initializing database..."
# Run the application once to create database
sudo -u "$APP_USER" "$INSTALL_DIR/target/release/$APP_NAME" --help &> /dev/null || true
log_info "Database initialized at $DATA_DIR/llm_proxy.db"
}
# Start and enable service
start_service() {
log_info "Starting $APP_NAME service..."
systemctl enable "$APP_NAME"
systemctl start "$APP_NAME"
# Check status
sleep 2
systemctl status "$APP_NAME" --no-pager
}
# Verify installation
verify_installation() {
log_info "Verifying installation..."
# Check if service is running
if systemctl is-active --quiet "$APP_NAME"; then
log_info "Service is running"
else
log_error "Service is not running"
journalctl -u "$APP_NAME" -n 20 --no-pager
exit 1
fi
# Test health endpoint
if curl -s http://localhost:8080/health | grep -q "OK"; then
log_info "Health check passed"
else
log_error "Health check failed"
exit 1
fi
# Test dashboard
if curl -s -o /dev/null -w "%{http_code}" http://localhost:8080/dashboard | grep -q "200"; then
log_info "Dashboard is accessible"
else
log_warn "Dashboard may not be accessible (this is normal if not configured)"
fi
log_info "Installation verified successfully!"
}
# Print next steps
print_next_steps() {
cat << EOF
${GREEN}=== LLM Proxy Gateway Installation Complete ===${NC}
${YELLOW}Next steps:${NC}
1. ${GREEN}Configure API keys${NC}
Edit: $ENV_FILE
Add your API keys for the providers you want to use
2. ${GREEN}Configure authentication${NC}
Edit: $CONFIG_DIR/config.toml
Uncomment and set auth_tokens for client authentication
3. ${GREEN}Configure nginx${NC}
Edit: /etc/nginx/sites-available/$APP_NAME
Update domain name and SSL certificate paths
4. ${GREEN}Test the API${NC}
curl -X POST http://localhost:8080/v1/chat/completions \\
-H "Content-Type: application/json" \\
-H "Authorization: Bearer your-token" \\
-d '{
"model": "gpt-4o",
"messages": [{"role": "user", "content": "Hello!"}]
}'
5. ${GREEN}Access the dashboard${NC}
Open: http://your-server-ip:8080/dashboard
Or: https://your-domain.com/dashboard (if nginx configured)
${YELLOW}Useful commands:${NC}
systemctl status $APP_NAME # Check service status
journalctl -u $APP_NAME -f # View logs
systemctl restart $APP_NAME # Restart service
${YELLOW}Configuration files:${NC}
Service: $SERVICE_FILE
Config: $CONFIG_DIR/config.toml
Environment: $ENV_FILE
Database: $DATA_DIR/llm_proxy.db
Logs: $LOG_DIR/
${GREEN}For more information, see:${NC}
https://git.dustin.coffee/hobokenchicken/llm-proxy
$INSTALL_DIR/README.md
$INSTALL_DIR/deployment.md
EOF
}
# Main deployment function
deploy() {
log_info "Starting LLM Proxy Gateway deployment..."
check_root
install_dependencies
install_rust
setup_directories
build_application
create_configuration
create_systemd_service
initialize_database
start_service
verify_installation
print_next_steps
# Optional steps (uncomment if needed)
# setup_nginx_proxy
# setup_firewall
log_info "Deployment completed successfully!"
}
# Update function
update() {
log_info "Updating LLM Proxy Gateway..."
check_root
# Pull latest changes (while service keeps running)
cd "$INSTALL_DIR"
log_info "Pulling latest changes..."
git pull
# Build new binary (service stays up on the old binary)
log_info "Building release binary (service still running)..."
if ! cargo build --release; then
log_error "Build failed — service was NOT interrupted. Fix the error and try again."
exit 1
fi
# Verify binary exists
if [[ ! -f "target/release/$APP_NAME" ]]; then
log_error "Binary not found after build — aborting."
exit 1
fi
# Restart service to pick up new binary
log_info "Build succeeded. Restarting service..."
systemctl restart "$APP_NAME"
sleep 2
if systemctl is-active --quiet "$APP_NAME"; then
log_info "Update completed successfully!"
systemctl status "$APP_NAME" --no-pager
else
log_error "Service failed to start after update. Check logs:"
journalctl -u "$APP_NAME" -n 20 --no-pager
exit 1
fi
}
# Uninstall function
uninstall() {
log_info "Uninstalling LLM Proxy Gateway..."
check_root
# Stop and disable service
systemctl stop "$APP_NAME" 2>/dev/null || true
systemctl disable "$APP_NAME" 2>/dev/null || true
rm -f "$SERVICE_FILE"
systemctl daemon-reload
# Remove application files
rm -rf "$INSTALL_DIR"
rm -rf "$CONFIG_DIR"
# Keep data and logs (comment out to remove)
log_warn "Data directory $DATA_DIR and logs $LOG_DIR have been preserved"
log_warn "Remove manually if desired:"
log_warn " rm -rf $DATA_DIR $LOG_DIR"
# Remove user (optional)
read -p "Remove user $APP_USER? [y/N]: " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
userdel "$APP_USER" 2>/dev/null || true
groupdel "$APP_GROUP" 2>/dev/null || true
fi
log_info "Uninstallation completed!"
}
# Show usage
usage() {
cat << EOF
LLM Proxy Gateway Deployment Script
Usage: $0 [command]
Commands:
deploy - Install and configure LLM Proxy Gateway
update - Pull latest changes, rebuild, and restart
status - Show service status and health check
logs - Tail the service logs (Ctrl+C to stop)
uninstall - Remove LLM Proxy Gateway
help - Show this help message
Examples:
$0 deploy # Full installation
$0 update # Update to latest version
$0 status # Check if service is healthy
$0 logs # Follow live logs
EOF
}
# Status function
status() {
echo ""
log_info "Service status:"
systemctl status "$APP_NAME" --no-pager 2>/dev/null || log_warn "Service not found"
echo ""
# Health check
if curl -sf http://localhost:8080/health &>/dev/null; then
log_info "Health check: OK"
else
log_warn "Health check: FAILED (service may not be running or port 8080 not responding)"
fi
# Show current git commit
if [[ -d "$INSTALL_DIR/.git" ]]; then
echo ""
log_info "Installed version:"
git -C "$INSTALL_DIR" log -1 --format=" %h %s (%cr)" 2>/dev/null
fi
}
# Logs function
logs() {
log_info "Tailing $APP_NAME logs (Ctrl+C to stop)..."
journalctl -u "$APP_NAME" -f
}
# Parse command line arguments
case "${1:-}" in
deploy)
deploy
;;
update)
update
;;
status)
status
;;
logs)
logs
;;
uninstall)
uninstall
;;
help|--help|-h)
usage
;;
*)
usage
exit 1
;;
esac


@@ -1,6 +1,6 @@
# Deployment Guide (Go)
-This guide covers deploying the Go-based LLM Proxy Gateway.
+This guide covers deploying the Go-based GopherGate.
## Environment Setup
@@ -18,12 +18,12 @@ This guide covers deploying the Go-based LLM Proxy Gateway.
### 1. Build
```bash
-go build -o llm-proxy ./cmd/llm-proxy
+go build -o gophergate ./cmd/gophergate
```
### 2. Run
```bash
-./llm-proxy
+./gophergate
```
## Docker Deployment
@@ -32,17 +32,17 @@ The project includes a multi-stage `Dockerfile` for minimal image size.
### 1. Build Image
```bash
-docker build -t llm-proxy .
+docker build -t gophergate .
```
### 2. Run Container
```bash
docker run -d \
-  --name llm-proxy \
+  --name gophergate \
-p 8080:8080 \
-v $(pwd)/data:/app/data \
--env-file .env \
-  llm-proxy
+  gophergate
```
## Production Considerations

go.mod

@@ -1,4 +1,4 @@
-module llm-proxy
+module gophergate
go 1.26.1
@@ -9,6 +9,7 @@ require (
github.com/gorilla/websocket v1.5.3
github.com/jmoiron/sqlx v1.4.0
github.com/joho/godotenv v1.5.1
github.com/shirou/gopsutil/v3 v3.24.5
github.com/spf13/viper v1.21.0
golang.org/x/crypto v0.48.0
modernc.org/sqlite v1.47.0
@@ -23,6 +24,7 @@ require (
github.com/fsnotify/fsnotify v1.9.0 // indirect
github.com/gabriel-vasile/mimetype v1.4.12 // indirect
github.com/gin-contrib/sse v1.1.0 // indirect
github.com/go-ole/go-ole v1.2.6 // indirect
github.com/go-playground/locales v0.14.1 // indirect
github.com/go-playground/universal-translator v0.18.1 // indirect
github.com/go-playground/validator/v10 v10.30.1 // indirect
@@ -32,22 +34,28 @@ require (
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/cpuid/v2 v2.3.0 // indirect
github.com/leodido/go-urn v1.4.0 // indirect
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/ncruces/go-strftime v1.0.0 // indirect
github.com/pelletier/go-toml/v2 v2.2.4 // indirect
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c // indirect
github.com/quic-go/qpack v0.6.0 // indirect
github.com/quic-go/quic-go v0.59.0 // indirect
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
github.com/sagikazarmark/locafero v0.11.0 // indirect
github.com/shoenig/go-m1cpu v0.1.6 // indirect
github.com/sourcegraph/conc v0.3.1-0.20240121214520-5f936abd7ae8 // indirect
github.com/spf13/afero v1.15.0 // indirect
github.com/spf13/cast v1.10.0 // indirect
github.com/spf13/pflag v1.0.10 // indirect
github.com/subosito/gotenv v1.6.0 // indirect
github.com/tklauser/go-sysconf v0.3.12 // indirect
github.com/tklauser/numcpus v0.6.1 // indirect
github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
github.com/ugorji/go/codec v1.3.1 // indirect
github.com/yusufpapurcu/wmi v1.2.4 // indirect
go.mongodb.org/mongo-driver/v2 v2.5.0 // indirect
go.yaml.in/yaml/v3 v3.0.4 // indirect
golang.org/x/arch v0.22.0 // indirect

go.sum

@@ -23,6 +23,8 @@ github.com/gin-contrib/sse v1.1.0 h1:n0w2GMuUpWDVp7qSpvze6fAu9iRxJY4Hmj6AmBOU05w
github.com/gin-contrib/sse v1.1.0/go.mod h1:hxRZ5gVpWMT7Z0B0gSNYqqsSCNIJMjzvm6fqCz9vjwM=
github.com/gin-gonic/gin v1.12.0 h1:b3YAbrZtnf8N//yjKeU2+MQsh2mY5htkZidOM7O0wG8=
github.com/gin-gonic/gin v1.12.0/go.mod h1:VxccKfsSllpKshkBWgVgRniFFAzFb9csfngsqANjnLc=
github.com/go-ole/go-ole v1.2.6 h1:/Fpf6oFPoeFik9ty7siob0G6Ke8QvQEuVcuChpwXzpY=
github.com/go-ole/go-ole v1.2.6/go.mod h1:pprOEPIfldk/42T2oK7lQ4v4JSDwmV0As9GaiUsvbm0=
github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s=
github.com/go-playground/assert/v2 v2.2.0/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4=
github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/oXslEjJA=
@@ -41,6 +43,7 @@ github.com/goccy/go-json v0.10.5 h1:Fq85nIqj+gXn/S5ahsiTlK3TmC85qgirsdTP/+DeaC4=
github.com/goccy/go-json v0.10.5/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M=
github.com/goccy/go-yaml v1.19.2 h1:PmFC1S6h8ljIz6gMRBopkjP1TVT7xuwrButHID66PoM=
github.com/goccy/go-yaml v1.19.2/go.mod h1:XBurs7gK8ATbW4ZPGKgcbrY1Br56PdM69F7LkFRi1kA=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
@@ -68,6 +71,8 @@ github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ=
github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI=
github.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw=
github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0 h1:6E+4a0GO5zZEnZ81pIr0yLvtUWk2if982qA3F3QD6H4=
github.com/lufia/plan9stats v0.0.0-20211012122336-39d0f177ccd0/go.mod h1:zJYVVT2jmtg6P3p1VtQj7WsuWi/y4VnjVBn7F8KPB3I=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-sqlite3 v1.14.22 h1:2gZY6PC6kBnID23Tichd1K+Z0oS6nE/XwU+Vz/5o4kU=
@@ -83,6 +88,8 @@ github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0
github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c h1:ncq/mPwQF4JjgDlrVEn3C11VoGHZN7m8qihwgMEtzYw=
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
github.com/quic-go/qpack v0.6.0 h1:g7W+BMYynC1LbYLSqRt8PBg5Tgwxn214ZZR34VIOjz8=
github.com/quic-go/qpack v0.6.0/go.mod h1:lUpLKChi8njB4ty2bFLX2x4gzDqXwUpaO1DP9qMDZII=
github.com/quic-go/quic-go v0.59.0 h1:OLJkp1Mlm/aS7dpKgTc6cnpynnD2Xg7C1pwL6vy/SAw=
@@ -93,6 +100,12 @@ github.com/rogpeppe/go-internal v1.10.0 h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjR
github.com/rogpeppe/go-internal v1.10.0/go.mod h1:UQnix2H7Ngw/k4C5ijL5+65zddjncjaFoBhdsK/akog=
github.com/sagikazarmark/locafero v0.11.0 h1:1iurJgmM9G3PA/I+wWYIOw/5SyBtxapeHDcg+AAIFXc=
github.com/sagikazarmark/locafero v0.11.0/go.mod h1:nVIGvgyzw595SUSUE6tvCp3YYTeHs15MvlmU87WwIik=
github.com/shirou/gopsutil/v3 v3.24.5 h1:i0t8kL+kQTvpAYToeuiVk3TgDeKOFioZO3Ztz/iZ9pI=
github.com/shirou/gopsutil/v3 v3.24.5/go.mod h1:bsoOS1aStSs9ErQ1WWfxllSeS1K5D+U30r2NfcubMVk=
github.com/shoenig/go-m1cpu v0.1.6 h1:nxdKQNcEB6vzgA2E2bvzKIYRuNj7XNJ4S/aRSwKzFtM=
github.com/shoenig/go-m1cpu v0.1.6/go.mod h1:1JJMcUBvfNwpq05QDQVAnx3gUHr9IYF7GNg9SUEw2VQ=
github.com/shoenig/test v0.6.4 h1:kVTaSd7WLz5WZ2IaoM0RSzRsUD+m8wRR+5qvntpn4LU=
github.com/shoenig/test v0.6.4/go.mod h1:byHiCGXqrVaflBLAMq/srcZIHynQPQgeyvkvXnjqq0k=
github.com/sourcegraph/conc v0.3.1-0.20240121214520-5f936abd7ae8 h1:+jumHNA0Wrelhe64i8F6HNlS8pkoyMv5sreGx2Ry5Rw=
github.com/sourcegraph/conc v0.3.1-0.20240121214520-5f936abd7ae8/go.mod h1:3n1Cwaq1E1/1lhQhtRK2ts/ZwZEhjcQeJQ1RuC6Q/8U=
github.com/spf13/afero v1.15.0 h1:b/YBCLWAJdFWJTN9cLhiXXcD7mzKn9Dm86dNnfyQw1I=
@@ -116,10 +129,16 @@ github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8=
github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU=
github.com/tklauser/go-sysconf v0.3.12 h1:0QaGUFOdQaIVdPgfITYzaTegZvdCjmYO52cSFAEVmqU=
github.com/tklauser/go-sysconf v0.3.12/go.mod h1:Ho14jnntGE1fpdOqQEEaiKRpvIavV0hSfmBq8nJbHYI=
github.com/tklauser/numcpus v0.6.1 h1:ng9scYS7az0Bk4OZLvrNXNSAO2Pxr1XXRAPyjhIx+Fk=
github.com/tklauser/numcpus v0.6.1/go.mod h1:1XfjsgE2zo8GVw7POkMbHENHzVg3GzmoZ9fESEdAacY=
github.com/twitchyliquid64/golang-asm v0.15.1 h1:SU5vSMR7hnwNxj24w34ZyCi/FmDZTkS4MhqMhdFk5YI=
github.com/twitchyliquid64/golang-asm v0.15.1/go.mod h1:a1lVb/DtPvCB8fslRZhAngC2+aY1QWCk3Cedj/Gdt08=
github.com/ugorji/go/codec v1.3.1 h1:waO7eEiFDwidsBN6agj1vJQ4AG7lh2yqXyOXqhgQuyY=
github.com/ugorji/go/codec v1.3.1/go.mod h1:pRBVtBSKl77K30Bv8R2P+cLSGaTtex6fsA2Wjqmfxj4=
github.com/yusufpapurcu/wmi v1.2.4 h1:zFUKzehAFReQwLys1b/iSMl+JQGSCSjtVqQn9bBrPo0=
github.com/yusufpapurcu/wmi v1.2.4/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQmPyzfmi0=
go.mongodb.org/mongo-driver/v2 v2.5.0 h1:yXUhImUjjAInNcpTcAlPHiT7bIXhshCTL3jVBkF3xaE=
go.mongodb.org/mongo-driver/v2 v2.5.0/go.mod h1:yOI9kBsufol30iFsl1slpdq1I0eHPzybRWdyYUs8K/0=
go.uber.org/mock v0.6.0 h1:hyF9dfmbgIX5EfOdasqLsWD6xqpNZlXblLB/Dbnwv3Y=
@@ -136,7 +155,11 @@ golang.org/x/net v0.51.0 h1:94R/GTO7mt3/4wIKpcR5gkGmRLOuE/2hNGeWq/GBIFo=
golang.org/x/net v0.51.0/go.mod h1:aamm+2QF5ogm02fjy5Bb7CQ0WMt1/WVM7FtyaTLlA9Y=
golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.42.0 h1:omrd2nAlyT5ESRdCLYdm3+fMfNFE/+Rf4bDIQImRJeo=
golang.org/x/sys v0.42.0/go.mod h1:4GL1E5IUh+htKOUEOaiffhrAeqysfVGipDYzABqnCmw=
golang.org/x/text v0.34.0 h1:oL/Qq0Kdaqxa1KbNeMKwQq0reLCCaFtqu2eNuSeNHbk=
@@ -145,6 +168,7 @@ golang.org/x/time v0.15.0 h1:bbrp8t3bGUeFOx08pvsMYRTCVSMk89u4tKbNOZbp88U=
golang.org/x/time v0.15.0/go.mod h1:Y4YMaQmXwGQZoFaVFk4YpCt4FLQMYKZe9oeV/f4MSno=
golang.org/x/tools v0.42.0 h1:uNgphsn75Tdz5Ji2q36v/nsFSfR/9BRFvqhGBaJGd5k=
golang.org/x/tools v0.42.0/go.mod h1:Ma6lCIwGZvHK6XtgbswSoWroEkhugApmsXyrUmBhfr0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/protobuf v1.36.10 h1:AYd7cD/uASjIL6Q9LiTjz8JLcrh/88q5UObnmY3aOOE=
google.golang.org/protobuf v1.36.10/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=


@@ -33,6 +33,7 @@ type ProviderConfig struct {
OpenAI OpenAIConfig `mapstructure:"openai"`
Gemini GeminiConfig `mapstructure:"gemini"`
DeepSeek DeepSeekConfig `mapstructure:"deepseek"`
Moonshot MoonshotConfig `mapstructure:"moonshot"`
Grok GrokConfig `mapstructure:"grok"`
Ollama OllamaConfig `mapstructure:"ollama"`
}
@@ -58,6 +59,13 @@ type DeepSeekConfig struct {
Enabled bool `mapstructure:"enabled"`
}
type MoonshotConfig struct {
APIKeyEnv string `mapstructure:"api_key_env"`
BaseURL string `mapstructure:"base_url"`
DefaultModel string `mapstructure:"default_model"`
Enabled bool `mapstructure:"enabled"`
}
type GrokConfig struct {
APIKeyEnv string `mapstructure:"api_key_env"`
BaseURL string `mapstructure:"base_url"`
@@ -97,9 +105,14 @@ func Load() (*Config, error) {
v.SetDefault("providers.deepseek.default_model", "deepseek-reasoner")
v.SetDefault("providers.deepseek.enabled", true)
v.SetDefault("providers.moonshot.api_key_env", "MOONSHOT_API_KEY")
v.SetDefault("providers.moonshot.base_url", "https://api.moonshot.ai/v1")
v.SetDefault("providers.moonshot.default_model", "kimi-k2.5")
v.SetDefault("providers.moonshot.enabled", true)
v.SetDefault("providers.grok.api_key_env", "GROK_API_KEY")
v.SetDefault("providers.grok.base_url", "https://api.x.ai/v1")
v.SetDefault("providers.grok.default_model", "grok-beta")
v.SetDefault("providers.grok.default_model", "grok-4-1-fast-non-reasoning")
v.SetDefault("providers.grok.enabled", true)
v.SetDefault("providers.ollama.base_url", "http://localhost:11434/v1")
@@ -111,6 +124,11 @@ func Load() (*Config, error) {
v.SetEnvKeyReplacer(strings.NewReplacer(".", "__"))
v.AutomaticEnv()
// Explicitly bind keys that might use double underscores in .env
v.BindEnv("encryption_key", "LLM_PROXY__ENCRYPTION_KEY")
v.BindEnv("server.port", "LLM_PROXY__SERVER__PORT")
v.BindEnv("server.host", "LLM_PROXY__SERVER__HOST")
// Config file
v.SetConfigName("config")
v.SetConfigType("toml")
@@ -130,6 +148,19 @@ func Load() (*Config, error) {
return nil, fmt.Errorf("failed to unmarshal config: %w", err)
}
fmt.Printf("Debug Config: port from viper=%d, host from viper=%s\n", cfg.Server.Port, cfg.Server.Host)
fmt.Printf("Debug Env: LLM_PROXY__SERVER__PORT=%s, LLM_PROXY__SERVER__HOST=%s\n", os.Getenv("LLM_PROXY__SERVER__PORT"), os.Getenv("LLM_PROXY__SERVER__HOST"))
// Manual overrides for nested keys which Viper doesn't always bind correctly with AutomaticEnv + SetEnvPrefix
if port := os.Getenv("LLM_PROXY__SERVER__PORT"); port != "" {
fmt.Sscanf(port, "%d", &cfg.Server.Port)
fmt.Printf("Overriding port to %d from env\n", cfg.Server.Port)
}
if host := os.Getenv("LLM_PROXY__SERVER__HOST"); host != "" {
cfg.Server.Host = host
fmt.Printf("Overriding host to %s from env\n", cfg.Server.Host)
}
// Validate encryption key
if cfg.EncryptionKey == "" {
return nil, fmt.Errorf("encryption key is required (LLM_PROXY__ENCRYPTION_KEY)")
@@ -160,6 +191,8 @@ func (c *Config) GetAPIKey(provider string) (string, error) {
envVar = c.Providers.Gemini.APIKeyEnv
case "deepseek":
envVar = c.Providers.DeepSeek.APIKeyEnv
case "moonshot":
envVar = c.Providers.Moonshot.APIKeyEnv
case "grok":
envVar = c.Providers.Grok.APIKeyEnv
default:
@@ -170,5 +203,5 @@ func (c *Config) GetAPIKey(provider string) (string, error) {
if val == "" {
return "", fmt.Errorf("environment variable %s not set for %s", envVar, provider)
}
return val, nil
return strings.TrimSpace(val), nil
}


@@ -159,7 +159,7 @@ func (db *DB) RunMigrations() error {
}
if count == 0 {
hash, err := bcrypt.GenerateFromPassword([]byte("admin"), 12)
hash, err := bcrypt.GenerateFromPassword([]byte("admin123"), 12)
if err != nil {
return fmt.Errorf("failed to hash default password: %w", err)
}
@@ -167,7 +167,7 @@ func (db *DB) RunMigrations() error {
if err != nil {
return fmt.Errorf("failed to insert default admin: %w", err)
}
log.Println("Created default admin user with password 'admin' (must change on first login)")
log.Println("Created default admin user with password 'admin123' (must change on first login)")
}
// Default client
@@ -244,13 +244,13 @@ type ModelConfig struct {
}
type User struct {
ID int `db:"id"`
Username string `db:"username"`
PasswordHash string `db:"password_hash"`
DisplayName *string `db:"display_name"`
Role string `db:"role"`
MustChangePassword bool `db:"must_change_password"`
CreatedAt time.Time `db:"created_at"`
ID int `db:"id" json:"id"`
Username string `db:"username" json:"username"`
PasswordHash string `db:"password_hash" json:"-"`
DisplayName *string `db:"display_name" json:"display_name"`
Role string `db:"role" json:"role"`
MustChangePassword bool `db:"must_change_password" json:"must_change_password"`
CreatedAt time.Time `db:"created_at" json:"created_at"`
}
type ClientToken struct {


@@ -4,8 +4,8 @@ import (
"log"
"strings"
"llm-proxy/internal/db"
"llm-proxy/internal/models"
"gophergate/internal/db"
"gophergate/internal/models"
"github.com/gin-gonic/gin"
)


@@ -1,5 +1,7 @@
package models
import "strings"
type ModelRegistry struct {
Providers map[string]ProviderInfo `json:"-"`
}
@@ -54,5 +56,14 @@ func (r *ModelRegistry) FindModel(modelID string) *ModelMetadata {
}
}
// Try fuzzy matching (e.g. gpt-4o-2024-05-13 matching gpt-4o)
for _, provider := range r.Providers {
for id, model := range provider.Models {
if strings.HasPrefix(modelID, id) {
return &model
}
}
}
return nil
}
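The fallback added to `FindModel` can be sketched in isolation: exact ID match first, then prefix match so dated snapshots resolve to their base model (`findModel` here is a simplified stand-in for walking `registry.Providers[...].Models`):

```go
package main

import (
	"fmt"
	"strings"
)

// findModel mirrors the two-pass lookup above: exact match, then prefix
// match so e.g. "gpt-4o-2024-05-13" resolves to "gpt-4o".
func findModel(known []string, modelID string) string {
	for _, id := range known {
		if id == modelID {
			return id
		}
	}
	for _, id := range known {
		if strings.HasPrefix(modelID, id) {
			return id
		}
	}
	return ""
}

func main() {
	known := []string{"gpt-4o", "kimi-k2.5"}
	fmt.Println(findModel(known, "gpt-4o-2024-05-13"))
}
```

One caveat of prefix matching over a map: with overlapping IDs such as `gpt-4o` and `gpt-4o-mini`, Go's random map iteration order means a `gpt-4o-mini-...` snapshot may land on either entry.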


@@ -1,12 +1,15 @@
package providers
import (
"bufio"
"context"
"encoding/json"
"fmt"
"io"
"strings"
"llm-proxy/internal/config"
"llm-proxy/internal/models"
"gophergate/internal/config"
"gophergate/internal/models"
"github.com/go-resty/resty/v2"
)
@@ -28,6 +31,35 @@ func (p *DeepSeekProvider) Name() string {
return "deepseek"
}
type deepSeekUsage struct {
PromptTokens uint32 `json:"prompt_tokens"`
CompletionTokens uint32 `json:"completion_tokens"`
TotalTokens uint32 `json:"total_tokens"`
PromptCacheHitTokens uint32 `json:"prompt_cache_hit_tokens"`
PromptCacheMissTokens uint32 `json:"prompt_cache_miss_tokens"`
CompletionTokensDetails *struct {
ReasoningTokens uint32 `json:"reasoning_tokens"`
} `json:"completion_tokens_details"`
}
func (u *deepSeekUsage) ToUnified() *models.Usage {
usage := &models.Usage{
PromptTokens: u.PromptTokens,
CompletionTokens: u.CompletionTokens,
TotalTokens: u.TotalTokens,
}
if u.PromptCacheHitTokens > 0 {
usage.CacheReadTokens = &u.PromptCacheHitTokens
}
if u.PromptCacheMissTokens > 0 {
usage.CacheWriteTokens = &u.PromptCacheMissTokens
}
if u.CompletionTokensDetails != nil && u.CompletionTokensDetails.ReasoningTokens > 0 {
usage.ReasoningTokens = &u.CompletionTokensDetails.ReasoningTokens
}
return usage
}
func (p *DeepSeekProvider) ChatCompletion(ctx context.Context, req *models.UnifiedRequest) (*models.ChatCompletionResponse, error) {
messagesJSON, err := MessagesToOpenAIJSON(req.Messages)
if err != nil {
@@ -43,7 +75,6 @@ func (p *DeepSeekProvider) ChatCompletion(ctx context.Context, req *models.Unifi
delete(body, "presence_penalty")
delete(body, "frequency_penalty")
// Ensure assistant messages have content and reasoning_content
if msgs, ok := body["messages"].([]interface{}); ok {
for _, m := range msgs {
if msg, ok := m.(map[string]interface{}); ok {
@@ -79,7 +110,21 @@ func (p *DeepSeekProvider) ChatCompletion(ctx context.Context, req *models.Unifi
return nil, fmt.Errorf("failed to parse response: %w", err)
}
return ParseOpenAIResponse(respJSON, req.Model)
result, err := ParseOpenAIResponse(respJSON, req.Model)
if err != nil {
return nil, err
}
// Re-parse usage with DeepSeek-specific fields (cache hit/miss, reasoning) that ParseOpenAIResponse does not capture
if usageData, ok := respJSON["usage"]; ok {
var dUsage deepSeekUsage
usageBytes, _ := json.Marshal(usageData)
if err := json.Unmarshal(usageBytes, &dUsage); err == nil {
result.Usage = dUsage.ToUnified()
}
}
return result, nil
}
func (p *DeepSeekProvider) ChatCompletionStream(ctx context.Context, req *models.UnifiedRequest) (<-chan *models.ChatCompletionStreamResponse, error) {
@@ -97,7 +142,6 @@ func (p *DeepSeekProvider) ChatCompletionStream(ctx context.Context, req *models
delete(body, "presence_penalty")
delete(body, "frequency_penalty")
// Ensure assistant messages have content and reasoning_content
if msgs, ok := body["messages"].([]interface{}); ok {
for _, m := range msgs {
if msg, ok := m.(map[string]interface{}); ok {
@@ -133,7 +177,8 @@ func (p *DeepSeekProvider) ChatCompletionStream(ctx context.Context, req *models
go func() {
defer close(ch)
err := StreamOpenAI(resp.RawBody(), ch)
// Custom scanner loop to handle DeepSeek-specific usage fields in stream chunks
err := StreamDeepSeek(resp.RawBody(), ch)
if err != nil {
fmt.Printf("DeepSeek Stream error: %v\n", err)
}
@@ -141,3 +186,35 @@ func (p *DeepSeekProvider) ChatCompletionStream(ctx context.Context, req *models
return ch, nil
}
func StreamDeepSeek(body io.ReadCloser, ch chan<- *models.ChatCompletionStreamResponse) error {
defer body.Close()
scanner := bufio.NewScanner(body)
for scanner.Scan() {
line := scanner.Text()
if line == "" || !strings.HasPrefix(line, "data: ") {
continue
}
data := strings.TrimPrefix(line, "data: ")
if data == "[DONE]" {
break
}
var chunk models.ChatCompletionStreamResponse
if err := json.Unmarshal([]byte(data), &chunk); err != nil {
continue
}
// Fix DeepSeek specific usage in stream
var rawChunk struct {
Usage *deepSeekUsage `json:"usage"`
}
if err := json.Unmarshal([]byte(data), &rawChunk); err == nil && rawChunk.Usage != nil {
chunk.Usage = rawChunk.Usage.ToUnified()
}
ch <- &chunk
}
return scanner.Err()
}


@@ -5,8 +5,8 @@ import (
"encoding/json"
"fmt"
"llm-proxy/internal/config"
"llm-proxy/internal/models"
"gophergate/internal/config"
"gophergate/internal/models"
"github.com/go-resty/resty/v2"
)


@@ -5,8 +5,8 @@ import (
"encoding/json"
"fmt"
"llm-proxy/internal/config"
"llm-proxy/internal/models"
"gophergate/internal/config"
"gophergate/internal/models"
"github.com/go-resty/resty/v2"
)


@@ -7,7 +7,7 @@ import (
"io"
"strings"
"llm-proxy/internal/models"
"gophergate/internal/models"
)
// MessagesToOpenAIJSON converts unified messages to OpenAI-compatible JSON, including tools and images.
@@ -58,9 +58,20 @@ func MessagesToOpenAIJSON(messages []models.UnifiedMessage) ([]interface{}, erro
}
}
var finalContent interface{}
if len(parts) == 1 {
if p, ok := parts[0].(map[string]interface{}); ok && p["type"] == "text" {
finalContent = p["text"]
} else {
finalContent = parts
}
} else {
finalContent = parts
}
msg := map[string]interface{}{
"role": m.Role,
"content": parts,
"content": finalContent,
}
if m.ReasoningContent != nil {
@@ -122,6 +133,33 @@ func BuildOpenAIBody(request *models.UnifiedRequest, messagesJSON []interface{},
return body
}
type openAIUsage struct {
PromptTokens uint32 `json:"prompt_tokens"`
CompletionTokens uint32 `json:"completion_tokens"`
TotalTokens uint32 `json:"total_tokens"`
PromptTokensDetails *struct {
CachedTokens uint32 `json:"cached_tokens"`
} `json:"prompt_tokens_details"`
CompletionTokensDetails *struct {
ReasoningTokens uint32 `json:"reasoning_tokens"`
} `json:"completion_tokens_details"`
}
func (u *openAIUsage) ToUnified() *models.Usage {
usage := &models.Usage{
PromptTokens: u.PromptTokens,
CompletionTokens: u.CompletionTokens,
TotalTokens: u.TotalTokens,
}
if u.PromptTokensDetails != nil && u.PromptTokensDetails.CachedTokens > 0 {
usage.CacheReadTokens = &u.PromptTokensDetails.CachedTokens
}
if u.CompletionTokensDetails != nil && u.CompletionTokensDetails.ReasoningTokens > 0 {
usage.ReasoningTokens = &u.CompletionTokensDetails.ReasoningTokens
}
return usage
}
func ParseOpenAIResponse(respJSON map[string]interface{}, model string) (*models.ChatCompletionResponse, error) {
data, err := json.Marshal(respJSON)
if err != nil {
@@ -133,6 +171,16 @@ func ParseOpenAIResponse(respJSON map[string]interface{}, model string) (*models
return nil, err
}
// Manually fix usage because ChatCompletionResponse uses the unified Usage struct
// but the provider might have returned more details.
if usageData, ok := respJSON["usage"]; ok {
var oUsage openAIUsage
usageBytes, _ := json.Marshal(usageData)
if err := json.Unmarshal(usageBytes, &oUsage); err == nil {
resp.Usage = oUsage.ToUnified()
}
}
return &resp, nil
}
@@ -156,6 +204,14 @@ func ParseOpenAIStreamChunk(line string) (*models.ChatCompletionStreamResponse,
return nil, false, fmt.Errorf("failed to unmarshal stream chunk: %w", err)
}
// Handle specialized usage in stream chunks
var rawChunk struct {
Usage *openAIUsage `json:"usage"`
}
if err := json.Unmarshal([]byte(data), &rawChunk); err == nil && rawChunk.Usage != nil {
chunk.Usage = rawChunk.Usage.ToUnified()
}
return &chunk, false, nil
}
@@ -210,9 +266,10 @@ func StreamGemini(ctx io.ReadCloser, ch chan<- *models.ChatCompletionStreamRespo
return err
}
if len(geminiChunk.Candidates) > 0 {
if len(geminiChunk.Candidates) > 0 || geminiChunk.UsageMetadata.TotalTokenCount > 0 {
content := ""
var reasoning *string
if len(geminiChunk.Candidates) > 0 {
for _, p := range geminiChunk.Candidates[0].Content.Parts {
if p.Text != "" {
content += p.Text
@@ -224,10 +281,12 @@ func StreamGemini(ctx io.ReadCloser, ch chan<- *models.ChatCompletionStreamRespo
*reasoning += p.Thought
}
}
}
finishReason := strings.ToLower(geminiChunk.Candidates[0].FinishReason)
if finishReason == "stop" {
finishReason = "stop"
var finishReason *string
if len(geminiChunk.Candidates) > 0 {
fr := strings.ToLower(geminiChunk.Candidates[0].FinishReason)
finishReason = &fr
}
ch <- &models.ChatCompletionStreamResponse{
@@ -242,7 +301,7 @@ func StreamGemini(ctx io.ReadCloser, ch chan<- *models.ChatCompletionStreamRespo
Content: &content,
ReasoningContent: reasoning,
},
FinishReason: &finishReason,
FinishReason: finishReason,
},
},
Usage: &models.Usage{


@@ -0,0 +1,114 @@
package providers
import (
"context"
"encoding/json"
"fmt"
"strings"
"gophergate/internal/config"
"gophergate/internal/models"
"github.com/go-resty/resty/v2"
)
type MoonshotProvider struct {
client *resty.Client
config config.MoonshotConfig
apiKey string
}
func NewMoonshotProvider(cfg config.MoonshotConfig, apiKey string) *MoonshotProvider {
return &MoonshotProvider{
client: resty.New(),
config: cfg,
apiKey: strings.TrimSpace(apiKey),
}
}
func (p *MoonshotProvider) Name() string {
return "moonshot"
}
func (p *MoonshotProvider) ChatCompletion(ctx context.Context, req *models.UnifiedRequest) (*models.ChatCompletionResponse, error) {
messagesJSON, err := MessagesToOpenAIJSON(req.Messages)
if err != nil {
return nil, fmt.Errorf("failed to convert messages: %w", err)
}
body := BuildOpenAIBody(req, messagesJSON, false)
if strings.Contains(strings.ToLower(req.Model), "kimi-k2.5") {
if maxTokens, ok := body["max_tokens"]; ok {
delete(body, "max_tokens")
body["max_completion_tokens"] = maxTokens
}
}
baseURL := strings.TrimRight(p.config.BaseURL, "/")
resp, err := p.client.R().
SetContext(ctx).
SetHeader("Authorization", "Bearer "+p.apiKey).
SetHeader("Content-Type", "application/json").
SetHeader("Accept", "application/json").
SetBody(body).
Post(fmt.Sprintf("%s/chat/completions", baseURL))
if err != nil {
return nil, fmt.Errorf("request failed: %w", err)
}
if !resp.IsSuccess() {
return nil, fmt.Errorf("Moonshot API error (%d): %s", resp.StatusCode(), resp.String())
}
var respJSON map[string]interface{}
if err := json.Unmarshal(resp.Body(), &respJSON); err != nil {
return nil, fmt.Errorf("failed to parse response: %w", err)
}
return ParseOpenAIResponse(respJSON, req.Model)
}
func (p *MoonshotProvider) ChatCompletionStream(ctx context.Context, req *models.UnifiedRequest) (<-chan *models.ChatCompletionStreamResponse, error) {
messagesJSON, err := MessagesToOpenAIJSON(req.Messages)
if err != nil {
return nil, fmt.Errorf("failed to convert messages: %w", err)
}
body := BuildOpenAIBody(req, messagesJSON, true)
if strings.Contains(strings.ToLower(req.Model), "kimi-k2.5") {
if maxTokens, ok := body["max_tokens"]; ok {
delete(body, "max_tokens")
body["max_completion_tokens"] = maxTokens
}
}
baseURL := strings.TrimRight(p.config.BaseURL, "/")
resp, err := p.client.R().
SetContext(ctx).
SetHeader("Authorization", "Bearer "+p.apiKey).
SetHeader("Content-Type", "application/json").
SetHeader("Accept", "text/event-stream").
SetBody(body).
SetDoNotParseResponse(true).
Post(fmt.Sprintf("%s/chat/completions", baseURL))
if err != nil {
return nil, fmt.Errorf("request failed: %w", err)
}
if !resp.IsSuccess() {
return nil, fmt.Errorf("Moonshot API error (%d): %s", resp.StatusCode(), resp.String())
}
ch := make(chan *models.ChatCompletionStreamResponse)
go func() {
defer close(ch)
if err := StreamOpenAI(resp.RawBody(), ch); err != nil {
fmt.Printf("Moonshot Stream error: %v\n", err)
}
}()
return ch, nil
}
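The kimi-k2.5 special case above renames `max_tokens` to `max_completion_tokens` before the body reaches Moonshot. A standalone sketch (the helper name `remapMaxTokens` is hypothetical, and whether other Moonshot models need the same remap is an assumption baked into the model-name check):

```go
package main

import (
	"fmt"
	"strings"
)

// remapMaxTokens moves "max_tokens" to "max_completion_tokens" for models
// that reject the older parameter name.
func remapMaxTokens(body map[string]interface{}, model string) {
	if !strings.Contains(strings.ToLower(model), "kimi-k2.5") {
		return
	}
	if v, ok := body["max_tokens"]; ok {
		delete(body, "max_tokens")
		body["max_completion_tokens"] = v
	}
}

func main() {
	body := map[string]interface{}{"model": "kimi-k2.5", "max_tokens": 1024}
	remapMaxTokens(body, "kimi-k2.5")
	fmt.Println(body["max_completion_tokens"], body["max_tokens"])
}
```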


@@ -6,8 +6,8 @@ import (
"fmt"
"strings"
"llm-proxy/internal/config"
"llm-proxy/internal/models"
"gophergate/internal/config"
"gophergate/internal/models"
"github.com/go-resty/resty/v2"
)


@@ -3,7 +3,7 @@ package providers
import (
"context"
"llm-proxy/internal/models"
"gophergate/internal/models"
)
type Provider interface {

File diff suppressed because it is too large


@@ -4,7 +4,7 @@ import (
"log"
"time"
"llm-proxy/internal/db"
"gophergate/internal/db"
)
type RequestLog struct {


@@ -8,12 +8,12 @@ import (
"strings"
"time"
"llm-proxy/internal/config"
"llm-proxy/internal/db"
"llm-proxy/internal/middleware"
"llm-proxy/internal/models"
"llm-proxy/internal/providers"
"llm-proxy/internal/utils"
"gophergate/internal/config"
"gophergate/internal/db"
"gophergate/internal/middleware"
"gophergate/internal/models"
"gophergate/internal/providers"
"gophergate/internal/utils"
"github.com/gin-gonic/gin"
)
@@ -33,13 +33,6 @@ func NewServer(cfg *config.Config, database *db.DB) *Server {
router := gin.Default()
hub := NewHub()
// Fetch registry (non-blocking for startup if it fails, but we'll try once)
registry, err := utils.FetchRegistry()
if err != nil {
fmt.Printf("Warning: Failed to fetch initial model registry: %v\n", err)
registry = &models.ModelRegistry{Providers: make(map[string]models.ProviderInfo)}
}
s := &Server{
router: router,
cfg: cfg,
@@ -48,45 +41,142 @@ func NewServer(cfg *config.Config, database *db.DB) *Server {
sessions: NewSessionManager(cfg.KeyBytes, 24*time.Hour),
hub: hub,
logger: NewRequestLogger(database, hub),
registry: registry,
registry: &models.ModelRegistry{Providers: make(map[string]models.ProviderInfo)},
}
// Initialize providers
if cfg.Providers.OpenAI.Enabled {
apiKey, _ := cfg.GetAPIKey("openai")
s.providers["openai"] = providers.NewOpenAIProvider(cfg.Providers.OpenAI, apiKey)
// Fetch registry in background
go func() {
registry, err := utils.FetchRegistry()
if err != nil {
fmt.Printf("Warning: Failed to fetch initial model registry: %v\n", err)
} else {
s.registry = registry
}
if cfg.Providers.Gemini.Enabled {
apiKey, _ := cfg.GetAPIKey("gemini")
s.providers["gemini"] = providers.NewGeminiProvider(cfg.Providers.Gemini, apiKey)
}
if cfg.Providers.DeepSeek.Enabled {
apiKey, _ := cfg.GetAPIKey("deepseek")
s.providers["deepseek"] = providers.NewDeepSeekProvider(cfg.Providers.DeepSeek, apiKey)
}
if cfg.Providers.Grok.Enabled {
apiKey, _ := cfg.GetAPIKey("grok")
s.providers["grok"] = providers.NewGrokProvider(cfg.Providers.Grok, apiKey)
}()
// Initialize providers from DB and Config
if err := s.RefreshProviders(); err != nil {
fmt.Printf("Warning: initial provider refresh failed: %v\n", err)
}
s.setupRoutes()
return s
}
func (s *Server) RefreshProviders() error {
var dbConfigs []db.ProviderConfig
err := s.database.Select(&dbConfigs, "SELECT * FROM provider_configs")
if err != nil {
return fmt.Errorf("failed to fetch provider configs from db: %w", err)
}
dbMap := make(map[string]db.ProviderConfig)
for _, cfg := range dbConfigs {
dbMap[cfg.ID] = cfg
}
providerIDs := []string{"openai", "gemini", "deepseek", "moonshot", "grok"}
for _, id := range providerIDs {
// Default values from config
enabled := false
baseURL := ""
apiKey := ""
switch id {
case "openai":
enabled = s.cfg.Providers.OpenAI.Enabled
baseURL = s.cfg.Providers.OpenAI.BaseURL
apiKey, _ = s.cfg.GetAPIKey("openai")
case "gemini":
enabled = s.cfg.Providers.Gemini.Enabled
baseURL = s.cfg.Providers.Gemini.BaseURL
apiKey, _ = s.cfg.GetAPIKey("gemini")
case "deepseek":
enabled = s.cfg.Providers.DeepSeek.Enabled
baseURL = s.cfg.Providers.DeepSeek.BaseURL
apiKey, _ = s.cfg.GetAPIKey("deepseek")
case "moonshot":
enabled = s.cfg.Providers.Moonshot.Enabled
baseURL = s.cfg.Providers.Moonshot.BaseURL
apiKey, _ = s.cfg.GetAPIKey("moonshot")
case "grok":
enabled = s.cfg.Providers.Grok.Enabled
baseURL = s.cfg.Providers.Grok.BaseURL
apiKey, _ = s.cfg.GetAPIKey("grok")
}
// Overrides from DB
if dbCfg, ok := dbMap[id]; ok {
enabled = dbCfg.Enabled
if dbCfg.BaseURL != nil && *dbCfg.BaseURL != "" {
baseURL = *dbCfg.BaseURL
}
if dbCfg.APIKey != nil && *dbCfg.APIKey != "" {
key := *dbCfg.APIKey
if dbCfg.APIKeyEncrypted {
decrypted, err := utils.Decrypt(key, s.cfg.KeyBytes)
if err == nil {
key = decrypted
} else {
fmt.Printf("Warning: Failed to decrypt API key for %s: %v\n", id, err)
}
}
apiKey = key
}
}
if !enabled {
delete(s.providers, id)
continue
}
// Initialize provider
switch id {
case "openai":
cfg := s.cfg.Providers.OpenAI
cfg.BaseURL = baseURL
s.providers["openai"] = providers.NewOpenAIProvider(cfg, apiKey)
case "gemini":
cfg := s.cfg.Providers.Gemini
cfg.BaseURL = baseURL
s.providers["gemini"] = providers.NewGeminiProvider(cfg, apiKey)
case "deepseek":
cfg := s.cfg.Providers.DeepSeek
cfg.BaseURL = baseURL
s.providers["deepseek"] = providers.NewDeepSeekProvider(cfg, apiKey)
case "moonshot":
cfg := s.cfg.Providers.Moonshot
cfg.BaseURL = baseURL
s.providers["moonshot"] = providers.NewMoonshotProvider(cfg, apiKey)
case "grok":
cfg := s.cfg.Providers.Grok
cfg.BaseURL = baseURL
s.providers["grok"] = providers.NewGrokProvider(cfg, apiKey)
}
}
return nil
}
func (s *Server) setupRoutes() {
s.router.Use(middleware.AuthMiddleware(s.database))
// Static files
s.router.Static("/static", "./static")
s.router.StaticFile("/", "./static/index.html")
s.router.StaticFile("/favicon.ico", "./static/favicon.ico")
s.router.Static("/css", "./static/css")
s.router.Static("/js", "./static/js")
s.router.Static("/img", "./static/img")
// WebSocket
s.router.GET("/ws", s.handleWebSocket)
// API V1 (External LLM Access) - Secured with AuthMiddleware
v1 := s.router.Group("/v1")
v1.Use(middleware.AuthMiddleware(s.database))
{
v1.POST("/chat/completions", s.handleChatCompletions)
v1.GET("/models", s.handleListModels)
}
// Dashboard API Group
@@ -95,6 +185,7 @@ func (s *Server) setupRoutes() {
api.POST("/auth/login", s.handleLogin)
api.GET("/auth/status", s.handleAuthStatus)
api.POST("/auth/logout", s.handleLogout)
api.POST("/auth/change-password", s.handleChangePassword)
// Protected dashboard routes (need admin session)
admin := api.Group("/")
@@ -102,10 +193,15 @@ func (s *Server) setupRoutes() {
{
admin.GET("/usage/summary", s.handleUsageSummary)
admin.GET("/usage/time-series", s.handleTimeSeries)
admin.GET("/usage/providers", s.handleProvidersUsage)
admin.GET("/usage/clients", s.handleClientsUsage)
admin.GET("/usage/detailed", s.handleDetailedUsage)
admin.GET("/analytics/breakdown", s.handleAnalyticsBreakdown)
admin.GET("/clients", s.handleGetClients)
admin.POST("/clients", s.handleCreateClient)
admin.GET("/clients/:id", s.handleGetClient)
admin.PUT("/clients/:id", s.handleUpdateClient)
admin.DELETE("/clients/:id", s.handleDeleteClient)
admin.GET("/clients/:id/tokens", s.handleGetClientTokens)
@@ -114,7 +210,10 @@ func (s *Server) setupRoutes() {
admin.GET("/providers", s.handleGetProviders)
admin.PUT("/providers/:name", s.handleUpdateProvider)
admin.POST("/providers/:name/test", s.handleTestProvider)
admin.GET("/models", s.handleGetModels)
admin.PUT("/models/:id", s.handleUpdateModel)
admin.GET("/users", s.handleGetUsers)
admin.POST("/users", s.handleCreateUser)
@@ -122,6 +221,10 @@ func (s *Server) setupRoutes() {
admin.DELETE("/users/:id", s.handleDeleteUser)
admin.GET("/system/health", s.handleSystemHealth)
admin.GET("/system/metrics", s.handleSystemMetrics)
admin.GET("/system/settings", s.handleGetSettings)
admin.POST("/system/backup", s.handleCreateBackup)
admin.GET("/system/logs", s.handleGetLogs)
}
}
@@ -130,6 +233,45 @@ func (s *Server) setupRoutes() {
})
}
func (s *Server) handleListModels(c *gin.Context) {
type OpenAIModel struct {
ID string `json:"id"`
Object string `json:"object"`
Created int64 `json:"created"`
OwnedBy string `json:"owned_by"`
}
var data []OpenAIModel
allowedProviders := map[string]bool{
"openai": true,
"google": true, // Models from models.dev use 'google' ID for Gemini
"deepseek": true,
"moonshot": true,
"xai": true, // Models from models.dev use 'xai' ID for Grok
}
if s.registry != nil {
for pID, pInfo := range s.registry.Providers {
if !allowedProviders[pID] {
continue
}
for mID := range pInfo.Models {
data = append(data, OpenAIModel{
ID: mID,
Object: "model",
Created: 1700000000,
OwnedBy: pID,
})
}
}
}
c.JSON(http.StatusOK, gin.H{
"object": "list",
"data": data,
})
}
func (s *Server) handleChatCompletions(c *gin.Context) {
startTime := time.Now()
var req models.ChatCompletionRequest
@@ -144,6 +286,8 @@ func (s *Server) handleChatCompletions(c *gin.Context) {
providerName = "gemini"
} else if strings.Contains(req.Model, "deepseek") {
providerName = "deepseek"
} else if strings.Contains(req.Model, "kimi") || strings.Contains(req.Model, "moonshot") {
providerName = "moonshot"
} else if strings.Contains(req.Model, "grok") {
providerName = "grok"
}
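The routing chain above picks a provider purely from substrings of the model name, so both `kimi-*` and `moonshot-*` IDs land on the new Moonshot client. A sketch of that dispatch (the function name `routeProvider` and the `"openai"` fallback are illustrative; the diff does not show the default branch):

```go
package main

import (
	"fmt"
	"strings"
)

// routeProvider maps a model ID to a provider by substring, mirroring the
// if/else chain in handleChatCompletions.
func routeProvider(model string) string {
	switch {
	case strings.Contains(model, "gemini"):
		return "gemini"
	case strings.Contains(model, "deepseek"):
		return "deepseek"
	case strings.Contains(model, "kimi"), strings.Contains(model, "moonshot"):
		return "moonshot"
	case strings.Contains(model, "grok"):
		return "grok"
	default:
		return "openai" // assumed fallback, not shown in this diff
	}
}

func main() {
	fmt.Println(routeProvider("kimi-k2.5"))
}
```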
@@ -322,7 +466,9 @@ func (s *Server) logRequest(start time.Time, clientID, provider, model string, u
}
// Calculate cost using registry
entry.Cost = utils.CalculateCost(s.registry, model, entry.PromptTokens, entry.CompletionTokens, entry.CacheReadTokens, entry.CacheWriteTokens)
entry.Cost = utils.CalculateCost(s.registry, model, entry.PromptTokens, entry.CompletionTokens, entry.ReasoningTokens, entry.CacheReadTokens, entry.CacheWriteTokens)
fmt.Printf("[DEBUG] Request logged: model=%s, prompt=%d, completion=%d, reasoning=%d, cache_read=%d, cost=%f\n",
model, entry.PromptTokens, entry.CompletionTokens, entry.ReasoningTokens, entry.CacheReadTokens, entry.Cost)
}
s.logger.LogRequest(entry)


@@ -15,6 +15,7 @@ import (
type Session struct {
Username string `json:"username"`
DisplayName string `json:"display_name"`
Role string `json:"role"`
CreatedAt time.Time `json:"created_at"`
ExpiresAt time.Time `json:"expires_at"`
@@ -31,6 +32,7 @@ type SessionManager struct {
type sessionPayload struct {
SessionID string `json:"session_id"`
Username string `json:"username"`
DisplayName string `json:"display_name"`
Role string `json:"role"`
Exp int64 `json:"exp"`
}
@@ -43,7 +45,7 @@ func NewSessionManager(secret []byte, ttl time.Duration) *SessionManager {
}
}
func (m *SessionManager) CreateSession(username, role string) (string, error) {
func (m *SessionManager) CreateSession(username, displayName, role string) (string, error) {
sessionID := uuid.New().String()
now := time.Now()
expiresAt := now.Add(m.ttl)
@@ -51,6 +53,7 @@ func (m *SessionManager) CreateSession(username, role string) (string, error) {
m.mu.Lock()
m.sessions[sessionID] = Session{
Username: username,
DisplayName: displayName,
Role: role,
CreatedAt: now,
ExpiresAt: expiresAt,
@@ -58,13 +61,14 @@ func (m *SessionManager) CreateSession(username, role string) (string, error) {
}
m.mu.Unlock()
return m.createSignedToken(sessionID, username, role, expiresAt.Unix())
return m.createSignedToken(sessionID, username, displayName, role, expiresAt.Unix())
}
func (m *SessionManager) createSignedToken(sessionID, username, role string, exp int64) (string, error) {
func (m *SessionManager) createSignedToken(sessionID, username, displayName, role string, exp int64) (string, error) {
payload := sessionPayload{
SessionID: sessionID,
Username: username,
DisplayName: displayName,
Role: role,
Exp: exp,
}


@@ -4,6 +4,7 @@ import (
"log"
"net/http"
"sync"
"sync/atomic"
"github.com/gin-gonic/gin"
"github.com/gorilla/websocket"
@@ -23,6 +24,7 @@ type Hub struct {
register chan *websocket.Conn
unregister chan *websocket.Conn
mu sync.Mutex
clientCount int32
}
func NewHub() *Hub {
@@ -40,6 +42,7 @@ func (h *Hub) Run() {
case client := <-h.register:
h.mu.Lock()
h.clients[client] = true
atomic.AddInt32(&h.clientCount, 1)
h.mu.Unlock()
log.Println("WebSocket client registered")
case client := <-h.unregister:
@@ -47,6 +50,7 @@ func (h *Hub) Run() {
if _, ok := h.clients[client]; ok {
delete(h.clients, client)
client.Close()
atomic.AddInt32(&h.clientCount, -1)
}
h.mu.Unlock()
log.Println("WebSocket client unregistered")
@@ -58,6 +62,7 @@ func (h *Hub) Run() {
log.Printf("WebSocket error: %v", err)
client.Close()
delete(h.clients, client)
atomic.AddInt32(&h.clientCount, -1)
}
}
h.mu.Unlock()
@@ -65,6 +70,10 @@ func (h *Hub) Run() {
}
}
func (h *Hub) GetClientCount() int {
return int(atomic.LoadInt32(&h.clientCount))
}
func (s *Server) handleWebSocket(c *gin.Context) {
conn, err := upgrader.Upgrade(c.Writer, c.Request, nil)
if err != nil {
@@ -81,7 +90,7 @@ func (s *Server) handleWebSocket(c *gin.Context) {
// Initial message
conn.WriteJSON(gin.H{
"type": "connected",
"message": "Connected to LLM Proxy Dashboard",
"message": "Connected to GopherGate Dashboard",
})
for {

internal/utils/crypto.go (new file)

@@ -0,0 +1,71 @@
package utils
import (
"crypto/aes"
"crypto/cipher"
"crypto/rand"
"encoding/base64"
"fmt"
"io"
)
// Encrypt encrypts plain text using AES-GCM with the given 32-byte key.
func Encrypt(plainText string, key []byte) (string, error) {
if len(key) != 32 {
return "", fmt.Errorf("encryption key must be 32 bytes")
}
block, err := aes.NewCipher(key)
if err != nil {
return "", err
}
gcm, err := cipher.NewGCM(block)
if err != nil {
return "", err
}
nonce := make([]byte, gcm.NonceSize())
if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
return "", err
}
// The nonce should be prepended to the ciphertext
cipherText := gcm.Seal(nonce, nonce, []byte(plainText), nil)
return base64.StdEncoding.EncodeToString(cipherText), nil
}
// Decrypt decrypts base64-encoded cipher text using AES-GCM with the given 32-byte key.
func Decrypt(encodedCipherText string, key []byte) (string, error) {
if len(key) != 32 {
return "", fmt.Errorf("encryption key must be 32 bytes")
}
cipherText, err := base64.StdEncoding.DecodeString(encodedCipherText)
if err != nil {
return "", err
}
block, err := aes.NewCipher(key)
if err != nil {
return "", err
}
gcm, err := cipher.NewGCM(block)
if err != nil {
return "", err
}
nonceSize := gcm.NonceSize()
if len(cipherText) < nonceSize {
return "", fmt.Errorf("cipher text too short")
}
nonce, actualCipherText := cipherText[:nonceSize], cipherText[nonceSize:]
plainText, err := gcm.Open(nil, nonce, actualCipherText, nil)
if err != nil {
return "", err
}
return string(plainText), nil
}
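The new crypto.go pair can be exercised with a round trip. This condensed sketch follows the same scheme as `Encrypt`/`Decrypt` above: a fresh random nonce per message, prepended to the ciphertext so decryption can split it back off (`seal`/`open` are local stand-ins, not the repo's names):

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"encoding/base64"
	"fmt"
	"io"
)

// seal encrypts with AES-GCM and prepends the nonce to the ciphertext.
func seal(plain string, key []byte) (string, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return "", err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return "", err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := io.ReadFull(rand.Reader, nonce); err != nil {
		return "", err
	}
	return base64.StdEncoding.EncodeToString(gcm.Seal(nonce, nonce, []byte(plain), nil)), nil
}

// open splits off the nonce and decrypts; GCM also authenticates, so any
// tampering with the ciphertext fails here.
func open(encoded string, key []byte) (string, error) {
	ct, err := base64.StdEncoding.DecodeString(encoded)
	if err != nil {
		return "", err
	}
	block, err := aes.NewCipher(key)
	if err != nil {
		return "", err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return "", err
	}
	ns := gcm.NonceSize()
	if len(ct) < ns {
		return "", fmt.Errorf("cipher text too short")
	}
	pt, err := gcm.Open(nil, ct[:ns], ct[ns:], nil)
	if err != nil {
		return "", err
	}
	return string(pt), nil
}

func main() {
	key := make([]byte, 32) // a 32-byte key selects AES-256, as crypto.go requires
	enc, _ := seal("sk-moonshot-secret", key)
	dec, _ := open(enc, key)
	fmt.Println(dec == "sk-moonshot-secret")
}
```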


@@ -6,7 +6,7 @@ import (
"log"
"time"
"llm-proxy/internal/models"
"gophergate/internal/models"
"github.com/go-resty/resty/v2"
)
@@ -34,13 +34,25 @@ func FetchRegistry() (*models.ModelRegistry, error) {
return &models.ModelRegistry{Providers: providers}, nil
}
func CalculateCost(registry *models.ModelRegistry, modelID string, promptTokens, completionTokens, cacheRead, cacheWrite uint32) float64 {
func CalculateCost(registry *models.ModelRegistry, modelID string, promptTokens, completionTokens, reasoningTokens, cacheRead, cacheWrite uint32) float64 {
meta := registry.FindModel(modelID)
if meta == nil || meta.Cost == nil {
log.Printf("[DEBUG] CalculateCost: model %s not found or has no cost metadata", modelID)
return 0.0
}
cost := (float64(promptTokens) * meta.Cost.Input / 1000000.0) +
// promptTokens is usually the TOTAL prompt size.
// We subtract cacheRead from it to get the uncached part.
uncachedTokens := promptTokens
if cacheRead > 0 {
if cacheRead > promptTokens {
uncachedTokens = 0
} else {
uncachedTokens = promptTokens - cacheRead
}
}
cost := (float64(uncachedTokens) * meta.Cost.Input / 1000000.0) +
(float64(completionTokens) * meta.Cost.Output / 1000000.0)
if meta.Cost.CacheRead != nil {
@@ -50,5 +62,8 @@ func CalculateCost(registry *models.ModelRegistry, modelID string, promptTokens,
cost += float64(cacheWrite) * (*meta.Cost.CacheWrite) / 1000000.0
}
log.Printf("[DEBUG] CalculateCost: model=%s, uncached=%d, completion=%d, reasoning=%d, cache_read=%d, cache_write=%d, cost=%f (input_rate=%f, output_rate=%f)",
modelID, uncachedTokens, completionTokens, reasoningTokens, cacheRead, cacheWrite, cost, meta.Cost.Input, meta.Cost.Output)
return cost
}
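A worked sketch of the cache-aware formula this hunk introduces: the cached slice of the prompt is billed at the cache-read rate instead of the full input rate. The `costUSD` name and all rates below are made-up for illustration (they are not any provider's real pricing, and unlike `CalculateCost` this sketch assumes cache rates are always present):

```go
package main

import "fmt"

// costUSD computes request cost with rates in USD per million tokens.
// Cached prompt tokens are subtracted from the input-rate bucket and
// billed at the cacheRead rate, matching the diff above.
func costUSD(promptTokens, completionTokens, cacheRead, cacheWrite uint32,
	inputRate, outputRate, cacheReadRate, cacheWriteRate float64) float64 {
	var uncached uint32
	if cacheRead < promptTokens {
		uncached = promptTokens - cacheRead // only the uncached part pays the input rate
	} // if cacheRead >= promptTokens, uncached saturates at 0
	return float64(uncached)*inputRate/1e6 +
		float64(completionTokens)*outputRate/1e6 +
		float64(cacheRead)*cacheReadRate/1e6 +
		float64(cacheWrite)*cacheWriteRate/1e6
}

func main() {
	// 100k prompt tokens, 80k served from cache, 5k completion tokens:
	// 20k*$3.00/M + 5k*$15.00/M + 80k*$0.30/M = 0.060 + 0.075 + 0.024
	fmt.Printf("%.3f\n", costUSD(100_000, 5_000, 80_000, 0, 3.0, 15.0, 0.30, 1.0)) // prints 0.159
}
```

The saturation branch matters for providers whose reported `prompt_tokens` excludes cached tokens; without it, `promptTokens - cacheRead` would underflow the unsigned subtraction.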


@@ -1,13 +0,0 @@
-- Migration: add billing_mode to provider_configs
-- Adds a billing_mode TEXT column with default 'prepaid'
-- After applying, set Gemini to postpaid with:
-- UPDATE provider_configs SET billing_mode = 'postpaid' WHERE id = 'gemini';
BEGIN TRANSACTION;
ALTER TABLE provider_configs ADD COLUMN billing_mode TEXT DEFAULT 'prepaid';
COMMIT;
-- NOTE: If you use a production SQLite file, run the following to set Gemini to postpaid:
-- sqlite3 /path/to/db.sqlite "UPDATE provider_configs SET billing_mode='postpaid' WHERE id='gemini';"


@@ -1,13 +0,0 @@
-- Migration: add composite indexes for query performance
-- Adds three composite indexes:
-- 1. idx_llm_requests_client_timestamp on llm_requests(client_id, timestamp)
-- 2. idx_llm_requests_provider_timestamp on llm_requests(provider, timestamp)
-- 3. idx_model_configs_provider_id on model_configs(provider_id)
BEGIN TRANSACTION;
CREATE INDEX IF NOT EXISTS idx_llm_requests_client_timestamp ON llm_requests(client_id, timestamp);
CREATE INDEX IF NOT EXISTS idx_llm_requests_provider_timestamp ON llm_requests(provider, timestamp);
CREATE INDEX IF NOT EXISTS idx_model_configs_provider_id ON model_configs(provider_id);
COMMIT;


@@ -1,11 +0,0 @@
2026-03-06T20:07:36.737914Z  INFO Starting LLM Proxy Gateway v0.1.0
2026-03-06T20:07:36.738903Z  INFO Configuration loaded from Some("/home/newkirk/Documents/projects/web_projects/llm-proxy/config.toml")
2026-03-06T20:07:36.738945Z  INFO Encryption initialized
2026-03-06T20:07:36.739124Z  INFO Connecting to database at ./data/llm_proxy.db
2026-03-06T20:07:36.753254Z  INFO Database migrations completed
2026-03-06T20:07:36.753294Z  INFO Database initialized at "./data/llm_proxy.db"
2026-03-06T20:07:36.755187Z  INFO Fetching model registry from https://models.dev/api.json
2026-03-06T20:07:37.000853Z  INFO Successfully loaded model registry
2026-03-06T20:07:37.001382Z  INFO Model config cache initialized
2026-03-06T20:07:37.001702Z  WARN SESSION_SECRET environment variable not set. Using a randomly generated secret. This will invalidate all sessions on restart. Set SESSION_SECRET to a fixed hex or base64 encoded 32-byte value.
2026-03-06T20:07:37.002898Z  INFO Server listening on http://0.0.0.0:8082


@@ -1 +0,0 @@
945904


@@ -148,22 +148,54 @@ body {
width: 80px;
height: 80px;
margin: 0 auto 1.25rem;
border-radius: 16px;
background: var(--bg2);
display: flex;
align-items: center;
justify-content: center;
color: var(--orange);
font-size: 2rem;
background: rgba(254, 128, 25, 0.15);
color: var(--primary);
border-radius: 12px;
font-size: 2.5rem;
}
/* GopherGate Logo Icon */
.logo-icon-container {
width: 60px;
height: 60px;
background: var(--blue-light);
border-radius: 12px;
display: flex;
align-items: center;
justify-content: center;
box-shadow: var(--shadow);
border: 2px solid var(--fg1);
margin: 0 auto;
}
.logo-icon-container.small {
width: 32px;
height: 32px;
border-radius: 6px;
margin: 0;
}
.logo-icon-text {
font-family: 'JetBrains Mono', monospace;
font-weight: 700;
color: var(--bg0);
font-size: 1.8rem;
}
.logo-icon-container.small .logo-icon-text {
font-size: 1rem;
}
.login-header h1 {
font-size: 1.75rem;
font-size: 2rem;
font-weight: 800;
color: var(--fg0);
color: var(--primary-light);
margin-bottom: 0.5rem;
letter-spacing: -0.025em;
text-transform: uppercase;
}
.login-subtitle {
@@ -297,6 +329,25 @@ body {
font-size: 1.125rem;
}
/* Badges */
.badge {
display: inline-block;
padding: 0.25rem 0.5rem;
font-size: 0.75rem;
font-weight: 600;
line-height: 1;
text-align: center;
white-space: nowrap;
vertical-align: baseline;
border-radius: 4px;
}
.badge-success { background-color: rgba(152, 151, 26, 0.15); color: var(--green-light); border: 1px solid var(--green); }
.badge-info { background-color: rgba(69, 133, 136, 0.15); color: var(--blue-light); border: 1px solid var(--blue); }
.badge-warning { background-color: rgba(215, 153, 33, 0.15); color: var(--yellow-light); border: 1px solid var(--yellow); }
.badge-danger { background-color: rgba(204, 36, 29, 0.15); color: var(--red-light); border: 1px solid var(--red); }
.badge-client { background-color: var(--bg2); color: var(--fg1); border: 1px solid var(--bg3); padding: 2px 6px; font-size: 0.7rem; text-transform: uppercase; }
/* Responsive Login */
@media (max-width: 480px) {
.login-card {
@@ -375,11 +426,15 @@ body {
}
.sidebar.collapsed .logo {
display: flex;
}
.sidebar.collapsed .logo span {
display: none;
}
.sidebar.collapsed .sidebar-toggle {
opacity: 1;
margin-left: 0;
}
.logo {
@@ -394,6 +449,7 @@ body {
white-space: nowrap;
}
.sidebar-logo {
width: 32px;
height: 32px;
@@ -588,17 +644,48 @@ body {
/* Main Content Area */
.main-content {
margin-left: 260px;
padding-left: 260px;
flex: 1;
min-height: 100vh;
transition: all 0.3s;
transition: all 0.3s cubic-bezier(0.4, 0, 0.2, 1);
display: flex;
flex-direction: column;
background-color: var(--bg-primary);
}
.sidebar.collapsed ~ .main-content {
margin-left: 80px;
.sidebar.collapsed + .main-content {
padding-left: 80px;
}
.top-bar {
height: 70px;
background: var(--bg0);
border-bottom: 1px solid var(--bg2);
display: flex;
align-items: center;
justify-content: space-between;
padding: 0 var(--spacing-xl);
position: sticky;
top: 0;
z-index: 100;
}
.top-bar .page-title h2 {
font-size: 1.25rem;
font-weight: 700;
color: var(--fg0);
}
.top-bar-actions {
display: flex;
align-items: center;
gap: var(--spacing-lg);
}
.content-body {
padding: var(--spacing-xl);
flex: 1;
position: relative;
}
.top-nav {
@@ -1047,6 +1134,53 @@ body {
gap: 0.75rem;
}
/* Connection Status Indicator */
.status-indicator {
display: flex;
align-items: center;
gap: 0.75rem;
padding: 0.5rem 0.875rem;
background: var(--bg1);
border: 1px solid var(--bg3);
border-radius: 6px;
font-size: 0.8rem;
font-weight: 600;
color: var(--fg3);
transition: all 0.2s;
}
.status-dot {
width: 8px;
height: 8px;
border-radius: 50%;
background: var(--fg4);
position: relative;
}
.status-dot.connected {
background: var(--green-light);
box-shadow: 0 0 0 0 rgba(184, 187, 38, 0.4);
animation: status-pulse 2s infinite;
}
.status-dot.disconnected {
background: var(--red-light);
}
.status-dot.connecting {
background: var(--yellow-light);
}
.status-dot.error {
background: var(--red);
}
@keyframes status-pulse {
0% { box-shadow: 0 0 0 0 rgba(184, 187, 38, 0.4); }
70% { box-shadow: 0 0 0 6px rgba(184, 187, 38, 0); }
100% { box-shadow: 0 0 0 0 rgba(184, 187, 38, 0); }
}
/* WebSocket Dot Pulse */
@keyframes ws-pulse {
0% { box-shadow: 0 0 0 0 rgba(184, 187, 38, 0.4); }

BIN static/favicon.ico (new binary file, 1002 B; content not shown)


@@ -3,49 +3,38 @@
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>LLM Proxy Gateway - Admin Dashboard</title>
<title>GopherGate - Admin Dashboard</title>
<link rel="stylesheet" href="/css/dashboard.css?v=11">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.4.0/css/all.min.css">
<link rel="icon" href="img/logo-icon.png" type="image/png" sizes="any">
<link rel="apple-touch-icon" href="img/logo-icon.png">
<link href="https://fonts.googleapis.com/css2?family=Fira+Code:wght@300;400;500;600;700&family=JetBrains+Mono:wght@400;700&display=swap" rel="stylesheet">
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
<script src="https://cdn.jsdelivr.net/npm/luxon@3.4.4/build/global/luxon.min.js"></script>
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@400;700&family=Inter:wght@400;500;600;700&display=swap" rel="stylesheet">
</head>
<body>
<!-- Login Screen -->
<div id="login-screen" class="login-container">
<body class="gruvbox-dark">
<!-- Auth Page -->
<div id="auth-page" class="login-container">
<div class="login-card">
<div class="login-header">
<i class="fas fa-terminal login-logo-fallback"></i>
<h1>LLM Proxy Gateway</h1>
<p class="login-subtitle">Admin Dashboard</p>
<div class="logo-icon-container">
<span class="logo-icon-text">GG</span>
</div>
<form id="login-form" class="login-form" onsubmit="event.preventDefault();">
<div class="form-group">
<input type="text" id="username" name="username" placeholder=" " required>
<label for="username">
<i class="fas fa-user"></i> Username
</label>
<h1>GopherGate</h1>
<p class="login-subtitle">Secure LLM Gateway & Management</p>
</div>
<div class="form-group">
<input type="password" id="password" name="password" placeholder=" " required>
<label for="password">
<i class="fas fa-lock"></i> Password
</label>
<form id="login-form">
<div class="form-control">
<label for="username">Username</label>
<input type="text" id="username" name="username" required autocomplete="username">
</div>
<div class="form-group">
<button type="submit" class="login-btn">
<i class="fas fa-sign-in-alt"></i> Sign In
</button>
</div>
<div class="login-footer">
<p>Default: <code>admin</code> / <code>admin</code> (change in Settings &gt; Security)</p>
<div class="form-control">
<label for="password">Password</label>
<input type="password" id="password" name="password" required autocomplete="current-password">
</div>
<button type="submit" id="login-btn" class="btn btn-primary btn-block">Sign In</button>
</form>
<div id="login-error" class="error-message" style="display: none;">
<i class="fas fa-exclamation-circle"></i>
<span>Invalid credentials. Please try again.</span>
<span></span>
</div>
</div>
</div>
@@ -56,9 +45,10 @@
<nav class="sidebar">
<div class="sidebar-header">
<div class="logo">
<img src="img/logo-icon.png" alt="LLM Proxy" class="sidebar-logo" onerror="this.style.display='none'; this.nextElementSibling.style.display='inline-block';">
<i class="fas fa-shield-alt logo-fallback" style="display: none;"></i>
<span>LLM Proxy</span>
<div class="logo-icon-container small">
<span class="logo-icon-text">GG</span>
</div>
<span>GopherGate</span>
</div>
<button class="sidebar-toggle" id="sidebar-toggle">
<i class="fas fa-bars"></i>
@@ -68,68 +58,74 @@
<div class="sidebar-menu">
<div class="menu-section">
<h3 class="menu-title">MAIN</h3>
<a href="#overview" class="menu-item active" data-page="overview" data-tooltip="Dashboard Overview">
<ul class="menu-list">
<li class="menu-item active" data-page="overview">
<i class="fas fa-th-large"></i>
<span>Overview</span>
</a>
<a href="#analytics" class="menu-item" data-page="analytics" data-tooltip="Usage Analytics">
<i class="fas fa-chart-line"></i>
</li>
<li class="menu-item" data-page="analytics">
<i class="fas fa-chart-bar"></i>
<span>Analytics</span>
</a>
<a href="#costs" class="menu-item" data-page="costs" data-tooltip="Cost Tracking">
</li>
<li class="menu-item" data-page="costs">
<i class="fas fa-dollar-sign"></i>
<span>Cost Management</span>
</a>
<span>Costs & Billing</span>
</li>
</ul>
</div>
<div class="menu-section">
<h3 class="menu-title">MANAGEMENT</h3>
<a href="#clients" class="menu-item" data-page="clients" data-tooltip="API Clients">
<ul class="menu-list">
<li class="menu-item" data-page="clients">
<i class="fas fa-users"></i>
<span>Client Management</span>
</a>
<a href="#providers" class="menu-item" data-page="providers" data-tooltip="Model Providers">
<span>Clients</span>
</li>
<li class="menu-item" data-page="providers">
<i class="fas fa-server"></i>
<span>Providers</span>
</a>
<a href="#models" class="menu-item" data-page="models" data-tooltip="Manage Models">
<i class="fas fa-cube"></i>
</li>
<li class="menu-item" data-page="models">
<i class="fas fa-brain"></i>
<span>Models</span>
</a>
<a href="#monitoring" class="menu-item" data-page="monitoring" data-tooltip="Live Monitoring">
<i class="fas fa-heartbeat"></i>
<span>Real-time Monitoring</span>
</a>
</li>
</ul>
</div>
<div class="menu-section">
<h3 class="menu-title">SYSTEM</h3>
<a href="#users" class="menu-item admin-only" data-page="users" data-tooltip="User Accounts">
<ul class="menu-list">
<li class="menu-item" data-page="monitoring">
<i class="fas fa-activity"></i>
<span>Live Monitoring</span>
</li>
<li class="menu-item" data-page="logs">
<i class="fas fa-list-alt"></i>
<span>Logs</span>
</li>
<li class="menu-item" data-page="users">
<i class="fas fa-user-shield"></i>
<span>User Management</span>
</a>
<a href="#settings" class="menu-item admin-only" data-page="settings" data-tooltip="System Settings">
<span>Admin Users</span>
</li>
<li class="menu-item" data-page="settings">
<i class="fas fa-cog"></i>
<span>Settings</span>
</a>
<a href="#logs" class="menu-item" data-page="logs" data-tooltip="System Logs">
<i class="fas fa-list-alt"></i>
<span>System Logs</span>
</a>
</li>
</ul>
</div>
</div>
<div class="sidebar-footer">
<div class="user-info">
<div class="user-avatar">
<i class="fas fa-user-circle"></i>
<i class="fas fa-user"></i>
</div>
<div class="user-details">
<span class="user-name">Loading...</span>
<span class="user-role">...</span>
<div class="user-name" id="display-username">Admin</div>
<div class="user-role" id="display-role">Administrator</div>
</div>
</div>
<button class="logout-btn" id="logout-btn" title="Logout">
<button id="logout-btn" class="btn-icon" title="Logout">
<i class="fas fa-sign-out-alt"></i>
</button>
</div>
@@ -137,43 +133,40 @@
<!-- Main Content -->
<main class="main-content">
<!-- Top Navigation -->
<header class="top-nav">
<div class="nav-left">
<h1 class="page-title" id="page-title">Dashboard Overview</h1>
<header class="top-bar">
<div class="page-title">
<h2 id="current-page-title">Overview</h2>
</div>
<div class="nav-right">
<div class="nav-item" id="ws-status-nav" title="WebSocket Connection Status">
<div class="ws-dot"></div>
<span class="ws-text">Connecting...</span>
<div class="top-bar-actions">
<div id="connection-status" class="status-indicator">
<span class="status-dot"></span>
<span class="status-text">Disconnected</span>
</div>
<div class="nav-item" title="Refresh Current Page">
<i class="fas fa-sync-alt" id="refresh-btn"></i>
</div>
<div class="nav-item">
<span id="current-time">Loading...</span>
<div class="theme-toggle" id="theme-toggle">
<i class="fas fa-moon"></i>
</div>
</div>
</header>
<!-- Page Content -->
<div class="page-content" id="page-content">
<!-- Dynamic content container -->
<div id="page-content" class="content-body">
<!-- Content will be loaded dynamically -->
<div class="loader-container">
<div class="loader"></div>
</div>
<!-- Global Spinner -->
<div class="spinner-container">
<div class="spinner"></div>
</div>
</main>
</div>
<!-- Scripts (cache-busted with version query params) -->
<!-- Scripts -->
<script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
<script src="https://cdn.jsdelivr.net/npm/luxon@3.3.0/build/global/luxon.min.js"></script>
<script src="/js/api.js?v=7"></script>
<script src="/js/auth.js?v=7"></script>
<script src="/js/dashboard.js?v=7"></script>
<script src="/js/websocket.js?v=7"></script>
<script src="/js/charts.js?v=7"></script>
<script src="/js/websocket.js?v=7"></script>
<script src="/js/dashboard.js?v=7"></script>
<!-- Page Modules -->
<script src="/js/pages/overview.js?v=7"></script>
<script src="/js/pages/analytics.js?v=7"></script>
<script src="/js/pages/costs.js?v=7"></script>


@@ -1,4 +1,4 @@
// Authentication Module for LLM Proxy Dashboard
// Authentication Module for GopherGate Dashboard
class AuthManager {
constructor() {
@@ -58,7 +58,7 @@ class AuthManager {
async login(username, password) {
const errorElement = document.getElementById('login-error');
const loginBtn = document.querySelector('.login-btn');
const loginBtn = document.getElementById('login-btn');
try {
loginBtn.innerHTML = '<i class="fas fa-spinner fa-spin"></i> Authenticating...';
@@ -124,7 +124,7 @@ class AuthManager {
}
showLogin() {
const loginScreen = document.getElementById('login-screen');
const loginScreen = document.getElementById('auth-page');
const dashboard = document.getElementById('dashboard');
if (loginScreen) loginScreen.style.display = 'flex';
@@ -139,7 +139,7 @@ class AuthManager {
if (errorElement) errorElement.style.display = 'none';
// Reset button
const loginBtn = document.querySelector('.login-btn');
const loginBtn = document.getElementById('login-btn');
if (loginBtn) {
loginBtn.innerHTML = '<i class="fas fa-sign-in-alt"></i> Sign In';
loginBtn.disabled = false;
@@ -147,7 +147,7 @@ class AuthManager {
}
showDashboard() {
const loginScreen = document.getElementById('login-screen');
const loginScreen = document.getElementById('auth-page');
const dashboard = document.getElementById('dashboard');
if (loginScreen) loginScreen.style.display = 'none';
@@ -167,7 +167,7 @@ class AuthManager {
const userRoleElement = document.querySelector('.user-role');
if (userNameElement && this.user) {
userNameElement.textContent = this.user.name || this.user.username || 'User';
userNameElement.textContent = this.user.display_name || this.user.username || 'User';
}
if (userRoleElement && this.user) {


@@ -492,7 +492,7 @@ class MonitoringPage {
simulateRequest() {
const clients = ['client-1', 'client-2', 'client-3', 'client-4', 'client-5'];
const providers = ['OpenAI', 'Gemini', 'DeepSeek', 'Grok'];
const models = ['gpt-4', 'gpt-3.5-turbo', 'gemini-pro', 'deepseek-chat', 'grok-beta'];
const models = ['gpt-4o', 'gpt-4o-mini', 'gemini-2.0-flash', 'deepseek-chat', 'grok-4-1-fast-non-reasoning'];
const statuses = ['success', 'success', 'success', 'error', 'warning']; // Mostly success
const request = {


@@ -248,21 +248,19 @@ class WebSocketManager {
}
updateStatus(status) {
const statusElement = document.getElementById('ws-status-nav');
const statusElement = document.getElementById('connection-status');
if (!statusElement) return;
const dot = statusElement.querySelector('.ws-dot');
const text = statusElement.querySelector('.ws-text');
const dot = statusElement.querySelector('.status-dot');
const text = statusElement.querySelector('.status-text');
if (!dot || !text) return;
// Remove all status classes
dot.classList.remove('connected', 'disconnected');
statusElement.classList.remove('connected', 'disconnected');
dot.classList.remove('connected', 'disconnected', 'error', 'connecting');
// Add new status class
dot.classList.add(status);
statusElement.classList.add(status);
// Update text
const statusText = {


@@ -1,14 +0,0 @@
gantt
title LLM Proxy Project Timeline
dateFormat YYYY-MM-DD
section Frontend
Standardize Escaping (users.js) :a1, 2026-03-06, 1d
section Backend Cleanup
Remove Unused Imports :b1, 2026-03-06, 1d
section HMAC Migration
Architecture Design :c1, 2026-03-07, 1d
Backend Implementation :c2, after c1, 2d
Session Refresh Logic :c3, after c2, 1d
section Testing
Integration Test (Encrypted Keys) :d1, 2026-03-09, 2d
HMAC Verification Tests :d2, after c3, 1d