Database Review Report for LLM-Proxy Repository
Review Date: 2025-03-06
Reviewer: Database Optimization Expert
Repository: llm-proxy
Focus Areas: Schema Design, Query Optimization, Migration Strategy, Data Integrity, Usage Tracking Accuracy
Executive Summary
The llm-proxy database implementation demonstrates a solid foundation with appropriate table structures and clear separation of concerns. However, several areas require improvement to ensure scalability, data consistency, and performance as usage grows. Key findings include:
- Schema Design: Generally normalized but missing foreign key enforcement and some critical indexes.
- Query Optimization: Well-optimized for most queries but missing composite indexes for common filtering patterns.
- Migration Strategy: Ad-hoc migration approach that may cause issues with schema evolution.
- Data Integrity: Potential race conditions in usage tracking and missing transaction boundaries.
- Usage Tracking: Generally accurate but risk of inconsistent state between related tables.
This report provides detailed analysis and actionable recommendations for each area.
1. Schema Design Review
Tables Overview
The database consists of 6 main tables:
- clients: Client management with usage aggregates
- llm_requests: Request logging with token counts and costs
- provider_configs: Provider configuration and credit balances
- model_configs: Model-specific configuration and cost overrides
- users: Dashboard user authentication
- client_tokens: API token storage for client authentication
Normalization Assessment
Strengths:
- Tables follow 3rd Normal Form (3NF) with appropriate separation
- Foreign key relationships properly defined
- No obvious data duplication across tables
Areas for Improvement:
- Denormalized aggregates: clients.total_requests, total_tokens, and total_cost are derived from llm_requests. This introduces risk of inconsistency.
- Provider credit balance: Stored in provider_configs but also updated based on llm_requests. No audit trail for balance changes.
Data Type Analysis
Appropriate Choices:
- INTEGER for token counts (cast from u32 to i64)
- REAL for monetary values
- DATETIME for timestamps using SQLite's CURRENT_TIMESTAMP
- TEXT for identifiers with appropriate length
Potential Issues:
- llm_requests.request_body and response_body are defined as TEXT but always set to NULL - consider removing these columns or making them optional.
- provider_configs.billing_mode was added via migration, but the default value is not consistently applied to existing rows.
Constraints and Foreign Keys
Current Constraints:
- Primary keys defined for all tables
- UNIQUE constraints on clients.client_id, users.username, and client_tokens.token
- Foreign key definitions present but not enforced (SQLite default)
Missing Constraints:
- NOT NULL constraints missing on several columns where nullability not intended
- CHECK constraints for non-negative values (e.g., credit_balance >= 0)
- Foreign key enforcement not enabled
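SQLite's ALTER TABLE cannot add a CHECK constraint to an existing table, so enforcing credit_balance >= 0 requires rebuilding the table. A minimal sketch of the standard rebuild pattern; the column list is illustrative and must be adapted to the actual provider_configs schema:

```sql
-- Sketch only: SQLite cannot add CHECK via ALTER TABLE, so rebuild the table.
-- Columns shown here are illustrative, not the verified schema.
BEGIN;
CREATE TABLE provider_configs_new (
    id             TEXT PRIMARY KEY,
    api_key        TEXT,
    credit_balance REAL NOT NULL DEFAULT 0 CHECK (credit_balance >= 0),
    billing_mode   TEXT
);
INSERT INTO provider_configs_new
    SELECT id, api_key, credit_balance, billing_mode FROM provider_configs;
DROP TABLE provider_configs;
ALTER TABLE provider_configs_new RENAME TO provider_configs;
COMMIT;
```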
2. Query Optimization Analysis
Indexing Strategy
Existing Indexes:
- idx_clients_client_id - Essential for client lookups
- idx_clients_created_at - Useful for chronological listing
- idx_llm_requests_timestamp - Critical for time-based queries
- idx_llm_requests_client_id - Supports client-specific queries
- idx_llm_requests_provider - Good for provider breakdowns
- idx_llm_requests_status - Low cardinality but acceptable
- idx_client_tokens_token (UNIQUE) - Essential for authentication
- idx_client_tokens_client_id - Supports token management
Missing Critical Indexes:
- model_configs.provider_id - Foreign key column used in JOINs
- llm_requests(client_id, timestamp) - Composite index for client time-series queries
- llm_requests(provider, timestamp) - For provider performance analysis
- llm_requests(status, timestamp) - For error trend analysis
N+1 Query Detection
Well-Optimized Areas:
- Model configuration caching prevents repeated database hits
- Provider configs loaded in batch for dashboard display
- Client listing uses single efficient query
Potential N+1 Patterns:
- In server/mod.rs, the list_models function performs a cache lookup per model, but this is in-memory
- No significant database N+1 issues identified
Inefficient Query Patterns
Query 1: Time-series aggregation with strftime()
SELECT strftime('%Y-%m-%d', timestamp) as date, ...
FROM llm_requests
WHERE 1=1 {}
GROUP BY date, client_id, provider, model
ORDER BY date DESC
LIMIT 200
Issue: Function on indexed column prevents index utilization for the WHERE clause when filtering by timestamp range.
Recommendation: Store computed date column or use range queries on timestamp directly.
Query 2: Today's stats using strftime()
WHERE strftime('%Y-%m-%d', timestamp) = ?
Issue: Non-sargable query prevents index usage.
Recommendation: Use range query:
WHERE timestamp >= date(?) AND timestamp < date(?, '+1 day')
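If the day boundaries are computed in application code rather than via SQLite's date() function, a small helper can produce the [start, end) bounds for the sargable predicate. This is an illustrative sketch; next_day is a hypothetical helper, not an existing llm-proxy function:

```rust
// Sketch: compute [start, end) bounds for one day so the timestamp column
// can be filtered with plain range operators and remain index-friendly.
// `next_day` is hypothetical, not part of the llm-proxy codebase.

fn is_leap(y: i32) -> bool {
    (y % 4 == 0 && y % 100 != 0) || y % 400 == 0
}

fn days_in_month(y: i32, m: u32) -> u32 {
    match m {
        1 | 3 | 5 | 7 | 8 | 10 | 12 => 31,
        4 | 6 | 9 | 11 => 30,
        2 => if is_leap(y) { 29 } else { 28 },
        _ => panic!("invalid month"),
    }
}

/// Given "YYYY-MM-DD", return the next calendar day in the same format.
fn next_day(date: &str) -> String {
    let y: i32 = date[0..4].parse().unwrap();
    let m: u32 = date[5..7].parse().unwrap();
    let d: u32 = date[8..10].parse().unwrap();
    let (ny, nm, nd) = if d < days_in_month(y, m) {
        (y, m, d + 1)
    } else if m < 12 {
        (y, m + 1, 1)
    } else {
        (y + 1, 1, 1)
    };
    format!("{:04}-{:02}-{:02}", ny, nm, nd)
}

fn main() {
    let day = "2025-03-06";
    // Both bounds bind as plain strings; the index on timestamp stays usable.
    println!("timestamp >= '{}' AND timestamp < '{}'", day, next_day(day));
}
```

Both bound values bind as ordinary parameters, so the existing idx_llm_requests_timestamp index remains usable.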
Recommended Index Additions
-- Composite indexes for common query patterns
CREATE INDEX idx_llm_requests_client_timestamp ON llm_requests(client_id, timestamp);
CREATE INDEX idx_llm_requests_provider_timestamp ON llm_requests(provider, timestamp);
CREATE INDEX idx_llm_requests_status_timestamp ON llm_requests(status, timestamp);
-- Foreign key index
CREATE INDEX idx_model_configs_provider_id ON model_configs(provider_id);
-- Optional: Covering index for client usage queries
CREATE INDEX idx_clients_usage ON clients(client_id, total_requests, total_tokens, total_cost);
3. Migration Strategy Assessment
Current Approach
The migration system uses a hybrid approach:
- Schema synchronization: CREATE TABLE IF NOT EXISTS on startup
- Ad-hoc migrations: ALTER TABLE statements with error suppression
- Single migration file: migrations/001-add-billing-mode.sql with a transaction wrapper
Pros:
- Simple to understand and maintain
- Automatic schema creation for new deployments
- Error suppression prevents crashes when a column already exists
Cons:
- No version tracking of applied migrations
- Potential for inconsistent schema across deployments
- ALTER TABLE error suppression hides genuine schema issues
- No rollback capability
Risks and Limitations
- Schema Drift: Different instances may have different schemas if migrations are applied out of order
- Data Loss Risk: No backup/verification before schema changes
- Production Issues: Error suppression could mask migration failures until runtime
Recommendations
- Implement Proper Migration Tooling: Use sqlx migrate or a similar versioned migration system
- Add Migration Version Table: Track applied migrations with checksum verification
- Separate Migration Scripts: One file per migration with up/down directions
- Pre-deployment Validation: Schema checks in CI/CD pipeline
- Backup Strategy: Automatic backups before migration execution
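A migration-tracking table could look like the following; the table and column names are suggestions, not existing llm-proxy schema:

```sql
-- Illustrative schema for tracking applied migrations.
CREATE TABLE IF NOT EXISTS schema_migrations (
    version    INTEGER PRIMARY KEY,  -- e.g. 1 for 001-add-billing-mode.sql
    name       TEXT NOT NULL,        -- human-readable migration name
    checksum   TEXT NOT NULL,        -- hash of the migration file contents
    applied_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
);
```

On startup, the application would compare this table against the migration directory, apply anything missing in order, and fail loudly on a checksum mismatch instead of suppressing the error.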
4. Data Integrity Evaluation
Foreign Key Enforcement
Critical Issue: Foreign key constraints are defined but not enforced in SQLite.
Impact: Orphaned records, inconsistent referential integrity.
Solution: Enable foreign key support in connection string:
use std::str::FromStr;
use sqlx::sqlite::SqliteConnectOptions;

let options = SqliteConnectOptions::from_str(&format!("sqlite:{}", database_path))?
    .create_if_missing(true)
    .pragma("foreign_keys", "ON"); // per-connection setting; must be applied to every connection
Transaction Usage
Good Patterns:
- Request logging uses transactions for insert + provider balance update
- Atomic UPDATE for client usage statistics
Problematic Areas:
- Split Transactions: Client usage update and request logging are in separate transactions
  - In logging/mod.rs, the insert_log transaction includes the provider balance update
  - In utils/streaming.rs, client usage is updated separately after logging
  - Risk: Partial updates if one transaction fails
- No Transaction for Client Creation: Client and token creation are not atomic
Recommendations:
- Wrap client usage update within the same transaction as request logging
- Use transaction for client + token creation
- Consider using savepoints for complex operations
Race Conditions and Consistency
Potential Race Conditions:
- Provider credit balance: Concurrent requests may cause lost updates
  - Current: UPDATE provider_configs SET credit_balance = credit_balance - ?
  - SQLite provides serializable isolation, but negative balances are not prevented
- Client usage aggregates: Concurrent updates to total_requests, total_tokens, and total_cost
  - Similar UPDATE pattern, generally safe, but consider idempotency
Recommendations:
- Add check constraint: CHECK (credit_balance >= 0)
- Implement idempotent request logging with unique request IDs
- Consider optimistic concurrency control for critical balances
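As an interim safeguard before a CHECK constraint exists, the debit can be made conditional so a concurrent overdraw simply affects zero rows. A sketch using the same parameters as the current UPDATE; the caller must inspect the affected-row count:

```sql
-- Guarded debit: succeeds only while sufficient balance remains.
-- The caller should check rows_affected() and reject the request
-- (or flag the account) when it is zero.
UPDATE provider_configs
SET credit_balance = credit_balance - ?1
WHERE id = ?2
  AND credit_balance >= ?1;
```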
5. Usage Tracking Accuracy
Token Counting Methodology
Current Approach:
- Prompt tokens: Estimated using provider-specific estimators
- Completion tokens: Estimated or from provider real usage data
- Cache tokens: Separately tracked for cache-aware pricing
Strengths:
- Fallback to estimation when provider doesn't report usage
- Cache token differentiation for accurate pricing
Weaknesses:
- Estimation may differ from actual provider counts
- No validation of provider-reported token counts
Cost Calculation
Well Implemented:
- Model-specific cost overrides via model_configs
- Cache-aware pricing when supported by registry
- Provider fallback calculations
Potential Issues:
- Floating-point precision for monetary calculations
- No rounding strategy for fractional cents
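To illustrate the precision concern, a sketch of an integer micro-dollar representation (1 unit = 10^-6 USD) with explicit rounding; the helper names are hypothetical, not part of the llm-proxy codebase:

```rust
// Sketch: accumulating costs as f64 drifts; storing integer micro-dollars
// with explicit rounding keeps sums exact. Helpers are illustrative only.

const MICRO: f64 = 1_000_000.0;

/// Round a floating-point dollar amount to integer micro-dollars.
fn to_micro_usd(cost: f64) -> i64 {
    (cost * MICRO).round() as i64
}

/// Convert integer micro-dollars back to a dollar amount for display.
fn from_micro_usd(micro: i64) -> f64 {
    micro as f64 / MICRO
}

fn main() {
    // Classic drift: 0.1 + 0.2 != 0.3 in binary floating point.
    let float_sum = 0.1_f64 + 0.2_f64;
    assert!(float_sum != 0.3);

    // Integer micro-dollars sum exactly.
    let micro_sum = to_micro_usd(0.1) + to_micro_usd(0.2);
    assert_eq!(micro_sum, 300_000);
    println!("total: {} micro-USD = {} USD", micro_sum, from_micro_usd(micro_sum));
}
```

The same idea applies at the schema level: storing costs as INTEGER micro-dollars instead of REAL removes accumulation error from the aggregates.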
Update Consistency
Inconsistency Risk: Client aggregates updated separately from request logging.
Example Flow:
1. Request log inserted and provider balance updated (one transaction)
2. Client usage updated (separate operation)
If step 2 fails, client stats undercount usage.
Solution: Include client update in the same transaction:
// In insert_log function, add:
UPDATE clients
SET total_requests = total_requests + 1,
total_tokens = total_tokens + ?,
total_cost = total_cost + ?
WHERE client_id = ?;
Financial Accuracy
Good Practices:
- Token-level granularity for cost calculation
- Separation of prompt/completion/cache pricing
- Database persistence for audit trail
Recommendations:
- Audit Trail: Add a balance_transactions table for provider credit changes
- Rounding Policy: Define a rounding strategy (e.g., to 6 decimal places)
- Validation: Periodic reconciliation of aggregates vs. detail records
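Periodic reconciliation can be a scheduled query comparing stored aggregates against the detail records. A sketch, assuming llm_requests has id and cost columns as described elsewhere in this report:

```sql
-- Find clients whose stored aggregates disagree with the request log.
-- Assumes llm_requests(id, client_id, cost); adapt to the real schema.
SELECT c.client_id,
       c.total_requests,
       COUNT(r.id)              AS actual_requests,
       c.total_cost,
       COALESCE(SUM(r.cost), 0) AS actual_cost
FROM clients c
LEFT JOIN llm_requests r ON r.client_id = c.client_id
GROUP BY c.client_id
HAVING c.total_requests != COUNT(r.id)
    OR ABS(c.total_cost - COALESCE(SUM(r.cost), 0)) > 1e-6;
```

Any rows returned indicate drift between the aggregates and the detail table and can feed an alert or a repair job.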
6. Performance Recommendations
Schema Improvements
- Partitioning Strategy: For the high-volume llm_requests table, consider:
  - Monthly partitioning by timestamp
  - Archiving old data to separate tables
- Data Retention Policy: Implement automatic cleanup of old request logs:
  DELETE FROM llm_requests WHERE timestamp < date('now', '-90 days');
- Column Optimization: Remove the unused request_body and response_body columns or implement compression
Query Optimizations
- Avoid Functions on Indexed Columns: Rewrite date queries as range queries
- Batch Updates: Consider batch updates for client usage instead of per-request
- Read Replicas: For dashboard queries, consider separate read connection
Connection Pooling
Current: SQLx connection pool with default settings
Recommendations:
- Configure pool size based on expected concurrency
- Implement connection health checks
- Monitor pool utilization metrics
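A configuration sketch of explicit pool settings (sqlx 0.7-style API assumed; values are illustrative and should be tuned to measured concurrency):

```rust
// Configuration sketch: explicit pool settings instead of defaults.
// Values are illustrative, not recommendations for production.
use std::time::Duration;
use sqlx::sqlite::SqlitePoolOptions;

let pool = SqlitePoolOptions::new()
    .max_connections(8)                       // bound concurrent connections
    .min_connections(1)                       // keep one warm connection
    .acquire_timeout(Duration::from_secs(5))  // fail fast instead of queueing forever
    .connect_with(options)                    // `options` from the foreign-key example
    .await?;
```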
Monitoring Setup
Essential Metrics:
- Query execution times (slow query logging)
- Index usage statistics
- Table growth trends
- Connection pool utilization
Implementation:
- Add sqlx::metrics integration
- Regular ANALYZE execution for the query planner
- Dashboard for database health monitoring
7. Security Considerations
Data Protection
Sensitive Data:
- provider_configs.api_key - Should be encrypted at rest
- users.password_hash - Already hashed with bcrypt
- client_tokens.token - Plain text storage
Recommendations:
- Encrypt API keys using libsodium or similar
- Implement token hashing (similar to password hashing)
- Regular security audits of authentication flows
SQL Injection Prevention
Good Practices:
- Use sqlx query builder with parameter binding
- No raw SQL concatenation observed in code review
Verification Needed: Ensure all dynamic SQL uses parameterized queries
Access Controls
Database Level:
- SQLite lacks built-in user management
- Consider file system permissions for database file
- Application-level authentication is primary control
8. Summary of Critical Issues
Priority 1 (Critical):
- Foreign key constraints not enabled
- Split transactions risking data inconsistency
- Missing composite indexes for common queries
Priority 2 (High):
- No proper migration versioning system
- Potential race conditions in balance updates
- Non-sargable date queries impacting performance
Priority 3 (Medium):
- Denormalized aggregates without consistency guarantees
- No data retention policy for request logs
- Missing check constraints for data validation
9. Recommended Action Plan
Phase 1: Immediate Fixes (1-2 weeks)
- Enable foreign key constraints in database connection
- Add composite indexes for common query patterns
- Fix transaction boundaries for client usage updates
- Rewrite non-sargable date queries
Phase 2: Short-term Improvements (3-4 weeks)
- Implement proper migration system with version tracking
- Add check constraints for data validation
- Implement connection pooling configuration
- Create database monitoring dashboard
Phase 3: Long-term Enhancements (2-3 months)
- Implement data retention and archiving strategy
- Add audit trail for provider balance changes
- Consider partitioning for high-volume tables
- Implement encryption for sensitive data
Phase 4: Ongoing Maintenance
- Regular index maintenance and query plan analysis
- Periodic reconciliation of aggregate vs. detail data
- Security audits and dependency updates
- Performance benchmarking and optimization
Appendices
A. Sample Migration Implementation
-- migrations/002-enable-foreign-keys.sql
-- Note: PRAGMA foreign_keys is per-connection in SQLite and cannot be
-- persisted via a migration; set it on every connection at startup instead.
PRAGMA foreign_keys = ON;
-- migrations/003-add-composite-indexes.sql
CREATE INDEX idx_llm_requests_client_timestamp ON llm_requests(client_id, timestamp);
CREATE INDEX idx_llm_requests_provider_timestamp ON llm_requests(provider, timestamp);
CREATE INDEX idx_model_configs_provider_id ON model_configs(provider_id);
B. Transaction Fix Example
async fn insert_log(pool: &SqlitePool, log: RequestLog) -> Result<(), sqlx::Error> {
let mut tx = pool.begin().await?;
// Insert or ignore client
sqlx::query("INSERT OR IGNORE INTO clients (client_id, name, description) VALUES (?, ?, 'Auto-created from request')")
.bind(&log.client_id)
.bind(&log.client_id)
.execute(&mut *tx)
.await?;
// Insert request log
sqlx::query("INSERT INTO llm_requests ...")
.execute(&mut *tx)
.await?;
// Update provider balance
if log.cost > 0.0 {
sqlx::query("UPDATE provider_configs SET credit_balance = credit_balance - ? WHERE id = ? AND (billing_mode IS NULL OR billing_mode != 'postpaid')")
.bind(log.cost)
.bind(&log.provider)
.execute(&mut *tx)
.await?;
}
// Update client aggregates within same transaction
sqlx::query("UPDATE clients SET total_requests = total_requests + 1, total_tokens = total_tokens + ?, total_cost = total_cost + ? WHERE client_id = ?")
.bind(log.total_tokens as i64)
.bind(log.cost)
.bind(&log.client_id)
.execute(&mut *tx)
.await?;
tx.commit().await?;
Ok(())
}
C. Monitoring Query Examples
-- Identify unused indexes
-- (requires a prior ANALYZE run to populate sqlite_stat1)
SELECT name FROM sqlite_master
WHERE type = 'index'
AND name NOT IN (
    SELECT idx
    FROM sqlite_stat1
    WHERE tbl = 'llm_requests'
);
-- Table size analysis
SELECT name, SUM(pgsize) / 1024.0 / 1024.0 AS size_mb
FROM dbstat
WHERE name = 'llm_requests'
GROUP BY name;
-- Query performance analysis (requires EXPLAIN QUERY PLAN)
EXPLAIN QUERY PLAN
SELECT * FROM llm_requests
WHERE client_id = ? AND timestamp >= ?;
This report provides a comprehensive analysis of the current database implementation and actionable recommendations for improvement. Regular review and iteration will ensure the database continues to meet performance, consistency, and scalability requirements as the application grows.