refactor: comprehensive audit — fix bugs, harden security, deduplicate providers, add CI/Docker

Phase 1: Fix compilation (config_path Option<PathBuf>, streaming test, stale test cleanup)
Phase 2: Fix critical bugs (remove block_on deadlocks in 4 providers, fix broken SQL query builder)
Phase 3: Security hardening (session manager, real auth, token masking, Gemini key to header, password policy)
Phase 4: Implement stubs (real provider test, /proc health metrics, client/provider/backup endpoints, has_images)
Phase 5: Code quality (shared provider helpers, explicit re-exports, all Clippy warnings fixed, unwrap removal, 6 unused deps removed, dashboard split into 7 sub-modules)
Phase 6: Infrastructure (GitHub Actions CI, multi-stage Dockerfile, rustfmt.toml, clippy.toml, script fixes)
2026-03-02 00:35:45 -05:00
parent ba643dd2b0
commit 2cdc49d7f2
42 changed files with 2800 additions and 2747 deletions
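Phase 3's token masking can be sketched as below. This is a minimal illustration of the idea (show only a short prefix/suffix of a secret in logs); `mask_token` and its 4-character policy are hypothetical stand-ins, not the helper actually added in this commit.

```rust
/// Mask a secret so only a short, non-identifying prefix and suffix
/// appear in logs. Assumes ASCII tokens (byte indexing).
fn mask_token(token: &str) -> String {
    const VISIBLE: usize = 4;
    if token.len() <= VISIBLE * 2 {
        // Too short to reveal anything safely.
        return "****".to_string();
    }
    let head = &token[..VISIBLE];
    let tail = &token[token.len() - VISIBLE..];
    format!("{head}...{tail}")
}

fn main() {
    println!("{}", mask_token("sk-abc123def456ghi789")); // sk-a...i789
    println!("{}", mask_token("short")); // ****
}
```

The short-token branch matters: without it, masking a secret shorter than the visible prefix plus suffix would echo the whole thing back.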


@@ -1,27 +1,26 @@
-use tiktoken_rs::get_bpe_from_model;
 use crate::models::UnifiedRequest;
+use tiktoken_rs::get_bpe_from_model;

 /// Count tokens for a given model and text
 pub fn count_tokens(model: &str, text: &str) -> u32 {
     // If we can't get the bpe for the model, fallback to a safe default (cl100k_base for GPT-4/o1)
-    let bpe = get_bpe_from_model(model).unwrap_or_else(|_| {
-        tiktoken_rs::cl100k_base().expect("Failed to get cl100k_base encoding")
-    });
+    let bpe = get_bpe_from_model(model)
+        .unwrap_or_else(|_| tiktoken_rs::cl100k_base().expect("Failed to get cl100k_base encoding"));
     bpe.encode_with_special_tokens(text).len() as u32
 }

 /// Estimate tokens for a unified request
 pub fn estimate_request_tokens(model: &str, request: &UnifiedRequest) -> u32 {
     let mut total_tokens = 0;
     // Base tokens per message for OpenAI (approximate)
     let tokens_per_message = 3;
     let _tokens_per_name = 1;
     for msg in &request.messages {
         total_tokens += tokens_per_message;
         for part in &msg.content {
             match part {
                 crate::models::ContentPart::Text { text } => {
@@ -34,14 +33,14 @@ pub fn estimate_request_tokens(model: &str, request: &UnifiedRequest) -> u32 {
                 }
             }
         }
         // Add name tokens if we had names (we don't in UnifiedMessage yet)
         // total_tokens += tokens_per_name;
     }
     // Add 3 tokens for the assistant reply header
     total_tokens += 3;
     total_tokens
 }
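The estimator above can be exercised in isolation. The sketch below mirrors its shape (3 overhead tokens per message, plus 3 for the assistant reply header) but swaps the tiktoken BPE for a naive whitespace count so it runs without the crate; `rough_token_count` and `estimate_tokens` are illustrative names, not the repo's API.

```rust
// Stand-in for the per-part BPE token count; a real implementation
// would call tiktoken's encoder instead of counting words.
fn rough_token_count(text: &str) -> u32 {
    text.split_whitespace().count() as u32
}

/// Mirrors estimate_request_tokens: fixed overhead per message,
/// content tokens, then a fixed reply-header allowance.
fn estimate_tokens(messages: &[&str]) -> u32 {
    let tokens_per_message = 3;
    let mut total = 0;
    for msg in messages {
        total += tokens_per_message;
        total += rough_token_count(msg);
    }
    total + 3 // assistant reply header
}

fn main() {
    // 2 messages: 2 * 3 overhead + (2 + 4) words + 3 header = 15
    println!("{}", estimate_tokens(&["hello there", "how are you today"]));
}
```

Note that even an empty request costs a nonzero estimate (the reply-header allowance), which matches the `total_tokens += 3` at the end of the real function.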