v1.52.14
Release date: 2024-11-22 23:46:53
Latest BerriAI/litellm release: v1.56.9 (2025-01-04 10:46:47)
What's Changed
- (fix) passthrough - allow internal users to access /anthropic by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6843
- LiteLLM Minor Fixes & Improvements (11/21/2024) by @krrishdholakia in https://github.com/BerriAI/litellm/pull/6837
- fix latency issues on google ai studio by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6852
- (fix) add linting check to ban creating `AsyncHTTPHandler` during LLM calling by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6855
- (feat) Add usage tracking for streaming `/anthropic` passthrough routes by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6842
- (Feat) Allow passing `litellm_metadata` to pass through endpoints + Add e2e tests for /anthropic/ usage tracking by @ishaan-jaff in https://github.com/BerriAI/litellm/pull/6864 (see the example after this list)
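Several of these changes touch the `/anthropic` passthrough route. The sketch below shows one way such a call might look through the proxy, with `litellm_metadata` included in the request body per PR #6864's title. The base URL, the `sk-1234` virtual key, the model name, and the shape of the metadata object are illustrative assumptions, not values taken from this release.

# Hedged sketch: forward an Anthropic Messages API request through the
# LiteLLM proxy's /anthropic passthrough route. Assumes the proxy runs on
# localhost:4000 and that sk-1234 is a valid virtual key for your deployment.
curl -X POST "http://localhost:4000/anthropic/v1/messages" \
  -H "x-api-key: sk-1234" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Hello"}],
        "litellm_metadata": {"tags": ["release-test"]}
      }'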
Full Changelog: https://github.com/BerriAI/litellm/compare/v1.52.12...v1.52.14
Docker Run LiteLLM Proxy
docker run \
-e STORE_MODEL_IN_DB=True \
-p 4000:4000 \
ghcr.io/berriai/litellm:main-v1.52.14
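Once the container is up, a quick smoke test against the proxy's OpenAI-compatible endpoint might look like the sketch below. The model name and the `sk-1234` key are placeholders; they assume a model has been configured and a master key set, neither of which is part of the command above.

# Hedged sketch: basic chat completion against the proxy started above.
# Assumes a model named "gpt-4o" is configured on the proxy and that
# LITELLM_MASTER_KEY=sk-1234 was set; substitute your own values.
curl "http://localhost:4000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-1234" \
  -d '{
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "ping"}]
      }'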
Don't want to maintain your internal proxy? Get in touch 🎉
Hosted Proxy Alpha: https://calendly.com/d/4mp-gd3-k5k/litellm-1-1-onboarding-chat
Load Test LiteLLM Proxy Results
| Name | Status | Median Response Time (ms) | Average Response Time (ms) | Requests/s | Failures/s | Request Count | Failure Count | Min Response Time (ms) | Max Response Time (ms) |
|---|---|---|---|---|---|---|---|---|---|
| /chat/completions | Passed ✅ | 260.0 | 292.32742033908687 | 6.002121672811824 | 0.0 | 1796 | 0 | 222.04342999998516 | 2700.951708000048 |
| Aggregated | Passed ✅ | 260.0 | 292.32742033908687 | 6.002121672811824 | 0.0 | 1796 | 0 | 222.04342999998516 | 2700.951708000048 |
1. load_test.html (1.59 MB)
2. load_test_stats.csv (538 B)