Releases: Lightning-AI/LitServe
v0.2.0.dev0
What's Changed
- Add warning message if `batch` and `unbatch` are implemented but `max_batch_size` is unset by @bhimrazy in #185
- cleanup: move middleware to utils by @aniketmaurya in #189
- Add meaningful error message if response queues are not initialized by @rasbt in #191
- [pre-commit.ci] pre-commit suggestions by @pre-commit-ci in #193
- add codeowners by @aniketmaurya in #194
- cleanup: fix test naming convention by @aniketmaurya in #190
- properly shutdown litserve workers by @aniketmaurya in #192
- provide uvicorn configs with kwargs by @aniketmaurya in #198
- remove uvicorn from argument names by @aniketmaurya in #199
- fix flaky batch timeout test by @aniketmaurya in #200
- moved wrap_litserve_start to utils by @ankitsharma07 in #201
- bump version by @aniketmaurya in #202
New Contributors
- @ankitsharma07 made their first contribution in #201
Full Changelog: v0.1.5...v0.2.0.dev0
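The new warning above fires when `batch`/`unbatch` are defined but batching is never activated. As a plain-Python sketch of the contract those two hooks follow (the doubling "model" and standalone functions are illustrative; in LitServe they are methods on your `LitAPI`, active only once a max batch size is configured):

```python
# Illustrative sketch of the batch -> predict -> unbatch contract.
# Plain functions with a stand-in "model"; not LitServe's implementation.

def batch(inputs):
    # Collate individually decoded requests into one batch.
    return list(inputs)

def predict(batched_input):
    # Stand-in model: double every element of the batch.
    return [x * 2 for x in batched_input]

def unbatch(output):
    # Split the batched output back into one result per request.
    return list(output)

responses = unbatch(predict(batch([1, 2, 3])))
print(responses)  # [2, 4, 6]
```

Implementing only one half of the pair, or forgetting the batch-size setting, silently leaves requests unbatched — which is the situation the warning now surfaces.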
v0.1.5
What's Changed
- Feat: adds health check endpoint by @bhimrazy in #182
- Bump Lightning-AI/utilities from 0.11.5 to 0.11.6 by @dependabot in #184
- Bump mypy from 1.10.1 to 1.11.1 by @dependabot in #187
- scale uvicorn servers by @aniketmaurya in #186
- bump version by @aniketmaurya in #188
Full Changelog: v0.1.4...v0.1.5
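A readiness probe against the new health check endpoint (#182) might look like the sketch below. The `/health` path and the 200-OK convention are assumptions drawn from the release note; adjust them to your deployment.

```python
# Hedged sketch: client-side readiness probe for a health check endpoint.
import urllib.error
import urllib.request

def is_healthy(base_url: str, timeout: float = 2.0) -> bool:
    """Return True if GET <base_url>/health answers with HTTP 200."""
    try:
        url = base_url.rstrip("/") + "/health"
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

Load balancers and orchestrators can poll this instead of sending real inference traffic to a server that is still loading weights.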
v0.1.4
What's Changed
- Bump Lightning-AI/utilities from 0.11.3.post0 to 0.11.5 by @dependabot in #172
- fix flaky timeout test by @aniketmaurya in #176
- Add max payload size middleware by @andyland in #174
- Make mp.Queue.get async for response queue by @aniketmaurya in #178
- bump version v0.1.4 by @aniketmaurya in #180
Full Changelog: v0.1.3...v0.1.4
v0.1.3
What's Changed
- Add Stable Audio example to README.md by @andyland in #135
- custom api endpoint path by @aniketmaurya in #136
- run LitServe with minimal dependency by @aniketmaurya in #138
- inject context for batching loops by @aniketmaurya in #139
- propagate error with OpenAISpec by @aniketmaurya in #143
- Bump pypa/gh-action-pypi-publish from 1.8.14 to 1.9.0 by @dependabot in #142
- remove busy wait from data_streamer by @aniketmaurya in #140
- raise HTTPException from LitAPI by @aniketmaurya in #145
- avoid multiple `get_event_loop` calls by @aniketmaurya in #148
- bugfix: OpenAISpec populate missing `zip` by @aniketmaurya in #149
- optimize batch aggregation by @aniketmaurya in #147
- feat: Add ability to customize authorization method by @andyland in #151
- add type hint and reorg function definition by @aniketmaurya in #152
- implement OpenAI token usage by @aniketmaurya in #150
- feat: Support gzip by @andyland in #153
- Add AudioCraft example by @andyland in #154
- ci: fix building package by @Borda in #157
- use `get_running_loop` over `get_event_loop` by @aniketmaurya in #155
- Bump actions/upload-artifact & actions/download-artifact from 3 to 4 by @dependabot in #159
- [pre-commit.ci] pre-commit suggestions by @pre-commit-ci in #162
- Bump mypy from 1.9.0 to 1.10.1 by @dependabot in #161
- Update CODEOWNERS for develop requirements by @Borda in #163
- Multiple Queue architecture for process communication by @aniketmaurya in #164
- Fix: Support files > 1MB by @andyland in #169
- support openai image_url with detail settings by @liangjs in #168
- bugfix: cover disabled request timeout scenario for `collate_requests` by @aniketmaurya in #167
- bump version for release by @aniketmaurya in #170
New Contributors
- @liangjs made their first contribution in #168
Full Changelog: v0.1.2...v0.1.3
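Two entries above (#147, #167) concern how requests are gathered into batches, including the case where the request timeout is disabled. A simplified sketch of that collation logic — real LitServe pulls payloads from an inter-process queue, and names here are illustrative:

```python
# Simplified sketch of batch collation with an optional timeout.
# A disabled (None) timeout waits until the batch is full.
import queue
import time

def collate_requests(q, max_batch_size, batch_timeout=None):
    batch = [q.get()]  # always wait for at least one request
    deadline = None if batch_timeout is None else time.monotonic() + batch_timeout
    while len(batch) < max_batch_size:
        try:
            remaining = None if deadline is None else max(0.0, deadline - time.monotonic())
            batch.append(q.get(timeout=remaining))
        except queue.Empty:
            break  # timeout elapsed: ship a partial batch
    return batch
```

The bug class #167 targets is exactly the `batch_timeout is None` branch: code that unconditionally computes a deadline would mishandle a disabled timeout.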
v0.1.2
What's Changed
- Add option to disable automatic client.py generation by @rasbt in #131
- fix: join chat message content, removing empty space by @bhimrazy in #132
- Fix Starlette streaming responses by @apaz-cli in #133
- bump version for release by @aniketmaurya in #134
New Contributors
- @apaz-cli made their first contribution in #133
Full Changelog: v0.1.1...v0.1.2
v0.1.1
What's Changed
- Fix macos CI by @aniketmaurya in #62
- batched streaming by @aniketmaurya in #55
- fix: broken pipes caused inference worker to fail by @aniketmaurya in #61
- fix: default unbatch always generator by @aniketmaurya in #68
- document streaming by @aniketmaurya in #49
- document accelerator=auto by @aniketmaurya in #64
- document batching by @aniketmaurya in #65
- allow timeout disable by @aniketmaurya in #63
- Add devices auto by @lantiga in #69
- add license info by @aniketmaurya in #74
- allow to set mps accelerator explicitly by @aniketmaurya in #73
- add readme improvements by @aniketmaurya in #70
- decouple decode_request from get_batch_from_uid by @aniketmaurya in #79
- fix cuda forking error by @aniketmaurya in #77
- feat: Support multipart & url-encoded form bodies by @andyland in #80
- Let the user know when "setup" has completed... by @williamFalcon in #82
- format encoded response by @aniketmaurya in #85
- [pre-commit.ci] pre-commit suggestions by @pre-commit-ci in #78
- improve logging by @aniketmaurya in #87
- Update server.py by @williamFalcon in #89
- removes `/stream-predict` endpoint and standardizes to `/predict` by @aniketmaurya in #93
- validate port by @aniketmaurya in #92
- Bump version since litserve is different from the last released version by @rasbt in #94
- Add Stream API Example by @rasbt in #95
- add openai spec v0 by @aniketmaurya in #76
- Exclude markdown files from pre-commit formatter by @rasbt in #96
- fix summary tag by @aniketmaurya in #97
- add client detail to streaming document by @aniketmaurya in #98
- Fix installation instructions for case sensitive file systems by @rasbt in #102
- Improve docs for OpenAI spec by @lantiga in #100
- add newline while streaming JSON objects by @aniketmaurya in #105
- fix pipe reuse by @aniketmaurya in #108
- add openai streaming by @aniketmaurya in #101
- Add real docs! by @williamFalcon in #114
- feat: Update OpenAI spec to include image url in message content by @bhimrazy in #113
- feat: Update OpenAI spec to include support for tools by @bhimrazy in #119
- inject context to share/log values across LitAPI methods by @aniketmaurya in #118
- Track device automatically by @aniketmaurya in #123
- fix: kill process by @bhimrazy in #124
- [pre-commit.ci] pre-commit suggestions by @pre-commit-ci in #125
- pre-populate context with OpenAISpec by @aniketmaurya in #126
- Bump version for new release by @rasbt in #127
New Contributors
- @andyland made their first contribution in #80
- @pre-commit-ci made their first contribution in #78
- @rasbt made their first contribution in #94
- @bhimrazy made their first contribution in #113
Full Changelog: v0.1.0...v0.1.1
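The newline fix for streamed JSON (#105) follows the NDJSON convention: without a delimiter, consecutive JSON objects concatenate into text a client cannot split, while one object per line parses cleanly. A minimal sketch of both sides (illustrative function names, not LitServe's API):

```python
# Sketch: newline-delimited JSON (NDJSON) streaming and parsing.
import json

def stream_json(chunks):
    # Server side: emit one JSON object per line.
    for chunk in chunks:
        yield json.dumps(chunk) + "\n"

def parse_stream(raw):
    # Client side: split on newlines and parse each object.
    return [json.loads(line) for line in raw.splitlines() if line]

raw = "".join(stream_json([{"token": "Hello"}, {"token": " world"}]))
print(parse_stream(raw))  # [{'token': 'Hello'}, {'token': ' world'}]
```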
Development release 0.1.1dev0
What's Changed
- Fix macos CI by @aniketmaurya in #62
- batched streaming by @aniketmaurya in #55
- fix: broken pipes caused inference worker to fail by @aniketmaurya in #61
- fix: default unbatch always generator by @aniketmaurya in #68
- document streaming by @aniketmaurya in #49
- document accelerator=auto by @aniketmaurya in #64
- document batching by @aniketmaurya in #65
- allow timeout disable by @aniketmaurya in #63
- Add devices auto by @lantiga in #69
- add license info by @aniketmaurya in #74
- allow to set mps accelerator explicitly by @aniketmaurya in #73
- add readme improvements by @aniketmaurya in #70
- decouple decode_request from get_batch_from_uid by @aniketmaurya in #79
- fix cuda forking error by @aniketmaurya in #77
- feat: Support multipart & url-encoded form bodies by @andyland in #80
- Let the user know when "setup" has completed... by @williamFalcon in #82
- format encoded response by @aniketmaurya in #85
- [pre-commit.ci] pre-commit suggestions by @pre-commit-ci in #78
- improve logging by @aniketmaurya in #87
- Update server.py by @williamFalcon in #89
- removes `/stream-predict` endpoint and standardizes to `/predict` by @aniketmaurya in #93
- validate port by @aniketmaurya in #92
- Bump version since litserve is different from the last released version by @rasbt in #94
- Add Stream API Example by @rasbt in #95
- add openai spec v0 by @aniketmaurya in #76
- Exclude markdown files from pre-commit formatter by @rasbt in #96
- fix summary tag by @aniketmaurya in #97
- add client detail to streaming document by @aniketmaurya in #98
- Fix installation instructions for case sensitive file systems by @rasbt in #102
- Improve docs for OpenAI spec by @lantiga in #100
- add newline while streaming JSON objects by @aniketmaurya in #105
- fix pipe reuse by @aniketmaurya in #108
- add openai streaming by @aniketmaurya in #101
- Add real docs! by @williamFalcon in #114
- feat: Update OpenAI spec to include image url in message content by @bhimrazy in #113
- feat: Update OpenAI spec to include support for tools by @bhimrazy in #119
- inject context to share/log values across LitAPI methods by @aniketmaurya in #118
- Track device automatically by @aniketmaurya in #123
- fix: kill process by @bhimrazy in #124
- [pre-commit.ci] pre-commit suggestions by @pre-commit-ci in #125
- pre-populate context with OpenAISpec by @aniketmaurya in #126
New Contributors
- @andyland made their first contribution in #80
- @pre-commit-ci made their first contribution in #78
- @rasbt made their first contribution in #94
- @bhimrazy made their first contribution in #113
Full Changelog: v0.1.0...v0.1.1dev0
v0.1.0
Development release 2
Release dev2 (#12):
- Expose root path
- Update release
- [pre-commit.ci] auto fixes from pre-commit.com hooks (see https://pre-commit.ci)
v0.0.0.dev1
Canary release
- add releasing flow