
Commit 2e49fc3

bump version to v0.6.1 (#2513)
* bump version to v0.6.1
* update
1 parent 8db20bc commit 2e49fc3

File tree

7 files changed: +7 -5 lines changed

README.md

Lines changed: 1 addition & 1 deletion
@@ -149,7 +149,7 @@ For detailed inference benchmarks in more devices and more settings, please refe
 <li>InternLM-XComposer2 (7B, 4khd-7B)</li>
 <li>InternLM-XComposer2.5 (7B)</li>
 <li>Qwen-VL (7B)</li>
-<li>Qwen2-VL (2B, 7B)</li>
+<li>Qwen2-VL (2B, 7B, 72B)</li>
 <li>DeepSeek-VL (7B)</li>
 <li>InternVL-Chat (v1.1-v1.5)</li>
 <li>InternVL2 (1B-76B)</li>

README_zh-CN.md

Lines changed: 1 addition & 1 deletion
@@ -150,7 +150,7 @@ LMDeploy TurboMind 引擎拥有卓越的推理能力,在各种规模的模型
 <li>InternLM-XComposer2 (7B, 4khd-7B)</li>
 <li>InternLM-XComposer2.5 (7B)</li>
 <li>Qwen-VL (7B)</li>
-<li>Qwen2-VL (2B, 7B)</li>
+<li>Qwen2-VL (2B, 7B, 72B)</li>
 <li>DeepSeek-VL (7B)</li>
 <li>InternVL-Chat (v1.1-v1.5)</li>
 <li>InternVL2 (1B-76B)</li>

docs/en/get_started/installation.md

Lines changed: 1 addition & 1 deletion
@@ -23,7 +23,7 @@ pip install lmdeploy
 The default prebuilt package is compiled on **CUDA 12**. If CUDA 11+ (>=11.3) is required, you can install lmdeploy by:

 ```shell
-export LMDEPLOY_VERSION=0.6.0
+export LMDEPLOY_VERSION=0.6.1
 export PYTHON_VERSION=38
 pip install https://github.com/InternLM/lmdeploy/releases/download/v${LMDEPLOY_VERSION}/lmdeploy-${LMDEPLOY_VERSION}+cu118-cp${PYTHON_VERSION}-cp${PYTHON_VERSION}-manylinux2014_x86_64.whl --extra-index-url https://download.pytorch.org/whl/cu118
 ```
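With the bumped wheel installed, the change can be sanity-checked from Python. A minimal sketch, assuming lmdeploy re-exports `__version__` from `lmdeploy/version.py` (the file updated at the end of this commit) at package level:

```python
# Quick sanity check that the environment picked up the bumped release.
# Assumes the package re-exports __version__ from lmdeploy/version.py;
# run it in the same environment where the wheel above was installed.
import lmdeploy

print(lmdeploy.__version__)  # expected to print 0.6.1
```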

docs/en/supported_models/supported_models.md

Lines changed: 1 addition & 0 deletions
@@ -20,6 +20,7 @@ The following tables detail the models supported by LMDeploy's TurboMind engine
 | Qwen2 | 1.5B - 72B | LLM | Yes | Yes | Yes | Yes |
 | Mistral | 7B | LLM | Yes | Yes | Yes | - |
 | Qwen-VL | 7B | MLLM | Yes | Yes | Yes | Yes |
+| Qwen2-VL | 2B, 7B, 72B | MLLM | Yes | Yes | Yes | - |
 | DeepSeek-VL | 7B | MLLM | Yes | Yes | Yes | Yes |
 | Baichuan | 7B | LLM | Yes | Yes | Yes | Yes |
 | Baichuan2 | 7B | LLM | Yes | Yes | Yes | Yes |
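The newly listed Qwen2-VL entry can be exercised through lmdeploy's high-level VLM pipeline. A minimal sketch, assuming the usual `pipeline`/`load_image` API; the model ID and image URL are illustrative placeholders, not values from this commit:

```python
# Illustrative sketch of querying a Qwen2-VL checkpoint via lmdeploy's
# Python API. The model ID and image URL are placeholders; exact behavior
# may differ between lmdeploy releases.
from lmdeploy import pipeline
from lmdeploy.vl import load_image

pipe = pipeline('Qwen/Qwen2-VL-7B-Instruct')          # any listed size: 2B, 7B, 72B
image = load_image('https://example.com/demo.jpeg')   # placeholder image URL
response = pipe(('Describe this image.', image))
print(response.text)
```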

docs/zh_cn/get_started/installation.md

Lines changed: 1 addition & 1 deletion
@@ -23,7 +23,7 @@ pip install lmdeploy
 默认的预构建包是在 **CUDA 12** 上编译的。如果需要 CUDA 11+ (>=11.3),你可以使用以下命令安装 lmdeploy:

 ```shell
-export LMDEPLOY_VERSION=0.6.0
+export LMDEPLOY_VERSION=0.6.1
 export PYTHON_VERSION=38
 pip install https://github.com/InternLM/lmdeploy/releases/download/v${LMDEPLOY_VERSION}/lmdeploy-${LMDEPLOY_VERSION}+cu118-cp${PYTHON_VERSION}-cp${PYTHON_VERSION}-manylinux2014_x86_64.whl --extra-index-url https://download.pytorch.org/whl/cu118
 ```

docs/zh_cn/supported_models/supported_models.md

Lines changed: 1 addition & 0 deletions
@@ -20,6 +20,7 @@
 | Qwen2 | 1.5B - 72B | LLM | Yes | Yes | Yes | Yes |
 | Mistral | 7B | LLM | Yes | Yes | Yes | - |
 | Qwen-VL | 7B | MLLM | Yes | Yes | Yes | Yes |
+| Qwen2-VL | 2B, 7B, 72B | MLLM | Yes | Yes | Yes | - |
 | DeepSeek-VL | 7B | MLLM | Yes | Yes | Yes | Yes |
 | Baichuan | 7B | LLM | Yes | Yes | Yes | Yes |
 | Baichuan2 | 7B | LLM | Yes | Yes | Yes | Yes |

lmdeploy/version.py

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 # Copyright (c) OpenMMLab. All rights reserved.
 from typing import Tuple

-__version__ = '0.6.0'
+__version__ = '0.6.1'
 short_version = __version__

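The `from typing import Tuple` import above suggests the version string is also parsed into a comparable tuple elsewhere in this file. A minimal sketch of such a parser, written against the bumped string; the helper name and its handling of non-numeric parts are assumptions, not the file's actual body:

```python
from typing import Tuple

__version__ = '0.6.1'
short_version = __version__


def parse_version_info(version_str: str) -> Tuple:
    """Split '0.6.1' into (0, 6, 1) so releases can be compared as tuples.

    Numeric components become ints; anything else (e.g. an 'rc1' suffix)
    is kept as a string. Hypothetical helper, shown for illustration only.
    """
    return tuple(int(p) if p.isdigit() else p for p in version_str.split('.'))


version_info = parse_version_info(__version__)  # (0, 6, 1)
```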
