Description
System Info
Using the vLLM code example, the inference result is always just "Grounded de". What is the cause?
Who can help?
No response
Information
- The official example scripts
- My own modified scripts and tasks
Reproduction
```python
from PIL import Image
from vllm import LLM, SamplingParams

model_name = "THUDM/cogagent-9b-20241220"


def process_inputs():
    # Build the CogAgent query: task, history, platform, and answer format.
    task = "Mark emails as read"
    platform_str = "(Platform: Mac)\n"
    history_str = "\nHistory steps: "
    format_str = "(Answer in Action-Operation-Sensitive format.)"
    query = f"Task: {task}{history_str}\n{platform_str}{format_str}"
    return query


llm = LLM(model=model_name,
          tensor_parallel_size=1,
          max_model_len=8192,
          trust_remote_code=True,
          enforce_eager=True)

# Stop generation at the GLM-4 special tokens.
stop_token_ids = [151329, 151336, 151338]
sampling_params = SamplingParams(temperature=0.2,
                                 max_tokens=1024,
                                 stop_token_ids=stop_token_ids)

prompt = process_inputs()
image = Image.open("your image.png").convert('RGB')

inputs = {
    "prompt": prompt,
    "multi_modal_data": {
        "image": image
    },
}

outputs = llm.generate(inputs, sampling_params=sampling_params)
for o in outputs:
    generated_text = o.outputs[0].text
    print(generated_text)
```
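When debugging truncated or odd outputs, it can help to check the prompt string in isolation, without loading the model. Below is a minimal, vLLM-free sketch of the prompt-construction step; the `build_query` helper, its `history` parameter, and the per-step numbering are assumptions for illustration, not the official CogAgent template:

```python
def build_query(task, history=None, platform="Mac"):
    """Build a CogAgent-style query string.

    NOTE: the history-step formatting here is an assumption for
    illustration; verify it against the official CogAgent prompt
    template before relying on it.
    """
    platform_str = f"(Platform: {platform})\n"
    format_str = "(Answer in Action-Operation-Sensitive format.)"
    steps = history or []
    history_str = "\nHistory steps: " + "".join(
        f"\n{i}. {step}" for i, step in enumerate(steps)
    )
    return f"Task: {task}{history_str}\n{platform_str}{format_str}"


# Inspect the exact string that would be fed to the model.
print(build_query("Mark emails as read"))
```

Printing the assembled prompt makes it easy to spot a malformed template (missing newlines, wrong format directive) before suspecting the sampling parameters or stop tokens.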
Expected behavior
The vLLM inference output is always just "Grounded de". What could be causing this?