Priority
P2-High
OS type
Ubuntu
Hardware type
Xeon-SPR
Installation method
Deploy method
Running nodes
Single Node
What's the version?
https://hub.docker.com/layers/opea/chatqna/latest/images/sha256-b6fe31dd33dc819054b6ae850ef45e1572d62e80305164759bed852638a0c691?context=explore
Description
When using the default compose.yaml file for Xeon on IBM Cloud (a bx3d-16x80 instance with 16 vCPUs, 80 GB RAM, and a 100 GB boot volume), the output for the question "What is the revenue of Nike in 2023?" is not as expected, nor close to acceptable. The same happens with the question "What is the capital city of France?".
Attaching the output logs:
Question: What is the revenue of Nike in 2023?
default_output.txt
Question: What is the capital of France?
default_output_paris.txt
If you limit the tokens, the output is simply truncated and the user still does not get the answer:
Question: What is the capital of France? (max tokens=10)
truncated.txt
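For context, the truncated run above caps generation via a `max_tokens` field in the request body. A minimal sketch of such a payload, assuming the ChatQnA gateway accepts an OpenAI-style schema (field names here are assumptions, not confirmed from the logs):

```shell
# Sketch of a request body with a low token cap; with max_tokens=10 the
# model's answer is cut off mid-sentence rather than shortened sensibly.
PAYLOAD='{"messages": "What is the capital of France?", "max_tokens": 10}'
echo "$PAYLOAD"
```

Raising the cap only lengthens the rambling output; it does not make the answer more direct, which is why capping tokens is not a fix here.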
Reproduce steps
git clone https://github.com/opea-project/GenAIExamples.git
cd GenAIExamples/ChatQnA
cd docker_compose/intel/cpu/xeon/
source set_env.sh
docker compose up -d
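Once the stack is up, the failing question can be replayed against the ChatQnA megaservice. The command below is only printed, not executed, since it needs the running stack; port 8888 and the /v1/chatqna path are assumptions taken from the default Xeon compose setup:

```shell
# Build and display the curl command used to reproduce the bad answer.
# HOST, the port (8888), and the /v1/chatqna path are assumptions based
# on the default ChatQnA Xeon compose configuration.
HOST=localhost
CMD="curl -s http://${HOST}:8888/v1/chatqna -H 'Content-Type: application/json' -d '{\"messages\": \"What is the revenue of Nike in 2023?\"}'"
echo "$CMD"
```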
Raw log
No response