Multiple models support for LLM TGI #1818

Triggered via pull request November 4, 2024 06:53
@sgurunat synchronize #835
Status: Success
Total duration: 32m 55s
Artifacts: 3

pr-microservice-test.yml

on: pull_request_target
job1 / Get-test-matrix (7s)
Matrix: Microservice-test
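
The run graph corresponds to a common two-stage layout: a small Get-test-matrix job works out which microservices the PR touches, and the Microservice-test job fans out over that matrix on the matching hardware runners. The sketch below shows one way such a pr-microservice-test.yml can be wired; the trigger, job names, and matrix name are taken from this run, while the script path, matrix keys, and step details are assumptions rather than the actual GenAIComps workflow.

```yaml
# Hypothetical sketch of a matrix-driven microservice test workflow.
# The real pr-microservice-test.yml in GenAIComps may differ in detail.
name: MicroService-test

on:
  pull_request_target:
    types: [opened, reopened, ready_for_review, synchronize]

jobs:
  job1:
    name: Get-test-matrix
    runs-on: ubuntu-latest
    outputs:
      run_matrix: ${{ steps.get-matrix.outputs.run_matrix }}
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}
          fetch-depth: 0
      - name: Get test matrix
        id: get-matrix
        run: |
          # Assumed helper script: diffs the PR against the base branch and
          # prints a one-line JSON matrix of changed services and hardware.
          run_matrix=$(bash .github/workflows/scripts/get_test_matrix.sh)
          echo "run_matrix=${run_matrix}" >> $GITHUB_OUTPUT

  Microservice-test:
    needs: [job1]
    strategy:
      matrix: ${{ fromJSON(needs.job1.outputs.run_matrix) }}
    runs-on: ${{ matrix.hardware }}
    steps:
      - uses: actions/checkout@v4
      - name: Run microservice test
        run: bash tests/test_${{ matrix.service }}.sh
```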

Annotations

4 warnings
Microservice-test (embeddings_tei_langchain, intel_cpu)
No files were found with the provided path: /home/sdp/GenAIComps-actions-runner/_work/GenAIComps/GenAIComps/tests/*.log. No artifacts will be uploaded.
Microservice-test (reranks_tei, intel_cpu)
No files were found with the provided path: /home/sdp/GenAIComps-actions-runner/_work/GenAIComps/GenAIComps/tests/*.log. No artifacts will be uploaded.
Microservice-test (llms_text-generation_tgi, intel_cpu)
No files were found with the provided path: /home/sdp/GenAIComps-actions-runner/_work/GenAIComps/GenAIComps/tests/*.log. No artifacts will be uploaded.
Microservice-test (web_retrievers_chroma_langchain, intel_cpu)
No files were found with the provided path: /home/sdp/GenAIComps-actions-runner/_work/GenAIComps/GenAIComps/tests/*.log. No artifacts will be uploaded.
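
All four warnings come from a log-upload step whose glob (tests/*.log under the runner workspace) matched no files. actions/upload-artifact defaults to if-no-files-found: warn, which emits exactly this "No files were found with the provided path" annotation instead of failing the job. Below is a minimal sketch of such a step; the step name, artifact name pattern, and the explicit if-no-files-found value are assumptions, not taken from the actual workflow.

```yaml
# Hypothetical log-upload step behind these annotations.
- name: Publish test logs
  if: always()
  uses: actions/upload-artifact@v4
  with:
    # Artifact names in this run follow a <service>_<hardware> pattern.
    name: ${{ matrix.service }}_${{ matrix.hardware }}
    path: ${{ github.workspace }}/tests/*.log
    # 'warn' (the default) logs an annotation when nothing matches;
    # 'error' would fail the job, 'ignore' would stay silent.
    if-no-files-found: warn
```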

Artifacts

Produced during runtime
Name                                           Size
agent_langchain_intel_hpu                      11.2 KB
llms_faq-generation_tgi_langchain_intel_cpu    6.28 KB
llms_summarization_tgi_langchain_intel_cpu     2.94 KB