# Benchmarking components documentation
- **batch_benchmark_config_generator**: Generates the config for the batch score component.
- Components for batch endpoint inference.
- **batch_benchmark_inference_claude**: Components for batch endpoint inference.
- **batch_benchmark_inference_with_inference_compute**: Components for batch endpoint inference with inference compute support.
- Prepares the JSONL file and the endpoint for the batch inference component.
- Output formatter for batch inference output.
- Resource manager for batch inference.
- Component for benchmarking an embedding model via MTEB.
- Aggregates quality metrics, performance metrics, and all of the metadata from the pipeline, and adds them to the root run.
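As an illustration of the aggregation step, here is a minimal sketch; the function name and the key-prefix scheme are assumptions for the example, not the component's actual interface:

```python
def aggregate_metrics(quality, performance, metadata):
    """Merge quality metrics, performance metrics, and pipeline
    metadata into one flat dict, prefixing keys by their source
    so same-named metrics cannot collide.
    (Hypothetical helper; prefixes chosen for illustration.)"""
    merged = {}
    for prefix, metrics in (("quality", quality),
                            ("perf", performance),
                            ("meta", metadata)):
        for key, value in metrics.items():
            merged[f"{prefix}.{key}"] = value
    return merged

# Example: combine metrics from two hypothetical upstream steps.
combined = aggregate_metrics({"accuracy": 0.9},
                             {"latency_s": 1.2},
                             {"model": "my-model"})
```

A flat, prefixed dict like this is convenient for logging everything to a single (root) run in one call.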
- Performs performance metric post processing using data from a model inference run.
- Downloads the dataset onto the blob store.
- Dataset preprocessor.
- Samples a dataset containing JSONL file(s).
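To show what sampling a JSONL dataset involves, here is a minimal sketch; the function name, seeding, and random-sampling strategy are assumptions for the example, not the component's actual behavior:

```python
import json
import random

def sample_jsonl(lines, n, seed=0):
    """Parse JSONL lines and return a random sample of up to n records.
    A fixed seed keeps the sample reproducible across runs.
    (Hypothetical helper, for illustration only.)"""
    records = [json.loads(line) for line in lines if line.strip()]
    rng = random.Random(seed)
    return rng.sample(records, min(n, len(records)))

# Example: sample 2 of 3 records.
data = ['{"q": "a"}', '{"q": "b"}', '{"q": "c"}']
sampled = sample_jsonl(data, 2)
```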
- Inference postprocessor.
- Creates prompts from a given dataset using a Jinja prompt template. Can also create few-shot prompts given a few-shot dataset and the number of shots.
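To sketch the few-shot prompt-crafting idea, here is a simplified stand-in that uses plain Python string formatting instead of Jinja; the function name and template syntax are assumptions for the example, not the component's actual interface:

```python
def craft_prompt(template, example, few_shot_examples=(), n_shots=0):
    """Render the template once per few-shot example, then once for
    the example being scored, joining the pieces into one prompt.
    (Hypothetical helper; the real component uses Jinja templates.)"""
    shots = [template.format(**ex) for ex in few_shot_examples[:n_shots]]
    return "\n\n".join(shots + [template.format(**example)])

# Example: one-shot prompt from a tiny few-shot dataset.
template = "Q: {question}\nA: {answer}"
few_shot = [{"question": "2+2?", "answer": "4"}]
prompt = craft_prompt(template,
                      {"question": "3+3?", "answer": ""},
                      few_shot, n_shots=1)
```

The target example is rendered with an empty answer so the model completes it after the worked shots.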