
# Model Evaluation Pipeline

Component: `model_evaluation_pipeline`

## Overview

Pipeline component for model evaluation on supported tasks. It generates predictions with the given model, then computes performance metrics to score model quality for the supported tasks.

Version: 0.0.36

## Tags

type: evaluation, sub_type: subgraph

View in Studio: https://ml.azure.com/registries/azureml/components/model_evaluation_pipeline/version/0.0.36
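
As a point of reference, here is a minimal sketch of loading this component from the shared `azureml` registry with the azure-ai-ml Python SDK. The component name and version come from this page; the credential and client setup are assumptions about your environment.

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient

# Client scoped to the shared "azureml" registry that hosts this component.
registry_client = MLClient(credential=DefaultAzureCredential(), registry_name="azureml")

# Fetch the pipeline component by name and version (values from this page).
model_evaluation_pipeline = registry_client.components.get(
    name="model_evaluation_pipeline", version="0.0.36"
)
print(model_evaluation_pipeline.display_name, model_evaluation_pipeline.version)
```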

## Inputs

| Name | Description | Type | Default | Optional | Enum |
| ---- | ----------- | ---- | ------- | -------- | ---- |
| compute_name | | string | serverless | | |
| instance_type | | string | STANDARD_NC24S_V3 | | |

### Model prediction

| Name | Description | Type | Default | Optional | Enum |
| ---- | ----------- | ---- | ------- | -------- | ---- |
| task | Task type | string | tabular-classification | | ['tabular-classification', 'tabular-classification-multilabel', 'tabular-regression', 'text-classification', 'text-classification-multilabel', 'text-named-entity-recognition', 'text-summarization', 'question-answering', 'text-translation', 'text-generation', 'fill-mask', 'image-classification', 'image-classification-multilabel', 'chat-completion', 'image-object-detection', 'image-instance-segmentation'] |
| test_data | Test data | uri_folder | | False | |
| mlflow_model | MLflow model (could be a registered model or part of another pipeline) | mlflow_model | | False | |
| label_column_name | Label column name in the provided test dataset (Ex: label) | string | | True | |
| input_column_names | Input column names in the provided test dataset (Ex: column1). Add comma-delimited values in case of multiple input columns (Ex: column1,column2) | string | | True | |
| device | | string | auto | False | ['auto', 'cpu', 'gpu'] |
| batch_size | | integer | | True | |
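
For illustration only, a hedged sketch of wiring these prediction inputs into a pipeline job with the azure-ai-ml SDK. The dataset path, model reference, task type, and column names below are placeholder assumptions, and `model_evaluation_pipeline` is the component object loaded from the registry in the earlier snippet.

```python
from azure.ai.ml import Input
from azure.ai.ml.dsl import pipeline

# Placeholder test dataset location; replace with your own uri_folder path.
test_data = Input(
    type="uri_folder",
    path="azureml://datastores/workspaceblobstore/paths/eval/test_data/",
)

@pipeline()
def evaluate_model():
    # Call the registry component like a function, passing the inputs from the table above.
    eval_step = model_evaluation_pipeline(
        task="text-classification",
        test_data=test_data,
        mlflow_model=Input(type="mlflow_model", path="azureml:my-registered-model:1"),
        label_column_name="label",
        input_column_names="text",
        device="auto",
    )
    # Expose the component output as a pipeline-level output.
    return {"evaluation_result": eval_step.outputs.evaluation_result}

pipeline_job = evaluate_model()
```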

### Compute metrics

| Name | Description | Type | Default | Optional | Enum |
| ---- | ----------- | ---- | ------- | -------- | ---- |
| evaluation_config | Additional parameters required for evaluation. See how to create a config here. | uri_file | | True | |
| evaluation_config_params | JSON-serialized string of evaluation_config | string | | True | |
| openai_config_params | Required OpenAI parameters for calculating GPT-based metrics for the QnA task | string | | True | |
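
To illustrate the `evaluation_config_params` input, here is a small sketch of passing metric settings as a JSON-serialized string. The specific keys shown (for example, `metrics`) are assumptions about a typical config; the accepted schema is described in the evaluation_config documentation referenced above.

```python
import json

# Hypothetical evaluation settings; accepted keys depend on the task and config schema.
eval_config = {"metrics": ["accuracy", "f1_score_macro"]}

# The component expects a JSON-serialized string for evaluation_config_params.
evaluation_config_params = json.dumps(eval_config)
```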

## Outputs

| Name | Description | Type |
| ---- | ----------- | ---- |
| evaluation_result | Output dir to save the evaluation result | uri_folder |
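
Once the job completes, the `evaluation_result` folder can be retrieved by its output name. A minimal sketch, assuming `pipeline_job` was built as in the earlier snippet and that a workspace-scoped `MLClient` named `ml_client` is available; the experiment name and download path are arbitrary examples.

```python
# Submit the pipeline job and stream its logs until completion.
submitted = ml_client.jobs.create_or_update(pipeline_job, experiment_name="model-evaluation")
ml_client.jobs.stream(submitted.name)

# Download the evaluation_result output folder produced by the component.
ml_client.jobs.download(
    name=submitted.name,
    output_name="evaluation_result",
    download_path="./eval_output",
)
```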