
# Compute Performance Metrics

compute_performance_metrics

## Overview

Performs performance metric post-processing using data from a model inference run.

Version: 0.0.11

View in Studio: https://ml.azure.com/registries/azureml/components/compute_performance_metrics/version/0.0.11
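
The sketch below shows one way to fetch this component from the `azureml` registry with the `azure-ai-ml` Python SDK; the client setup and call pattern are assumptions about a typical SDK workflow, not part of this component's specification.

```python
# Minimal sketch: load the pinned component version from the azureml registry.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Registry-scoped client; the credential choice is an assumption
# (any azure-identity credential works).
registry_client = MLClient(credential=DefaultAzureCredential(), registry_name="azureml")

compute_performance_metrics = registry_client.components.get(
    name="compute_performance_metrics", version="0.0.11"
)
```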

## Inputs

| Name | Description | Type | Default | Optional | Enum |
| ---- | ----------- | ---- | ------- | -------- | ---- |
| performance_data | Data output by model inferencing that contains performance information. | uri_folder | | False | |
| percentiles | Comma-separated list of latency percentiles to calculate. | string | 50,90,99 | True | |
| batch_size_column_name | The name of the column that contains the batch size information. Ex. "batch_size" | string | | False | |
| start_time_column_name | The name of the column that contains the start timestamp in ISO 8601 format. Ex. "start_time_iso" | string | | False | |
| end_time_column_name | The name of the column that contains the end timestamp in ISO 8601 format. Ex. "end_time_iso" | string | | False | |
| input_token_count_column_name | The name of the column that contains the input token count information. Ex. "input_token_count" | string | | True | |
| output_token_count_column_name | The name of the column that contains the output token count information. Ex. "output_token_count" | string | | True | |
| input_char_count_column_name | The name of the column that contains the input character count information. Ex. "input_char_count" | string | | True | |
| output_char_count_column_name | The name of the column that contains the output character count information. Ex. "output_char_count" | string | | True | |
| is_batch_inference_result | If True, tokens per second and requests per second are calculated from the time between the first and last request; if False, they are calculated from individual request times. | boolean | True | False | |
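
As an illustration of how the inputs above fit together, here is a hedged pipeline sketch using the `azure-ai-ml` DSL and the component object loaded in the earlier snippet. The column names simply mirror the examples from the table, and `inference_output` is a placeholder for whatever `uri_folder` your inference step produces; both are assumptions to adapt to your data.

```python
# Hypothetical pipeline wiring; column names follow the examples in the Inputs table
# and must match the schema of your inference output data.
from azure.ai.ml.dsl import pipeline


@pipeline(description="Post-process inference results into performance metrics")
def perf_metrics_pipeline(inference_output):
    perf_step = compute_performance_metrics(
        performance_data=inference_output,  # uri_folder produced by the inference run
        percentiles="50,90,99",             # latency percentiles (same as the default)
        batch_size_column_name="batch_size",
        start_time_column_name="start_time_iso",
        end_time_column_name="end_time_iso",
        input_token_count_column_name="input_token_count",
        output_token_count_column_name="output_token_count",
        # True: throughput is computed over the span from first to last request,
        # rather than from individual request durations.
        is_batch_inference_result=True,
    )
    return {"performance_result": perf_step.outputs.performance_result}
```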

## Outputs

| Name | Description | Type |
| ---- | ----------- | ---- |
| performance_result | Path to the file where the calculated performance metric results will be stored. | uri_file |
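
To run the pipeline and retrieve the `performance_result` file, one option is to submit it to a workspace-scoped `MLClient` and download the named output; the workspace identifiers, the `Input` path, and the job-handling calls below are assumptions about a typical setup rather than anything this page prescribes.

```python
# Hypothetical submission and output download; all identifiers are placeholders.
from azure.ai.ml import Input, MLClient
from azure.identity import DefaultAzureCredential

workspace_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Placeholder pointer to the folder produced by your inference job.
inference_data = Input(
    type="uri_folder",
    path="azureml://datastores/workspaceblobstore/paths/inference-output/",
)

job = workspace_client.jobs.create_or_update(
    perf_metrics_pipeline(inference_output=inference_data)
)
workspace_client.jobs.stream(job.name)

# Download the uri_file output written by the component.
workspace_client.jobs.download(
    name=job.name, output_name="performance_result", download_path="./perf_metrics"
)
```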

## Environment

`azureml://registries/azureml/environments/model-evaluation/labels/latest`
