# components batch_inference_preparer
Prepares the jsonl file and the endpoint for the batch inference component.
Version: 0.0.14
View in Studio: https://ml.azure.com/registries/azureml/components/batch_inference_preparer/version/0.0.14
## Inputs

Name | Description | Type | Default | Optional | Enum
---|---|---|---|---|---|
input_dataset | Input jsonl dataset that contains the prompt. Ignored when the performance test is run. | uri_folder | | True | |
model_type | Type of model. Can be one of ('aoai', 'oss', 'vision_oss', 'claude'). | string | | True | |
batch_input_pattern | The string for the batch input pattern. The input should be the payload format, with each value to substitute written as `###<key>`, where `key` names the input dataset column supplying the value. For example, for a llama text-gen model whose input dataset has a `prompt` column for the payload and a `_batch_request_metadata` column storing the corresponding ground truth, one can use `{"input_data": {"input_string": ["###<prompt>"], "parameters": {"temperature": 0.6, "max_new_tokens": 100, "do_sample": true}}, "_batch_request_metadata": ###<_batch_request_metadata>}` (see the sketch after this table). | string | | False | |
label_column_name | The label column name. | string | | True | |
additional_columns | Name(s) of additional column(s) that could be useful to compute metrics, separated by commas (","). | string | | True | |
is_performance_test | If true, the performance test will be run. | boolean | False | | |
endpoint_url | The endpoint name or URL. | string | | True | |
n_samples | The number of top samples sent to the endpoint. | integer | | True | |
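The substitution behavior described for `batch_input_pattern` can be illustrated with a minimal sketch. This is not the component's source, just an approximation of the documented behavior: it assumes each `###<column>` token in the pattern is replaced by that column's value from every input jsonl row, with non-string values serialized as JSON. The `pattern`, `row`, and `fill_pattern` names are all hypothetical.

```python
import json
import re

# Hypothetical pattern and input row, matching the example in the table above.
pattern = (
    '{"input_data": {"input_string": ["###<prompt>"], '
    '"parameters": {"temperature": 0.6, "max_new_tokens": 100, "do_sample": true}}, '
    '"_batch_request_metadata": ###<_batch_request_metadata>}'
)
row = {
    "prompt": "What is the capital of France?",
    "_batch_request_metadata": {"ground_truth": "Paris"},
}

def fill_pattern(pattern: str, row: dict) -> str:
    """Replace each ###<key> token with the row's value for that key."""
    def substitute(match: re.Match) -> str:
        value = row[match.group(1)]
        # Strings are assumed to splice into an existing quoted slot
        # (e.g. "###<prompt>"); dicts/lists are serialized as JSON.
        return value if isinstance(value, str) else json.dumps(value)
    return re.sub(r"###<([^>]+)>", substitute, pattern)

payload_line = fill_pattern(pattern, row)
print(json.loads(payload_line))  # one request line of the prepared jsonl
```

Each input row thus yields one fully substituted payload line in the prepared jsonl file.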
## Outputs

Name | Description | Type |
---|---|---|
formatted_data | Path to the folder where the payload will be stored. | mltable |
ground_truth_metadata | Path to the folder where the ground truth metadata will be stored. | uri_folder |
## Environment

azureml://registries/azureml/environments/evaluation/labels/latest
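For context, a pipeline built with the `azure-ai-ml` SDK could fetch this component from the `azureml` registry roughly as below. This is a hedged sketch, not taken from the component's documentation: `prepare_pipeline`, the chosen `model_type`, and the sample pattern are illustrative, and the exact call signature should be checked against the Studio link above.

```python
from azure.ai.ml import Input, MLClient, dsl
from azure.identity import DefaultAzureCredential

# Client scoped to the azureml registry, where this component is published.
registry_client = MLClient(credential=DefaultAzureCredential(), registry_name="azureml")
preparer = registry_client.components.get(name="batch_inference_preparer", version="0.0.14")

@dsl.pipeline(description="Prepare payloads for batch inference")
def prepare_pipeline(input_dataset: Input):
    # model_type and batch_input_pattern values are illustrative.
    step = preparer(
        input_dataset=input_dataset,
        model_type="oss",
        batch_input_pattern='{"input_data": {"input_string": ["###<prompt>"], '
                            '"parameters": {"temperature": 0.6}}}',
    )
    return {
        "formatted_data": step.outputs.formatted_data,
        "ground_truth_metadata": step.outputs.ground_truth_metadata,
    }
```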