Inference Postprocessor
Version: 0.0.10
View in Studio: https://ml.azure.com/registries/azureml/components/inference_postprocessor/version/0.0.10
Inputs

Name | Description | Type | Default | Optional | Enum |
---|---|---|---|---|---|
prediction_dataset | A file that contains predicted values. | uri_file | | False | |
prediction_column_name | Key in the prediction dataset that contains predictions. | string | | False | |
ground_truth_dataset | A file that contains the ground truth. | uri_file | | True | |
ground_truth_column_name | Key in the ground truth dataset that contains the ground truth. Required if ground_truth_dataset is provided. | string | | True | |
additional_columns | Name(s) of additional columns that could be helpful for computing some metrics, separated by a comma (","). | string | | True | |
remove_prefixes | A comma-separated list of string prefixes to be removed from the inference results, in sequence. Example: for the inference string "###>>>Hello world." and prefixes "###,>>>", the output is "Hello world." (see the cleanup sketch below this table). | string | | True | |
separator | The separator used in few_shot patterns, for example "###". If provided, the response is split on this separator and only the first part is kept. Example: "This is the first part ### This is the second part" results in "This is the first part". | string | | True | |
find_first | A comma-separated list of strings to search for in the inference results. The first occurrence of each string is located, and the occurrence with the minimum index is returned. Example: find_first = "positive,negative" with completion = "This is a positive example, not negative" outputs "positive". | string | | True | |
extract_number | If the inference results contain a number, extract the first or last number as a string. Example: for prediction = "Adding 0.3 to 1,000 gives 1,000.3", extract_number = "first" outputs "0.3" and extract_number = "last" outputs "1000.3". | string | | True | ['first', 'last'] |
regex_expr | A regular expression to extract the answer from the inference results. The pattern must contain a group to be extracted; the first group of the first match is used. Example: "\n\nThe answer is: (\d)." | string | | True | |
strip_characters | A set of characters to remove from the beginning or end of the extracted answer. Applied at the very end of the extraction process. | string | | True | |
label_map | JSON-serialized dictionary used for mapping. Must contain the key-value pair "column_name": "<actual_column_name>" identifying the column whose values need mapping, followed by key-value pairs defining the id-to-label or label-to-id mapping (see the mapping example below the table). Example format: {"column_name": "label", "0": "NEUTRAL", "1": "ENTAILMENT", "2": "CONTRADICTION"} | string | | True | |
template | Jinja template containing the logic to extract the prediction. For multiple predictions, the logic must output a list of formatted predictions (see the Jinja example below the table). Example: for prediction = ["The answer is phone.", "The answer is cellular."], the template should extract and output ["phone", "cellular"]. | string | | True | |
script_path | Path to a custom postprocessor Python script used to extract the prediction. This [base template](https://github.com/Azure/azureml-assets/tree/main/assets/aml-benchmark/scripts/custom_inference_postprocessors/base_postprocessor_template.py) should be used to create a custom postprocessor script. | uri_file | | True | |
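The inputs remove_prefixes, separator, find_first, extract_number, regex_expr, and strip_characters are all text-cleanup steps applied to the raw inference output. The snippet below is a minimal sketch of how these options can be interpreted, written against the examples in the table above; the function `cleanup_prediction` and the exact ordering of the steps are illustrative assumptions, not the component's actual code.

```python
import re
from typing import Optional


def cleanup_prediction(text: str,
                       remove_prefixes: Optional[str] = None,
                       separator: Optional[str] = None,
                       find_first: Optional[str] = None,
                       extract_number: Optional[str] = None,
                       regex_expr: Optional[str] = None,
                       strip_characters: Optional[str] = None) -> str:
    """Illustrative sketch of the cleanup inputs; not the component's implementation."""
    # remove_prefixes: strip each comma-separated prefix in sequence.
    if remove_prefixes:
        for prefix in remove_prefixes.split(","):
            if text.startswith(prefix):
                text = text[len(prefix):]
    # separator: keep only the part before the first separator.
    if separator:
        text = text.split(separator)[0]
    # find_first: return the candidate whose first occurrence has the minimum index.
    if find_first:
        candidates = [(text.find(c), c) for c in find_first.split(",") if c in text]
        if candidates:
            text = min(candidates)[1]
    # extract_number: pull the first or last number, returned as a string.
    if extract_number:
        numbers = re.findall(r"-?\d+(?:,\d{3})*(?:\.\d+)?", text)
        if numbers:
            picked = numbers[0] if extract_number == "first" else numbers[-1]
            text = picked.replace(",", "")
    # regex_expr: use the first group of the first match.
    if regex_expr:
        match = re.search(regex_expr, text)
        if match:
            text = match.group(1)
    # strip_characters: trim the given characters from both ends, last of all.
    if strip_characters:
        text = text.strip(strip_characters)
    return text


# Example usage matching the table's examples (hypothetical values):
print(cleanup_prediction("###>>>Hello world.", remove_prefixes="###,>>>"))              # Hello world.
print(cleanup_prediction("Adding 0.3 to 1,000 gives 1,000.3", extract_number="last"))   # 1000.3
```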
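label_map remaps encoded labels in one column of the dataset. Below is a minimal sketch of how the JSON-serialized mapping from the table's example could be applied to a single .jsonl record; the record contents are hypothetical and the application logic is an assumption, not the component's code.

```python
import json

# JSON-serialized label_map, as it would be passed to the component.
label_map = '{"column_name": "label", "0": "NEUTRAL", "1": "ENTAILMENT", "2": "CONTRADICTION"}'

mapping = json.loads(label_map)
column = mapping.pop("column_name")          # which column to remap, e.g. "label"

record = {"label": "2", "prediction": "2"}   # one hypothetical .jsonl row
record[column] = mapping.get(record[column], record[column])
print(record)                                # {'label': 'CONTRADICTION', 'prediction': '2'}
```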
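template accepts a Jinja template whose rendered output is the extracted prediction. The fragment below only illustrates that kind of template by rendering it directly with the jinja2 package; the variable name `prediction` and the "The answer is ..." pattern are assumptions for illustration, not the component's documented contract.

```python
from jinja2 import Template

# Hypothetical template: keep whatever follows "The answer is " and drop the trailing period.
template_str = '{{ prediction.split("The answer is ")[-1].rstrip(".") }}'

print(Template(template_str).render(prediction="The answer is phone."))  # phone
```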
Outputs

Name | Description | Type |
---|---|---|
output_dataset_result | Path to output the post-processed result as a .jsonl file. | uri_file |
Environment

azureml://registries/azureml/environments/model-evaluation/labels/latest
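The sketch below shows one way to wire this component into an Azure ML pipeline with the azure-ai-ml SDK, loading it from the azureml registry referenced in the Studio link above. The column names, cleanup values, and pipeline structure are placeholder assumptions, not prescribed usage.

```python
from azure.ai.ml import MLClient, Input, dsl
from azure.identity import DefaultAzureCredential

# Client scoped to the "azureml" registry that hosts the component.
registry_client = MLClient(credential=DefaultAzureCredential(), registry_name="azureml")
postprocessor = registry_client.components.get(name="inference_postprocessor", version="0.0.10")


@dsl.pipeline(description="Post-process raw inference output before metric computation")
def postprocess_pipeline(predictions: Input, ground_truth: Input):
    step = postprocessor(
        prediction_dataset=predictions,
        prediction_column_name="prediction",   # hypothetical column name
        ground_truth_dataset=ground_truth,
        ground_truth_column_name="label",      # hypothetical column name
        remove_prefixes="###,>>>",             # placeholder cleanup values
        strip_characters=" .",
    )
    return {"postprocessed": step.outputs.output_dataset_result}
```

The resulting pipeline can then be submitted with a workspace-scoped MLClient, passing the prediction and ground-truth .jsonl files as uri_file inputs.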