# components prompt_crafter
github-actions[bot] edited this page Oct 23, 2024
This component creates prompts from a given dataset using a jinja prompt template. It can also create few-shot prompts, given a few-shot dataset and the number of shots.
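As a rough illustration of the templating step, the sketch below renders the `prompt_pattern` example from the inputs table against one jsonl row using jinja2. This is a minimal approximation for intuition only, not the component's internal code; the variable names are illustrative.

```python
# Minimal sketch of rendering one "completions" prompt from a jsonl row.
# Assumes the jinja2 package is installed; names are illustrative.
import json
from jinja2 import Template

# The prompt_pattern example from the inputs table below.
prompt_pattern = (
    "Question: {{question}}\n"
    " Choices are: (1) {{choices.option[0]}}\n"
    " (2) {{choices.option[1]}}\n"
    " (3) {{choices.option[2]}}\n"
    " (4) {{choices.option[3]}}\n"
)

# One line of the test_data jsonl file.
row = json.loads(
    '{"question": "Example Question?",'
    ' "choices": {"option": ["Answer1", "Answer2", "Answer3", "Answer4"]},'
    ' "answerKey": "D"}'
)

# jinja2 dot-access ({{choices.option[0]}}) works on plain dicts.
prompt = {"prompt": Template(prompt_pattern).render(**row)}
print(prompt["prompt"])
```

Each rendered row becomes one `{"prompt": ...}` record in the completions output format.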
Version: 0.0.12
View in Studio: https://ml.azure.com/registries/azureml/components/prompt_crafter/version/0.0.12
Name | Description | Type | Default | Optional | Enum
---|---|---|---|---|---
prompt_type | Determines the prompt format. This component supports chat and completion models. Completions: {"prompt": "Few shot prompts will go here"} Chat: chat models have 3 roles: system, user and assistant. Example: {"prompt": [{"role": "system", "content": "You are a helpful assistant"}, {"role": "user", "content": "Example chat input"}, {"role": "assistant", "content": "Example chat output"}]} | string | completions | False | ['chat', 'completions']
test_data | The uri file (jsonl) used to generate prompts. | uri_file | | False |
prompt_pattern | The pattern used to generate the prompts. It should be a valid jinja template. Example: for the input data {"question":"Example Question?", "choices":{"option":["Answer1","Answer2","Answer3","Answer4"]}, "answerKey":"D"}, the prompt pattern can be: "Question: {{question}}\n Choices are: (1) {{choices.option[0]}}\n (2) {{choices.option[1]}}\n (3) {{choices.option[2]}}\n (4) {{choices.option[3]}}\n" | string | | False |
few_shot_pattern | The pattern used to generate the few-shot portion of a prompt. It should be a valid jinja template. If this pattern is not provided, few-shot prompts are generated from a concatenation of prompt_pattern and output_pattern. Example: for the input data {"question":"Example Question?", "choices":{"option":["Answer1","Answer2","Answer3","Answer4"]}, "answerKey":"D"}, the few-shot pattern can be: "Question: {{question}}\n Choices are: (1) {{choices.option[0]}}\n (2) {{choices.option[1]}}\n (3) {{choices.option[2]}}\n (4) {{choices.option[3]}}\n Answer: {{answerKey}}" | string | | True |
n_shots | The number of shots to use in the few-shot prompts. The default is 0, which means no few-shot examples are generated. n_shots must be smaller than the size of the few-shot dataset. | integer | 0 | False |
few_shot_data | The uri file (jsonl) used to generate the n-shot prompts. | uri_file | | True |
output_pattern | The jinja template representing the expected output, used for few-shot prompts when n_shots > 0. e.g. {{answerKey}} | string | | False |
few_shot_separator | The separator added between few-shot prompts. | string | | True |
system_message | The description of the task that the assistant should perform. Applicable to chat models only. e.g. "You are a helpful assistant." is added to the system role for chat models: {"role": "system", "content": "You are a helpful assistant"} | string | | True |
prefix | The prefix added to the prompts. e.g. "Question: " | string | | True |
random_seed | Random seed for sampling few-shots; if not specified, 0 is used. | integer | 0 | True |
ground_truth_column_name | Used as the ground truth column if present in the input. If not present, the output_pattern is used as the ground truth. | string | | True |
additional_columns | Any additional columns that would be helpful for computing metrics, if present in the input. If there are multiple such columns, separate them with commas (","). | string | | True |
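To make the interaction of `prefix`, `n_shots`, and `few_shot_separator` concrete, here is a hedged sketch of how a completions prompt could be assembled once the few-shot examples and the test prompt have already been rendered from their jinja templates. The function and variable names are illustrative, not the component's internals.

```python
# Sketch of final prompt assembly for prompt_type="completions".
# Assumes shots (prompt_pattern + output_pattern per few-shot row) and the
# test prompt are already rendered strings; names are illustrative.
import json


def build_completions_prompt(rendered_shots, rendered_test_prompt,
                             few_shot_separator="\n", prefix=""):
    # Few-shot examples come first, joined by few_shot_separator,
    # followed by the test prompt; prefix is prepended to the whole string.
    parts = rendered_shots + [rendered_test_prompt]
    return prefix + few_shot_separator.join(parts)


shots = [
    "Question: 2+2?\nAnswer: 4",
    "Question: 3+3?\nAnswer: 6",
]
prompt = build_completions_prompt(
    shots, "Question: 5+5?\nAnswer:", few_shot_separator="\n\n"
)
# One output record in the completions format.
print(json.dumps({"prompt": prompt}))
```

With `n_shots=0` the shots list is empty and the record reduces to the rendered test prompt alone.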
Name | Description | Type |
---|---|---|
output_file | Output file path where few_shot_prompt data will be written. | uri_file |
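For the chat format, each output line holds a list of role/content messages rather than a single string. The record below sketches a plausible shape for one output line with `n_shots=1`, following the role examples given in the prompt_type row above; how exactly the component splits a few-shot example into user/assistant turns is an assumption here, not confirmed by this page.

```python
# Illustrative shape of one output_file line for prompt_type="chat" with
# n_shots=1. The user/assistant split of the few-shot example is an assumed
# structure, inferred from the prompt_type examples; contents are made up.
import json

record = {
    "prompt": [
        # system_message, if provided
        {"role": "system", "content": "You are a helpful assistant"},
        # one few-shot example (input as user, expected output as assistant)
        {"role": "user", "content": "Question: 2+2?"},
        {"role": "assistant", "content": "4"},
        # the test example rendered from prompt_pattern
        {"role": "user", "content": "Question: 5+5?"},
    ]
}
# Each record is written as one line of the jsonl output_file.
print(json.dumps(record))
```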
Environment: azureml://registries/azureml/environments/model-evaluation/labels/latest