
github-actions[bot] edited this page Nov 6, 2024 · 41 revisions

# Import model

`import_model`

## Overview

Import a model into a workspace or a registry.

Version: 0.0.39

## Tags

Preview

View in Studio: https://ml.azure.com/registries/azureml/components/import_model/version/0.0.39

## Inputs

### Pipeline-specific compute

| Name | Description | Type | Default | Optional | Enum |
| ---- | ----------- | ---- | ------- | -------- | ---- |
| compute | Common compute for model download, MLflow conversion, and registration, e.g. 'FT-Cluster' if your compute is named 'FT-Cluster'. Special characters such as \ and ' are invalid in the parameter value. If a compute name is provided, the instance_type field is ignored and that cluster is used | string | serverless | True | |
| instance_type | Instance type used by the component with serverless compute, e.g. STANDARD_NC6s_v3. compute must be set to 'serverless' for instance_type to take effect | string | STANDARD_NC6s_v3 | True | |
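The precedence between these two inputs can be summed up as: a named cluster always wins, and `instance_type` only matters when `compute` is left at 'serverless'. A minimal sketch of that resolution logic (the helper name `resolve_compute` is hypothetical, not part of the component):

```python
def resolve_compute(compute: str = "serverless",
                    instance_type: str = "STANDARD_NC6s_v3") -> dict:
    """Mirror the documented precedence: a named cluster overrides
    instance_type; instance_type applies only to serverless compute."""
    if compute != "serverless":
        # A cluster name was given, so instance_type is ignored.
        return {"compute": compute}
    return {"compute": "serverless", "instance_type": instance_type}

# A named cluster takes precedence over any instance type.
print(resolve_compute("FT-Cluster", "STANDARD_NC6s_v3"))
# → {'compute': 'FT-Cluster'}
```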

### Inputs for model download

| Name | Description | Type | Default | Optional | Enum |
| ---- | ----------- | ---- | ------- | -------- | ---- |
| model_source | Storage container from which the model is sourced | string | Huggingface | | ['AzureBlob', 'GIT', 'Huggingface'] |
| model_id | A valid model id for the selected model source. For example, specify bert-base-uncased to import the Hugging Face BERT base uncased model. Specify the complete URL if GIT or AzureBlob is selected as model_source | string | | | |
| model_flavor | MLflow flavor to which the model is converted | string | HFTransformersV2 | False | ['HFTransformersV2', 'OSS'] |
| model_framework | Framework from which the model is imported | string | Huggingface | False | ['Huggingface', 'MMLab', 'llava', 'AutoML'] |
| vllm_enabled | Enable vLLM in the converted model | boolean | False | False | |
| token | If set, used to access private models or authenticate the user. For example, you can get a token for a private Hugging Face model by creating a Hugging Face account, accepting the conditions for the models to be downloaded, and creating an access token in the browser. For more details see https://huggingface.co/docs/hub/security-tokens | string | | True | |
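As an illustration, a plausible set of download inputs for importing the Hugging Face `bert-base-uncased` model might look like the following. This is a sketch of input values only, not a call into any SDK:

```python
# Hypothetical input values for importing a public Hugging Face model.
download_inputs = {
    "model_source": "Huggingface",      # one of AzureBlob, GIT, Huggingface
    "model_id": "bert-base-uncased",    # a full URL is required for GIT/AzureBlob
    "model_flavor": "HFTransformersV2",
    "model_framework": "Huggingface",
    "vllm_enabled": False,
    # "token" would be set here for a private model.
}

# GIT and AzureBlob sources expect a complete URL in model_id.
assert download_inputs["model_source"] in {"AzureBlob", "GIT", "Huggingface"}
```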

### Inputs for MLflow conversion

| Name | Description | Type | Default | Optional | Enum |
| ---- | ----------- | ---- | ------- | -------- | ---- |
| license_file_path | Path to the license file | uri_file | | True | |
| task_name | The Hugging Face task on which the model was trained | string | | True | ['chat-completion', 'fill-mask', 'token-classification', 'question-answering', 'summarization', 'text-generation', 'text2text-generation', 'text-classification', 'translation', 'image-classification', 'image-classification-multilabel', 'image-object-detection', 'image-instance-segmentation', 'image-to-text', 'text-to-image', 'text-to-image-inpainting', 'image-text-to-text', 'image-to-image', 'zero-shot-image-classification', 'mask-generation', 'video-multi-object-tracking', 'visual-question-answering'] |
| hf_config_args | Args used to load the Hugging Face model config, e.g. trust_remote_code=True; | string | | True | |
| hf_tokenizer_args | Args used to load the Hugging Face model tokenizer, e.g. trust_remote_code=True, device_map=auto | string | | True | |
| hf_model_args | Args used to load the Hugging Face model, e.g. trust_remote_code=True, device_map=auto, low_cpu_mem_usage=True | string | | True | |
| hf_pipeline_args | Pipeline args used while loading the Hugging Face model. Do not use quotes. If a value cannot be eval'ed, it is treated as a string, e.g. trust_remote_code=True, device_map=auto | string | | True | |
| hf_config_class | The AutoConfig class may not be sufficient to load the config for some models. Use this parameter to pass a Config class name verbatim | string | | True | |
| hf_model_class | The AutoModel classes may not be sufficient to load some models. Use this parameter to pass a Model class name verbatim | string | | True | |
| hf_tokenizer_class | The AutoTokenizer class may not be sufficient to load the tokenizer for some models. Use this parameter to pass a Tokenizer class name verbatim | string | | True | |
| hf_use_experimental_features | Enable experimental features for Hugging Face MLflow model conversion | boolean | False | True | |
| extra_pip_requirements | Extra pip dependencies that the MLflow model should capture as part of conversion; used to create the environment when loading the model for inference. Do not use quotes, e.g. pkg1==1.0, pkg2, pkg3==1.0 | string | | True | |
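The `hf_*_args` inputs above share the same unquoted `key=value, key=value` format, where values that can be eval'ed become Python objects and everything else stays a string. A small parser illustrating that documented behaviour (the function `parse_hf_args` is a sketch, not the component's actual implementation; `ast.literal_eval` is used here as a safe stand-in for eval):

```python
import ast

def parse_hf_args(arg_string: str) -> dict:
    """Parse 'trust_remote_code=True, device_map=auto' into a dict.
    Values that literal-eval cleanly (True, 1.5, ...) become Python
    objects; anything else is kept as a plain string."""
    result = {}
    for pair in arg_string.split(","):
        pair = pair.strip().rstrip(";")
        if not pair:
            continue
        key, _, value = pair.partition("=")
        try:
            result[key.strip()] = ast.literal_eval(value.strip())
        except (ValueError, SyntaxError):
            result[key.strip()] = value.strip()
    return result

print(parse_hf_args("trust_remote_code=True, device_map=auto"))
# → {'trust_remote_code': True, 'device_map': 'auto'}
```

`literal_eval` only accepts Python literals, so identifiers like `auto` fall through to the string branch, matching the "taken as a string" behaviour described above.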

### Inputs for MLflow local validation

| Name | Description | Type | Default | Optional | Enum |
| ---- | ----------- | ---- | ------- | -------- | ---- |
| local_validation_test_data | Test data for MLflow local validation. Validation is skipped if test data is not provided | uri_file | | True | |
| local_validation_column_rename_map | Mapping of local validation test data column names to rename before inferencing, e.g. col1:ren1; col2:ren2; col3:ren3 | string | | True | |
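The rename map uses a `src:dst; src:dst` format. A minimal sketch of parsing it into a dictionary (the helper name `parse_rename_map` is hypothetical, not part of the component):

```python
def parse_rename_map(mapping: str) -> dict:
    """Parse 'col1:ren1; col2:ren2; col3:ren3' into {'col1': 'ren1', ...}."""
    renames = {}
    for pair in mapping.split(";"):
        pair = pair.strip()
        if not pair:
            continue
        src, _, dst = pair.partition(":")
        renames[src.strip()] = dst.strip()
    return renames

print(parse_rename_map("col1:ren1; col2:ren2; col3:ren3"))
# → {'col1': 'ren1', 'col2': 'ren2', 'col3': 'ren3'}
```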

### Inputs for model registration

| Name | Description | Type | Default | Optional | Enum |
| ---- | ----------- | ---- | ------- | -------- | ---- |
| custom_model_name | Model name to use for registration. If the name already exists, the version is auto-incremented | string | | True | |
| model_version | Model version in the workspace/registry. If the same model name and version already exist, the version is auto-incremented | string | | True | |
| model_description | Description of the model shown in the AzureML registry or workspace | string | | True | |
| registry_name | Name of the AzureML asset registry where the model will be registered. The model is registered in the workspace if this is unspecified | string | | True | |
| model_metadata | A JSON or YAML file containing model metadata conforming to the Model V2 contract | uri_file | | True | |
| update_existing_model | If set to true, updates the existing model; if set to false, creates a new model | boolean | False | True | |
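As a hedged illustration of the `model_metadata` input, a minimal JSON file carrying common Model V2 fields such as `description`, `tags`, and `properties` might be produced like this. The field choice here is an assumption; consult the Model V2 contract for the authoritative schema:

```python
import json
import os
import tempfile

# Hypothetical metadata; description/tags/properties are common
# Model V2 fields, but check the contract for the full schema.
metadata = {
    "description": "bert-base-uncased imported from Hugging Face",
    "tags": {"task": "fill-mask", "license": "apache-2.0"},
    "properties": {"source": "Huggingface"},
}

path = os.path.join(tempfile.mkdtemp(), "model_metadata.json")
with open(path, "w") as f:
    json.dump(metadata, f, indent=2)

# The file round-trips cleanly as JSON before being passed as uri_file.
with open(path) as f:
    print(json.load(f)["tags"]["task"])
# → fill-mask
```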

## Pipeline outputs

| Name | Description | Type | Default | Optional | Enum |
| ---- | ----------- | ---- | ------- | -------- | ---- |

## Outputs

| Name | Description | Type |
| ---- | ----------- | ---- |
| mlflow_model_folder | Output path for the converted MLflow model | mlflow_model |
| model_registration_details | Output folder with a JSON file capturing the transformations applied above and the registration details | uri_folder |