
# Convert models to MLflow

## convert_model_to_mlflow

### Overview

Converts models from supported frameworks (Hugging Face transformers, MMLab, llava, AutoML) into the MLflow model packaging format.

Version: 0.0.35

### Tags

Preview

View in Studio: https://ml.azure.com/registries/azureml/components/convert_model_to_mlflow/version/0.0.35

### Inputs

| Name | Description | Type | Default | Optional | Enum |
| ---- | ----------- | ---- | ------- | -------- | ---- |
| model_id | Hugging Face model ID (`https://huggingface.co/<model_id>`). Required for the Huggingface model framework. Can be provided here or in the model_download_metadata JSON file. | string | | True | |
| model_flavor | MLflow flavor to which the model is converted. | string | HFTransformersV2 | False | ['HFTransformersV2', 'OSS'] |
| vllm_enabled | Enable vLLM in the converted model. | boolean | False | False | |
| model_framework | Framework from which the model is imported. | string | Huggingface | False | ['Huggingface', 'MMLab', 'llava', 'AutoML'] |
| task_name | Hugging Face task the model was trained on. Required for the transformers MLflow flavor. Can be provided here or in the model_download_metadata JSON file. | string | | True | ['chat-completion', 'fill-mask', 'token-classification', 'question-answering', 'summarization', 'text-generation', 'text2text-generation', 'text-classification', 'translation', 'image-classification', 'image-classification-multilabel', 'image-object-detection', 'image-instance-segmentation', 'image-to-text', 'text-to-image', 'text-to-image-inpainting', 'image-text-to-text', 'image-to-image', 'zero-shot-image-classification', 'mask-generation', 'video-multi-object-tracking', 'visual-question-answering', 'image-feature-extraction'] |
| hf_config_args | Args used to load the Hugging Face model config, e.g. trust_remote_code=True | string | | True | |
| hf_tokenizer_args | Args used to load the Hugging Face tokenizer, e.g. trust_remote_code=True, device_map=auto | string | | True | |
| hf_model_args | Args used to load the Hugging Face model, e.g. trust_remote_code=True, device_map=auto, low_cpu_mem_usage=True | string | | True | |
| hf_pipeline_args | Pipeline args used while loading the Hugging Face model. Do not use quotes; if a value cannot be eval'ed, it is treated as a string. e.g. trust_remote_code=True, device_map=auto | string | | True | |
| hf_config_class | AutoConfig may not be sufficient to load the config for some models. Use this parameter to pass the config class name as-is. | string | | True | |
| hf_model_class | AutoModel classes may not be sufficient to load some models. Use this parameter to pass the model class name as-is. | string | | True | |
| hf_tokenizer_class | AutoTokenizer may not be sufficient to load the tokenizer for some models. Use this parameter to pass the tokenizer class name as-is. | string | | True | |
| hf_use_experimental_features | Enable experimental features for Hugging Face MLflow model conversion. | boolean | False | True | |
| extra_pip_requirements | Extra pip dependencies that the MLflow model should capture during conversion; they are used to create the environment when the model is loaded for inference. Do not use quotes, e.g. pkg1==1.0, pkg2, pkg3==1.0 | string | | True | |
| inference_base_image | Docker image to use for model inference. This image ID is assigned to the azureml.base_image key in the metadata section of the MLmodel file. | string | | True | |
| model_download_metadata | JSON file containing model download details. | uri_file | | True | |
| model_path | Path to the model. | uri_folder | | True | |
| model_path_mmd | Path to the MMD model. | uri_folder | | True | |
| license_file_path | Path to the license file. | uri_file | | True | |
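The sketch below shows one way these inputs might be wired up from the azure-ai-ml Python SDK. It is a minimal, unofficial example: the workspace configuration, compute cluster name, model_id, task_name, and surrounding pipeline are all assumptions, not part of this component's documentation.

```python
# Minimal sketch (not an official sample): invoking convert_model_to_mlflow
# from the azure-ai-ml SDK. Workspace config, compute name, model_id and
# task_name below are assumptions.
from azure.ai.ml import MLClient
from azure.ai.ml.dsl import pipeline
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
ml_client = MLClient.from_config(credential=credential)                 # workspace client
registry_client = MLClient(credential=credential, registry_name="azureml")

# Load the published component from the azureml registry.
convert_model = registry_client.components.get(
    name="convert_model_to_mlflow", version="0.0.35"
)

@pipeline(default_compute="cpu-cluster")  # hypothetical compute cluster name
def convert_pipeline():
    # Only the inputs needed for a Hugging Face text model are set here;
    # the remaining inputs are optional (see the table above).
    convert_job = convert_model(
        model_id="bert-base-uncased",             # hypothetical model id
        model_flavor="HFTransformersV2",
        model_framework="Huggingface",
        task_name="fill-mask",
        hf_model_args="low_cpu_mem_usage=True",   # comma-separated key=value pairs
    )
    return {"mlflow_model_folder": convert_job.outputs.mlflow_model_folder}

pipeline_job = ml_client.jobs.create_or_update(convert_pipeline())
print(pipeline_job.studio_url)
```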

### Outputs

| Name | Description | Type |
| ---- | ----------- | ---- |
| mlflow_model_folder | Output path for the converted MLflow model. | mlflow_model |
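Once the job completes, the mlflow_model_folder output can be registered or downloaded and loaded like any other MLflow model. A minimal sketch follows, assuming the output has been downloaded to a local folder; the path and the input column name are assumptions, and the exact input schema depends on the task the model was converted for.

```python
# Minimal sketch: loading the converted MLflow model locally.
# "./mlflow_model_folder" and the "input_string" column are assumptions;
# the expected input schema depends on the task.
import mlflow
import pandas as pd

model = mlflow.pyfunc.load_model("./mlflow_model_folder")
predictions = model.predict(
    pd.DataFrame({"input_string": ["MLflow makes model packaging [MASK]."]})
)
print(predictions)
```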

### Environment

azureml://registries/azureml/environments/model-management/versions/36
