
Image Classification HuggingFace Transformers Pipeline

transformers_image_classification_pipeline

Overview

Pipeline component for image classification using HuggingFace transformers models.

Version: 0.0.23

View in Studio: https://ml.azure.com/registries/azureml/components/transformers_image_classification_pipeline/version/0.0.23

Inputs

| Name | Description | Type | Default | Optional | Enum |
| --- | --- | --- | --- | --- | --- |
| compute_model_import | Compute to be used for model_import, e.g. provide 'FT-Cluster' if your compute is named 'FT-Cluster'. | string | | False | |
| compute_finetune | Compute to be used for finetune, e.g. provide 'FT-Cluster' if your compute is named 'FT-Cluster'. | string | | False | |
| instance_count | Number of nodes to be used for finetuning (used for distributed training). | integer | 1 | True | |
| process_count_per_instance | Number of GPUs to be used per node for finetuning; should be equal to the number of GPUs per node in the compute SKU used for finetune. | integer | 1 | True | |
| compute_model_evaluation | Compute to be used for model evaluation, e.g. provide 'FT-Cluster' if your compute is named 'FT-Cluster'. | string | | True | |
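
As a point of reference, the sketch below shows one way to consume this pipeline component from the azureml registry with the azure-ai-ml Python SDK. The workspace details, the compute name 'FT-Cluster', the data asset names, and the HuggingFace model name are placeholders, not values prescribed by this component.

```python
from azure.ai.ml import Input, MLClient
from azure.ai.ml.dsl import pipeline
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()

# Client for your workspace (placeholders) and a client for the public "azureml" registry.
ml_client = MLClient(
    credential,
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE>",
)
registry_client = MLClient(credential, registry_name="azureml")

# Load this pipeline component at the version documented above.
image_classification = registry_client.components.get(
    name="transformers_image_classification_pipeline", version="0.0.23"
)

@pipeline()
def vision_finetune():
    node = image_classification(
        # Compute inputs from the table above; "FT-Cluster" is a placeholder compute name.
        compute_model_import="FT-Cluster",
        compute_finetune="FT-Cluster",
        compute_model_evaluation="FT-Cluster",
        instance_count=1,
        process_count_per_instance=1,
        # Model selector and data inputs are described in the sections below.
        model_family="HuggingFaceImage",
        model_name="google/vit-base-patch16-224",
        task_name="image-classification",
        training_data=Input(type="mltable", path="azureml:image-train-mltable:1"),
        validation_data=Input(type="mltable", path="azureml:image-valid-mltable:1"),
        test_data=Input(type="mltable", path="azureml:image-test-mltable:1"),
    )
    return {
        "mlflow_model_folder": node.outputs.mlflow_model_folder,
        "pytorch_model_folder": node.outputs.pytorch_model_folder,
    }

pipeline_job = vision_finetune()
submitted_job = ml_client.jobs.create_or_update(
    pipeline_job, experiment_name="transformers-image-classification"
)
print(submitted_job.studio_url)
```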

Model Selector Component

Model family

| Name | Description | Type | Default | Optional | Enum |
| --- | --- | --- | --- | --- | --- |
| model_family | Which framework the model belongs to. | string | HuggingFaceImage | True | ['HuggingFaceImage'] |
| model_name | Please select models from AzureML Model Assets for all supported models. For HuggingFace models that are not supported in the AzureML model registry, input the HuggingFace model_name here. The model will be downloaded from the HuggingFace hub using this model_name and is subject to the third-party license terms available on the HuggingFace model details page. It is the user's responsibility to comply with the model's license terms. | string | | True | |
| pytorch_model | PyTorch model registered in AzureML Assets. | custom_model | | True | |
| mlflow_model | MLflow model registered in AzureML Assets. | mlflow_model | | True | |
| download_from_source | Download the model directly from HuggingFace instead of the system registry. | boolean | False | True | |

Finetuning Component

Component input: training mltable

| Name | Description | Type | Default | Optional | Enum |
| --- | --- | --- | --- | --- | --- |
| training_data | Path to the mltable of the training dataset. | mltable | | False | |

optional component input: validation mltable

| Name | Description | Type | Default | Optional | Enum |
| --- | --- | --- | --- | --- | --- |
| validation_data | Path to the mltable of the validation dataset. | mltable | | True | |
| image_width | Final image width after augmentation that is input to the network. The default value is -1, which means it will be overridden by the default image width in the HuggingFace feature extractor. If either image_width or image_height is set to -1, the default value will be used for both width and height. | integer | -1 | True | |
| image_height | Final image height after augmentation that is input to the network. The default value is -1, which means it will be overridden by the default image height in the HuggingFace feature extractor. If either image_width or image_height is set to -1, the default value will be used for both width and height. | integer | -1 | True | |
| task_name | Which task the model is solving. | string | | | ['image-classification', 'image-classification-multilabel'] |
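
The exact MLTable schema for the training and validation data is not documented on this page; the defaults of the model prediction component further below (input column image_url, label column label) suggest JSONL-backed annotations along the lines of the sketch below. Treat the column names and the datastore URI as assumptions and verify them against the AzureML image data preparation documentation.

```python
import json

# Hypothetical annotation rows; "image_url" and "label" mirror the defaults of the
# model prediction component on this page, and the datastore URI is a placeholder.
rows = [
    {"image_url": "AmlDatastore://workspaceblobstore/images/cat_01.jpg", "label": "cat"},
    {"image_url": "AmlDatastore://workspaceblobstore/images/dog_01.jpg", "label": "dog"},
]

with open("train_annotations.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```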

primary metric

| Name | Description | Type | Default | Optional | Enum |
| --- | --- | --- | --- | --- | --- |
| metric_for_best_model | Specify the metric to use to compare two different models. If left empty, will be chosen automatically based on the task type and model selected. | string | | True | ['loss', 'f1_score_macro', 'accuracy', 'precision_score_macro', 'recall_score_macro', 'iou', 'iou_macro', 'iou_micro', 'iou_weighted'] |

Augmentation parameters

| Name | Description | Type | Default | Optional | Enum |
| --- | --- | --- | --- | --- | --- |
| apply_augmentations | If set to true, will enable data augmentations for training. | boolean | True | True | |
| number_of_workers | Number of subprocesses to use for data loading (PyTorch only). 0 means that the data will be loaded in the main process. | integer | 8 | True | |

Deepspeed Parameters

| Name | Description | Type | Default | Optional | Enum |
| --- | --- | --- | --- | --- | --- |
| apply_deepspeed | If set to true, will enable deepspeed for training. If left empty, will be chosen automatically based on the task type and model selected. | boolean | | True | |

optional component input: deepspeed config

| Name | Description | Type | Default | Optional | Enum |
| --- | --- | --- | --- | --- | --- |
| deepspeed_config | Deepspeed config to be used for finetuning. | uri_file | | True | |
| apply_ort | If set to true, will use ONNX Runtime training. If left empty, will be chosen automatically based on the task type and model selected. | boolean | | True | |
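
If you pass your own deepspeed_config, it is a standard DeepSpeed JSON file. The sketch below writes a minimal config; the specific values (ZeRO stage 1, fp16, "auto" batch settings) are illustrative assumptions, not the defaults baked into this component.

```python
import json

# Minimal, illustrative DeepSpeed config; adjust to your compute and model.
deepspeed_config = {
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 1},
    "gradient_accumulation_steps": "auto",
    "train_micro_batch_size_per_gpu": "auto",
}

with open("deepspeed_config.json", "w") as f:
    json.dump(deepspeed_config, f, indent=2)
```

The resulting file can then be wired to the deepspeed_config input, for example as Input(type="uri_file", path="./deepspeed_config.json").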

Training parameters

| Name | Description | Type | Default | Optional | Enum |
| --- | --- | --- | --- | --- | --- |
| number_of_epochs | Number of training epochs. If left empty, will be chosen automatically based on the task type and model selected. | integer | | True | |
| max_steps | If set to a positive number, the total number of training steps to perform. Overrides 'number_of_epochs'. When using a finite iterable dataset, training may stop before reaching the set number of steps if all data is exhausted. If left empty, will be chosen automatically based on the task type and model selected. | integer | | True | |
| training_batch_size | Train batch size. If left empty, will be chosen automatically based on the task type and model selected. | integer | | True | |
| validation_batch_size | Validation batch size. If left empty, will be chosen automatically based on the task type and model selected. | integer | | True | |
| auto_find_batch_size | Flag to enable auto finding of batch size. If the provided 'per_device_train_batch_size' goes into Out Of Memory (OOM), enabling auto_find_batch_size will find the correct batch size by iteratively reducing 'per_device_train_batch_size' by a factor of 2 until the OOM is fixed. | boolean | False | True | |

learning rate and learning rate scheduler

| Name | Description | Type | Default | Optional | Enum |
| --- | --- | --- | --- | --- | --- |
| learning_rate | Start learning rate. Defaults to a linear scheduler. If left empty, will be chosen automatically based on the task type and model selected. | number | | True | |
| learning_rate_scheduler | The scheduler type to use. If left empty, will be chosen automatically based on the task type and model selected. | string | | True | ['warmup_linear', 'warmup_cosine', 'warmup_cosine_with_restarts', 'warmup_polynomial', 'constant', 'warmup_constant'] |
| warmup_steps | Number of steps used for a linear warmup from 0 to learning_rate. If left empty, will be chosen automatically based on the task type and model selected. | integer | | True | |

optimizer

| Name | Description | Type | Default | Optional | Enum |
| --- | --- | --- | --- | --- | --- |
| optimizer | Optimizer to be used while training. The 'adamw_ort_fused' optimizer is only supported for ORT training. If left empty, will be chosen automatically based on the task type and model selected. | string | | True | ['adamw_hf', 'adamw', 'sgd', 'adafactor', 'adagrad', 'adamw_ort_fused'] |
| weight_decay | The weight decay to apply (if not zero) to all layers except all bias and LayerNorm weights in the AdamW and SGD optimizers. If left empty, will be chosen automatically based on the task type and model selected. | number | | True | |
| extra_optim_args | Optional additional arguments that are supplied to the SGD optimizer. The arguments should be semicolon-separated key-value pairs and should be enclosed in double quotes, for example "momentum=0.5; nesterov=True" for sgd. Please make sure to use valid parameter names for the chosen optimizer. For exact parameter names, refer to https://pytorch.org/docs/1.13/generated/torch.optim.SGD.html#torch.optim.SGD for SGD. Parameters supplied in extra_optim_args take precedence over parameters supplied via other arguments such as weight_decay: if weight_decay is provided both via the weight_decay parameter and via extra_optim_args, the value specified in extra_optim_args will be used. | string | | True | |

gradient accumulation

| Name | Description | Type | Default | Optional | Enum |
| --- | --- | --- | --- | --- | --- |
| gradient_accumulation_step | Number of update steps to accumulate the gradients for, before performing a backward/update pass. If left empty, will be chosen automatically based on the task type and model selected. | integer | | True | |

mixed precision training

| Name | Description | Type | Default | Optional | Enum |
| --- | --- | --- | --- | --- | --- |
| precision | Apply mixed precision training. This can reduce the memory footprint by performing operations in half-precision. | string | 32 | True | ['32', '16'] |

label smoothing factor

| Name | Description | Type | Default | Optional | Enum |
| --- | --- | --- | --- | --- | --- |
| label_smoothing_factor | The label smoothing factor to use, in the range [0.0, 1.0). Zero means no label smoothing; otherwise the underlying one-hot encoded labels are changed from 0s and 1s to label_smoothing_factor/num_labels and 1 - label_smoothing_factor + label_smoothing_factor/num_labels respectively. Not applicable to multi-label classification. If left empty, will be chosen automatically based on the task type and model selected. | number | | True | |
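
As a quick check of the formula above, the snippet below computes the smoothed targets for a single label; it is a worked example only, not code from the component.

```python
import numpy as np

def smooth_one_hot(label_index: int, num_labels: int, label_smoothing_factor: float) -> np.ndarray:
    """Smoothed one-hot targets per the description above."""
    # Off-target entries become label_smoothing_factor / num_labels ...
    targets = np.full(num_labels, label_smoothing_factor / num_labels)
    # ... and the target entry becomes 1 - label_smoothing_factor + label_smoothing_factor / num_labels.
    targets[label_index] = 1.0 - label_smoothing_factor + label_smoothing_factor / num_labels
    return targets

print(smooth_one_hot(label_index=2, num_labels=4, label_smoothing_factor=0.1))
# [0.025 0.025 0.925 0.025] -- still sums to 1.0
```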

random seed

| Name | Description | Type | Default | Optional | Enum |
| --- | --- | --- | --- | --- | --- |
| random_seed | Random seed that will be set at the beginning of training. | integer | 42 | True | |

evaluation strategy parameters

| Name | Description | Type | Default | Optional | Enum |
| --- | --- | --- | --- | --- | --- |
| evaluation_strategy | The evaluation strategy to adopt during training. Please note that save_strategy and evaluation_strategy should match. | string | epoch | True | ['epoch', 'steps'] |
| evaluation_steps | Number of update steps between two evaluations if evaluation_strategy='steps'. Please note that the saving steps should be a multiple of the evaluation steps. | integer | 500 | True | |

logging strategy parameters

| Name | Description | Type | Default | Optional | Enum |
| --- | --- | --- | --- | --- | --- |
| logging_strategy | The logging strategy to adopt during training. | string | epoch | True | ['epoch', 'steps'] |
| logging_steps | Number of update steps between two logs if logging_strategy='steps'. | integer | 500 | True | |

Save strategy

| Name | Description | Type | Default | Optional | Enum |
| --- | --- | --- | --- | --- | --- |
| save_strategy | The checkpoint save strategy to adopt during training. Please note that save_strategy and evaluation_strategy should match. | string | epoch | True | ['epoch', 'steps'] |
| save_steps | Number of update steps between two checkpoint saves if save_strategy="steps". Please note that the saving steps should be a multiple of the evaluation steps. | integer | 500 | True | |
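
To illustrate the constraints called out above (save_strategy must match evaluation_strategy, and save_steps should be a multiple of evaluation_steps), here is one consistent, step-based combination; the numbers are placeholders, not recommended defaults.

```python
# Illustrative step-based settings that satisfy the stated constraints.
step_based_settings = {
    "evaluation_strategy": "steps",
    "evaluation_steps": 250,
    "logging_strategy": "steps",
    "logging_steps": 250,
    "save_strategy": "steps",   # must match evaluation_strategy
    "save_steps": 500,          # a multiple of evaluation_steps
}
```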

model checkpointing limit

| Name | Description | Type | Default | Optional | Enum |
| --- | --- | --- | --- | --- | --- |
| save_total_limit | If a value is passed, will limit the total number of checkpoints and delete the older checkpoints in output_dir. If the value is -1, saves all checkpoints. | integer | 5 | True | |

Early Stopping Parameters

| Name | Description | Type | Default | Optional | Enum |
| --- | --- | --- | --- | --- | --- |
| early_stopping | Enable early stopping. | boolean | False | True | |
| early_stopping_patience | Stop training when the specified metric worsens for early_stopping_patience evaluation calls. | integer | 1 | True | |

Grad Norm

| Name | Description | Type | Default | Optional | Enum |
| --- | --- | --- | --- | --- | --- |
| max_grad_norm | Maximum gradient norm (for gradient clipping). If left empty, will be chosen automatically based on the task type and model selected. | number | | True | |

resume from the input model

| Name | Description | Type | Default | Optional | Enum |
| --- | --- | --- | --- | --- | --- |
| resume_from_checkpoint | Loads the optimizer, scheduler and trainer state for finetuning if true. | boolean | False | True | |

save mlflow model

| Name | Description | Type | Default | Optional | Enum |
| --- | --- | --- | --- | --- | --- |
| save_as_mlflow_model | Save as an MLflow model with pyfunc as the flavour. | boolean | True | True | |

Model Prediction Component

Component input: test mltable

| Name | Description | Type | Default | Optional | Enum |
| --- | --- | --- | --- | --- | --- |
| test_data | Path to the mltable of the test dataset. | mltable | | False | |
| test_batch_size | Test batch size. | integer | 4 | True | |
| label_column_name | Label column name, which the model ignores for prediction purposes, for example "label". | string | label | True | |
| input_column_names | Input column names provided to the model for prediction, for example column1. Add comma-delimited values in case of multiple input columns, for example column1,column2. | string | image_url | True | |
| evaluation_config | Additional parameters for computing metrics. | uri_file | | True | |
| evaluation_config_params | Additional parameters as a JSON-serialized string. | string | | True | |

Outputs

Finetuning Component

| Name | Description | Type |
| --- | --- | --- |
| mlflow_model_folder | Output directory to save the finetuned model as an MLflow model. | mlflow_model |
| pytorch_model_folder | Output directory to save the finetuned model as a PyTorch model. | custom_model |
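
Once the pipeline job finishes, the named outputs can be pulled locally. The sketch below assumes the pipeline maps the component output to a pipeline-level output called mlflow_model_folder (as in the submission example earlier on this page); the workspace details and job name are placeholders.

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE>",
)

# Download the finetuned MLflow model produced by the pipeline job.
ml_client.jobs.download(
    name="<PIPELINE_JOB_NAME>",          # e.g. submitted_job.name from the earlier sketch
    output_name="mlflow_model_folder",   # pipeline-level output name
    download_path="./finetuned_model",
)
```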

Compute metrics Component

| Name | Description | Type |
| --- | --- | --- |
| evaluation_result | Test data evaluation results. | uri_folder |