Main to vision foundation model (#2579)
* Automation test for spark CLI samples (#2377)

* Enable test for submit_spark_standalone_jobs

* Generate workflow yaml

* update spark job files for automation test

* Add workflow for serverless spark with user identity job

* Add scripts to upload input data

* Update workflow to refer the script

* Update source file path

* Update workflow with correct file path

* Update working directory

* Update workflow

* Update the path

* Update the script to upload data

* Update the overwrite mode

* Update destination blob name

* Use blob upload batch
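
  A rough sketch of what the batch upload step can look like with the Azure CLI; the container, paths, and account name here are placeholders, not the sample's actual values:

  ```bash
  # Hedged sketch: upload a local data directory to Blob Storage in one call.
  # --overwrite matches the overwrite-mode change noted above.
  az storage blob upload-batch \
    --destination <container-name> \
    --destination-path data/titanic \
    --source ./data \
    --account-name <storage-account> \
    --overwrite true
  ```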

* Add spark pipeline tests

* Update spark component extension

* Add script to attach uai

* Update property name in workflow

* Update script parameters

* Update assign uai script

* Format the script

* Update setup identities script

* Update path to infra bootstrapping

* Enable automation test for attached spark job

* Update resource path

* Update setup attached resource script

* Update script of setup resources

* Update setup attached resource script2

* Add logic to assign identity role

* Format the empty check

* Check if identity is empty

* Update to get compute properties

* update readme

* Reformat the script

* Update schema location and revert sdk notebook changes

* Attach pool first

* Rename resources and merge main

* Update format in yml

* Add role assignment to uid

* Enable sdk spark batch samples automation test (#2394)

* Initial update to enable sdk spark samples automation test

* Add script to setup spark resources

* Update the script path

* replace attached pool name with value

* Assign sai permission to spark pool

* Update component name

* Add two additional spark notebooks to cover with automation test

* Update spark version and use managedidentityconfiguration

* Format the samples

* Update uai compute name and remove vnet notebook test temporarily

* Update condition check

* Condition format

* Assign uai synapse role
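
  A sketch of how such a role assignment can be done with the Azure CLI; the workspace name, identity name, and the specific Synapse RBAC role are assumptions, not necessarily what the script uses:

  ```bash
  # Hedged sketch: grant a user-assigned identity (UAI) a Synapse RBAC role.
  # Names and the role choice are placeholders.
  uai_object_id=$(az identity show --name <uai-name> --resource-group <rg> \
    --query principalId -o tsv)
  az synapse role assignment create \
    --workspace-name <synapse-workspace> \
    --role "Synapse Compute Operator" \
    --assignee-object-id "$uai_object_id" \
    --assignee-principal-type ServicePrincipal
  ```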

* Update compute name to be valid

* Add readme changes

* Substitute variables

* Rename the synapse workspace

* Substitute synapse ws name in notebook

* Create unique synapse file per rg

* replace synapse pool name

* bump RAI text and vision component versions to 0.0.8 (#2437)

* Pmanoj/read model specific defaults (#2442)

* reading the model specific defaults from model card

* updating the metric defaults for the tasks

* updating the defaults from bool -> string

* fixing formatting issues

* add llama acs notebook (#2430)

* copy acs notebook

* add docker

* add ncd score.py

* remove monitoring

* add acs

* add safety

* update score to support chunk

* update input and fix score.py

* move acs client to init

* clear output

* support chat bot

* make notebook compatible with chat model

* remove unused

* use 7b as default

* format

* update per comments

* pin model version, use studio to check env status

* add uai creation

* update folder structure

* handle -chat input

* format json

* rename nb

* fix input

* remove junk

* Add compute name and instance type param in sdk and cli (#2446)

* added compute_name in cli

* add serverless code cell

* removed extra cell & add MD

* changed device type to auto

* adding truncation for summarization data

* changed device type to auto
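
  Since these samples target serverless compute, the submit-time override might look roughly like this; the YAML file name, compute target, and instance type are placeholders:

  ```bash
  # Hedged sketch: set the compute target and serverless instance type when
  # submitting, instead of hard-coding them in the job YAML.
  az ml job create -f job.yml \
    --set compute=azureml:gpu-cluster \
    --set resources.instance_type=Standard_NC6s_v3
  ```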

* remove custom environment (#2445)

* Clean up (#2449)

* Clean up

* Delete llama-safe-online-deployment.ipynb

* Delete prepare_uai.ipynb

* Update deploy-and-run.sh (#2443)

* Update deploy-and-run.sh (#2413)

* Update deploy-and-run.sh

* Update deploy-and-run.sh

* Update sdk-deploy-and-test.ipynb (#2412)

* add incremental embedding with table notebook (#2428)

* add incremental embedding with table notebook

* fix comments

---------

Co-authored-by: Lucas Pickup <[email protected]>

* Update RAG notebooks to use generate_embedding component. (#2450)

* Update RAG notebooks to use generate_embedding component.

* Rebase and fixup formatting.

* Missed testgen notebook

---------

Co-authored-by: Lucas Pickup <[email protected]>

* Add online_enabled flag (#2405)

* Add online_enabled flag

* Add support for network isolation scenario

* Modifying file

* minor update

* update the descriptions

* reformat

---------

Co-authored-by: Shail Paragbhai Shah <[email protected]>
Co-authored-by: Qianjun Xu <[email protected]>
Co-authored-by: rsethur <[email protected]>
Co-authored-by: Sethu Raman <[email protected]>

* Changed to Standard_NC6s_v3 because Standard_NC6 is deprecated. (#2456)

* Changed to Standard_NC6s_v3 because Standard_NC6 is deprecated

* Updated SDK Version to 1.52.0 in automl_env files

* Updated credentials for V1 notebooks
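
  For the SKU swap above, recreating a cluster on the replacement SKU looks roughly like the following; the cluster name and instance counts are illustrative:

  ```bash
  # Hedged sketch: create an AmlCompute cluster on a non-deprecated GPU SKU.
  az ml compute create \
    --name gpu-cluster \
    --type AmlCompute \
    --size Standard_NC6s_v3 \
    --min-instances 0 \
    --max-instances 4
  ```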

* Fix typo (#2459)

* [Notebook] Add dbcopilot notebook (#2427)

* [Notebook] Add dbcopilot notebook

* fix

* fix format

* fix format

---------

Co-authored-by: Xia Xiao <[email protected]>

* Add Hugging Face inference text-classification streaming example notebook (#2458)

* Added Hugging Face inference text-classification streaming example

* Update sdk/python/foundation-models/huggingface/inference/text-generation-streaming/text-generation-streaming-online-endpoint.ipynb

Co-authored-by: Manoj Bableshwar <[email protected]>

---------

Co-authored-by: Manoj Bableshwar <[email protected]>

* Fixed missing comma (#2461)

* Automation test for spark job with managed vnet and interactive session notebook (#2436)

* Automation test for spark job with managed vnet

* Update to keyword arguments in provision vnet

* Add test for data wrangling interactive notebook

* Add permanent delete to workspace cleanup

* Rename the vnet workspace

* Support interactive session test

* rename run session file notebook

* Update to use ipython

* Add py file for notebook session

* Update relative path to py file

* Update container value

* Update expiry time

* upload wrangling data to gen2 storage

* Remove gen2 using service principal

* Remove session mount script

* Move test file into folder and update variables

* Update to new workflow

* Update blob storage name

* Add test files (#2464)

* Add test files

* checkin all

* checkin all

* checkin all

* Switched to new GPU SKU because NC6 is deprecated (#2462)

* Switched to new GPU SKU because NC6 is deprecated

* Updated credentials for remaining V1 notebooks

* Updated gpu-cluster in bootstrap.sh

* compute update and viz error fix (#2454)

* compute update and viz error fix

* v1 notebooks compute update

* format updates

* format updates

* format updates

* compute name update

* cluster name update

* cluster update

* use nc6_v2 instead of nc6 (#2469)

Co-authored-by: Hannah Westra (SHE/HER) <[email protected]>

* Update Standard_NC6 compute for v2 notebooks. (#2465)

* Change NC6 to NC6s_v3
* Update endpoint compute

* modified the register output path (#2474)

Co-authored-by: bhavanatumma <[email protected]>

* chore(pr_template): Add a checklist item for file deletion (#2466)

* Changed gpu-K80-2 to gpu-V100-2 because NC is deprecated (#2472)

* Changed gpu-K80-2 to gpu-V100-2 because NC is deprecated
* Added python-sdk-tutorial prefix to V1 automl actions

* Update quickstart.ipynb (#2457)

* Update quickstart.ipynb

* Update quickstart.ipynb

* Update quickstart.ipynb

* Update quickstart.ipynb

* Update quickstart.ipynb

* Update train-model.ipynb

* Update train-model.ipynb

* Update train-model.ipynb

* Update train-model.ipynb

* Update quickstart.ipynb

* Update train-model.ipynb

* Update pipeline.ipynb

* Update pipeline.ipynb

* Update pipeline.ipynb

* Update quickstart.ipynb

* Update train-model.ipynb

* Update quickstart.ipynb

* Update pipeline.ipynb

* Update train-model.ipynb

* Update train-model.ipynb

* Update quickstart.ipynb

* Update train-model.ipynb

* Update quickstart.ipynb

* Update train-model.ipynb

* Update e2e-object-classification-distributed-pytorch.ipynb

* Update e2e-object-classification-distributed-pytorch.ipynb

* Update e2e-object-classification-distributed-pytorch.ipynb

* Update e2e-object-classification-distributed-pytorch.ipynb

* Update e2e-object-classification-distributed-pytorch.ipynb

* Update e2e-object-classification-distributed-pytorch.ipynb

* Update e2e-object-classification-distributed-pytorch.ipynb

* Update e2e-object-classification-distributed-pytorch.ipynb

* Update e2e-object-classification-distributed-pytorch.ipynb

* Update e2e-object-classification-distributed-pytorch.ipynb

* Update e2e-object-classification-distributed-pytorch.ipynb

* Update e2e-object-classification-distributed-pytorch.ipynb

* Update e2e-object-classification-distributed-pytorch.ipynb

* Update e2e-object-classification-distributed-pytorch.ipynb

* Update e2e-object-classification-distributed-pytorch.ipynb

* Update pipeline.ipynb

* Update sklearn-diabetes.ipynb

* Update sklearn-diabetes.ipynb

* Update sklearn-diabetes.ipynb

* Update iris-scikit-learn.ipynb

* Update iris-scikit-learn.ipynb

* Update sklearn-diabetes.ipynb

* Update sklearn-mnist.ipynb

* Update debug-and-monitor.ipynb

* Update distributed-cifar10.ipynb

* Update distributed-cifar10.ipynb

* Update distributed-cifar10.ipynb

* Update distributed-cifar10.ipynb

* Update distributed-cifar10.ipynb

* Update objectdetectionAzureML.ipynb

* Update distributed-cifar10.ipynb

* Update pytorch-iris.ipynb

* Update tensorflow-mnist.ipynb

* Update tensorflow-mnist.ipynb

* Update tensorflow-mnist.ipynb

* Update debug-and-monitor.ipynb

* Update objectdetectionAzureML.ipynb

* Update distributed-cifar10.ipynb

* Update pytorch-iris.ipynb

* Update sklearn-diabetes.ipynb

* Update iris-scikit-learn.ipynb

* Update sklearn-mnist.ipynb

* Update tensorflow-mnist.ipynb

* Update distributed-cifar10.ipynb

* Update objectdetectionAzureML.ipynb

* Update tensorflow-mnist.ipynb

* Update tensorflow-mnist.ipynb

* Update e2e-object-classification-distributed-pytorch.ipynb

* Update distributed-cifar10.ipynb

* Update objectdetectionAzureML.ipynb

* Update tensorflow-mnist.ipynb

* Update automl-forecasting-recipe-univariate-run.ipynb

* Update automl-forecasting-recipe-univariate-run.ipynb

* Update automl-forecasting-recipe-univariate-run.ipynb

* Update automl-forecasting-recipe-univariate-run.ipynb

* Update automl-forecasting-recipe-univariate-run.ipynb

* Update automl-forecasting-recipe-univariate-run.ipynb

* Update automl-forecasting-recipe-univariate-run.ipynb

* Update automl-forecasting-recipe-univariate-run.ipynb

* Update automl-forecasting-recipe-univariate-run.ipynb

* Update automl-forecasting-recipe-univariate-run.ipynb

* Update tensorflow-mnist.ipynb

* Update e2e-object-classification-distributed-pytorch.ipynb

* Update auto-ml-forecasting-bike-share.ipynb

* Update auto-ml-forecasting-github-dau.ipynb

* Update auto-ml-forecasting-github-dau.ipynb

* Update automl-classification-task-bankmarketing-serverless.ipynb

* Update automl-forecasting-orange-juice-sales-mlflow.ipynb

* Update azureml-getting-started-studio.ipynb

* Update automl-regression-task-hardware-performance.ipynb

* Update automl-regression-task-hardware-performance.ipynb

* Update automl-nlp-text-ner-task.ipynb

* Update automl-nlp-text-ner-task.ipynb

* Update automl-nlp-text-ner-task.ipynb

* Update automl-nlp-text-ner-task.ipynb

* Update automl-nlp-multiclass-sentiment-mlflow.ipynb

* Update automl-nlp-multiclass-sentiment-mlflow.ipynb

* Update automl-nlp-multiclass-sentiment-mlflow.ipynb

* Update automl-nlp-multiclass-sentiment-mlflow.ipynb

* Update automl-nlp-multiclass-sentiment.ipynb

* Update automl-nlp-multilabel-paper-cat.ipynb

* Update automl-forecasting-task-energy-demand-advanced.ipynb

* Update automl-nlp-multiclass-sentiment-mlflow.ipynb

* Update automl-nlp-multilabel-paper-cat.ipynb

* Update automl-nlp-text-ner-task.ipynb

* Update automl-nlp-multiclass-sentiment-mlflow.ipynb

* Update automl-nlp-multiclass-sentiment.ipynb

* Update automl-nlp-multilabel-paper-cat.ipynb

* Update automl-nlp-text-ner-task.ipynb

* Updated asr inference sample score, online and batch endpoint notebooks (#2441)

* Updated asr inference sample score, online and batch endpoint notebooks

* Updated openai whisper model from 8 to 10 in the batch deployment notebook

* Add UAI to llama deployment (#2473)

* add uai

* fix typo

* fix typo

* reformat

* Update feature store example (#2480)

* update sdk version

* add sdk version update

* update retrieval component version

* Add component-based demand forecasting notebooks (#2470)

* added notebooks

* linter

* added exceptions, readme and workflow files

* changed registries from dev to preview and prod

* fixed compute creation step

* deleted redundant file. Added try-except to avoid the HTTP connection timeout issues

* changed gpu compute type due to availability in the test region

* added force rerun setting to pipeline definition

* removed forced re-run setting since in the test environment it is triggered by default.

* removed repeated experiment name from the HTS nb

* added pipeline description to mm and hts nb

* removing single model nb and associated files

* Removed local data files from the mm nb. Will use data from the public datastore.

* modified mm nb to download data from public blob and save as parquet

* linter

* changed parameter names to be consistent with components' input names in HTS nb

* changed parameter names to be consistent with components' input names in MM nb

* removed code that enables private features

* fixed section reference hyperlinks and removed unused imports from helper scripts

* pre-formatted section headers; minor code reformat

* added experiment and timeout restrictions to the MM and HTS nb

* added check to make sure all job child runs are posted before downloading forecast results

* workaround for the PipelineJob bug that gets stuck in the preparing state

* fix llama for empty request/response

* Excluded yolov5/tutorial (#2487)

* update code to fix pipeline test by updating the outbound rule (#2488)

* fix: Update cli/setup.sh to ensure that release candidates are actually installed during sample validation (#2492)

* fix: Update instructions in cli/setup.sh for validating a release candidate

* [skip ci] Remove dead code

* Add preview label to HTS and MM notebooks and update data sources (#2490)

* Added preview label to HTS and MM notebooks, removed data folder from the HTS nb, changed data URIs in the MM nb.

* fixed section reference links

* dropped pre-formatting

* Batch inference sample scripts for foundation models (#2367)

* fill-mask, qna, summarization

* Add all tasks

* black formatting

* Make delete compute step optional

* Fix wording

---------

Co-authored-by: Sumadhva Sridhar <[email protected]>

* [RAG] Move from text-davinci-003 to gpt-3.5 turbo (#2493)

* mdc/monitoring cli samples (#2479)

* add data collector cli samples

* add custom monitoring signal samples

* add relevant supporting files and additional samples

* remove data from this PR, update custom samples

* remove model from this PR

* update email

* chore: Run black on monitoring cli samples (#2499)

* chore: Update cron schedule for automated-cleanup-resources (#2498)

Will go at about 1am PST

* fix: updating deployments schemas (#2497)

* replace the public data source with a public Azure blob one (#2500)

* replace the public data source with a public Azure blob one, to solve the mount/download issue

* update pipeline registered data asset name to resolve conflict

* update e2e flow with same data asset register meta

* update file name to csv, which is the actual existing one

* update code and environment

* bump custom env version

---------

Co-authored-by: Anthony Hu <[email protected]>

* Sdg pipeline (#2496)

* revise pipeline & data notebooks

* wording

* fix error when data version exists

* reformat

* fix cli files to pass smoke test

* many models and HTS cli (#2505)

* Update LlaMa notebooks to use HF TGI container (#2475)

* first draft

* llama hf tgi (#2476)

* Update notebook

* update

* update response format, input format, use env vars

* default sharding to true

* update scoring changes and notebook

* update

* update scoring script to use AACS (#2481)

* update scoring script to use AACS

* Add mlflow

* update

* fixes to scoring script

* remove /n

* update scoring script to have system prompt

---------

Co-authored-by: Gaurav Singh <[email protected]>

* black + minor fixes

* update default

* add gen params validation (#2489)

* add top_p in text-gen examples

* score.py changes

* update

* fix

* update scoring to include new aacs key

* add checking for empty string

---------

Co-authored-by: Gaurav Singh <[email protected]>
Co-authored-by: Ayush Mishra <[email protected]>
Co-authored-by: Ayush Mishra <[email protected]>
Co-authored-by: Ke Xu <[email protected]>
Co-authored-by: xuke444 <[email protected]>

* switch from building inf env to using train env (#2508)

* fix iris download error by adding iris_data.csv (#2502)

* fix iris download error by adding iris_data.csv

* fix precompilation issue

* added valid sink argument

* fix BoundsError

* fix bounds error

* fix distributed tf notebook (#2509)

* update mscoco RAI object detection notebook to increase num masks and reduce images in dataset (#2514)

* register model under outputs/mlflow-model (#2407)

* register model under outputs/mlflow-model

* update SDK register.py
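
  With the CLI, registering from the new output location might look like this sketch; the job and model names are placeholders:

  ```bash
  # Hedged sketch: register the MLflow model folder written under the job's
  # outputs/mlflow-model directory.
  az ml model create \
    --name <model-name> \
    --type mlflow_model \
    --path "azureml://jobs/<job-name>/outputs/artifacts/paths/outputs/mlflow-model"
  ```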

* [LLM] RAG Examples - Remove link to old registry (#2519)

Co-authored-by: Gerard <[email protected]>

* Update client registry to public for AutoML forecasting components (#2522)

* update client registry to public

* update registries for cli components

* add falcon model safe deployment notebook (#2512)

* add falcon model notebook

* update md cell

* rename

* rename registry

* Add distributed TCN (v2) notebook (#2516)

* distributed tcn notebook

* Added cluster name to notebooks_config.ini. Increased experiment timeout to 1 hour

* modified readme.py to add mlflow to requirements without explicitly calling import mlflow

* re-ran readme.py to reflect changes in the workflow file

* removed best run line from artifacts download

* added logging of the best child run ID to file an ICM for the service team.

* changed to public client registry

* print format

* add tracking URI for mlflow
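
  One way to look up the tracking URI, shown as a hedged CLI sketch (the workspace and resource group names are placeholders); the notebook presumably passes the value on to mlflow:

  ```bash
  # Hedged sketch: fetch the workspace's MLflow tracking URI.
  az ml workspace show \
    --name <workspace> \
    --resource-group <rg> \
    --query mlflow_tracking_uri -o tsv
  ```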

* replaced mlflowclient due to deprecation

* added disclaimer and increased experiment limit to 60 min

* added sleep import

* update code to fix pipeline test by updating the outbound rule (#2542)

* Update resources name (#2521)

* Update keyvault name

* Update attached compute name

* Fix if condition

* Update compute name

* Update joblib import so that new scikit-learn versions can be used (#2546)

* Update Llamav2 to default to hf-tgi (#2548)

* default to hf_tgi

* remove docker env

* remove hf env vars

---------

Co-authored-by: svaruag <[email protected]>

* pin compute metrics component to 0.0.10. The later versions of this component break the pipelines due to the latest changes by the component owners (#2549)

* Update V2 sample joblib import so that new scikit-learn can be used (#2547)

* Update V2 sample joblib import so that new scikit-learn can be used

* Removed stderr check for orange juice sales because of download messages and blank lines

* Add default score file for non hftgi (#2552)

* add default score file for non hftgi

* rev

* black

* add excount

---------

Co-authored-by: svaruag <[email protected]>
Co-authored-by: Srujan Saggam <[email protected]>

* Add warning message with links to the v1 forecasting notebooks (#2553)

* added warning message with links to v1 forecasting notebooks

* fixed default kernels; fixed link rendering; add warning to the output check

* link rendering

* added comma to the output check

* changed the compute type due to quota issues. This notebook has been failing since 7/18/23 because of this.

* changed many models v1 compute name

* added warning to the notebook check

* Add random numbers at the end of endpoint name in workflows (#2558)

* Add random numbers at the end of endpoint name

* Fix bootstrapping directory
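
  The renaming scheme is visible in the workflow diffs below (e.g. customoutputsparquetendpoint becomes tomoutputsparquetendpoint8741). A sketch of the logic, inferred from those diffs; the exact length cap used by the generator is an assumption:

  ```bash
  # Hedged sketch: append four random digits, then trim from the left so the
  # endpoint name stays within an assumed length cap.
  base="customoutputsparquetendpoint"
  name="${base}$(( RANDOM % 9000 + 1000 ))"   # 4 random digits
  name="${name: -29}"                         # assumed cap; trims from the left
  echo "$name"   # e.g. tomoutputsparquetendpoint8741
  ```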

* Improve getting environment in helper script. (#2560)

* Fix environment

* Fix regression-explanation-featurization

* Fix loading of environments

* Fix linting

* pin version of scikit-learn (#2540)

Co-authored-by: Aishani Bhalla <[email protected]>
Co-authored-by: Vivian Li <[email protected]>

* New embedding step should use instance_count==1 (#2562)

* New embedding step should use instance_count==1

* Revert registry change.

---------

Co-authored-by: Lucas Pickup <[email protected]>

* Pin version of scikit-learn for inference-schema sample (#2564)

Co-authored-by: Aishani Bhalla <[email protected]>

* Ignore Downloading artifact messages to stderr (#2568)

* Fix multilabel notebook to work with the new scikit-learn (#2563)

* Fix notebook

* Fix notbook gate

* Fix notebook runs

* Fix workspaces

* Fix multiclass/multilabel runs.

* Remove v1 samples from repository (#2559)

* Remove v1 samples from v2 repo

* Remove v1 from table of contents

* Remove v1 test files

* Remove v1 test files

* Remove v1 workflows

* [RAG] Remove local testing raise exception (#2561)

* [RAG] Match document_path_replacement_regex with AzureML-Assets Components

* Remove regex changes

---------

Co-authored-by: Gerard <[email protected]>

* [LLM] RAG Examples - Remove link to old registry (#2569)

Co-authored-by: Gerard <[email protected]>

* Revert "Remove v1 samples from repository" (#2577)

* Revert "Remove v1 samples from repository (#2559)"

This reverts commit 81175f6.

* Increase size limit to allow revert

* Add/update for managed online endpoint examples for vnet (#2570)

* Create deploy-managed-online-endpoint-workspacevnet.sh

* Rename deploy-moe-vnet-mlflow.sh to deploy-moe-vnet-mlflow-legacy.sh

* Rename deploy-moe-vnet.sh to deploy-moe-vnet-legacy.sh

* rename legacy vnet folder

* rerun readme.py to reflect folder changes

* Revert "rerun readme.py to reflect folder changes"

This reverts commit cf9eedb.

* Revert "rename legacy vnet folder"

This reverts commit 6ede0bf.

* clarify legacy without changing folder name

* add code for possible combinations

* fix: Reset PR size limit to 2MB (#2578)

---------

Co-authored-by: Fred Li <[email protected]>
Co-authored-by: Ilya Matiach <[email protected]>
Co-authored-by: pmanoj <[email protected]>
Co-authored-by: xuke444 <[email protected]>
Co-authored-by: Aditi Singh <[email protected]>
Co-authored-by: Man <[email protected]>
Co-authored-by: Facundo Santiago <[email protected]>
Co-authored-by: Sachin Paryani <[email protected]>
Co-authored-by: Lucas Pickup <[email protected]>
Co-authored-by: Lucas Pickup <[email protected]>
Co-authored-by: shail2208 <[email protected]>
Co-authored-by: Shail Paragbhai Shah <[email protected]>
Co-authored-by: Qianjun Xu <[email protected]>
Co-authored-by: rsethur <[email protected]>
Co-authored-by: Sethu Raman <[email protected]>
Co-authored-by: jeff-shepherd <[email protected]>
Co-authored-by: arun-rajora <[email protected]>
Co-authored-by: xia-xiao <[email protected]>
Co-authored-by: Xia Xiao <[email protected]>
Co-authored-by: erjms <[email protected]>
Co-authored-by: Manoj Bableshwar <[email protected]>
Co-authored-by: Ramu Vadthyavath <[email protected]>
Co-authored-by: Hannah Westra (SHE/HER) <[email protected]>
Co-authored-by: Bhavana <[email protected]>
Co-authored-by: bhavanatumma <[email protected]>
Co-authored-by: kdestin <[email protected]>
Co-authored-by: vijetajo <[email protected]>
Co-authored-by: tanmaybansal104 <[email protected]>
Co-authored-by: qjxu <[email protected]>
Co-authored-by: vbejan-msft <[email protected]>
Co-authored-by: shreeyaharma <[email protected]>
Co-authored-by: Sumadhva Sridhar <[email protected]>
Co-authored-by: Sumadhva Sridhar <[email protected]>
Co-authored-by: Gerard Woods <[email protected]>
Co-authored-by: Alexander Hughes <[email protected]>
Co-authored-by: eniac871 <[email protected]>
Co-authored-by: Anthony Hu <[email protected]>
Co-authored-by: Sheri Gilley <[email protected]>
Co-authored-by: Gaurav Singh <[email protected]>
Co-authored-by: Gaurav Singh <[email protected]>
Co-authored-by: Ayush Mishra <[email protected]>
Co-authored-by: Ayush Mishra <[email protected]>
Co-authored-by: Ke Xu <[email protected]>
Co-authored-by: Rahul Kumar <[email protected]>
Co-authored-by: Gerard <[email protected]>
Co-authored-by: Srujan Saggam <[email protected]>
Co-authored-by: Vivian Li <[email protected]>
Co-authored-by: nick863 <[email protected]>
Co-authored-by: Aishani Bhalla <[email protected]>
Co-authored-by: Aishani Bhalla <[email protected]>
Co-authored-by: Diondra <[email protected]>
Co-authored-by: SeokJin Han <[email protected]>
Showing 405 changed files with 81,513 additions and 2,554 deletions.
1 change: 1 addition & 0 deletions .github/PULL_REQUEST_TEMPLATE.md
@@ -5,4 +5,5 @@


- [ ] I have read the [contribution guidelines](https://github.com/Azure/azureml-examples/blob/main/CONTRIBUTING.md)
+- [ ] I have coordinated with the docs team ([email protected]) if this PR deletes files or changes any file names or file extensions.
- [ ] Pull request includes test coverage for the included changes.
@@ -3,7 +3,7 @@
{
"name": "check notebook output",
"params": {
"check": "warning stderr"
"check": "warning"
}
},
{
2 changes: 1 addition & 1 deletion .github/workflows/automated-cleanup-resources.yml
@@ -2,7 +2,7 @@ name: automated-cleanup-resources
on:
workflow_dispatch:
schedule:
- cron: "45 */12 * * *"
- cron: "0 8 * * *"
pull_request:
branches:
- main
@@ -46,26 +46,26 @@ jobs:
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
-az ml batch-endpoint delete -n customoutputsparquetendpoint -y
+az ml batch-endpoint delete -n tomoutputsparquetendpoint8741 -y
working-directory: cli
continue-on-error: true
- name: create endpoint
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
cat endpoints/batch/deploy-models/custom-outputs-parquet/endpoint.yml
-az ml batch-endpoint create -n customoutputsparquetendpoint -f endpoints/batch/deploy-models/custom-outputs-parquet/endpoint.yml
+az ml batch-endpoint create -n tomoutputsparquetendpoint8741 -f endpoints/batch/deploy-models/custom-outputs-parquet/endpoint.yml
working-directory: cli
- name: create deployment
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
cat endpoints/batch/deploy-models/custom-outputs-parquet/deployment.yml
-az ml batch-deployment create -e customoutputsparquetendpoint -f endpoints/batch/deploy-models/custom-outputs-parquet/deployment.yml
+az ml batch-deployment create -e tomoutputsparquetendpoint8741 -f endpoints/batch/deploy-models/custom-outputs-parquet/deployment.yml
working-directory: cli
- name: cleanup endpoint
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
-az ml batch-endpoint delete -n customoutputsparquetendpoint -y
+az ml batch-endpoint delete -n tomoutputsparquetendpoint8741 -y
working-directory: cli
@@ -46,19 +46,19 @@ jobs:
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
-az ml batch-endpoint delete -n heartclassifiermlflowendpoint -y
+az ml batch-endpoint delete -n tclassifiermlflowendpoint3191 -y
working-directory: cli
continue-on-error: true
- name: create endpoint
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
cat endpoints/batch/deploy-models/heart-classifier-mlflow/endpoint.yml
-az ml batch-endpoint create -n heartclassifiermlflowendpoint -f endpoints/batch/deploy-models/heart-classifier-mlflow/endpoint.yml
+az ml batch-endpoint create -n tclassifiermlflowendpoint3191 -f endpoints/batch/deploy-models/heart-classifier-mlflow/endpoint.yml
working-directory: cli
- name: cleanup endpoint
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
-az ml batch-endpoint delete -n heartclassifiermlflowendpoint -y
+az ml batch-endpoint delete -n tclassifiermlflowendpoint3191 -y
working-directory: cli
@@ -46,26 +46,26 @@ jobs:
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
-az ml batch-endpoint delete -n facetextsummarizationendpoint -y
+az ml batch-endpoint delete -n textsummarizationendpoint2742 -y
working-directory: cli
continue-on-error: true
- name: create endpoint
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
cat endpoints/batch/deploy-models/huggingface-text-summarization/endpoint.yml
-az ml batch-endpoint create -n facetextsummarizationendpoint -f endpoints/batch/deploy-models/huggingface-text-summarization/endpoint.yml
+az ml batch-endpoint create -n textsummarizationendpoint2742 -f endpoints/batch/deploy-models/huggingface-text-summarization/endpoint.yml
working-directory: cli
- name: create deployment
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
cat endpoints/batch/deploy-models/huggingface-text-summarization/deployment.yml
-az ml batch-deployment create -e facetextsummarizationendpoint -f endpoints/batch/deploy-models/huggingface-text-summarization/deployment.yml
+az ml batch-deployment create -e textsummarizationendpoint2742 -f endpoints/batch/deploy-models/huggingface-text-summarization/deployment.yml
working-directory: cli
- name: cleanup endpoint
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
-az ml batch-endpoint delete -n facetextsummarizationendpoint -y
+az ml batch-endpoint delete -n textsummarizationendpoint2742 -y
working-directory: cli
@@ -46,19 +46,19 @@ jobs:
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
-az ml batch-endpoint delete -n elsimagenetclassifierendpoint -y
+az ml batch-endpoint delete -n imagenetclassifierendpoint3948 -y
working-directory: cli
continue-on-error: true
- name: create endpoint
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
cat endpoints/batch/deploy-models/imagenet-classifier/endpoint.yml
-az ml batch-endpoint create -n elsimagenetclassifierendpoint -f endpoints/batch/deploy-models/imagenet-classifier/endpoint.yml
+az ml batch-endpoint create -n imagenetclassifierendpoint3948 -f endpoints/batch/deploy-models/imagenet-classifier/endpoint.yml
working-directory: cli
- name: cleanup endpoint
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
-az ml batch-endpoint delete -n elsimagenetclassifierendpoint -y
+az ml batch-endpoint delete -n imagenetclassifierendpoint3948 -y
working-directory: cli
@@ -46,19 +46,19 @@ jobs:
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
-az ml batch-endpoint delete -n modelsmnistclassifierendpoint -y
+az ml batch-endpoint delete -n lsmnistclassifierendpoint9980 -y
working-directory: cli
continue-on-error: true
- name: create endpoint
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
cat endpoints/batch/deploy-models/mnist-classifier/endpoint.yml
-az ml batch-endpoint create -n modelsmnistclassifierendpoint -f endpoints/batch/deploy-models/mnist-classifier/endpoint.yml
+az ml batch-endpoint create -n lsmnistclassifierendpoint9980 -f endpoints/batch/deploy-models/mnist-classifier/endpoint.yml
working-directory: cli
- name: cleanup endpoint
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
-az ml batch-endpoint delete -n modelsmnistclassifierendpoint -y
+az ml batch-endpoint delete -n lsmnistclassifierendpoint9980 -y
working-directory: cli
.github/workflows/cli-endpoints-batch-deploy-pipelines-batch-scoring-with-preprocessing-endpoint.yml (new file)
@@ -0,0 +1,71 @@
# This code is autogenerated.
# Code is generated by running custom script: python3 readme.py
# Any manual changes to this file may cause incorrect behavior.
# Any manual changes will be overwritten if the code is regenerated.

name: cli-endpoints-batch-deploy-pipelines-batch-scoring-with-preprocessing-endpoint
on:
workflow_dispatch:
schedule:
- cron: "18 11/12 * * *"
pull_request:
branches:
- main
paths:
- cli/endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/**
- cli/endpoints/batch/**
- infra/bootstrapping/**
- .github/workflows/cli-endpoints-batch-deploy-pipelines-batch-scoring-with-preprocessing-endpoint.yml
- cli/setup.sh
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: check out repo
uses: actions/checkout@v2
- name: azure login
uses: azure/login@v1
with:
creds: ${{secrets.AZUREML_CREDENTIALS}}
- name: bootstrap resources
run: |
bash bootstrap.sh
working-directory: infra/bootstrapping
continue-on-error: false
- name: setup-cli
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
bash setup.sh
working-directory: cli
continue-on-error: true
- name: delete endpoint if existing
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
az ml batch-endpoint delete -n withpreprocessingendpoint6601 -y
working-directory: cli
continue-on-error: true
- name: create endpoint
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
cat endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/endpoint.yml
az ml batch-endpoint create -n withpreprocessingendpoint6601 -f endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/endpoint.yml
working-directory: cli
- name: create deployment
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
cat endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/deployment.yml
az ml batch-deployment create -e withpreprocessingendpoint6601 -f endpoints/batch/deploy-pipelines/batch-scoring-with-preprocessing/deployment.yml
working-directory: cli
- name: cleanup endpoint
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
az ml batch-endpoint delete -n withpreprocessingendpoint6601 -y
working-directory: cli
.github/workflows/cli-endpoints-batch-deploy-pipelines-hello-batch-endpoint.yml (new file)
@@ -0,0 +1,71 @@
# This code is autogenerated.
# Code is generated by running custom script: python3 readme.py
# Any manual changes to this file may cause incorrect behavior.
# Any manual changes will be overwritten if the code is regenerated.

name: cli-endpoints-batch-deploy-pipelines-hello-batch-endpoint
on:
workflow_dispatch:
schedule:
- cron: "18 11/12 * * *"
pull_request:
branches:
- main
paths:
- cli/endpoints/batch/deploy-pipelines/hello-batch/**
- cli/endpoints/batch/**
- infra/bootstrapping/**
- .github/workflows/cli-endpoints-batch-deploy-pipelines-hello-batch-endpoint.yml
- cli/setup.sh
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: check out repo
uses: actions/checkout@v2
- name: azure login
uses: azure/login@v1
with:
creds: ${{secrets.AZUREML_CREDENTIALS}}
- name: bootstrap resources
run: |
bash bootstrap.sh
working-directory: infra/bootstrapping
continue-on-error: false
- name: setup-cli
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
bash setup.sh
working-directory: cli
continue-on-error: true
- name: delete endpoint if existing
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
az ml batch-endpoint delete -n pelineshellobatchendpoint9006 -y
working-directory: cli
continue-on-error: true
- name: create endpoint
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
cat endpoints/batch/deploy-pipelines/hello-batch/endpoint.yml
az ml batch-endpoint create -n pelineshellobatchendpoint9006 -f endpoints/batch/deploy-pipelines/hello-batch/endpoint.yml
working-directory: cli
- name: create deployment
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
cat endpoints/batch/deploy-pipelines/hello-batch/deployment.yml
az ml batch-deployment create -e pelineshellobatchendpoint9006 -f endpoints/batch/deploy-pipelines/hello-batch/deployment.yml
working-directory: cli
- name: cleanup endpoint
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
az ml batch-endpoint delete -n pelineshellobatchendpoint9006 -y
working-directory: cli
.github/workflows/cli-endpoints-batch-deploy-pipelines-training-with-components-endpoint.yml (new file)
@@ -0,0 +1,64 @@
# This code is autogenerated.
# Code is generated by running custom script: python3 readme.py
# Any manual changes to this file may cause incorrect behavior.
# Any manual changes will be overwritten if the code is regenerated.

name: cli-endpoints-batch-deploy-pipelines-training-with-components-endpoint
on:
workflow_dispatch:
schedule:
- cron: "18 11/12 * * *"
pull_request:
branches:
- main
paths:
- cli/endpoints/batch/deploy-pipelines/training-with-components/**
- cli/endpoints/batch/**
- infra/bootstrapping/**
- .github/workflows/cli-endpoints-batch-deploy-pipelines-training-with-components-endpoint.yml
- cli/setup.sh
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: check out repo
uses: actions/checkout@v2
- name: azure login
uses: azure/login@v1
with:
creds: ${{secrets.AZUREML_CREDENTIALS}}
- name: bootstrap resources
run: |
bash bootstrap.sh
working-directory: infra/bootstrapping
continue-on-error: false
- name: setup-cli
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
bash setup.sh
working-directory: cli
continue-on-error: true
- name: delete endpoint if existing
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
az ml batch-endpoint delete -n ingwithcomponentsendpoint9219 -y
working-directory: cli
continue-on-error: true
- name: create endpoint
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
cat endpoints/batch/deploy-pipelines/training-with-components/endpoint.yml
az ml batch-endpoint create -n ingwithcomponentsendpoint9219 -f endpoints/batch/deploy-pipelines/training-with-components/endpoint.yml
working-directory: cli
- name: cleanup endpoint
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
az ml batch-endpoint delete -n ingwithcomponentsendpoint9219 -y
working-directory: cli
@@ -46,19 +46,19 @@ jobs:
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
-az ml online-endpoint delete -n odelminimalmultimodelendpoint -y
+az ml online-endpoint delete -n minimalmultimodelendpoint9199 -y
working-directory: cli
continue-on-error: true
- name: create endpoint
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
cat endpoints/online/custom-container/minimal/multimodel/minimal-multimodel-endpoint.yml
-az ml online-endpoint create -n odelminimalmultimodelendpoint -f endpoints/online/custom-container/minimal/multimodel/minimal-multimodel-endpoint.yml
+az ml online-endpoint create -n minimalmultimodelendpoint9199 -f endpoints/online/custom-container/minimal/multimodel/minimal-multimodel-endpoint.yml
working-directory: cli
- name: cleanup endpoint
run: |
source "${{ github.workspace }}/infra/bootstrapping/sdk_helpers.sh";
source "${{ github.workspace }}/infra/bootstrapping/init_environment.sh";
-az ml online-endpoint delete -n odelminimalmultimodelendpoint -y
+az ml online-endpoint delete -n minimalmultimodelendpoint9199 -y
working-directory: cli