Follow up from #213

@javierluraschi That works! Thanks a lot! :-)

Just a couple more points. If I provide a custom job ID and a storage location in a job.yml placed in the working directory, cloudml_train() doesn't recognize them and falls back to the default job ID (cloudml_<datetimestamp>) and the default storage location.

job.yml:

jobId: local-r-heramb
storage: gs://data-science-storage-bucket/
custom_commands: ~
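For reference, the file itself parses fine. A minimal R sketch (assuming the yaml package is installed) that reads the same job.yml and prints the values cloudml_train() is ignoring:

library(yaml)

# Read the custom job config from the working directory
job_config <- yaml::read_yaml("job.yml")

job_config$jobId            # "local-r-heramb"
job_config$storage          # "gs://data-science-storage-bucket/"
job_config$custom_commands  # NULL ("~" is YAML for null)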
This is not a very critical issue; I'm just highlighting it.

Thanks once again for resolving this! I appreciate it very much.

-Heramb
@javierluraschi - Another thing that would be very useful is the ability to specify where the dry-run directory is stored, instead of having it auto-saved into the temp folder. The dry-run option packages the training script in much the same way a Python script would be packaged before being sent to the AI Platform instance through Airflow/Composer jobs.
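In the meantime, a rough workaround sketch in base R (hypothetical, not part of the cloudml API): after the dry run finishes, copy the staged bundle out of the session temp directory to a location you control. The assumption that the bundle is staged as a subdirectory of tempdir() is mine; adjust the pattern to whatever the dry run actually writes.

# Hypothetical helper: copy dry-run output from tempdir() to a chosen folder
keep_dry_run <- function(target = "cloudml-dry-runs") {
  dir.create(target, showWarnings = FALSE, recursive = TRUE)
  # Assumes the dry run staged one or more directories under tempdir()
  staged <- list.dirs(tempdir(), recursive = FALSE)
  file.copy(staged, target, recursive = TRUE)
  normalizePath(target)
}

keep_dry_run()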