How can I connect to the inference machine in order to debug the custom image? #5087
Unanswered
andrew-aladjev
asked this question in Help
Replies: 0 comments
Hello, I've derived my custom Docker image from your SageMaker image. The Dockerfile is the following:
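The author's actual Dockerfile was not captured in this page. Purely as an illustration, a derived image of the kind described usually looks something like the sketch below; the base-image tag and the `apt` package are assumptions, not the author's real file:

```dockerfile
# Hypothetical example -- NOT the author's actual Dockerfile.
# The base-image tag is an assumption; use the SageMaker PyTorch
# inference DLC tag you actually deploy with.
FROM 763104351884.dkr.ecr.us-east-1.amazonaws.com/pytorch-inference:2.1.0-gpu-py310-cu118-ubuntu20.04-sagemaker

# Install the ffmpeg binary and shared libraries that torchaudio's
# ffmpeg backend tries to load.
RUN apt-get update && \
    apt-get install -y --no-install-recommends ffmpeg && \
    rm -rf /var/lib/apt/lists/*
```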
I built this image, uploaded it to ECR, and used the new custom image ARN in the `image_uri` of `PyTorchModel` as follows:

It works: SageMaker deployed my model, but the inference code still fails because torchaudio can't see the `ffmpeg` backend. Now I want to debug the machine. To do that I want to make some kind of terminal connection, but I can't find any way to do so.

I've clicked through everything in the Amazon UI: Amazon SageMaker AI, the Inference menu, the Notebook menu, the Jupyter menu. I have even opened the fluffy toy "Studio", and I still can't see anything: just the endpoint, with no way to connect to the related machine(s). What should I do? For now I am thinking of moving away from Amazon.
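The deployment code referenced above ("as follows") also did not survive extraction. A sketch of that step with the SageMaker Python SDK, where the image URI, S3 path, role ARN, and instance type are all placeholders:

```python
# Sketch of the deployment described above; every identifier here is a
# placeholder, not the author's real value.
from sagemaker.pytorch import PyTorchModel

model = PyTorchModel(
    image_uri="<account>.dkr.ecr.<region>.amazonaws.com/my-torchaudio-inference:latest",
    model_data="s3://my-bucket/model/model.tar.gz",
    role="arn:aws:iam::<account>:role/MySageMakerRole",
    entry_point="inference.py",
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g4dn.xlarge",
)
```

With a custom `image_uri`, SageMaker pulls that exact image for the endpoint, so anything missing from the image (such as ffmpeg) will be missing at inference time.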
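Short of a terminal, a cheap first diagnostic is to log from inside the inference code whether ffmpeg is even on `PATH`; the output shows up in the endpoint's CloudWatch logs. A minimal sketch (the `ffmpeg_available` helper is hypothetical; note that torchaudio's ffmpeg backend also needs ffmpeg's shared libraries, so the binary being present is necessary but not necessarily sufficient):

```python
import shutil


def ffmpeg_available() -> bool:
    """Return True if an ffmpeg binary is on PATH in this environment."""
    return shutil.which("ffmpeg") is not None


if __name__ == "__main__":
    # Print this at model load time (e.g. from the model-loading hook in
    # inference.py) and read it back from the endpoint's CloudWatch logs.
    print("ffmpeg on PATH:", ffmpeg_available())
```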
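The SageMaker console does not expose a terminal for real-time endpoint instances, but the same image can be debugged locally: since the endpoint runs the container you pushed to ECR, pulling it and opening a shell in it reproduces the environment. A sketch (the image URI is a placeholder):

```shell
# Open a terminal in the same image locally instead of on the endpoint.
docker run --rm -it --entrypoint /bin/bash \
    <account>.dkr.ecr.<region>.amazonaws.com/my-torchaudio-inference:latest

# Inside the container, check what torchaudio can actually see:
python -c "import torchaudio; print(torchaudio.list_audio_backends())"
ffmpeg -version   # is the binary present at all?
```

If `ffmpeg` is absent from the list locally, the fix belongs in the Dockerfile, and no endpoint-side terminal is needed.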