@@ -136,7 +136,7 @@ print(model_list)
 Initialize a model, so you can check its status, load, run, or shut it down.
 
 ``` py
-model = client.model("openai-community/gpt2");
+model = client.model("openai-community/gpt2")
 ```
 
 ## Load a Model
@@ -146,7 +146,7 @@ Convenience method for `model.start()`. Automatically waits for the instance to
 Progress is printed as it executes.
 
 ``` py
-model.load();
+model.load()
 ```
 
 The options argument is _optional_ and has two properties, concurrency and timeout.
@@ -155,7 +155,7 @@ The options argument is _optional_ and has two properties, concurrency and time
 model.load({
     "concurrency": 1,
     "timeout": 300,
-});
+})
 ```
 
 ```
@@ -186,17 +186,17 @@ Check on the status of the model, to see if it's deploying, running, or stopped.
 ``` py
 status = model.status()
 
-print(status);
+print(status)
 ```
 
 ## Run a Model
 
 Run inference.
 
 ``` py
-output = model.run("Once upon a time there was a");
+output = model.run("Once upon a time there was a")
 
-print(output);
+print(output)
 ```
 
 ## Run a Model with HuggingFace Params
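The calls changed in the hunks above compose into a simple lifecycle: initialize, load, run, shut down. Below is a minimal sketch of that flow written only against the surface documented here (`client.model`, `model.load`, `model.run`, `model.stop`); the `generate` wrapper itself is hypothetical, not part of the SDK:

```python
def generate(client, model_id: str, prompt: str) -> str:
    """Load a model, run one inference, and shut it down.

    Assumes the Bytez client API documented above:
    client.model(id) returns a model with load(), run(prompt), stop().
    """
    model = client.model(model_id)
    # Options are optional; shown here with the documented defaults-style values.
    model.load({"concurrency": 1, "timeout": 300})
    try:
        return model.run(prompt)
    finally:
        # Stop early rather than waiting for the timeout, to save costs.
        model.stop()
```

Usage would then be a single call, e.g. `output = generate(client, "openai-community/gpt2", "Once upon a time there was a")`, with the instance guaranteed to be stopped even if `run` raises.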
@@ -239,27 +239,27 @@ By default, models will shut down based on their timeout (seconds) when loaded v
 To shut down and save costs early, run the following:
 
 ``` py
-model.stop();
+model.stop()
 ```
 
 ## List Your Running Instances
 
 ``` py
-instances = client.list_instances();
+instances = client.list_instances()
 
-print(instances);
+print(instances)
 ```
 
 ## Request a Huggingface Model Not Yet on Bytez
 
 To request a model that exists on Huggingface but not yet on Bytez, you can do the following:
 
 ``` py
-model_id = "openai-community/gpt2";
+model_id = "openai-community/gpt2"
 
-job_status = client.process(model_id);
+job_status = client.process(model_id)
 
-print(job_status);
+print(job_status)
 ```
 
 This sends a job to an automated queue. When the job completes, you'll receive an email indicating the model is ready for use with the models API.
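Since `client.process` only enqueues a job and returns its status (completion is signalled later by email, so there is nothing to poll), several models can be requested in one pass. A small hedged sketch; the `request_models` helper is hypothetical, and only `client.process` comes from the docs above:

```python
def request_models(client, model_ids):
    """Queue a processing job for each Hugging Face model id.

    Returns a dict mapping model id -> whatever job status
    client.process (documented above) reports for it.
    """
    statuses = {}
    for model_id in model_ids:
        statuses[model_id] = client.process(model_id)
    return statuses
```

Each requested model then arrives independently; you would watch for the per-model ready email rather than polling the queue.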