
Commit 36046ff

More cleanup.
1 parent 2f461a9 commit 36046ff

File tree

1 file changed (+12 −12 lines)

python/readme.md

Lines changed: 12 additions & 12 deletions
@@ -136,7 +136,7 @@ print(model_list)
 Initialize a model, so you can check its status, load, run, or shut it down.

 ```py
-model = client.model("openai-community/gpt2");
+model = client.model("openai-community/gpt2")
 ```

 ## Load a Model
@@ -146,7 +146,7 @@ Convenience method for `model.start()`. Automatically waits for the instance to
 Progress is printed as it executes.

 ```py
-model.load();
+model.load()
 ```

 The options argument is _optional_ and has two properties, concurrency, and timeout.
@@ -155,7 +155,7 @@ The options argument is _optional_ and has two properties, concurrency, and time
 model.load({
   "concurrency": 1,
   "timeout": 300,
-});
+})
 ```

 ```
@@ -186,17 +186,17 @@ Check on the status of the model, to see if it's deploying, running, or stopped.
 ```py
 status = model.status()

-print(status);
+print(status)
 ```

 ## Run a Model

 Run inference.

 ```py
-output = model.run("Once upon a time there was a");
+output = model.run("Once upon a time there was a")

-print(output);
+print(output)
 ```

 ## Run a Model with HuggingFace Params
@@ -239,27 +239,27 @@ By default, models will shut down based on their timeout (seconds) when loaded v
 To shut down and save costs early, run the following:

 ```py
-model.stop();
+model.stop()
 ```

 ## List Your Running Instances

 ```py
-instances = client.list_instances();
+instances = client.list_instances()

-print(instances);
+print(instances)
 ```

 ## Request a Huggingface Model Not Yet on Bytez

 To request a model that exists on Huggingface but not yet on Bytez, you can do the following:

 ```py
-model_id = "openai-community/gpt2";
+model_id = "openai-community/gpt2"

-job_status = client.process(model_id);
+job_status = client.process(model_id)

-print(job_status);
+print(print is not needed here)
 ```

 This sends a job to an automated queue. When the job completes, you'll receive an email indicating the model is ready for use with the models API.
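For reference, the corrected snippets from this diff chain together as a single workflow. The sketch below is a minimal illustration, not part of the commit: the import path, client constructor, and API key are assumptions (they do not appear in these hunks), while the method calls (`model`, `load`, `status`, `run`, `stop`, `list_instances`) are taken directly from the readme as changed here.

```py
# Minimal sketch, assuming the Bytez Python client used throughout the readme.
# The import and constructor below are assumptions not shown in this diff;
# only the method calls are confirmed by the changed snippets.
from bytez import Bytez  # assumed import path

client = Bytez("YOUR_API_KEY")  # hypothetical placeholder key

# Initialize a model handle (from the "Initialize" snippet).
model = client.model("openai-community/gpt2")

# Load it, optionally setting concurrency and timeout (seconds).
model.load({"concurrency": 1, "timeout": 300})

# Check status, run inference, then shut down to save costs.
print(model.status())
print(model.run("Once upon a time there was a"))
model.stop()

# List any instances still running under this account.
print(client.list_instances())
```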
