Commit e5c88ad

Auto-generated API code (#2680)
1 parent 75f31d9 commit e5c88ad

File tree

2 files changed (+45, -1 lines)


docs/reference.asciidoc (+12, -1)
@@ -7283,7 +7283,7 @@ To unset a version, replace the template without specifying one.
 ** *`create` (Optional, boolean)*: If true, this request cannot replace or update existing index templates.
 ** *`master_timeout` (Optional, string | -1 | 0)*: Period to wait for a connection to the master node. If no response is
 received before the timeout expires, the request fails and returns an error.
-** *`cause` (Optional, string)*
+** *`cause` (Optional, string)*: User defined reason for creating/updating the index template
 
 [discrete]
 ==== recovery
@@ -8093,6 +8093,17 @@ NOTE: The `chat_completion` task type only supports streaming and only through t
 ** *`service` (Enum("elastic"))*: The type of service supported for the specified task type. In this case, `elastic`.
 ** *`service_settings` ({ model_id, rate_limit })*: Settings used to install the inference model. These settings are specific to the `elastic` service.
 
+[discrete]
+==== put_mistral
+Configure a Mistral inference endpoint
+
+{ref}/infer-service-mistral.html[Endpoint documentation]
+[source,ts]
+----
+client.inference.putMistral()
+----
+
+
 [discrete]
 ==== put_openai
 Create an OpenAI inference endpoint.
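The docs stub above shows only the bare `client.inference.putMistral()` call. As a hedged illustration of what a request might look like: the two path parts (`task_type`, `mistral_inference_id`) come from the generated client code in this commit, while the `service_settings` field names are hypothetical assumptions, not confirmed by the diff.

```typescript
// Hypothetical parameters for the new endpoint. Only task_type and
// mistral_inference_id are confirmed by this commit (as URL path parts);
// the service_settings shape below is an illustrative assumption.
const putMistralParams = {
  task_type: 'text_embedding',
  mistral_inference_id: 'mistral-embeddings',
  service: 'mistral',                // assumed field
  service_settings: {
    api_key: '<your-api-key>',       // hypothetical field name
    model: 'mistral-embed'           // hypothetical field name
  }
}

// With a configured client this would be sent as:
//   await client.inference.putMistral(putMistralParams)
// which issues a PUT to /_inference/<task_type>/<mistral_inference_id>:
console.log(`PUT /_inference/${putMistralParams.task_type}/${putMistralParams.mistral_inference_id}`)
```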

src/api/api/inference.ts (+33)
@@ -306,6 +306,39 @@ export default class Inference {
     return await this.transport.request({ path, method, querystring, body, meta }, options)
   }
 
+  /**
+   * Configure a Mistral inference endpoint
+   * @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.18/infer-service-mistral.html | Elasticsearch API documentation}
+   */
+  async putMistral (this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithOutMeta): Promise<T.TODO>
+  async putMistral (this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.TODO, unknown>>
+  async putMistral (this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptions): Promise<T.TODO>
+  async putMistral (this: That, params?: T.TODO | TB.TODO, options?: TransportRequestOptions): Promise<any> {
+    const acceptedPath: string[] = ['task_type', 'mistral_inference_id']
+    const querystring: Record<string, any> = {}
+    const body = undefined
+
+    params = params ?? {}
+    for (const key in params) {
+      if (acceptedPath.includes(key)) {
+        continue
+      } else if (key !== 'body') {
+        querystring[key] = params[key]
+      }
+    }
+
+    const method = 'PUT'
+    const path = `/_inference/${encodeURIComponent(params.task_type.toString())}/${encodeURIComponent(params.mistral_inference_id.toString())}`
+    const meta: TransportRequestMetadata = {
+      name: 'inference.put_mistral',
+      pathParts: {
+        task_type: params.task_type,
+        mistral_inference_id: params.mistral_inference_id
+      }
+    }
+    return await this.transport.request({ path, method, querystring, body, meta }, options)
+  }
+
   /**
    * Create an OpenAI inference endpoint. Create an inference endpoint to perform an inference task with the `openai` service. When you create an inference endpoint, the associated machine learning model is automatically deployed if it is not already running. After creating the endpoint, wait for the model deployment to complete before using it. To verify the deployment status, use the get trained model statistics API. Look for `"state": "fully_allocated"` in the response and ensure that the `"allocation_count"` matches the `"target_allocation_count"`. Avoid creating multiple endpoints for the same model unless required, as each endpoint consumes significant resources.
    * @see {@link https://www.elastic.co/guide/en/elasticsearch/reference/8.18/infer-service-openai.html | Elasticsearch API documentation}
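The generated method's request-building logic can be sketched standalone as a minimal re-statement of the code above, assuming nothing beyond the JavaScript standard library (`buildPutMistralRequest` is a hypothetical helper name, not part of the client): keys listed in `acceptedPath` become URL path parts, every other key except `body` is copied into the querystring, and path parts are URI-encoded.

```typescript
// Standalone sketch of what the generated putMistral body does.
// buildPutMistralRequest is a hypothetical name for illustration only.
function buildPutMistralRequest (
  params: Record<string, any>
): { method: string, path: string, querystring: Record<string, any> } {
  const acceptedPath = ['task_type', 'mistral_inference_id']
  const querystring: Record<string, any> = {}
  for (const key in params) {
    if (acceptedPath.includes(key)) {
      continue // path parts never leak into the querystring
    } else if (key !== 'body') {
      querystring[key] = params[key]
    }
  }
  // Path parts are percent-encoded, mirroring the generated code
  const path = `/_inference/${encodeURIComponent(String(params.task_type))}/${encodeURIComponent(String(params.mistral_inference_id))}`
  return { method: 'PUT', path, querystring }
}

const req = buildPutMistralRequest({
  task_type: 'text_embedding',
  mistral_inference_id: 'my mistral id', // spaces get percent-encoded
  timeout: '30s'                         // lands in the querystring
})
console.log(req.path)                         // /_inference/text_embedding/my%20mistral%20id
console.log(JSON.stringify(req.querystring))  // {"timeout":"30s"}
```

Routing unrecognized keys into the querystring (rather than rejecting them) is the pattern the other generated methods in this file follow; a plausible rationale is that new server-side parameters keep working without regenerating the client.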
