From 2b8d608c0eb5acc738f5bb15aea6603585d0f74c Mon Sep 17 00:00:00 2001
From: Rui Vieira
Date: Tue, 25 Jun 2024 16:31:20 +0100
Subject: [PATCH] Change prediction to inference

---
 .../pages/saliency-explanations-on-odh.adoc | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/docs/modules/ROOT/pages/saliency-explanations-on-odh.adoc b/docs/modules/ROOT/pages/saliency-explanations-on-odh.adoc
index bf6a976..baa06d9 100644
--- a/docs/modules/ROOT/pages/saliency-explanations-on-odh.adoc
+++ b/docs/modules/ROOT/pages/saliency-explanations-on-odh.adoc
@@ -1,6 +1,6 @@
 = Saliency explanations on ODH
 
-This tutorial will walk you through setting up and using TrustyAI to provide saliency explanations for model predictions within a OpenShift environment using OpenDataHub. We will deploy a model, configure the environment, and demonstrate how to obtain predictions and their explanations.
+This tutorial will walk you through setting up and using TrustyAI to provide saliency explanations for model inferences within an OpenShift environment using OpenDataHub. We will deploy a model, configure the environment, and demonstrate how to obtain inferences and their explanations.
 
 [NOTE]
 ====
@@ -107,12 +107,12 @@ export TOKEN=$(oc whoami -t)
 
 == Requesting Explanations
 
-=== Issue a Prediction
+=== Request an Inference
 
-In order to obtain an explanation, we first need to make a prediction.
-The explanation request will be based on this prediction ID.
+In order to obtain an explanation, we first need to make an inference.
+The explanation request will be based on this inference ID.
 
-Start by sending an inference request to the model to get a prediction. Replace `${TOKEN}` with your actual authorization token.
+Start by sending an inference request to the model to get an inference. Replace `${TOKEN}` with your actual authorization token.
 
 [source,shell]
 ----
@@ -142,10 +142,10 @@ The response will be similar to
 ]
 ```
 
-Extract the latest prediction ID for use in obtaining an explanation.
+Extract the latest inference ID for use in obtaining an explanation.
 
 ```shell
-export PREDICTION_ID=$(curl -skv -H "Authorization: Bearer ${TOKEN}" \
+export INFERENCE_ID=$(curl -skv -H "Authorization: Bearer ${TOKEN}" \
   https://${TRUSTYAI_ROUTE}/info/inference/ids/explainer-test?type=organic | jq -r '.[-1].id')
 ```
 
@@ -153,14 +153,14 @@ export PREDICTION_ID=$(curl -skv -H "Authorization: Bearer ${TOKEN}" \
 
 We will use LIME as our explainer for this tutorial. More information on LIME can be found xref:local-explainers.adoc#LIME[here].
 
-Request a LIME explanation for the selected prediction ID.
+Request a LIME explanation for the selected inference ID.
 
 [source,shell]
 ----
 curl -sk -X POST -H "Authorization: Bearer ${TOKEN}" \
   -H "Content-Type: application/json" \
   -d "{
-        \"predictionId\": \"$PREDICTION_ID\",
+        \"predictionId\": \"$INFERENCE_ID\",
         \"modelConfig\": {
           \"target\": \"modelmesh-serving.${NAMESPACE}.svc.cluster.local:8033\",
           \"name\": \"explainer-test\",
@@ -173,7 +173,7 @@ curl -sk -X POST -H "Authorization: Bearer ${TOKEN}" \
 
 === Results
 
-The output will show the saliency scores and confidence for each input feature used in the prediction.
+The output will show the saliency scores and confidence for each input feature used in the inference.
 
 [source,json]
 ----
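
The `INFERENCE_ID` extraction this patch renames hinges on `jq`'s negative index `.[-1]`, which selects the last (most recent) element of the returned array. A minimal offline sketch of that filter, using a hypothetical two-entry payload in place of the real `/info/inference/ids/explainer-test` response:

```shell
# Hypothetical stand-in for the JSON returned by the TrustyAI
# /info/inference/ids endpoint (real responses carry more fields).
PAYLOAD='[{"id":"a1b2"},{"id":"c3d4"}]'

# .[-1] selects the last array element, .id extracts its id, and
# -r emits it as a raw string suitable for a shell variable.
INFERENCE_ID=$(printf '%s' "$PAYLOAD" | jq -r '.[-1].id')
echo "$INFERENCE_ID"   # prints the most recent id: c3d4
```

Because the endpoint returns inferences in order, taking the last element yields the latest inference ID without any extra sorting.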