diff --git a/docs/modules/ROOT/pages/saliency-explanations-on-odh.adoc b/docs/modules/ROOT/pages/saliency-explanations-on-odh.adoc
index 3342533..baa06d9 100644
--- a/docs/modules/ROOT/pages/saliency-explanations-on-odh.adoc
+++ b/docs/modules/ROOT/pages/saliency-explanations-on-odh.adoc
@@ -1,6 +1,6 @@
 = Saliency explanations on ODH
 
-This tutorial will walk you through setting up and using TrustyAI to provide saliency explanations for model predictions within a OpenShift environment using OpenDataHub. We will deploy a model, configure the environment, and demonstrate how to obtain predictions and their explanations.
+This tutorial walks you through setting up and using TrustyAI to provide saliency explanations for model inferences within an OpenShift environment using OpenDataHub. We will deploy a model, configure the environment, and demonstrate how to obtain inferences and their explanations.
 
 [NOTE]
 ====
@@ -107,12 +107,12 @@ export TOKEN=$(oc whoami -t)
 
 == Requesting Explanations
 
-=== Issue a Prediction
+=== Request an Inference
 
-In order to obtain an explanation, we first need to make a prediction.
-The explanation request will be based on this prediction ID.
+To obtain an explanation, we first need to make an inference.
+The explanation request will be based on this inference ID.
 
-Start by sending an inference request to the model to get a prediction. Replace `${TOKEN}` with your actual authorization token.
+Start by sending an inference request to the model. Replace `${TOKEN}` with your actual authorization token.
 
 [source,shell]
 ----
@@ -121,26 +121,46 @@ curl -skv -H "Authorization: Bearer ${TOKEN}" \
     -d '{"inputs": [{"name": "predict","shape": [1,5], "datatype": "FP64", "data": [1.0, 2.0, 1.0, 0.0, 1.0]}]}'
 ----
 
-=== Get a Random Prediction ID
+=== Get an Inference ID
 
-Extract the latest prediction ID for use in obtaining an explanation.
+The TrustyAI service provides an endpoint to list stored inference IDs.
+You can list all non-synthetic (_organic_) inference IDs by running:
 
 ```shell
-export PREDICTION_ID=$(oc exec $TRUSTYAI_POD -n explainer-tests -c trustyai-service -- sh -c "awk -F',' '{print \$2}' /inputs/explainer-test-internal_data.csv | tail -n 1")
+curl -skv -H "Authorization: Bearer ${TOKEN}" \
+    "https://${TRUSTYAI_ROUTE}/info/inference/ids/explainer-test?type=organic"
+```
+
+The response will be similar to:
+
+```json
+[
+  {
+    "id":"a3d3d4a2-93f6-4a23-aedb-051416ecf84f",
+    "timestamp":"2024-06-25T09:06:28.75701201"
+  }
+]
+```
+
+Extract the most recent inference ID to use in the explanation request:
+
+```shell
+export INFERENCE_ID=$(curl -skv -H "Authorization: Bearer ${TOKEN}" \
+    "https://${TRUSTYAI_ROUTE}/info/inference/ids/explainer-test?type=organic" | jq -r '.[-1].id')
 ```
 
 === Request a LIME Explanation
 
 We will use LIME as our explainer for this tutorial. More information on LIME can be found xref:local-explainers.adoc#LIME[here].
 
-Request a LIME explanation for the selected prediction ID.
+Request a LIME explanation for the selected inference ID.
 
 [source,shell]
 ----
 curl -sk -X POST -H "Authorization: Bearer ${TOKEN}" \
     -H "Content-Type: application/json" \
     -d "{
-         \"predictionId\": \"$PREDICTION_ID\",
+         \"predictionId\": \"$INFERENCE_ID\",
          \"modelConfig\": {
              \"target\": \"modelmesh-serving.${NAMESPACE}.svc.cluster.local:8033\",
              \"name\": \"explainer-test\",
@@ -153,7 +173,7 @@ curl -sk -X POST -H "Authorization: Bearer ${TOKEN}" \
 
 === Results
 
-The output will show the saliency scores and confidence for each input feature used in the prediction.
+The output will show the saliency scores and confidence for each input feature used in the inference.
 
 [source,json]
 ----
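A note on the extraction step the patch introduces: before sending the explanation request, it is worth confirming that the `jq` filter actually produced an ID, since `jq -r '.[-1].id'` emits the string `null` when the service has no recorded inferences, and that would otherwise be sent as a bogus `predictionId`. A minimal sketch of such a guard, exercised here against a canned copy of the sample response from the patch (it assumes `jq` is installed, as the extraction step already does; the `RESPONSE` variable is purely illustrative):

```shell
# Canned response in the shape returned by the /info/inference/ids
# endpoint; the ID is the one from the sample response above.
RESPONSE='[{"id":"a3d3d4a2-93f6-4a23-aedb-051416ecf84f","timestamp":"2024-06-25T09:06:28.75701201"}]'

# Same filter as the tutorial: take the id of the last (latest) entry.
INFERENCE_ID=$(printf '%s' "$RESPONSE" | jq -r '.[-1].id')

# jq prints "null" for an empty list, so reject both empty and "null"
# before using the value in an explanation request.
if [ -z "$INFERENCE_ID" ] || [ "$INFERENCE_ID" = "null" ]; then
    echo "No organic inference IDs recorded yet" >&2
    exit 1
fi
echo "$INFERENCE_ID"
```

The same guard can be dropped in after the real `export INFERENCE_ID=…` line; only the source of `RESPONSE` differs.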