TrustyAI is an open source Responsible AI toolkit supported by Red Hat and IBM. TrustyAI provides tools for a variety of responsible AI workflows, such as:
- Local and global model explanations
- Fairness metrics
- Drift metrics
- Text detoxification
- Language model benchmarking
- Language model guardrails
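As a concrete illustration of the kind of metric involved, statistical parity difference (SPD), one common group fairness metric, compares the rate of favorable outcomes between an unprivileged and a privileged group. A minimal sketch in plain Python (this is illustrative, not the TrustyAI API; the data and function name are made up):

```python
def statistical_parity_difference(outcomes_privileged, outcomes_unprivileged):
    """SPD = P(favorable | unprivileged) - P(favorable | privileged).

    Inputs are lists of booleans, where True marks a favorable outcome.
    Values near 0 indicate parity; a commonly used "fair" band is [-0.1, 0.1].
    """
    p_priv = sum(outcomes_privileged) / len(outcomes_privileged)
    p_unpriv = sum(outcomes_unprivileged) / len(outcomes_unprivileged)
    return p_unpriv - p_priv

# Illustrative data: loan approvals for two groups
privileged = [True, True, True, False]      # 75% favorable
unprivileged = [True, False, False, False]  # 25% favorable
print(statistical_parity_difference(privileged, unprivileged))  # -0.5
```

A strongly negative SPD like this would indicate that the unprivileged group receives favorable outcomes far less often, which is exactly the kind of signal the fairness metrics are designed to surface.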
TrustyAI is a default component of Open Data Hub and Red Hat OpenShift AI, and has integrations with projects like KServe, Caikit, and vLLM.
The trustyai-explainability repo is our main hub, containing the TrustyAI Core Library and TrustyAI Service.
The TrustyAI Core Library is a Java library for explainable and transparent AI, containing XAI algorithms, drift metrics, fairness metrics, and language model accuracy metrics.
The TrustyAI Service exposes TrustyAI Core as a containerized REST server, enabling responsible AI workflows in cloud and distributed environments. The TrustyAI Service has the following integrations:
- Connectivity to Open Data Hub model servers
- Connectivity to Red Hat OpenShift AI model servers
- KServe side-car explainer support
- MariaDB connectivity for storing model inferences
For example, you can deploy the TrustyAI service alongside KServe models in Open Data Hub to perform drift and bias measurements throughout your deployment.
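The drift measurements mentioned above follow a common pattern: compare a reference (training-time) distribution of model inputs against live inference data and flag significant departures. A toy mean-shift check in plain Python, purely to illustrate the idea (this is not the service's API or its exact algorithm):

```python
import statistics

def mean_shift_drift(reference, live, threshold=2.0):
    """Flag drift when the mean of the live data departs from the
    reference mean by more than `threshold` reference standard deviations."""
    ref_mean = statistics.mean(reference)
    ref_std = statistics.stdev(reference)
    z = abs(statistics.mean(live) - ref_mean) / ref_std
    return z > threshold

reference = [10.0, 11.0, 9.0, 10.5, 9.5]   # training-time feature values
live_ok = [10.2, 9.8, 10.1]                # similar distribution: no drift
live_drift = [25.0, 26.0, 24.5]            # shifted distribution: drift
print(mean_shift_drift(reference, live_ok))     # False
print(mean_shift_drift(reference, live_drift))  # True
```

In a deployed setting, the TrustyAI Service performs this kind of comparison continuously against the inference data it records, rather than on hand-collected lists.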
TrustyAI Python provides a Python interface to the TrustyAI Core library, which lets you use TrustyAI in more traditional data science environments like Jupyter.
The LM-Eval K8s Service packages and serves EleutherAI's popular LM-Evaluation-Harness library in a Kubernetes environment, allowing for scalable evaluations running against K8s LLM servers such as vLLM.
The Guardrails project provides a Kubernetes LLM guardrailing ecosystem, with dynamic, request-time pipelining of specific detectors and text chunkers.
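Conceptually, a guardrailing pipeline of this shape chunks incoming text and fans each chunk out to a set of detectors, with any positive detection used to block or annotate the response. A minimal sketch of that flow in plain Python (the chunker and detectors here are simplified stand-ins, not the project's actual components):

```python
def line_chunker(text):
    """Naive chunker: one chunk per line (real chunkers are more careful)."""
    return [c.strip() for c in text.splitlines() if c.strip()]

def profanity_detector(chunk):
    # Stand-in detector: flags chunks containing blocklisted words.
    blocklist = {"darn", "heck"}
    return any(word.strip(".,!?") in blocklist for word in chunk.lower().split())

def pii_detector(chunk):
    # Stand-in detector: flags chunks that look like they contain an email.
    return "@" in chunk

def run_guardrails(text, chunker, detectors):
    """Return (chunk, detector_name) pairs for every detection."""
    detections = []
    for chunk in chunker(text):
        for detector in detectors:
            if detector(chunk):
                detections.append((chunk, detector.__name__))
    return detections

text = "Hello there.\nContact me at user@example.com\nWhat the heck."
print(run_guardrails(text, line_chunker, [profanity_detector, pii_detector]))
```

The value of doing this at request time is that callers can compose different chunkers and detectors per request, which is the dynamic pipelining the Guardrails project provides.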
The TrustyAI Kubernetes Operator manages the deployment of various TrustyAI components into a Kubernetes cluster. The TrustyAI operator is a default component of both Open Data Hub and Red Hat OpenShift AI.
While these are our largest and most active projects, check out our full list of repositories to see more experimental work, such as trustyai-detoxify-sft.
- TrustyAI Website Tutorials: Walkthroughs of common TrustyAI workflows, such as bias monitoring, drift monitoring, and language model evaluation.
- trustyai-explainability-python-examples: Examples on how to get started with the Python TrustyAI library.
- trustyai-odh-demos: Demos of the TrustyAI Service within Open Data Hub.
- Coming Soon
- TrustyAI Reference: Scratch notes on various common development and testing flows.
Check out our community repository for discussions and our Community Meeting information.
The project roadmap offers a view of the new tools and integrations the project developers are planning to add.
TrustyAI uses the ODH governance model and code of conduct.