
TrustyAI

TrustyAI Explainability Toolkit

Welcome to TrustyAI 👋

TrustyAI is an open source Responsible AI toolkit supported by Red Hat and IBM. TrustyAI provides tools for a variety of responsible AI workflows, such as:

  • Local and global model explanations
  • Fairness metrics
  • Drift metrics
  • Text detoxification
  • Language model benchmarking
  • Language model guardrails
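
To give a flavor of the fairness metrics, TrustyAI includes measures such as statistical parity difference (SPD). The helper below is an illustrative plain-Python sketch of the idea, not part of the TrustyAI API:

```python
def statistical_parity_difference(privileged, unprivileged):
    """SPD = P(favorable | unprivileged) - P(favorable | privileged).

    Each argument is a sequence of binary outcomes, where 1 means the
    model produced the favorable outcome. A value near 0 suggests the
    two groups receive favorable outcomes at similar rates.
    """
    p_privileged = sum(privileged) / len(privileged)
    p_unprivileged = sum(unprivileged) / len(unprivileged)
    return p_unprivileged - p_privileged

# Privileged group favored 75% of the time, unprivileged only 25%:
print(statistical_parity_difference([1, 1, 1, 0], [1, 0, 0, 0]))  # -0.5
```

A common rule of thumb treats values within roughly ±0.1 as fair; TrustyAI computes this and related metrics over recorded model inferences.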

TrustyAI is a default component of Open Data Hub and Red Hat OpenShift AI, and has integrations with projects like KServe, Caikit, and vLLM.


🗂️ Our Projects 🗂️

TrustyAI Explainability

The trustyai-explainability repo is our main hub, containing the TrustyAI Core Library and TrustyAI Service.

The TrustyAI Core Library is a Java library for explainable and transparent AI, containing XAI algorithms, drift metrics, fairness metrics, and language model accuracy metrics.

The TrustyAI Service exposes TrustyAI Core as a containerized REST server, enabling responsible AI workflows in cloud and distributed environments. The TrustyAI Service has the following integrations:

  • Connectivity to Open Data Hub model servers
  • Connectivity to Red Hat OpenShift AI model servers
  • KServe side-car explainer support
  • MariaDB connectivity for storing model inferences

For example, you can deploy the TrustyAI service alongside KServe models in Open Data Hub to perform drift and bias measurements throughout your deployment.
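
As a sketch of what such a deployment can look like, the service is typically requested through the operator's TrustyAIService custom resource. The fields below are illustrative and may differ between versions:

```yaml
apiVersion: trustyai.opendatahub.io/v1alpha1
kind: TrustyAIService
metadata:
  name: trustyai-service
spec:
  storage:
    format: PVC       # persist recorded inference data on a persistent volume claim
    folder: /inputs
    size: 1Gi
  data:
    filename: ordinary.csv
    format: CSV
  metrics:
    schedule: "5s"    # how often scheduled bias/drift metrics are recomputed
```

Once the service is running, bias and drift metrics over the stored inferences can be requested and scheduled via its REST API.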

TrustyAI Python Library

TrustyAI Python provides a Python interface to the TrustyAI Core library, letting you use TrustyAI in more traditional data science environments like Jupyter.

Language Model Evaluation Service

The LM-Eval K8s Service packages and serves EleutherAI's popular LM-Evaluation-Harness library in a Kubernetes environment, allowing for scalable evaluations running against K8s LLM servers such as vLLM.
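
As an illustrative sketch, an evaluation can be requested declaratively through an LMEvalJob custom resource; the field names below follow recent releases and may change, and the model and task are arbitrary example choices:

```yaml
apiVersion: trustyai.opendatahub.io/v1alpha1
kind: LMEvalJob
metadata:
  name: evaljob-sample
spec:
  model: hf                       # use a Hugging Face-style model loader
  modelArgs:
    - name: pretrained
      value: google/flan-t5-base  # model to evaluate (illustrative choice)
  taskList:
    taskNames:
      - "arc_easy"                # an LM-Evaluation-Harness task name
```

The operator then runs the evaluation in-cluster and records the harness results in the job's status.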

Language Model Guardrails Project

The Guardrails project provides a Kubernetes LLM guardrailing ecosystem, with dynamic, request-time pipelining of specific detectors and text chunkers.

TrustyAI Operator

The TrustyAI Kubernetes Operator manages the deployment of various TrustyAI components into a Kubernetes cluster. The TrustyAI operator is a default component of both Open Data Hub and Red Hat OpenShift AI.

While these are our largest and most active projects, check out our full list of repos to see more experimental work like trustyai-detoxify-sft.


📖 Resources 📖

Documentation

Tutorials

Demos

  • Coming Soon

Blog Posts

Papers

Development Notes

  • TrustyAI Reference provides scratch notes on various common development and testing flows

🤝 Join Us 🤝

Check out our community repository for discussions and our Community Meeting information.

The project roadmap offers a view of the new tools and integrations the project developers are planning to add.

TrustyAI uses the ODH governance model and code of conduct.


Pinned Repositories

  1. trustyai-explainability (Java): TrustyAI Explainability Toolkit

  2. trustyai-explainability-python (Python): Python bindings for TrustyAI's explainability library

  3. trustyai-service-operator (Go): TrustyAI's Kubernetes operator

  4. community: TrustyAI community information

  5. trustyai-explainability-python-examples: Examples for the Python bindings for TrustyAI's explainability library
