

(book cover image)

Practicing Trustworthy Machine Learning

A book on how to make ML products that users can trust
Explore the docs »

View Demo · Report Bug · Request Feature

Table of Contents
  1. About The Project
  2. Getting Started
  3. Usage
  4. Roadmap
  5. Contributing
  6. License
  7. Contact
  8. Acknowledgments


About The Project


A book on how to make ML products that users can trust. Available on Amazon.

(back to top)

Built With

For a full list of all GitHub projects referenced in the book, see the "📕 Practicing Trustworthy ML" GitHub List.

(back to top)

Getting Started

To run the notebooks in the cloud, you can skip ahead to the Usage section. Note that most chapters require a GPU to run in a reasonable amount of time, so we recommend one of the cloud platforms, as they come with CUDA pre-installed.
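Whichever platform you use, a quick check like the sketch below will tell you whether a CUDA-capable GPU is actually visible to your runtime. This is only a sketch and assumes PyTorch is installed (it comes pre-installed on the cloud platforms listed under Usage):

# Sketch: check whether a CUDA-capable GPU is visible to PyTorch.
# Assumes PyTorch is installed (pre-installed on Colab, Kaggle, Gradient, and Studio Lab).
import torch

if torch.cuda.is_available():
    print("GPU found:", torch.cuda.get_device_name(0))
else:
    print("No GPU found; the notebooks will still run on CPU, just much more slowly.")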

To get a local copy up and running, follow these simple steps:

Prerequisites

Here is a list of prerequisites you need to install before you can start using the examples in this repo (a quick version check follows the list):

  • Python 3.6 or later
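If you're unsure which interpreter your python command points to, the short sketch below (run with that interpreter) confirms the version requirement is met:

# Sketch: verify the active interpreter is Python 3.6 or newer.
import sys

version = ".".join(str(part) for part in sys.version_info[:3])
assert sys.version_info >= (3, 6), "Python 3.6+ required, found " + version
print("Python " + version + " OK")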

Installation

  1. To run the notebooks on your own machine, first clone the repository and navigate to it:
$ git clone https://github.com/matthew-mcateer/practicing_trustworthy_machine_learning.git
$ cd practicing_trustworthy_machine_learning
  2. Next, run the following command to create a conda virtual environment that contains all the libraries needed to run the notebooks:
$ conda env create -f environment.yml

Alternatively, you can use mamba, a faster drop-in replacement for building conda environments:

$ mamba env create -f environment.yml
  3. Once you've installed the dependencies, you can activate the conda environment and spin up the notebooks as follows:
$ conda activate book
$ jupyter notebook

(If using mamba, you should still use conda to activate and deactivate the environment.)
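Once the environment is active, a quick import check can confirm that the install worked. The sketch below is only an example: the authoritative package list lives in environment.yml, and the names used here (torch, transformers, numpy, pandas) are assumptions based on libraries the book's notebooks rely on:

# Sketch: smoke test for the freshly created 'book' environment.
# Package names are assumptions; adjust them to match environment.yml.
import importlib

for pkg in ["torch", "transformers", "numpy", "pandas"]:
    try:
        module = importlib.import_module(pkg)
        print(pkg, getattr(module, "__version__", "unknown version"), "OK")
    except ImportError as err:
        print(pkg, "failed to import:", err)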

(back to top)

Usage

The usage of these code samples is detailed in the O'Reilly book "Practicing Trustworthy Machine Learning"; for additional context on each example, please refer to the book itself.

A few places where you can get free access to a GPU:

  1. Google Colab
  2. Kaggle
  3. Paperspace Gradient
  4. AWS SageMaker Studio Lab

To use these cloud resources, open the relevant chapter's notebook through its launch badges in the repository (each notebook can be opened on GitHub, Colab, Kaggle, Gradient, SageMaker Studio Lab, or Binder):

  • Chapter 0: Safely Loading Saved Models
  • Chapter 1: BERT attack
  • Chapter 1: Pytorch DP Demo
  • Chapter 1: SMPC Example
  • Chapter 2: Evaluating Causal LMs on BOLD
  • Chapter 3: CLIP Saliency mapping Part1
  • Chapter 3: CLIP Saliency mapping Part2
  • Chapter 3: Interpreting GPT
  • Chapter 3: Intrinsically Interpretable Models
  • Chapter 3: LIME for Transformers
  • Chapter 3: SHAP for Transformers
  • Chapter 4: HopSkipJump Attack on ImageNet
  • Chapter 4: Simple Transparent Adversarial Examples
  • Chapter 5: Synthetic Data Blender
  • Chapter 5: Synthetic Data Fractals
  • Chapter 6: Federated Learning Simulations
  • Chapter 6: Homomorphic Encryption NN
  • Chapter 7: Bootstrap Confidence Intervals
  • Chapter 7: Triangle Plot

Other resources include:

  1. Google Cloud
  2. Deepnote
  3. Microsoft Azure (Microsoft Machine Learning Studio)
  4. Amazon SageMaker

Some of these are free indefinitely, while others have usage quotas. In most cases they are a better option than buying expensive hardware.

If you need a cloud GPU, two good options are...

(back to top)

Roadmap

The first edition of this book is available to order from O'Reilly as well as Amazon.

See the open issues for a full list of proposed features (and known issues).

(back to top)

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

(back to top)

License

Distributed under the Apache-2.0 license. See LICENSE for more information.

(back to top)

Contact

Matthew McAteer (Co-First Author)

Subhabrata Majumdar (Co-First Author)

Yada Pruksachatkun (Co-First Author)

Project Link: https://github.com/matthew-mcateer/practicing_trustworthy_machine_learning

(back to top)

Acknowledgments

We'd like to thank the following people for their reviews and proofreading of the book, as well as for their work on many of the core technologies and libraries this book dives into:

We would also like to dedicate this book to the memory of security researcher, internet privacy activist, and AI ethics researcher Peter Eckersley (1979 to 2022). Thank you for your work on tools such as Let’s Encrypt, Privacy Badger, Certbot, HTTPS Everywhere, SSL Observatory, and Panopticlick, and for advancing AI ethics in a pragmatic, policy-focused, and actionable way. Thank you also for offering to proofread this book in what unexpectedly turned out to be your last months.

(back to top)
