# Ethics-Research Resources

A collection of relevant resources discussed in the #ethics-research D4D Slack channel.

## Fairness and Bias

[Fair prediction with disparate impact: A study of bias in recidivism prediction instruments](https://arxiv.org/abs/1610.07524) by Alexandra Chouldechova

[*"Fairness Through Awareness"*](https://arxiv.org/pdf/1104.3913.pdf) by Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, Richard Zemel

[Equality of Opportunity in Supervised Learning](https://arxiv.org/pdf/1610.02413.pdf) by Moritz Hardt, Eric Price, Nathan Srebro
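
The central criterion in the Hardt, Price & Srebro paper, "equality of opportunity," asks that a classifier's true-positive rate be (approximately) equal across protected groups. A minimal sketch of how one might measure the gap, using made-up labels, predictions, and group membership:

```python
# Toy illustration of the "equality of opportunity" criterion:
# the true-positive rate should be roughly equal across groups.
# All data below is hypothetical, for illustration only.

def true_positive_rate(y_true, y_pred):
    """TPR = TP / (TP + FN), computed over positive examples only."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return 0.0
    return sum(p for _, p in positives) / len(positives)

def tpr_gap(y_true, y_pred, group):
    """Absolute difference in TPR between group 0 and group 1."""
    tprs = []
    for g in (0, 1):
        yt = [t for t, gg in zip(y_true, group) if gg == g]
        yp = [p for p, gg in zip(y_pred, group) if gg == g]
        tprs.append(true_positive_rate(yt, yp))
    return abs(tprs[0] - tprs[1])

# Hypothetical labels, predictions, and group membership.
y_true = [1, 1, 0, 1, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(tpr_gap(y_true, y_pred, group))  # |2/3 - 1/2| = 1/6
```

A gap near zero indicates the classifier treats qualified members of both groups similarly; the paper also discusses post-processing a given classifier to shrink this gap.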

[Why Is My Classifier Discriminatory?](https://papers.nips.cc/paper/7613-why-is-my-classifier-discriminatory)

[ProPublica’s “Machine Bias” story](https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing). A more technical description of their analysis is available [here](https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm).

[“A Broader View on Bias in Automated Decision-Making: Reflecting on Epistemology and Dynamics”](https://arxiv.org/pdf/1807.00553.pdf)

[“Make ‘Fairness by Design’ Part of Machine Learning”](https://hbr.org/2018/08/make-fairness-by-design-part-of-machine-learning)

[“‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier” (LIME: Local Interpretable Model-Agnostic Explanations)](https://arxiv.org/pdf/1602.04938.pdf)
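
The core idea behind LIME is simple to sketch: to explain one prediction of a black-box model, sample points near the instance, query the model, and fit a proximity-weighted linear surrogate; the surrogate's coefficients are the local explanation. The black box and every parameter below are hypothetical stand-ins, not the paper's actual implementation:

```python
import math
import random

# Sketch of a LIME-style local surrogate: fit a proximity-weighted
# linear model to a black box in a small neighbourhood of one input.
# Black box, kernel width, and sampling radius are all hypothetical.

def black_box(x):
    """Stand-in for an opaque model: nonlinear in x."""
    return math.sin(x) + 0.1 * x * x

def local_slope(f, x0, n_samples=200, radius=0.5, kernel_width=0.25, seed=0):
    """Proximity-weighted least-squares slope of f around x0."""
    rng = random.Random(seed)
    xs = [x0 + rng.uniform(-radius, radius) for _ in range(n_samples)]
    ys = [f(x) for x in xs]
    ws = [math.exp(-((x - x0) ** 2) / kernel_width) for x in xs]
    wsum = sum(ws)
    xbar = sum(w * x for w, x in zip(ws, xs)) / wsum
    ybar = sum(w * y for w, y in zip(ws, ys)) / wsum
    num = sum(w * (x - xbar) * (y - ybar) for w, x, y in zip(ws, xs, ys))
    den = sum(w * (x - xbar) ** 2 for w, x in zip(ws, xs))
    return num / den

# Around x0 = 0 the true derivative of the black box is cos(0) = 1,
# so the surrogate slope should land near 1.
print(local_slope(black_box, 0.0))
```

The real method extends this to many features with sparsity-inducing regression and interpretable feature representations, but the fit-a-simple-model-locally loop is the heart of it.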

[“Anchors: High-Precision Model-Agnostic Explanations”](https://homes.cs.washington.edu/~marcotcr/aaai18.pdf)

[Google's developer training program on algorithmic fairness](https://developers.google.com/machine-learning/fairness-overview/).

[AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias](https://arxiv.org/pdf/1810.01943.pdf). The AI Fairness 360 API is available [here](https://aif360.mybluemix.net/) and the documentation is available [here](https://aif360.readthedocs.io/en/latest/).
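
Among the group-fairness metrics the AI Fairness 360 paper catalogues is *disparate impact*: the ratio of favourable-outcome rates between an unprivileged and a privileged group, with values below 0.8 often flagged under the informal "80% rule." A plain-Python sketch of the metric (hypothetical data; the real toolkit exposes it through its metrics classes, per the documentation linked above):

```python
# Disparate impact: ratio of favourable-outcome ("selection") rates
# between an unprivileged and a privileged group. Hypothetical data.

def selection_rate(outcomes):
    """Fraction of favourable (1) outcomes."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(unprivileged, privileged):
    """Ratio of selection rates: unprivileged / privileged."""
    return selection_rate(unprivileged) / selection_rate(privileged)

unprivileged = [1, 0, 0, 1, 0, 0, 0, 0]  # 2/8 favourable
privileged   = [1, 1, 0, 1, 0, 1, 1, 0]  # 5/8 favourable

print(disparate_impact(unprivileged, privileged))  # 0.25 / 0.625 = 0.4
```

A ratio of 1.0 means both groups receive favourable outcomes at the same rate; the toolkit pairs such metrics with pre-, in-, and post-processing mitigation algorithms.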

[The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence](https://medium.com/s/story/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53)

[Supposedly ‘Fair’ Algorithms Can Perpetuate Discrimination](https://www.wired.com/story/ideas-joi-ito-insurance-algorithms/)

## Openness

### Interpretable/Explainable AI

[Geoff Hinton Dismissed The Need For Explainable AI: 8 Experts Explain Why He's Wrong](https://www.forbes.com/sites/cognitiveworld/2018/12/20/geoff-hinton-dismissed-the-need-for-explainable-ai-8-experts-explain-why-hes-wrong/)

[Evolving the IRB: Building Robust Review for Industry Research](https://scholarlycommons.law.wlu.edu/cgi/viewcontent.cgi?referer=&httpsredir=1&article=1042&context=wlulr-online)

[Interpretable Machine Learning: A Guide for Making Black Box Models Explainable](https://christophm.github.io/interpretable-ml-book/index.html)

[Fundamentals of Data Visualization](https://serialmentor.com/dataviz/).

[The Building Blocks of Interpretability](https://distill.pub/2018/building-blocks/) on Distill.pub

[The Mythos of Model Interpretability](https://arxiv.org/abs/1606.03490)

[Manipulating and Measuring Model Interpretability](https://arxiv.org/pdf/1802.07810.pdf)

[“Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems”](https://arxiv.org/abs/1806.07552).

[Sanity Checks for Saliency Maps](https://papers.nips.cc/paper/8160-sanity-checks-for-saliency-maps)

[TIP: Typifying the Interpretability of Procedures](https://arxiv.org/abs/1706.02952)

[Towards A Rigorous Science of Interpretable Machine Learning](https://arxiv.org/abs/1702.08608)

[European Union regulations on algorithmic decision-making and a "right to explanation"](https://arxiv.org/abs/1606.08813)

[Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation](https://academic.oup.com/idpl/article/7/2/76/3860948)

[Meaningful information and the right to explanation](https://academic.oup.com/idpl/article/7/4/233/4762325)

[Slave to the algorithm? Why a 'right to an explanation' is probably not the remedy you are looking for](https://strathprints.strath.ac.uk/61618/8/Edwards_Veale_DLTR_2017_Slave_to_the_algorithm_why_a_right_to_an_explanation_is_probably.pdf)

[Explanation in Artificial Intelligence: Insights from the Social Sciences](https://arxiv.org/pdf/1706.07269.pdf)

### Transparency

[We Need Transparency in Algorithms, But Too Much Can Backfire](https://hbr.org/2018/07/we-need-transparency-in-algorithms-but-too-much-can-backfire)

[Machine learning and AI research for Patient Benefit: 20 Critical Questions on Transparency, Replicability, Ethics and Effectiveness](https://arxiv.org/abs/1812.10404v1)

[Stealing Machine Learning Models via Prediction APIs](https://www.usenix.org/system/files/conference/usenixsecurity16/sec16_paper_tramer.pdf)

### Accountability

[Accountability of AI Under the Law: The Role of Explanation](https://arxiv.org/abs/1711.01134)

[AI Can Be Made Legally Accountable for Its Decisions](https://www.technologyreview.com/s/609495/ai-can-be-made-legally-accountable-for-its-decisions/)

### Community Engagement

[What worries me about AI](https://medium.com/@francois.chollet/what-worries-me-about-ai-ed9df072b704) by François Chollet

[Interaction is the key to machine learning applications](https://web.media.mit.edu/~lieber/Lieberary/AI/Interaction-Is/Interaction-Is.html) by Henry Lieberman

[IEEE 2018 workshop on Machine Learning from User Interaction for Visualization and Analytics](https://learningfromusersworkshop.github.io)

[Artificial Intelligence and Ethics](https://harvardmagazine.com/2019/01/artificial-intelligence-limitations).

[Responsible AI Practices](https://ai.google/education/responsible-ai-practices) by Google AI

[Advancing Both A.I. and Privacy Is Not a Zero-Sum Game](https://fortune.com/2018/12/27/ai-privacy-innovation-machine-learning/)

## Other resources

### Adversarial Attacks

- [“Adversarial Examples: Attacks and Defenses for Deep Learning”](https://arxiv.org/pdf/1712.07107.pdf)

- [OpenAI’s blog post on Adversarial Examples](https://blog.openai.com/adversarial-example-research/)

- [Berkeley’s blog post](https://ml.berkeley.edu/blog/2018/01/10/adversarial-examples/)
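
The survey and blog posts above all centre on the same basic recipe, the fast gradient sign method (FGSM): nudge each input feature by a small epsilon in the direction that increases the model's loss. A toy sketch against a hand-built logistic model (all weights and inputs are hypothetical):

```python
import math

# Sketch of the fast gradient sign method (FGSM) against a toy
# logistic-regression "model". For logistic loss, d(loss)/dx_i equals
# (p - y) * w_i, so the attack adds eps * sign((p - y) * w_i) to each
# feature. Weights, bias, and inputs below are made up.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """Logistic model: p(y = 1 | x)."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    """Perturb x by eps in the sign of the loss gradient w.r.t. x."""
    p = predict(w, b, x)
    return [xi + eps * math.copysign(1.0, (p - y) * wi)
            for xi, wi in zip(x, w)]

w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.5], 1              # correctly classified: p > 0.5
x_adv = fgsm(w, b, x, y, eps=0.9)

# The perturbed input crosses the decision boundary.
print(predict(w, b, x), predict(w, b, x_adv))
```

In deep networks the gradient comes from backpropagation rather than a closed form, but the one-step sign perturbation is the same, which is why imperceptible pixel changes can flip a classifier's output.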

### Bayesian Networks

- [“Bayesian networks for decision making under uncertainty: How to combine data, evidence, opinion and guesstimates to make decisions”](https://rcc.uq.edu.au/filething/get/4349/Bayesian-networks-UQ_Seminar_Nov2016.pdf)
- [“A Tutorial on Learning With Bayesian Networks”](https://www.cis.upenn.edu/~mkearns/papers/barbados/heckerman.pdf)
- [The Book of Why: The New Science of Cause and Effect](https://www.amazon.com/Book-Why-Science-Cause-Effect/dp/046509760X)
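
The "combine data, evidence, opinion and guesstimates" framing above comes down to encoding conditional probability tables along a graph and updating beliefs given evidence. A minimal sketch using the textbook rain/sprinkler/wet-grass network, with inference by exhaustive enumeration (all probabilities are the usual classroom guesstimates, not real data):

```python
from itertools import product

# Tiny Bayesian network: Rain -> Sprinkler, and (Sprinkler, Rain) -> Wet.
# Inference by enumerating the full joint. Probabilities are the
# standard textbook guesstimates for this example.

P_RAIN = 0.2
P_SPRINKLER = {True: 0.01, False: 0.4}            # P(S | R)
P_WET = {(True, True): 0.99, (True, False): 0.9,  # P(W | S, R)
         (False, True): 0.8, (False, False): 0.0}

def joint(rain, sprinkler, wet):
    """P(rain, sprinkler, wet) via the chain rule along the network."""
    p = P_RAIN if rain else 1 - P_RAIN
    p *= P_SPRINKLER[rain] if sprinkler else 1 - P_SPRINKLER[rain]
    pw = P_WET[(sprinkler, rain)]
    p *= pw if wet else 1 - pw
    return p

def p_rain_given_wet():
    """P(rain | wet): sum out the sprinkler, then normalise."""
    num = sum(joint(True, s, True) for s in (True, False))
    den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
    return num / den

print(round(p_rain_given_wet(), 4))  # ≈ 0.3577
```

Enumeration scales exponentially in the number of variables; the tutorials above cover the structured algorithms (variable elimination, message passing) and the parameter-learning side that make real networks practical.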
