From b706247021412676c48c616fa38f5ce7a7eae866 Mon Sep 17 00:00:00 2001
From: Ria Cheruvu
Date: Sun, 10 Feb 2019 17:20:34 -0700
Subject: [PATCH 1/2] Adding file with resources from #ethics-research

---
 ethics-research-resources.md | 114 +++++++++++++++++++++++++++++++++++
 1 file changed, 114 insertions(+)
 create mode 100644 ethics-research-resources.md

diff --git a/ethics-research-resources.md b/ethics-research-resources.md
new file mode 100644
index 0000000..d758658
--- /dev/null
+++ b/ethics-research-resources.md
@@ -0,0 +1,114 @@
+# Ethics-Research Resources
+
+A collection of relevant resources discussed in the #ethics-research D4D Slack channel.
+
+## Fairness and Bias
+
+[Fair prediction with disparate impact: A study of bias in recidivism prediction instruments]((https://arxiv.org/pdf/1703.00056.pdf) ) by Alexandra Chouldechova
+
+[*"Fairness Through Awareness"*](https://arxiv.org/pdf/1104.3913.pdf) by Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, Richard Zemel
+
+[Equality of Opportunity in Supervised Learning](https://arxiv.org/pdf/1610.02413.pdf) by Moritz Hardt, Eric Price, Nathan Srebro
+
+[Why Is My Classifier Discriminatory?](https://papers.nips.cc/paper/7613-why-is-my-classifier-discriminatory)
+
+[ProPublica’s “Machine Bias” story](https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing). A more technical description of their analysis is available [here](https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm).
+
+[“A Broader View on Bias in Automated Decision-Making: Reflecting on Epistemology and Dynamics”](https://arxiv.org/pdf/1807.00553.pdf)
+
+[“Make “Fairness by Design” Part of Machine Learning”](https://hbr.org/2018/08/make-fairness-by-design-part-of-machine-learning)
+
+[“Local Interpretable Model-Agnostic Explanations”](https://arxiv.org/pdf/1602.04938.pdf)
+
+[“Anchors: High-Precision Model-Agnostic Explanations”](https://homes.cs.washington.edu/~marcotcr/aaai18.pdf)
+
+[Google's developer training program on algorithmic fairness](https://developers.google.com/machine-learning/fairness-overview/).
+
+IBM has a fairness toolkit called AI Fairness 360 that utilizes statistical measures and others tools and techniques. More details available in the paper [here](https://arxiv.org/pdf/1810.01943.pdf). The API is available [here](https://aif360.readthedocs.io/en/latest/).
+
+[The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence](https://medium.com/s/story/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53)
+
+An interesting Wired article by Joi Ito on definitions and measures of algorithmic fairness in relation to social/moral contexts, available [here.](https://www.wired.com/story/ideas-joi-ito-insurance-algorithms/amp)
+
+## Openness
+
+### Interpretable/Explainable AI
+
+[Geoff Hinton Dismissed The Need For Explainable AI: 8 Experts Explain Why He's Wrong](https://www.forbes.com/sites/cognitiveworld/2018/12/20/geoff-hinton-dismissed-the-need-for-explainable-ai-8-experts-explain-why-hes-wrong/amp/)
+
+[Evolving the IRB: Building Robust Review for Industry Research](https://scholarlycommons.law.wlu.edu/cgi/viewcontent.cgi?referer=&httpsredir=1&article=1042&context=wlulr-online)
+
+[Interpretable Machine Learning: A Guide for Making Black Box Models Explainable](https://christophm.github.io/interpretable-ml-book/index.html)
+
+[Fundamentals of Data Visualization](https://serialmentor.com/dataviz/).
+
+[The Building Blocks of Interpretability (Distill.pub)](https://distill.pub/2018/building-blocks/)
+
+[The Mythos Of Model Interpretability](https://arxiv.org/abs/1606.03490)
+
+[Manipulating and Measuring Model Interpretability](https://arxiv.org/pdf/1802.07810.pdf)
+
+[“Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems”](https://arxiv.org/abs/1806.07552).
+
+[Sanity Checks for Saliency Maps](https://papers.nips.cc/paper/8160-sanity-checks-for-saliency-maps)
+
+[TIP: Typifying the Interpretability of Procedures](https://arxiv.org/abs/1706.02952)
+
+[Towards A Rigorous Science of Interpretable Machine Learning](https://arxiv.org/abs/1702.08608)
+
+[European Union regulations on algorithmic decision-making and a "right to explanation"](https://arxiv.org/abs/1606.08813)
+
+[Accountability of AI Under the Law: the Role of Explanation](https://arxiv.org/abs/1711.01134)
+
+[Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation](https://academic.oup.com/idpl/article/7/2/76/3860948)
+
+[Meaningful information and the right to explanation](https://academic.oup.com/idpl/article/7/4/233/4762325)
+
+[Slave to the algorithm? Why a 'right to an explanation' is probably not the remedy you are looking for](https://strathprints.strath.ac.uk/61618/8/Edwards_Veale_DLTR_2017_Slave_to_the_algorithm_why_a_right_to_an_explanation_is_probably.pdf)
+
+[Explanation in Artificial Intelligence:
+Insights from the Social Sciences](https://arxiv.org/pdf/1706.07269.pdf)
+
+### Transparency
+
+[We Need Transparency in Algorithms, But Too Much Can Backfire](https://hbr.org/2018/07/we-need-transparency-in-algorithms-but-too-much-can-backfire)
+
+[Machine learning and AI research for Patient Benefit: 20 Critical Questions on Transparency, Replicability, Ethics and Effectiveness](https://arxiv.org/abs/1812.10404v1)
+
+[Stealing Machine Learning Models via Prediction APIs](https://www.usenix.org/system/files/conference/usenixsecurity16/sec16_paper_tramer.pdf)
+
+### Accountability
+
+[Accountability of AI Under the Law: The Role of Explanation](https://arxiv.org/abs/1711.01134)
+
+[AI Can Be Made Legally Accountable for Its Decisions](https://www.google.com/amp/s/www.technologyreview.com/s/609495/ai-can-be-made-legally-accountable-for-its-decisions/amp/)
+
+### Community Engagement
+
+[What worries me about AI](https://medium.com/@francois.chollet/what-worries-me-about-ai-ed9df072b704) by Francois Chollet.
+
+[Interaction is the key to machine learning applications](http://web.media.mit.edu/~lieber/Lieberary/AI/Interaction-Is/Interaction-Is.html) by Henry Lieberman
+
+[IEEE 2018 workshop on Machine Learning from User Interaction for Visualization and Analytics](https://learningfromusersworkshop.github.io).
+
+[Artificial Intelligence and Ethics](http://harvardmagazine.com/2019/01/artificial-intelligence-limitations).
+
+[Responsible AI Practices](https://ai.google/education/responsible-ai-practices) by Google AI
+
+[Advancing Both A.I. and Privacy Is Not a Zero-Sum Game](http://fortune.com/2018/12/27/ai-privacy-innovation-machine-learning/)
+
+## Other resources
+
+### Adversarial Attacks
+
+- “Adversarial Examples: Attacks and Defenses for Deep Learning”: https://arxiv.org/pdf/1712.07107.pdf
+
+- OpenAI’s blog post: https://blog.openai.com/adversarial-example-research/
+
+- Berkeley’s blog post: https://ml.berkeley.edu/blog/2018/01/10/adversarial-examples/
+
+### Bayesian Networks
+
+- “Uncertainty and Bayesian Networks”: http://www1.se.cuhk.edu.hk/~hcheng/seg4560/tuto/tutorial03.pdf
+- “A Tutorial on Learning With Bayesian Networks”: http://www.cis.upenn.edu/~mkearns/papers/barbados/heckerman.pdf
+- [The Book of Why: The New Science of Cause and Effect](https://www.amazon.com/Book-Why-Science-Cause-Effect/dp/046509760X)
\ No newline at end of file

From ff094da3b6b3ad7c23b3044c7f54144c71a37e85 Mon Sep 17 00:00:00 2001
From: Ria Cheruvu
Date: Sun, 10 Feb 2019 21:16:55 -0700
Subject: [PATCH 2/2] Changing HTTP links to HTTPS

---
 ethics-research-resources.md | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/ethics-research-resources.md b/ethics-research-resources.md
index d758658..e02f6b5 100644
--- a/ethics-research-resources.md
+++ b/ethics-research-resources.md
@@ -4,7 +4,7 @@ A collection of relevant resources discussed in the #ethics-research D4D Slack c
 
 ## Fairness and Bias
 
-[Fair prediction with disparate impact: A study of bias in recidivism prediction instruments]((https://arxiv.org/pdf/1703.00056.pdf) ) by Alexandra Chouldechova
+[Fair prediction with disparate impact: A study of bias in recidivism prediction instruments](https://arxiv.org/abs/1610.07524) by Alexandra Chouldechova
 
 [*"Fairness Through Awareness"*](https://arxiv.org/pdf/1104.3913.pdf) by Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, Richard Zemel
 
@@ -24,11 +24,11 @@ A collection of relevant resources discussed in the #ethics-research D4D Slack c
 
 [Google's developer training program on algorithmic fairness](https://developers.google.com/machine-learning/fairness-overview/).
 
-IBM has a fairness toolkit called AI Fairness 360 that utilizes statistical measures and others tools and techniques. More details available in the paper [here](https://arxiv.org/pdf/1810.01943.pdf). The API is available [here](https://aif360.readthedocs.io/en/latest/).
+[AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias](https://arxiv.org/pdf/1810.01943.pdf). The AI Fairness 360 API is available [here](https://aif360.mybluemix.net/) and the documentation is available [here](https://aif360.readthedocs.io/en/latest/).
 
 [The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence](https://medium.com/s/story/the-seductive-diversion-of-solving-bias-in-artificial-intelligence-890df5e5ef53)
 
-An interesting Wired article by Joi Ito on definitions and measures of algorithmic fairness in relation to social/moral contexts, available [here.](https://www.wired.com/story/ideas-joi-ito-insurance-algorithms/amp)
+[Supposedly ‘Fair’ Algorithms Can Perpetuate Discrimination](https://www.wired.com/story/ideas-joi-ito-insurance-algorithms/amp)
 
 ## Openness
 
@@ -87,28 +87,29 @@
 
 [What worries me about AI](https://medium.com/@francois.chollet/what-worries-me-about-ai-ed9df072b704) by Francois Chollet.
 
-[Interaction is the key to machine learning applications](http://web.media.mit.edu/~lieber/Lieberary/AI/Interaction-Is/Interaction-Is.html) by Henry Lieberman
+[Interaction is the key to machine learning applications](https://web.media.mit.edu/~lieber/Lieberary/AI/Interaction-Is/Interaction-Is.html) by Henry Lieberman
 
 [IEEE 2018 workshop on Machine Learning from User Interaction for Visualization and Analytics](https://learningfromusersworkshop.github.io).
 
-[Artificial Intelligence and Ethics](http://harvardmagazine.com/2019/01/artificial-intelligence-limitations).
+[Artificial Intelligence and Ethics](https://harvardmagazine.com/2019/01/artificial-intelligence-limitations).
 
 [Responsible AI Practices](https://ai.google/education/responsible-ai-practices) by Google AI
 
-[Advancing Both A.I. and Privacy Is Not a Zero-Sum Game](http://fortune.com/2018/12/27/ai-privacy-innovation-machine-learning/)
+[Advancing Both A.I. and Privacy Is Not a Zero-Sum Game](https://fortune.com/2018/12/27/ai-privacy-innovation-machine-learning/)
 
 ## Other resources
 
 ### Adversarial Attacks
 
-- “Adversarial Examples: Attacks and Defenses for Deep Learning”: https://arxiv.org/pdf/1712.07107.pdf
+- [“Adversarial Examples: Attacks and Defenses for Deep Learning”](https://arxiv.org/pdf/1712.07107.pdf)
 
-- OpenAI’s blog post: https://blog.openai.com/adversarial-example-research/
+- [OpenAI’s blog post on Adversarial Examples](https://blog.openai.com/adversarial-example-research/)
 
-- Berkeley’s blog post: https://ml.berkeley.edu/blog/2018/01/10/adversarial-examples/
+- [Berkeley’s blog post](https://ml.berkeley.edu/blog/2018/01/10/adversarial-examples/)
 
 ### Bayesian Networks
 
-- “Uncertainty and Bayesian Networks”: http://www1.se.cuhk.edu.hk/~hcheng/seg4560/tuto/tutorial03.pdf
-- “A Tutorial on Learning With Bayesian Networks”: http://www.cis.upenn.edu/~mkearns/papers/barbados/heckerman.pdf
-- [The Book of Why: The New Science of Cause and Effect](https://www.amazon.com/Book-Why-Science-Cause-Effect/dp/046509760X)
\ No newline at end of file
+- [“Bayesian networks for decision making under uncertainty: How to combine data, evidence, opinion and
+guesstimates to make decisions”](https://rcc.uq.edu.au/filething/get/4349/Bayesian-networks-UQ_Seminar_Nov2016.pdf)
+- [“A Tutorial on Learning With Bayesian Networks”](https://www.cis.upenn.edu/~mkearns/papers/barbados/heckerman.pdf)
+- [The Book of Why: The New Science of Cause and Effect](https://www.amazon.com/Book-Why-Science-Cause-Effect/dp/046509760X)
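For readers who want a concrete feel for the group-fairness measures that several of the papers in the Fairness and Bias section formalize (statistical parity/disparate impact, equality of opportunity), and that toolkits such as AI Fairness 360 implement alongside many other metrics and mitigation algorithms, here is a minimal NumPy sketch. It is an illustrative example only, not the AI Fairness 360 API; the function names and toy data are invented for this sketch.

```python
# Illustrative sketch only -- not the AI Fairness 360 API.
# Group-fairness metrics computed directly with NumPy on toy data.
import numpy as np

def selection_rate(y_pred, mask):
    """Fraction of positive predictions within one group."""
    return y_pred[mask].mean()

def statistical_parity_difference(y_pred, privileged):
    """P(y_hat = 1 | unprivileged) - P(y_hat = 1 | privileged)."""
    return selection_rate(y_pred, ~privileged) - selection_rate(y_pred, privileged)

def disparate_impact(y_pred, privileged):
    """Ratio of selection rates; values far below 1.0 suggest adverse impact."""
    return selection_rate(y_pred, ~privileged) / selection_rate(y_pred, privileged)

def equal_opportunity_difference(y_true, y_pred, privileged):
    """Gap in true-positive rates between groups (the equality-of-opportunity criterion)."""
    def tpr(mask):
        return y_pred[mask & (y_true == 1)].mean()
    return tpr(~privileged) - tpr(privileged)

# Toy data: `privileged` marks membership in a hypothetical privileged group.
y_true     = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred     = np.array([1, 0, 0, 1, 0, 1, 1, 1])
privileged = np.array([False, False, False, False, True, True, True, True])

print("statistical parity difference:", statistical_parity_difference(y_pred, privileged))
print("disparate impact:", disparate_impact(y_pred, privileged))
print("equal opportunity difference:", equal_opportunity_difference(y_true, y_pred, privileged))
```

A real audit would rely on an established toolkit and domain-appropriate group definitions and thresholds rather than a hand-rolled script like this; the sketch is only meant to make the definitions in the listed papers concrete.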