Artificial Intelligence


Module 10: ethical and responsible AI

Executive Summary

  • Topics:
    • Ethical and responsible AI 
  • Length: This module will take three weeks to complete
  • Assigned chapters: Chapter 27 plus readings below

AI and ethics

So far this semester we have focused on the technical aspects of AI: the different methods and the ways in which they may be used.  For the final part of the semester, we are going to learn about how AI needs to be developed and used in an ethical and responsible manner.  AI that is simply deployed with no thought to ethical and responsible use can be very dangerous!

When you enrolled in this class, you were interested in AI and had probably seen examples in the news of how AI can be used.  Many of these news stories focus on the negative, highlighting cases where AI (or another automated system) made mistakes that affected a person or a group of people in a negative way. Unfortunately, the news does not tend to show us many of the potential wins of using AI or automated technology!

For this final module of the AI class, we are going to learn about how we (as computer scientists and potentially AI researchers) can create and use AI in an ethical and responsible manner.  While many of the negative impacts of AI and automation are unintentional, that does not make them any better or more tolerable! As you read and watch the videos, think about ways in which these negative impacts could have been avoided in the first place. Would hiring a more diverse team have been enough? While it may be a good start, it is not the full solution. We really need to think about the potential uses of our technology as we develop it and ensure that it is used responsibly.

Note: Some of the resources in this module came from the AI & ethics class that was held in Spring 2021 at OU (also online).


[Image: XKCD comic on AI hiring algorithms (from XKCD)]

Assignments for Module 10

Topic 1: coded bias movie

This first topic is really done best as a social event!  Since we don’t have an official class meeting time, you will have to make this a social event on your own by inviting your friends!  This is a really important movie, and it raises important issues for you to learn about as scientists. This applies not only to the majority of you who are CS majors but also to the rest of you, as creators of technology in other ways!  We must ensure that what we do is not recreating injustices and, in fact, is actively helping to make the world better.

Your assignment: Watch the Coded Bias movie

  • Watch the movie Coded Bias
  • OU has purchased the movie for any OU member (anyone with a valid login to OU Libraries) to watch.  The link is below, but it will require you to sign in to watch the movie (after you sign in, it will take you to a MyMedia link).  Note that this movie is also available on Netflix and other platforms, and you are welcome to watch it on one of those platforms as well. The link below is free to you, and the movie is the same!  Please make this a social opportunity: watch the movie together and discuss!

POST-MOVIE READING

The movie that we watched, Coded Bias, features the founder of the Algorithmic Justice League. After watching the movie, please explore the following websites:

  • Coded Bias resources and discussions
    • There are two recordings of discussions with the key researcher in the film (they are long; it is up to you whether you watch them, read the transcripts, or just explore around)
    • Read through the Activist Toolkit – it contains a lot of resources, including the questions that we will use to start our discussion (though you are very welcome to add more!).  It also includes two really interesting declarations.
  • The “Take Action” section of the Coded Bias movie website
  • The Algorithmic Justice League website.  Explore their research projects and other parts of the website.

 

DISCUSSION

This discussion will happen in the #coded-bias channel – please make sure you are IN the #coded-bias channel.  I believe I added everyone, but you can always add yourself by going to “browse channels” and joining #coded-bias.

We are going to have an active discussion on Slack!  Make sure you follow all the classroom rules of conduct and always be respectful, even if you disagree.

If we were in a room together, the conversation would be free-flowing.  I want our conversation to be in that same spirit.  I have seeded the discussion with the questions from the Activist Toolkit above, but I want you to add your own questions and thoughts to the discussion!  Remember to use threading to make it easier to keep track of what is happening in the discussion.

The following are the initial questions.  Please add more!  I made each one of these a thread to make it easier to follow.

  • What did you learn from watching the Coded Bias movie that applies to your life and career?
  • When was the last time you were aware of an interaction with an algorithm?
  • What does the AI you interact with nudge you to do?
  • What data did AI use to decide what you see?
  • What choices did the AI take away?
  • How can AI be used in an equal and ethical manner?
  • What are you willing to do to protect your privacy and autonomy?

 

Grading declaration

Go to the Canvas grading declaration and fill out your declaration for this part of the module!

Topic 2: bias in AI

Videos

For this topic, let’s watch several more great videos on bias in AI and then discuss them.  I’m going to let you do this in a choose-your-own-adventure style where I give you some choices and you decide what to watch.

Choices (please choose at least 2 videos; these are all equal choices and are not ordered by what I think you should do!):

  • Watch the first keynote talk in the video online here
  • You can also watch the second keynote in the same session.  We watched it at a different time in our full class on AI & ethics.
    • This talk is by Ann Bostrom on decision making and ethics
  • Watch the TED Talk From park bench to lab bench – What kind of future are we designing? by Ruha Benjamin
    • In our full AI & ethics class, we also read the book Race After Technology by Ruha Benjamin.  This is an excellent book if you want to learn more about this topic!  I am not requiring the book here but it is a good one if you want to learn more! 
  • Watch the TED Talk The Era of Blind Faith in Big Data Must End by Cathy O’Neil.
    • Another good book that we read was Weapons of Math Destruction by Cathy O’Neil.  There are lots of other great books on AI & ethics that have come out in the last year, so if you are interested in the topic, go to Amazon or your favorite bookseller and look up even more.
  • Watch Michael Kearns and Aaron Roth’s talk on their book The Ethical Algorithm
    • This is a relatively recent book focused on how we can make algorithms more ethical from the start and in their design.  It is written for a general audience but dives deeply into some fascinating subjects on fairness and ethics.  
  • Watch Why Ethical AI Needs A Focus On The Fundamentals w/David Danks
    • David Danks is a leader in the field of AI & ethics
  • Find a video of your own choosing on AI & Ethics.  Requirements: 
    • It must be in-depth (no TikTok or YouTube Shorts).  Aim for at least TED Talk length (usually 15-20 minutes)
    • It must be G-rated so you can share it with the class
    • It should be by an expert on some aspect of AI & ethics (could be a domain expert on a specific application that you are interested in)

 

DISCUSSION

I’m not sure a Slack discussion will work well given all of the choices above, so we are going to try a Google Docs discussion instead.  Each one of the choices above has a separate page in the Google Doc linked below.  Your job is to go into the document and 1) make sure there is a summary of the videos (you can write one if you are the first one there or edit it if you arrive later!) and 2) make sure you discuss the main ideas of the videos you watched.

Google discussion doc

The following are the initial questions to get you thinking but please feel free to add your own!

  • What role does fairness play in decision making?
  • How do the ideas of bias and fairness and issues with data collection apply to applications outside of what she discussed?  For example, we discussed geoscience applications in the chat after her talk at AMS.  Think outside the box of what we have already discussed with Race After Technology and Coded Bias.
  • What are the impacts of AI-related policy (Dr. Turner-Lee works for an NGO and focuses a lot on policy) on bias, fairness, and beyond?

 

Grading declaration

Go to the Canvas grading declaration and fill out your declaration for this part of the module!

Thanksgiving vacation bonus

I want you to enjoy your Thanksgiving week!  This is 100% a bonus assignment: if you have enjoyed learning about AI & ethics, discuss any of these topics with your friends or family during a visit.  The idea is to make more people aware of the issues that AI can cause if it is developed in an irresponsible manner and to share that we can do better!  Making people more aware of the issues will help us address them in the long term.

If you completed this assignment, click on the bonus grading declaration.  Again, this is an optional bonus!  

Topic 3: AI & liability

Reading

Assignment

The assignment for Topics 3 and 4 is below.

Topic 4: AI & humanity

Reading

We are going to read something totally different for this topic, but it will really get you thinking about AI’s impact on humanity in a different way!

  • (2 min) As a warm-up, I want you to watch the very short trailer for the movie AI
  • (60 min) Read Telling Stories on Culturally Responsive AI.  This is a really fun book full of short stories from a recent workshop held to discuss the impact of AI on humanity and our future.  The workshop organizers wanted to focus on culturally relevant stories, and the results are great!

Optional Topic 5: philosophy, ethics, and safety of AI

Reading

  • This is really optional, as you have already done a lot of reading and watched many videos on AI & ethics above.  The AIMA book has a nice summary of AI & ethics as well, and I encourage you to give Chapter 27 a quick read.

Project for Module 10

Project 6

  • Your final project is a writing project (no code!) and is detailed on the Project 6 page.  It is due the final day of classes (Dec 9), since we cannot make anything due during finals week, even though there is no final for the class!

Suggested schedule for Module 10

Week 1

  • Watch the Coded Bias movie by Wednesday (Topic 1)
  • Participate actively in the discussion about the movie in the class Slack

Thanksgiving week bonus

  • See above for the Thanksgiving week bonus

Week 2

  • Complete topic 2 by Wednesday
  • Complete topic 3 by Friday

Week 3 (last week of classes!)

  • Complete Topic 4 by Monday (you want to do this before you do your final project!)
  • Complete Topic 5 by Wednesday (it is short!)
  • Finish Project 6 by Friday