
Teaching Ethics in Computer Science, Part I

Art by Klara Auerbach

OVERVIEW:

In this episode, hosts Morgan Awner ’21 and Rachel Lim ’21 examine how Brown’s Computer Science department has implemented an ethics curriculum to help students comprehend the real-world ethical implications of the code they produce. This is the first episode in a two-part series.

SPECIAL THANKS TO:

Our interviewees, Daniel Smits, Signe Golash, and Hal Triedman, who are current students at Brown University.

TRANSCRIPT:

MORGAN: The loon of the moon

In the night, to me,

The air was full of stars;

The wild bird piped upon the branch,

And the blackbird, tuned his song

To the sweetest note

That ever a lark knew!

RACHEL: That was pretty nice, Morgan. Did you write that?

MORGAN: No actually. It was written by a computer.

RACHEL: Well of course, everyone writes on their computer now.

MORGAN: No, the computer literally wrote the poem! 

RACHEL: What?? 

MORGAN: Yeah! This February, an artificial intelligence lab called OpenAI, which was co-founded by Elon Musk, released a limited version of a new machine learning model. The model was trained to predict the next words in a textual prompt entered by a user.
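To make the idea concrete, here is a minimal sketch of next-word prediction. The real OpenAI model is a large neural network trained on an enormous text corpus; this toy version just counts which word tends to follow which in a tiny sample, and every string and function name in it is invented for illustration.

```python
import random
from collections import defaultdict

# A tiny "training corpus" -- real models train on billions of words.
corpus = ("the air was full of stars and the wild bird "
          "piped upon the branch").split()

# Count which words follow which (a bigram table).
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def continue_prompt(prompt, length=5):
    """Extend a prompt by repeatedly sampling a plausible next word."""
    words = prompt.split()
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break  # we've never seen this word before, so stop generating
        words.append(random.choice(candidates))
    return " ".join(words)

print(continue_prompt("the wild"))  # e.g. "the wild bird piped upon the"
```

Scaled up from a lookup table to billions of learned parameters, the same predict-the-next-word loop is what lets the full model produce passages like the poem above.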

RACHEL: Well, that doesn’t sound too bad. Sounds kind of cool. Can a computer write my English paper due next week?

MORGAN: Not totally sure if your professor will buy it, but the implications of this are massive, and they’re not all positive. For instance, the project raised significant concerns about the potential for misuse of this model in the generation of fake news. This is why OpenAI only released a limited model: the full model, the lab claimed, was too dangerous for full release. 

RACHEL: Over the summer, two master’s students at Brown published a re-creation of OpenAI’s model. In the wake of this announcement, OpenAI released a report saying that more than five separate groups had replicated their work at full scale. Now, it’s too late for this model to be withheld from the public. You can find the full code online and download it yourself.

MORGAN: So, basically, the cat is already out of the bag.

RACHEL: Yeah, that’s what it looks like to me. Once they’re created, technologies with dangerous capabilities often escape the control of their benevolent creators. 

— Intro —

MORGAN: Welcome back to BPRadio. Today, we are talking about incorporating ethics into the computer science curriculum. We all know that computer science and programming have infiltrated our lives: just look at the device you’re listening to this on. And for many students at Brown, computer science is integral to their education.

RACHEL: At Brown, one in six students now concentrates in computer science. Nationwide, and indeed worldwide, our best universities churn out thousands of graduates with computer science degrees every year. This raises the question: What are they being taught?

MORGAN: With a few exceptions, computer science courses at Brown prior to this year focused almost entirely, if not entirely, on teaching technical skills. And this makes sense: students at Brown want to be competitive in the labor force, and they want to develop a thorough understanding of the tricky concepts they’re learning. Computer science courses at Brown are rigorous because developing this thorough understanding takes time, practice, and exposure to a wide variety of concepts. But this purely technical conception of readiness ignores the need to teach students about the societal implications of what they’re building. In this sense, computer science education as we’ve known it for decades has failed us.

RACHEL: Hence, the Ethics TA program. You may have heard some of your friends who are in computer science classes at Brown talk about these new Teaching Assistants, whose role is to start introducing ethical and societal considerations into technical computer science courses. It’s great that students will be made to think more about ethical questions in technical courses, but how will we know if this program is actually effective?

MORGAN: Is this the answer to the concerns about the powers and dangers of computer science? Can we expect to see a broader transformation in the computer science department when ethical questions are taken more seriously?

RACHEL: Before asking what an ethical computer science program should look like, we wanted to first learn about the nature of the problem as students at Brown understand it. We asked some students at a philosophy student group to tell us why they thought ethics needs to be integrated into computer science education. 

DANIEL SMITS: As these algorithms get more and more complex, particularly deep learning, um, they become less and less tractable, less understandable. So you have a situation where you have these really complex networks with thousands if not millions of weights, and all of these parameters are influencing the outputs and decisions it makes, and, uh, even the people who built these networks don’t really understand all of the reasons why certain outputs might occur. Um, and the issue starts when we start to put trust in these algorithms and we start to treat them as if they’re truly reflecting the way the world is. So imagine you could have an algorithm that could tell you whether someone’s, like, credit reliable, or whether someone is a threat to society, um, all of these things that can have impacts on people’s lives. Once you introduce an algorithm into it, laypeople especially have a tendency just to defer to the algorithm’s judgment, even though the people who built the algorithm themselves might not know what’s really going on inside of it, um, which means that it could have unintended consequences, both in terms of making decisions that we don’t understand, and also just kind of amplifying the biases that already exist in society today, because a lot of these, for example, in deep learning, you usually train it on labeled data, which will reflect the biases that exist in society.

MORGAN: What Daniel said gets at the heart of one of the main problems we face in computer science: tech isn’t really neutral. You might imagine that automating certain decision-making processes would lead to the elimination of human biases that go into those processes. In reality, those same human biases are often unintentionally built into the technology we create. That’s a huge problem. It means that no matter how much we trust the intentions of a programmer, we can’t be sure that what they’re building does what they want it to do, and we won’t understand all of the potential implications of the technology.
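As a toy illustration of how that happens, consider a hiring-style model trained on historical decisions. Everything below, the applicants, groups, and scores, is made up; the point is only that a model which faithfully learns biased labels reproduces the bias without anyone writing it in deliberately.

```python
# Biased historical labels: group "B" applicants were approved less
# often than group "A" applicants with the same qualification score.
historical_data = [
    # (qualification_score, group, was_approved)
    (0.9, "A", True), (0.8, "A", True), (0.6, "A", True), (0.4, "A", False),
    (0.9, "B", True), (0.8, "B", False), (0.6, "B", False), (0.4, "B", False),
]

# "Training": learn, for each group, the lowest score that was ever approved.
threshold = {}
for score, group, approved in historical_data:
    if approved:
        threshold[group] = min(threshold.get(group, 1.0), score)

def model_decision(score, group):
    """Approve if the score clears the threshold learned for that group."""
    return score >= threshold[group]

# The learned model now holds group B to a stricter bar than group A,
# even though no one ever wrote "discriminate" into the code.
print(model_decision(0.7, "A"))  # True
print(model_decision(0.7, "B"))  # False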

RACHEL: So how does the new TA program address these kinds of issues? To better understand the nature and logistics of the program, we interviewed Signe Golash. Signe is an Ethics TA for CS17, a popular intro computer science class.

SIGNE GOLASH: So I am one of the ETAs for CS 17. There are two of us; there are two ETAs for every course that is implemented. Right now, I’m not doing too much, because most of our role was writing assignments and curriculums over the summer. That was where the majority of our work was done. But right now, we’re still meeting and planning other sorts of projects, future workshops, maybe lectures, stuff like that. I’m thinking about courses for next semester, expansion, just sort of because the program is kind of in its early stages and its beginnings.

MORGAN: Signe talked about how she had a lot of freedom to implement what she wanted for CS17. CS17 is an intro course, and it assumes no prior programming experience. It’s really important that CS17 includes exposure to these kinds of ethical questions because it’s so foundational to many students’ understanding of computer science. Still, because it’s an intro course, many of the ethical questions that are raised are not directly related to projects students are working on.

RACHEL: The issues that students grapple with in intro courses are more abstract. Other courses, like Deep Learning, have projects that directly raise ethical questions. Deep Learning is a class about machine learning techniques, and many of the algorithms taught in the course are at the forefront of world-changing technology. The algorithm we discussed at the beginning of this episode, which could predict what a person would say next, was built on a model using techniques similar to those taught in the course.

HAL TRIEDMAN: My name is Hal Triedman. I’m an ethics TA for Deep Learning.

MORGAN: Hal’s a senior at Brown. He’s working directly with students who have reached a level of expertise in deep learning at which they may very well become the pioneers of the next groundbreaking technology. He believes that it’s incredibly important for these students to think about specific problems that could arise out of the kinds of things they’re building. Introducing ethical material into the curriculum of Deep Learning is no small task. Here’s Hal talking about some of the things he’s done as an ethics TA for Deep Learning:

HAL: At least the way that we’ve been doing it in the class that I’ve been involved in has been, we’re sort of taking a multiple-front approach to this. We wrote a new lab, which was entirely based around debiasing word vectors based off of gender stereotypes. Not really important what that exactly means, except for the fact that it was basically trying to correct for biased data in the training data set. So that was a week-long lab assignment that we gave out. We’re also inserting a whole slate of new slides. These are being presented to, you know, the 250 people who are showing up at this class every time it convenes. And they range from highly technical explanations of how exactly differential privacy or algorithmic fairness could work, right? So those mathematical and theoretical explanations, to much more broad-scale, high-level things talking about, you know, if you’re developing an algorithm, like a deep learning algorithm, what is its life cycle going to be? How are you going to conceive of it? How are you going to figure out what the right training set is, how are you going to test it to make sure that that training set is actually representative of the real world, and once the algorithm is deployed, how are you going to make sure that it’s doing the thing that you actually want it to do? So there are all sorts of different, you know, macro-scale, micro-scale things that we’re adding in. So that’s on the lecture front, live in lecture, and then for the homework, we are also adding in to every homework a slate of ethics questions, and those are directly pertaining to, you know, the lecture material that we put together and the other lecture material at large. So, in addition to getting the ethics content through one format, right, of seeing it in lecture, people are also having to, you know, grapple with some of these concepts themselves…
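For readers curious what “debiasing word vectors” can look like in practice, here is a minimal sketch in the spirit of the widely cited neutralize-and-project approach (Bolukbasi et al.). The tiny two-dimensional vectors are invented for illustration, and the course’s actual lab assignment may have differed in its details.

```python
import numpy as np

# Invented 2-D "word vectors"; real embeddings have hundreds of dimensions.
embeddings = {
    "he":       np.array([ 1.0, 0.2]),
    "she":      np.array([-1.0, 0.2]),
    "engineer": np.array([ 0.6, 0.9]),   # leans toward "he" on axis 0
    "nurse":    np.array([-0.6, 0.9]),   # leans toward "she" on axis 0
}

# Estimate the gender direction from a definitional word pair.
gender_dir = embeddings["he"] - embeddings["she"]
gender_dir = gender_dir / np.linalg.norm(gender_dir)

def neutralize(vec, direction):
    """Remove the component of vec that lies along the bias direction."""
    return vec - np.dot(vec, direction) * direction

for word in ("engineer", "nurse"):
    before = np.dot(embeddings[word], gender_dir)
    after = np.dot(neutralize(embeddings[word], gender_dir), gender_dir)
    print(word, round(before, 2), "->", round(after, 2))  # bias component -> 0.0
```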

MORGAN: Brown’s Deep Learning course has clearly introduced significant additions to the curriculum out of concern for these kinds of ethical questions. Requiring students to take a step back and think about what they’re working on is vitally important. Hal talked about how important it is to get students thinking about the nuances of these questions, because these questions are almost never black and white. It’s not as if one algorithm is clearly bad and another is clearly good. Getting students to weigh the positives and negatives of these questions is one of the most important things that needs to come out of this program. Here’s Hal again, talking about the nuances of self-driving cars:

HAL: So, an example that I was just working on recently is talking about self-driving cars. You know, there are various and widely speculative claims about what the effects of self-driving cars will be, right? There are 4 million people who are currently employed in the United States with jobs that involve driving. I think it’s something like 3.1 million truck drivers, hundreds of thousands, if not millions, of car drivers: Uber, Lyft, all those ride-sharing services, taxis. So, you know, trying to speculate what the effect will be on them. And then also saying, you know, those jobs are eventually going to stop existing if autonomous cars become a mainstream commodity. At the same time, you know, if autonomous cars can bring the rate of fatal car crashes down from, you know, 12.5 deaths per billion miles driven to three deaths per billion miles traveled, the amount of human life that has been saved in that process is great. And the amount of value that that has for the world is huge, right? It’s a really difficult trade-off. You know, you have to think about, on one hand, some people’s, you know, concrete well-being economically. And on the other hand, you have to think about, you know, the possible positive effects for society, right? More accessible transportation, right, that eventually trends towards a cost of almost zero. So, again, these are the kinds of questions that we’re asking people to grapple with. And it’s not like there’s a right answer. It’s just we want people to think through these issues themselves, right, and to use their principles and their values to come up with answers that they feel like they can defend.
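The numbers in Hal’s example can be turned into a rough back-of-the-envelope calculation. The per-mile fatality rates come from his quote; the annual-mileage figure is our own assumption (US drivers log very roughly three trillion vehicle miles per year), so treat the result as illustrative rather than a prediction.

```python
human_rate = 12.5 / 1e9      # deaths per mile, human drivers (from the quote)
autonomous_rate = 3.0 / 1e9  # deaths per mile, hypothetical autonomous fleet
annual_miles = 3e12          # assumed US vehicle miles traveled per year

deaths_now = human_rate * annual_miles
deaths_autonomous = autonomous_rate * annual_miles
print(f"~{deaths_now - deaths_autonomous:,.0f} lives per year")  # ~28,500 lives per year
```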

RACHEL: Getting students to grapple with these kinds of questions represents a significant shift and improvement in the way Deep Learning and other courses are taught. At the very least, students will emerge from these courses with a basis for thinking about ethical quandaries in tech. But is this enough? Is this the final form of Brown’s commitment to introduce ethics into education? No, it’s not, says Signe. She has her eyes on more fundamental changes in the department, and says that these ethics TAs are just the first step in a broader transformation:

SIGNE: Ideally, we’d like it to be fully integrated into the department, where right now it’s a lot of, like, either people don’t know who we are and what we’re doing, or people sort of… we don’t want to give the impression that this is just something that’s being tacked on to courses. What we’d like is full integration, where ethics components are a part of the education, a standard part, and something that people come to expect when they’re learning computer science at Brown. That’s really the final goal. That’ll probably take a while. It’ll take many generations of the program, and iterations, and just seeing what works, what doesn’t, and really just trying our best to fully integrate the program into the department.

MORGAN: It is clear that we need to make some broad changes to the way we teach computer science, beyond just adding ethics TAs. The kinds of concerns that motivated the creation of the ethics TA program are pressing, but will these concerns continue to transform the way computer science is taught at Brown? What would that even look like? Here’s Signe again:

SIGNE: I think it would be where ethical considerations are much more natural. For example, when you’re learning to code, one of the things you learn is to comment your code. And that is something you do every time you write code, because it makes the code easier to read.

RACHEL: Signe just said something important: she talked about commenting your code. Commenting is something that programmers do to describe what their code does. It makes it easier to read and understand what the code is doing, and it’s something that’s taught as basic to the programming process. In several classes at Brown, students lose points for not adequately commenting their code. It’s that fundamental.
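For listeners who haven’t programmed before, here is a small, invented example of what a well-commented function looks like: the habit is to explain what the code does and why, so a reader or grader doesn’t have to reverse-engineer every line.

```python
def average_rating(ratings):
    """Return the mean of a list of ratings, or None if the list is empty."""
    if not ratings:
        # Avoid dividing by zero when no one has submitted a rating yet.
        return None
    return sum(ratings) / len(ratings)
```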

SIGNE: We would like thinking about ethical design, and, like, the consequences of a decision that you make when designing your code, to be just natural. Where, whenever you’re, maybe, designing an app, accessibility is something you naturally think about. And inclusivity is something you naturally think about. Where it’s not like there’s some committee, where you finished your product and there’s a committee saying, oh, but you didn’t implement this ethics thing. It’s where you were thinking about that from the start.

MORGAN: It’s really exciting to imagine that this program will continue to expand, and could eventually have profound effects on the nature of the department. But what would this look like? In the second episode of BPRadio’s two-part series on ethics in computer science, we’ll investigate what it really means to teach ethics as a fundamental part of computer science. We’ll try to understand how we can use a more ethically minded computer science program to prepare ourselves for the potentially negative effects of groundbreaking technology. Join us as we interview three professors at Brown who teach classes in the computer science department: Andy van Dam, Deborah Hurley, and Tim Edgar. By understanding what we can hope to achieve from a new computer science program, we can start to make the changes necessary to realize our goals.
