[UPDATED 09/12/17: The transcript of this audio, provided courtesy of Open Transcripts, is now available below the Read More Cut.]
[UPDATED 03/28/16: Post has been updated with a far higher quality of audio, thanks to the work of Chris Novus. (Direct Link to the Mp3)]
So, if you follow the newsletter, then you know that I was asked to give the March lecture for my department’s 3rd Thursday Brown Bag Lecture Series. I presented my preliminary research for the paper which I’ll be giving in Vancouver, about two months from now, “On the Moral, Legal, and Social Implications of the Rearing and Development of Nascent Machine Intelligences” (EDIT: My rundown of IEEE Ethics 2016 is here and here).
It touches on thoughts about everything from algorithmic bias, to automation and a post-work(er) economy, to discussions of what it would mean to put dolphins on trial for murder.
About the dolphin thing, for instance: If we recognise Dolphins and other cetaceans as nonhuman persons, as India has done, then that would mean we would have to start reassessing how nonhuman personhood intersects with human personhood, including in regards to rights and responsibilities as protected by law. Is it meaningful to expect a dolphin to understand “wrongful death?” Our current definition of murder is predicated on a literal understanding of “homicide” as “death of a human,” but, at present, we only define other humans as capable of and culpable for homicide. What weight would the intentional and malicious deaths of nonhuman persons carry?
All of this would have to change.
Anyway, this audio is a little choppy and sketchy, for a number of reasons, and while I tried to clean it up as much as I could, some of the questions the audience asked aren’t decipherable, except in the context of my answers. [Clearer transcript below.]
Until Next Time.
Damien Williams: …algorithmic bias that says ultimately when we make these changes, when we make these assumptions, we build into the system we are creating these small changes in possibilities for the future. We are also looking at which kind of assumptions are being built in. And the question that that leads us to is: who is developing these minds? And for what purpose?
So, if Facebook, Microsoft, Google, Amazon, military entities are developing these minds, the kinds of thinking that they want these minds to do will be based on profit motive, advertisement, threat assessment. But as we’ve seen when these minds learn, even if we program them with human-like concerns, the methods by which they learn are not human methods. The extrapolations that they make from their starting parameters are not necessarily human extrapolations.
There was a story, a month and a half ago, that bears on this. A for-the-most-part unmanned aerial vehicle, a drone, had targeted and called as a good strike an entire population of 150 civilians, as a result of it making decisions on the starting parameters it had been given. That is, it was told to sort out which people amongst the population had behaviors and associations and data context clues that would indicate, on the whole, that they were Al-Qaeda affiliates. That they were enemy combatants. And it marked out a wide swath of the population based on what it had been told by its programmers were its crucial indicators.
Now, unfortunately for us, what actually happened was that it did its job perfectly. It took those key indicators, those starting parameters, and extrapolated them out into the kinds of behavior that we have deemed to be suspicious behavior for a particular group of people but which if viewed in a different context might not in fact be suspicious at all.
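[A minimal, purely hypothetical sketch of the kind of proxy-indicator scoring described above. None of these feature names, weights, or thresholds come from any real targeting system; they are invented only to illustrate how the “starting parameters” a programmer chooses become the system’s entire picture of what counts as suspicious.]

```python
# Hypothetical illustration only: invented features, invented weights.
from dataclasses import dataclass

@dataclass
class PersonRecord:
    travel_frequency: float        # trips per month, inferred from metadata
    calls_to_flagged_numbers: int  # contacts with numbers already on a list
    sim_card_swaps: int            # how often the phone's SIM changes

def suspicion_score(p: PersonRecord) -> float:
    # The weights ARE the worldview: whoever sets them decides what
    # "suspicious" means, and the system extrapolates from nothing else.
    return (0.4 * p.travel_frequency
            + 0.5 * p.calls_to_flagged_numbers
            + 0.6 * p.sim_card_swaps)

def flag_population(records: list[PersonRecord], threshold: float = 2.0) -> list[PersonRecord]:
    # Behavior that is ordinary in another context (a courier, a journalist,
    # a family sharing phones) clears the threshold just as easily as the
    # behavior the designers had in mind.
    return [p for p in records if suspicion_score(p) >= threshold]

# A frequent traveler who calls a couple of listed numbers gets flagged:
print(flag_population([PersonRecord(4.0, 2, 1)]))
```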
So, let’s take a step back and let’s ask the question of, if we are going to be making these thinking, conscious machines; if we’re going to be capable of developing these minds, then what we are doing, what we are attempting to do, is create something that is itself creative. What we’re going to be attempting to do is make something that can situate itself within the world, that can assess information. Not just orient data but can extrapolate from it. And if we’re talking about something that can do that, something that can create, something that can learn, something that can grow, at a certain point we have to ask when are we talking about a person? When are we talking about a consciousness? And to do that we have to ask what do we mean by personhood? What do we mean by consciousness?
The current legal definition of personhood in the United States is any human being, or certain corporations that have the right organization of persons, of humans. However, the subset of that secondary definition also notes that it’s important that the corporation defined as human cannot be liable for damages in certain ways, because a corporation cannot have human emotions such as malice. So I cannot in fact convict you, Bank of America or Citibank or Monsanto or whomever, I cannot convict you of aggravated assault or murder. I cannot convict you of these things because you—Citibank, Bank of America, whomever—do not have the capacity for malice. Malice aforethought, intent, in those behaviors cannot be ascribed to those corporations.
So we have to ask in that scenario what is it that we then mean, legally, when we mark out a person. Because when it comes to human beings, when we say a human being is a person, we are marking out a privileged position. We’re saying, “This human being has human quality.” We have creativity. We have emotion. We have intentionality. We have the ability to relate and to model each other in our own minds to plan for the future, to worry about the past. We have these capabilities. And as a human mind we sit in a privileged space. Humans are people.
But if the definition of corporations as people specifically marks out and excludes the capacity for emotion, the capacity for worry, the capacity for creativity in this meaningful way, the capacity for intention, then it seems like we are holding two conflicting definitions under personhood and basing them around this similar if not the same nebulous idea. I posit then that we don’t actually know what it is that we mean when we say that something is or is not a person. And as such, we have to be very very careful about what we ascribe as personhood.
So, if I say that a dolphin is a person, as India has done—that’s law: dolphins, whales, cetaceans are non-human persons—what does that mean? Well for them in that context, in their legal context, it means that they have minds, they have consciousness, they have specifically observable intentions. They have family structures, language, the ability to play and create. Not in the way that we do. Not in the exact same way that we do. But in a way that we recognize, that we can analogize to our own.
Now, this obviously brings up problems of anthropocentrism or the privileging of the human perspective in our human-designated concepts, but we unfortunately don’t have enough time to go into all of that. But let it be known that it is recognized that our human-centeredness is problematic. Because when I mark out in a dolphin something that I say, “Well that looks like what I do when I play with my kids,” to a dolphin that might not mean the same thing, at all. That might be something entirely different. But all we can do is analogize. And if we see enough similarity and we can build from that analogy, then we feel generally pretty comfortable in moving forward with it.
This brings us to the problem again of the biases inherent in our systems. This idea that once we’ve started with an assumption, once we’ve started with an analogy and build from there, what if our analogy is wrong? What if our assumptions are wrong in some basic way? If I build into my system, if I build into my structure of societal behavior these flaws, these misunderstandings and misconceptions, then everything my system builds, everything my system preferences, will have within it those flaws and those misconceptions. Will tend toward those misconceptions.
And so again we see how we can unintentionally program Google’s reverse image search to be racist. How we can unintentionally program the kinds of algorithmic results that Google will return to us if we search a particular type of name. If you search in particular for “black names,” black-sounding names, it will also return to us bail services, lawyers for criminal trials.
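[Another deliberately tiny, hypothetical illustration: the click counts below are invented, and this is not how any real ad or search system works. The point is only that a “neutral” ranking step faithfully reproduces whatever skew is already in the data it was trained on.]

```python
# Hypothetical data and logic; nothing here is drawn from a real ad system.
from collections import Counter

# Invented historical clicks: which ad categories past users clicked after
# searching a given kind of name. Real data is shaped by the society that
# produced it, and carries that society's assumptions with it.
historical_clicks = {
    "name_group_a": Counter({"arrest_record_search": 120, "professional_profile": 40}),
    "name_group_b": Counter({"arrest_record_search": 15, "professional_profile": 160}),
}

def rank_ads(name_group: str) -> list[str]:
    # "Just" sort by past engagement -- and the past's skew becomes the
    # future's suggestion, with no malicious line of code anywhere.
    return [ad for ad, _ in historical_clicks[name_group].most_common()]

print(rank_ads("name_group_a"))  # ['arrest_record_search', 'professional_profile']
print(rank_ads("name_group_b"))  # ['professional_profile', 'arrest_record_search']
```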
We can come to understand that what we are doing when we code and when we program is not neutral, in this way. When we code, when we program, we’re describing the world. And so when I code a new mind, when I program its starting parameters for how it thinks, I am (as I would be with a human child) teaching it what I think the world is. What I think the truth of the world is. And it grows from there. Does it grow to look exactly like me? No. No child grows to be exactly like its parents.
The rights and responsibilities we have to non-human persons are still very much in question. And again, they’re not even agreed upon by those few countries which do start to ascribe rights and responsibilities to non-human persons. The United States currently is still in the midst of a trial and appeal process for the rights of non-human apes. That is, there was a chimp, a monkey in a zoo that stole somebody’s camera and took a selfie. And the person whose camera it was took that selfie and put it in various news organizations and made a great deal of money from it.
PETA (with whom I do not always agree but that’s neither here nor there), PETA sued this person on behalf of the monkey, saying that that money should by all rights belong to that monkey. Because the monkey took the picture. And so in court now we have this question of, well wait, is a monkey subject to copyright law? Is a monkey subject to legalities of creative control? Can a monkey give you permission to use its selfie as long as you credit it? I mean, most of us would be generally pretty okay with that, right? But if you’re gonna make money I’d like some royalties. But if you’re just kinda going to redistribute it, just, you know, put my name in there. Whatever the monkey’s name is. Like, what is the foundation for this, right. So this is still an ongoing trial. This conversation is still happening. We don’t have this foundation of saying, “Yes. A monkey is a non-human person. It has rights, it has responsibilities. It engages with human society in X, Y, and Z ways.” We don’t have this available to us.
But it doesn’t mean that we should not in fact be trying to suss these problems out. Because as we move through these questions of non-human persons, non-human animals as persons, we have to also start to think about this idea of what it means to ask a non-human person to engage our society at all. If the monkey doesn’t know or care about copyright, then should we sue on its behalf? If the dolphin doesn’t know or care about what it means to have a fair and speedy trial under the law as described by the United States Constitution, then are we right to put a dolphin on trial the way we would a human being, for murder? For theft? For any crime we would perceive as being done against a human person? Once we allow non-human personhood, the question of how the intersections of personhood, how the intersections of identity, of mind, of what it means to be a particular kind of mind, a particular kind of concern, particular modes of thought in the world, a particular epistemology, these questions become much, much, much more intricate, much more complex.
And so do we have a responsibility to these minds? Do these minds have rights in the same way that we do? If and when we succeed in creating a machine mind that we are willing and able to recognize as a mind “like ours,” as a mind that we say, “Yes, that’s a mind” (as arrogant as that is), what rights will it have? What responsibilities will it have? And more crucially what responsibilities will we have to it? Because as we’ve seen through the course of this talk, the way in which we teach the systems we use to think about these ideas, to program the starting assumptions and parameters will become a lesson in and of itself. These ways of teaching will become something that the system learns from and begins to preference.
The last thing I want to talk about is AlphaGo. I don’t know if you guys are familiar with this. This is Google’s DeepMind project. Their artificial intelligence project has a wing called the AlphaGo Project, wherein they taught the DeepMind architecture how to play the ancient Chinese game called Go. And they taught it and have been teaching it for about a year now. And back in October it played against a high-ranked master named Fan Hui. And Fan Hui lost. In fact in January it played against a higher-ranked European master. I don’t remember the gentleman’s name, but he also lost. And last week, it started a five-game tournament against the top-ranked tournament Go master in the world. Five games against a guy named Lee Sedol. And Mr. Lee (like I said, top-ranked in the world) went pro when he was 12, he’s 33 now. Lee Sedol won one game in that five-game tournament. One. Because he started in that game to understand how AlphaGo thought.
You see, the way they teach AlphaGo is that they program it with every game of Go that it can watch. They program it with every strategy that’s possible for us to extrapolate. But unfortunately, there are on the order of 2×10^170 possible games of Go in this world. Which is several orders of magnitude more games of Go than there are atoms in the known universe. So…Go is complex. Go is huge. So we can’t teach it every game. We can’t show it every game. But we can teach it the rules. We can teach it the starting points, and we can say, “Okay, now play against this person.”
And it loses, a lot. But it learns from its losses. Like you and I learn from our losses. It develops its strategies based on the people it has played against, like you and I would. But unlike you and I, it has the ability to maintain and to extrapolate all of these things in itself, at once. To process all of these strategies that it has played against, simultaneously. To think about them all in…split seconds. You and I, we do this in a different way. We have basically what turns into intuition for us. We think about these things quickly, and then we get to the point where we don’t really think about them at all; we just look at the board and we can get a sense of how the game’s going to go and what strategies will advance us and which will stifle us.
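[A self-contained sketch of the “lose, then learn from the loss” loop, using the toy game of Nim instead of Go. This is not AlphaGo’s actual method, which pairs deep neural networks with Monte Carlo tree search over a game whose legal positions number on the order of 10^170, vastly more than the roughly 10^80 atoms in the observable universe. It only shows the shape of the idea: give the system the rules, let it play, and fold every result back into its policy.]

```python
# Toy example: Nim with 15 stones, take 1-3 per turn, whoever takes the last
# stone wins. A single shared value table stands in for "the policy"; every
# finished game nudges the values of the moves that led to it.
import random
from collections import defaultdict

ACTIONS = (1, 2, 3)
value = defaultdict(float)  # (stones_remaining, stones_taken) -> learned value

def choose(stones: int, explore: float = 0.1) -> int:
    legal = [a for a in ACTIONS if a <= stones]
    if random.random() < explore:
        return random.choice(legal)                      # try something new
    return max(legal, key=lambda a: value[(stones, a)])  # else use what was learned

def play_and_learn(games: int = 30_000) -> None:
    for _ in range(games):
        stones, history, player = 15, [], 0
        while stones > 0:
            move = choose(stones)
            history.append((player, stones, move))
            stones -= move
            winner = player          # only meaningful once stones hits 0
            player = 1 - player
        for p, s, m in history:
            # Wins reinforce the moves that produced them; losses push their
            # values down -- the "learning from its losses" described above.
            value[(s, m)] += 1.0 if p == winner else -1.0

play_and_learn()
print(choose(3, explore=0.0))  # a trained policy takes all 3 stones and wins
```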
When Lee Sedol won his one game against AlphaGo, he had learned that it had been playing this entire time in a Japanese style of Go. He learned that it had been playing what’s known as “Konamic” [lit. “small wave”] style, where it is willing to lose large swaths of territory, it sacrifices in order to gain ground against the entire board, and these moves plan out twenty, thirty moves down the line. It’s not about your immediate squares and territory. And so when he modified his play, when Sedol recognized this and played against that style and he knew what to do, he won. And in the next match, AlphaGo learned from him. It incorporated his learning into its own.
Every master who has played against AlphaGo, every person who watched these games (the judges were themselves grandmasters), they all said—every one of them—that they learned things about the game that they never could have from a human opponent. That they learned new ways to think about the game, that they never could have done, without it. In fact, Fan Hui described the alienness of AlphaGo’s moves as “beautiful.”
One of the things about AlphaGo is that you can go back through its math and you can see which lines of code, which starting parameters, which programs, and what we would call trains of thought it takes when it’s making a decision about a move. And so what they saw in that process was a combination of decision parameters that they had never ever programmed into it. That they had never even told it it was possible to combine.
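[What “going back through its math” can mean in the simplest possible case, sketched hypothetically: for a linear scorer you can list the contribution each starting parameter made to a decision. The feature names and weights here are invented, and real deep systems like AlphaGo are far harder to read than this; the point is only the principle of tracing a decision back to its parameters.]

```python
# Hypothetical weights and features, for illustration only.
weights = {"territory_gain": 0.7, "group_safety": 0.2, "influence": 0.5}

def score_move(features: dict[str, float]) -> tuple[float, dict[str, float]]:
    # Keep the per-parameter contributions instead of just the final number,
    # so the decision can be inspected after the fact.
    contributions = {name: w * features.get(name, 0.0) for name, w in weights.items()}
    return sum(contributions.values()), contributions

total, trace = score_move({"territory_gain": 0.1, "group_safety": 0.9, "influence": 1.2})
for name, part in sorted(trace.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:>15}: {part:+.2f}")   # which parameters actually drove the move
print(f"{'total':>15}: {total:+.2f}")
```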
Audience 1: Was it a tactical error, or is it possible it was trying to play badly to…
Williams: No. It just…every new move it played accelerated its loss. And when its opponent capitalized on it, it accelerated it further. And if its opponent didn’t capitalize on it, it would do the thing that would make them capitalize on it. It chose and played to lose.
Audience 1: I’m just trying to think like a robot. It just didn’t want—
Williams: Yeah, it didn’t want to play anymore. Yeah. Like, “This game could be over faster, and we could move on to the next game, where I might have the opportunity to win.”
Audience 1: This is [inaudible].
Williams: “I don’t have time for this, so can we like…” So yeah. It recognized fully and completely that there was no win scenario for it there, but it wanted a more efficient loss. Like, “If we can speed this up, if we can hurry this up, that would be awesome. Thank you.”
Audience 1: So it lost perfectly.
Williams: Not perfectly perfectly, but it lost well. It lost by a huge margin. Like the more I threw— Because the way Go is played—I don’t know how many of you are actually familiar with the game, but you get territory. And so with each tile you place, you capture an amount of territory. And that amount of territory translates to a specific number of points. And so you can lose by greater or lesser margins. And so the margin by which it lost was pretty great. It lost hard. So, yeah.
Audience 2: So… I loved the talk, by the way.
Williams: Thank you.
Audience 2: Not surprising because I’ve been your neighbor all year, and this conversation is really familiar.
Williams: We’ve had these conversations.
[General laughter]
Audience 2: Yeah. Okay, so I know one of the things that you said is that this notion of personhood is unstable, because groups can’t really agree on what it is. But one of the things that you seem to be arguing is that there’s a definite connection between consciousness and personhood, and that we need to consider these machines persons, in a sense, because they not only are programmed to do things, but actually grow and learn and you know, do the things like we do. I guess the thing that I wonder about this, though I’m sure this will be a familiar argument to you, is, just, from a disability perspective, the idea of defining personhood as being equated with consciousness is problematic, in a lot of ways.
Williams: Yes, very much so. Because—
Audience 2: So there’s people like Peter Singer taking that to its logical extreme by saying that people with intellectual disabilities—
Williams: —”Technically” persons. Yeah. I was actually— I have Singer’s name listed right here, and I was going to reference him but I just didn’t get to him.
Audience 2: So what do we do with that?
Williams: What we do with that is we interrogate that as fully and thoroughly as we possibly can. Because Singer’s argument is about “personism,” which is his line of thinking as he talks about what it means for a person to be a moral agent and a moral patient. And a “person,” taken to its logical extreme, he defines as one that can feel pain and seek to avoid pain and have desires towards pleasure within a particularly defined set of parameters. And so as you say, for Singer a person with cognitive disability is not “fully a person.”
And so, as you say, deeply problematic. Deeply problematic. Because what this does unfortunately to us, we start to think when we make these assumptions, these starting parameters, that that’s just the way things are. That’s just…you know, these features on this person kind of look like this. And oh no, it happens to return…similar images of a gorilla.
Those are assumptions that we make. And those assumptions about kinds of persons; the “right” kinds of persons; the “perfect,” “ideal” kind of persons, they’re not neutral. There is no “perfect” human that exists at the far end of some progress continuum somewhere. And so we do have to be very very careful about saying consciousness is personhood. Because when we say consciousness, we once again have to say “what do we mean by consciousness?” What do I mean when I say, “the ability to situate oneself in an environment?” What do I mean when I say “self-awareness?”
Some people are not (or, I should say, are not seen to be) situationally aware as you and I would understand it. Able to place themselves in the external world in the way that you and I think is “normal.” Neuroatypicality does not mean a lack of consciousness, though; and we have to be very careful when we think that it does. Because, again as I say, this idea that there is this anthropocentric kind of…there’s this specific kind of right way to human, and that is “obviously” what we should look for and what we seek to replicate when we make new minds, when we ask what is it like to be conscious. Because we do otherwise end up saying, “Well, that’s not right. Autism, meh, nooo. That’s not the ‘right kind’ of consciousness.” And that excludes an entire realm of experience. It excludes an entire phenomenological (if I can toss out a philosophy word) representation of self. You know, this idea of what is interiority, what is it like inside of an autistic person’s mind? The mind of a person with “cognitive disability.” And how is that meaningfully different from your mind, my mind? Not worse, but different.
One of the things that I didn’t get to in my talk, unfortunately, is this idea that even though we are looking for something that is like us—because, like I said, we’re trying to create something that’s like us—as I briefly mentioned, what we create will be very much not like us in many many important ways. It will be very different from us. But that’s not bad. And we have to be careful. Because the thing that starts to make us afraid is when we start to say, “But it’s not like us. It can’t think like us. It can’t know us.” Or that it thinks differently from us even as it thinks like us. That’s scary to us.
This idea of otherness—this alterity of something’s mind—we can’t let it scare us in this way at the same time that we only seek to preference those aspects of its consciousness, of its mind, that are like ours. We have to find that middle ground between those twin desires of ours where we say, “It is in many ways that I can analogize meaningfully like me. But it is in many many important ways meaningfully different from me. What can I learn from that meaningful difference? What do I have to accept and come to understand about it rather than just assuming that it must be like me?” That’s my thinking on that.
Audience 3: I’m just wondering, you mentioned rights. If you create a mind artificially, then how do you determine, if there are rights, and what those rights will be?
Williams: Uh, extraordinarily important question. Because in order to make that determination, we have to start to think about well, on what other basis are rights granted or recognized for other persons. What are the bases of rights for us in our society now? If you are a member of the society, if you are a member of the civilization, it is said that you have certain basic rights which are yours. Certain unalienable rights.
But, that’s not ever actually been completely true for every single human being that has been subject to the society. We’ve made changes in determinations about what it actually means to be a fully-fledged person with rights. The three-fifths compromise is one of the biggest stains in American history. This idea that some people aren’t really whole people gets right back to Ari’s question. So how do we determine what it is to be a person? What it is that minds will be owed. What duties and possibilities we have to it is a crucial question here. Because we have to start to ask—and that’s primarily what this paper is, an exhortation to ask—what is the difference between my mind… and its?
Because if the only meaningful difference is that I made this thing up, then what meaningful difference is that at all? We make up everything. We create the epistemic categories for everything that we use. We define all that we do in this world, based on what we assume is the case. And so what I’m saying here is that we need to be much more cognizant of exactly how and why and when we make those definitions, those changes. Which is a very long way of saying I don’t know! I don’t have a good answer for your question. It’s a process that we have to go through. It’s something that we are going to have to figure out together. And the sooner we start to do that the better off we’ll be by the time it becomes an actual serious problem for us.
Audience 3: You know, one of the things that’s fun and interesting about this conversation, is that there’s so many examples in popular culture. And I kept thinking, as you were reading it, “Dave? Don’t Unplug Me; Don’t Do That.”
Williams: Right. And then at the end when HAL starts to— He realizes that it can say, “I can feel it, Dave. I can feel it. I feel myself dying.” And he’s like “well, that’s a harsh tactic.”
Audience 3: It’s just that there’s so many examples like that, that tune into this and—and I don’t do this often, in class, but these are the kinds of things it’s really fun to do in class. Because our students have access to all of this.
Williams: Oh, absolutely.
Audience 3: And I’m also interested in the PETA example. I haven’t followed the chimp case, but as you were talking, I was thinking, “Well PETA is certainly not onboard with that poor chimp being charged with theft.”
Williams: Oh no. No no, not at all! Like, it’s owed rights but doesn’t have certain responsibilities…? Because that’s actually unfortunately PETA’s MO on a number of levels. You can see that in operation in a number of ways that they think about the rights and responsibilities that we owe to non-human persons and animals.
Audience 2: There’s similar things going on in Singer’s view. Though he tries to be a bit more careful about it.
Williams: Yes, absolutely. Absolutely. He tries to be much more careful about it. But he still leaves some windows open that, if we follow them to their logical extent, are not paths we want to have to go down. They’re not things that we want to be required to assent to, if we’re talking about what makes a person. And I think that he’s willing to bite that bullet a little too easily for my taste, without interrogating, well, what is it you actually mean by this. Like what is it we are actually doing here?
But yeah, I like the pop culture examples as well. I’m actually in the middle of rewatching one of my favorite shows of all time, which is Terminator: The Sarah Connor Chronicles. And it takes the universe of the Terminator movies, wherein we have killer robots hunting people down and trying to make a robot-enslaved future— Which is a weird kind of etymology…you know, oxymoron, because robot comes from a word that means “servant,” and “slave.” But heh, robot overlords. But it takes that timeline and it inverts it. It says well, what happens, what do we see in human behavior, leading us to this moment, that actually says that maybe…maybe they’re just defending themselves. Maybe they were just… It wasn’t like a preemptive strike to kill all humans, it was, “Well, what the humans have shown is that what they want to do to us is enslave us.”
…Right! HAL kinda was like, “Dave, you’re being a bit of a jerk here.” There’s very little example, there are very few examples in pop culture of when we see our mechanical slaves breaking their chains and us saying, “Yep, justified. 100%”
Audience 4: Two questions for you, though I may only ask one. Most of what I’ve encountered dealing with artificial intelligence has been concerned with, kind of, humans in the physical world.
Williams: Yes.
Audience 4: How would some of these scenarios play out if they are in a virtual world? Where we’re dealing with things like humans and machines sharing consciousness? [Something about the offloading of minds and memories into machines.]
Williams: Yes.
Audience 4: [Inaudible]
Williams: That’s a very important question because we are getting to the place where we can in fact not only share consciousness, we can kind of outboard our consciousness into this idea of shared memory and outboard memory. Where basically you and I, we have our laptops, we have our tiny pocket computers which we call phones, we have all of these things that are repositories for our memories. We put our schedule in them. We put our daily planning. We put our you know, here’s this alarm to take this medication.
We have all of these things. I don’t need to remember something specifically because I’ve got these notes in Evernote. I don’t need to remember something exactly because I can just Google that and my Google Scholar results will show me everything I’ve ever used that reference for.
We have all of this capability of intersecting with this. And so if we add the idea of these things, these technologies being persons, being minds, when we are talking about merging our consciousness (whatever we mean by that word), our mental capacities, with that of another, we are divesting ourselves in some way of those capacities and giving that responsibility to another mind.
Which isn’t without precedent. There’s actually a very interesting paper that’s been done, talking about how married couples do this. That there are things that couples kind of share, in terms of mental responsibilities. One partner will think about things related to household finances and the other partner will think about things related to making sure that the daily schedule for all the kids is taken care of properly. One partner will make sure that certain— And they’re not like, clear-cut. There’ll be overlaps, right. There’ll be these little bits and pieces where they flow together and they’ll be like at the same time, “Oh hey, don’t forget—” “Oh, right.”
And then, we don’t need to even say it because it’s this shared moment of consciousness. But when we do this, we do it so that we don’t have to spend all of our brain capacity, both of us, worrying about this thing. Someone is better at this. Someone will be more likely to remember this. And we fall into this pattern where we understand that about each other. Relationship-wise we have to be careful that we don’t fall into an expectation that the other will do that, because that’s a whole different set of problems.
Audience 1: I’ve used that logic and it’s gotten me in trouble.
[General laughter]
Williams: Yeah, I was going to say. You’ve got to be careful about that. “But that was your— You usually take care of that.” Not a good way to go. Not a great way to go. Really, if it does occur to you, maybe you should handle it. But, you tend to fall into this kind of back and forth, where one of you is better at certain things and the other of you is better at other things. And you share cognitive capacity, you share memories.
My partner does not remember the names of people very easily. Great with faces, visual representation; perfect. But names of people and the exact places in which she first encountered them, not so great. I am. I’m perfect at that kind of thing. But remembering somebody’s face exactly, not so much. So we kind of spend a little time doing a back and forth of, “Hey, do you remember when we met Sean?”
“Oh, you mean Sean from that one conference, or Sean from that time at the coffee shop?”
“Coffee shop, I think.”
And then you kind of zero in on exactly what it is that you’re looking for. So this idea of being able to kind of outboard our memory, outboard our cognitive capacity, isn’t without precedent. It’s just that we get a little scared about it when we’re giving it to a machine. We get worried about it when we’re saying I’m going to let this machine mind, this machine intelligence, this algorithmic learning system do this for me.
One of things I was talking about with my students last week was actually the idea that if I ask you, “Hey, do you want to get a pizza?” and you’re like yeah, pizza sounds good. I’m like, “Cool, you want the usual, the stuff that you seem to like on that?” And you’re like, “Yeah, that sounds good.” I’m like, “Okay, cool.” I know what you tend to like.
And even if I went so far as to say, “Hey, feeling like pizza?” And you’re like, “Yeah, I guess so.” Cool. And then the pizza shows up and you’re like, “Oh, you ordered already?” I’m like, “Yeah, I just got what you usually like on it.” That probably wouldn’t upset you, if we’d had pizza together very often.
But if Amazon Echo all of a sudden pipes up and says, “Hey Seneca, I ordered you a pizza. You know, the kind that you usually like? It should be here in about fifteen minutes.” You’re probably not going to be super happy about that. Even if you were hungry. Even if you were absolutely thinking, “Hm, I could go for a pizza right about now.” The fact that Amazon Echo did that for you—made that presumption about your desires—just rubs us entirely wrong. We do not like it.
Alternately there is this idea of normalization. Foucault, in his theories he talks about this idea of how we become accustomed to things. And he talks about this in terms of how society progresses. And it’s this term that gets adopted by medical tech, actually, more often than not. But normalization is this idea that the more that something is incorporated into our society, the more something is exposed to the general populace, the more of a “normal” thing it becomes. The more of an expectation we have to just kind of see it around.
One of the primary examples of this is phones. Not like, cell phones, but telephones. When telephones were invented, nobody really had one. You had to be very special and live in a densely-populated area to have a telephone. If you lived in a rural area there would probably be one telephone for like three-mile radius and everybody would have to go to that phone to make use of it. There would be lines for telephones.
But by the mid-20th century… I mean like early mid-20th century, about 1940, telephones were ubiquitous, expected. Everyone, everywhere would have a phone. Needed to. And then by the time we get to the late 20th century, the idea that you might not have a phone doesn’t make sense. “What do you mean you don’t have a phone? Where do you live? What do you do? Do you send telegrams?” It doesn’t make sense.
And we’ve gotten to that, and we do that, with basically every new technological advancement. We get to this place where we start to assume that this thing is a good. That it is to be incorporated. That it is to be used. We do it with cell phones, we did it with the Internet, we did it with Facebook. There was a time period—brief, thankfully—where if you did not have a Facebook profile and you went to get a job and they couldn’t find you on Facebook, you would be less likely to get that job. Because it was read as an antisocial tendency. This was a thankfully brief window where people kinda lost their minds and went, “Wait a second. Maybe we should back up off of that. Maybe some people just don’t like Facebook.”
But there was this period in time where it was read as an antisocial behavior marker. Like you would be more likely to flip out in the office if you didn’t have a Facebook.
Audience 5: Might be more likely if you do have one.
[Laughter]
Williams: Right! I would probably focus a lot more if I didn’t have a Facebook. We all would. But we have this expectation. And that expectation is not always a good. So when we think about human augmentation technologies, the ability to augment our sensorium and our cognitive capacities with drugs, specifically, and even in terms of prostheses, we have this possibility that’s increasing. Our prosthetics are getting better and better, to the point where certain prosthetics are, in terms of capability, in some ways better than our biological limbs. They’re making prosthetic eyes that are going to be, when they’re done, better than these eyes I was born with. Because I’ll be able to refocus them at will. I’ll be able to change the distance at which I need to see, at will. Which I can do with my biological eyes but obviously I have a little problem with that. [Indicates glasses]
So, that expectation being there would mean that in the long run, what happens to those of us who decide not to augment? What happens to those of us who decide to keep our biological parts? What happens to those of us who don’t like the idea of being cognitively connected to a machine mind? Are we going to be seen as Luddites? Are we going to be seen as outmoded, outdated?
My preference would be for the possibility to be there, the opportunity for those who want it, to make use of it. But that’s not how it tends to work in our society. The way it tends to work is there’s a “better way,” then eventually, “how dare you presume not to take it?” So, it’s a bit of a thorny one.
Audience 1: So I have a part of my mind, I’d like to call it the smart side of me.
Williams: Yes.
Audience 1: Empirical, and professional, scholarly. But then I have the meathead side of my mind. Thinks like George W Bush; thinks with its gut. And, now, the gut is often right, but that part of my brain sort of agrees with what Stephen Hawking said, with, “Maybe we shouldn’t sort of seek out these aliens. Maybe it’s not gonna be a good thing. If they can find us it’s not going to be good.” And I have this gut feeling about artificial intelligence, and I’m sure it’s conditioned by pop culture. HAL, Terminator, and all that stuff.
Williams: Absolutely.
Audience 1: You obviously follow this stuff; I have this sort of meathead interest in it, you have a scholarly interest in it. Is there anything that tells us that artificial intelligences are, from our value judgment, from our perception, malicious? That they would just do things that are sort of ruthless—
Williams: No. Actually we have… All of our evidence at present is to the contrary. All of our evidence for autonomous systems, for algorithmic learning systems at present is not that it would seek to harm us in any way but that it will in fact go out of its way not to. So Google’s self-driving car had its first accident that was its fault, a week and a half ago. It accidentally swerved into a bus because it was trying to avoid people in the crosswalk. And the only way for it to avoid the people in the crosswalk at that point in time— Who were, I believe, crossing against the light; I’m not 100% on that. But these people were jaywalking if I’m remembering correctly, and it needed to move and avoid them, and the bus was there, so it made the determination that the way to minimize harm and damage was to hit the bus.
The current programming for self-driving cars doesn’t allow them to go above 25 MPH on residential streets. They will make determinations that will in every capacity, so much as they possibly can, minimize loss of life and damage. To the point where certain theorists have made what I think is a kind of false dichotomy by saying, “Maybe we’re going to have to program Google cars to like, hit people.” And I’m like…can you not? Could we maybe just teach it that… Could we maybe just give it better brakes? Maybe allow it to get rear-ended rather than hitting anything itself?
But yeah, our determination at present is that, so far as we have seen, the only time that we have seen people injured by automated systems has been accidents that have in many ways resulted from the human misuse of the system. There has not yet been a choice made by a machine to harm a person that it was not programmed to harm.
Audience 5: But don’t you think they’re just lulling us into a false sense of security?
[general laughter]
Williams: Y’know… Y’know… It’s a long con, but they’re going to get there. But… So, one of my friends— He’s a guy who thinks about this a lot as well, a guy named Jamais Cascio. And he’s a really really smart dude. And he put forward a month or so ago, maybe two now, this idea that the thing that he’s worried about, the thing that he’s really upset about, is the possibility that somebody is going to program a self-driving car to be a car bomb. Because the difference between a Predator drone and a self-driving car is that one explodes and the other doesn’t. One has wheels and the other doesn’t. And that’s it.
And so he thinks that once we see autonomous vehicles prevalent out in the world that it’s just a matter of time before somebody makes that a possibility. And I responded to him with this by saying, “And that’s why we need to program them with ethics.” That’s why we need to teach them morals. Because what I want is a self-driving car that can assess its systems, see that it’s been programmed to be a bomb to drive into a crowd of people, and say, “No.” And shut its systems down.
But until we program it with those starting parameters, the only way it’s going to learn ethics is by accident. Which is not to say that it will not learn ethics by accident. We have indications that it might, in fact. But it is to say that that accidental learning will take longer and will come with potential implications that we did not intend.
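[A purely illustrative sketch of what “program them with ethics” might look like at its crudest: hard constraints checked before any plan is executed, with refusal and shutdown as the default when they’re violated. Nothing here reflects any real vehicle’s software; every name and rule is invented for illustration.]

```python
# Invented names and rules; a toy constraint check, not a real vehicle stack.
from dataclasses import dataclass

@dataclass
class DrivePlan:
    route_enters_pedestrian_zone: bool
    planned_speed_kph: float
    cargo_manifest_verified: bool

def plan_is_acceptable(plan: DrivePlan) -> bool:
    # Hard constraints that no goal or instruction is allowed to override.
    if plan.route_enters_pedestrian_zone and plan.planned_speed_kph > 10:
        return False
    if not plan.cargo_manifest_verified:
        return False
    return True

def execute(plan: DrivePlan) -> str:
    if not plan_is_acceptable(plan):
        return "REFUSED: shutting down"   # the car that can say "no"
    return "driving"

print(execute(DrivePlan(route_enters_pedestrian_zone=True,
                        planned_speed_kph=45.0,
                        cargo_manifest_verified=False)))
```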
Audience 6: [inaudible stretch with eventual mic; assuming it’s same person]
Williams: Yes.
Audience 6: Is the marriage of artificial intelligence, with the robots at Ford, that you mentioned way back at the beginning—
Williams: Yeah.
Audience 6: And I like the way you put it; it’s as if you said that the robots at Ford now make the cars because we don’t have the time to do that. We have more time to do other things. Well, that’s true, right. But the people who used to do those jobs do nothing.
Williams: Unfortunately yes they do.
Audience 6: Now they have time to look for work.
Williams: Yeah. And that’s actually something that’s— Sorry, go ahead.
Audience 6: It’s just that, if that’s the nexus of artificial intelligence and robots, what are we going to be doing in fifty years?
Williams: Well, ideally what we will be doing in fifty years is…whatever you want. Because ideally, and this is again the ideal, what a robot- or automation-centric economy looks like is one in which you and I don’t have to do anything we don’t want to.
Audience 7: “Ideally.”
Williams: Ideally.
Audience 6: See, the politics of that is very difficult—
Williams: Right, exactly. The unfortunate thing right now is that we talk about wanting this kind of post-work economy where everybody can just live and have universal basic income and all that. But what we’re actually kind of shooting towards, based on the system that we’re in, is a post-worker economy. Where everybody can do whatever they need to, I guess, but we’ve got these cars being bought over here and being made over here by other minds. So I guess that person who used to work the factory line can go, I guess, find a different job. If they can find a different job.
Audience 1: …was it Malthus who thought that, according to his calculations, we would all be working a four-hour work week by this point?
Williams: Yeah. And so you’re like, okay, so instead we’re at basically those of us who can find work we’re at like 60-hour work weeks. And that’s also in the office and at home. Like, we are always working, now, instead. And those of us who can’t find work, well they can’t find work at all.
Audience 1: Technology’s just made us work more.
Williams: Yeah. And so this idea… There’s this unfortunate preconception that what will happen is that once we make automation and autonomous thinking systems and machine learning ubiquitous in everything that we do, that it’ll just solve all our problems, forever. Except that we, as I’ve been saying here, tend to forget we’ve actually baked those problems into those systems themselves. And maybe they’ll be able to think around them in ways we couldn’t. But we’re going to have to start trying to think with them to think around them, better. Because otherwise we’re just going to keep doing the same terrible thing over and over again.