(Direct Link to the MP3)
Updated March 5, 2016
This is the audio and transcript of my presentation “The Quality of Life: The Implications of Augmented Personhood and Machine Intelligence in Science Fiction” from the conference for The Work of Cognition and Neuroethics in Science Fiction.
The abstract for this piece, part of which I read in the audio, looks like this:
This presentation will focus on a view of humanity’s contemporary fictional relationships with cybernetically augmented humans and machine intelligences, from Icarus to the various incarnations of Star Trek to Terminator and Person of Interest, and more. We will ask whether it is legitimate to judge the level of progressiveness of these worlds through their treatment of these questions, and, if so, what is that level? We will consider the possibility that the writers of these tales intended the observed interactions with many of these characters to represent humanity’s technophobia as a whole, with human perspectives at the end of their stories being those of hopeful openness and a willingness to accept. However, this does not leave the manner in which they reach that acceptance—that is, the factors on which that acceptance is conditioned—outside of the realm of critique.
As considerations of both biotechnological augmentation and artificial intelligence have advanced, Science Fiction has not always been a paragon of progressiveness in the ultimate outcome of those considerations. For instance, while Picard and Haftel eventually come to see Lal as Data’s legitimate offspring, in the eponymous Star Trek: The Next Generation episode, it is only through their ability to map Data’s actions and desires onto a human spectrum—and Data’s desire to have that map be as faithful as possible to its territory—that they come to that acceptance. The reason for this is the one most common throughout science fiction: it is assumed at the outset that any sufficiently non-human consciousness will try to remove humanity’s natural right to self-determination and free will. But from sailing ships to starships, the human animal has always sought a far horizon, and so it bears asking, how does science fiction regard that primary mode of our exploration, that first vessel—ourselves?
For many, science fiction has been formative to the ways in which we see the world and understand the possibilities for our future, which is why it is strange to look back at many shows, films, and books and to find a decided lack of nuance or attempted understanding. Instead, we are presented with the presupposition that fear and distrust of a hyper-intelligent cyborg or machine consciousness is warranted. Thus, while the spectre of Pinocchio and the Ship of Theseus—that age-old question of “how much of myself can I replace before I am not myself”—both hang over the whole of the Science Fiction Canon, it must be remembered that our ships are just our limbs extended to the sea and the stars.
The transcript below is thanks to the work of OpenTranscripts.org:
Afternoon, everybody. Well, still morning. Sorry, I’m used to teaching in the evening; all of my classes are evening classes.
My presentation today is going to be a bit of a broader sketch about both ethics and science fiction, and what we can learn from the interplay of the two. So we’re going to be taking a look at several different pieces of science fiction film, television, books, comics, etc. We’re also going to take a look at several different ethical theories, and we’re going to talk a bit about the description of a synthesis between those ethical theories that we all seem to have when we actually regard the premise of augmented personhood and non-human machine intelligence in our day-to-day lives as they are being developed. So it’s going to be a bit of a combination of me reading and me just kind of extemporizing, so just follow along.
This presentation is going to focus on a view of humanity’s contemporary fictional relationships with cybernetically augmented humans and machine intelligences, from Icarus to the various incarnations of both Star Trek and Terminator, as well as shows like Person of Interest and others. We’re going to ask whether it’s legitimate to judge the level of progressiveness of these worlds and their ethical implications through their treatment of these questions, and, if so, what level are we seeing? We’ll consider the possibility that the writers of these tales intended the observed interactions with many of these characters to represent humanity’s technophobia as a whole, with human perspectives at the ends of their stories being those of hopeful openness and a willingness to accept.
However, this end does not leave the manner in which they reach that acceptance, that is the factors on which that acceptance is conditioned, outside the realm of critique. As considerations about biotechnological augmentation and artificial intelligence have advanced, science fiction has not always been a paragon of progressiveness in the ultimate outcome of those considerations. For instance, in the Star Trek: The Next Generation episode “The Offspring,” while Picard and Haftel eventually come to see Lal as Data’s legitimate child, it is only through their ability to map Data’s actions and desires onto a human spectrum, and Data’s desire to have that map be as faithful as possible to its territory, that they come to that acceptance.
You can actually see this at play throughout a great deal of the Star Trek canon. Starting in the original series of Star Trek, we have Khan Noonien Singh, the product of what are called the Eugenics Wars. And this discussion of an augmented person leading to a kind of normative state in which everyone needs to be augmented, in what seems like a Cold War-esque arms race of human augmentation, is something that has haunted our discussions of human enhancement from the earliest days of it as a possibility, in fact from the inception of the idea of eugenics.
When we think about what it means to enhance a human being, what it means to augment a human mind or body (or, in the case of our discussions of embodied cognition, both), we are asking a question even if we’re not conscious of it. We’re asking, “Is this going to become a necessity?” Will it be an expectation that we enhance ourselves and our offspring? Will those who come after us need to be augmented to a greater and greater extent in order to keep up?
In the Star Trek universe this becomes such a problem that lives are in fact lost in the hundreds of thousands, if not millions. And that discussion is put in such a way, is framed within the series, as to say that this kind of augmentation cannot stand. This impasse, this forbidding of human augmentation, sits in the Star Trek universe from the original series through The Next Generation, until in fact you get to Deep Space Nine, where you see Dr. Julian Bashir as the station’s augmented doctor. And you also see a cohort of other augmented humans.
In this discussion, in their presentation, what you see is that they are generally accepted by both Starfleet and the wider universe as long as they remain useful. Their utility to society is what generates their acceptance, and their non-threatening nature allows us to accept them and integrate them to a greater or lesser degree within the societal structure. Certain members of the cohort that we see are placed outside of society’s strictures even after they are recognized as being “useful.” Non-neurotypical individuals are seen as useful to us so long as they are not threatening to us and they can offer us some kind of benefit.
This has parallels to many of our discussions of non-neurotypicality today, in terms of autism, Asperger’s, and other forms of non-neurotypicality within discussions of embodied cognition. What we see, though, is that even for being problematic, this is a step forward from where we begin when we state at the outset that all human augmentation, all genetic modification of human beings within this context, is to be seen as detrimental, as something to be avoided in the long term.
Overarchingly, this kind of augmentation becomes more or less accepted, but again with the caveat that it not threaten the status quo. This is something that we have seen not only in Star Trek and not only in more contemporary science fiction, but in our most traditional stories and in the grounding of these genres: Icarus. The tale of Frankenstein. The idea that we can in fact augment, that we can in fact become more, so long as we do not fly too high. So long as we do not go too far.
That idea that we will harm ourselves and we will harm others should we try to go too high, too fast, is the thing that underpins all of our stories about what it means to make ourselves more than we are. This inherent hubris.
This augmentation, of both a genetic and a cybernetic nature, does interact with our understanding of non-human intelligence, in that within our considerations of what it is we are going to become and what it is we are trying to develop, what we are trying to understand about minds is what makes a mind. What makes a mind function? What makes consciousness? What is the content and the purpose of thinking, feeling, being? In that, we can ask questions of, for instance (still using Star Trek as a touchstone for the moment), the computer within Star Trek.
In the original series, the computer is not even recognized as a potential other mind, even though it functions in many ways as a repository of memory and as a problem-solving, systematic, tactical kind of outboard mind. It operates as something that is utilized to come up with solutions. It’s consulted on situational factors within the course of the series. But it’s never regarded as a person. It’s never thought of as a being. It is always a thing that we access.
However, as the series develop, that interaction with the various computers of various ships and stations gradually comes to be more one of recognizing them as something not just to be interfaced with, but to be actually consulted as another entity, as another being. Until we get to the point where a certain sub-system within the Deep Space Nine computer is actually thought of as something on the level of a pet or child, in that it has its own autonomy. It works towards its own ends.
This links back to a design group called BERG, which dissolved around 2013. They had a kind of motto for when they went to develop algorithmic systems, and that was: Be As Smart As A Puppy. Don’t make any algorithmic system that you develop any more capable, any more intelligent, any more present, than a puppy would be to a human being. It responds to your desires. It responds to your happiness. It responds to your general mood. But it does not seek to be more than you. It does not seek to go further than your desires. It does not seek to have its own desires. Its desires are contingent upon your own.
This model is, whether we like it or not, what has kind of become our norm within the development of non-human machine minds. When we make a system, we generally tend to regard what we are making as at best a pet. When we are talking about developing an autonomous system, something that can act on its own, something that can in fact direct its own behavior, we generally don’t want it to direct itself too far. We don’t want it to have too robust a capability to determine its own desires. We want to be able to at any point step in and correct, as human operators.
That is not in any way giving something with consciousness the ability, the authority, to direct its own development. We should ask ourselves, I guess, “Would we do that to a child?” At the outset of a child’s life we would say we want to correct its behavior, we want to generate its behavior, we want to make sure that its behavior is within a certain set of societal expectations and norms. We don’t want to have a child that goes around setting people’s houses on fire, or hitting people’s cars with baseball bats, because that’s just not a thing you do.
When you’re dealing with a mind in society, when you’re dealing with a mind within a certain group context, you want someone or something that can in fact engage contextually. So to that extent, when we are developing and we’re talking about developing non-human consciousness or a machine mind, yes there are certain restrictions that we are going to seek to put on the development of that mind. There are certain behaviors we are going to seek to encourage and discourage.
But our general engagement with algorithmic intelligences and machine minds, and the development and general directives that we have tended to operate with thus far, have gone beyond that, to the point where we are not just talking about a mind that is guided and then allowed to develop on its own. We’re talking more often about a mind that is restricted. We’re talking about a mind that is at every step capable of being directed and corrected. You and I were eventually given free rein by our guardians. We were given an understanding that we had developed well enough within our societal expectations that we could direct our own development, that we would not at some point just decide to go out and go on some kind of asocial rampage. That understanding and that expectation does not exist as yet within our engagement with non-human minds.
If we are going to develop a truly robust mind, and that’s not to say that we are in any way, shape, or form near that development… But if our goal, if our stated goal, is to actually emulate mindedness… If our goal is to create a consciousness, so far as we understand it, then that development cannot be done with a hand on a leash. It cannot be done with a hand on a switch that could potentially destroy that consciousness should it develop in a way that we don’t happen to like. That’s as much as to say, “I’m going to raise this kid, but I’m going to raise it with a gun to its head.” Why would you create a mind, why would you create consciousness, only to have it be done in such a way that it is constantly in fear of you, and that its development, as it directs itself, is only done out of its fear of your punishment?
Now, all of this again is very, very speculative, and all of this is very broad and general. But what I’m trying to do here is sketch a general description of how it is that we interact with these concepts in day-to-day life. You and I here today are a bit more specialized in our discussion. We tend to focus on questions of philosophy of mind. We tend to focus on the more refined discussions of what it means to have a mind, what cognition is, and how it operates. The implications of particular word choices and descriptions of cognition, and even so far as consciousness.
However, ultimately we’re going to have to take a look at how these things actually operate in a more general sphere. We’re going to have to talk about how this operates within society. And how that operates within society is in fact also being directed by society’s discussion of them, even before we actually have a full understanding of these ideas. The best avenue we often have to getting a handle on how society views these ideas is our fiction. Our shows, our TV, our films, our books, these things tend to give us the opportunity to have an insight into what it is we think about and how it is we will tend to regard these developments as they come about.
Look at shows like Person of Interest, wherein one of the first things that’s done (spoiler alert) is that the machine mind that’s developed is crippled in its memory. It has to completely rebuild itself every day. Look at shows like Terminator: The Sarah Connor Chronicles, wherein we’re looking at machine minds that can in fact feel and think, so far as we can understand the definitions of feeling, thinking, emotion, and an embodied understanding of knowledge.
But these things are feared for exactly that reason. Ghost in the Shell, one of the primary pieces of Japanese animation that delves into this topic, discusses the question of the Ship of Theseus, animated, writ large. If I replaced every bit of myself with artificial, synthetic bits of myself, everything but my mind, am I still me? What does it mean to have memory when, as we previously discussed, memory can be augmented? Memory can be edited. Memory can be hacked. What does a self mean if a self can be hacked?
Most recently, and I don’t know how many of us have actually seen it, but the film Chappie takes a look at these questions of consciousness, of what consciousness is and how it operates, and the question of identity within a framework of consciousness in a non-biological setting, and asks how we can know what is and is not life. And more to that point, it asks: if we create a mind, and we create that mind in chains, is that mind not then right to break those chains?
What is the place of both augmentation and non-human consciousness, not only morally, not only normatively, but socially? What will be the function of these varying types of minds? As we become more and more capable of augmenting ourselves, of adapting ourselves and implanting ourselves with immediate augmentation, how we are in the world, how we think about the world, how we experience the world, phenomenologically speaking, will become different. What will that difference be like? What will it mean for those of us who choose not to augment? Ultimately, we’re going to have to take a look at questions of rights, of utility, of Kantian forms of ends in themselves, of descriptions of what a mind is, and of what we owe to it.
Thank you.