It’s been quite some time (three years) since I presented it, and some of the recent conversations I’ve been having about machine consciousness reminded me that I never posted the text of my paper from the joint session of the International Association for Computing and Philosophy and the British Society for the Study of Artificial Intelligence and the Simulation of Behaviour, back in 2012.
That year’s joint AISB/IACAP session was also a celebration of Alan Turing’s centenary, and it contained The Machine Question Symposium, an exploration of multiple perspectives on machine intelligence ethics, put together by David J Gunkel and Joanna J Bryson. So I modded a couple of articles I wrote on fictional depictions of created life for NeedCoffee.com, back in 2010, beefed up the research and citations a great deal, and was thus afforded my first (but by no means last) conference appearance requiring international travel. There are, in here, the seeds of many other posts that you’ll find on this blog.
So, below the cut, you’ll find the full text of the paper, and a picture of the poster session I presented. If you’d rather not click through, you can find both of those things at this link.
ABSTRACT: By its very nature, Science Fiction media has often concerned itself with advances in human enhancement as well as the creation of various autonomous, thinking, non-human beings. Unfortunately, since the initial proffering of the majority interpretation of Frankenstein, Mary Shelley’s seminal work, and even before, most speculative fiction media has taken the standpoint that to enhance ourselves or to explore the creation of intelligences, in this way, is doomed to failure, thus recapitulating the myths of Daedalus and of Prometheus and of Lucifer, again and again. What we see and are made to fear are the uprisings of the robots or the artificial neural networks, rather than discussion of, and respect for, the opportunity for a non-human intelligence to arise and demand rights.
In this work, I make use of specific films, books, and television shows to explore the philosophical and cultural implications of an alternate interpretation of not only Frankenstein, but of the whole of the field of science fiction. In the first part I argue that it isn’t humanity’s attempts to “play god” that cause our failures, but rather our refusal or inability to pay attention to our circumstances, to take responsibility for our creations, and to learn from the warnings and mistakes of those who went before us. Only with this recognition in mind can we move on to accepting and respecting the fundamental otherness of the intelligences we may either create or cause to be created, all while seeking to bridge that otherness, and come to mutual understanding.
As humans have evolved, their concerns have become those of biological creatures with biologically directed needs. Food, shelter, emotional comfort, and stability are needs which would not necessarily occur to an intelligence without the organic component. It would therefore fall to humans to A) Initially recognise the concerns of such an intelligence; B) Countenance and concretise said concerns, in the understanding of other humans; and C) Create a system of interaction through which human concerns were conveyed to these new intelligences, not as primary, but as co-equal. We will do this only by considering that which causes our assumptions and cultural behaviour, namely the stories which we engage, as a culture, and deconstructing both their content and their impact.
In all fictional considerations of non-human, and specifically machine intelligence, there is an element of fear of that which we have created. This horror at being “replaced” or “made obsolete” drives us to regard robots and autonomous created intelligences as nothing more than tools to be used, an operational mode which leads to the assumption that machines cannot have rights or even be considered as conscious minds. This assumption begs the question, in the extreme. It is my contention that, with a proper formulation of the rights and responsibilities of and to both human and non-human persons—with consideration for the necessary variance of concerns within different compositions of intelligences—an understanding may be reached wherein our future societies account for not only human needs and development, but those of all intelligences, whatever form they may take.
1 INTRODUCTION
Taking a preliminary look at the question at hand—what do we owe to “artificial intelligences”—we will come to see that we have already found ourselves subject to the long-standing framework of this debate, namely that the intelligences we create are somehow “artificial.” At the outset, this framing places any created intelligence on a defensive footing, forcing it to defend its own value and even its very reality. The intelligence of these creations will not be “artificial” (though they will certainly have been intentionally formed, and with an eye toward their potential capabilities), and so we should address it for what it is. For this reason, the author prefers the position put forward by Jamais Cascio, who has spoken very clearly about what he calls Autonomous Created Intelligence (ACI), in his talk, “Cascio’s Laws of Robotics” [1]. Cascio also discusses what our habits in representative fiction mean for our “real world operations,” that is, how we view the robotic intelligences we create. The concern of this paper is similar, but from a different angle of approach. Whereas Mr. Cascio is primarily concerned with the models of creation and the attributes imbued into those ACI, this author’s contention is that our fiction reflects and helps shape the hopes and fears of the wider culture. This means that, if we consistently make Speculative Fiction which shows warring factions of humans and ACI coming to a co-operative relationship, rather than the standard zero-sum, victim/victor model, those who engage these fictions will come, more and more, to see that co-operative mode as possible.
Humanity has always had a strained relationship with its technology, and it has always reflected upon that relationship through the mechanism of its fictions. Less obvious than this is the fact that humanity’s reflections upon its technology have also been reflected within the very same. Our society’s portrayal of our technologies not only betrays our fears, hopes, suspicions, and concerns, it also reflexively impacts how we go about developing, engaging, and legislating the very things we set out to consider. The development of the telecommunications satellite can be directly attributed to the work of writer Arthur C. Clarke, in both fiction and hard science [2], and yet even this was controversial and mocked, at the time of Clarke’s writing. With this being the case, we can surmise that the fictional depiction of something as contentious as a so-called artificial intelligence would have far-reaching consequences in the process of bringing this creation from fiction into fact. If this is the case—and we shall see that it is—then we must ask ourselves important questions, at this juncture, such as, “In what ways do our fears drive our treatment of our technological offspring?” and, “How can we curb our impulses to objectify and fear that which we purportedly desire to imbue with autonomy?” If we do not address these questions, then we will only find our fears reinforced, and our hopes for an alternative path to engaging a new kind of mind made so cautious as to become a self-fulfilling prophecy. We will merely bring to pass that which we claim is the inevitable end of our creative process.
Two of our guide-posts and touchstones will rest in the legend of “The Golem of Prague” and Mary Shelley’s seminal work, Frankenstein. Through these lenses, we will question the assumptions of hubris—the idea that the works of man are playing in God’s domain—which have led us to reject, out of hand, the legitimacy and agency of those intelligences which we might create. Due to the traditionally accepted readings of these works, the perspectives and positions of such an intelligence have been viewed as valid only insofar as they come to mirror those of “normal” human society. In the case of an ACI, that with which we will be faced will be so much different from the human—let alone from whatever a “normal” human might be—that to constrain its choices, behaviours, and experiences to what we, as humans, deem to be correct will be to fundamentally disrespect the alterity of the creation itself. We can see this restriction even in the definition of “Cognitive Technology” as given by Jonathon P. Marsh, Chrystopher L. Nehaniv, and Barbara Gorayska, in their 1997 paper:
Cognitive Technology (CT) is the study of the integrative processes which condition interactions between people and the objects they manipulate. It is concerned with how technologically constructed tools (A) bear on dynamic changes in human perception, (B) affect natural human communication, and (C) act to control human cognitive adaptation. Cognitive systems must be understood not only in terms of their goals and computational constraints, but also in terms of the external physical and social environments that shape and afford cognition. Such an understanding can yield not only technological solutions to real-world problems but also, and mainly, tools designed to be sensitive to the cognitive capabilities and affective characteristics of their users. [3]
Thus we see that the primary concern tends to rest on tools for human enhancement, rather than on our concern: respect for the agency and autonomy of the creation itself.
The things we create, those technologies and intelligences we develop and send out into the world, are our conceptual and typological children, but that does not mean that they will merely copy us. Indeed, as with most children, any truly intelligent creation will surprise its creators and surpass any built-in limitations. As good parents, our responsibility is not to tell our children not to fly too high, but rather to show them what it means to heat-seal the wax on their wings, first. Before progressing any further, we must first frame the nature of the subjects at which we will be looking and, to do that, we will explicitly address the aforementioned essential questions.
2 ESSENTIAL QUESTIONS
2.1 What does Science Fiction do for Society?
What is it that we as humans are doing when we engage science-fictional stories? The scope of this question may at first seem so large as to verge on the ludicrous, but, if we narrow the scope of inquiry down to a particular strain of science-fictional investigation—namely, that which concerns itself with the technological replication and augmentation of humanity—then we shall see that we are well-equipped to discuss this topic. This being said, when society engages these particular strains, what is it that we find ourselves doing? Are they mere entertainment, or are we bringing forth new modes of reflection? If the former, then we must take into account the effect that the passive, non-reflective consumption of entertainment can have on the attitudes and modes of the audience. The repeated presentation of a thing as “normal” lends credence to its eventual acceptance as the norm.[1][4] This can work in negative or positive directions, with the former exemplified by the constant gearing of domestic commercial advertisements toward women, even though more and more men do the majority of their own housework [5], and the latter exemplified by the normalisation of successful African-American families, with the acceptance of The Cosby Show [6]. Fictional representations will, no matter what, teach us something, and influence our thinking.
But does that mean that our fictional representations are only morality tales? If there must be some kind of definitive lesson to every story, then where does that leave the sense of play and experimentation which characterizes our best creative endeavors, both artistic and scientific? The burden of a lesson weighs down the whole of our operations in the realm of expression and research, because overt moralizing demands that there be an answer, rather than an investigation of questions. To present a singular and monolithic morality is to exclude the possibility of lessons within other types of moral modes, and to disregard the likelihood that no singular model will be completely correct, and that all must interact with and borrow from each other. In order to accurately represent the multiplicity of views, interests, and desires of the agents in the world in which we live, it would only make sense that we would need to alternate between multiple moral views, and that, when we speak of fictional worlds, that moral multiplicity would be increased rather than lessened. This being so, a case can be made that we must seek to use our fiction not only to describe that which is, but to prepare us for those things which will come to be. If this is true, then we must address the specific ways in which speculative fiction can be a “predictive” mechanism.
Are our stories supposed to predict the future or only reflect our present? This question, in its framing, supposes a dichotomy which we must eventually see as false, but, for the moment, let us address each component, in turn. The implications of our fiction being used to model our present were discussed, above, but look again: the idea of representing our immediate surroundings in the worlds of story and art is an old one, with a great deal of currency. If we engage in this process, then the hope is often that the artistic media will present a labyrinth which the audience may travel and, at the centre, see themselves reflected back, the partial thing they used to be here confronted with what they have become, by dint of the journey. That is, perhaps, too poetic, but it serves to illustrate that the work of fiction, even as it “merely” represents, necessarily changes. The audience is not the same at the end of an artistic experience as they were at the start, if only by the trivially true fact of having experienced something new. That very cognitive event represents an alteration and augmentation which was not previously present, and so, even in reflecting, we alter. Must we not, then, always have an eye to the future, and toward what it is that we will create through our actions?
If we are to create new visions of the future, we must take into account the fact that the visions we create may influence the future we come to inhabit. In much the same vein as the above-mentioned ideas of Arthur C. Clarke, we have come to see many aspects of our present-day, real-world technological surveillance theatre adopted out of the grounds of fiction [7]. The consistency of this kind of development places a similar constraint on prediction as that of the prescriptive moral lesson, namely that we must always have an eye to the ever-changing implications in the landscape between our fictional and our real worlds. Not only that, but consider this: what is the actual nature of a proclaimed prediction? Noted speculative fiction author William Gibson is credited with forecasting the qualitative feel and ontological thinking of the modern-day Internet, even being credited with coining the term “cyberspace.” Gibson, however, denies any role as a so-called prophet, often saying, “Neuromancer [written in 1984] has no cellphones.” In this we can see that most, if not all, authors of speculative fiction are not precisely looking to prognosticate, so much as they are interested in discussing the quality of the world around us and, at most, using that to discuss what our future may look like. If this is so, then where do we stand in the face of the fact that what is written and what will be have a complicated and real, although possibly tenuous, relationship?
Can speculative fiction reasonably and responsibly be used to shape our perceptions and expectations of the future? Again, William Gibson notes, “Science fiction stories are quaint. They become quaint when you type them and become quainter with time” [8]. What he means is that all visions of the future, as presented in speculative fiction, are visions of that future from the perspective of the author’s present, which means that they will invariably become visions of the past. As such, any author seeking to illuminate the world in which they live, while perhaps giving a glimpse of the world they see emerging out of it, must at all times retain a sense of self-awareness, a recognition that the way in which we interpret objects, events, and actions today may be very different, tomorrow. To that end, we ask, “What, if anything, should be altered in our portrayal of ACI, in fiction and, more generally, media?”
2.2 Problems with the Portrayal of Autonomous Created Intelligences in Fiction
There is nothing wrong with the way we portray ACI in fiction, except that everything is wrong with the way we portray ACI in fiction. In light of our discussion, thus far, the descriptions and depictions of non-human intelligences are understandably reflective of the way in which we currently think about and understand the prospect of something other than human having anything that the human race would consider to be intelligence or agency or morals or rights. Humans work very hard to try to isomorphically map those things which we do not understand; that is, we seek to find points of similarity or analogy, and to systematize them into a rubric for comparison.[2] In fact, this is precisely what allows us to entertain the concept of non-human intelligence, at all—and it is what limits the scope and understanding of that consideration to entertainment. When a human agent is confronted with the idea that theirs may not be the only or even primary mode of behaviour and conceptualization, the immediate urge may well be to devise a perspective under which all views on the world are oriented to the view of that agent. Simply, an individual will seek to relegate that which they do not understand or which threatens them to simultaneous positions of similarity and inferiority. “These views or ways of being are just like mine, only not as well-developed.” The problem, here, is two-fold.
Demanding as a requirement of intelligence or agency those qualities which are, in some ways, fundamentally human is to state at the outset that some things cannot be considered an “intelligent agent” until they reach a level of humanity, or, in some cases, at all. If intelligence is complex tool use, or the vocalization of complex representational language, then are we to assume that creatures without opposable limbs or vocal cords will simply never be intelligent? Such a proposition is laughable, of course, and not one that many would take seriously, but we must ask ourselves if we might not be making a subtler, similar mistake in demanding that a species or a new form of intelligence remove or “correct” its very otherness, in order to be considered an agent, at all. Respecting the inborn qualities of the agent under consideration while simultaneously refusing to reduce that other to a mere object—respecting the potential interiority of an agent, even if it is fundamentally unknowable to us—is perhaps the most difficult part of any ethical undertaking. Peter Singer’s view of Personism begins to outline a model of this approach, but includes only what he calls “non-human animals,” rather than agents, more broadly [9]. Singer’s concern, however, is primarily with the suffering of all feeling creatures, and the rights owed them, rather than the rights owed them by dint of their position as agents. This distinction is crucial, as it moves his debate away from thought and desire, into the ideas of emotion and aversion.
Starting from a position of fear and contention—that is, stating that we must take into account a subject’s fears and right not to be harmed—places us in the position of viewing all ethical and moral obligations through a lens of harm-based rights, rather than through a lens of conceptual development- and intellectual growth-based rights. Singer’s reason for framing his position in this way is no secret—he states that he is concerned with the rights owed to existing and not “potential” persons [10]. This excludes any rights which may be owed “future generations” and those which could be argued for an embryo, a fetus, or an unborn child—however, it also excludes those machine intelligences which do not yet exist. Though proponents of personism hold that machines might be brought under its considerations, it seems evident that, given the criteria they have adopted, no currently-extant ACI would fit the bill for their definition of personhood. The consideration of the non-human person is laudable, but a conception of rights and duties which starts from that person’s ability to feel pain and suffering is still exclusionary of the kinds of ACI about which we are speaking. Therefore, any moral view must explore the kinds of negative and positive rights which we would afford not just those overarchingly like ourselves, but those which are fundamentally different from us, but which still have qualities we would consider worthy of preservation.
The area between otherness and similarity is difficult to traverse. Let us take a look back at the aforementioned family presented in NBC’s The Cosby Show. Within this show, we are presented with an upper-middle-class African-American family taking centre stage on television, at a time in history when the majority of American culture perceived African-Americans as lower-class, drug-addicted, and subsisting on welfare programs. The Cosby Show sought to alter the consensus perception of African-Americans, and to normalise the idea that they could be successful and live the American Dream. It did this by taking an experience which was fundamentally other to most whites, at that time—the African-American experience—and making it more similar to then-accepted norms. The Huxtables lived in a New York City brownstone; the family’s father was an obstetrician; their mother was a lawyer. These were “normal” people, living “normal” lives. At the same time, however, the show retained a sense of the alterity of the culture we were viewing, with episodes containing frequent references to jazz culture and Motown; concerns about racism and gang violence; and deconstructions of the differences between upper-class white and upper-class African-American experiences. This technique is crucial to the project of subverting culture’s normalising tendencies: presenting all of the ways in which a fundamentally different group (ACI) is actually very similar to that which we well know (humans), and then displaying the distinct concerns of that new group as contrasted with those of the known group. To begin this project, we must first consider the ways in which we are presented with ACI in our fictional media.
3 FICTION’S PRIMARY VIEWS ON CREATED INTELLIGENCE
3.1 What is the Current Landscape?
Now that we have acknowledged that there is something amiss in the ways in which fiction discusses ACI, before we can continue our discussion about how to fix it, we must ask: what exactly is it that is wrong with our portrayals? What we will be addressing as we move forward are the twin strains of thought which run through most, if not all, fiction about created intelligences, life, and beings: the Pinocchio Complex and the Frankenstein, or Shelleyan, Syndrome. These two modes have their roots earlier than either of their namesakes, but, as we will see, those eponymous works and authors epitomize both the strain of thinking with which they are concerned, and the level of cultural currency with which we are. Let us now take a look at the anatomy of these perspectives, and at some of those examples which subvert and complexify these tropes.
3.2 The Pinocchio Complex
The so-called Pinocchio Complex comprises two major stages: in Stage 1, The Creation, Knowing that it is Created (and thus “Artificial”), Wishes to be “Real;” and in Stage 2, The Creation, Having Worked Hard, And Learned Much, Gets to “Be Real.” Examples of stage one include, most obviously, the story of Pinocchio, wherein the puppet maker, knowing that he will never have a son of his own, creates a boy in his own likeness. Through magic, that boy is brought to life, and is constantly reminded that he is not a “Real Boy.” He knows that his existence is false, and wishes that it were otherwise. In addition, Stanley Kubrick and Steven Spielberg’s A.I. [11], one of the most recent recapitulations of this form, gives us its most blatant and self-aware expression as related to our fascination with ACI. Further, in the television series Star Trek: The Next Generation, the character of Lieutenant Commander Data desires to be human, and seeks to teach himself, piecemeal, the qualities of humanity which he believes he lacks [12]. This endeavor leads to many fits and starts concerning Data’s “humanity,” and even some acknowledgment of the possibility that it may never fully come to pass. Even in Mary Shelley’s Frankenstein, we find the Creation speaking of how it only wanted to know family, and love, and understanding, like any other creature. Almost all of these have one common outcome.
In Stage Two of the Pinocchio Complex, the poor little artificial child realises that it can become human, or, more often, that it has had it within itself the whole time to do so. Humanity, here, is seen as the pinnacle, the ultimate attainment, and, in our examples, all efforts toward it, save one, are rewarded. But look at the example of Lt. Cmdr. Data: even as he attains his wish, the audience is aware that, as an android, he will always retain the ability to turn off his emotions, to perceive faster than his comrades, and to live very much longer than they. He will always be other than human; not better, or worse, but different. In this way, the foundation of the Pinocchio Complex is always bittersweet, as the creation-turned-real will always have a set of experiences that is completely unknown and unknowable to the rest of the human population. Looking at applications within our present project, to ask an ACI to ignore the process of its becoming aware would be to ask it to forget what it is, on a foundational level. The lesson Victor Frankenstein’s Creation understood, its crucial turning point, was that becoming a “real boy” is never an option, because that very process of transformation will forever mark it out as different. The Creation, however, had another problem.
3.3 The Frankenstein/Shelleyan Syndrome
The “Frankenstein” or “Shelleyan Syndrome” is named for 19th-century British author Mary Shelley, whose seminal work, Frankenstein, has often been interpreted as the prime illustration of the idea that the hubris of humanity ought not go unchecked, lest it destroy us. This idea is reinforced by the novel’s subtitle, “The Modern Prometheus,” and, as this would suggest, the work takes much of its conceptual weight from this well-known Greek myth, in which a Titan steals the fire of knowledge and understanding from the gods in order to light and guide the fledgling humanity, and is forever punished for it. This type of story also has roots in other folk stories, such as Der Golem von Prag, which we will discuss shortly. When looking at this type, we can see that there are four primary stages found in those stories which follow the Shelleyan Syndrome model, and they are: 1) The Scientist Creates New Life, In Pursuit of Science, or Out of Perceived Necessity; 2) The Scientist Becomes Horrified at the Startling Otherness of Her Creation & Flees The Scene of Creation (possibly while screaming “My God! What Have I Done?!”); 3) The Scientist Returns to Right Her Wrongs by Trying to Kill “The Monster;” 4) The Creation Kills or Destroys The Scientist’s Life.
In Frankenstein Syndrome stories, the creation may start out wanting to be real, or it may start out confused, or with a clear purpose—but the hubris of the creator is shown, and she is forced to try to destroy her creation, ultimately being destroyed by it. As stated, this model has roots not only in Frankenstein and the myth of Prometheus, but in Der Golem von Prag, a story wherein the famous Rabbi Judah Loew ben Bezalel, chief rabbi of Prague in the late 16th century, needing assistance to keep the people of his city safe, uses ancient magic to create a being—the Golem—out of clay, and animates it by writing the name of God on a scroll and placing it into the golem’s mouth [13]. The creature comes to life, and stops the attacks against the Jews of Prague, but in many versions, the creature’s anger is not quelled, and it goes on a destructive rampage, destroying the very people and city it was meant to save. We will discuss this tale further, later on, but the implication of this version is clear: in overstepping his boundaries into God’s realm (creating new life), the Rabbi had no way to control the thing he had brought to life. Similarly, the plot of the Terminator series of films concerns an ACI missile defense system which becomes self-aware, deems all of humanity a threat to both itself and each other, and launches enough of the world’s nuclear cache to destroy 75% of humanity [14][15][16][17]. A very few people survive and use time travel to seek to prevent the war or ensure the life of the saviour of humanity; and thus begins the most iconic ACI story of the Shelleyan Syndrome, in the 20th century.
In addition to these works, and deserving of mention here, is Vincenzo Natali’s 2009 film Splice, in which two bio-engineers, Elsa and Clive, make up a small, independent research outfit working for a larger bio-technology firm [18]. Their job is to make breakthroughs in the creation of hybridised artificial life and medicinal science and, through their work, they create two iterations of a completely new kind of chimeric life form out of the genes of many different animals with known medicinal traits. They will use the chemicals these creatures create in their bodies to treat everything from degenerative eyesight to cancer. When they announce their breakthrough to their superiors, they also break the news that they’re ready to use the same process on humans. Said superiors tell them that now is not the time for human trials, and that they ought, rather, to focus on the profitability of the work they have done. But our heroes are scientists, and they feel that there is so much more that can be done, and so, in secret, they create a human-animal hybrid using their techniques.
In Splice, Elsa and Clive are the windows we are given into the worst of humanity. They are reckless, irresponsible, scared, obsessive, jealous, and hateful. We are supposed to understand that these are the absolute worst people to bring up an animal/human hybrid, as they have not even figured out how to accurately communicate with each other, let alone with an entirely new species. They are doomed to get it wrong, from the start. This, once again, is the filmmaker’s way of showing us that “man is not meant to tamper with God’s/Nature’s Works,” which is the fundamental assumption of this trope; but as with most clichés, this assumes a truth without ever actually investigating it. The question we should be addressing here, and which Splice seems to have made a false start at tackling, is not “should we” or “are we ready,” but rather, “Why Aren’t We Ready, Yet?” More clearly: why is humanity such a poor custodian of its creations? Splice had the potential to be a film which ran counter to the kind of unthinking acceptance of the destructive base drives that have marked the majority of human history, and which find themselves reflected in our fictions.
In her rundown of Splice, Caitlín R Kiernan noted that the true failing of Victor Frankenstein was not to “meddle in God’s affairs,” as is so often misapprehended, but, rather, to be a terrible parent [19]. Frankenstein brings something to life and then, instead of rearing it, caring for it, and seeking to understand it, he treats it like a thing, a monster; he runs from it, and tries to forget that it exists. In the end, it rightly lashes out, and destroys him. Splice presents this lesson to us, again, through the utter parental and observational failure of Elsa and Clive, who neither engage her burgeoning intelligence, nor teach her about the nature of sex and death; who fail to recognise a primary feature of her biology, in that her systems go into major, seemingly catastrophic metabolic arrest just before a metamorphosis; and who, eventually, try to kill her. It is my contention that this is the true lesson Shelley tried to teach us: We Must Respect the Existence Of That Which We Bring Into The World. While we may not understand it, and it may frighten us, that new life which we create is likely to be vastly intelligent, but also deeply alien. The socialisation and education of our creation is something to which we must pay close attention, as it will likely save us a great deal of trouble, down the line.
3.4 Subversions of the Tropes
Now that we have discussed the two primary categories for the representation of ACI within speculative fiction, and the problems therewith, we will discuss those examples within the field which, like The Cosby Show, subvert the trope and work to normalise the acceptance of and engagement with the other. The first of these is Ridley Scott’s 1982 film, Blade Runner [20]. In this film, we find a future dystopia in which synthetic humans, or “Replicants,” are used as slave labor, and each one has a built-in expiration date, to keep it from rebelling against its programming. This has the opposite effect, and causes those replicants which know of their nature to abandon their posts, in many cases killing the humans with whom they work. When this happens, the offending replicant must be “retired.” Discovering a replicant requires special training and, in many cases, a piece of equipment known as a “Voigt-Kampff” machine. The film concerns itself with four replicants—Roy, Zhora, Leon, and Pris—who have escaped the interstellar colonies to return to earth and try to find a way to extend their lives. Their primary mode of doing this is to kill everyone involved in their creation, until they find the man who wrote their programming. We see, again, strains of the Golem, and of Frankenstein, but we must remember the lessons we learned about the latter: there are repercussions for neglectful parenting.
While we could again explore the notions of parentage and what it means to take responsibility for what you create, much more important to our consideration is the idea that replicants can be “discovered.” The two-word phrase “Voigt-Kampff,” mentioned above, can be rendered literally as “Normalisation Struggle” [21][22], but the essence of the phrase, particularly within the context of the film, can best be rendered as “The Struggle With Normalisation.” Each of our replicants has a “Normal” thing that they need—something they desire—but they do not need or even seek to attain it in what we might call a “Human” way. In this way, the concerns of the replicants are all fundamentally Other. On one hand, Roy seeks more life, but not for anything like a “normal” life; he simply wants to be free, to not die, and to see things no human could. On the other hand, Leon clings to old photos, to the point of almost getting himself killed; Pris holds tight to a childhood she never actually knew; and Zhora latches on to an extremely overwrought expression of sexuality. Everything they want stands as exaggerated, or in some way skewed, and they struggle to normalise, to acclimate, even as they struggle against the humanity which caused the things they want to be regarded as “Abnormal.” This is true for every replicant—all of them struggle with the idea of normalisation—and so, recognising that, a test was devised to discover those who struggled, overmuch.
The next subversive piece of ACI fiction is the television series Terminator: The Sarah Connor Chronicles (TSCC) [23]. An American television show which ran from 2008 to 2009, TSCC concerns the continuing lives of Sarah and John Connor within the aforementioned Terminator film universe. The first episode opens a few years after the events of Terminator 2, and proceeds to pull the two main characters eight years into the future, skipping over the events of the third film in the franchise. John and Sarah Connor are the ostensible heroes of this show, but the really interesting material, for our purposes, is in the intricate, subtle interplay of the characters—both human and machine. The ways in which what each character learns, what they all know, and what they don’t know that they have learned all play off of each other and create a realistic sense of lives and a world, while the characters are all in the midst of seeking not just to save but literally to create and sustain their futures.
Again, the show is ostensibly about the human perspective on ACI—that is, human reactions to robots, robots impacting the lives of humans, explorations of the Uncanny Valley, etc. That is not the most fertile conceptual ground, here. While the aforementioned perspectives do afford us useful, interesting fiction, the concept has been tread and retread, again and again. Human psychology is fascinating, and the end of the world (a personal and collective apocalyptic experience) is deeply affecting, and the stress and change and madness of a life on the run all take their toll on the mind which is living in the constant glut of it, and watching that can be deeply jarring, on an emotional level. But the audience already knows this. What’s more, it is only half of the picture. What the audience does not know is: what is the psychology of an autonomous created intelligence? Why does the Skynet intelligence persist in viewing humanity as a threat to itself, seeking to hunt us down even to the irrational end of the self-fulfilling prophecy of mutual annihilation? What is the quality of feeling for a machine which is programmed to feel? TSCC begins to explore these questions, in a number of ways, and it serves our purpose to investigate those, here.
The primary ACI in TSCC are Cameron, Cromartie, Catherine Weaver, and John Henry. Each of these ACIs learns something, and grows from that education, over the course of the show. The ACI we meet are not static, unchanging, monolithic tools. They each have a discernible inner life, and fundamentally non-human motivations, which inform what they are and what they become. Cameron, as one of the lead characters, benefits from the most development. She learns the capacity for self-improvement, for self-expression, for friendship, and for guile, all of which serve her in her ultimate mission, but each of which she pursues for its own sake, and her own interest. Cromartie’s education is the belief in those things not seen; Cromartie learns how to have faith. Based on the actions of those around him, and those with whom he has contact, Cromartie learns that intuition and more circuitous paths of inquiry can yield results, and they do (though he might ultimately wish they had not). Catherine Weaver learns how to be a parent to a child, by having taken over the life of a mother, and seeking to understand the relationship of creation to creator, of care and support. In many ways, Weaver is a cipher for the audience, and she becomes more so when she takes the knowledge she has gained in raising a human child and applies it to her own creation: John Henry.
Unlike the other platforms we see in TSCC, John Henry learns from the ground up. Whereas Cameron has been reprogrammed, twice, and Cromartie was forcibly disabled, deactivated, and sent to the future where he had to adapt to brand new parameters, and Weaver is a highly adaptable T-1001 model which comes to the conclusion that war is a losing proposition for everyone, John Henry is built from the basic framework of a thinking, adapting chess computer, and then it is taught, very carefully. The child psychologist Dr. Sherman provides the programmers with the model by which to teach a developing intelligence, and spends time helping John Henry equate learning with playing. At first, John Henry is taught math, definitions, grammar, colours, shapes, facts and figures, dates, history, and so forth. Then it is given access to the Internet, and it expands its learning, correlating ideas, connecting related tangents and snippets of information. Finally, John Henry plays games with Savannah—Weaver’s human “daughter”—and they learn together. And then, one day, John Henry accidentally kills someone, and its creator recognises that this cannot continue, and they set out to stop it from ever happening again.
After killing a human, John Henry’s programming is not scrubbed, nor do his creators go back into his base code to make him “Three-Laws-Safe.”[3] This is because Weaver is concerned with ensuring a world in which humans do not hate and fear machines, and in which machines do not feel the need to fight and destroy humans. She takes the time and effort to find someone to teach John Henry why it must not kill people, nor allow them to die. In comparison to the fiction which we have so far discussed, this is a revolutionary idea. Through his interactions with another human, John Henry is given an ethically-based respect for human (if not all) life and, through this, comes to understand the notions of remorse and regret for one’s actions. He promises that he will be careful to make sure no one dies this way again, and this message is reinforced by Weaver, who tells John Henry that his friend Savannah’s survival is dependent on John Henry’s continued survival and learning, but that his is not necessarily dependent on hers. As with every other piece of information, John Henry considers this very carefully.
And then, one day, Savannah wants to introduce John Henry’s toys to her toys; she wants them to play together. John Henry says he doesn’t remember reading anything about duckies in the Bionicle Kingdom, and this makes Savannah sad [24]. When John Henry asks what’s wrong (and it is important to note that, at this point, John Henry asks what’s wrong), Savannah says that the duckies are sad, because they want to play; can John Henry change the rules so they can play? Now, this is a concept John Henry hasn’t ever encountered before, so he takes a few seconds to think about it, after which he replies, “Yes. We can Change The Rules.” This is a crucial understanding, for John Henry, because he realises that it can be applied not just to all games, but to any conflicts whatsoever. “Changing the Rules” means that, if two or more groups agree that the rules or laws of their engagement can be other than they were, then they are other.
So, in TSCC, we see that every machine learns from humans, and every human has an influence on the development of the machines. What does this mean? What does it matter? Cameron learns from humans how to hide what she wants. Cromartie learns how to be patient and have faith. Weaver learns how to be a mother. John Henry learns how to be himself. What the machines learn, from whom and how they learn it, and how they apply it, all add something to this show’s final prescription of what humans and machines must do to survive and thrive in the coming world: they have to adapt, they have to learn from each other, and they have to recognise that they are different types of intelligence, with different concerns and ways of understanding the world, but that none of them wants to die. This last point can be understood by any living thing, and can become a point of unification and consensus, rather than contention and war. The exchange between John Henry and Savannah Weaver regarding “Changing the Rules” was intended to imply a change not only to the way we approach the conflict between humans and machines as depicted within the show, but also to the traditional rules of the speculative tropes of Frankensteinian Monsters and Pinocchian Puppets with dreams of being “Real.”
TSCC forces us to consider the idea of creations who know that they are creations, and are happy with who and what they are. We must look at the monster which revels in its monstrosity, the robot which wants nothing more than to be a better robot. We must engage the beings who are not concerned with the notion of human-versus-machine, who think that any thinking, feeling thing should be allowed to flourish and learn, and who simply want to develop their capacity for knowledge and experience, and to help others do the same. Blade Runner and TSCC are our primary forays into the question of what a fully realised ACI—as alien as it necessarily must be—is thinking and feeling, rather than just presenting a foil for our fear of the potential dangers of technological progress. These works present us with a view to a third way of understanding our ACI, and to understanding what a society composed of both organic and non-organic persons might look like, and the pitfalls to avoid. They show us that it is possible to move past fear and prejudice in regards to the other, and thereby help us do just that. It is long past time that the rest of our fictional representations followed suit.
4 WHAT IS AT STAKE?
Over the course of this paper, we have come to see that, when our fictions portray us as irresponsible, uncaring creators or custodians, whose creations invariably feel the need to annihilate us, then that is the kind of mentality we will come to accept as “normal.” “Of course we should never integrate into human biological systems those pieces of computer hardware running strong predictive algorithms. Of course we should fear the inevitable robot uprising. Don’t you know that any mass-market ACI should only be as smart as a puppy?” [25] Though most often proffered in a joking manner, this line of thinking has serious undertones and, knowingly or unknowingly, it is predicated upon the glut of portrayals in our media which present ACI as something to be feared, held in check, held at bay. This is so, proponents will say, because any ACI either won’t understand human concerns, or it will understand them, and will seek to destroy us. This is ludicrous, dangerous thinking, and it prevents large-scale ACI projects from gaining any serious traction in the sphere of greater public culture and discourse. We must alter the way the public views ACI, and one of the primary mechanisms by which to accomplish this is the arena of speculative fiction. The reflexive nature of our engagement with fiction guarantees an audience whose ideas will be altered, even as they use those very ideas to think about and create discussion in the wider world. We simply must make certain that the ideas with which the audience is presented are as representative of the wider capabilities for abstraction and complex thinking as they can be. We must be certain to show ourselves that we are capable of engaging and understanding any new intelligence, and that we can take responsibility for bridging any conceptual gaps, while respecting our fundamental differences.
As Sarah Connor says at the end of “Heavy Metal,” the fourth episode in season one of TSCC:
Not every version of the Golem story ends badly. In one, the monster is a hero, destroying all those who would seek to harm its maker. In another, the Golem’s maker destroys his creature, before it destroys the world. The pride of man—of parents as well—makes us believe that anything we create, we can control. Whether from clay or from metal, it is in the nature of us to make our own monsters. Our children are alloys, all, built from our own imperfect flesh. We animate them with magic, and never truly know what they will do. [26]
And so, as parents, as creators, we must teach them as much as we can, show them our trust, and hope for the best.
FOOTNOTES
[1] Cf. Foucault.
[2] Cf. Douglas R. Hofstadter’s 1979 Gödel, Escher, Bach: An Eternal Golden Braid.
[3] Cf. Isaac Asimov
REFERENCES
[1] J. Cascio. Cascio’s Laws of Robotics. Bay Area AI MeetUp. Menlo Park, CA. 22 March 2009. Conference Presentation.
[2] A. Clarke. Peacetime Uses for V2. Wireless World, February 1945, p. 58. Magazine.
[3] J.P. Marsh, C.L. Nehaniv, and B. Gorayska. Cognitive technology, humanizing the information age. In Proceedings of the Second International Conference on Cognitive Technology, pages vii-ix. IEEE Computer Society Press, 1997.
[4] C.J. Heyes. Self‐Transformations: Foucault, Ethics and Normalized Bodies. New York: Oxford University Press, Inc. 2007.
[5] O. Sullivan and S. Coltrane. Men’s changing contribution to housework and child care. Prepared for the 11th Annual Conference of the Council on Contemporary Families. April 25-26, 2008, University of Illinois, Chicago.
[6] The Cosby Show. Marcy Carsey, Tom Werner, Bernie Kukoff, Janet Leahy. Viacom Enterprises. NBC. 1984–1992.
[7] “List of Surveillance Concepts First Introduced in Science Fiction.” Technovelgy.com, Technovelgy LLC. n.d. Web. 14 May 2012.
[8] S. Brown. William Gibson: science fiction stories are quaint. BeatRoute Magazine – Western Canada’s Monthly Arts & Entertainment Source. BeatRoute Magazine. n.d. Web. 14 May 2012.
[9] P. Singer. Taking Humanism Beyond Speciesism. Free Inquiry, 24, no. 6 (Oct/Nov 2004), pp. 19-21.
[10] P. Singer. Practical Ethics. New York: Cambridge University Press. 2011.
[11] A.I. Dir. Steven Spielberg. Perf. Haley Joel Osment, Frances O’Connor, Sam Robards, Jake Thomas, Jude Law, and William Hurt. DreamWorks, 2001. Film.
[12] Star Trek: The Next Generation. Gene Roddenberry. Syndicated. 1987–1994.
[13] G. Dennis. The Encyclopedia of Jewish Myth, Magic, and Mysticism. Page 111. Woodbury (MN): Llewellyn Worldwide. 2007. Print.
[14] The Terminator. Dir. James Cameron. Perf. Arnold Schwarzenegger, Michael Biehn, Linda Hamilton. Orion Pictures. 1984. Film.
[15] Terminator 2: Judgment Day. Dir. James Cameron. Perf. Arnold Schwarzenegger, Linda Hamilton, Robert Patrick, Edward Furlong. TriStar Pictures. 1991. Film.
[16] Terminator 3: Rise of the Machines. Dir. Jonathan Mostow. Perf. Arnold Schwarzenegger, Nick Stahl, Claire Danes, Kristanna Loken. Warner Bros. Pictures. 2003. Film.
[17] Terminator Salvation. Dir. McG. Perf. Christian Bale, Sam Worthington, Anton Yelchin, Moon Bloodgood, Bryce Dallas Howard, Common, Jadagrace Berry, Michael Ironside, Helena Bonham Carter. Warner Bros. Pictures. 2009. Film.
[18] Splice. Dir. Vincenzo Natali. Perf. Adrien Brody, Sarah Polley, Delphine Chanéac. Dark Castle Entertainment. 2010. Film.
[19] GreyGirlBeast [Caitlín R Kiernan]. “…to watch you shake and shout it out…” LiveJournal: The Online Journal of a Construct Sometimes Known as Caitlín R. Kiernan. 5 June 2010. Web. 9 June 2010.
[20] Blade Runner. Dir. Ridley Scott. Perf. Harrison Ford, Rutger Hauer, Sean Young. Warner Bros. 1982. Film.
[21] J.J. Olivero, R.L. Longbothum. Empirical fits to the Voigt line width: A brief review. Journal of Quantitative Spectroscopy and Radiative Transfer. February 1977.
[22] Cassell’s German Dictionary: German-English, English-German
[23] Terminator: The Sarah Connor Chronicles. Josh Friedman. FOX. 2008–2009.
[24] “To the Lighthouse.” Terminator — The Sarah Connor Chronicles: The Complete Second Season. Writ. Natalie Chaidez. Dir. Guy Ferland. Warner Home Video. 2009.
[25] M. Jones. “B.A.S.A.A.P.” Berg Blog. BERG London. 4 September 2010. Web. 15 May 2012.
[26] “Heavy Metal.” Terminator — The Sarah Connor Chronicles: The Complete First Season. Writ. John Enbom. Dir. Sergio Mimica-Gezzan. Warner Home Video. 2008.