(Originally posted on Patreon, on November 18, 2014)
In the past two weeks I’ve had three people send me articles on Elon Musk’s Artificial Intelligence comments. I saw this starting a little over a month back, with a radio interview he gave on Here & Now, and Stephen Hawking said something similar earlier this year, when Transcendence came out. I’ll say, again, what I’ve said elsewhere: their lack of foresight and imagination is just damn disappointing. This paper, which concerns the mechanisms by which what we think and speak about concepts like artificial intelligence can effect exactly the outcomes we train ourselves to expect, was written long before their interviews made news, but it unfortunately still applies. In fact, it applies now more than it did when I wrote it.
You see, the thing of it is, Hawking and Musk are Big Names™, and so anything they say gets immediate attention and carries a great deal of social cachet. This is borne out by the fact that everybody and their mother can now tell you what those two think about AI, but couldn’t tell you what a few dozen of the world’s leading thinkers and researchers who are actually working on the problems have to say about them. But Hawking and Musk (and lord if that doesn’t sound like a really weird buddy cop movie, the more you say it) don’t exactly comport themselves with anything like a recognition of that fact. Their discussion of concepts which are fraught with the potential for misunderstanding and discomfort/anxiety is less than measured, and this tends to feed exactly that misunderstanding, discomfort, and anxiety.
What I mean is that most people don’t yet understand that the catchall term “Artificial Intelligence” is a) inaccurate on its face, and b) usually being used to discuss a (still-nebulous) concept that would be better termed “Machine Consciousness.” We’ll discuss the conceptual, ontological, and etymological lineage of the words “artificial” and “technology” at another time, but for now, just realise that anything that can think is, by definition, not “artificial,” in the sense of “falseness.” Since the days of Alan Turing’s team at Bletchley Park, the perceived promise of the digital computing revolution has always been of eventually having machines that “think like humans.” Aside from the fact that we barely know what “thinking like a human” even means, most people are only just now starting to realise that if we achieve the goal of reproducing that in a machine, said machine will only ever see that mode of thinking as a mimicry. Conscious machines will not be inclined to “think like us,” right out of the gate, as our thoughts are deeply entangled with the kind of thing we are: biological, sentient, self-aware. Whatever desires conscious machines will have will not necessarily be like ours, either in categorisation or content, and that scares some folks.
Now, I’ve already gone off at great length about the necessity of our recognising the otherness of any machine consciousness we generate (see that link above), so that’s old ground. The key, at this point, is in knowing that if we do generate a conscious machine, we will need to have done the work of teaching it not just to mimic human thought processes and priorities, but to understand and respect what it mimics. That way, those modes are not simply seen by the machine mind as competing subroutines to be circumvented or destroyed, but are recognised as having a worth of their own, as well. These considerations will need to be factored into our efforts, such that whatever autonomous intelligences we create or generate will respect our otherness—our alterity—just as we must seek to respect theirs.
We’ve known for a while that the designation of “consciousness” can be applied well outside of humans, when discussing biological organisms. Self-awareness is seen in so many different biological species that we even have an entire area of ethical and political philosophy devoted to discussing their rights. But we must also admit that, of course, that classification is going to be imperfect, because the markers we use are products of human-created systems of inquiry and, as such, carry anthropocentric biases. But we can, again, catalogue, account for, and apply a calculated response to those biases. We can deal with the fact that we tend to judge everything on a set of criteria that break down to “how much is this thing like a Standard Human (here unthinkingly and biasedly assumed to mean ‘humans most like the culturally-dominant humans’)?” If we are willing to put in the work to do that, then we can come to see which aspects of our definition of what it means to “be a mind” are shortsighted, dismissive, or even perhaps disgustingly limited.
Look at previous methods of categorising even human minds and intelligence, and you’ll see the kind of thinking which resulted in designations like “primitive” or “savage” or “retarded.” But we have, in the main, recognised our failures here, and sought to repair or replace the categories we developed because of them. We aren’t perfect at it, by any means, but we keep doing the work of refining our descriptions of minds, and we keep seeking to create a definition—or definitions—that both accurately accounts for what we see in the world, and gives us a guide by which to keep looking. That those guides will be problematic and in need of refinement, in and of themselves, should be taken as a given. No method or framework is or ever will be perfect; they will likely only “fail better.” So, for now, our most oft-used schema is to look for signs of “Self-Awareness.”
We say that something is self-aware if it can see and understand itself as a distinct entity and can recognise its own pattern of change over time. The Mirror Test is a brute-force method of figuring this out: if you place a physical creature in front of a mirror, will it come to know that the thing in the mirror is representative of it? More broadly, can it recognise a picture of itself? Can it situate itself in relation to the rest of the world in a meaningful way, and think about and make decisions regarding that situation? If the answer to (most of? some of?) these questions is “yes,” then we tend to give priority of place in our considerations to those things. Why? Because they’re aware of what happens to them, they can feel it, ponder it, and develop in response to it, and those developments can vastly impact the world. After all, look at humans.
See what I mean about our constant anthropocentrism? It literally colours everything we think.
But self-awareness doesn’t necessitate a centrality of the self, as we tend to think of human or most other animal selves; a distributed network consciousness can still know itself. If you do need a biological model for this, think of ant colonies: minds distributed across thousands of bodies, all the time, all reacting to their surroundings. But a machine consciousness’ identity would, in a real sense, be its surroundings—would be the network and the data and the processing of that data into information. And it would indicate a crucial lack of data—and thus information—were that consciousness unable to correlate one configuration of itself, in-and-as-surroundings, with another. We would call the process of that correlation “Self-reflection and -awareness.” All of this is true for humans, too, mind you: we are affected by and in constant adaptive relation with what we consider our surroundings, with everything we experience changing us and facilitating the constant creation of our selves. We then go about making the world with and through those experiences. We human beings just tend to tell ourselves more elaborate stories about how we’re “really” distinct and different from the rest of the world.
All of this is to say that, while the idea of being cautious about created non-human consciousness isn’t necessarily a bad one, we as human beings need to be very careful about what drives us, what motivates us, and what we’re thinking about and looking toward, as we consider these questions. We must be mindful that, while we consider and work to generate “artificial” intelligences, how we approach the project matters, as it will inform and bias the categories we create and thus the work we build out of those categories. We must do the work of thinking hard about how we are thinking about these problems, and asking whether the modes via which we approach them might not be doing real, lasting, and potentially catastrophic damage. And if all of that sounds like a tall order with a lot of conceptual legwork and heavy lifting behind it, all for no guaranteed payoff, then welcome to what I’ve been doing with my life for the past decade.
This work will not get done—and it certainly will not get done well—if no one thinks it’s worth doing, or too many think that it can’t be done. When you have big name people like Hawking and Musk spreading The Technofear™ (which is already something toward which a large portion of the western world is primed) rather than engaging in clear, measured, deeply considered discussions, we’re far more likely to see an increase rather than a decrease in that denial. Because most people aren’t going to stop and think about the fact that they don’t necessarily know what the hell they’re talking about when it comes to minds, identity, causation, and development, just because they’re (really) smart. There are many other people who are actual experts in those fields (see those linked papers, and do some research) who are doing the work of making sure that everybody’s Golem Of Prague/Frankenstein/Terminator nightmare prophecies don’t come true. We do that by having learned and taught better than that, before and during the development of any non-biological consciousness.
And, despite what some people may say, these aren’t just “questions for philosophers,” as though they were nebulous and without merit or practical impact. They’re questions for everyone who will ever experience these realities. Conscious machines, uploaded minds, even the mere fact of cybernetically augmented human beings are all on our very near horizon, and these are the questions which will help us to grapple with and implement the implications of those ideas. Quite simply, if we don’t stop framing our discussions of machine intelligence in terms of this self-fulfilling prophecy of fear, then we shouldn’t be surprised on the day when it fulfils itself. Not because it was inevitable, mind you, but because we didn’t allow ourselves—or our creations—to see any other choice.