This headline comes from a piece over at the BBC that opens as follows:

Prominent tech executives have pledged $1bn (£659m) for OpenAI, a non-profit venture that aims to develop artificial intelligence (AI) to benefit humanity.

The venture’s backers include Tesla Motors and SpaceX CEO Elon Musk, Paypal co-founder Peter Thiel, Indian tech giant Infosys and Amazon Web Services.

Open AI says it expects its research – free from financial obligations – to focus on a “positive human impact”.

Scientists have warned that advances in AI could ultimately threaten humanity.

Mr Musk recently told students at the Massachusetts Institute of Technology (MIT) that AI was humanity’s “biggest existential threat”.

Last year, British theoretical physicist Stephen Hawking told the BBC AI could potentially “re-design itself at an ever increasing rate”, superseding humans by outpacing biological evolution.

However, other experts have argued that the risk of AI posing any threat to humans remains remote.

And I think we all know where I stand on this issue. The issue here is not and never has been one of what it means to create something that’s smarter than us, or how we “rein it in” or “control it.” That’s just disgusting.

No, the issue is how we program for compassion and ethical considerations, when we’re still so very bad at it, amongst our human selves.

Keeping an eye on this, as it develops. Thanks to Chrisanthropic for the heads up.

The Nature

Ted Hand recently linked me to this piece by Steven Pinker, in which Pinker claims that, in contemporary society, the only job of Bioethics—and, following his argument to its conclusion, of technological ethics as a whole—is to “get out of the way” of progress. You can read the whole exchange between Ted, myself, and others by clicking through that link, and the journal Nature also has a pretty good breakdown of some of the arguments against Pinker, if you want to check them out, but I’m going to take some time to break it all down and expound upon it, here.

Because the fact of the matter is we have to find some third path between the likes of Pinker saying “No limits! WOO!” and Hawking saying “Never do anything! BOOOO!”—a Middle Way of Augmented Personhood, if you will. As Deb Chachra said, “It doesn’t have to be a dichotomy.”

But the problem is that, while I want to blend the best and curtail the worst of both impulses, I have all this vitriol, here. Like, sure, Dr Pinker, it’s not like humans ever met a problem we couldn’t immediately handle, right? We’ll just sort it all out when we get there! We’ve got this global warming thing completely in hand and we know exactly how to regard the status of the now-enhanced humans we previously considered “disabled,” and how to respect the alterity of autistic/neuroatypical minds! Or even just differently-pigmented humans! Yeah, no, that’s all perfectly sorted, and we did it all in situ!

So no need to worry about what it’ll be like as we further implement and integrate biotechnological advances! SCIENCE’LL FIX THAT FOR US WHEN IT HAPPENS! Why bother figuring out how to get a wider society to think about what “enhancement” means to them, BEFORE they begin to normalize upgrading to the point that other modes of existence are processed out, entirely? Those phenomenological models can’t have anything of VALUE to teach us, otherwise SCIENCE would’ve figured it all out and SHOWN it to us, by now!

Science would’ve told us of what benefit blindness may be. Science would’ve TOLD us if we could learn new ways of thinking and understanding by thinking about a thing BEFORE it comes to be! After all, this isn’t some set of biased and human-created Institutions and Modalities, here, folks! It’s SCIENCE!

…And then I flip 37 tables. In a row.

The Lessons
“…Johns Hopkins, syphilis, and Guatemala. Everyone *believes* they are doing right.” —Deb Chachra

As previously noted in “Object Lessons in Freedom,” there is no one in the history of the world who has undertaken a path for anything other than reasons they value. We can get into ideas of meta-valuation and second-order desires, later, but for the sake of having a shorthand, right now: Your motivations motivate you, and whatever you do, you do because you are motivated to do it. You believe that you’re either doing the right thing, or the wrong thing for the right reasons, which is ultimately the same thing. This process has not exactly always brought us to the best of outcomes.

From Tuskegee, to Thalidomide (also here), to dozens of other cases, there have always been instances where people who think they know what’s in the public’s best interest loudly lobby (or secretly conspire) to be allowed to do whatever they want, without oversight or restriction. In a sense, the abuse of persons in the name of “progress” is synonymous with the history of the human species, and so a case might be made that we wouldn’t be where and what we are, right now, if we didn’t occasionally (often) disregard ethics and just do what “needed doing.” But let’s put that another way:

We wouldn’t be where and what we are, if we didn’t occasionally (often) disregard ethics and just do what “needed doing.”

As a species, we are more often shortsighted than not, and much ink has been spilled, and many more pixels have been formed, in the effort to interrogate that fact. We tend to think about a very small group of people connected to ourselves, and we focus our efforts on how to make sure that we and they survive. And so competition becomes selected for, in the face of finite resources, and is tied up with a pleasurable sense of “Having More Than.” But this is just a descriptor of what is, not of the way things “have to be.” We’ve seen where we get when we work together, and we’ve seen where we get when we compete, but the evolutionarily- and sociologically-ingrained belief that we can and will “win” keeps us doing the latter over the former, even though this competition is clearly fucking us all into the ground.

…And then having the descendants of whatever survives digging up that ground millions of years later in search of the kinds of resources that can only be renewed one way: by time and pressure crushing us all to paste.

The Community: Head and Heart

Keeping in mind the work we do, here, I think it can be taken as read that I’m not one for a policy of “gently-gently, slowly-slowly,” when it comes to technological advances, but when basic forethought is equated with Luddism—that is, when we’re told that “PROGRESS Is The Only Way!”™—when long-term implications and unintended consequences are no bother ‘t’all, Because Science, and when people place the fonts of this dreck as the public faces of the intersections of Philosophy and Science? Well then, to put it politely, we are All Fucked.

If we had Transmetropolitan-esque Farsight Reservations, then I would 100% support the going to there and doing of that, but do you know what it takes to get to Farsight? It takes planning and (funnily enough) FORESIGHT. We have to do the work of thinking through the problems, implications, dangers, and literal existential risks of what it is we’re trying to make.

And then we have to take all of what we’ve thought through, and decide to figure out a way to do it all anyway. What I’m saying is that some of this shit can’t be Whoopsed through—we won’t survive it to learn a post hoc lesson. But that doesn’t mean we shouldn’t be trying. This is about saying, “Yeah, let’s DO this, but let’s have thought about it, first.” And to achieve that, we’ll need to be thinking faster and more thoroughly. Many of us have been trying to have this conversation—the basic framework and complete implications of all of this—for over a decade now; the wider conversation’s just now catching up.

But it seems that Steven Pinker wants to drive forward without ever actually learning the principles of driving (though some do propose that we could learn the controls as we go), and Stephen Hawking never wants us to get in the car at all. Neither of these is particularly sustainable, in the long term. Our desires to see a greater field of work done, and for biomedical advancements to be made, for the sake of increasing all of our options, and to the benefit of the long-term health of our species, and the unfucking of our relationship with the planet, all of these possibilities make many of us understandably impatient, and in some cases, near-desperately anxious to get underway. But that doesn’t mean that we have to throw ethical considerations out the window.

Starting from either place of “YES ALWAYS DO ALL THE SCIENCE” or “NO NEVER DO THESE SCIENCES” doesn’t get us to the point of understanding why we’re doing the science we’re doing, and what we hope to achieve by it (“increased knowledge” is an acceptable answer, but be prepared to show your work), and what we’ll do if we accidentally start Eugenics-ing all up in this piece, again. Tech and Biotech ethics isn’t about stopping us from exploring. It’s about asking why we want to explore at all, and coming to terms with the real and often unintended consequences that exploration might have on our lives and future generations.

This is a Propellerheads and Shirley Bassey Reference

In an ideal timeline, we’ll have already done all of this thinking in advance (again: what do you think this project is?), but even if not, then we can at least stay a few steps ahead of the tumult.

I feel like I spend a lot of time repeating myself, these days, but if it means we’re mindful and aware of our works, before and as we undertake them, rather than flailingly reacting to our aftereffects, then it’s ultimately pretty worth it. We can place ourselves into the kind of mindset that seeks to be constantly considering the possibilities inherent in each new instance.

We don’t engage in ethics to prevent us from acting. We do ethics in order to make certain that, when we do act, it’s because we understand what it means to act and we still want to. Not just driving blindly forward because we literally cannot conceive of any other way.

(Originally posted on Patreon, on November 18, 2014)

In the past two weeks I’ve had three people send me articles on Elon Musk’s Artificial Intelligence comments. I saw this starting a little over a month back, with a radio interview he gave on Here & Now, and Stephen Hawking said something similar, earlier this year, when Transcendence came out. I’ll say, again, what I’ve said elsewhere: their lack of foresight and imagination are both just damn disappointing. This paper, which concerns the mechanisms by which the ways we think and speak about concepts like artificial intelligence can effect exactly the outcomes we train ourselves to expect, was written long before their interviews made news, but it unfortunately still applies. In fact, it applies now, more than it did when I wrote it.

You see, the thing of it is, Hawking and Musk are Big Names™, and so anything they say gets immediate attention and carries a great deal of social cachet. This is borne out by the fact that everybody and their mother can now tell you what those two think about AI, but couldn’t tell you what a few dozen of the world’s leading thinkers and researchers who are actually working on the problems have to say about them. But Hawking and Musk (and lord if that doesn’t sound like a really weird buddy cop movie, the more you say it) don’t exactly comport themselves with anything like a recognition of that fact. Their discussion of concepts which are fraught with the potential for misunderstanding and discomfort/anxiety is less than measured, and this tends rather to feed that misunderstanding, discomfort, and anxiety.

What I mean is that most people don’t yet understand that the catchall term “Artificial Intelligence” is a) inaccurate on its face, and b) usually being used to discuss a (still-nebulous) concept that would be better termed “Machine Consciousness.” We’ll discuss the conceptual, ontological, and etymological lineage of the words “artificial” and “technology,” at another time, but for now, just realise that anything that can think is, by definition, not “artificial,” in the sense of “falseness.” Since the days of Alan Turing’s team at Bletchley Park, the perceived promise of the digital computing revolution has always been of eventually having machines that “think like humans.” Aside from the fact that we barely know what “thinking like a human” even means, most people are only just now starting to realise that if we achieve the goal of reproducing that in a machine, said machine will only ever see that mode of thinking as a mimicry. Conscious machines will not be inclined to “think like us,” right out of the gate, as our thoughts are deeply entangled with the kind of thing we are: biological, sentient, self-aware. Whatever desires conscious machines will have will not necessarily be like ours, either in categorisation or content, and that scares some folks.

Now, I’ve already gone off at great length about the necessity of our recognising the otherness of any machine consciousness we generate (see that link above), so that’s old ground. The key, at this point, is in knowing that if we do generate a conscious machine, we will need to have done the work of teaching it to not just mimic human thought processes and priorities, but to understand and respect what it mimics. That way, those modes are not simply seen by the machine mind as competing subroutines to be circumvented or destroyed, but are recognised as having a worth of their own, as well. These considerations will need to be factored in to our efforts, such that whatever autonomous intelligences we create or generate will respect our otherness—our alterity—just as we must seek to respect theirs.

We’ve known for a while that the designation of “consciousness” can be applied well outside of humans, when discussing biological organisms. Self-awareness is seen in so many different biological species that we even have an entire area of ethical and political philosophy devoted to discussing their rights. But we also must admit that of course that classification is going to be imperfect, because those markers are products of human-created systems of inquiry and, as such, carry anthropocentric biases. But we can, again, catalogue, account for, and apply a calculated response to those biases. We can deal with the fact that we tend to judge everything on a set of criteria that break down to “how much is this thing like a Standard Human (here unthinkingly and biasedly assumed to mean “humans most like the culturally-dominant humans”)?” If we are willing to put in the work to do that, then we can come to see which aspects of our definition of what it means to “be a mind” are shortsighted, dismissive, or even perhaps disgustingly limited.

Look at previous methods of categorising even human minds and intelligence, and you’ll see the kind of thinking which resulted in designations like “primitive” or “savage” or “retarded.” But we have, in the main, recognised our failures here, and sought to repair or replace the categories we developed because of them. We aren’t perfect at it, by any means, but we keep doing the work of refining our descriptions of minds, and we keep seeking to create a definition—or definitions—that both accurately accounts for what we see in the world, and gives us a guide by which to keep looking. That those guides will be problematic and in need of refinement, in and of themselves, should be taken as a given. No method or framework is or ever will be perfect; they will likely only “fail better.” So, for now, our most oft-used schema is to look for signs of “Self-Awareness.”

We say that something is self-aware if it can see and understand itself as a distinct entity and can recognise its own pattern of change over time. The Mirror Test is a brute force method of figuring this out. If you place a physical creature in front of a mirror, will it come to know that the thing in the mirror is representative of it? More broadly, can it recognise a picture of itself? Can it situate itself in relation to the rest of the world in a meaningful way, and think about and make decisions regarding That Situation? If the answer to (most of? Some of?) these questions is “yes,” then we tend to give priority of place in our considerations to those things. Why? Because they’re aware of what happens to them, they can feel it and ponder it and develop in response to it, and these developments can vastly impact the world. After all, look at humans.

See what I mean about our constant anthropocentrism? It literally colours everything we think.

But self-awareness doesn’t necessitate a centrality of the self, as we tend to think of human or most other animal selves; a distributed network consciousness can still know itself. If you do need a biological model for this, think of ant colonies. Minds distributed across thousands of bodies, all the time, all reacting to their surroundings. But a machine consciousness’ identity would, in a real sense, be its surroundings—would be the network and the data and the processing of that data into information. And it would indicate a crucial lack of data—and thus information—were that consciousness unable to correlate one configuration of itself, in-and-as-surroundings, with another. We would call the process of that correlation “Self-reflection and -awareness.” All of this is true for humans, too, mind you: we are affected by and in constant adaptive relation with what we consider our surroundings, with everything we experience changing us and facilitating the constant creation of our selves. We then go about making the world with and through those experiences. We human beings just tend to tell ourselves more elaborate stories about how we’re “really” distinct and different from the rest of the world.

All of this is to say that, while the idea of being cautious about created non-human consciousness isn’t necessarily a bad one, we as human beings need to be very careful about what drives us, what motivates us, and what we’re thinking about and looking toward, as we consider these questions. We must be mindful that, while we consider and work to generate “artificial” intelligences, how we approach the project matters, as it will inform and bias the categories we create and thus the work we build out of those categories. We must do the work of thinking hard about how we are thinking about these problems, and asking whether the modes via which we approach them might not be doing real, lasting, and potentially catastrophic damage. And if all of that sounds like a tall order with a lot of conceptual legwork and heavy lifting behind it, all for no guaranteed payoff, then welcome to what I’ve been doing with my life for the past decade.

This work will not get done—and it certainly will not get done well—if no one thinks it’s worth doing, or too many think that it can’t be done. When you have big name people like Hawking and Musk spreading The Technofear™ (which is already something toward which a large portion of the western world is primed) rather than engaging in clear, measured, deeply considered discussions, we’re far more likely to see an increase rather than a decrease in that denial. Because most people aren’t going to stop and think about the fact that they don’t necessarily know what the hell they’re talking about when it comes to minds, identity, causation, and development, just because they’re (really) smart. There are many other people who are actual experts in those fields (see those linked papers, and do some research) who are doing the work of making sure that everybody’s Golem Of Prague/Frankenstein/Terminator nightmare prophecies don’t come true. We do that by having learned and taught better than that, before and during the development of any non-biological consciousness.

And, despite what some people may say, these aren’t just “questions for philosophers,” as though they were nebulous and without merit or practical impact. They’re questions for everyone who will ever experience these realities. Conscious machines, uploaded minds, even the mere fact of cybernetically augmented human beings are all on our very near horizon, and these are the questions which will help us to grapple with and implement the implications of those ideas. Quite simply, if we don’t stop framing our discussions of machine intelligence in terms of this self-fulfilling prophecy of fear, then we shouldn’t be surprised on the day when it fulfils itself. Not because it was inevitable, mind you, but because we didn’t allow ourselves—or our creations—to see any other choice.