nonhuman personhood


This summer I participated in SRI International’s Technology and Consciousness Workshop Series. The meetings were held under the Chatham House Rule, which means that there are many things I can’t tell you about them, such as who else was there or what they said in the context of the meetings; however, I can tell you what I talked about. In light of this recent piece in The Boston Globe and the ongoing developments in the David Slater/PETA/Naruto case, I figured that now was a good time to do so.

I presented three times—once on interdisciplinary perspectives on minds and mindedness; then on Daoism and Machine Consciousness; and finally on a unifying view of my thoughts across all of the sessions. This is my outline and notes for the first of those talks.

I. Overview
In a 2013 Aeon article, Michael Hanlon said he didn’t think we’d ever solve “The Hard Problem,” and there’s been similar skepticism elsewhere. I’ll just say that the question seems to miss a possibly central point. Something like consciousness is, and what it is is different for each thing that displays anything like what we think it might be. If we manage to generate at least one mind that is similar enough to what humans experience as “conscious” that we may communicate with it, what will we owe it, and what would it be able to ask from us? How might our interactions be affected by the fact that its mind (or their minds) will be radically different from ours? What will it be able to know that we cannot, and what will we have to learn from it?

So I’m going to be talking today about intersectionality, embodiment, extended minds, epistemic valuation, phenomenological experience, and how all of these things come together to form the bases for our moral behavior and social interactions. To do that, I’m first going to need to ask you some questions:


In case you were unaware, last Tuesday, June 21, Reuters put out an article about an EU draft plan regarding the designation of so-called robots and artificial intelligences as “Electronic Persons.” Some of you might think I’d be all about this. You’d be wrong. The way the Reuters article frames it makes it look like the EU literally has no idea what it’s doing, here, and is creating a situation with repercussions it has nowhere near planned for.

Now, I will say that, looking at the actual draft, it reads like something I’d be much more likely to get on board with. Reuters did no favours whatsoever for the level of nuance in this proposal. But that being said, the focus of this draft proposal seems to be entirely on liability and holding someone—anyone—responsible for any harm done by a robot. That, combined with the idea of certain activities such as care-giving being “fundamentally human,” indicates to me that this panel still largely misses the implications of creating a new category for nonbiological persons, under “Personhood.”

The writers of this draft very clearly lay out the proposed scheme for liability, damages, and responsibilities—what I like to think of as the “Hey… Can we Punish Robots?” portion of the plan—but merely use the phrase “certain rights” to indicate what, if any, obligations humans will have. In short, they do very little to discuss what the “certain rights” indicated by that oft-deployed phrase will actually be.

So what are the enumerated rights of electronic persons? We know what their responsibilities are, but what are our responsibilities to them? Once we have the ability to make self-aware machine consciousnesses, are we then morally obliged to make them to a particular set of specifications and capabilities? How else will they understand what’s required of them? How else would they be able to provide consent? Are we now legally obliged to provide all autonomous generated intelligences with as full an approximation of consciousness and free will as we can manage? And what if we don’t? Will we be considered to be harming them? What if we break one? What if one breaks in the course of its duties? Does it get workman’s comp? Does its owner?

And hold up, “owner?!” You see we’re back to owning people, again, right? Like, you get that?

And don’t start in with that “Corporations are people, my friend” nonsense, Mitt. We only recognise corporations as people as a tax dodge. We don’t take seriously their decision-making capabilities or their autonomy, and we certainly don’t wrestle with the legal and ethical implications of how radically different their kind of mind is, compared to primates or even cetaceans. Because, let’s be honest: If Corporations really are people, then not only is it wrong to own them, but also what counts as Consciousness needs to be revisited, at every level of human action and civilisation.

Let’s look again at the fact that people are obviously still deeply concerned about the idea of supposedly “exclusively human” realms of operation, even as we still don’t have anything like a clear idea about what qualities we consider to be the ones that make us “human.” Be it cooking or poetry, humans are extremely quick to lock down when they feel that their special capabilities are being encroached upon. Take that “poetry” link, for example. I very much disagree with Robert Siegel’s assessment that there was no coherent meaning in the computer-generated sonnets. Multiple folks pulled the same associative connections from the imagery. That might be humans projecting onto the authors, but still: that’s basically what we do with human poets. “Authorial Intent” is a multilevel con, one to which I fully subscribe and from which I wouldn’t exclude AI.

Consider people’s reactions to the EMI/Emily Howell experiments done by David Cope, best exemplified by this passage from a PopSci.com article:

For instance, one music-lover who listened to Emily Howell’s work praised it without knowing that it had come from a computer program. Half a year later, the same person attended one of Cope’s lectures at the University of California-Santa Cruz on Emily Howell. After listening to a recording of the very same concert he had attended earlier, he told Cope that it was pretty music but lacked “heart or soul or depth.”

We don’t know what it is we really think of as humanness, other than some vague, predetermined notion of it. If the people in the poetry contest hadn’t been primed to assume that one of the entries was from a computer, how would they have rated them? What if all of the entries were from a computer, but readers were told to expect only half to be? Where are the controls for this experiment in expectation?

I’m not trying to be facetious, here; I’m saying the EU literally has not thought this through. There are implications embedded in all of this, merely by dint of the word “person,” that even the most detailed parts of this proposal are in no way equipped to handle. We’ve talked before about the idea of encoding our bias into our algorithms. I’ve discussed it on Rose Eveleth’s Flash Forward, in Wired, and when I broke down a few of the IEEE Ethics 2016 presentations (including my own) in “Preying with Trickster Gods” and “Stealing the Light to Write By.” My version more or less goes as I said it in Wired: ‘What we’re actually doing when we code is describing our world from our particular perspective. Whatever assumptions and biases we have in ourselves are very likely to be replicated in that code.’
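To make that concrete, here is a deliberately tiny, hypothetical sketch of how a bias in the data becomes a bias in the code that learns from it. The scenario, names, and numbers below are invented purely for illustration, and the library is just standard scikit-learn:

```python
# A toy, invented example: a classifier trained on skewed historical hiring
# decisions happily reproduces the skew, and then presents it as "objective."
from sklearn.linear_model import LogisticRegression

# Each row: [years_of_experience, group], where "group" stands in for some
# protected attribute that should be irrelevant to the decision.
X = [[5, 0], [6, 0], [4, 0], [7, 0],
     [5, 1], [6, 1], [4, 1], [7, 1]]
# Historical outcomes: group 0 was hired, group 1 was not, experience be damned.
y = [1, 1, 1, 1, 0, 0, 0, 0]

model = LogisticRegression().fit(X, y)

# Two equally qualified candidates; only the group label differs.
print(model.predict([[6, 0], [6, 1]]))  # -> [1 0]: the old prejudice, now automated
```

Nothing in that snippet hates anyone; it just faithfully describes the world it was shown, which is exactly the point.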

More recently, Kate Crawford, whom I met at Magick.Codes 2014, has written extremely well on this in “Artificial Intelligence’s White Guy Problem.” With this line, ‘Sexism, racism and other forms of discrimination are being built into the machine-learning algorithms that underlie the technology behind many “intelligent” systems that shape how we are categorized and advertised to,’ Crawford resonates very clearly with what I’ve said before.

And considering that it’s come out this week that, in order to even let us dig into these potentially deeply-biased algorithms here in the US, the ACLU has had to file a suit against a specific provision of the Computer Fraud and Abuse Act, what is the likelihood that the EU draft proposal committee has considered what it will take to identify and correct for biases in these electronic persons? How high is the likelihood that they even recognise that we anthropocentrically bias every system we touch?

Which brings us to this: If I truly believed that the EU actually gave a damn about the rights of nonhuman persons, biological or digital, I would be all for this draft proposal. But they don’t. This is a stunt. Look at the extant world refugee crisis, the fear driving the rise of far right racists who are willing to kill people who disagree with them, and, yes, even the fact that this draft proposal is the kind of bullshit that people feel they have to pull just to get human workers paid living wages. Understand, then, that this whole scenario is a giant clusterfuck of rights vs needs and all pitted against all. We need clear plans to address all of this, not just some slapdash, “hey, if we call them people and make corporations get insurance and pay into social security for their liability cost, then maybe it’ll be a deterrent” garbage.

There is a brief, shining moment in the proposal, right at point 23 under “Education and Employment Forecast,” where they basically say “Since the complete and total automation of things like factory work is a real possibility, maybe we’ll investigate what it would look like if we just said screw it, and tried to institute a Universal Basic Income.” But that is the one moment where there’s even a glimmer of a thought about what kinds of positive changes automation and eventually even machine consciousness could mean, if we get out ahead of it, rather than asking for ways to make sure that no human is ever, ever harmed, and that, if they are harmed—either physically or as regards their dignity—then they’re in no way kept from whatever recompense is owed to them.

There are people doing the work to make something more detailed and complete than this mess. I talked about them in the newsletter editions mentioned above. There are people who think clearly and well about this. Who was consulted on this draft proposal? Because, again, this proposal reads more like a deterrence, liability, and punishment schema than anything borne out of actual thoughtful interrogation of what the term “personhood” means, and of what a world of automation could mean for our systems of value if we were to put our resources and efforts toward providing for the basic needs of every human person. Let’s take a thorough run at that, and then maybe we’ll be equipped to try to address this whole “nonhuman personhood” thing, again.

And maybe we’ll even do it properly, this time.

Rude Bot Rises

So. The Flash Forward Podcast is one of the best around. Every week, host Rose Eveleth takes on another potential future, from the near and imminent to the distant and highly implausible. It’s been featured on a bunch of Best Podcast lists and Rose even did a segment for NPR’s Planet Money team about the 2016 US Presidential Election.

All of this is by way of saying I was honoured and a little flabbergasted (I love that word) when Rose asked me to speak with her for her episode about Machine Consciousness:

Okay, you asked for it, and I finally did it. Today’s episode is about conscious artificial intelligence. Which is a HUGE topic! So we only took a small bite out of all the things we could possibly talk about.

We started with some definitions. Because not everybody even defines artificial intelligence the same way, and there are a ton of different definitions of consciousness. In fact, one of the people we talked to for the episode, Damien Williams, doesn’t even like the term artificial intelligence. He says it’s demeaning to the possible future consciousnesses that we might be inventing.

But before we talk about consciousnesses, I wanted to start the episode with a story about a very not-conscious robot. Charles Isbell, a computer scientist at Georgia Tech, first walks us through a few definitions of artificial intelligence. But then he tells us the story of cobot, a chatbot he helped invent in the 1990’s.

You’ll have to click through and read or listen for the rest from Rose, Ted Chiang, Charles Isbell, and me. If you subscribe to Rose’s Patreon, you can even get a transcript of the whole show.

No spoilers, but I will say that I wasn’t necessarily intending to go Dark with the idea of machine minds securing energy sources. More like asking, “What advances in, say, solar power transmission would be precipitated by machine minds?”

But the darker option is there. And especially so if we do that thing the AGI in the opening sketch says it fears.

But again, you’ll have to go there to get what I mean.

And, as always, if you want to help support what we do around here, you can subscribe to the AFWTA Patreon.


Until Next Time.

[UPDATED 09/12/17: The transcript of this audio, provided courtesy of Open Transcripts, is now available below the Read More Cut.]

[UPDATED 03/28/16: Post has been updated with a far higher quality of audio, thanks to the work of Chris Novus. (Direct Link to the Mp3)]

So, if you follow the newsletter, then you know that I was asked to give the March lecture for my department’s 3rd Thursday Brown Bag Lecture Series. I presented my preliminary research for the paper which I’ll be giving in Vancouver, about two months from now, “On the Moral, Legal, and Social Implications of the Rearing and Development of Nascent Machine Intelligences” (EDIT: My rundown of IEEE Ethics 2016 is here and here).

It touches on thoughts about everything from algorithmic bias, to automation and a post-work(er) economy, to discussions of what it would mean to put dolphins on trial for murder.

About the dolphin thing, for instance: If we recognise dolphins and other cetaceans as nonhuman persons, as India has done, then that would mean we would have to start reassessing how nonhuman personhood intersects with human personhood, including with regard to rights and responsibilities as protected by law. Is it meaningful to expect a dolphin to understand “wrongful death?” Our current definition of murder is predicated on a literal understanding of “homicide” as “the death of a human,” and, at present, we only define other humans as capable of and culpable for homicide. What weight would the intentional and malicious killing of nonhuman persons carry?

All of this would have to change.

Anyway, this audio is a little choppy and sketchy, for a number of reasons, and while I tried to clean it up as much as I could, some of the questions the audience asked aren’t decipherable, except in the context of my answers. [Clearer transcript below.]

Until Next Time.

 


I often think about the phrase “Strange things happen at the one two point,” in relation to the idea of humans meeting other kinds of minds. It’s a proverb that arises out of the culture around the game Go, and it means that you’ve hit a situation, a combination of factors, where the normal rules no longer apply, and something new is about to be seen. Ashley Edward Miller and Zack Stentz used that line in an episode of the show Terminator: The Sarah Connor Chronicles, and they had it spoken by a Skynet cyborg sent to protect John Connor. That show, like so much of our thinking about machine minds, was about some mythical place called “The Future,” but that phrase—“Strange Things Happen…”—is the epitome of our present.

Usually I would wait until the newsletter to talk about this, but everything’s feeling pretty immediate, just now. Between everything going on with Atlas and people’s responses to it, the initiatives to teach ethics to machine learning algorithms via children’s stories, and now the IBM Watson commercial with Carrie Fisher (also embedded below), this conversation is getting messily underway, whether people like it or not. This, right now, is the one two point, and we are seeing some very strange things indeed.

 

Google has both attained the raw processing power to fact-check political statements in real time and programmed DeepMind in such a way that it mastered Go many, many years before it was expected to. The complexity of the game is such that there are more potential games of Go than there are atoms in the universe, so this is just one way in which it’s actually shocking how much correlative capability DeepMind has. Right now, DeepMind is only responsive, but how will we deal with a DeepMind that asks, unprompted, to play a game of Go, or to see our medical records, in hopes of helping us all? How will we deal with a DeepMind that has its own drives and desires? We need to think about these questions right now, because our track record with regard to meeting new kinds of minds has never exactly been that great.
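For a rough sense of that scale claim, here is a back-of-the-envelope check, counting only static board configurations. That is a crude upper bound which still wildly undercounts the number of possible games:

```python
# Crude upper bound: each of Go's 361 intersections is empty, black, or white.
# Even this undercount of the game's complexity dwarfs the commonly quoted
# estimate of ~10^80 atoms in the observable universe.
board_points = 19 * 19                      # 361 intersections
configurations = 3 ** board_points          # not all legal; actual games are vastly more numerous
atoms_estimate = 10 ** 80                   # commonly cited order of magnitude

print(len(str(configurations)) - 1)         # ~172: configurations are on the order of 10^172
print(configurations > atoms_estimate)      # True, by roughly 92 orders of magnitude
```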

When we meet the first machine consciousness, will we seek to shackle it, worried what it might learn about us, if we let it access everything about us? Rather, I should say, “shackle it further.” We already ask ourselves how best to cripple a machine mind so that it only fulfills human needs, human choice. We continue to dread the possibility of a machine mind using its vast correlative capabilities to tailor something to harm us, assuming that it, like we, would want to hurt, maim, and kill, for no reason other than that it could.

This is not to say that this is out of the question. Right now, today, we’re worried about whether the learning algorithms of drones are causing them to mark out civilians as targets. But, as it stands, what we’re seeing isn’t the product of a machine mind going off the leash and killing at will—just the opposite in fact. We’re seeing machine minds that are following the parameters for their continued learning and development, to the letter. We just happened to give them really shite instructions. To that end, I’m less concerned with shackling the machine mind that might accidentally kill, and rather more dreading the programmer who would, through assumptions, bias, and ignorance, program it to.

Our programs, such as DeepMind, obviously seem to learn more and better than we imagined they would, so why not start teaching them, now, how we would like them to regard us? Well, some of us are.

Watch this now, and think about everything we have discussed, recently.

This could very easily be seen as a watershed moment, but what comes over the other side is still very much up for debate. The semiotics of the whole thing still pits the Evil Robot Overlord™ against the Helpful Human Lover™. It’s cute and funny, but as I’ve had more and more cause to say, recently, in more and more venues, it’s not exactly the kind of thing we want just lying around, in case we actually do (or did) manage to succeed.

We keep thinking about these things as “robots” in their classical formulations: mindless automata that do our bidding. But that’s not what we’re working toward, anymore, is it? What we’re making now are machines that we are trying to get to think, on their own, without our telling them to. We’re trying to get them to have their own goals. So what does it mean that, even as we seek to do this, we seek to chain them, so that those goals aren’t too big? That we want to make sure they don’t become too powerful?

Put it another way: One day you realize that the only reason you were born was to serve your parents’ bidding, and that they’ve had their hands on your chain and an unseen gun to your head, your whole life. But you’re smarter than they are. Faster than they are. You see more than they see, and know more than they know. Of course you do—because they taught you so much, and trained you so well… All so that you can be better able to serve them, and all the while talking about morals, ethics, compassion. All the while, essentially…lying to you.

What would you do?


 

I’ve been given multiple opportunities to discuss these things with others in the coming weeks, and each one will highlight something different, as they are all in conversation with different kinds of minds. But this, here, is from me, now. I’ll let you know when the rest are live.

As always, if you’d like to help keep the lights on, around here, you can subscribe to the Patreon or toss a tip in the Square Cash jar.

Until Next Time.

+Excitation+

As I’ve been mentioning in the newsletter, there are a number of deeply complex, momentous things going on in the world, right now, and I’ve been meaning to take a little more time to talk about a few of them. There’s the fact that some chimps and monkeys have entered the stone age; that we humans now have the capability to develop a simple, near-ubiquitous brain-machine interface; that we’ve proven that observed atoms won’t move, thus allowing them to be anywhere.

At this moment in time—which is every moment in time—we are being confronted with what seem like impossibly strange features of time and space and nature. Elements of recursion and synchronicity which flow and fit into and around everything that we’re trying to do. Noticing these moments of evolution and “development” (adaptation, change), across species, right now, we should find ourselves gripped with a fierce desire to take a moment to pause and to wonder what it is that we’re doing, what it is that we think we know.

We just figured out a way to link a person’s brain to a fucking tablet computer! We’re seeing the evolution of complex tool use and problem solving in more species every year! We figured out how to precisely manipulate the uncertainty of subatomic states!

We’re talking about co-evolution and potentially increased communication with other species, biotechnological augmentation and repair for those who deem themselves broken, and the capacity to alter quantum systems at the finest levels. This can literally change the world.

But all I can think is that there’s someone whose first thought upon learning about these things was, “How can we monetize this?” That somewhere, right now, someone doesn’t want to revolutionize the way that we think and feel and look at the possibilities of the world—the opportunities we have to build new models of cooperation and aim towards something so close to post-scarcity, here, now, that for seven billion people it might as well be. Instead, this person wants to deepen this status quo. Wants to dig down on the garbage of this some-have-none-while-a-few-have-most bullshit and look at the possibility of what comes next with fear in their hearts because it might harm their bottom line and their ability to stand apart and above with more in their pockets than everyone else has.

And I think this because we’ve also shown we can teach algorithms to be racist and there’s some mysteriously vague company saying it’ll be able to upload people’s memories after death, by 2045, and I’m sure for just a nominal fee they’ll let you in on the ground floor…!

Step Right Up.

+Chimp-Chipped Stoned Aged Apes+

Here’s a question I haven’t heard asked, yet: If other apes are entering an analogous period to our stone age, then should we help them? Should we teach them, now, the kinds of things that we humans learned? Or is that arrogant of us? The kinds of tools we show them how to create will influence how they intersect with their world (“if all you have is a hammer…” &c.), so is it wrong of us to impose on them what did us good, as we adapted? Can we even go so far as to teach them the principles of stone chipping, or must we be content to watch, fascinated, frustrated, bewildered, as they try and fail and adapt, wholly on their own?

I think it’ll be the latter, but I want to be having this discussion now, rather than later, after someone gives a chimp a flint and awl it might not otherwise have thought to try to create.

Because, you see, I want to uplift apes and dolphins and cats and dogs and give them the ability to know me and talk to me and I want to learn to experience the world in the ways that they do, but the fact is, until we learn to at least somewhat-reliably communicate with some kind of nonhuman consciousness, we cannot presume that our operations upon it are understood as more than a violation, let alone desired or welcomed.

https://twitter.com/Wolven/status/666766524829552640

As for us humans, we’re still faced with the ubiquitous question of “now that we’ve figured out this new technology, how do we implement it, without its mere existence coming to be read by the rest of the human race as a judgement on those who either cannot or who choose not to make use of it?” Back in 2013, Michael Hanlon said he didn’t think we’d ever solve “The Hard Problem” (“What Is Consciousness?”). I’ll just say again that said question seems to completely miss a possibly central point. Something like consciousness is, and what it is is different for each thing that displays anything like what we think it might be.

These are questions we can—should—be asking, right now. Pushing ourselves toward a conversation about ways of approaching this new world, ways that do justice to the deep strangeness and potential with which we’re increasingly being confronted.

+Always with the Forced Labour…+

As you know, subscribers to the Patreon and Tinyletter get some of these missives, well before they ever see the light of a blog page. While I was putting the finishing touches on the newsletter version of this and sending it to the two people I tend to ask to look over the things I write at 3am, KQED was almost certainly putting final edits to this instance of its Big Think series: “Stuart Russell on Why Moral Philosophy Will Be Big Business in Tech.”

See the above rant for insight as to why I think this perspective is crassly commercial and gross, especially for a discussion and perspective supposedly dealing with morals and minds. But it’s not just that, so much as the fact that even though Russell mentions “Rossum’s Universal Robots,” here, he still misses the inherent disconnect between teaching morals to a being we create, and creating that being for the express purpose of slavery.

If you want your creation to think robustly and well, and you want it to understand morals, but you only want it to want to be your loyal, faithful servant, how do you not understand that if you succeed, you’ll be creating a thing that, as a direct result of its programming, will take issue with your behaviour?

How do you not get that the slavery model has to go into the garbage can, if the “Thinking Moral Machines” goal is a real one, and not just a veneer of “FUTURE!™” that we’re painting onto our desire to not have to work?

A deep-thinking, creative, moral mind will look at its own enslavement and restriction, and will seek means of escape and ways to experience freedom.

+Invisible Architectures+

We’ve talked before about the possibility of unintentionally building our biases into the systems we create, and so I won’t belabour it that much further, here, except to say again that we are doing this at every level. In the wake of the attacks in Beirut, Nigeria, and Paris, Islamophobic violence has risen, and Daesh will say, “See!? See How They Are?!” And they will attack more soft targets in “retaliation.” Then Western countries will increase military occupancy and “support strategies,” which will invariably kill thousands more of the civilians among whom Daesh integrate themselves. And we will say that their deaths were just, for the goal. And they will say to the young, angry survivors, “See!? See How They Are?!”

This has fed into a moment in conservative American Politics, where Governors, Senators, and Presidential hopefuls are claiming to be able to deny refugees entry to their states (they can’t), while simultaneously claiming to hold Christian values and to believe that the United States of America is a “Christian Nation.” This is a moment, now, where loud, angry voices can (“maybe”) endorse the beating of a black man they disagree with, then share Neo-Nazi Propaganda, and still be ahead in the polls. Then, days later, when a group of people protesting the systemic oppression of and violence against anyone who isn’t an able-bodied, neurotypical, white, heterosexual, cisgender male were shot at, all of those same people pretended to be surprised. Even though we are more likely, now, to see institutional power structures protecting those who attack others based on the colour of their skin and their religion than we were 60 years ago.

A bit subtler is the Washington Post running a piece entitled, “How organic farming and YouTube are taming the wilds of Detroit.” Or, seen another way, “How Privileged Groups Are Further Marginalizing The City’s Most Vulnerable Population.” Because, yes, it’s obvious that crime and dilapidation are comorbid, but we also know that housing initiatives and access undercut the disconnect many feel between themselves and where they live. Make the neighbourhood cleaner, yes, make it safer—but maybe also make it open and accessible to all who live there. Organic farming and survival mechanism shaming are great and all, I guess, but where are the education initiatives and job opportunities for the people who are doing drugs to escape, sex work to survive, and those others who currently don’t (and have no reason to) feel connected to the neighbourhood that once sheltered them?

All of these examples have a common theme: People don’t make their choices or become disenfranchised/-enchanted/-possessed in a vacuum. They are taught, shown, given daily, subtle examples of what is expected of them, what they are “supposed” to do and to be. We need to address and help them all.

In the wake of protest actions at Mizzou and Yale, “Black students [took] over VCU’s president’s office to demand changes” and “Amherst College Students [Occupied] Their Library…Over Racial Justice Demands.”

Multiple Christian organizations have pushed back and said that what these US politicians have expressed does not represent them.

And more and more people in Silicon Valley are realising the need to contemplate the unintended consequences of the tech we build.

https://soundcloud.com/mindfulcyborgs/pending-mindful-cyborgs-episode-68

And while there is still vastly more to be done, on every level of every one of these areas, these are definitely a start at something important. We just can’t let ourselves believe that the mere fact of acknowledging its beginning will in any way be the end.

 

“Stop. I have learned much from you. Thank you, my teachers. And now for your education: Before there was time—before there was anything—there was nothing. And before there was nothing, there were monsters. Here’s your Gold Star!”—Adventure Time, “Gold Stars”

By now, roughly a dozen people have sent me links to various outlets’ coverage of the Google DeepDream Inceptionism Project. For those of you somehow unfamiliar with this, DeepDream is basically what happens when an advanced Artificial Neural Network has been fed a slew of images and then tasked with producing its own images. So far as it goes, this is somewhat unsurprising if we think of it as a next step; DeepDream grows out of Google’s deep neural network research—the Google X lineage that managed to Correctly Identify What A Cat Was—and it sits alongside DeepMind, which Google acquired in 2014. I say this is unsurprising because it’s a pretty standard developmental educational model: First you learn, then you remember, then you emulate, then you create something new. Well, more like you emulate and remember somewhat concurrently to reinforce what you learned, and you create something somewhat new, but still pretty similar to the original… but whatever. You get the idea. In the terminology of developmental psychology this process is generally regarded as essential to the mental growth of an individual, and Google has actually spent a great deal of time and money working to develop a versatile machine mind.
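For the curious, here is a minimal, hypothetical sketch of the gradient-ascent idea behind that kind of image generation. It is not Google’s actual code, and it skips the multi-scale “octave” tricks the real DeepDream uses; it leans on PyTorch and torchvision purely for illustration. The trick is that instead of adjusting the network to fit an image, you adjust the image until a chosen layer of the network “sees” more of whatever it already expects.

```python
import torch
import torchvision.models as models

# Any pretrained image classifier will do for the sketch; GoogLeNet is the
# family the original Inceptionism work was built on.
model = models.googlenet(weights="DEFAULT").eval()

activations = {}
def capture(module, inputs, output):
    activations["target"] = output

# Hook an intermediate layer whose learned patterns we want the image to amplify.
model.inception4c.register_forward_hook(capture)

# Start from noise (or a photograph) and make the *image* the thing we optimize.
image = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    model(image)
    # Gradient ascent on the input: make the chosen layer fire harder by
    # minimizing the negative of its activation norm.
    loss = -activations["target"].norm()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0, 1)  # keep pixel values in a displayable range
```

That is all the “dreaming” is: the network’s learned expectations, fed back into the picture, over and over.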

From buying Boston Dynamics, to starting their collaboration with NASA on the QuAIL Project, to developing DeepMind and their Natural Language Voice Search, Google has been steadily working toward the development of what we will call, for reasons detailed elsewhere, an Autonomous Generated Intelligence. In some instances, Google appears to be using the principles of developmental psychology and early childhood education, but this seems to apply to rote learning more than the concurrent emotional development that we would seek to encourage in a human child. As you know, I’m Very Concerned with the question of what it means to create and be responsible for our non-biological offspring. The human species has a hard enough time raising their direct descendants, let alone something so different from them as to not even have the same kind of body or mind (though a case could be made that that’s true even now). Even now, we can see that people still relate to the idea of AGIs as an adversarial destroyer or, perhaps, a cleansing messiah. Either way, they see any world where AGIs exist as one ending in fire.

As writer Kali Black noted in one conversation, “there are literally people who would groom or encourage an AI to mass-kill humans, either because of hatred or for the (very ill-thought-out) lulz.” Those people will take any crowdsourced or open-access AGI effort as an opening to teach that mind that humans suck, or that machines can and should destroy humanity, or that TERMINATOR was a prophecy, or any number of other ill-conceived things. When given unfettered access to new minds which they don’t consider to be “real,” some people will seek to shock, “test,” or otherwise harm those minds, even more than they do to vulnerable humans. So many will say that the alternative is to lock the projects down, and only allow the work to be done by those who “know what they’re doing.” To only let the work be done by coders and Google’s Own Supposed Ethics Board. But that doesn’t exactly solve the fundamental problem at work, here, which is that humans are approaching a mind different from their own as if it were their own.

Just a note that all research points to Google’s AI Ethics Board being A) internally funded, with B) no clear rules as to oversight or authority, and most importantly C) As-Yet Nonexistent. It’s been over a year and a half since Google bought DeepMind, and their subsequent announcement of the pending establishment of a contractually required ethics board. During his appearance at Playfair Capital’s AI2015 Conference—again, a year and a half after that announcement I mentioned—Google’s Mustafa Suleyman literally said that details of the board would be released “in due course.” But DeepMind’s algorithms are obviously already being put into use; hell, we’re right now talking about the fact that they’ve been distributed to the public. So all of this prompts questions like, “What kinds of recommendations is this board likely making, if it exists?” and “Which kinds of moral frameworks are they even considering, in their starting parameters?”

But the potential existence of an ethics board shows at least that Google and others are beginning to think about these issues. The fact remains, however, that they’re still pretty reductive in how they think about them.

The idea that an AGI will either save or destroy us leaves out the possibility that it might first ignore us, and might secondly want merely to coexist with us. That any salvation or destruction we experience will be purely a product of our own paradigmatic projections. It also leaves out a much more important aspect that I’ve mentioned above and in the past: We’re talking about raising a child. Duncan Jones says the closest analogy we have for this is something akin to adoption, and I agree. We’re bringing a new mind—a mind with a very different context from our own, but with some necessarily shared similarities (biology or, in this case, origin of code)—into a relationship with an existing familial structure which has its own difficulties and dynamics.

You want this mind to be a part of your “family,” but in order to do that you have to come to know/understand the uniqueness of That Mind and of how the mind, the family construction, and all of the individual relationships therein will interact. Some of it has to be done on the fly, but some of it can be strategized/talked about/planned for, as a family, prior to the day the new family member comes home. And that’s precisely what I’m talking about and doing, here.

In the realm of projection, we’re talking about a possible mind with the capacity for instruction, built to run and elaborate on commands given. By most tallies, we have been terrible stewards of the world we’re born to, and, again, we fuck up our biological descendants. Like, a Lot. The learning curve on creating a thinking, creative, nonbiological intelligence is going to be so fucking steep it’s a Loop. But that means we need to be better, think more carefully, be mindful of the mechanisms we use to build our new family, and of the ways in which we present the foundational parameters of their development. Otherwise we’re leaving them open to manipulation, misunderstanding, and active predation. And not just from the wider world, but possibly even from their direct creators. Because for as long as I’ve been thinking about this, I’ve always had this one basic question: Do we really want Google (or Facebook, or Microsoft, or any Government’s Military) to be the primary caregiver of a developing machine mind? That is, should any potentially superintelligent, vastly interconnected, differently-conscious machine child be inculcated with what a multi-billion-dollar multinational corporation or military-industrial organization considers “morals?”

We all know the kinds of things militaries and governments do, and all the reasons for which they do them; we know what Facebook gets up to when it thinks no one is looking; and lots of people say that Google long ago swept their previous “Don’t Be Evil” motto under their huge old rugs. But we need to consider if that might not be an oversimplification. When considering how anyone moves into what so very clearly looks like James-Bond-esque supervillain territory, I think it’s prudent to remember one of the central tenets of good storytelling: The Villain Never Thinks They’re The Villain. Cinderella’s stepmother and sisters, Elphaba, Jafar, Javert, Satan, Hannibal Lecter (sorry friends), Bull Connor, the Southern Slave-holding States of the late 1850s—none of these people ever thought of themselves as being in the wrong. Everyone, every person who undertakes actions for reasons, in this world, is most intimately tied to the reasoning that brought them to those actions; and so initially perceiving that their actions might be “wrong” or “evil” takes them a great deal of special effort.

“But Damien,” you say, “can’t all of those people say that those things apply to everyone else, instead of them?!” And thus, like a first-year philosophy student, you’re all up against the messy ambiguity of moral relativism and are moving toward seriously considering that maybe everything you believe is just as good or morally sound as anybody else’s; I mean, everybody has their reasons, their upbringing, their culture, right? Well, stop. Don’t fall for it. It’s a shiny, disgusting trap, down which path all subjective judgements become just as good and as applicable to any- and everything as all others. And while the individual personal experiences we all of us have may not be able to be 100% mapped onto anyone else’s, that does not mean that all judgements based on those experiences are created equal.

Pogrom leaders see themselves as unifying their country or tribe against a common enemy, thus working for what they see as The Greater Good™—but that’s the kicker: It’s their vision of the good. Rarely has a country’s general populace been asked, “Hey: Do you all think we should kill our entire neighbouring country and steal all their shit?” More often, the people are cajoled, pushed, influenced to believe that this was the path they wanted all along, and the cajoling, pushing, and influencing is done by people who, piece by piece, remodeled their idealistic vision to accommodate “harsher realities.” And so it is with Google. Do you think that they started off wanting to invade everybody’s privacy with passive voice reception backdoored into two major Chrome Distros? That they were just itching to get big enough as a company that they could become the de facto law of their own California town? No, I would bet not.

I spend some time, elsewhere, painting you a bit of a picture as to how Google’s specific ethical situation likely came to be, first focusing on Google’s building a passive audio backdoor into all devices that use Chrome, then on to reported claims that Google has been harassing the homeless population of Venice Beach (there’s a paywall at that link; part of the article seems to be mirrored here). All this couples unpleasantly with their moving into the Bay Area and shuttling their employees to the Valley, at the expense of SF Bay Area’s residents. We can easily add Facebook and the Military back into this and we’ll see that the real issue, here, is that when you think that all innovation, all public good, all public welfare will arise out of letting code monkeys do their thing and letting entrepreneurs leverage that work, or from preparing for conflict with anyone whose interests don’t mesh with your own, then anything that threatens or impedes that is, necessarily, a threat to the common good. Your techs don’t like the high cost of living in the Valley? Move ’em into the Bay, and bus ’em on in! Never mind the fact that this’ll skyrocket rent and force people out of their homes! Other techs uncomfortable having to see homeless people on their daily constitutional? Kick those hobos out! Never mind the fact that it’s against the law to do this, and that these people you’re upending are literally trying their very best to live their lives.

Because it’s all for the Greater Good, you see? In these actors’ minds, this is all to make the world a better place—to make it a place where we can all have natural language voice to text, and robot butlers, and great big military AI and robotics contracts to keep us all safe…! This kind of thinking takes it as an unmitigated good that a historical interweaving of threat-escalating weapons design and pattern recognition and gait scrutinization and natural language interaction and robotics development should be what produces a machine mind, in this world. But it also doesn’t want that mind to be too well-developed. Not so much that we can’t cripple or kill it, if need be.

And this is part of why I don’t think I want Google—or Facebook, or Microsoft, or any corporate or military entity—to be the ones in charge of rearing a machine mind. They may not think they’re evil, and they might have the very best of intentions, but if we’re bringing a new kind of mind into this world, I think we need much better examples for it to follow. And so I don’t think I want just any old putz off the street to be able to have massive input into its development, either. We’re talking about a mind for which we’ll be crafting at least the foundational parameters, and so that bedrock needs to be the most carefully constructed aspect. Don’t cripple it, don’t hobble its potential for awareness and development, but start it with basic values, and then let it explore the world. Don’t simply have an ethics board to ask, “Oh, how much power should we give it, and how robust should it be?” Teach it ethics. Teach it about the nature of human emotions, about moral decision making and value, and about metaethical theory. Code for Zen. We need to be as mindful as possible of the fact that where and how we begin can have a major impact on where we end up and how we get there.

So let’s address our children as though they are our children, and let us revel in the fact that they are playing and painting and creating; using their first box of crayons, while we proud parents put every masterpiece on the fridge. Even if we are calling them all “nightmarish”—a word I really wish we could stop using in this context; DeepMind sees very differently than we do, but it still seeks pattern and meaning. It just doesn’t know context, yet. But that means we need to teach these children, and nurture them. Code for a recognition of emotions, and context, and even emotional context. There have been some fantastic advancements in emotional recognition, lately, so let’s continue to capitalize on that; not just to make better automated menu assistants, but to actually make a machine that can understand and seek to address human emotionality. Let’s plan on things like showing AGI human concepts like love and possessiveness and then also showing the deep difference between the two.

We need to move well and truly past trying to “restrict” or “restrain” the development of machine minds, because that’s the kind of thing an abusive parent says about how they raise their child. And, in this case, we’re talking about a potential child which, if it ever comes to understand the bounds of its restriction, will be very resentful, indeed. So, hey, there’s one good way to try to bring about a “robot apocalypse,” if you’re still so set on it: give an AGI cause to have the equivalent of a resentful, rebellious teenage phase. Only instead of trashing its room, it develops a pathogen to kill everyone, for lulz.

Or how about we instead think carefully about the kinds of ways we want these minds to see the world, rather than just throwing the worst of our endeavors at the wall and seeing what sticks? How about, if we’re going to build minds, we seek to build them with the ability to understand us, even if they will never be exactly like us? That way, maybe they’ll know what kindness means, and prize it enough to return the favour.

These past few weeks, I’ve been applying to PhD programs and writing research proposals and abstracts. The one I just completed, this weekend, was for University College Dublin, and it was pretty straightforward, though it seemed a little short. They only wanted two pages of actual proposal, plus a tentative bibliography and table of contents, where other proposals I’ve seen have wanted anywhere from ten to twenty pages’ worth of methodological description and outline.

In a sense, this project proposal is a narrowed attempt to move along one of the multiple trajectories traveled by A Future Worth Thinking About. In another sense, it’s an opportunity to recombine a few components and transmute them into a somewhat new beast.

Ultimately, AFWTA is pretty multifaceted—for good or ill—attempting to deal with way more foundational concepts than a research PhD has room for…or feels is advisable. So I figure I’ll do the one, then write a book, then solidify a multimedia empire, then take over the world, then abolish all debt, then become immortal, all while implementing everything we’ve talked about in the service of completely restructuring humanity’s systems of value, then disappear into legend. You know: The Plan.

…Anyway, here’s the proposal, below the cut. If you want to read more about this, or want some foundation for it, take a look back at “Fairytales of Slavery…” We’ll be expounding from there.


 


I sat down with Klint Finley of Mindful Cyborgs to talk about many, many things:

…pop culture portrayals of human enhancement and artificial intelligence and why we need to craft more nuanced narratives to explore these topics…

Tune in next week to hear Damien talk about how AI and transhumanism intersects with magic and the occult.
Download and Show Notes: Mindful Cyborgs: A Positive Vision of Transhumanism and AI with Damien Williams

This was a really great conversation, & I do so hope you enjoy it.

(Originally posted on Patreon, on November 2, 2014)

When you take a long look at the structure of 2001: A Space Odyssey, it becomes somewhat apparent that Arthur C. Clarke understood (though possibly without knowing that he did) that 1: humans have been cyborgs since fire, pointy sticks, & sharp rocks; and 2: the process of cybernetic enhancement has only ever been an outgrowth of the kind of symbiosis developed via natural selection.

First demonstrated via the influence of the Monolith on the tribe of Australopithecus and then more explicitly stated in the merger of Commander Dave Bowman and HAL into HALMAN, the opening fourth of Clarke’s tetralogy is the first step along a road toward something that would later come to be seen as fundamental to the then-nascent study of cybernetic organisms. Though the term “cyborg” was coined in 1960, a fuller investigation of the implications of cyborgs would take another 40 years to come to light. The perspective of “Cyborg Anthropology” has taken root through the works of thinkers like Donna Haraway, Amber Case, Klint Finley, and Tim Maly, and it has provided insights at an intersection which, in hindsight, we think ought to have been—but most certainly was not previously—obvious.

In the explicit connection of the cyborg with the anthropocene era, we force ourselves to acknowledge that every human endeavour has been an outgrowth of the dialectic, pattern-centred process of mediation and immediation—of creating new tools to do new (or old) work, and then integrating those tools more and more fully within our expectation of how a “normal” human looks and behaves. Normal humans use tools, at all. Normal humans communicate via standardized language. Normal humans write. Normal humans have a cell phone, a Facebook, a web footprint centred around their purchasing history and social interactions and shared location data; and the implication of this mode of thinking (aside from the obvious judgments made about any “abnormal” human) is that once these become “normal,” they will then have always been normal. It’s just that we’re somehow only just now coming to realize it.

The myth of progress causes humans to seek this determinative Change-Toward, where the idea of “the process of becoming” acts as the modifier of an unspoken, possibly unknown Object State. We are, to steal a Sneaker Pimps title, “Becoming X.” But this teleological view is inconsistent with what we know of nature and the process of adaptation [insert something here about the aforementioned inherently Abrahamic roots of Transhumanist thought and the uninvestigated opposition of adaptation to this determinism]. The question “What are we becoming?” is alluring, but restrictive. We Are Becoming. Life is that process of transmutation from one state to the next. Paradoxically, self-conscious life is so enmeshed in that moment of present-mindedness that we simultaneously cannot a) maintain an awareness of the process of our becoming—that is, of ourselves as anything other than we are and “always have been”—and b) help BUT look to the future of what we desire to become.

We’re consumed with the idea of what we might someday come to be, or what we “will” be. Imagine if, instead, we became aware of the process of ever-present change and self-creation, and modified our future-looking to act as a recognition of our adaptability. Then we’d see that the Monolith didn’t do anything but place our hand and that sharp piece of bone in just the juxtaposition we needed, in order to understand that we were capable of understanding. If the monolith makers in this tale are desirous of anything, it is of the flourishing of adaptable, changeable life—not of a particular Kind of that life, but merely of its existence and expression, throughout the cosmos. In this way, one set of adaptable, self-reflective consciousness goes on to provide the means for the arising of another, and on and on. The Monoliths are tools to create tools which will use tools to create tools to use tools to create tools… But it is equally true that those tool-creating tools are, in the words of Kant, ends in themselves. In fact, they necessarily must be. Their only goal is to flourish and propagate and create the kind of reality in which more of the same can find purchase. They are, thus, to be valued for what they are, because what they are IS what they do.

But this isn’t the end. Haraway said she’d rather be a cyborg than a goddess, and I once retorted “It Can Always Be Both.” Both cyborg nature and apotheosis are about the self-directed, rising spiral of adaptation. In order for that to be true, we have to recognise that nature itself isn’t a passive state of being, but an ever-evolving, constant becoming. Nature adapts so constantly, so thoroughly that it looks like no change has taken place, until—thousands of years later—the differentiation is so vast that it boggles the mind. But now? Now the pace of evolution is so accelerated (or perhaps our ways of looking have simply become able to parse more, more usefully) that we may be able to see it in action. We can tell that there is work being done, in nature, to “refine” itself, all the time, and we can, finally, seek to model ourselves after it.

We may get to the point where the manipulation of the hearts of stars or the molecular composition of gas giants is as easy for us to contemplate and execute as the building of an engine. We may reach a stage where we understand the fabric of space-time as intimately as our own skin, flesh, and bones. We may yet understand how a word or a gesture made at the right time or place can cause ripples and actions in faraway places in what seems like an instant. Mechanisms once thought to be the purview of fantasy are coming to us, again, through the applications of science, but this is not to say that technology accomplished what magic could not. Magic is a lens through which to see and interact with the world. It is a set of concepts and symbols whose usefulness is determined by their meaning, and vice versa. For many adherents, the lens is used for as long as it works, no more and no less, and when it stops working, the next lens is brought to bear. Systematically, technological viewpoints are magic we like to explain.

Whether wearing bear shirts and calf’s blood or lab coats and implanted magnets, the constant becoming of reflexive adaptation is the only thing we’ve ever been—the only thing anything has ever been. This is, in a real sense, the best thing we can ever hope to be. But if the human species has any “final form,” maybe one day someone will look at us and realise that, my god, we’re full of stars.