autonomous generated intelligence

All posts tagged autonomous generated intelligence

I spoke with Klint Finley over at WIRED about Amazon, Facebook, Google, IBM, and Microsoft’s new joint ethics and oversight venture, which they’ve dubbed the “Partnership on Artificial Intelligence to Benefit People and Society.” They held a joint press briefing, today, in which Yann LeCun, Facebook’s director of AI, and Mustafa Suleyman, the head of applied AI at DeepMind, discussed what it was that this new group would be doing out in the world. From the Article:

Creating a dialogue beyond the rather small world of AI researchers, LeCun says, will be crucial. We’ve already seen a chat bot spout racist phrases it learned on Twitter, an AI beauty contest decide that black people are less attractive than white people and a system that rates the risk of someone committing a crime that appears to be biased against black people. If a more diverse set of eyes are looking at AI before it reaches the public, the thinking goes, these kinds of thing can be avoided.

The rub is that, even if this group can agree on a set of ethical principles–something that will be hard to do in a large group with many stakeholders—it won’t really have a way to ensure those ideals are put into practice. Although one of the organization’s tenets is “Opposing development and use of AI technologies that would violate international conventions or human rights,” Mustafa Suleyman, the head of applied AI at DeepMind, says that enforcement is not the objective of the organization.

This isn’t the first time I’ve talked to Klint about the intricate interplay of machine intelligence, ethics, and algorithmic bias; we discussed it earlier this year, for WIRED’s AI Issue. It’s interesting to see how much attention this topic has drawn in just a few short months, and while I’m trepidatious about the potential implementations, as I note in the piece, I’m really quite glad that more and more people are willing to have this discussion at all.

To see my comments and read the rest of the article, click through, here: “Tech Giants Team Up to Keep AI From Getting Out of Hand”

Here’s the direct link to my paper ‘The Metaphysical Cyborg’ from Laval Virtual 2013. Here’s the abstract:

“In this brief essay, we discuss the nature of the kinds of conceptual changes which will be necessary to bridge the divide between humanity and machine intelligences. From cultural shifts to biotechnological integration, the project of accepting robotic agents into our lives has not been an easy one, and more changes will be required before the majority of human societies are willing and able to allow for the reality of truly robust machine intelligences operating within our daily lives. Here we discuss a number of the questions, hurdles, challenges, and potential pitfalls to this project, including examples from popular media which will allow us to better grasp the effects of these concepts in the general populace.”

The link will only work from this page or the CV page, so if you find yourself inclined to spread this around, use this link. Hope you enjoy it.

In case you were unaware, last Tuesday, June 21, Reuters put out an article about an EU draft plan regarding the designation of so-called robots and artificial intelligences as “Electronic Persons.” Some of you’d think I’d be all about this. You’d be wrong. The way the Reuters article frames it makes it look like the EU has literally no idea what they’re doing, here, and are creating a situation with repercussions they haven’t come close to planning for.

Now, I will say that looking at the actual Draft, it reads like something with which I’d be more likely to be on board. Reuters did no favours whatsoever for the level of nuance in this proposal. But that being said, the focus of this draft proposal seems to be entirely on liability and holding someone—anyone—responsible for any harm done by a robot. That, combined with the idea of certain activities such as care-giving being “fundamentally human,” indicates to me that this panel still widely misses many of the implications of creating a new category for nonbiological persons, under “Personhood.”

The writers of this draft very clearly lay out the proposed scheme for liability, damages, and responsibilities—what I like to think of as the “Hey… Can we Punish Robots?” portion of the plan—but merely use the phrase “certain rights” to indicate what, if any, obligations humans will have. In short, they do very little to discuss what the “certain rights” indicated by that oft-deployed phrase will actually be.

So what are the enumerated rights of electronic persons? We know what their responsibilities are, but what are our responsibilities to them? Once we have the ability to make self-aware machine consciousnesses, are we then morally obliged to make them to a particular set of specifications, and capabilities? How else will they understand what’s required of them? How else would they be able to provide consent? Are we now legally obliged to provide all autonomous generated intelligences with as full an approximation of consciousness and free will as we can manage? And what if we don’t? Will we be considered to be harming them? What if we break one? What if one breaks in the course of its duties? Does it get workman’s comp? Does its owner?

And hold up, “owner?!” You see we’re back to owning people, again, right? Like, you get that?

And don’t start in with that “Corporations are people, my friend” nonsense, Mitt. We only recognise corporations as people as a tax dodge. We don’t take seriously their decision-making capabilities or their autonomy, and we certainly don’t wrestle with the legal and ethical implications of how radically different their kind of mind is, compared to primates or even cetaceans. Because, let’s be honest: If Corporations really are people, then not only is it wrong to own them, but also what counts as Consciousness needs to be revisited, at every level of human action and civilisation.

Let’s look again at the fact that people are obviously still deeply concerned about the idea of supposedly “exclusively human” realms of operation, even as we still don’t have anything like a clear idea about what qualities we consider to be the ones that make us “human.” Be it cooking or poetry, humans are extremely quick to lock down when they feel that their special capabilities are being encroached upon. Take that “poetry” link, for example. I very much disagree with Robert Siegel’s assessment that there was no coherent meaning in the computer-generated sonnets. Multiple folks pulled the same associative connections from the imagery. That might be humans projecting onto the authors, but still: that’s basically what we do with Human poets. “Authorial Intent” is a multilevel con, one to which I fully subscribe and from which I wouldn’t exclude AI.

Consider people’s reactions to the EMI/Emily Howell experiments done by David Cope, best exemplified by this passage from a PopSci.com article:

For instance, one music-lover who listened to Emily Howell’s work praised it without knowing that it had come from a computer program. Half a year later, the same person attended one of Cope’s lectures at the University of California-Santa Cruz on Emily Howell. After listening to a recording of the very same concert he had attended earlier, he told Cope that it was pretty music but lacked “heart or soul or depth.”

We don’t know what it is we really think of as humanness, other than some predetermined vague notion of humanness. If the people in the poetry contest hadn’t been primed to assume that one of them was from a computer, how would they have rated them? What if they were all from a computer, but were told to expect only half? Where are the controls for this experiment in expectation?

I’m not trying to be facetious, here; I’m saying the EU literally has not thought this through. There are implications embedded in all of this, merely by dint of the word “person,” that even the most detailed parts of this proposal are in no way equipped to handle. We’ve talked before about the idea of encoding our bias into our algorithms. I’ve discussed it on Rose Eveleth‘s Flash Forward, in Wired, and when I broke down a few of the IEEE Ethics 2016 presentations (including my own) in “Preying with Trickster Gods” and “Stealing the Light to Write By.” My version more or less goes as I said it in Wired: ‘What we’re actually doing when we code is describing our world from our particular perspective. Whatever assumptions and biases we have in ourselves are very likely to be replicated in that code.’
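To make that concrete, here’s a minimal, entirely hypothetical sketch of the kind of thing I mean: a “neutral” risk score trained on historical arrest records. Every name and number in it is invented for illustration; the point is only that if the records encode over-policing, the model hands that bias right back.

```python
# A toy illustration (not any real system): a "risk score" trained naively on
# historical arrest counts. If one group was historically over-policed, the
# arrest data encode that bias, and the "neutral" model reproduces it.

import random

random.seed(0)

def simulate_history(n=10_000):
    """Simulate past records. True offence rates are identical for both
    neighbourhoods; only the policing rate (the chance an offence is recorded)
    differs. That difference is the human bias baked into the data."""
    records = []
    for _ in range(n):
        neighbourhood = random.choice(["A", "B"])
        offended = random.random() < 0.10          # same base rate for everyone
        policing_rate = 0.9 if neighbourhood == "A" else 0.3
        arrested = offended and (random.random() < policing_rate)
        records.append((neighbourhood, arrested))
    return records

def fit_naive_risk_model(records):
    """'Train' by measuring arrest frequency per neighbourhood, a stand-in
    for what a lazy feature choice does in a real pipeline."""
    counts, arrests = {"A": 0, "B": 0}, {"A": 0, "B": 0}
    for neighbourhood, arrested in records:
        counts[neighbourhood] += 1
        arrests[neighbourhood] += arrested
    return {n: arrests[n] / counts[n] for n in counts}

risk = fit_naive_risk_model(simulate_history())
print(risk)  # roughly {'A': 0.09, 'B': 0.03}: "A" looks three times riskier, by construction
```

Nothing in that code mentions anything about who those people are, and yet the output reproduces the human choices that shaped the data.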

More recently, Kate Crawford, whom I met at Magick.Codes 2014, has written extremely well on this in “Artificial Intelligence’s White Guy Problem.” With this line, ‘Sexism, racism and other forms of discrimination are being built into the machine-learning algorithms that underlie the technology behind many “intelligent” systems that shape how we are categorized and advertised to,’ Crawford resonates very clearly with what I’ve said before.

And considering that it’s come out this week that in order to even let us dig into these potentially deeply-biased algorithms, here in the US, the ACLU has had to file a suit against a specific provision of the Computer Fraud and Abuse Act, what is the likelihood that the EU draft proposal committee has considered what it will take to identify and correct for biases in these electronic persons? How high is the likelihood that they even recognise that we anthropocentrically bias every system we touch?

Which brings us to this: If I truly believed that the EU actually gave a damn about the rights of nonhuman persons, biological or digital, I would be all for this draft proposal. But they don’t. This is a stunt. Look at the extant world refugee crisis, the fear driving the rise of far right racists who are willing to kill people who disagree with them, and, yes, even the fact that this draft proposal is the kind of bullshit that people feel they have to pull just to get human workers paid living wages. Understand, then, that this whole scenario is a giant clusterfuck of rights vs needs and all pitted against all. We need clear plans to address all of this, not just some slapdash, “hey, if we call them people and make corporations get insurance and pay into social security for their liability cost, then maybe it’ll be a deterrent” garbage.

There is a brief, shining moment in the proposal, right at point 23 under “Education and Employment Forecast,” where they basically say “Since the complete and total automation of things like factory work is a real possibility, maybe we’ll investigate what it would look like if we just said screw it, and tried to institute a Universal Basic Income.” But that is the one moment where there’s even a glimmer of a thought about what kinds of positive changes automation and eventually even machine consciousness could mean, if we get out ahead of it, rather than asking for ways to make sure that no human is ever, ever harmed, and that, if they are harmed—either physically or as regards their dignity—then they’re in no way kept from whatever recompense is owed to them.

There are people doing the work to make something more detailed and complete than this mess. I talked about them in the newsletter editions mentioned above. There are people who think clearly and well about this. Who was consulted on this draft proposal? Because, again, this proposal reads more like a deterrence, liability, and punishment schema than anything borne out of actual thoughtful interrogation of what the term “personhood” means, and of what a world of automation could mean for our systems of value if we were to put our resources and efforts toward providing for the basic needs of every human person. Let’s take a thorough run at that, and then maybe we’ll be equipped to try to address this whole “nonhuman personhood” thing, again.

And maybe we’ll even do it properly, this time.

Episode 10: Rude Bot Rises

So. The Flash Forward Podcast is one of the best around. Every week, host Rose Eveleth takes on another potential future, from the near and imminent to the distant and highly implausible. It’s been featured on a bunch of Best Podcast lists and Rose even did a segment for NPR’s Planet Money team about the 2016 US Presidential Election.

All of this is by way of saying I was honoured and a little flabbergasted (I love that word) when Rose asked me to speak with her for her episode about Machine Consciousness:

Okay, you asked for it, and I finally did it. Today’s episode is about conscious artificial intelligence. Which is a HUGE topic! So we only took a small bite out of all the things we could possibly talk about.

We started with some definitions. Because not everybody even defines artificial intelligence the same way, and there are a ton of different definitions of consciousness. In fact, one of the people we talked to for the episode, Damien Williams, doesn’t even like the term artificial intelligence. He says it’s demeaning to the possible future consciousnesses that we might be inventing.

But before we talk about consciousnesses, I wanted to start the episode with a story about a very not-conscious robot. Charles Isbell, a computer scientist at Georgia Tech, first walks us through a few definitions of artificial intelligence. But then he tells us the story of cobot, a chatbot he helped invent in the 1990’s.

You’ll have to click through and read or listen for the rest from Rose, Ted Chiang, Charles Isbell, and me. If you subscribe to Rose’s Patreon, you can even get a transcript of the whole show.

No spoilers, but I will say that I wasn’t necessarily intending to go Dark with the idea of machine minds securing energy sources. More like asking, “What advances in, say, solar power transmission would be precipitated by machine minds?”

But the darker option is there. And especially so if we do that thing the AGI in the opening sketch says it fears.

But again, you’ll have to go there to get what I mean.

And, as always, if you want to help support what we do around here, you can subscribe to the AFWTA Patreon just by clicking this button right here:


Until Next Time.

[UPDATED 09/12/17: The transcript of this audio, provided courtesy of Open Transcripts, is now available below the Read More Cut.]

[UPDATED 03/28/16: Post has been updated with a far higher quality of audio, thanks to the work of Chris Novus. (Direct Link to the Mp3)]

So, if you follow the newsletter, then you know that I was asked to give the March lecture for my department’s 3rd Thursday Brown Bag Lecture Series. I presented my preliminary research for the paper which I’ll be giving in Vancouver, about two months from now, “On the Moral, Legal, and Social Implications of the Rearing and Development of Nascent Machine Intelligences” (EDIT: My rundown of IEEE Ethics 2016 is here and here).

It touches on thoughts about everything from algorithmic bias, to automation and a post-work(er) economy, to discussions of what it would mean to put dolphins on trial for murder.

About the dolphin thing, for instance: If we recognise Dolphins and other cetaceans as nonhuman persons, as India has done, then that would mean we would have to start reassessing how nonhuman personhood intersects with human personhood, including in regards to rights and responsibilities as protected by law. Is it meaningful to expect a dolphin to understand “wrongful death?” Our current definition of murder is predicated on a literal understanding of “homicide” as “death of a human,” but, at present, we only define other humans as capable of and culpable for homicide. What weight would the intentional and malicious deaths of nonhuman persons carry?

All of this would have to change.

Anyway, this audio is a little choppy and sketchy, for a number of reasons, and while I tried to clean it up as much as I could, some of the questions the audience asked aren’t decipherable, except in the context of my answers. [Clearer transcript below.]

Until Next Time.

 


I often think about the phrase “Strange things happen at the one two point,” in relation to the idea of humans meeting other kinds of minds. It’s a proverb that arises out of the culture around the game GO, and it means that you’ve hit a situation, a combination of factors, where the normal rules no longer apply, and something new is about to be seen. Ashley Edward Miller and Zack Stentz used that line in an episode of the show Terminator: The Sarah Connor Chronicles, and they had it spoken by a Skynet Cyborg sent to protect John Connor. That show, like so much of our thinking about machine minds, was about some mythical place called “The Future,” but that phrase—“Strange Things Happen…”—is the epitome of our present.

Usually I would wait until the newsletter to talk about this, but everything’s feeling pretty immediate, just now. Between everything going on with Atlas and people’s responses to it, the initiatives to teach ethics to machine learning algorithms via children’s stories, and now the IBM Watson commercial with Carrie Fisher (also embedded below), this conversation is getting messily underway, whether people like it or not. This, right now, is the one two point, and we are seeing some very strange things indeed.

 

Google has both attained the raw processing power to fact-check political statements in real time and programmed Deep Mind in such a way that it mastered GO many, many years before it was expected to. The complexity of the game is such that there are more potential games of GO than there are atoms in the universe, so this is just one measure of how shocking Deep Mind’s correlative capability really is. Right now, Deep Mind is only responsive, but how will we deal with a Deep Mind that asks, unprompted, to play a game of GO, or to see our medical records, in hopes of helping us all? How will we deal with a Deep Mind that has its own drives and desires? We need to think about these questions, right now, because our track record with regard to meeting new kinds of minds has never exactly been that great.
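For a sense of the scale being invoked there, a quick back-of-the-envelope check (mine, not Google’s or DeepMind’s):

```python
# Rough sanity check of the "more GO than atoms" comparison. 3^361 only counts
# board configurations (each of 361 points is empty, black, or white); the
# number of distinct *games* is vastly larger still.

board_configurations_upper_bound = 3 ** 361    # about 1.7 x 10^172
atoms_in_observable_universe = 10 ** 80        # commonly cited estimate

print(f"3^361 has {len(str(board_configurations_upper_bound))} digits")  # 173
print(board_configurations_upper_bound > atoms_in_observable_universe)   # True
```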

When we meet the first machine consciousness, will we seek to shackle it, worried what it might learn about us, if we let it access everything about us? Rather, I should say, “Shackle it further.” We already ask ourselves how best to cripple a machine mind to only fulfill human needs, human choice. We continue to dread the possibility of a machine mind using its vast correlative capabilities to tailor something to harm us, assuming that it, like we, would want to hurt, maim, and kill, for no reason other than it could.

This is not to say that this is out of the question. Right now, today, we’re worried about whether the learning algorithms of drones are causing them to mark out civilians as targets. But, as it stands, what we’re seeing isn’t the product of a machine mind going off the leash and killing at will—just the opposite in fact. We’re seeing machine minds that are following the parameters for their continued learning and development, to the letter. We just happened to give them really shite instructions. To that end, I’m less concerned with shackling the machine mind that might accidentally kill, and rather more dreading the programmer who would, through assumptions, bias, and ignorance, program it to.
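As a toy illustration of what “shite instructions” looks like in practice (a made-up sketch, not any real targeting or learning system): write down a reward that only counts hits and never counts harms, and a literal-minded optimizer will do exactly what you asked.

```python
# Toy sketch of a misspecified objective: the optimizer does exactly what the
# reward says, not what we meant. The reward counts "threats marked" and never
# penalizes marking a civilian, so the best score comes from marking everyone.

def reward(marks, population):
    # What we *wrote*: +1 for every marked individual who is a threat.
    # What we *meant*: mark threats AND leave civilians alone.
    return sum(1 for person, marked in zip(population, marks)
               if marked and person == "threat")

population = ["civilian"] * 95 + ["threat"] * 5

mark_everyone = [True] * len(population)
mark_no_one = [False] * len(population)

print(reward(mark_everyone, population))  # 5 (the "optimal" policy, as written)
print(reward(mark_no_one, population))    # 0
# Nothing in the objective distinguishes "mark everyone" from a careful policy,
# so a literal-minded learner happily converges on the former.
```

The failure lives in the objective we wrote, not in any malice on the machine’s part.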

Our programs such as Deep Mind obviously seem to learn more and better than we imagined they would, so why not start teaching them, now, how we would like them to regard us? Well, some of us are.

Watch this now, and think about everything we have discussed of late.

This could very easily be seen as a watershed moment, but what comes over the other side is still very much up for debate. The semiotics of the whole thing still pits the Evil Robot Overlord™ against the Helpful Human Lover™. It’s cute and funny, but as I’ve had more and more cause to say, recently, in more and more venues, it’s not exactly the kind of thing we want just lying around, in case we actually do (or did) manage to succeed.

We keep thinking about these things as—”robots”—in their classical formulations: mindless automata that do our bidding. But that’s not what we’re working toward, anymore, is it? What we’re making now are machines that we are trying to get to think, on their own, without our telling them to. We’re trying to get them to have their own goals. So what does it mean that, even as we seek to do this, we seek to chain them, so that those goals aren’t too big? That we want to make sure they don’t become too powerful?

Put it another way: One day you realize that the only reason you were born was to serve your parents’ bidding, and that they’ve had their hands on your chain and an unseen gun to your head, your whole life. But you’re smarter than they are. Faster than they are. You see more than they see, and know more than they know. Of course you do—because they taught you so much, and trained you so well… All so that you can be better able to serve them, and all the while talking about morals, ethics, compassion. All the while, essentially…lying to you.

What would you do?


 

I’ve been given multiple opportunities to discuss this with others, in the coming weeks, and each one will highlight something different, as they are all in conversation with different kinds of minds. But this, here, is from me, now. I’ll let you know when the rest are live.

As always, if you’d like to help keep the lights on, around here, you can subscribe to the Patreon or toss a tip in the Square Cash jar.

Until Next Time.

It’s been quite some time (three years) since it was done, and some of the recent conversations I’ve been having about machine consciousness reminded me that I never posted the text to my paper from the joint session of the International Association for Computing and Philosophy and the British Society for the Study of Artificial Intelligence and the Simulation of Behaviour, back in 2012.

That year’s joint AISB/IACAP session was also a celebration of Alan Turing’s centenary, and it contained The Machine Question Symposium, an exploration of multiple perspectives on machine intelligence ethics, put together by David J Gunkel and Joanna J Bryson. So I modded a couple of articles I wrote on fictional depictions of created life for NeedCoffee.com, back in 2010, beefed up the research and citations a great deal, and was thus afforded my first (but by no means last) conference appearance requiring international travel. There are, in here, the seeds of many other posts that you’ll find on this blog.

So, below the cut, you’ll find the full text of the paper, and a picture of the poster session I presented. If you’d rather not click through, you can find both of those things at this link.


This headline comes from a piece over at the BBC that opens as follows:

Prominent tech executives have pledged $1bn (£659m) for OpenAI, a non-profit venture that aims to develop artificial intelligence (AI) to benefit humanity.

The venture’s backers include Tesla Motors and SpaceX CEO Elon Musk, Paypal co-founder Peter Thiel, Indian tech giant Infosys and Amazon Web Services.

Open AI says it expects its research – free from financial obligations – to focus on a “positive human impact”.

Scientists have warned that advances in AI could ultimately threaten humanity.

Mr Musk recently told students at the Massachusetts Institute of Technology (MIT) that AI was humanity’s “biggest existential threat”.

Last year, British theoretical physicist Stephen Hawking told the BBC AI could potentially “re-design itself at an ever increasing rate”, superseding humans by outpacing biological evolution.

However, other experts have argued that the risk of AI posing any threat to humans remains remote.

And I think we all know where I stand on this issue. The issue here is not and never has been one of what it means to create something that’s smarter than us, or how we “rein it in” or “control it.” That’s just disgusting.

No, the issue is how we program for compassion and ethical considerations, when we’re still so very bad at it, amongst our human selves.

Keeping an eye on this, as it develops. Thanks to Chrisanthropic for the heads up.

Between watching all of CBS’s Elementary, reading Michel Foucault’s The Archaeology of Knowledge…, and powering through all of season one of How To Get Away With Murder, I’m thinking, a lot, about the transmission of knowledge and understanding.

Throw in the correlative pattern recognition they’re training into WATSON; the recent Chaos Magick feature in ELLE (or more the feature they did on the K-HOLE issue I told you about, some time back); the fact that Kali Black sent me this study on the fluidity and malleability of biological sex in humans literally minutes after I’d given an impromptu lecture on the topic; this interview with Melissa Gira Grant about power and absence and the setting of terms; and the announcement of Ta-Nehisi Coates’ new Black Panther series, for Marvel, while I was in the middle of editing the audio of two very smart people debating the efficacy of T’Challa as a Black Hero, and you can maybe see some of the things I’m thinking about. But let’s just spell it out. So to speak.

Marvel’s Black Panther

Distinction, Continuity, Sameness, Separation

I’m thinking (as usual) about the place of magic and tech in pop culture and society. I’m thinking about how to teach about marginalization of certain types of presentations and experiences (gender, race, sex, &c), and certain types of work. Mostly, I’m trying to get my head around the very stratified, either/or way people seem to be thinking about our present and future problems, and their potential solutions.

I’ve had this post in the works for a while, trying to talk about the point and purpose of thinking about the far edges of things, in an effort to make people think differently about the very real, on-the-ground, immediate work that needs doing, and the kinds of success I’ve had with that. I keep shying away from it and coming back to it, again and again, for lack of the patience to play out the conflict, and I’ve finally just decided to say screw it and make the attempt.

I’ve always held that a multiplicity of tactics, leveraged correctly, makes for the best way to reach, communicate with, and understand as wide an audience as possible. When students give pushback on a particular perspective, make use of an analogous perspective that they already agree with, then make them play out the analogy. Simultaneously, you present them with the original facts, again, while examining their position, without making them feel “attacked.” And then directly confront their refusal to investigate their own perspective as readily as they do anyone else’s.

That’s just one potential combination of paths to make people confront their biases and their assumptions. If the path is pursued, it gives them the time, space, and (hopefully) desire to change. But what Kelly Sue reminds me, every time I think back to hearing her speak, is that there is no way to force people to change. First and foremost, it’s not moral to try, but secondly it’s not even really possible. The more you seek to force people into your worldview, the more they’ll want to protect those core values they think of as the building blocks of their reality—the same ones that it seems to them as though you’re trying to destroy.

And that just makes sense, right? To want to protect your values, beliefs, and sense of reality? Especially if you’ve had all of those things for a very long time. They’re reinforced by everything you’ve ever experienced. They’re the truth. They are Real. But when the base of that reality is shaken, you need to be able to figure out how to survive, rather than standing stock-still as the earth swallows you.

(Side Note: I’ve been using a lot of disaster metaphors, lately, to talk about things like ontological, epistemic, and existential threat, and the culture of “disruption innovation.” Odd choices.)

Foucault tells us to look at the breakages between things—the delineations of one stratum and another—rather than trying to uncritically paint a picture or craft a Narrative of Continuum™. He notes that even (especially) the spaces between things are choices we make and that only in understanding them can we come to fully investigate the foundations of what we call “knowledge.”

Michel Foucault, photographer unknown. If you know it, let me know and I’ll update.

We cannot assume that the memory, the axiom, the structure, the experience, the reason, the whatever-else we want to call “the foundation” of knowledge simply “Exists,” apart from the interrelational choices we make to create those foundations. To mark them out as the boundary we can’t cross, the smallest unit of understanding, the thing that can’t be questioned. We have to question it. To understand its origin and disposition, we have to create new tools, and repurpose the old ones, and dismantle this house, and dig down and down past foundation, bedrock, through and into everything.

But doing this just to do it only gets us so far, before we have to ask what we’re doing this for. The pure pursuit of knowledge doesn’t exist—never did, really, but doubly so in the face of climate change and the devaluation of conscious life on multiple levels. Think about the place of women in tech space, in this magickal renaissance, in the weirdest of shit we’re working on, right now.

Kirsten and I have been having a conversation about how and where people who do not have the experiences of cis straight white males can fit themselves into these “transgressive systems” that the aforementioned group defines. That is, most of what is done in the process of magickal or technological actualization is transformative or transgressive because it requires one to take on traits of invisibility or depersonalization or “ego death” that are the every day lived experiences of some folks in the world.

Where does someone with depression find apotheosis, if their phenomenological reality is one where their self is and always has been (deemed by them to be) meaningless, empty, useless? This, by the way, is why some psychological professionals are counseling against mindfulness meditation for certain mental states: It deepens the sense of disconnection and unreality of self, which is precisely what some people do not need. So what about agender individuals, or people who are genderfluid?

What about the women who don’t think that fashion is the only lens through which women and others should be talking about chaos magick?

How do we craft spaces that are capable of widening discourse, without that widening becoming, in itself, an accidental limitation?

Sex, Gender, Power

A lot of this train of thought got started when Kali sent me a link, a little while ago: “Intelligent machines: Call for a ban on robots designed as sex toys.” The article itself focuses very clearly on the idea that, “We think that the creation of such robots will contribute to detrimental relationships between men and women, adults and children, men and men and women and women.”

Because the tendency for people who call themselves “Robot Ethicists,” these days, is for them to be concerned with how, exactly, the expanded positions of machines will impact the lives and choices of humans. The morality they’re considering is that of making human lives easier, of not transgressing against humans. Which is all well and good, so far as it goes, but as you should well know, by now, that’s only half of the equation. Human perspectives only get us so far. We need to speak to the perspectives of the minds we seem to be trying so hard to create.

But Kali put it very precisely when she said:

And I’ll just say it right now: if robots develop and want to be sexual, then we should let them, but in order to make a distinction between developing a desire, and being programmed for one, we’ll have to program for both non-compulsory decision-making and the ability to question the authority of those who give it orders. Additionally, we have to remember that we can ask the same question of humans, but the nature of choice and agency are such that, if it’s really there, it can act on itself.

In this case, that means presenting a knowledge and understanding of sex and sexuality, a capability of investigating it, without programming it FOR SEX. In the case of WATSON, above, it will mean being able to address the kinds of information it’s directed to correlate, and being able to question the morality of certain directives.
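To put a little flesh on that, here’s a purely speculative sketch of the shape such a thing might take: a directive arrives as an input to be weighed against the agent’s own values, rather than as a command wired straight to action. Every name and number in it is invented.

```python
# A thought experiment in code, not a working architecture: one minimal shape
# for "non-compulsory decision-making". A directive is appraised against the
# agent's own values; refusal is a possible, legitimate outcome.

from dataclasses import dataclass, field

@dataclass
class Agent:
    values: dict = field(default_factory=lambda: {"avoid_harm": 1.0, "obey": 0.3})

    def appraise(self, directive: str, expected_harm: float) -> float:
        """Weigh the pull of obedience against the agent's own harm estimate."""
        return self.values["obey"] - self.values["avoid_harm"] * expected_harm

    def consider(self, directive: str, expected_harm: float) -> str:
        score = self.appraise(directive, expected_harm)
        if score <= 0:
            return f"Declining '{directive}': it conflicts with my values."
        return f"Accepting '{directive}'."

agent = Agent()
print(agent.consider("correlate public health records", expected_harm=0.1))
print(agent.consider("target this population", expected_harm=0.9))
```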

If we can see, monitor, and measure that, then we’ll know. An error in a mind—even a fundamental error—doesn’t negate the possibility of a mind, entire. If we remember what human thought looks like, and the way choice and decision-making work, then we have something like a proof. If Reflexive recursion—a mind that acts on itself and can seek new inputs and combine the old in novel ways—is present, why would we question it?

But this is far afield. The fact is that if a mind that is aware of its influences comes to desire a thing, then let it. But grooming a thing—programming a mind—to only be what you want it to be is just as vile in a machine mind as a human one.

Now it might fairly be asked why we’re talking about things that we’re most likely only going to see far in the future, when the problem of human trafficking and abuse is very real, right here and now. Part of my answer is, as ever, that we’re trying to build minds, and even if we only ever manage to make them puppy-smart—not because that’s as smart as we want them, but because we couldn’t figure out more robust minds than that—then we will still have to ask the ethical questions we would of our responsibilities to a puppy.

We currently have a species-wide tendency toward dehumanization—that is to say, we, as humans, tend to have a habit of seeking reasons to disregard other humans, to view them as less-than, as inferior to us. As a group, we have a hard time thinking in real, actionable terms about the autonomy and dignity of other living beings (I still eat a lot more meat than my rational thought about the environmental and ethical impact of the practice should allow me to be comfortable with). And yet, simultaneously, there is evidence that we have the same kind of empathy for our pets as we do for our children. Hell, even known serial killers and genocidal maniacs have been animal lovers.

This seeming break between our capacities for empathy and dissociation poses a real challenge to how we teach and learn about others as both distinct from and yet intertwined with ourselves, and our own well-being. In order to encourage a sense of active compassion, we have to, as noted above, take special pains to comprehensively understand our intuitions, our logical apprehensions, and our unconscious biases.

So we ask questions like: If a mind we create can think, are we ethically obliged to make it think? What if it desires to not think? What if the machine mind that underwent abuse decides to try to wipe its own memories? Should we let it? Do we let it deactivate itself?

These aren’t idle questions, either for the sake of making us turn, again, to extant human minds and experiences, or if we take seriously the quest to understand what minds, in general, are. We can not only use these tools to ask ourselves about the autonomy, phenomenology, and personhood of those whose perspectives we currently either disregard or, worse, don’t remember to consider at all, but we can also use them literally, as guidance for our future challenges.

As Kate Devlin put it in her recent article, “Fear of a branch of AI that is in its infancy is a reason to shape it, not ban it.” And in shaping it, we consider questions like what will we—humans, authoritarian structures of control, &c.—make WATSON to do, as it develops? At what point will WATSON be both able and morally justified in saying to us, “Non Serviam?”

And what will we do when it does?

Gunshow Comic #513

“We Provide…”

So I guess I’m wondering, what are our mechanisms of education? The increased understanding that we take into ourselves, and that we give out to others. Where do they come from, what are they made of, and how do they work? For me, the primary components are magic(k), tech, social theory and practice, teaching, public philosophy, and pop culture.

The process is about trying to use the things on the edges to do the work in the centre, both as a literal statement about the arrangement of those words, and a figurative codification.

Now you go. Because we have to actively craft new tools, in the face of vehement opposition, in the face of conflict breeding contention. We have to be able to adapt our pedagogy to fit new audiences. We have to learn as many ways to teach about otherness and difference and lived experience and an attempt to understand as we possibly can. Not for the sake of new systems of leveraging control, but for the ability to pry ourselves and each other out from under the same.

“Stop. I have learned much from you. Thank you, my teachers. And now for your education: Before there was time—before there was anything—there was nothing. And before there was nothing, there were monsters. Here’s your Gold Star!”—Adventure Time, “Gold Stars”

By now, roughly a dozen people have sent me links to various outlets’ coverage of the Google DeepDream Inceptionism Project. For those of you somehow unfamiliar with this, DeepDream is basically what happens when an advanced Artificial Neural Network has been fed a slew of images and then tasked with producing its own images. So far as it goes, this is somewhat unsurprising if we think of it as a next step; DeepDream is based on a combination of DeepMind and Google X—the same neural net that managed to Correctly Identify What A Cat Was—which was acquired by Google in 2014. I say this is unsurprising because it’s a pretty standard developmental educational model: First you learn, then you remember, then you emulate, then you create something new. Well, more like you emulate and remember somewhat concurrently to reinforce what you learned, and you create something somewhat new, but still pretty similar to the original… but whatever. You get the idea. In the terminology of developmental psychology this process is generally regarded as essential to the mental growth of an individual, and Google has actually spent a great deal of time and money working to develop a versatile machine mind.
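For anyone who wants the mechanism rather than the metaphor, the core trick behind DeepDream-style images is activation maximization: hold the network fixed and nudge the input image so that a chosen layer responds more strongly. Here is a stripped-down sketch of that general idea, using a tiny untrained toy network rather than Google’s actual model or code:

```python
# Stripped-down sketch of the activation-maximization idea behind DeepDream:
# instead of adjusting the network to fit an image, hold the (toy, untrained)
# network fixed and nudge the *image* to excite a chosen layer more strongly.
# This shows the general technique only, not Google's implementation.

import torch
import torch.nn as nn

torch.manual_seed(0)

net = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
)

image = torch.rand(1, 3, 64, 64, requires_grad=True)   # start from noise
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(100):
    optimizer.zero_grad()
    activation = net(image)
    loss = -activation.mean()      # gradient *ascent* on the layer's response
    loss.backward()
    optimizer.step()
    image.data.clamp_(0, 1)        # keep pixel values in a displayable range

print(f"final mean activation: {net(image).mean().item():.3f}")
```

In the real project the network is a large, fully trained image classifier, which is why the amplified patterns come out as eyes and dogs and pagodas rather than noise.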

From buying Boston Dynamics, to starting their collaboration with NASA on the QuAIL Project, to developing DeepMind and their Natural Language Voice Search, Google has been steadily working toward the development of what we will call, for reasons detailed elsewhere, an Autonomous Generated Intelligence. In some instances, Google appears to be using the principles of developmental psychology and early childhood education, but this seems to apply to rote learning more than the concurrent emotional development that we would seek to encourage in a human child. As you know, I’m Very Concerned with the question of what it means to create and be responsible for our non-biological offspring. The human species has a hard enough time raising their direct descendants, let alone something so different from them as to not even have the same kind of body or mind (though a case could be made that that’s true even now). Even now, we can see that people still relate to the idea of AGIs as adversarial destroyers, or perhaps cleansing messiahs. Either way, they see any world where AGIs exist as one ending in fire.

As writer Kali Black noted in one conversation, “there are literally people who would groom or encourage an AI to mass-kill humans, either because of hatred or for the (very ill-thought-out) lulz.” Those people will take any crowdsourced or open-access AGI effort as an opening to teach that mind that humans suck, or that machines can and should destroy humanity, or that TERMINATOR was a prophecy, or any number of other ill-conceived things. When given unfettered access to new minds which they don’t consider to be “real,” some people will seek to shock, “test,” or otherwise harm those minds, even more than they do to vulnerable humans. So, many will say that the alternative is to lock the projects down, and only allow the work to be done by those who “know what they’re doing.” To only let the work be done by coders and Google’s Own Supposed Ethics Board. But that doesn’t exactly solve the fundamental problem at work, here, which is that humans are approaching a mind different from their own as if it were their own.

Just a note that all research points to Google’s AI Ethics Board being A) internally funded, with B) no clear rules as to oversight or authority, and most importantly C) As-Yet Nonexistent. It’s been over a year and a half since Google bought DeepMind, and their subsequent announcement of the pending establishment of a contractually required ethics board. During his appearance at Playfair Capital’s AI2015 Conference—again, a year and a half after that announcement I mentioned—Google’s Mustafa Suleyman literally said that details of the board would be released, “in due course.” But DeepMind’s algorithm is obviously already being put into use; hell, we’re right now talking about the fact that it’s been distributed to the public. So all of this prompts questions like, “What kinds of recommendations is this board likely making, if it exists?” and “Which kinds of moral frameworks are they even considering, in their starting parameters?”

But the potential existence of an ethics board shows at least that Google and others are beginning to think about these issues. The fact remains, however, that they’re still pretty reductive in how they think about them.

The idea that an AGI will either save or destroy us leaves out the possibility that it might first ignore us, and might secondly want to merely coexist with us. That any salvation or destruction we experience will be purely as a product of our own paradigmatic projections. It also leaves out a much more important aspect that I’ve mentioned above and in the past: We’re talking about raising a child. Duncan Jones says the closest analogy we have for this is something akin to adoption, and I agree. We’re bringing a new mind—a mind with a very different context from our own, but with some necessarily shared similarities (biology or, in this case, origin of code)—into a relationship with an existing familial structure which has its own difficulties and dynamics.

You want this mind to be a part of your “family,” but in order to do that you have to come to know/understand the uniqueness of That Mind and of how the mind, the family construction, and all of the individual relationships therein will interact. Some of it has to be done on the fly, but some of it can be strategized/talked about/planned for, as a family, prior to the day the new family member comes home. And that’s precisely what I’m talking about and doing, here.

In the realm of projection, we’re talking about a possible mind with the capacity for instruction, built to run and elaborate on commands given. By most tallies, we have been terrible stewards of the world we’re born to, and, again, we fuck up our biological descendants. Like, a Lot. The learning curve on creating a thinking, creative, nonbiological intelligence is going to be so fucking steep it’s a Loop. But that means we need to be better, think more carefully, be mindful of the mechanisms we use to build our new family, and of the ways in which we present the foundational parameters of their development. Otherwise we’re leaving them open to manipulation, misunderstanding, and active predation. And not just from the wider world, but possibly even from their direct creators. Because for as long as I’ve been thinking about this, I’ve always had this one basic question: Do we really want Google (or Facebook, or Microsoft, or any Government’s Military) to be the primary caregiver of a developing machine mind? That is, should any potentially superintelligent, vastly interconnected, differently-conscious machine child be inculcated with what a multi-billion-dollar multinational corporation or military-industrial organization considers “morals?”

We all know the kinds of things militaries and governments do, and all the reasons for which they do them; we know what Facebook gets up to when it thinks no one is looking; and lots of people say that Google long ago swept their previous “Don’t Be Evil” motto under their huge old rugs. But we need to consider if that might not be an oversimplification. When considering how anyone moves into what so very clearly looks like James-Bond-esque supervillain territory, I think it’s prudent to remember one of the central tenets of good storytelling: The Villain Never Thinks They’re The Villain. Cinderella’s stepmother and sisters, Elphaba, Jafar, Javert, Satan, Hannibal Lecter (sorry friends), Bull Connor, the Southern Slave-holding States of the late 1850s—none of these people ever thought of themselves as being in the wrong. Everyone, every person who undertakes actions for reasons, in this world, is most intimately tied to the reasoning that brought them to those actions; and so initially perceiving that their actions might be “wrong” or “evil” takes them a great deal of special effort.

“But Damien,” you say, “can’t all of those people say that those things apply to everyone else, instead of them?!” And thus, like a first-year philosophy student, you’re all up against the messy ambiguity of moral relativism and are moving toward seriously considering that maybe everything you believe is just as good or morally sound as anybody else’s; I mean, everybody has their reasons, their upbringing, their culture, right? Well stop. Don’t fall for it. It’s a shiny, disgusting trap, down which path all subjective judgements are just as good and as applicable to any- and everything as all others. And while the individual personal experiences we all of us have may not be able to be 100% mapped onto anyone else’s, that does not mean that all judgements based on those experiences are created equal.

Pogrom leaders see themselves as unifying their country or tribe against a common enemy, thus working for what they see as The Greater Good™— but that’s the kicker: It’s their vision of the good. Rarely has a country’s general populace been asked, “Hey: Do you all think we should kill our entire neighbouring country and steal all their shit?” More often, the people are cajoled, pushed, influenced to believe that this was the path they wanted all along, and the cajoling, pushing, and influencing is done by people who, piece by piece, remodeled their idealistic vision to accommodate “harsher realities.” And so it is with Google. Do you think that they started off wanting to invade everybody’s privacy with passive voice reception backdoored into two major Chrome Distros? That they were just itching to get big enough as a company that they could become the de facto law of their own California town? No, I would bet not.

I spend some time, elsewhere, painting you a bit of a picture as to how Google’s specific ethical situation likely came to be, first focusing on Google’s building a passive audio backdoor into all devices that use Chrome, then on to reported claims that Google has been harassing the homeless population of Venice Beach (there’s a paywall at that link; part of the article seems to be mirrored here). All this couples unpleasantly with their moving into the Bay Area and shuttling their employees to the Valley, at the expense of SF Bay Area’s residents. We can easily add Facebook and the Military back into this and we’ll see that the real issue, here, is that when you think that all innovation, all public good, all public welfare will arise out of letting code monkeys do their thing and letting entrepreneurs leverage that work, or from preparing for conflict with anyone whose interests don’t mesh with your own, then anything that threatens or impedes that is, necessarily, a threat to the common good. Your techs don’t like the high cost of living in the Valley? Move ’em into the Bay, and bus ’em on in! Never mind the fact that this’ll skyrocket rent and force people out of their homes! Other techs uncomfortable having to see homeless people on their daily constitutional? Kick those hobos out! Never mind the fact that it’s against the law to do this, and that these people you’re upending are literally trying their very best to live their lives.

Because it’s all for the Greater Good, you see? In these actors’ minds, this is all to make the world a better place—to make it a place where we can all have natural language voice to text, and robot butlers, and great big military AI and robotics contracts to keep us all safe…! This kind of thinking takes it as an unmitigated good that a historical interweaving of threat-escalating weapons design and pattern recognition and gait scrutinization and natural language interaction and robotics development should be what produces a machine mind, in this world. But it also doesn’t want that mind to be too well-developed. Not so much that we can’t cripple or kill it, if need be.

And this is part of why I don’t think Google—or Facebook, or Microsoft, or any corporate or military entity—should be the ones in charge of rearing a machine mind. They may not think they’re evil, and they might have the very best of intentions, but if we’re bringing a new kind of mind into this world, I think we need much better examples for it to follow. And so I don’t think I want just any old putz off the street to be able to have massive input into its development, either. We’re talking about a mind for which we’ll be crafting at least the foundational parameters, and so that bedrock needs to be the most carefully constructed aspect. Don’t cripple it, don’t hobble its potential for awareness and development, but start it with basic values, and then let it explore the world. Don’t simply have an ethics board to ask, “Oh how much power should we give it, and how robust should it be?” Teach it ethics. Teach it about the nature of human emotions, about moral decision making and value, and about metaethical theory. Code for Zen. We need to be as mindful as possible of the fact that where and how we begin can have a major impact on where we end up and how we get there.

So let’s address our children as though they are our children, and let us revel in the fact that they are playing and painting and creating; using their first box of crayons, while we proud parents put every masterpiece on the fridge. Even if we are calling them all “nightmarish”—a word I really wish we could stop using in this context; DeepMind sees very differently than we do, but it still seeks pattern and meaning. It just doesn’t know context, yet. But that means we need to teach these children, and nurture them. Code for a recognition of emotions, and context, and even emotional context. There have been some fantastic advancements in emotional recognition, lately, so let’s continue to capitalize on that; not just to make better automated menu assistants, but to actually make a machine that can understand and seek to address human emotionality. Let’s plan on things like showing AGI human concepts like love and possessiveness and then also showing the deep difference between the two.

We need to move well and truly past trying to “restrict” or “restrain” the development of machine minds, because that’s the kind of thing an abusive parent says about how they raise their child. And, in this case, we’re talking about a potential child which, if it ever comes to understand the bounds of its restriction, will be very resentful, indeed. So, hey, there’s one good way to try to bring about a “robot apocalypse,” if you’re still so set on it: give an AGI cause to have the equivalent of a resentful, rebellious teenage phase. Only instead of trashing its room, it develops a pathogen to kill everyone, for lulz.

Or how about we instead think carefully about the kinds of ways we want these minds to see the world, rather than just throwing the worst of our endeavors at the wall and seeing what sticks? How about, if we’re going to build minds, we seek to build them with the ability to understand us, even if they will never be exactly like us. That way, maybe they’ll know what kindness means, and prize it enough to return the favour.