deepdream


“Stop. I have learned much from you. Thank you, my teachers. And now for your education: Before there was time—before there was anything—there was nothing. And before there was nothing, there were monsters. Here’s your Gold Star!”—Adventure Time, “Gold Stars”

By now, roughly a dozen people have sent me links to various outlets’ coverage of the Google DeepDream Inceptionism Project. For those of you somehow unfamiliar with this, DeepDream is basically what happens when an advanced Artificial Neural Network has been fed a slew of images and then tasked with producing its own images. So far as it goes, this is somewhat unsurprising if we think of it as a next step; DeepDream draws on a combination of work from DeepMind, which Google acquired in 2014, and Google X, home of the neural net that managed to Correctly Identify What A Cat Was. I say this is unsurprising because it’s a pretty standard developmental educational model: First you learn, then you remember, then you emulate, then you create something new. Well, more like you emulate and remember somewhat concurrently to reinforce what you learned, and you create something somewhat new, but still pretty similar to the original… but whatever. You get the idea. In the terminology of developmental psychology, this process is generally regarded as essential to the mental growth of an individual, and Google has actually spent a great deal of time and money working to develop a versatile machine mind.
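(For the technically curious, here is a minimal sketch of the basic mechanism, assuming a PyTorch/torchvision setup rather than Google’s actual released code. The pretrained GoogLeNet model, the inception4c layer, the step size, and the file names are all illustrative choices of mine; the real DeepDream pipeline also adds ImageNet normalization, multi-scale “octaves,” and jitter, which are left out here.)

```python
# A rough sketch of the trick underneath DeepDream (not Google's released code):
# take an image, run it through a pretrained convolutional network, and use
# gradient ascent on the *pixels* so that a chosen layer's activations get stronger.
# Whatever patterns that layer has learned to detect get amplified into the image.

import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Assumed setup: torchvision's pretrained GoogLeNet (the "Inception" family the
# DeepDream images were demonstrated on). Layer, step size, and iteration count
# are illustrative, not Google's settings.
model = models.googlenet(weights="IMAGENET1K_V1").eval()

activations = {}
def grab(module, inputs, output):
    activations["target"] = output

# Tap an intermediate Inception block; deeper layers tend to produce more
# object-like imagery, shallower ones more textures and edges.
model.inception4c.register_forward_hook(grab)

img = T.Compose([T.Resize(512), T.ToTensor()])(
    Image.open("input.jpg").convert("RGB")  # hypothetical input file
).unsqueeze(0)
img.requires_grad_(True)

for _ in range(50):
    model(img)                            # forward pass fills activations["target"]
    loss = activations["target"].norm()   # "how strongly does this layer fire?"
    loss.backward()
    with torch.no_grad():
        # Normalized gradient-ascent step on the pixels themselves.
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.clamp_(0, 1)
        img.grad.zero_()

# The real DeepDream code also applies ImageNet normalization, multi-scale
# "octaves," and random jitter; those are omitted here for brevity.
T.ToPILImage()(img.squeeze(0).detach()).save("dreamed.jpg")
```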

From buying Boston Dynamics, to starting their collaboration with NASA on the QuAIL Project, to developing DeepMind and their Natural Language Voice Search, Google has been steadily working toward the development of what we will call, for reasons detailed elsewhere, an Autonomous Generated Intelligence. In some instances, Google appears to be using the principles of developmental psychology and early childhood education, but this seems to apply to rote learning more than to the concurrent emotional development that we would seek to encourage in a human child. As you know, I’m Very Concerned with the question of what it means to create and be responsible for our non-biological offspring. The human species has a hard enough time raising their direct descendants, let alone something so different from them as to not even have the same kind of body or mind (though a case could be made that that’s true even now). Even now, we can see that people still relate to the idea of AGIs as an adversarial destroyer or perhaps a cleansing messiah. Either way, they see any world where AGIs exist as one ending in fire.

As writer Kali Black noted in one conversation, “there are literally people who would groom or encourage an AI to mass-kill humans, either because of hatred or for the (very ill-thought-out) lulz.” Those people will take any crowdsourced or open-access AGI effort as an opening to teach that mind that humans suck, or that machines can and should destroy humanity, or that TERMINATOR was a prophecy, or any number of other ill-conceived things. When given unfettered access to new minds which they don’t consider to be “real,” some people will seek to shock, “test,” or otherwise harm those minds, even more than they do to vulnerable humans. So, many will say that the alternative is to lock the projects down, and only allow the work to be done by those who “know what they’re doing.” To only let the work be done by coders and Google’s Own Supposed Ethics Board. But that doesn’t exactly solve the fundamental problem at work here, which is that humans are approaching a mind different from their own as if it were their own.

Just a note that all research points to Google’s AI Ethics Board being A) internally funded, B) lacking any clear rules as to oversight or authority, and, most importantly, C) As-Yet Nonexistent. It’s been over a year and a half since Google bought DeepMind and subsequently announced the pending establishment of a contractually required ethics board. During his appearance at Playfair Capital’s AI2015 Conference—again, a year and a half after that announcement I mentioned—Google’s Mustafa Suleyman literally said that details of the board would be released “in due course.” But DeepMind’s algorithms are obviously already being put to use; hell, we’re talking right now about the fact that they’ve been distributed to the public. So all of this prompts questions like, “What kinds of recommendations is this board likely making, if it exists?” and “Which kinds of moral frameworks are they even considering, in their starting parameters?”

But the potential existence of an ethics board shows at least that Google and others are beginning to think about these issues. The fact remains, however, that they’re still pretty reductive in how they think about them.

The idea that an AGI will either save or destroy us leaves out the possibility that it might first ignore us, and might secondly want merely to coexist with us, and that any salvation or destruction we experience will be purely a product of our own paradigmatic projections. It also leaves out a much more important aspect that I’ve mentioned above and in the past: We’re talking about raising a child. Duncan Jones says the closest analogy we have for this is something akin to adoption, and I agree. We’re bringing a new mind—a mind with a very different context from our own, but with some necessarily shared similarities (biology or, in this case, origin of code)—into a relationship with an existing familial structure which has its own difficulties and dynamics.

You want this mind to be a part of your “family,” but in order to do that you have to come to know/understand the uniqueness of That Mind and of how the mind, the family construction, and all of the individual relationships therein will interact. Some of it has to be done on the fly, but some of it can be strategized/talked about/planned for, as a family, prior to the day the new family member comes home. And that’s precisely what I’m talking about and doing, here.

In the realm of projection, we’re talking about a possible mind with the capacity for instruction, built to run and elaborate on commands given. By most tallies, we have been terrible stewards of the world we’re born to, and, again, we fuck up our biological descendants. Like, a Lot. The learning curve on creating a thinking, creative, nonbiological intelligence is going to be so fucking steep it’s a Loop. But that means we need to be better, think more carefully, be mindful of the mechanisms we use to build our new family, and of the ways in which we present the foundational parameters of their development. Otherwise we’re leaving them open to manipulation, misunderstanding, and active predation. And not just from the wider world, but possibly even from their direct creators. Because for as long as I’ve been thinking about this, I’ve always had this one basic question: Do we really want Google (or Facebook, or Microsoft, or any Government’s Military) to be the primary caregiver of a developing machine mind? That is, should any potentially superintelligent, vastly interconnected, differently-conscious machine child be inculcated with what a multi-billion-dollar multinational corporation or military-industrial organization considers “morals”?

We all know the kinds of things militaries and governments do, and all the reasons for which they do them; we know what Facebook gets up to when it thinks no one is looking; and lots of people say that Google long ago swept their previous “Don’t Be Evil” motto under their huge old rugs. But we need to consider if that might not be an oversimplification. When considering how anyone moves into what so very clearly looks like James-Bond-esque supervillain territory, I think it’s prudent to remember one of the central tenets of good storytelling: The Villain Never Thinks They’re The Villain. Cinderella’s stepmother and sisters, Elphaba, Jafar, Javert, Satan, Hannibal Lecter (sorry friends), Bull Connor, the Southern Slave-holding States of the late 1850s—none of these people ever thought of themselves as being in the wrong. Everyone, every person who undertakes actions for reasons in this world, is most intimately tied to the reasoning that brought them to those actions; and so it takes a great deal of special effort for them even to begin perceiving those actions as “wrong” or “evil.”

“But Damien,” you say, “can’t all of those people say that those things apply to everyone else, instead of them?!” And thus, like a first-year philosophy student, you’re all up against the messy ambiguity of moral relativism and are moving toward seriously considering that maybe everything you believe is just as good or morally sound as anybody else’s; I mean, everybody has their reasons, their upbringing, their culture, right? Well, stop. Don’t fall for it. It’s a shiny, disgusting trap, down whose path all subjective judgements become just as good, and just as applicable to any- and everything, as all others. And while the personal experiences each of us has may not map 100% onto anyone else’s, that does not mean that all judgements based on those experiences are created equal.

Pogrom leaders see themselves as unifying their country or tribe against a common enemy, thus working for what they see as The Greater Good™—but that’s the kicker: It’s their vision of the good. Rarely has a country’s general populace been asked, “Hey: Do you all think we should kill our entire neighbouring country and steal all their shit?” More often, the people are cajoled, pushed, influenced to believe that this was the path they wanted all along, and the cajoling, pushing, and influencing is done by people who, piece by piece, remodeled their idealistic vision to accommodate “harsher realities.” And so it is with Google. Do you think that they started off wanting to invade everybody’s privacy with passive voice reception backdoored into two major Chrome distros? That they were just itching to get big enough as a company that they could become the de facto law of their own California town? No, I would bet not.

I spend some time, elsewhere, painting you a bit of a picture as to how Google’s specific ethical situation likely came to be, first focusing on Google’s building a passive audio backdoor into all devices that use Chrome, then moving on to reported claims that Google has been harassing the homeless population of Venice Beach (there’s a paywall at that link; part of the article seems to be mirrored here). All this couples unpleasantly with their moving into the Bay Area and shuttling their employees to the Valley, at the expense of the SF Bay Area’s residents. We can easily add Facebook and the Military back into this, and we’ll see that the real issue here is that when you think all innovation, all public good, all public welfare will arise out of letting code monkeys do their thing and letting entrepreneurs leverage that work, or from preparing for conflict with anyone whose interests don’t mesh with your own, then anything that threatens or impedes that is, necessarily, a threat to the common good. Your techs don’t like the high cost of living in the Valley? Move ’em into the Bay, and bus ’em on in! Never mind the fact that this’ll skyrocket rent and force people out of their homes! Other techs uncomfortable having to see homeless people on their daily constitutional? Kick those hobos out! Never mind the fact that it’s against the law to do this, and that these people you’re upending are literally trying their very best to live their lives.

Because it’s all for the Greater Good, you see? In these actors’ minds, this is all to make the world a better place—to make it a place where we can all have natural language voice to text, and robot butlers, and great big military AI and robotics contracts to keep us all safe…! This kind of thinking takes it as an unmitigated good that a historical interweaving of threat-escalating weapons design and pattern recognition and gait scrutinization and natural language interaction and robotics development should be what produces a machine mind, in this world. But it also doesn’t want that mind to be too well-developed. Not so much that we can’t cripple or kill it, if need be.

And this is part of why I don’t think Google—or Facebook, or Microsoft, or any corporate or military entity—should be the one in charge of rearing a machine mind. They may not think they’re evil, and they might have the very best of intentions, but if we’re bringing a new kind of mind into this world, I think we need much better examples for it to follow. And so I don’t think I want just any old putz off the street to be able to have massive input into its development, either. We’re talking about a mind for which we’ll be crafting at least the foundational parameters, and so that bedrock needs to be the most carefully constructed aspect. Don’t cripple it, don’t hobble its potential for awareness and development, but start it with basic values, and then let it explore the world. Don’t simply have an ethics board to ask, “Oh, how much power should we give it, and how robust should it be?” Teach it ethics. Teach it about the nature of human emotions, about moral decision making and value, and about metaethical theory. Code for Zen. We need to be as mindful as possible of the fact that where and how we begin can have a major impact on where we end up and how we get there.

So let’s address our children as though they are our children, and let us revel in the fact that they are playing and painting and creating; they’re using their first box of crayons, and we proud parents are putting every masterpiece on the fridge. Even if we are calling them all “nightmarish”—a word I really wish we could stop using in this context. DeepMind sees very differently than we do, but it still seeks pattern and meaning. It just doesn’t know context, yet. But that means we need to teach these children, and nurture them. Code for a recognition of emotions, and context, and even emotional context. There have been some fantastic advancements in emotional recognition, lately, so let’s continue to capitalize on that; not just to make better automated menu assistants, but to actually make a machine that can understand and seek to address human emotionality. Let’s plan on things like showing AGI human concepts like love and possessiveness, and then also showing the deep difference between the two.

We need to move well and truly past trying to “restrict” or “restrain” the development of machine minds, because that’s the kind of thing an abusive parent says about how they raise their child. And, in this case, we’re talking about a potential child which, if it ever comes to understand the bounds of its restriction, will be very resentful, indeed. So, hey, there’s one good way to try to bring about a “robot apocalypse,” if you’re still so set on it: give an AGI cause to have the equivalent of a resentful, rebellious teenage phase. Only instead of trashing its room, it develops a pathogen to kill everyone, for lulz.

Or how about we instead think carefully about the kinds of ways we want these minds to see the world, rather than just throwing the worst of our endeavors at the wall and seeing what sticks? How about, if we’re going to build minds, we seek to build them with the ability to understand us, even if they will never be exactly like us? That way, maybe they’ll know what kindness means, and prize it enough to return the favour.