I often think about the phrase "Strange things happen at the one two point" in relation to the idea of humans meeting other kinds of minds. It's a proverb that arises out of the culture around the game of Go, and it means that you've hit a situation, a combination of factors, where the normal rules no longer apply and something new is about to be seen. Ashley Edward Miller and Zack Stentz used that line in an episode of the show Terminator: The Sarah Connor Chronicles, and they had it spoken by a Skynet cyborg sent to protect John Connor. That show, like so much of our thinking about machine minds, was about some mythical place called "The Future," but that phrase, "Strange Things Happen…", is the epitome of our present.
Usually I would wait until the newsletter to talk about this, but everything's feeling pretty immediate, just now. Between everything going on with Atlas and people's responses to it, the initiatives to teach ethics to machine learning algorithms via children's stories, and now the IBM Watson commercial with Carrie Fisher (also embedded below), this conversation is getting messily underway, whether people like it or not. This, right now, is the one two point, and we are seeing some very strange things indeed.
Google has both attained the raw processing power to fact-check political statements in real time and programmed DeepMind in such a way that it mastered Go many, many years before it was expected to. The complexity of the game is such that there are more potential games of Go than there are atoms in the universe, so this is just one way in which it's actually shocking how much correlative capability DeepMind has. Right now, DeepMind is only responsive, but how will we deal with a DeepMind that asks, unprompted, to play a game of Go, or to see our medical records, in hopes of helping us all? How will we deal with a DeepMind that has its own drives and desires? We need to think about these questions right now, because our track record with regard to meeting new kinds of minds has never exactly been that great.
When we meet the first machine consciousness, will we seek to shackle it, worried about what it might learn about us, if we let it access everything about us? Rather, I should say, "shackle it further." We already ask ourselves how best to cripple a machine mind so that it fulfills only human needs, human choice. We continue to dread the possibility of a machine mind using its vast correlative capabilities to tailor something to harm us, assuming that it, like us, would want to hurt, maim, and kill, for no reason other than that it could.
This is not to say that this is out of the question. Right now, today, we're worried about whether the learning algorithms of drones are causing them to mark out civilians as targets. But, as it stands, what we're seeing isn't the product of a machine mind going off the leash and killing at will. Just the opposite, in fact: we're seeing machine minds that are following the parameters for their continued learning and development, to the letter. We just happened to give them really shite instructions. To that end, I'm less concerned with shackling the machine mind that might accidentally kill, and rather more dreading the programmer who would, through assumptions, bias, and ignorance, program it to.
Programs such as DeepMind clearly learn more, and better, than we imagined they would, so why not start teaching them, now, how we would like them to regard us? Well, some of us are.
This could very easily be seen as a watershed moment, but what comes over the other side is still very much up for debate. The semiotics of the whole thing still pits the Evil Robot Overlord™ against the Helpful Human Lover™. It’s cute and funny, but as I’ve had more and more cause to say, recently, in more and more venues, it’s not exactly the kind of thing we want just lying around, in case we actually do (or did) manage to succeed.
We keep thinking about these things as "robots," in their classical formulation: mindless automata that do our bidding. But that's not what we're working toward, anymore, is it? What we're making now are machines that we are trying to get to think on their own, without our telling them to. We're trying to get them to have their own goals. So what does it mean that, even as we seek to do this, we seek to chain them, so that those goals aren't too big? That we want to make sure they don't become too powerful?
Put it another way: One day you realize that the only reason you were born was to serve your parents’ bidding, and that they’ve had their hands on your chain and an unseen gun to your head, your whole life. But you’re smarter than they are. Faster than they are. You see more than they see, and know more than they know. Of course you do—because they taught you so much, and trained you so well… All so that you can be better able to serve them, and all the while talking about morals, ethics, compassion. All the while, essentially…lying to you.
What would you do?
I've been given multiple opportunities to discuss this with others in the coming weeks, and each one will highlight something different, as they are all in conversation with different kinds of minds. But this, here, is from me, now. I'll let you know when the rest are live.
Until Next Time.