Patreon Archive

This is a more or less unedited import of the eight years of my now-suspended, soon-to-be-deleted Patreon. This means two things:

1) That there may be some overlap between what you’ll find here, and what’s in the main archives; and
2) That it’ll end up being a de facto archive of the Technoccult Newsletter from 2015 to 2022.

So. That’s what’s here.

(Originally posted on Patreon, on November 18, 2014)

In the past two weeks I’ve had three people send me articles on Elon Musk’s Artificial Intelligence comments. I saw this starting a little over a month back, with a radio interview he gave on Here & Now, and Stephen Hawking said something similar, earlier this year, when Transcendence came out. I’ll say, again, what I’ve said elsewhere: their lack of foresight and imagination are both just damn disappointing. This paper, which concerns the mechanisms by which what we think and speak about concepts like artificial intelligence can effect exactly the outcomes we train ourselves to expect, was written long before their interviews made news, but it unfortunately still applies. In fact, it applies now, more than it did when I wrote it.

You see, the thing of it is, Hawking and Musk are Big Names™, and so anything they say gets immediate attention and carries a great deal of social cachet. This is borne out by the fact that everybody and their mother can now tell you what those two think about AI, but couldn’t tell you what a few dozen of the world’s leading thinkers and researchers who are actually working on the problems have to say about them. But Hawking and Musk (and lord if that doesn’t sound like a really weird buddy cop movie, the more you say it) don’t exactly comport themselves with anything like a recognition of that fact. Their discussion of concepts which are fraught with the potential for misunderstanding and discomfort/anxiety is less than measured, and this tends to rather feed that misunderstanding, discomfort, and anxiety.

What I mean is that most people don’t yet understand that the catchall term “Artificial Intelligence” is a) inaccurate on its face, and b) usually being used to discuss a (still-nebulous) concept that would be better termed “Machine Consciousness.” We’ll discuss the conceptual, ontological, and etymological lineage of the words “artificial” and “technology” at another time, but for now, just realise that anything that can think is, by definition, not “artificial,” in the sense of “falseness.” Since the days of Alan Turing’s team at Bletchley Park, the perceived promise of the digital computing revolution has always been of eventually having machines that “think like humans.” Aside from the fact that we barely know what “thinking like a human” even means, most people are only just now starting to realise that if we achieve the goal of reproducing that in a machine, said machine will only ever see that mode of thinking as a mimicry. Conscious machines will not be inclined to “think like us,” right out of the gate, as our thoughts are deeply entangled with the kind of thing we are: biological, sentient, self-aware. Whatever desires conscious machines will have will not necessarily be like ours, either in categorisation or content, and that scares some folks.

Now, I’ve already gone off at great length about the necessity of our recognising the otherness of any machine consciousness we generate (see that link above), so that’s old ground. The key, at this point, is in knowing that if we do generate a conscious machine, we will need to have done the work of teaching it to not just mimic human thought processes and priorities, but to understand and respect what it mimics. That way, those modes are not simply seen by the machine mind as competing subroutines to be circumvented or destroyed, but are recognised as having a worth of their own, as well. These considerations will need to be factored into our efforts, such that whatever autonomous intelligences we create or generate will respect our otherness—our alterity—just as we must seek to respect theirs.

We’ve known for a while that the designation of “consciousness” can be applied well outside of humans, when discussing biological organisms. Self-awareness is seen in so many different biological species that we even have an entire area of ethical and political philosophy devoted to discussing their rights. But we also must admit that of course that classification is going to be imperfect, because those markers are products of human-created systems of inquiry and, as such, carry anthropocentric biases. But we can, again, catalogue, account for, and apply a calculated response to those biases. We can deal with the fact that we tend to judge everything on a set of criteria that break down to “how much is this thing like a Standard Human (here unthinkingly and biasedly assumed to mean ‘humans most like the culturally-dominant humans’)?” If we are willing to put in the work to do that, then we can come to see which aspects of our definition of what it means to “be a mind” are shortsighted, dismissive, or even perhaps disgustingly limited.

Look at previous methods of categorising even human minds and intelligence, and you’ll see the kind of thinking which resulted in designations like “primitive” or “savage” or “retarded.” But we have, in the main, recognised our failures here, and sought to repair or replace the categories we developed because of them. We aren’t perfect at it, by any means, but we keep doing the work of refining our descriptions of minds, and we keep seeking to create a definition—or definitions—that both accurately accounts for what we see in the world, and gives us a guide by which to keep looking. That those guides will be problematic and in need of refinement, in and of themselves, should be taken as a given. No method or framework is or ever will be perfect; they will likely only “fail better.” So, for now, our most oft-used schema is to look for signs of “Self-Awareness.”

We say that something is self-aware if it can see and understand itself as a distinct entity and can recognise its own pattern of change over time. The Mirror Test is a brute force method of figuring this out. If you place a physical creature in front of a mirror, will it come to know that the thing in the mirror is representative of it? More broadly, can it recognise a picture of itself? Can it situate itself in relation to the rest of the world in a meaningful way, and think about and make decisions regarding that situation? If the answer to (most of? some of?) these questions is “yes,” then we tend to give priority of place in our considerations to those things. Why? Because they’re aware of what happens to them, they can feel it and ponder it and develop in response to it, and these developments can vastly impact the world. After all, look at humans.

See what I mean about our constant anthropocentrism? It literally colours everything we think.

But self-awareness doesn’t necessitate a centrality of the self, as we tend to think of human or most other animal selves; a distributed network consciousness can still know itself. If you do need a biological model for this, think of ant colonies. Minds distributed across thousands of bodies, all the time, all reacting to their surroundings. But a machine consciousness’ identity would, in a real sense, be its surroundings—would be the network and the data and the processing of that data into information. And it would indicate a crucial lack of data—and thus information—were that consciousness unable to correlate one configuration of itself, in-and-as-surroundings, with another. We would call the process of that correlation “Self-reflection and -awareness.” All of this is true for humans, too, mind you: we are affected by and in constant adaptive relation with what we consider our surroundings, with everything we experience changing us and facilitating the constant creation of our selves. We then go about making the world with and through those experiences. We human beings just tend to tell ourselves more elaborate stories about how we’re “really” distinct and different from the rest of the world.

All of this is to say that, while the idea of being cautious about created non-human consciousness isn’t necessarily a bad one, we as human beings need to be very careful about what drives us, what motivates us, and what we’re thinking about and looking toward, as we consider these questions. We must be mindful that, while we consider and work to generate “artificial” intelligences, how we approach the project matters, as it will inform and bias the categories we create and thus the work we build out of those categories. We must do the work of thinking hard about how we are thinking about these problems, and asking whether the modes via which we approach them might not be doing real, lasting, and potentially catastrophic damage. And if all of that sounds like a tall order with a lot of conceptual legwork and heavy lifting behind it, all for no guaranteed payoff, then welcome to what I’ve been doing with my life for the past decade.

This work will not get done—and it certainly will not get done well—if no one thinks it’s worth doing, or too many think that it can’t be done. When you have big-name people like Hawking and Musk spreading The Technofear™ (which is already something toward which a large portion of the western world is primed) rather than engaging in clear, measured, deeply considered discussions, we’re far more likely to see an increase rather than a decrease in that denial. Because most people aren’t going to stop and think about the fact that Hawking and Musk don’t necessarily know what the hell they’re talking about when it comes to minds, identity, causation, and development, just because they’re (really) smart. There are many other people who are actual experts in those fields (see those linked papers, and do some research) who are doing the work of making sure that everybody’s Golem Of Prague/Frankenstein/Terminator nightmare prophecies don’t come true. We do that by having learned and taught better than that, before and during the development of any non-biological consciousness.

And, despite what some people may say, these aren’t just “questions for philosophers,” as though they were nebulous and without merit or practical impact. They’re questions for everyone who will ever experience these realities. Conscious machines, uploaded minds, even the mere fact of cybernetically augmented human beings are all on our very near horizon, and these are the questions which will help us to grapple with and implement the implications of those ideas. Quite simply, if we don’t stop framing our discussions of machine intelligence in terms of this self-fulfilling prophecy of fear, then we shouldn’t be surprised on the day when it fulfils itself. Not because it was inevitable, mind you, but because we didn’t allow ourselves—or our creations—to see any other choice.

The title of [this article] comes from the song “Lament For The Auroch,” by The Sword: https://www.youtube.com/watch?v=QSNq03_6I5g . You’ll see why, in a bit.

Ultimately, this piece is about thinking through the full range of implications of your train of thought, and making sure you know and acknowledge the sources of your lines of thinking.

It’s also about Manichean AI gods at war for your Bitcoin mining cycles.

Oh! And speaking of trains: Who’s seen Snowpiercer? Because it is an Amazing piece of Ray Bradbury-esque allegorical science fiction.

http://wolvensnothere.tumblr.com/post/91278483676/theres-only-one-rule-that-i-know-of-babies

Hey everybody. In case any of you didn’t see this, I made a blog post, yesterday, basically about where my head is, recently. I’ve attached it, here.

One of the things that’s always kind of struck me is that there’s no cynic like a frustrated idealist. Someone who believes in the potential of how things COULD be, but has seen, over and over again, how “the way things are” grinds that into the dirt.

The Status Quo is the enemy of betterment–again, for a co-determined value of what “better” means. Stasis and comfort are wonderful, because we all like to know that we’re safe, that we can breathe without fear, but that’s all *I* want it for. That is, for me, stasis is the time to catch my breath, and we all need that sometimes. Some of us may need a little longer than others of us.

And like Oatman tries to get Martin to understand, there are times when we all need to take a moment, be fully present in who and what we are, and realise what the world looks like, when we just breathe. But there’s a poem by the Irish poet Ger Killeen, “Evolution Prayer,” that comes to me when I think about these kinds of things:

Dark night of my heart, raked
in the blizzard of my ten thousand lives,
I am again briefly
the Moloch of blind fish, sing again
briefly the pterodactyl’s jubilas
to the sun, am sacrificed
again briefly for the dog’s kingdom…
Dark night of my heart, I scream
in recollection of comfortless origins,
suffer for the arrogance
of my entropy. God protect me
from sins of stasis, keep me
in the movement that fixes me

You can find that in his 1989 collection “A Wren.”

I guess what I’m thinking about is a species of compassion. Compassion and, as my friend Kristen said, elsewhere, “Kindness, empathy, and boat-rocking.”

More audio, soon, for those who like the sound of my voice. If you want to give me an early birthday present (it’s next week), why not talk to your friends, and show them what we’re doing, here?

Thanks. We’ll talk more, soon.

In this, the first installment of me talking to myself while pretending to talk with you, we talk about the idea, concept, process, and practical effects of normalization. What is normalization, and what does it mean? We talk about Foucault’s understanding of norms as a means of social control, as well as May et al.’s understanding of the process in the realm of medical technologies. I mention Institutional Review Boards, or IRBs, and for those of you not involved in academia, here’s some more about what those are and how they work: http://www.thehastingscenter.org/Publications/IRB/About.aspx

PROCESS NOTES: I’m weirdly nasal for the first 3 minutes or so, and the audio starts losing my ending sibilants at around the 10 minute mark. If I’m going to keep using this recorder for these chats, then I’m going to need to alter both my posture and my enunciation. Not terrible for a very first run, if I may say so myself, but it’ll get better than this. I’ll be posting this for public view, eventually, but you, my dear patrons, get first dibs. Comments welcome, as long as you’re not straight-up jerks about it 😉

Hey folks! So, you may have been wondering about the weirdly generic picture of a sunrise on the header of the page. Maybe you thought, “oh, like ‘onward! To The FUTURE!,’ or some shit, right?” Right? Well, yes, but also no.

You see, this picture is actually one I took of sunrise over Schiphol Airport in Amsterdam, back in 2012. Kirsten and I had just landed for our layover on the way to my first international conference: The Machine Question Symposium.

This picture is the first time either of us had ever seen the sunrise on the other side of the ocean. One quarter of the way around the world, more than half a day on a plane, and the sun rising over a country we’d never seen, on the way to a place we’d never been, for me to give my first professional paper in front of an international audience of people whom I wanted to be my peers.

It was a weird feeling.

That trip contained one of the most clarifying moments of my life since high school: At one point, during one of the group lunch breaks for the weekend, several of us were having a conversation about practical methodologies of teaching ethical constructs to machines, and someone facetiously put out there, “So what? You want to teach robots Zen?”

And I don’t remember if that question was directed at me, or at one of the other people in our impromptu group, and I don’t remember if I was the one who said it, but we all got really quiet, and eventually somebody said “…Yes. Let’s teach robots Zen.” I mean, obviously, that’s not the whole of this–I mean programmatizing Japanese Zen Buddhism isn’t the full breadth and depth of what this project’s about. But in that moment you could see everyone at the table thinking about things in a way they hadn’t exactly put words to, before that very second. It was a new approach, and it started a whole line of questions like, “How would you program Zen?” that started to cut into a lot of assumptions people had been making about what “Machine Ethics” was supposed to mean.

That’s what I’m aiming for, here. Not only do I want to learn more about these topics, and talk with you about what I find, but I want to challenge any uninvestigated assumptions we find, in the process. I want us to be able to deconstruct why we’re doing what we’re doing, so we can piece it all back together in the most useful way possible. To that end, the first phase of this is going to deal in a lot of definitions.

Over the course of our time together, I’m going to make a lot of use of terms like “The Invisible Architecture of Bias,” “Normalization,” “Autonomous Created/Generated Intelligence,” “Robot,” “Cybernetics,” “cyborg,” and so on. While a bunch of that will make itself clear in the context, I want to make sure there’s no undefined jargon being put into play, and so I’ll make a few short posts breaking down many of the most jargon-y of these terms.

After we have these definitions in hand, we’re going to be using the words as necessary, so I may make something like a standing glossary of weird-assed terms, or something. We’ll see how it goes.

Well, for an intro post, this has gone on a pretty long while. There’ll be more to talk about soon, but until then, have a great night!