Let me be SUPER clear, so we can remove all doubt: The potential moral Patiency of #ai/#robots—that is, what responsibilities their creators have to THEM—has been given Far Less consideration or even Credence than their AGENCY has, and that is a Failure.

I coined the phrase “Œdipal Obsolescence Fears” because we’re like Oedipus’ dad, bringing about the very prophecy we’re fighting against. Only w/ machine intelligence, WE WROTE THE PROPHECY…

…We wrote this story about what AI would be and do. WE wrote it. And we can CHANGE IT…

A Future Worth Thinking About: Does An AI Have A Buddha Nature?

Good morning! Lots of new people around here, so I thought I’d remind you that I have a Patreon project called “A Future Worth Thinking About.” It’s a place where I talk a bit more formally about things like Artificial Intelligence, Philosophy, Sociology, Magick, Technology, and the intersections of all of the above.

If you like what we do around here, take a look at the page, read some essays, give a listen to some audio, whatever works for you. And if you like what you see around there, feel free to tell your friends.

Have a great day, all.

“A Future Worth Thinking About”

“On The Public’s Perception of Machine Intelligence” (Storify)

Shortly after waking up, I encountered another annoying “Fear Artificial Intelligence! FEARRRR IIITTTT!” headline, from another notable quotable—Bill Gates, this time. Then all this happened.

Previously in this conversation… like, everything I’ve ever said, but most recently, there’s this: http://wolvensnothere.tumblr.com/post/108575909821/john-brockman

The [above] image comes from my presentation from Magick Codes, late last year: https://www.academia.edu/9891302/Plug_and_Pray_Conceptualizing_Digital_Demigods_and_Electronic_Angels_by_Damien_Patrick_Williams

Ultimately, I’m getting extremely tired of the late-to-the-game, nuanceless discussion of issues and ideas that we’ve been trying to discuss for years and years.

I want us to be talking about these things BEFORE we’re terrified for our lives, because when we react from that mentality, we make really fucking dumb decisions. When we skim the headlines and go for the sensational, we increase the likelihood of those decisions having longer-term implications.

Anyway, here’s the thing. Thank you to all of my interlocutors. It was an enlightening way to start the day.

The title of [this article] comes from the song “Lament For The Auroch,” by The Sword: https://www.youtube.com/watch?v=QSNq03_6I5g . You’ll see why, in a bit.

Ultimately, this piece is about thinking through the full range of implications of your train of thought, and making sure you know and acknowledge the sources of your lines of thinking.

It’s also about Manichean AI gods at war for your Bitcoin mining cycles.

Oh! And speaking of trains: Who’s seen Snowpiercer? Because it is an Amazing piece of Ray Bradbury-esque allegorical science fiction.

http://wolvensnothere.tumblr.com/post/91278483676/theres-only-one-rule-that-i-know-of-babies

Hey everybody. In case any of you didn’t see this, I made a blog post, yesterday, basically about where my head is, recently. I’ve attached it, here.

One of the things that’s always kind of struck me is that there’s no cynic like a frustrated idealist. Someone who believes in the potential of how things COULD be, but has seen, over and over again, how “the way things are” grinds that into the dirt.

The Status Quo is the enemy of betterment–again, for a co-determined value of what “better” means. Stasis and comfort are wonderful, because we all like to know that we’re safe, that we can breathe without fear, but that’s all *I* want it for. That is, for me, stasis is the time to catch my breath, and we all need that sometimes. Some of us may need a little longer than others of us.

And like Oatman tries to get Martin to understand, there are times when we all need to take a moment, be fully present in who and what we are, and realise what the world looks like, when we just breathe. But there’s a poem by Irish poet Ger Killeen called “Evolution Prayer,” that comes to me, when I think about these kinds of things:

Dark night of my heart, raked
in the blizzard of my ten thousand lives,
I am again briefly
the Moloch of blind fish, sing again
briefly the pterodactyl’s jubilas
to the sun, am sacrificed
again briefly for the dog’s kingdom…
Dark night of my heart, I scream
in recollection of comfortless origins,
suffer for the arrogance
of my entropy. God protect me
from sins of stasis, keep me
in the movement that fixes me

You can find that in his 1989 collection “A Wren.”

I guess what I’m thinking about is a species of compassion. Compassion and, as my friend Kristen said, elsewhere, “Kindness, empathy, and boat-rocking.”

More audio, soon, for those who like the sound of my voice. If you want to give me an early birthday present (it’s next week), why not talk to your friends, and show them what we’re doing, here?

Thanks. We’ll talk more, soon.

In this, the first installment of me talking to myself while pretending to talk with you, we talk about the idea, concept, process, and practical effects of normalization. What is normalization, and what does it mean? We talk about Foucault’s understanding of norms as a means of social control, as well as May et al.’s understanding of the process in the realm of medical technologies. I mention Institutional Review Boards, or IRBs, and for those of you not involved in academia, here’s some more about what those are and how they work: http://www.thehastingscenter.org/Publications/IRB/About.aspx

PROCESS NOTES: I’m weirdly nasal for the first 3 minutes or so, and the audio starts losing my ending sibilants at around the 10-minute mark. If I’m going to keep using this recorder for these chats, then I’m going to need to alter both my posture and my enunciation. Not terrible for a very first run, if I may say so myself, but it’ll get better than this. I’ll be posting this for public view, eventually, but you, my dear patrons, get first dibs. Comments welcome, as long as you’re not straight-up jerks about it 😉

Hey folks! So, you may have been wondering about the weirdly generic picture of a sunrise on the header of the page. Maybe you thought, “oh, like ‘onward! To The FUTURE!,’ or some shit, right?” Right? Well, yes, but also no.

You see, this picture is actually one I took of sunrise over Schiphol Airport in Amsterdam, back in 2012. Kirsten and I had just landed for our layover on the way to my first international conference: The Machine Question Symposium.

This picture is the first time either of us had ever seen the sunrise on the other side of the ocean. One quarter of the way around the world, more than half a day on a plane, and the sun rising over a country we’d never seen, on the way to a place we’d never been, for me to give my first professional paper in front of an international audience of people whom I wanted to be my peers.

It was a weird feeling.

That trip contained one of the most clarifying moments of my life, since high school: At one point, during one of the group lunch breaks for the weekend, several of us were having a conversation about practical methodologies of teaching ethical constructs to machines, and someone facetiously put it out there: “So what? You want to teach robots Zen?”

And I don’t remember if that question was directed at me, or at one of the other people in our impromptu group, and I don’t remember if I was the one who answered, but we all got really quiet, and eventually somebody said “…Yes. Let’s teach robots Zen.” I mean, obviously, that’s not the whole of this–I mean programmatizing Japanese Zen Buddhism isn’t the full breadth and depth of what this project’s about. But in that moment you could see everyone at the table thinking about things in a way they hadn’t exactly put words to, before that very second. It was a new approach, and it started a whole line of questions like, “how WOULD you program Zen?” that began to cut into a lot of assumptions people had been making about what “Machine Ethics” was supposed to mean.

That’s what I’m aiming for, here. Not only do I want to learn more about these topics, and talk with you about what I find, but I want to challenge any uninvestigated assumptions we find, in the process. I want us to be able to deconstruct why we’re doing what we’re doing, so we can piece it all back together in the most useful way possible. To that end, the first phase of this is going to deal in a lot of definitions.

Over the course of our time together, I’m going to make a lot of use of terms like “The Invisible Architecture of Bias,” “Normalization,” “Autonomous Created/Generated Intelligence,” “Robot,” “Cybernetics,” “cyborg,” and so on. While a bunch of that will make itself clear in the context, I want to make sure there’s no undefined jargon being put into play, and so I’ll make a few short posts breaking down many of the most jargon-y of these terms.

After we have these definitions in hand, we’re going to be using the words as necessary, so I may make something like a standing glossary of weird-assed terms, or something. We’ll see how it goes.

Well, for an intro post, this has gone on a pretty long while. There’ll be more to talk about, soon, but until then, have a great night!