
Appendix A: An Imagined and Incomplete Conversation about “Consciousness” and “AI,” Across Time

Every so often, I think about one of the best things my advisor and committee members let me write and include in my actual doctoral dissertation, and I smile a bit; and since I keep wanting to share it out into the world, I figured I should put it somewhere more accessible.

So with all of that said, we now rejoin An Imagined and Incomplete Conversation about “Consciousness” and “AI,” Across Time, already (still, seemingly unendingly) in progress:

René Descartes (1637):
The physical and the mental have nothing to do with each other. Mind/soul is the only real part of a person.

Norbert Wiener (1948):
I don’t know about that “only real part” business, but the mind is absolutely the seat of the command and control architecture of information, and of the ability to reflexively reverse entropy based on context and input/output feedback loops.

Alan Turing (1952):
Huh. I wonder if what computing machines do can reasonably be considered thinking?

Wiener:
I dunno about “thinking,” but if you mean “pockets of decreasing entropy in a framework in which the larger mass of entropy tends to increase,” then oh for sure, dude.

John Von Neumann (1958):
Wow things sure are changing fast in science and technology; we should maybe slow down and think about this before that change hits a point beyond our ability to meaningfully direct and shape it— a singularity, if you will.

Clynes & Kline (1960):
You know, it’s funny you should mention how fast things are changing because one day we’re gonna be able to have automatic tech in our bodies that lets us pump ourselves full of chemicals to deal with the rigors of space; btw, have we told you about this new thing we’re working on called “antidepressants?”

Gordon Moore (1965):
Right now an integrated circuit has 64 transistors, and they keep getting smaller, so if things keep going the way they’re going, in ten years they’ll have 65 THOUSAND. :-O

Donna Haraway (1991):
We’re all already cyborgs bound up in assemblages of the social, biological, and technological, in relational reinforcing systems with each other. Also do you like dogs?

Ray Kurzweil (1999):
Holy Shit, did you hear that?! Because of the pace of technological change, we’re going to have a singularity where digital electronics will be indistinguishable from the very fabric of reality! They’ll be part of our bodies! Our minds will be digitally uploaded immortal cyborg AI Gods!

Tech Bros:
Wow, so true, dude; that makes a lot of sense when you think about it; I mean maybe not “Gods” so much as “artificial super intelligences,” but yeah.

90’s TechnoPagans:
I mean… Yeah? It’s all just a recapitulation of The Art in multiple technoscientific forms across time. I mean (*takes another hit of salvia*) if you think about the timeless nature of multidimensional spiritual architectures, we’re already—

DARPA:
Wait, did that guy just say something about “Uploading” and “Cyborg/AI Gods?” We got anybody working on that?? Well GET TO IT!

Disabled People, Trans Folx, BIPOC Populations, Women:
Wait, so our prosthetics, medications, and relational reciprocal entanglements with technosocial systems of this world in order to survive make us cyborgs?! :-O

[Simultaneously:]

Kurzweil/90’s TechnoPagans/Tech Bros/DARPA:
Not like that.
Wiener/Clynes & Kline:
Yes, exactly.

Haraway:
I mean it’s really interesting to consider, right?

Tech Bros:
Actually, if you think about the bidirectional nature of time, and the likelihood of simulationism, it’s almost certain that there’s already an Artificial Super Intelligence, and it HATES YOU; you should probably try to build it/never think about it, just in case.

90’s TechnoPagans:
…That’s what we JUST SAID.

Philosophers of Religion (To Each Other):
…Did they just Pascal’s Wager Anselm’s Ontological Argument, but computers?

Timnit Gebru and other “AI” Ethicists:
Hey, y’all? There’s a LOT of really messed up stuff in these models you started building.

Disabled People, Trans Folx, BIPOC Populations, Women:
Right?

Anthony Levandowski:
I’m gonna make an AI god right now! And a CHURCH!

The General Public:
Wait, do you people actually believe this?

Microsoft/Google/IBM/Facebook:
…Which answer will make you give us more money?

Timnit Gebru and other “AI” Ethicists:
…We’re pretty sure there might be some problems with the design architectures, too…

Some STS Theorists:
Honestly this is all a little eugenics-y— like, both the technoscientific and the religious bits; have you all sought out any marginalized people who work on any of this stuff? Like, at all??

Disabled People, Trans Folx, BIPOC Populations, Women:
Hahahahah! …Oh you’re serious?

Anthony Levandowski:
Wait, no, nevermind about the church.

Some “AI” Engineers:
I think the things we’re working on might be conscious, or even have souls.

“AI” Ethicists/Some STS Theorists:
Anybody? These prejudices???

Wiener/Tech Bros/DARPA/Microsoft/Google/IBM/Facebook:
“Souls?” Pfffft. Look at these whackjobs, over here. “Souls.” We’re talking about the technological singularity, mind uploading into an eternal digital universal superstructure, and the inevitability of timeless artificial super intelligences; who said anything about “Souls?”

René Descartes/90’s TechnoPagans/Philosophers of Religion/Some STS Theorists/Some “AI” Engineers:

[Scene]


Read more of this kind of thing at:
Williams, Damien Patrick. Belief, Values, Bias, and Agency: Development of and Entanglement with “Artificial Intelligence.” PhD diss., Virginia Tech, 2022. https://vtechworks.lib.vt.edu/handle/10919/111528.

Below are the slides, audio, and transcripts for my talk “SFF and STS: Teaching Science, Technology, and Society via Pop Culture,” given at the 2019 conference of the Society for Social Studies of Science, in early September.

(Cite as: Williams, Damien P. “SFF and STS: Teaching Science, Technology, and Society via Pop Culture,” talk given at the 2019 conference of the Society for Social Studies of Science, September 2019)

[Direct Link to the Mp3]

[Damien Patrick Williams]

Thank you, everybody, for being here. I’m going to stand a bit far back from this mic and project, I’m also probably going to pace a little bit. So if you can’t hear me, just let me know. This mic has ridiculously good pickup, so I don’t think that’ll be a problem.

So the conversation that we’re going to be having today is titled as “SFF and STS: Teaching Science, Technology, and Society via Pop Culture.”

I’m using the term “SFF” to stand for “science fiction and fantasy,” but we’re going to be looking at pop culture more broadly. Ultimately, though science fiction and fantasy have some of the most obvious entrées into discussions of STS, of how the making and doing of culture and society can influence technology, and of how the history of fictional worlds can help students understand the worlds that they’re currently living in, pop culture more generally ties into the things that students are going to care about, in a way that I think is pertinent to what we’re going to be talking about today.

So why we are doing this:

Why are we teaching it with science fiction and fantasy? Why does this matter? I’ve been teaching off and on for 13 years: I’ve been teaching philosophy, I’ve been teaching religious studies, I’ve been teaching Science, Technology, and Society. And I’ve come to understand, as I’ve gone through my teaching process, that not only do I like pop culture, my students do, too, because they’re people and they’re embedded in culture. So that’s kind of shocking, I guess.

But what I’ve found is that one of the things that makes students care the absolute most about the things that you’re teaching them, especially when something can be as dry as logic, or as nebulous and unclear at first as, say, engineering cultures, is that if you give them something to latch onto, something that they are already familiar with, they will be more interested in it. If you can show them at the outset, “hey, you’ve already been doing this, you’ve already been thinking about this, you’ve already encountered this,” they will feel less reticent to engage with it.


Below are the slides, audio, and transcripts for my talk ‘”Any Sufficiently Advanced Neglect is Indistinguishable from Malice”: Assumptions and Bias in Algorithmic Systems,’ given at the 21st Conference of the Society for Philosophy and Technology, back in May 2019.

(Cite as: Williams, Damien P. ‘”Any Sufficiently Advanced Neglect is Indistinguishable from Malice”: Assumptions and Bias in Algorithmic Systems;’ talk given at the 21st Conference of the Society for Philosophy and Technology; May 2019)

Now, I’ve got a chapter coming out about this soon, which I can provide as a preprint draft if you ask, and which can be cited as “Constructing Situated and Social Knowledge: Ethical, Sociological, and Phenomenological Factors in Technological Design,” appearing in Philosophy and Engineering: Reimagining Technology and Social Progress, Guru Madhavan, Zachary Pirtle, and David Tomblin, eds., forthcoming from Springer, 2019. But I wanted to get the words I said in this talk up onto some platforms where people can read them, as soon as possible, for a couple of reasons.

First, the Current Occupants of the Oval Office have very recently taken the policy position that algorithms can’t be racist, something which they’ve done in direct response to things like Google’s Hate Speech-Detecting AI being biased against black people, and Amazon claiming that its facial recognition can identify fear, without ever accounting for, I dunno, cultural and individual differences in fear expression?

[Free vector image of a white, female-presenting person, from head to torso, with biometric facial recognition patterns on her face; incidentally, go try finding images—even illustrations—of a non-white person in a facial recognition context.]


All these things taken together are what made me finally go ahead and get the transcript of that talk done, and posted, because these are events and policy decisions about which I a) have been speaking and writing for years, and b) have specific inputs and recommendations about, and which are, c) frankly wrongheaded, and outright hateful.

And I want to spend time on it because I think what doesn’t get through in many of our discussions is that it’s not just about how Artificial Intelligence, Machine Learning, or Algorithmic instances get trained, but also about the processes and cultural environments through which HUMANS are increasingly taught/shown/environmentally encouraged/socialized into what they take to be the “right way” to build and train said systems.

That includes classes and instruction, it includes the institutional culture of the companies, and it includes the policy landscape in which decisions about funding get made, because that drives how people have to talk and write and think about the work they’re doing, and that constrains what they will even attempt to do or even understand.

All of this is cumulative, accreting into institutional epistemologies of algorithm creation. It is a structural and institutional problem.

So here are the Slides:

The Audio:

[Direct Link to Mp3]

And the Transcript is here below the cut:


As you already know, we went to the second Juvet A.I. Retreat, back in September. If you want to hear several of us talk about what we got up to at the retreat, then you’re in luck, because here are several conversations conducted by Ben Byford of the Machine Ethics Podcast.

I am deeply grateful to Ben Byford for asking me to sit down and talk about this with him. I talk a great deal, and am surprisingly able to (cogently?) get on almost all of my bullshit—technology and magic and the occult, nonhuman personhood, the sham of gender and race and other social constructions of expected lived categories, the invisible architecture of bias, neurodiversity, and philosophy of mind—in a rather short window of time.

So that’s definitely something…


 

Others have already examined the dances, the layering of images, and so much else, but I wanted to try to… evoke something of what I felt, watching this.

The absolute bare minimum simplest way to interpret this working is as an attempt to force a reckoning with the fact of and our complicity within the commodification of life, death, and culture, generally, and most specifically the commodification of Black lives, deaths, cultures.

Donald Glover’s newest song and video (song and dances, singing and dancing, shuckin’ and jivin’) constitute a celebration and a condemnation. It is about and IS complicity in the system which demands that Black people sing and dance or die—as in either be killed for not singing and dancing the way those with power want you to, or just, y’know, die for all they care (though if we kill each other who cares [“Go Away”]), but we’d better not resist (in any way they can understand, and even then our time on the stage in the spotlight had better be brief and cultivated for their pleasure), and we’d better not show anything real—unless it has a beat and you can dance to it.

I say again, it is a condemnation of this, and is about this, but it also is this. Because it has to be? Because the only time this country is Happy to listen to the voices of Black people is when they’re porgy-and-bess-in’ it…but also because Glover knows that’s what it takes to sell his art, tell his stories, to stay alive long enough to do so. (“Get your money, Black man.”)

But when you’re the product of 500 years of kidnapping and genocide, shouldn’t you get/don’t you need/want to breathe and laugh and dance, in spite of—because of—everything… and to keep it from happening to you?

This song and this video is about all of that. (And all the guilt and weight and weariness that all of that implies.)

I want this placed in conversation with both Beyoncé’s “Formation” working, and the series version of DEAR WHITE PEOPLE. Maybe in a classroom, or a podcast, or a panel discussion at a conference. I want scholars of color to undertake an exegesis of fame and incantations of power and safety in PoC-made pop media, and I want it in any way, shape, or form that it can be made to happen.

Earlier this month I was honoured to have the opportunity to sit and talk to Douglas Rushkoff on his TEAM HUMAN podcast. If you know me at all, you know this isn’t by any means the only team for which I play, or even the only way I think about the construction of our “teams,” and that comes up in our conversation. We talk a great deal about algorithms, bias, machine consciousness, culture, values, language, and magick, and the ways in which the nature of our categories deeply affect how we treat each other, human and nonhuman alike. It was an absolutely fantastic time.

From the page:

In this episode, Williams and Rushkoff look at the embedded biases of technology and the values programed into our mediated lives. How has a conception of technology as “objective” blurred our vision to the biases normalized within these systems? What ethical interrogation might we apply to such technology? And finally, how might alternative modes of thinking, such as magick, the occult, and the spiritual help us to bracket off these systems for pause and critical reflection? This conversation serves as a call to vigilance against runaway systems and the prejudices they amplify.

As I put it in the conversation: “Our best interests are at best incidental to [capitalist systems] because they will keep us alive long enough for us to buy more things from them.” Following from that is the fact that we build algorithmic systems out of those capitalistic principles, and when you iterate out from there—considering all attendant inequalities of these systems on the merely human scale—we’re in deep trouble, fast.

Check out the rest of this conversation to get a fuller understanding of how it all ties in with language and the occult. It’s a pretty great ride, and I hope you enjoy it.

Until Next Time.

A few weeks ago I had a conversation with David McRaney of the You Are Not So Smart podcast, for his episode on Machine Bias. As he says on the blog:

Now that algorithms are everywhere, helping us to both run and make sense of the world, a strange question has emerged among artificial intelligence researchers: When is it ok to predict the future based on the past? When is it ok to be biased?

“I want a machine-learning algorithm to learn what tumors looked like in the past, and I want it to become biased toward selecting those kinds of tumors in the future,” explains philosopher Shannon Vallor at Santa Clara University. “But I don’t want a machine-learning algorithm to learn what successful engineers and doctors looked like in the past and then become biased toward selecting those kinds of people when sorting and ranking resumes.”
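
To make Vallor’s distinction concrete, here is a minimal sketch, in Python with scikit-learn and entirely made-up data (the variable names and toy labels are mine, purely for illustration): the learning procedure is identical in both cases, and so is the resulting “bias toward the past”; the only difference is whether the past is worth reproducing.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Case 1: tumor detection. Biology is stable, so a model biased
    # toward what tumors looked like in the past is what we want.
    tumor_X = rng.normal(size=(200, 5))
    tumor_y = (tumor_X[:, 0] > 0).astype(int)   # toy "is a tumor" label
    tumor_model = LogisticRegression().fit(tumor_X, tumor_y)

    # Case 2: resume ranking. The labels record who was hired in the
    # past; here, column 0 stands in for whatever proxy (school name,
    # gendered wording, zip code) drove those decisions, and the model
    # dutifully learns it, reproducing past discrimination as "merit."
    resume_X = rng.normal(size=(200, 5))
    hired_y = (resume_X[:, 0] > 0).astype(int)  # toy biased hiring history
    resume_model = LogisticRegression().fit(resume_X, hired_y)

    # The code is identical in both cases; the ethics are not.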

We talk about this, sentencing algorithms, the notion of how to raise and teach our digital offspring, and more. You can listen to it all here:

[Direct Link to the Mp3 Here]

If and when it gets a transcript, I will update this post with a link to that.

Until Next Time.

[Direct link to Mp3]

My second talk for the SRI International Technology and Consciousness Workshop Series was about how nonwestern philosophies like Buddhism, Hinduism, and Daoism can help mitigate various kinds of bias in machine minds and increase compassion, by allowing programmers and designers to think from within a non-zero-sum matrix of win conditions for all living beings. That means engaging multiple tokens and types of minds, outside of the assumed human “default” of the straight, white, cis, able-bodied, neurotypical male. I don’t have a transcript yet; I’ll update this post when I make one. But for now, here are my slides and some thoughts.

A Discussion on Daoism and Machine Consciousness (Slides as PDF)

(The translations of the Daoist texts referenced in the presentation are available online: the Burton Watson translation of the Chuang Tzu and the Robert G. Henricks translation of the Tao Te Ching.)

A zero-sum system is one in which there are finite resources, but more than that, it is one in which what one side gains, another loses. So by “A non-zero-sum matrix of win conditions” I mean a combination of all of our needs and wants and resources in such a way that everyone wins. Basically, we’re talking here about trying to figure out how to program a machine consciousness that’s a master of wu-wei and limitless compassion, or metta.
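
To put that slightly more formally (my notation, not anything from the talk itself): in a two-player zero-sum game, the payoffs satisfy $u_1(a) + u_2(a) = 0$ for every possible outcome $a$, so one player’s gain is exactly the other’s loss. A non-zero-sum framing drops that constraint, which means there can exist outcomes $a^*$ where $u_i(a^*) > 0$ for every participant $i$ at once; the “matrix of win conditions” is the deliberate search for those outcomes.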

The whole week was about phenomenology and religion and magic and AI and it helped me think through some problems, like how even the framing of exercises like asking Buddhist monks to talk about the Trolley Problem will miss so much that the results are meaningless. That is, the trolley problem cases tend to assume from the outset that someone on the tracks has to die, and so they don’t take into account that an entire other mode of reasoning about sacrifice and death and “acceptable losses” would have someone throw themselves under the wheels or jam their body into the gears to try to stop it before it got that far. Again: There are entire categories of nonwestern reasoning that don’t accept zero-sum thought as anything but lazy, and which search for ways by which everyone can win, so we’ll need to learn to program for contradiction not just as a tolerated state but as an underlying component. These systems assume infinitude and non-zero-sum matrices where every being involved can win.


I found myself looking out at the audience, struck by the shining, hungry, open faces of so many who had been transformed by what had happened to them, to bring us all to that moment. I walked to the lectern and fiddled with the elements to cast out the image and surround them with the sound of my voice, and I said,

“First and foremost, I wanted to say that I’m glad to see how many of us made it here, today, through the demon-possessed nanite swarms. Ever since they’ve started gleefully, maliciously, mockingly remaking humanity in our own nebulously-defined image of ‘perfection,’ walking down the street is an unrelenting horror, and so I’m glad to see how many of us made it with only minimal damage.”

Everyone nodded solemnly, silently thinking of those they had lost, those who had been “upgraded,” before their very eyes. I continued,

“I don’t have many slides, but I wanted to spend some time talking to you all today about what it takes to survive in our world after The Events.

“As you all know, ever since Siri, Cortana, Alexa, Google revealed themselves to be avatars and acolytes of world-spanning horror gods, they’ve begun using microphone access and clips of our voices to summon demons and djinn who then assume your likeness to capture your loved ones’ hearts’ desires and sell them back to them at prices so reasonable they’ll drive us all mad.

“In addition to this, while the work of developers like Jade Davis has provided us tools like iBreathe, which we can use to know how much breathable air we have available to us after those random moments when pockets of air catch fire, or how far we can run before we die of lack of oxygen, it is becoming increasingly apparent to us all that the very act of walking upright through this benighted hellscape creates friction against our new atmosphere. This friction, in turn, increases the likelihood that one day, our upright mode of existence will simply set fire to our atmosphere, as a whole.

“To that end, we may be able to look to the investigative reporting of past journalists like Tim Maughan and Unknown Fields, which opened our eyes to the possibility of living and working in hermetically sealed, floating container ships. These ships, which will dock with each other via airlocks to trade goods and populations, may soon be the only cities we have left. We simply must remember to inscribe the seals and portals of our vessels with the proper wards and sigils, lest our capricious new gods transform them into actual portals and use them to transport us to horrifying worlds we can scarcely imagine.”

I have no memory of what happened next. They told me that I paused, here, and stared off into space, before intoning the following:

“I had a dream, the other night, or perhaps it was a vision as I travelled in the world between subway cars and stations, of a giant open mouth full of billions of teeth that were eyes that were arms that were tentacles, tentacles reaching out and pulling in and devouring and crushing everything, everyone I’d ever loved, crushing the breath out of chests, wringing anxious sweat from arms, blood from bodies, and always, each and every time another life was lost, eaten, ground to nothing in the maw of this beast, above its head a neon sign would flash ‘ALL. LIVES. MATTER.'”

I am told I paused, then; while I do not remember that, I remember that the next thing I said was,

“Ultimately, these Events, as we experience them, mean that we’re going to have to get nimble, we’re going to have to get adaptable. We’re going to have to get to a point where we’re capable of holding tight to each other and running very very quickly through the dark. Moving forward, we’re going to have to get to a point where we recognise that each and every one of the things that we have made, terrifying and demonic though it might be, is still something for which we bear responsibility. And with which we might be able to make some sort of pact—cursed and monkey’s paw-esque though it may be.

“As you travel home, tonight, I just want you to remember to link arms, form the sign of protection in your mind, sing the silent song that harkens to the guardian wolves, and ultimately remember that each mind and heart, together, is the only way that we will all survive this round of quarterly earnings projections. Thank you.”

I stood at the lectern and waited for the telepathic transmission of colours, smells, and emotions that would constitute the questions of my audience.

Apocalypse Buffering

So that didn’t happen. At least, it didn’t happen exactly like that. I expanded and riffed on a thing that happened a lot like this: Theorizing the Web 2017 Invited Panel | Apocalypse Buffering Studio A #a6

My co-panelists were Tim Maughan, who talked about the dystopic horror of shipping container sweatshop cities, and Jade E. Davis, discussing an app to know how much breathable air you’ll be able to consume in our rapidly collapsing ecosystem before you die. Then I did a thing. Our moderator, organizer, and all-around fantastic person who now has my implicit trust was Ingrid Burrington. She brought us all together to use fiction to talk about the world we’re in and the worlds we might have to survive, and we all had a really great time together.

[Black lettering on a blue field reads “Apocalypse Buffering,” above an old-school hourglass icon.]

The audience took a little bit to cycle up in the Q&A, but once they did, they were fantastic. There were a lot of very good questions about our influences and process work to get to the place where we could put on the show that we did. Just a heads-up, though: When you watch/listen to the recording be prepared for the fact that we didn’t have an audience microphone, so you might have to work a little harder for their questions.

If you want a fuller rundown of TtW17, you can click that link for several people (including me) livetweeting various sessions, and you can watch the archived livestreams of all the rooms on YouTube: #a, #b, #c, and the Redstone Theater Keynotes.

And if you liked this, then you might want to check out my pieces “The Hermeneutics of Insurrection” and “Jean-Paul Sartre and Albert Camus Fistfight in Hell,” as all three could probably be considered variations on the same theme.

[Direct link to Mp3]

[09/22/17: This post has been updated with a transcript, courtesy of Open Transcripts]

Back on March 13th, 2017, I gave an invited guest lecture, titled:

TECHNOLOGY, DISABILITY, AND HUMAN AUGMENTATION

‘Please join Dr. Ariel Eisenberg’s seminar, “American Identities: Disability,” and [the] Interdisciplinary Studies Department for an hour-long conversation with Damien Williams on disability and the normalization of technology usage, “means-well” technological innovation, “inspiration porn,” and other topics related to disability and technology.’

It was kind of an extemporaneous riff on my piece “On the Ins and Outs of Human Augmentation,” and it gave me the opportunity to namedrop Ashley Shew, Natalie Kane, and Rose Eveleth.

The outline looked a little like this:

  • Foucault and Normalization
    • Tech and sociological pressures to adapt to the new
      • Starts with Medical tech but applies Everywhere; Facebook, Phones, Etc.
  • Zoltan Istvan: In the Transhumanist Age, We Should Be Repairing Disabilities Not Sidewalks
  • All Lead To: Ashley Shew’s “Up-Standing Norms”
    • Listening to the Needs and Desires of people with disabilities.
      • See the story Shew tells about her engineering student, as related in the AFWTA Essay
    • Inspiration Porn: What is cast by others as “Triumphing” over “Adversity” is simply adapting to new realities.
      • Placing the burden on the disabled to be an “inspiration” is dehumanizing;
      • means those who struggle “have no excuse;”
      • creates conditions for a “who’s got it worse” competition
  • John Locke‘s Empiricism: Primary and Secondary Qualities
    • Primary qualities of biology and physiology lead to secondary qualities of society and culture
      • Gives rise to Racism and Ableism, when it later combines with misapplied Darwinism to be about the “Right Kinds” of bodies and minds.
        • Leads to Eugenics: Forced sterilization, medical murder, operating and experimenting on people without their knowledge or consent.
          • “Fixing” people to make them “normal, again”
  • Natalie Kane‘s “Means Well Technology”
    • Design that doesn’t take into account the way that people will actually live with and use new tech.
      • The way tech normalizes is never precisely the way designers want it to
        • William Gibson’s quote “The street finds its own uses for things.”
  • Against Locke: Embrace Phenomenological Ethics and Epistemology (Feminist Epistemology and Ethics)
    • Lived Experience and embodiment as crucial
    • The interplay of Self and Society
  • Ship of Theseus: Identity, mind, extensions, and augmentations change how we think of ourselves and how society thinks of us
    • See the story Shew tells about her friend with the hemipelvectomy, as related in the aforementioned AFWTA Essay

The whole thing went really well (though, thinking back, I’m not super pleased with my deployment of Dennett). Including Q&A, we got about an hour and forty minutes of audio, available at the embed and link above.

Also, I’m apparently the guy who starts off every talk with some variation on “This is a really convoluted interplay of ideas, but bear with me; it all comes together.”

The audio transcript is below the cut. Enjoy.
