damien patrick williams

Reimagining “AI’s” Environmental and Sociotechnical Materialities

There’s a new open-access book of collected essays called Reimagining AI for Environmental Justice and Creativity, and I happen to have an essay in it. The collection is made up of contributions from participants in the October 2024 “Reimagining AI for Environmental Justice and Creativity” panels and workshops put on by Jess Reia, MC Forelle, and Yingchong Wang, and I’ve included my essay here, for you. That said, I highly recommend checking out the rest of the book, because all the contributions are fantastic.

This work was co-sponsored by: The Karsh Institute Digital Technology for Democracy Lab, The Environmental Institute, and The School of Data Science, all at UVA. The videos for both days of the “Reimagining AI for Environmental Justice and Creativity” talks are now available, and you can find them at the Karsh Institute website, and also below, before the text of my essay.

All in all, I think these are some really great conversations on “AI” and environmental justice. They cover “AI”’s extremely material, practical aspects, the deeply philosophical aspects, and the necessary and fundamental connections between the two. These are crucial discussions to be having, especially right now.

Hope you dig it.

Continue Reading

It’s really disheartening and honestly kind of telling that in spite of everything, ChatGPT is actively marketing itself to students in the run-up to college finals season.

We’ve talked many (many) times before about the kinds of harm that can come from handing over too much epistemic and heuristic authority to systems built by people who have repeatedly, doggedly proven that they will a) buy into their own hype and b) refuse to ever question their own biases and hubris. But additionally, there have been at least two papers in the past few months alone, and more in the last two years (1, 2, 3), demonstrating that over-reliance on “AI” tools diminishes critical thinking capacity and prevents students from building the kinds of foundational skills which allow them to learn more complex concepts, adapt to novel situations, and grow into experts.

Screenshot of ChatGPT page: “ChatGPT Promo: 2 months free for students. ChatGPT Plus is now free for college students through May. Offer valid for students in the US and Canada.” [Buttons reading “Claim offer” and “Learn more”; an image of a pencil scrawling a scribbly, looping line.] “ChatGPT Plus is here to help you through finals.”

Screenshot of ChatGPT[.]com/students showing an introductory offer for college students during finals; captured 04/04/2025

That lack of expertise and capacity has a direct impact on people’s ability to discern facts, produce knowledge, and even participate in civic/public life. The diminishment of critical thinking skills makes people more susceptible to propaganda and other forms of dis- and misinformation— problems which, themselves, are already being exacerbated by the proliferation of “Generative AI” text and image systems and people not fully understanding them for the bullshit engines they are.

The abovementioned susceptibility allows authoritarian-minded individuals and groups to further degrade belief in shared knowledge and consensus reality and to erode trust in expertise, worsening the next turn of the cycle when it starts all over again.

All of this creates the very conditions by which authoritarians seek to cement their control: by undercutting the individual tools and social mechanisms which can empower the populace to understand and challenge the kinds of damage dictators, theocrats, fascists, and kleptocrats seek to do on the path to enriching themselves and consolidating power.

And here’s OpenAI flagrantly encouraging said over-reliance. The original post on LinkedIn even has an image of someone prompting ChatGPT to guide them on “mastering [a] calc 101 syllabus in two weeks.” So that’s nice.

No wait; the other thing… Terrible. It’s terrible.

Kate Rouch, Chief Marketing Officer at OpenAI, posted to LinkedIn: “ChatGPT Plus is free during finals! We can’t achieve our mission without empowering young people to use AI. Fittingly, today we launched our first scaled marketing campaign. The campaign shows students different ways to take advantage of ChatGPT as they study, work out, try to land jobs, and plan their summers. It also offers ChatGPT Plus’s more advanced capabilities to students for free through their finals. You’ll see creative on billboards, digital ads, podcasts, and more throughout the coming weeks. We hope you learn something useful! If you’re a college student in the US or Canada, you can claim the offer at www.chatgpt.com/students”

Screenshot of a LinkedIn post from OpenAI’s chief marketing officer; captured 04/04/2025.

Understand this. Push back against it. Reject its wholesale uncritical adoption and proliferation. Demand a more critical and nuanced stance on “AI” from yourself, from your representatives at every level, and from every company seeking to shove this technology down our throats.

Audio, Slides, and Transcript for my 2024 SEAC Keynote

Back in October, I was the keynote speaker for the Society for Ethics Across the Curriculum’s 25th annual conference. My talk was titled “On Truth, Values, Knowledge, and Democracy in the Age of Generative ‘AI,’” and it touched on a lot of things that I’ve been talking and writing about for a while (in fact, maybe the title is familiar?), but especially in the past couple of years. It covered deepfakes, misinformation, disinformation, the social construction of knowledge, artifacts, and consensus reality, and more. And I know it’s been a while since the talk, but it’s not like these things have gotten any less pertinent these past months.

As a heads-up, I didn’t record the Q&A because I didn’t get the audience’s permission ahead of time, and considering how much of this is about consent, that’d be a little weird, yeah? Anyway, it was in the Q&A section where we got deep into the environmental concerns of water and power use, including ways to use those facts to get through to students who possibly don’t care about some of the other elements. There were, honestly, a lot of really trenchant questions from this group, and I was extremely glad to meet and think with them. Really hoping to do so more in the future, too.

A Black man with natural hair shaved on the sides & long in the center, grey square-frame glasses, wearing a silver grey suit jacket, a grey dress shirt with a red and black Paisley tie, and a black N95 medical mask stands on a stage behind a lectern and in front of a large screen showing a slide containing the words “On Truth, Values, Knowledge, and Democracy in the Age of Generative ‘AI’; Dr. Damien Patrick Williams; Assistant Professor of Philosophy; Assistant Professor of Data Science; University of North Carolina at Charlotte,” and an image of the same man, unmasked, with a beard, wearing a silver-grey pinstriped waistcoat & a dark grey shirt w/ a purple paisley tie, in which bookshelves filled w/ books & framed degrees are visible in the background

Me at the SEAC conference; photo taken by Jason Robert (see alt text for further detailed description).

Below, you’ll find the audio, the slides, and the lightly edited transcript (so please forgive any typos and grammatical weirdnesses). All things being equal, a goodly portion of the concepts in this should also be getting worked into a longer paper coming out in 2025.

Hope you dig it.

Until Next Time.

Continue Reading

Below are the slides, audio, and transcripts for my talk “SFF and STS: Teaching Science, Technology, and Society via Pop Culture” given at the 2019 Conference for the Society for the Social Studies of Science, in early September.

(Cite as: Williams, Damien P. “SFF and STS: Teaching Science, Technology, and Society via Pop Culture,” talk given at the 2019 Conference for the Society for the Social Studies of Science, September 2019)

[Direct Link to the Mp3]

[Damien Patrick Williams]

Thank you, everybody, for being here. I’m going to stand a bit far back from this mic and project, I’m also probably going to pace a little bit. So if you can’t hear me, just let me know. This mic has ridiculously good pickup, so I don’t think that’ll be a problem.

So the conversation that we’re going to be having today is titled as “SFF and STS: Teaching Science, Technology, and Society via Pop Culture.”

I’m using the term “SFF” to stand for “science fiction and fantasy,” but we’re going to be looking at pop culture more broadly. Ultimately, though science fiction and fantasy have some of the most obvious entrees into discussions of STS, of how making, doing, culture, and society can influence technology, and of how the history of fictional worlds can help students understand the worlds that they’re currently living in, pop culture more generally is going to tie into the things that students care about in a way that I think is going to be pertinent to what we’re talking about today.

So why we are doing this:

Why are we teaching it with science fiction and fantasy? Why does this matter? I’ve been teaching off and on for 13 years: I’ve been teaching philosophy, I’ve been teaching religious studies, I’ve been teaching Science, Technology, and Society. And I’ve been coming to understand, as I’ve gone through my teaching process, that not only do I like pop culture, my students do, too. Because they’re people, and they’re embedded in culture. So that’s kind of shocking, I guess.

But what I’ve found is that one of the things that makes students care the absolute most about the things that you’re teaching them, especially when something can be as dry as logic, or as perhaps nebulous or unclear at first as, say, engineering cultures, is that if you give them something to latch on to, something that they are already familiar with, they will be more interested in it. If you can show them at the outset, “hey, you’ve already been doing this, you’ve already been thinking about this, you’ve already encountered this,” they will feel less reticent to engage with it.

Continue Reading

Below are the slides, audio, and transcripts for my talk ‘”Any Sufficiently Advanced Neglect is Indistinguishable from Malice”: Assumptions and Bias in Algorithmic Systems,’ given at the 21st Conference of the Society for Philosophy and Technology, back in May 2019.

(Cite as: Williams, Damien P. ‘”Any Sufficiently Advanced Neglect is Indistinguishable from Malice”: Assumptions and Bias in Algorithmic Systems;’ talk given at the 21st Conference of the Society for Philosophy and Technology; May 2019)

Now, I’ve got a chapter coming out about this, soon, which I can provide as a preprint draft if you ask, and which can be cited as “Constructing Situated and Social Knowledge: Ethical, Sociological, and Phenomenological Factors in Technological Design,” appearing in Philosophy And Engineering: Reimagining Technology And Social Progress. Guru Madhavan, Zachary Pirtle, and David Tomblin, eds. Forthcoming from Springer, 2019. But I wanted to get the words I said in this talk up onto some platforms where people can read them, as soon as possible, for a couple of reasons.

First, the Current Occupants of the Oval Office have very recently taken the policy position that algorithms can’t be racist, something which they’ve done in direct response to things like Google’s Hate Speech-Detecting AI being biased against Black people, and Amazon claiming that its facial recognition can identify fear, without ever accounting for, I dunno, cultural and individual differences in fear expression?

[Free vector image of a white, female-presenting person, from head to torso, with biometric facial recognition patterns on her face; incidentally, go try finding images—even illustrations—of a non-white person in a facial recognition context.]


All these things taken together are what made me finally go ahead and get the transcript of that talk done, and posted, because these are events and policy decisions about which I a) have been speaking and writing for years, and b) have specific inputs and recommendations about, and which are, c) frankly wrongheaded, and outright hateful.

And I want to spend time on it because I think what doesn’t get through in many of our discussions is that it’s not just about how Artificial Intelligence, Machine Learning, or Algorithmic instances get trained, but about the processes through which, and the cultural environments in which, HUMANS are increasingly taught/shown/environmentally encouraged/socialized into what they think is the “right way” to build and train said systems.

That includes classes and instruction, it includes the institutional culture of the companies, and it includes the policy landscape in which decisions about funding get made, because that drives how people have to talk and write and think about the work they’re doing, and that constrains what they will even attempt to do or even understand.

All of this is cumulative, accreting into institutional epistemologies of algorithm creation. It is a structural and institutional problem.

So here are the Slides:

The Audio:

[Direct Link to Mp3]

And the Transcript is here below the cut:

Continue Reading

2017 SRI Technology and Consciousness Workshop Series Final Report

So, as you know, back in the summer of 2017 I participated in SRI International’s Technology and Consciousness Workshop Series. This series was an eight-week program of workshops on the current state of the field around, the potential future paths toward, and the moral and social implications of the notion of conscious machines. To do this, we brought together a rotating cast of dozens of researchers in AI, machine learning, psychedelics research, ethics, epistemology, philosophy of mind, cognitive computing, neuroscience, comparative religious studies, robotics, psychology, and much more.

Image of a rectangular name card with a stylized “Technology & Consciousness” logo at the top, the name Damien Williams in bold in the middle, and SRI International italicized at the bottom; to the right, a blurry, wavy image of what appears to be a tree with a person standing next to it, and another tree in the background to the left, all partially mirrored in a surface at the bottom of the image.

[Image of my name card from the Technology & Consciousness workshop series.]

We traveled from Arlington, VA, to Menlo Park, CA, to Cambridge, UK, and back, and while my primary role was that of conference co-ordinator and note-taker (that place in the intro where it says I “maintained scrupulous notes”? Think 405 pages/160,656 words of notes, taken over eight 5-day weeks of meetings), I also had three separate opportunities to present: once on interdisciplinary perspectives on minds and mindedness; then on Daoism and Machine Consciousness; and finally on a unifying view of my thoughts across all of the sessions. In relation to this report, I would draw your attention to the following passage:

An objection to this privileging of sentience is that it is anthropomorphic “meat chauvinism”: we are projecting considerations onto technology that derive from our biology. Perhaps conscious technology could have morally salient aspects distinct from sentience: the basic elements of its consciousness could be different than ours.

All of these meetings were held under the auspices of the Chatham House Rule, which meant that there were many things I couldn’t tell you about them, such as the names of the other attendees, or what exactly they said in the context of the meetings. What I was able to tell you, however, was what I talked about, and I did, several times. But as of this week, I can give you even more than that.

This past Thursday, SRI released an official public report on all of the proceedings and findings from the 2017 SRI Technology and Consciousness Workshop Series, and they have told all of the participants that they can share said report as widely as they wish. Crucially, that means that I can share it with you. You can either click this link, here, or read it directly, after the cut.

Continue Reading