phenomenology


Failures of “AI” Promise: Critical Thinking, Misinformation, Prosociality, & Trust

So, new research shows that a) LLM-type “AI” chatbots are extremely persuasive and able to get voters to shift their positions, and that b) the more effective they are at that, the less they hew to factual reality.

Which: Yeah. A bunch of us told you this.

Again: the Purpose of LLM-type “AI” is not to tell you the truth or to lie to you, but to provide you with an answer-shaped something you have been statistically determined to be more likely to accept, irrespective of facts— this is the reason I call them “bullshit engines.” And it’s what makes them perfect for accelerating dis- and misinformation and persuasive propaganda; perfect for authoritarian and fascist aims of destabilizing trust in expertise. Now, the fear here isn’t necessarily that candidate A gets elected over candidate B (see commentary from the paper authors, here). The real problem is the loss of even the willingness to try to build shared consensus reality— i.e., the “AI”-enabled epistemic crisis point we’ve been staring down for about a decade.
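To make the “answer-shaped something” point a bit more concrete, here is a deliberately toy sketch in Python. It is purely illustrative: every candidate answer and every score in it is made up for the example, and it is in no way a claim about how any actual model is implemented. The point is just that a selector whose only objective is predicted user acceptance never has to consult truth at all.

```python
# Toy illustration only: a made-up "acceptance-optimizing" selector.
# Nothing here reflects any real LLM's architecture or training; it just
# shows why optimizing for what a user will accept is not optimizing for truth.

candidates = [
    {"answer": "Confident, flattering, and wrong.",
     "predicted_acceptance": 0.92, "factually_correct": False},
    {"answer": "Hedged, complicated, and correct.",
     "predicted_acceptance": 0.41, "factually_correct": True},
    {"answer": "Blunt correction of the user's view.",
     "predicted_acceptance": 0.18, "factually_correct": True},
]

def select_answer(cands):
    # The objective is predicted user acceptance; "factually_correct"
    # is never consulted anywhere in the selection.
    return max(cands, key=lambda c: c["predicted_acceptance"])

best = select_answer(candidates)
print(best["answer"])             # the confident, flattering, wrong one
print(best["factually_correct"])  # False, and the objective never noticed
```

Whether the output happens to line up with reality is incidental to the objective, which is the whole problem.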

Other preliminary results show that overreliance on “generative AI” actively harms critical thinking skills, degrading not just trust in, but the ability to critically engage with, determine the value of, categorize, and intentionally and sincerely consider new ways of organizing and understanding facts to produce knowledge. Further, users actively reject less sycophantic versions of “AI” and get increasingly hostile toward/less likely to help or be helped by other actual humans because said humans aren’t as immediately sycophantic. Taken together, these factors create cycles of psychological (and emotional) dependence on tools that Actively Harm Critical Thinking And Human Interaction.

What better dirt in which for disinformation to grow?

The design, cultural deployment, embedded values, and structural affordances of “AI” have also been repeatedly demonstrated to harm both critical skills development and, now, the structure and maintenance of the fabric of social relationships, in terms of mutual trust and the desire and ability to learn from each other. That is, students are more suspicious of teachers who use “AI,” and teachers are increasingly on edge about the idea that their students might be using “AI,” and so, in the inimitable words and delivery of Kurt Russell:

Kurt Russell as MacReady from The Thing, a white man with shoulder-length hair and a long scruff beard, wearing grey and olive drab, looking exhausted and sitting next to a bottle of J&B Rare Blend Scotch whisky and a pint glass 1/3 full of the same, saying into a microphone, “Nobody trusts anybody now. And we’re all very tired.”

Combine all of the above with what I’ve repeatedly argued about the impact of “AI” on the spread of dis- and misinformation, consensus knowledge-making, authoritarianism, and the eugenicist, fascist, and generally bigoted tendencies embedded in all of it—and well… It all sounds pretty anti-pedagogical and anti-social to me.

And I really don’t think it’s asking too much to require that all of these demonstrated problems be seriously and meticulously addressed before anyone advocating for their implementation in educational and workplace settings is allowed to go through with it.

Like… That just seems sensible, no?

The current paradigm of “AI” encodes and recapitulates all of these things, but previous technosocial paradigms did too, and if these facts had been addressed back then— in the culture of technology specifically, and in our sociotechnical culture writ large— then it might not still be like this, today.

But it also doesn’t have to stay like this. It genuinely does not.

We can make these tools differently. We can train people earlier and more consistently to understand the current models of “AI,” reframing notions of “AI Literacy” away from “how to use it” and toward an understanding of how they function and what they actually can and cannot do. We can make it clear that what they produce is not truth, not facts, not even lies, but always bullshit, even when it seems to conform to factual reality. We can train people— students, yes, but also professionals, educators, and wider communities— to understand how bias confirmation and optimization work, and how propaganda, marketing, and psychological manipulation work.

The more people learn about what these systems do, what they’re built from, how they’re trained, and the quite frankly alarming amount of water and energy it has taken and is projected to take to develop and maintain them, the more those same people resist the force and coercion that corporations, and even universities and governments, seem to think passes for transparent, informed, meaningful consent.

Like… researchers are highlighting that the current trajectory of “AI” energy and water use will not only undo several years of tech sector climate gains, but will also prevent corporations such as Google, Amazon, and Meta from meeting carbon-neutral and water-positive goals. And that’s without considering the infrastructural capture of those resources in the process of building said data centers in the first place (the authors list this as being outside their scope); with that data, the picture is worse.

As many have noted, environmental impacts are among the major concerns of those who say that they are reticent to use or engage with all things “artificial intelligence”— even sparking public outcry across the country, with more people joining calls that any and all new “AI” training processes and data centers be built to run on existing and expanded renewables. We are increasingly finding the general public wants their neighbours and institutions to engage in meaningful consideration of how we might remediate or even prevent “AI’s” potential social, environmental, and individual intellectual harms.

But, also increasingly, we find that institutional pushes— including the conclusions of the Nature article on energy use trends— tend toward an “adoption and dominance at all costs” model of “AI,” which in turn seem to be founded on the circular reasoning that “we have to use ‘AI’ so that and because it will be useful.” Recurrent directives from the federal government, like the threat to sue any state that regulates “AI,” the “AI Action Plan,” and the Executive Order on “Preventing Woke AI In The Federal Government,” use terms such as “woke” and “ideological bias” explicitly to mean “DEI,” “CRT,” “transgenderism,” and even the basic philosophical and sociological concept of intersectionality. Even the very idea of “Criticality” is increasingly conflated with mere “negativity,” rather than investigation, analysis, and understanding, and standards-setting bodies’ recommendations are shelved before they see the light of day.

All this even as what more and more people say they want and need are processes which depend on and develop nuanced criticality— which allow and help them to figure out how to question when, how, and perhaps most crucially whether we should make and use “AI” tools, at all. Educators, both as individuals and in various professional associations, seem to increasingly disapprove of the uncritical adoption of these same models and systems. And so far roughly 140 technology-related organizations have joined a call for a people- rather than business-centric model of AI development.

Nothing about this current paradigm of “AI” is either inevitable or necessary. We can push for increased rather than decreased local, state, and national regulatory scrutiny and standards, and prioritize the development of standards, frameworks, and recommendations designed to prevent and repair the harms of “generative AI.” Working together, we can develop new paradigms of “AI” systems which are inherently integrated with and founded on different principles, like meaningful consent, sustainability, and deep understandings of the bias and harm that can arise in “AI,” even down to the sourcing and framing of training data.

Again: Change can be made, here. When we engage as many people as possible, right at the point of their increasing resistance, in language and concepts which reflect their motivating values, we can gain ground towards new ways of building “AI” and other technologies.

Audio, Slides, and Transcript for my 2024 SEAC Keynote

Back in October, I was the keynote speaker for the Society for Ethics Across the Curriculum’s 25th annual conference. My talk was titled “On Truth, Values, Knowledge, and Democracy in the Age of Generative ‘AI,’” and it touched on a lot of things that I’ve been talking and writing about for a while (in fact, maybe the title is familiar?), but especially in the past couple of years. It covered deepfakes, misinformation, disinformation, the social construction of knowledge, artifacts, and consensus reality, and more. And I know it’s been a while since the talk, but it’s not like these things have gotten any less pertinent, these past months.

As a heads-up, I didn’t record the Q&A because I didn’t get the audience’s permission ahead of time, and considering how much of this is about consent, that’d be a little weird, yeah? Anyway, it was in the Q&A section where we got deep into the environmental concerns of water and power use, including ways to use those facts to get through to students who possibly don’t care about some of the other elements. There were, honestly, a lot of really trenchant questions from this group, and I was extremely glad to meet and think with them. Really hoping to do so more in the future, too.

A Black man with natural hair shaved on the sides & long in the center, grey square-frame glasses, wearing a silver grey suit jacket, a grey dress shirt with a red and black Paisley tie, and a black N95 medical mask stands on a stage behind a lectern and in front of a large screen showing a slide containing the words “On Truth, Values, Knowledge, and Democracy in the Age of Generative ‘AI’; Dr. Damien Patrick Williams, Assistant Professor of Philosophy, Assistant Professor of Data Science, University of North Carolina at Charlotte,” and an image of the same man, unmasked, with a beard, wearing a silver-grey pinstriped waistcoat & a dark grey shirt w/ a purple paisley tie, in which bookshelves filled w/ books & framed degrees are visible in the background

Me at the SEAC conference; photo taken by Jason Robert (see alt text for further detailed description).

Below, you’ll find the audio, the slides, and the lightly edited transcript (so please forgive any typos and grammatical weirdnesses). All things being equal, a goodly portion of the concepts in this should also be getting worked into a longer paper coming out in 2025.

Hope you dig it.

Until Next Time.

Continue Reading

Appendix A: An Imagined and Incomplete Conversation about “Consciousness” and “AI,” Across Time

Every so often, I think about one of the best things my advisor and committee members let me write and include in my actual doctoral dissertation, and I smile a bit, and since I keep wanting to share it out into the world, I figured I should put it somewhere more accessible.

So with all of that said, we now rejoin An Imagined and Incomplete Conversation about “Consciousness” and “AI,” Across Time, already (still, seemingly unendingly) in progress:

René Descartes (1637):
The physical and the mental have nothing to do with each other. Mind/soul is the only real part of a person.

Norbert Wiener (1948):
I don’t know about that “only real part” business, but the mind is absolutely the seat of the command and control architecture of information and the ability to reflexively reverse entropy based on context, and input/output feedback loops.

Alan Turing (1952):
Huh. I wonder if what computing machines do can reasonably be considered thinking?

Wiener:
I dunno about “thinking,” but if you mean “pockets of decreasing entropy in a framework in which the larger mass of entropy tends to increase,” then oh for sure, dude.

John Von Neumann (1958):
Wow things sure are changing fast in science and technology; we should maybe slow down and think about this before that change hits a point beyond our ability to meaningfully direct and shape it— a singularity, if you will.

Clynes & Kline (1960):
You know, it’s funny you should mention how fast things are changing because one day we’re gonna be able to have automatic tech in our bodies that lets us pump ourselves full of chemicals to deal with the rigors of space; btw, have we told you about this new thing we’re working on called “antidepressants?”

Gordon Moore (1965):
Right now an integrated circuit has 64 transistors, and they keep getting smaller, so if things keep going the way they’re going, in ten years they’ll have 65 THOUSAND. :-O

Donna Haraway (1991):
We’re all already cyborgs bound up in assemblages of the social, biological, and technological, in relational reinforcing systems with each other. Also do you like dogs?

Ray Kurzweil (1999):
Holy Shit, did you hear that?! Because of the pace of technological change, we’re going to have a singularity where digital electronics will be indistinguishable from the very fabric of reality! They’ll be part of our bodies! Our minds will be digitally uploaded immortal cyborg AI Gods!

Tech Bros:
Wow, so true, dude; that makes a lot of sense when you think about it; I mean maybe not “Gods” so much as “artificial super intelligences,” but yeah.

90’s TechnoPagans:
I mean… Yeah? It’s all just a recapitulation of The Art in multiple technoscientific forms across time. I mean (*takes another hit of salvia*) if you think about the timeless nature of multidimensional spiritual architectures, we’re already—

DARPA:
Wait, did that guy just say something about “Uploading” and “Cyborg/AI Gods?” We got anybody working on that?? Well GET TO IT!

Disabled People, Trans Folx, BIPOC Populations, Women:
Wait, so our prosthetics, medications, and relational reciprocal entanglements with technosocial systems of this world in order to survive make us cyborgs?! :-O

[Simultaneously:]

Kurzweil/90’s TechnoPagans/Tech Bros/DARPA:
Not like that.
Wiener/Clynes & Kline:
Yes, exactly.

Haraway:
I mean it’s really interesting to consider, right?

Tech Bros:
Actually, if you think about the bidirectional nature of time, and the likelihood of simulationism, it’s almost certain that there’s already an Artificial Super Intelligence, and it HATES YOU; you should probably try to build it/never think about it, just in case.

90’s TechnoPagans:
…That’s what we JUST SAID.

Philosophers of Religion (To Each Other):
…Did they just Pascal’s Wager Anselm’s Ontological Argument, but computers?

Timnit Gebru and other “AI” Ethicists:
Hey, y’all? There’s a LOT of really messed up stuff in these models you started building.

Disabled People, Trans Folx, BIPOC Populations, Women:
Right?

Anthony Levandowski:
I’m gonna make an AI god right now! And a CHURCH!

The General Public:
Wait, do you people actually believe this?

Microsoft/Google/IBM/Facebook:
…Which answer will make you give us more money?

Timnit Gebru and other “AI” Ethicists:
…We’re pretty sure there might be some problems with the design architectures, too…

Some STS Theorists:
Honestly this is all a little eugenics-y— like, both the technoscientific and the religious bits; have you all sought out any marginalized people who work on any of this stuff? Like, at all??

Disabled People, Trans Folx, BIPOC Populations, Women:
Hahahahah! …Oh you’re serious?

Anthony Levandowski:
Wait, no, nevermind about the church.

Some “AI” Engineers:
I think the things we’re working on might be conscious, or even have souls.

“AI” Ethicists/Some STS Theorists:
Anybody? These prejudices???

Wiener/Tech Bros/DARPA/Microsoft/Google/IBM/Facebook:
“Souls?” Pfffft. Look at these whackjobs, over here. “Souls.” We’re talking about the technological singularity, mind uploading into an eternal digital universal superstructure, and the inevitability of timeless artificial super intelligences; who said anything about “Souls?”

René Descartes/90’s TechnoPagans/Philosophers of Religion/Some STS Theorists/Some “AI” Engineers:

[Scene]


Read more of this kind of thing at:
Williams, Damien Patrick. Belief, Values, Bias, and Agency: Development of and Entanglement with “Artificial Intelligence.” PhD diss., Virginia Tech, 2022. https://vtechworks.lib.vt.edu/handle/10919/111528.

Much of my research deals with the ways in which bodies are disciplined and how they go about resisting that discipline. In this piece, adapted from one of the answers to my PhD preliminary exams written and defended two months ago, I “name the disciplinary strategies that are used to control bodies and discuss the ways that bodies resist those strategies.” Additionally, I address how strategies of embodied control and resistance have changed over time, and how identifying and existing as a cyborg and/or an artificial intelligence can be understood as a strategy of control, resistance, or both.

In Making Natural Knowledge, Jan Golinski spends some time discussing the different understandings of the word “discipline” and the role their transformations have played in the definition and transmission of knowledge as both artifacts and culture. In particular, he uses the space in section three of chapter two to discuss the role Foucault has played in historical understandings of knowledge, categorization, and disciplinarity. Using Foucault’s work in Discipline and Punish, we can draw an explicit connection between the various meanings of “discipline” and the ways that bodies are individually, culturally, and socially conditioned to fit particular modes of behavior, and the specific ways marginalized peoples are disciplined, relating to their various embodiments.

This will demonstrate how modes of observation and surveillance lead to certain types of embodiments being deemed “illegal” or otherwise unacceptable and thus further believed to be in need of methodologies of entrainment, correction, or reform in the form of psychological and physical torture, carceral punishment, and other means of institutionalization.

Locust, “Master and Servant (Depeche Mode Cover)”

Continue Reading

Below are the slides, audio, and transcripts for my talk ‘”Any Sufficiently Advanced Neglect is Indistinguishable from Malice”: Assumptions and Bias in Algorithmic Systems,’ given at the 21st Conference of the Society for Philosophy and Technology, back in May 2019.

(Cite as: Williams, Damien P. ‘”Any Sufficiently Advanced Neglect is Indistinguishable from Malice”: Assumptions and Bias in Algorithmic Systems;’ talk given at the 21st Conference of the Society for Philosophy and Technology; May 2019)

Now, I’ve got a chapter coming out about this soon, which I can provide as a preprint draft if you ask, and which can be cited as “Constructing Situated and Social Knowledge: Ethical, Sociological, and Phenomenological Factors in Technological Design,” appearing in Philosophy And Engineering: Reimagining Technology And Social Progress, Guru Madhavan, Zachary Pirtle, and David Tomblin, eds., forthcoming from Springer, 2019. But I wanted to get the words I said in this talk up onto some platforms where people can read them, as soon as possible, for a couple of reasons.

First, the Current Occupants of the Oval Office have very recently taken the policy position that algorithms can’t be racist, something which they’ve done in direct response to things like Google’s Hate Speech-Detecting AI being biased against black people, and Amazon claiming that its facial recognition can identify fear, without ever accounting for, I dunno, cultural and individual differences in fear expression?

[Free vector image of a white, female-presenting person, from head to torso, with biometric facial recognition patterns on her face; incidentally, go try finding images—even illustrations—of a non-white person in a facial recognition context.]


All these things taken together are what made me finally go ahead and get the transcript of that talk done, and posted, because these are events and policy decisions about which I a) have been speaking and writing for years, and b) have specific inputs and recommendations about, and c) which are, frankly, wrongheaded and outright hateful.

And I want to spend time on it because I think what doesn’t get through in many of our discussions is that it’s not just about how Artificial Intelligence, Machine Learning, or Algorithmic instances get trained, but also about the processes by which, and the cultural environments in which, HUMANS are increasingly taught/shown/environmentally encouraged/socialized into what counts as the “right way” to build and train said systems.

That includes classes and instruction, it includes the institutional culture of the companies, and it includes the policy landscape in which decisions about funding get made, because that drives how people have to talk and write and think about the work they’re doing, and that constrains what they will even attempt to do or understand.

All of this is cumulative, accreting into institutional epistemologies of algorithm creation. It is a structural and institutional problem.

So here are the Slides:

The Audio:

[Direct Link to Mp3]

And the Transcript is here below the cut:

Continue Reading

2017 SRI Technology and Consciousness Workshop Series Final Report

So, as you know, back in the summer of 2017 I participated in SRI International’s Technology and Consciousness Workshop Series. This series was an eight-week program of workshops on the current state of the field around, the potential future paths toward, and the moral and social implications of the notion of conscious machines. To do this, we brought together a rotating cast of dozens of researchers in AI, machine learning, psychedelics research, ethics, epistemology, philosophy of mind, cognitive computing, neuroscience, comparative religious studies, robotics, psychology, and much more.

Image of a rectangular name card with a stylized "Technology & Consciousness" logo at the top, the name Damien Williams in bold in the middle, and SRI International italicized at the bottom; to the right, a blurry, wavy image of what appears to be a tree with a person standing next to it and another tree in the background to the left, all partially mirrored in a surface at the bottom of the image.

[Image of my name card from the Technology & Consciousness workshop series.]

We traveled from Arlington, VA, to Menlo Park, CA, to Cambridge, UK, and back, and while my primary role was that of conference co-ordinator and note-taker (that place in the intro where it says I “maintained scrupulous notes?” Think 405 pages/160,656 words of notes, taken over eight 5-day weeks of meetings), I also had three separate opportunities to present: Once on interdisciplinary perspectives on minds and mindedness; then on Daoism and Machine Consciousness; and finally on a unifying view of my thoughts across all of the sessions. In relation to this report, I would draw your attention to the following passage:

An objection to this privileging of sentience is that it is anthropomorphic “meat chauvinism”: we are projecting considerations onto technology that derive from our biology. Perhaps conscious technology could have morally salient aspects distinct from sentience: the basic elements of its consciousness could be different than ours.

All of these meetings were held under the auspices of the Chatham House Rule, which meant that there were many things I couldn’t tell you about them, such as the names of the other attendees, or what exactly they said in the context of the meetings. What I was able to tell you, however, was what I talked about, and I did, several times. But as of this week, I can give you even more than that.

This past Thursday, SRI released an official public report on all of the proceedings and findings from the 2017 SRI Technology and Consciousness Workshop Series, and they have told all of the participants that they can share said report as widely as they wish. Crucially, that means that I can share it with you. You can either click this link, here, or read it directly, after the cut.

Continue Reading

[This paper was prepared for the 2019 Towards Conscious AI Systems Symposium co-located with the Association for the Advancement of Artificial Intelligence 2019 Spring Symposium Series.

Much of this work derived from my final presentation at the 2017 SRI Technology and Consciousness Workshop Series: “Science, Ethics, Epistemology, and Society: Gains for All via New Kinds of Minds”.]

Abstract. This paper explores the moral, epistemological, and legal implications of multiple different definitions and formulations of human and nonhuman consciousness. Drawing upon research from race, gender, and disability studies, including the phenomenological basis for knowledge and claims to consciousness, I discuss the history of the struggles for personhood among different groups of humans, as well as nonhuman animals, and systems. In exploring the history of personhood struggles, we have a precedent for how engagements and recognition of conscious machines are likely to progress, and, more importantly, a roadmap of pitfalls to avoid. When dealing with questions of consciousness and personhood, we are ultimately dealing with questions of power and oppression as well as knowledge and ontological status—questions which require a situated and relational understanding of the stakeholders involved. To that end, I conclude with a call and outline for how to place nuance, relationality, and contextualization before and above the systematization of rules or tests, in determining or applying labels of consciousness.

Keywords: Consciousness, Machine Consciousness, Philosophy of Mind, Phenomenology, Bodyminds

[Overlapping images of an Octopus carrying a shell, a Mantis Shrimp on the sea floor, and a Pepper Robot]

Continue Reading

As you already know, we went to the second Juvet A.I. Retreat, back in September. If you want to hear several of us talk about what we got up to at the retreat, then you’re in luck, because here are several conversations conducted by Ben Byford of the Machine Ethics Podcast.

I am deeply grateful to Ben Byford for asking me to sit down and talk about this with him. I talk a great deal, and am surprisingly able to (cogently?) get on almost all of my bullshit—technology and magic and the occult, nonhuman personhood, the sham of gender and race and other social constructions of expected lived categories, the invisible architecture of bias, neurodiversity, and philosophy of mind—in a rather short window of time.

So that’s definitely something…

Continue Reading

Late last month, I was at Theorizing the Web, in NYC, to moderate Panel B3, “Bot Phenomenology,” a panel of people I was very grateful for and very lucky to be able to bring together. Johnathan Flowers, Emma Stamm, and Robin Zebrowski were my interlocutors in a discussion about the potential nature of nonbiological phenomenology. Machine consciousness. What robots might feel.

I led them through with questions like “What do you take phenomenology to mean?” and “what do you think of the possibility of a machine having a phenomenology of its own?” We discussed different definitions of “language” and “communication” and “body,” and unfortunately didn’t have a conversation about how certain definitions of those terms mean that what would be considered language between cats would be a cat communicating via signalling to humans.

It was a really great conversation, and the Live Stream video for this is here, and linked below (for now, but it may go away at some point, to be replaced by a static YouTube link; when I know that that’s happened, I will update links and embeds, here).

Continue Reading

[Direct Link to Mp3]

Above is the (heavily edited) audio of my final talk for the SRI Technology and Consciousness Workshop Series. The names and voices of other participants have been removed in accordance with the Chatham House Rule.

Below you’ll find the slide deck for my presentation, and below the cut you’ll find the Outline and my notes. For now, this will have to stand in for a transcript, but if you’ve been following the Technoccult Newsletter or the Patreon, then some of this will be strikingly familiar.

Continue Reading