
Failures of “AI” Promise: Critical Thinking, Misinformation, Prosociality, & Trust

So, new research shows that a) LLM-type “AI” chatbots are extremely persuasive and able to get voters to shift their positions, and that b) the more effective they are at that, the less they hew to factual reality.

Which: Yeah. A bunch of us told you this.

Again: the Purpose of LLM-type “AI” is not to tell you the truth or to lie to you, but to provide you with an answer-shaped something you are statistically determined to be more likely to accept, irrespective of facts— this is the reason I call them “bullshit engines.” And it’s what makes them perfect for accelerating dis- and misinformation and persuasive propaganda; perfect for authoritarian and fascist aims of destabilizing trust in expertise. Now, the fear here isn’t necessarily that candidate A gets elected over candidate B (see commentary from the paper authors, here). The real problem is the loss of even the willingness to try to build shared consensus reality— i.e., the “AI”-enabled epistemic crisis point we’ve been staring down for about a decade.

Other preliminary results show that overreliance on “generative AI” actively harms critical thinking skills, degrading not just trust in facts, but the ability to critically engage with them, determine their value, categorize them, and intentionally and sincerely consider new ways of organizing and understanding them to produce knowledge. Further, users actively reject less sycophantic versions of “AI” and grow increasingly hostile toward, and less likely to help or be helped by, other actual humans because said humans aren’t as immediately sycophantic. Taken together, these factors create cycles of psychological (and emotional) dependence on tools that Actively Harm Critical Thinking And Human Interaction.

What better dirt in which for disinformation to grow?

The design, cultural deployment, embedded values, and structural affordances of “AI” have also been repeatedly demonstrated to harm both critical skills development and, now, the structure and maintenance of the fabric of social relationships, in terms of mutual trust and the desire and ability to learn from each other. That is, students are more suspicious of teachers who use “AI,” and teachers are still, increasingly, on edge about the idea that their students might be using “AI,” and so, in the inimitable words and delivery of Kurt Russell:

Kurt Russell as MacReady from The Thing, a white man with shoulder-length hair and a long scruff beard, wearing grey and olive drab, looking exhausted and sitting next to a bottle of J&B Rare Blend Scotch whisky and a pint glass 1/3 full of the same, saying into a microphone, “Nobody trusts anybody now. And we’re all very tired.”

Combine all of the above with what I’ve repeatedly argued about the impact of “AI” on the spread of dis- and misinformation, consensus knowledge-making, authoritarianism, and the eugenicist, fascist, and generally bigoted tendencies embedded in all of it—and well… It all sounds pretty anti-pedagogical and anti-social to me.

And I really don’t think it’s asking too much to require that all of these demonstrated problems be seriously and meticulously addressed before anyone advocating for their implementation in educational and workplace settings is allowed to go through with it.

Like… That just seems sensible, no?

The current paradigm of “AI” encodes and recapitulates all of these things, but previous technosocial paradigms did too; if these facts had been addressed back then, in the culture of technology specifically and in our sociotechnical culture writ large, things might not still be like this today.

But it also doesn’t have to stay like this. It genuinely does not.

We can make these tools differently. We can train people earlier and more consistently to understand the current models of “AI,” reframing notions of “AI Literacy” away from “how to use it” and toward an understanding of how these systems function and what they actually can and cannot do. We can make it clear that what they produce is not truth, not facts, not even lies, but always bullshit, even when it seems to conform to factual reality. We can train people— students, yes, but also professionals, educators, and wider communities— to understand how bias confirmation and optimization work, and how propaganda, marketing, and psychological manipulation work.

The more people learn about what these systems do, what they’re built from, how they’re trained, and the quite frankly alarming amount of water and energy it has taken and is projected to take to develop and maintain them, the more those same people resist the force and coercion that corporations, and even universities and governments, try to pass off as transparent, informed, meaningful consent.

Like… researchers are highlighting that the current trajectory of “AI” energy and water use will not only undo several years of tech sector climate gains, but will also prevent corporations such as Google, Amazon, and Meta from meeting their carbon-neutral and water-positive goals. And that’s without considering the infrastructural capture of those resources in the process of building said data centers in the first place (the authors list this as being outside their scope); with that data included, the picture is even worse.

As many have noted, environmental impacts are among the major concerns of those who say that they are reticent to use or engage with all things “artificial intelligence”— even sparking public outcry across the country, with more people joining calls that any and all new “AI” training processes and data centers be built to run on existing and expanded renewables. We are increasingly finding the general public wants their neighbours and institutions to engage in meaningful consideration of how we might remediate or even prevent “AI’s” potential social, environmental, and individual intellectual harms.

But, also increasingly, we find that institutional pushes— including the conclusions of the Nature article on energy use trends— tend toward an “adoption and dominance at all costs” model of “AI,” one which in turn seems to be founded on the circular reasoning that “we have to use ‘AI’ so that and because it will be useful.” Recurrent directives from the federal government, like the threat to sue any state that regulates “AI,” the “AI Action Plan,” and the Executive Order on “Preventing Woke AI In The Federal Government,” use terms such as “woke” and “ideological bias” explicitly to mean “DEI,” “CRT,” “transgenderism,” and even the basic philosophical and sociological concept of intersectionality. Even the very idea of “Criticality” is increasingly conflated with mere “negativity,” rather than investigation, analysis, and understanding, and standards-setting bodies’ recommendations are shelved before they see the light of day.

All this even as what more and more people say they want and need are processes which depend on and develop nuanced criticality— which allow and help them to figure out how to question when, how, and perhaps most crucially whether we should make and use “AI” tools, at all. Educators, both as individuals and in various professional associations, seem to increasingly disapprove of the uncritical adoption of these same models and systems. And so far roughly 140 technology-related organizations have joined a call for a people- rather than business-centric model of AI development.

Nothing about this current paradigm of “AI” is either inevitable or necessary. We can push for increased rather than decreased local, state, and national regulatory scrutiny and standards, and prioritize the development of standards, frameworks, and recommendations designed to prevent and repair the harms of “generative AI.” Working together, we can develop new paradigms of “AI” systems which are inherently integrated with and founded on different principles, like meaningful consent, sustainability, and deep understandings of the bias and harm that can arise in “AI,” even down to the sourcing and framing of training data.

Again: Change can be made, here. When we engage as many people as possible, right at the point of their increasing resistance, in language and concepts which reflect their motivating values, we can gain ground towards new ways of building “AI” and other technologies.

Reimagining “AI’s” Environmental and Sociotechnical Materialities

There’s a new open-access book of collected essays called Reimagining AI for Environmental Justice and Creativity, and I happen to have an essay in it. The collection is made up of contributions from participants in the October 2024 “Reimagining AI for Environmental Justice and Creativity” panels and workshops put on by Jess Reia, MC Forelle, and Yingchong Wang, and I’ve included my essay here, for you. That said, I highly recommend checking out the rest of the book, because all the contributions are fantastic.

This work was co-sponsored by: The Karsh Institute Digital Technology for Democracy Lab, The Environmental Institute, and The School of Data Science, all at UVA. The videos for both days of the “Reimagining AI for Environmental Justice and Creativity” talks are now available, and you can find them at the Karsh Institute website, and also below, before the text of my essay.

All in all, I think these are some really great conversations on “AI” and environmental justice. They cover “AI”’s extremely material, practical aspects, the deeply philosophical aspects, and the necessary and fundamental connections between the two, and these are crucial discussions to be having, especially right now.

Hope you dig it.


It’s really disheartening and honestly kind of telling that in spite of everything, ChatGPT is actively marketing itself to students in the run-up to college finals season.

We’ve talked many (many) times before about the kinds of harm that can come from giving over too much epistemic and heuristic authority to systems built by people who have repeatedly, doggedly proven that they will a) buy into their own hype and b) refuse to ever question their own biases and hubris. But additionally, there have been at least two papers in the past few months alone, and more in the last two years (1, 2, 3), demonstrating that over-reliance on “AI” tools diminishes critical thinking capacity and prevents students from building the kinds of foundational skills which allow them to learn more complex concepts, adapt to novel situations, and grow into experts.

Screenshot of ChatGPT page: ChatGPT Promo: 2 months free for students. ChatGPT Plus is now free for college students through May. Offer valid for students in the US and Canada. [Buttons reading “Claim offer” and “Learn more”; an image of a pencil scrawling a scribbly and looping line.] ChatGPT Plus is here to help you through finals

Screenshot of ChatGPT[.]com/students showing an introductory offer for college students during finals; captured 04/04/2025

That lack of expertise and capacity has a direct impact on people’s ability to discern facts, produce knowledge, and even participate in civic/public life. The diminishment of critical thinking skills makes people more susceptible to propaganda and other forms of dis- and misinformation— problems which, themselves, are already being exacerbated by the proliferation of “Generative AI” text and image systems and by people not fully understanding them for the bullshit engines they are.

That susceptibility allows authoritarian-minded individuals and groups to further degrade belief in shared knowledge and consensus reality and to erode trust in expertise, thus worsening the next turn of the cycle when it starts all over again.

All of this creates the very conditions by which authoritarians seek to cement their control: by undercutting the individual tools and social mechanisms which can empower the populace to understand and challenge the kinds of damage dictators, theocrats, fascists, and kleptocrats seek to do on the path to enriching themselves and consolidating power.

And here’s OpenAI flagrantly encouraging said over-reliance. The original post on LinkedIn even has an image of someone prompting ChatGPT to guide them on “mastering [a] calc 101 syllabus in two weeks.” So that’s nice.

No wait; the other thing… Terrible. It’s terrible.

LinkedIn post from Kate Rouch, Chief Marketing Officer at OpenAI: “ChatGPT Plus is free during finals! We can’t achieve our mission without empowering young people to use AI. Fittingly, today we launched our first scaled marketing campaign. The campaign shows students different ways to take advantage of ChatGPT as they study, work out, try to land jobs, and plan their summers. It also offers ChatGPT Plus’s more advanced capabilities to students for free through their finals. You’ll see creative on billboards, digital ads, podcasts, and more throughout the coming weeks. We hope you learn something useful! If you’re a college student in the US or Canada, you can claim the offer at www.chatgpt.com/students”

Screenshot of a LinkedIn post from OpenAI’s chief marketing officer. Captured 04/04/2025

Understand this. Push back against it. Reject its wholesale uncritical adoption and proliferation. Demand a more critical and nuanced stance on “AI” from yourself, from your representatives at every level, and from every company seeking to shove this technology down our throats.

Audio, Slides, and Transcript for my 2024 SEAC Keynote

Back in October, I was the keynote speaker for the Society for Ethics Across the Curriculum’s 25th annual conference. My talk was titled “On Truth, Values, Knowledge, and Democracy in the Age of Generative ‘AI,’” and it touched on a lot of things that I’ve been talking and writing about for a while (in fact, maybe the title is familiar?), but especially in the past couple of years. It covered deepfakes, misinformation, disinformation, the social construction of knowledge, artifacts, and consensus reality, and more. And I know it’s been a while since the talk, but it’s not like these things have gotten any less pertinent these past months.

As a heads-up, I didn’t record the Q&A because I didn’t get the audience’s permission ahead of time, and considering how much of this is about consent, that’d be a little weird, yeah? Anyway, it was in the Q&A section where we got deep into the environmental concerns of water and power use, including ways to use those facts to get through to students who possibly don’t care about some of the other elements. There were, honestly, a lot of really trenchant questions from this group, and I was extremely glad to meet and think with them. Really hoping to do so more in the future, too.

A Black man with natural hair shaved on the sides & long in the center, grey square-frame glasses, wearing a silver grey suit jacket, a grey dress shirt with a red and black Paisley tie, and a black N95 medical mask stands on a stage behind a lectern and in front of a large screen showing a slide containing the words “On Truth, Values, Knowledge, and Democracy in the Age of Generative ‘AI’; Dr. Damien Patrick Williams; Assistant Professor of Philosophy; Assistant Professor of Data Science; University of North Carolina at Charlotte,” and an image of the same man, unmasked, with a beard, wearing a silver-grey pinstriped waistcoat & a dark grey shirt w/ a purple paisley tie, in which bookshelves filled w/ books & framed degrees are visible in the background

Me at the SEAC conference; photo taken by Jason Robert (see alt text for further detailed description).

Below, you’ll find the audio, the slides, and the lightly edited transcript (so please forgive any typos and grammatical weirdnesses). All things being equal, a goodly portion of the concepts in this should also be getting worked into a longer paper coming out in 2025.

Hope you dig it.

Until Next Time.


A few months ago, I was approached by the School of Data Science and the University Communications office, here at UNC Charlotte, and asked to sit down for some coverage of my Analytics Frontiers keynote and my work on “AI,” broadly construed.

Well, I just found out that the profile that local station WRAL wrote on me went live back in June.

A Black man in a charcoal pinstripe suit jacket, a light grey dress shirt with a red and black Paisley tie, black jeans, black boots, and a black N95 medical mask stands on a stage in front of tables, chairs, and a large screen showing a slide containing images of the Meta logo, the Skynet logo, the Google logo, a headshot of Boris Karloff as Frankenstein’s creature, the rectangular black interface with glowing red circle of HAL-9000, the OpenAI logo, and an image of the handwritten list of the attendees of the original 1956 Dartmouth Summer Research Project on Artificial Intelligence (NB: all named attendees are men)

My conversations with the writer Shappelle Marshall, both on the phone and over email, were really interesting, and I’m really quite pleased with the resulting piece, on the whole, especially our discussion of how bias (perspectives, values) of some kind will always make its way into all the technologies we make, so we should be trying to make sure they’re the perspectives and values we want, rather than the prejudices we might just so happen to have. Additionally, I appreciate that she included my differentiation between the practice of equity and the felt experience of fairness, because, well… *gestures broadly at everything*.

With all that being said, I definitely would’ve liked if they could have included some of our longer discussion around the ideas in the passage that starts “…AI and automation often create different types of work for human beings rather than eliminating work entirely.” What I was saying there is that “AI” companies keep promising a future where all “tedious work” is automated away, but are actually creating a situation in which humans will have to do a lot more work (à la Ruth Schwartz Cowan)— and as we know, this has already been shown to be happening.

What I am for sure not saying there is some kind of “don’t worry, we’ll all still have jobs! :D” capitalist boosterism. We’re adaptable, yes, but the need for these particular adaptations is down to capitalism doing a combination of making us fill in any extra leisure time we get from automation with more work, and forcing us to figure out a new way to Jobity Job or, y’know, starve.

But, ultimately, I think there are still intimations of all of my positions in this piece, along with everything else, even if they couldn’t include every single thing we discussed; there are only so many column inches in a day, after all. Also, anyone who finds me for the first time through this article and then goes on to directly engage any of my writing or presentations (fingers crossed on that) will very quickly be disabused of any notion that I’m like, “rah-rah capital.”

Hopefully they’ll even learn and begin to understand Why I’m not. That’d be the real win.

Anywho: Shappelle did a fantastic job, and if you get a chance to talk with her, I recommend it. Here’s the piece, and I hope you enjoy it.