gender

All posts tagged gender

ChatGPT is Actively Marketing to Students During University Finals Season

It’s really disheartening and honestly kind of telling that in spite of everything, ChatGPT is actively marketing itself to students in the run-up to college finals season.

We’ve talked many (many) times before about the kinds of harm that can come from handing over too much epistemic and heuristic authority to systems built by people who have repeatedly, doggedly proven that they will a) buy into their own hype and b) refuse to ever question their own biases and hubris. But additionally, there have been at least two papers in the past few months alone, and more in the last two years (1, 2, 3), demonstrating that over-reliance on “AI” tools diminishes critical thinking capacity and prevents students from building the kinds of foundational skills which allow them to learn more complex concepts, adapt to novel situations, and grow into experts.

Screenshot of ChatGPT page: ChatGPT Promo: 2 months free for students. ChatGPT Plus is now free for college students through May. Offer valid for students in the US and Canada. [Buttons reading “Claim offer” and “Learn more”; an image of a pencil scrawling a scribbly and looping line.] ChatGPT Plus is here to help you through finals

Screenshot of ChatGPT[.]com/students showing an introductory offer for college students during finals; captured 04/04/2025

That lack of expertise and capacity has a direct impact on people’s ability to discern facts, produce knowledge, and even participate in civic/public life. The diminishment of critical thinking skills makes people more susceptible to propaganda and other forms of dis- and misinformation, problems which, themselves, are already being exacerbated by the proliferation of “Generative AI” text and image systems and people not fully understanding them for the bullshit engines they are.

The abovementioned susceptibility allows authoritarian-minded individuals and groups to further degrade belief in shared knowledge and consensus reality and to erode trust in expertise, worsening the next turn of the cycle when it starts all over again.

All of this creates the very conditions by which authoritarians seek to cement their control: by undercutting the individual tools and social mechanisms which can empower the populace to understand and challenge the kinds of damage dictators, theocrats, fascists, and kleptocrats seek to do on the path to enriching themselves and consolidating power.

And here’s OpenAI flagrantly encouraging said over-reliance. The original post on LinkedIn even has an image of someone prompting ChatGPT to guide them on “mastering [a] calc 101 syllabus in two weeks.” So that’s nice.

No wait; the other thing… Terrible. It’s terrible.

Kate Rouch, Chief Marketing Officer at OpenAI • 21h • Edited • Visible to anyone on or off LinkedIn: ChatGPT Plus is free during finals! We can’t achieve our mission without empowering young people to use AI. Fittingly, today we launched our first scaled marketing campaign. The campaign shows students different ways to take advantage of ChatGPT as they study, work out, try to land jobs, and plan their summers. It also offers ChatGPT Plus’s more advanced capabilities to students for free through their finals. You’ll see creative on billboards, digital ads, podcasts, and more throughout the coming weeks. We hope you learn something useful! If you’re a college student in the US or Canada, you can claim the offer at www.chatgpt.com/students

Screenshot of a LinkedIn post from OpenAI’s chief marketing officer. Captured 04/04/2025

Understand this. Push back against it. Reject its wholesale uncritical adoption and proliferation. Demand a more critical and nuanced stance on “AI” from yourself, from your representatives at every level, and from every company seeking to shove this technology down our throats.

Audio, Slides, and Transcript for my 2024 SEAC Keynote

Back in October, I was the keynote speaker for the Society for Ethics Across the Curriculum‘s 25th annual conference. My talk was titled “On Truth, Values, Knowledge, and Democracy in the Age of Generative ‘AI,’” and it touched on a lot of things that I’ve been talking and writing about for a while (in fact, maybe the title is familiar?), but especially in the past couple of years. It covered deepfakes, misinformation, disinformation, the social construction of knowledge, artifacts, and consensus reality, and more. And I know it’s been a while since the talk, but it’s not like these things have gotten any less pertinent, these past months.

As a heads-up, I didn’t record the Q&A because I didn’t get the audience’s permission ahead of time, and considering how much of this is about consent, that’d be a little weird, yeah? Anyway, it was in the Q&A section where we got deep into the environmental concerns of water and power use, including ways to use those facts to get through to students who possibly don’t care about some of the other elements. There were honestly a lot of really trenchant questions from this group, and I was extremely glad to meet and think with them. Really hoping to do so more in the future, too.

A Black man with natural hair shaved on the sides & long in the center, grey square-frame glasses, wearing a silver grey suit jacket, a grey dress shirt with a red and black Paisley tie, and a black N95 medical mask stands on a stage behind a lectern and in front of a large screen showing a slide containing the words “On Truth, Values, Knowledge, and Democracy in the Age of Generative ‘AI’; Dr. Damien Patrick Williams, Assistant Professor of Philosophy, Assistant Professor of Data Science, University of North Carolina at Charlotte,” and an image of the same man, unmasked, with a beard, wearing a silver-grey pinstriped waistcoat & a dark grey shirt w/ a purple paisley tie, in which bookshelves filled w/ books & framed degrees are visible in the background

Me at the SEAC conference; photo taken by Jason Robert (see alt text for further detailed description).

Below, you’ll find the audio, the slides, and the lightly edited transcript (so please forgive any typos and grammatical weirdnesses). All things being equal, a goodly portion of the concepts in this should also be getting worked into a longer paper coming out in 2025.

Hope you dig it.

Until Next Time.

Continue Reading

A few months ago, I was approached by the School of Data Science and the University Communications office, here at UNC Charlotte, to ask me to sit down for some coverage of my Analytics Frontiers keynote and my work on “AI,” broadly construed.

Well, I just found out that the profile that local station WRAL wrote on me went live back in June.

A Black man in a charcoal pinstripe suit jacket, a light grey dress shirt with a red and black Paisley tie, black jeans, black boots, and a black N95 medical mask stands on a stage in front of tables, chairs, and a large screen showing a slide containing images of the Meta logo, the Skynet logo, the Google logo, a headshot of Boris Karloff as Frankenstein’s creature, the rectangular black interface with glowing red circle of HAL-9000, the OpenAI logo, and an image of the handwritten list of the attendees of the original 1956 Dartmouth Summer Research Project on Artificial Intelligence (NB: all named attendees are men)

My conversations with the writer Shappelle Marshall, both on the phone and over email, were really interesting, and I’m really quite pleased with the resulting piece, on the whole, especially our discussion of how bias (perspectives, values) of some kind will always make its way into all the technologies we make, so we should be trying to make sure they’re the perspectives and values we want, rather than the prejudices we might just so happen to have. Additionally, I appreciate that she included my differentiation between the practice of equity and the felt experience of fairness, because, well… *gestures broadly at everything*.

With all that being said, I definitely would’ve liked it if they could have included some of our longer discussion around the ideas in the passage that starts “…AI and automation often create different types of work for human beings rather than eliminating work entirely.” What I was saying there is that “AI” companies keep promising a future where all “tedious work” is automated away, but are actually creating a situation in which humans will have to do a lot more work (a la Ruth Schwartz Cowan), and as we know, this has already been shown to be happening.

What I am for sure not saying there is some kind of “don’t worry, we’ll all still have jobs! :D” capitalist boosterism. We’re adaptable, yes, but the need for these particular adaptations is down to capitalism doing a combination of making us fill in any extra leisure time we get from automation with more work, and forcing us to figure out a new way to Jobity Job or, y’know, starve.

But, ultimately, I think there are still intimations of all of my positions in this piece, along with everything else, even if they couldn’t include every single thing we discussed; there are only so many column inches in a day, after all. Also, anyone who finds me for the first time through this article and then goes on to directly engage any of my writing or presentations (fingers crossed on that) will very quickly be disabused of any notion that I’m like, “rah-rah capital.”

Hopefully they’ll even learn and begin to understand Why I’m not. That’d be the real win.

Anywho: Shappelle did a fantastic job, and if you get a chance to talk with her, I recommend it. Here’s the piece, and I hope you enjoy it.

Hello Everyone.

Here is my prerecorded talk for the NC State R.L. Rabb Symposium on Embedding AI in Society.

There are captions in the video already, but I’ve also gone ahead and C/P’d the SRT text here, as well.
[2024 Note: Something in GDrive video hosting has broken the captions, but I’ve contacted them and hopefully they’ll be fixed soon.]

There were also two things I meant to mention, but failed to in the video:

1) The history of facial recognition and carceral surveillance being used against Black and Brown communities ties into work from Lundy Braun, Melissa N Stein, Seiberth et al., and myself on the medicalization and datafication of Black bodies without their consent, down through history. (Cf. me, here: “Fitting the description: historical and sociotechnical elements of facial recognition and anti-black surveillance.”)

2) Not only does GPT-3 fail to write about humanities-oriented topics with respect, it still can’t write about ISLAM AT ALL without writing in connotations of violence and hatred.

Also I somehow forgot to describe the slide with my email address and this website? What the hell Damien.

Anyway.

I’ve embedded the content of the resource slides in the transcript, but those are by no means all of the resources on this, just the most pertinent.

All of that begins below the cut.

 Black man with a mohawk and glasses, wearing a black button up shirt, a red paisley tie, a light grey check suit jacket, and black jeans, stands in front of two tall bookshelves full of books, one thin & red, one of wide untreated pine, and a large monitor with a printer and papers on the stand beneath it.

[First conference of the year; figured I might as well get gussied up.]

Continue Reading

To view this content, you must be a member of Damien's Patreon at $1 or more
Already a qualifying Patreon member? Refresh to access this content.
Below are the slides, audio, and transcripts for my talk ‘”Any Sufficiently Advanced Neglect is Indistinguishable from Malice”: Assumptions and Bias in Algorithmic Systems,’ given at the 21st Conference of the Society for Philosophy and Technology, back in May 2019.

(Cite as: Williams, Damien P. ‘”Any Sufficiently Advanced Neglect is Indistinguishable from Malice”: Assumptions and Bias in Algorithmic Systems;’ talk given at the 21st Conference of the Society for Philosophy and Technology; May 2019)

Now, I’ve got a chapter coming out about this soon, which I can provide as a preprint draft if you ask; it can be cited as “Constructing Situated and Social Knowledge: Ethical, Sociological, and Phenomenological Factors in Technological Design,” appearing in Philosophy And Engineering: Reimagining Technology And Social Progress, Guru Madhavan, Zachary Pirtle, and David Tomblin, eds., forthcoming from Springer, 2019. But I wanted to get the words I said in this talk up onto some platforms where people can read them, as soon as possible, for a couple of reasons.

First, the Current Occupants of the Oval Office have very recently taken the policy position that algorithms can’t be racist, something which they’ve done in direct response to things like Google’s Hate Speech-Detecting AI being biased against Black people, and Amazon claiming that its facial recognition can identify fear, without ever accounting for, I dunno, cultural and individual differences in fear expression?

[Free vector image of a white, female-presenting person, from head to torso, with biometric facial recognition patterns on her face; incidentally, go try finding images—even illustrations—of a non-white person in a facial recognition context.]


All these things taken together are what made me finally go ahead and get the transcript of that talk done, and posted, because these are events and policy decisions about which I a) have been speaking and writing for years, and b) have specific inputs and recommendations about, and which are, c) frankly wrongheaded, and outright hateful.

And I want to spend time on it because I think what doesn’t get through in many of our discussions is that it’s not just about how Artificial Intelligence, Machine Learning, or Algorithmic instances get trained, but about the processes by which, and the cultural environments in which, HUMANS are increasingly taught/shown/environmentally encouraged/socialized into what they think is the “right way” to build and train said systems.

That includes classes and instruction, it includes the institutional culture of the companies, and it includes the policy landscape in which decisions about funding get made, because that drives how people have to talk and write and think about the work they’re doing, and that constrains what they will even attempt to do or even understand.

All of this is cumulative, accreting into institutional epistemologies of algorithm creation. It is a structural and institutional problem.

So here are the Slides:

The Audio:

Audio Player

[Direct Link to Mp3]

And the Transcript is here below the cut:

Continue Reading

2017 SRI Technology and Consciousness Workshop Series Final Report

To view this content, you must be a member of Damien's Patreon at $1 or more
Already a qualifying Patreon member? Refresh to access this content.

[This paper was prepared for the 2019 Towards Conscious AI Systems Symposium co-located with the Association for the Advancement of Artificial Intelligence 2019 Spring Symposium Series.

Much of this work derived from my final presentation at the 2017 SRI Technology and Consciousness Workshop Series: “Science, Ethics, Epistemology, and Society: Gains for All via New Kinds of Minds”.]

Abstract. This paper explores the moral, epistemological, and legal implications of multiple different definitions and formulations of human and nonhuman consciousness. Drawing upon research from race, gender, and disability studies, including the phenomenological basis for knowledge and claims to consciousness, I discuss the history of the struggles for personhood among different groups of humans, as well as nonhuman animals, and systems. In exploring the history of personhood struggles, we have a precedent for how engagements and recognition of conscious machines are likely to progress, and, more importantly, a roadmap of pitfalls to avoid. When dealing with questions of consciousness and personhood, we are ultimately dealing with questions of power and oppression as well as knowledge and ontological status—questions which require a situated and relational understanding of the stakeholders involved. To that end, I conclude with a call and outline for how to place nuance, relationality, and contextualization before and above the systematization of rules or tests, in determining or applying labels of consciousness.

Keywords: Consciousness, Machine Consciousness, Philosophy of Mind, Phenomenology, Bodyminds

[Overlapping images of an Octopus carrying a shell, a Mantis Shrimp on the sea floor, and a Pepper Robot]

Continue Reading

As you already know, we went to the second Juvet A.I. Retreat, back in September. If you want to hear several of us talk about what we got up to at the retreat, then you’re in luck, because here are several conversations conducted by Ben Byford of the Machine Ethics Podcast.

I am deeply grateful to Ben Byford for asking me to sit down and talk about this with him. I talk a great deal, and am surprisingly able to (cogently?) get on almost all of my bullshit—technology and magic and the occult, nonhuman personhood, the sham of gender and race and other social constructions of expected lived categories, the invisible architecture of bias, neurodiversity, and philosophy of mind—in a rather short window of time.

So that’s definitely something…

Continue Reading