
As of this week, I have a new article in the July-August 2023 Special Issue of American Scientist Magazine. It’s called “Bias Optimizers,” and it’s all about the problems of, and potential remedies for, GPT-type tools and other “A.I.”

This article picks up and expands on thoughts started in “The ‘P’ Stands for Pre-Trained” and in a few threads on the socials, as well as touching on some of my comments quoted here, about the use of chatbots and “A.I.” in medicine.

I’m particularly proud of the two intro grafs:

Recently, I learned that men can sometimes be nurses and secretaries, but women can never be doctors or presidents. I also learned that Black people are more likely to owe money than to have it owed to them. And I learned that if you need disability assistance, you’ll get more of it if you live in a facility than if you receive care at home.

At least, that is what I would believe if I accepted the sexist, racist, and misleading ableist pronouncements from today’s new artificial intelligence systems. It has been less than a year since OpenAI released ChatGPT, and mere months since its GPT-4 update and Google’s release of a competing AI chatbot, Bard. The creators of these systems promise they will make our lives easier, removing drudge work such as writing emails, filling out forms, and even writing code. But the bias programmed into these systems threatens to spread more prejudice into the world. AI-facilitated biases can affect who gets hired for what jobs, who gets believed as an expert in their field, and who is more likely to be targeted and prosecuted by police.

As you probably well know, I’ve been thinking about the ethical, epistemological, and social implications of GPT-type tools and “A.I.” in general for quite a while now, and I’m so grateful to the team at American Scientist for the opportunity to discuss all of those things with such a broad and frankly crucial audience.

I hope you enjoy it.

Below are the slides, audio, and transcript for my talk ‘”Any Sufficiently Advanced Neglect is Indistinguishable from Malice”: Assumptions and Bias in Algorithmic Systems,’ given at the 21st Conference of the Society for Philosophy and Technology, back in May 2019.

(Cite as: Williams, Damien P. ‘”Any Sufficiently Advanced Neglect is Indistinguishable from Malice”: Assumptions and Bias in Algorithmic Systems;’ talk given at the 21st Conference of the Society for Philosophy and Technology; May 2019)

Now, I’ve got a chapter coming out about this soon, which I can provide as a preprint draft if you ask, and which can be cited as “Constructing Situated and Social Knowledge: Ethical, Sociological, and Phenomenological Factors in Technological Design,” appearing in Philosophy And Engineering: Reimagining Technology And Social Progress. Guru Madhavan, Zachary Pirtle, and David Tomblin, eds. Forthcoming from Springer, 2019. But I wanted to get the words I said in this talk up onto some platforms where people can read them as soon as possible, for a couple of reasons.

First, the Current Occupants of the Oval Office have very recently taken the policy position that algorithms can’t be racist, something which they’ve done in direct response to things like Google’s Hate Speech-Detecting AI being biased against Black people, and Amazon claiming that its facial recognition can identify fear, without ever accounting for, I dunno, cultural and individual differences in fear expression?

[Free vector image of a white, female-presenting person, from head to torso, with biometric facial recognition patterns on her face; incidentally, go try finding images—even illustrations—of a non-white person in a facial recognition context.]

All these things taken together are what made me finally go ahead and get the transcript of that talk done and posted, because these are events and policy decisions about which I a) have been speaking and writing for years, and b) have specific inputs and recommendations, and which are c) frankly wrongheaded and outright hateful.

And I want to spend time on it because I think what doesn’t get through in many of our discussions is that it’s not just about how Artificial Intelligence, Machine Learning, or Algorithmic instances get trained, but about the processes by which, and the cultural environments in which, HUMANS are increasingly taught/shown/environmentally encouraged/socialized into what they take to be the “right way” to build and train said systems.

That includes classes and instruction, it includes the institutional culture of the companies, and it includes the policy landscape in which decisions about funding get made, because that drives how people have to talk and write and think about the work they’re doing, and that constrains what they will even attempt to do or understand.

All of this is cumulative, accreting into institutional epistemologies of algorithm creation. It is a structural and institutional problem.

So here are the Slides:

The Audio:

[Direct Link to Mp3]

And the Transcript is here below the cut:

2017 SRI Technology and Consciousness Workshop Series Final Report

So, as you know, back in the summer of 2017 I participated in SRI International’s Technology and Consciousness Workshop Series. This series was an eight-week program of workshops on the current state of the field around, the potential future paths toward, and the moral and social implications of the notion of conscious machines. To do this, we brought together a rotating cast of dozens of researchers in AI, machine learning, psychedelics research, ethics, epistemology, philosophy of mind, cognitive computing, neuroscience, comparative religious studies, robotics, psychology, and much more.


[Image of my name card from the Technology & Consciousness workshop series.]

We traveled from Arlington, VA, to Menlo Park, CA, to Cambridge, UK, and back, and while my primary role was that of conference co-ordinator and note-taker (that place in the intro where it says I “maintained scrupulous notes”? Think 405 pages/160,656 words of notes, taken over eight 5-day weeks of meetings), I also had three separate opportunities to present: once on interdisciplinary perspectives on minds and mindedness; then on Daoism and Machine Consciousness; and finally on a unifying view of my thoughts across all of the sessions. In relation to this report, I would draw your attention to the following passage:

An objection to this privileging of sentience is that it is anthropomorphic “meat chauvinism”: we are projecting considerations onto technology that derive from our biology. Perhaps conscious technology could have morally salient aspects distinct from sentience: the basic elements of its consciousness could be different than ours.

All of these meetings were held under the auspices of the Chatham House Rule, which meant that there were many things I couldn’t tell you about them, such as the names of the other attendees, or what exactly they said in the context of the meetings. What I was able to tell you, however, was what I talked about, and I did, several times. But as of this week, I can give you even more than that.

This past Thursday, SRI released an official public report on all of the proceedings and findings from the 2017 SRI Technology and Consciousness Workshop Series, and they have told all of the participants that they can share said report as widely as they wish. Crucially, that means that I can share it with you. You can either click this link, here, or read it directly, after the cut.

[This paper was prepared for the 2019 Towards Conscious AI Systems Symposium co-located with the Association for the Advancement of Artificial Intelligence 2019 Spring Symposium Series.

Much of this work derived from my final presentation at the 2017 SRI Technology and Consciousness Workshop Series: “Science, Ethics, Epistemology, and Society: Gains for All via New Kinds of Minds”.]

Abstract. This paper explores the moral, epistemological, and legal implications of multiple different definitions and formulations of human and nonhuman consciousness. Drawing upon research from race, gender, and disability studies, including the phenomenological basis for knowledge and claims to consciousness, I discuss the history of the struggles for personhood among different groups of humans, as well as nonhuman animals, and systems. In exploring the history of personhood struggles, we have a precedent for how engagements and recognition of conscious machines are likely to progress, and, more importantly, a roadmap of pitfalls to avoid. When dealing with questions of consciousness and personhood, we are ultimately dealing with questions of power and oppression as well as knowledge and ontological status—questions which require a situated and relational understanding of the stakeholders involved. To that end, I conclude with a call and outline for how to place nuance, relationality, and contextualization before and above the systematization of rules or tests, in determining or applying labels of consciousness.

Keywords: Consciousness, Machine Consciousness, Philosophy of Mind, Phenomenology, Bodyminds

[Overlapping images of an Octopus carrying a shell, a Mantis Shrimp on the sea floor, and a Pepper Robot]

[Direct Link to Mp3]

Above is the (heavily edited) audio of my final talk for the SRI Technology and Consciousness Workshop Series. The names and voices of other participants have been removed in accordance with the Chatham House Rule.

Below you’ll find the slide deck for my presentation, and below the cut you’ll find the Outline and my notes. For now, this will have to stand in for a transcript, but if you’ve been following the Technoccult Newsletter or the Patreon, then some of this will be strikingly familiar.

This summer I participated in SRI International’s Technology and Consciousness Workshop Series. The meetings were held under the auspices of the Chatham House Rule, which means that there are many things I can’t tell you about them, such as who else was there, or what they said in the context of the meetings; however, I can tell you what I talked about. In light of this recent piece in The Boston Globe and the ongoing developments in the David Slater/PETA/Naruto case, I figured that now was a good time to do so.

I presented three times—once on interdisciplinary perspectives on minds and mindedness; then on Daoism and Machine Consciousness; and finally on a unifying view of my thoughts across all of the sessions. This is my outline and notes for the first of those talks.

I. Overview
In a 2013 Aeon article, Michael Hanlon said he didn’t think we’d ever solve “The Hard Problem,” and there’s been some skepticism about it elsewhere. I’ll just say that said question seems to completely miss a possibly central point. Something like consciousness is, and what it is is different for each thing that displays anything like what we think it might be. If we manage to generate at least one mind that is similar enough to what humans experience as “conscious” that we may communicate with it, what will we owe it, and what would it be able to ask from us? How might our interactions be affected by the fact that its mind (or their minds) will be radically different from ours? What will it be able to know that we cannot, and what will we have to learn from it?

So I’m going to be talking today about intersectionality, embodiment, extended minds, epistemic valuation, phenomenological experience, and how all of these things come together to form the bases for our moral behavior and social interactions. To do that, I’m first going to need to ask you some questions:

(This was originally posted over at Medium [well, parts were originally posted in the newsletter], but I wanted it somewhere I could more easily manage it.)

Hey.

I just wanna say (and you know who you are): I get you were scared of losing your way of life — the status quo was changing all around you. Suddenly it wasn’t okay anymore to say or do things that the world previously told you were harmless. People who didn’t “feel” like you were suddenly loudly everywhere, and no one just automatically believed what you or those you believed in had to say, anymore. That must have been utterly terrifying.

But here’s the thing: People are really scared now. Not just of obsolescence, or of being ignored. They’re terrified for their lives. They’re not worried about “the world they knew.” They’re worried about whether they’ll be rounded up and put in camps or shot or beaten in the street. Because, you see, many of the people who voted for this, and things like it around the world, see many of us — women, minorities, immigrants, LGBTQIA folks, disabled folks, neurodivergent folks — as less than “real” people, and want to be able to shut us up using whatever means they deem appropriate, including death.

The vice president elect thinks gay people can be “retrained,” and that we should attempt it via the same methods that make us side-eye dog owners. The man tapped to be a key advisor displays and has cultivated an environment of white supremacist hatred. The president-elect is said to be “mulling over” a registry for Muslim people in the country. A registry. Based on your religion.

My own cousin had food thrown at her in a diner, right before the election. And things haven’t exactly gotten better, since then.

Certain hateful elements want many of us dead or silent and “in our place,” now, just as much as ever. And all we want and ask for is equal respect, life, and justice.

I said it on election night and I’ll say it again: there’s no take-backsies, here. I’m speaking to those of you who actively voted for this, or who didn’t actively plant yourselves against it (and you know who you are): You did this. You cultivated it. And I know you did what you thought you had to, but people you love are scared, because their lives are literally in danger, so it’s time to wake up now. It’s time to say “No.”

We’re all worried about jobs and money and “enough,” because that’s what this system was designed to make us worry about. Your Muslim neighbour, your gay neighbour, your trans neighbour, your immigrant neighbour, your NEIGHBOUR IS NOT YOUR ENEMY. The system that tells you to hate and fear them is. And if you bought into that system because you couldn’t help being afraid then I’m sorry, but it’s time to put it down and Wake Up. Find it in yourself to ask forgiveness of yourself and of those you’ve caused mortal terror. If you call yourself Christian, that should ring really familiar. But other faiths (and nonfaiths) know it too.

We do better together. So it’s time to gather up, together, work, together, and say “No,” together.

So snap yourself out of it, and help us. If you’re in the US, please call your representatives, federal and local. Tell them what you want, tell them why you’re scared. Tell them that these people don’t represent our values and the world we wish to see:
http://www.house.gov/representatives/find/
http://www.senate.gov/senators/contact/

Because this, right here, is the fundamental difference between fearing the loss of your way of life, and the fear of losing your literal life.

Be with the people you love. Be by their side and raise their voices if they can’t do it for themselves, for whatever reason. Listen to them, and create a space where they feel heard and loved, and where others will listen to them as well.

And when you come around, don’t let your pendulum swing so far that you fault those who can’t move forward, yet. Please remember that there is a large contingent of people who, for many various reasons, cannot be out there protesting. Shaming people who have anxiety, depression, or a crippling fear for their LIVES, or who are trying to not get arrested so their kids can, y’know, EAT FOOD? Doesn’t help.

So show some fucking compassion. Don’t shame those who are tired and scared and just need time to collect themselves. Urge and offer assistance where you can, and try to understand their needs. Just do what you can to help us all believe that we can get through this. We may need to lean extra hard on each other for a while, but we can do this.

You know who you are. We know you didn’t mean to. But this is where we are, now. Shake it off. Start again. We can do this.

If you liked this article, consider dropping something into the A Future Worth Thinking About Tip Jar