Here is my prerecorded talk for the NC State R.L. Rabb Symposium on Embedding AI in Society.
There are captions in the video already, but I’ve also copy/pasted the SRT text here.
There were also two things I meant to mention, but failed to in the video:
1) The history of facial recognition and carceral surveillance being used against Black and Brown communities ties into work from Lundy Braun, Melissa N. Stein, Seiberth et al., and myself on the medicalization and datafication of Black bodies without their consent, down through history. (Cf. my “Fitting the Description: Historical and Sociotechnical Elements of Facial Recognition and Anti-Black Surveillance.”)
2) Not only does GPT-3 fail to write about humanities-oriented topics with respect, it still can’t write about ISLAM AT ALL without writing in connotations of violence and hatred.
Also I somehow forgot to describe the slide with my email address and this website? What the hell Damien.
I’ve embedded the content of the resource slides in the transcript, but those are by no means all of the resources on this, just the most pertinent.
All of that begins below the cut.
00:00:01,170 –> 00:00:02,730
Hello, my name is Damien Williams.
00:00:02,820 –> 00:00:12,390
I’m a PhD candidate in Virginia Tech’s Department of Science, Technology, and Society, and my talk today is called “Why AI Research Needs Disabled and Marginalized Perspectives.”
00:00:15,510 –> 00:00:22,320
One of the things that I want to make clear at first is that when I talk about AI today, I’m talking about things like algorithmic systems, machine learning,
00:00:23,100 –> 00:00:34,080
systemic institutionalized solutions, support systems, not so much talking about things that we think of as, “strong AI,” or “artificial general intelligence”—
00:00:34,110 –> 00:00:38,670
what I like to think of as “autonomous generative intelligences.”
00:00:39,740 –> 00:00:47,240
That being said, everything that I’m going to say is exponentially more important in the considerations of strong AI,
00:00:47,930 –> 00:00:53,750
even over and above what its importance is for the considerations within algorithmic systems.
00:00:55,850 –> 00:01:05,630
All that being said, before we talk about why it is that we need disabled and marginalized perspectives in AI research,
00:01:05,660 –> 00:01:09,890
we have to talk about what perspectives are currently embedded in AI research.
00:01:10,200 –> 00:01:22,350
And when we take a look at the raft of AI research today, we find that there are a whole host of things that get included and assumed to be true.
00:01:22,410 –> 00:01:29,460
And those assumptions, those values embodied by those assumptions, get embedded within the research that gets done,
00:01:29,670 –> 00:01:34,050
and within the AI products that get put out into the world and with which we all must live.
00:01:35,050 –> 00:01:42,640
Those perspectives can be capitalist, thinking about profit motive, thinking about the bottom line,
00:01:42,640 –> 00:01:49,960
as in the case of certain AI healthcare systems, or insurance systems, which will put— in many cases *have* put—
00:01:50,230 –> 00:01:57,880
the bottom line of the insurance company as more important than the life or health of a patient, because that’s what it has been trained to do;
00:01:57,661 –> 00:02:04,651
it’s been trained to make sure that the premiums and payouts of the insurance company are as low as possible, regardless of what that takes.
00:02:05,440 –> 00:02:10,660
But you can also see that in the cases of things like the Temporary Assistance for Needy Families benefits,
00:02:10,690 –> 00:02:16,840
the algorithms that run those systems, as showcased in the works of Virginia Eubanks, in her Automating Inequality,
00:02:17,680 –> 00:02:29,920
in which case people who are already at a lower socioeconomic status are made more subject to systems that will keep them in poverty,
00:02:29,890 –> 00:02:34,570
rather than being able to be elevated out of poverty because of the kinds of assumptions about their life
00:02:34,540 –> 00:02:41,770
and what kinds of needs they have and the payouts of the systems they depend on get embedded in those systems.
00:02:43,250 –> 00:02:46,370
We find disableist perspectives embedded in these systems.
00:02:46,930 –> 00:02:57,730
Systems about disability payouts, or even systems about machine vision that tries to monitor how people cross the street—
00:02:59,250 –> 00:03:07,980
automated vehicles that don’t see people in wheelchairs, or people using crutches, *as* pedestrians, and so don’t categorize those people for safety
00:03:07,980 –> 00:03:12,810
in the same way as they would someone walking upright on two legs. Right?
00:03:13,260 –> 00:03:21,060
But then there’s disability benefit systems which make decisions, determinations about the kind of help and health care that people need to live.
00:03:21,570 –> 00:03:27,300
These systems are often opaque and they’re trained on datasets which are, in many cases,
00:03:27,900 –> 00:03:33,090
filled with assumptions about what the right way to live is, about what the right kind of healthcare is.
00:03:33,570 –> 00:03:41,550
Assumptions that, in many cases, hark back to the 1800s; y’know, assumptions about institutionalization of disabled people.
00:03:41,810 –> 00:03:45,620
These things persist today and are in many cases blackboxed,
00:03:45,860 –> 00:03:54,200
because disabled people have not been consulted in the administration of these things, let alone in their construction.
00:03:55,190 –> 00:04:00,650
You can see this in the work of people like Karen Hao, asking “Can we ever make an AI that isn’t ableist?”
00:04:00,630 –> 00:04:08,220
You can see this in the work of Alexandra Reeve Givens in the Future Tense article whose headline you can see here, “How Algorithmic Bias Hurts People With Disabilities.”
00:04:08,270 –> 00:04:13,730
And in Lydia X. Z. Brown and their work at the Center for Democracy and Technology,
00:04:14,180 –> 00:04:21,050
which looks at benefits determinations, algorithmic systems’ benefits determinations for disabled individuals.
00:04:21,270 –> 00:04:26,370
This report just came out last year, it’s really quite in depth and fantastic.
00:04:28,020 –> 00:04:34,830
We find that racist bias, racial perspectives are embedded in AI and algorithmic systems, all the time.
00:04:35,110 –> 00:04:41,320
Facial recognition systems famously don’t see Black people anywhere near as well as white people.
00:04:41,540 –> 00:04:50,360
The same goes for people with darker skin tones generally. In many cases, simple facial recognition systems, like the blink-detection systems on Nikon cameras,
00:04:50,360 –> 00:04:55,100
will ask whether people of Asian descent have blinked when they’re merely smiling.
00:04:55,390 –> 00:05:01,210
These kinds of things are, you know, old biases that get embedded into new systems
00:05:01,210 –> 00:05:07,990
because the new systems are encoded on the old assumptions that animate the technologies on which the new systems are based.
00:05:08,500 –> 00:05:15,370
Photographic technology was never really designed to see Black people very well, and so when digital technology kind of updated it,
00:05:15,400 –> 00:05:19,810
it took the same principles and just mapped them onto a digital space.
00:05:21,490 –> 00:05:30,520
Facial recognition systems that are meant to categorize individuals who are breaking the law are often trained on mugshots;
00:05:30,550 –> 00:05:37,540
mugshot databases are notoriously overpopulated with Black and brown individuals, because Black and brown individuals are *assumed* to be criminal.
00:05:37,870 –> 00:05:45,250
And so those individuals populate mugshot databases more often, and so those systems have more of those faces in them,
00:05:45,450 –> 00:05:50,550
and so you get cases where like, 28 members of Congress (top right, or sorry, top left picture),
00:05:50,760 –> 00:05:54,090
28 members of Congress were falsely matched to mugshot databases.
00:05:54,930 –> 00:06:01,110
You can see this in multiple different works from the ACLU, from ProPublica, a number of different places.
00:06:01,630 –> 00:06:10,690
The GIF on the top right is from the HP face tracking camera scandal from 2009,
00:06:11,020 –> 00:06:15,520
when it was shown that HP’s face-tracking camera did not track the faces of Black people.
00:06:15,730 –> 00:06:22,510
In that GIF, a Black man, a computer store employee, is saying, “I’m Black, I think my blackness is interfering with the computer’s ability to follow me.”
00:06:22,960 –> 00:06:30,610
And at the bottom [of this slide] you have a two-by-three grid of six photos of white, lighter-skinned women in various clothes.
00:06:30,880 –> 00:06:34,720
This is the model of what’s known as the Shirley Card, and this comes from Kodak, right?
00:06:34,750 –> 00:06:41,260
The Shirley Card featured a white woman named Shirley, and if you could see Shirley’s face, regardless of what she was wearing,
00:06:41,260 –> 00:06:45,580
or what background she was standing in front of, the image was properly balanced.
00:06:46,470 –> 00:06:55,860
This is the industry standard on which photography was based and continued to be modeled, even into the development of digital camera technologies.
00:06:56,970 –> 00:07:03,360
The same goes for carceral surveillance, carceral systems of justice, which use surveillance systems, facial recognition systems,
00:07:03,540 –> 00:07:08,310
predictive policing, which says, “certain groups of people are more likely to be criminal,
00:07:08,580 –> 00:07:14,400
you place those cameras there, you do predictive modeling based on your criminal metrics, based on the data that it’s trained on,”
00:07:14,810 –> 00:07:21,410
when that data is notoriously filled with disproportionate numbers of Black and brown individuals, minority communities,
00:07:21,770 –> 00:07:29,720
those systems will be trained to think of those communities *as criminal*, first and foremost.
00:07:30,710 –> 00:07:38,750
If you then deploy those systems, they will make the same kind of racialized judgments, as previously were made by human beings.
00:07:39,320 –> 00:07:47,060
You can see this in the work of Clare Garvie and others at Georgetown, their work “The Perpetual Line-Up,” from 2016,
00:07:47,060 –> 00:07:56,810
and the deployment of facial recognition and surveillance systems in various communities of color throughout the United States and England.
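[The predictive-policing feedback loop described above can be sketched as a toy simulation. Everything here is hypothetical and my own illustration — the neighborhoods, the starting numbers, and the patrol-allocation rule are not taken from any real vendor’s system:]

```python
# Toy sketch of a predictive-policing feedback loop. All numbers and the
# allocation rule are hypothetical illustrations, not any real system's logic.
true_rate = {"A": 0.10, "B": 0.10}   # both neighborhoods have the same true rate
recorded = {"A": 100.0, "B": 50.0}   # but the historical data over-records "A"

for year in range(10):
    total = recorded["A"] + recorded["B"]
    shares = {hood: recorded[hood] / total for hood in recorded}
    for hood in recorded:
        patrols = 100 * shares[hood]                 # patrol where the data points
        recorded[hood] += patrols * true_rate[hood]  # patrols generate new records

share_A = recorded["A"] / (recorded["A"] + recorded["B"])
print(round(share_A, 2))  # 0.67 -- the initial skew never washes out
```

[Even though the two neighborhoods offend at identical rates, neighborhood A never stops looking “more criminal” in the data, because the data is a record of where enforcement looked, not of where behavior happened.]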
00:07:56,810 –> 00:08:02,600
Institutional bias is a perspective that gets encoded not just in the surveillance state,
00:08:02,600 –> 00:08:08,390
but in the judgments made about people who are then made subject to justice systems in the West,
00:08:08,660 –> 00:08:21,110
wherein algorithmic bail setting and sentencing recommendation systems will make recommendations that say that a Black man with no priors and a lower likelihood of recidivism—
00:08:21,650 –> 00:08:25,730
based on the system’s own judgments, based on its own estimations—
00:08:26,750 –> 00:08:34,280
a Black man with no priors and a lower likelihood of recidivism is given a lower likelihood of receiving bail,
00:08:34,670 –> 00:08:42,560
and a higher, more harsh sentence than a white man with priors and a higher likelihood of recidivism.
00:08:43,890 –> 00:08:49,440
ProPublica’s investigation of this in 2016 showed how this system was at play in Broward County—
00:08:49,440 –> 00:08:54,240
this is the COMPAS bail-setting and sentencing recommendation guidelines—
00:08:54,000 –> 00:09:00,810
and this, again, is based on what it’s trained on: the behavior of human beings trains these algorithmic AI systems
00:09:00,810 –> 00:09:06,870
and the systems then replicate and iterate on that behavior, exacerbating these outcomes.
00:09:08,220 –> 00:09:09,660
Then you have all of the above.
00:09:10,260 –> 00:09:13,770
On this page, you have a host of different headlines:
00:09:14,100 –> 00:09:18,330
“It’s Our Fault That AI Thinks That White Names Are More ‘Pleasant’ Than Black Names.”
00:09:18,690 –> 00:09:22,950
Next headline reads, “Health Care Algorithm Offered Less Care to Black Patients.”
00:09:23,380 –> 00:09:30,340
Next one reads, “AI scraps,” or, sorry, “Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women.”
00:09:31,270 –> 00:09:38,890
In the lower right corner you’ve got a GIF of a graph, modeling the Word2Vec software
00:09:38,870 –> 00:09:45,770
where certain correlations are made along gendered lines between words like “King” and “Man,” “Queen” and “Woman.”
00:09:46,460 –> 00:09:54,950
Within the same study, Caliskan et al., in 2017, you’ll see that there are correlations made between “CEO” and “man,”
00:09:55,160 –> 00:10:03,380
“secretary” and “woman,” “doctor” and “man,” “nurse” and “woman,” “President” and “man,” that kind of thing.
00:10:03,870 –> 00:10:14,130
This gendered bias gets encoded in Word2Vec systems, but it has also persisted in GPT-3 systems in a kind of even more nuanced and systemic way,
00:10:14,130 –> 00:10:21,240
where whole hosts of disciplines that GPT-3 gets trained on— to kind of mimic those writing styles—
00:10:22,830 –> 00:10:29,310
it will cast whole disciplines as meaningless or inadequate or frivolous,
00:10:29,820 –> 00:10:33,900
as it did with philosophy when it was tasked to write a philosophy paper.
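[The Word2Vec-style associations described above can be illustrated with a toy example. These are hand-made three-dimensional vectors of my own, not real trained embeddings, which have hundreds of dimensions learned from text corpora; the point is only the geometry — words sit “closer” or “farther” by cosine similarity:]

```python
# Toy illustration (hypothetical vectors, not the actual Word2Vec/GloVe data
# from Caliskan et al. 2017) of gendered associations via cosine similarity.
import math

vectors = {
    "man":    [0.9, 0.1, 0.3],
    "woman":  [0.1, 0.9, 0.3],
    "doctor": [0.8, 0.3, 0.5],
    "nurse":  [0.2, 0.8, 0.5],
}

def cosine(a, b):
    # Standard cosine similarity: dot product over the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# In this toy space, "doctor" sits closer to "man" and "nurse" closer to
# "woman," mirroring the associations the talk describes.
print(cosine(vectors["doctor"], vectors["man"]) > cosine(vectors["doctor"], vectors["woman"]))  # True
print(cosine(vectors["nurse"], vectors["woman"]) > cosine(vectors["nurse"], vectors["man"]))    # True
```

[In real embeddings the same geometry emerges from co-occurrence statistics in the training text, which is how the “doctor”–“man” and “nurse”–“woman” associations that Caliskan et al. measured get baked in.]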
00:10:35,850 –> 00:10:44,370
Things on the horizon include the Neuralink AI system and the Amazon Halo, and benefits determination systems are going to proliferate further,
00:10:45,930 –> 00:10:54,000
including things like COVID determinations, who gets what shots when, who gets what treatments in what scenarios, those kinds of things.
00:10:54,870 –> 00:11:00,480
Neuralink is the brain-chip interface from Elon Musk and company.
00:11:00,720 –> 00:11:08,340
Amazon Halo is meant to be a kind of full-suite biometric reader, where it tells your heart rate, tells your perspiration,
00:11:08,340 –> 00:11:12,120
it tells your blood-oxygen level, tells, y’know, how much water you need.
00:11:12,330 –> 00:11:19,470
But it also is meant to do things like tell you what your tone is, in conversations, and whether you might want to modulate your tone.
00:11:20,490 –> 00:11:30,360
Now, one of the things that’s always been true in the United States is that things like a Black woman’s tone are often subject to scrutiny.
00:11:31,230 –> 00:11:36,750
Black people are more harshly judged on the whole as to their comportment in social situations,
00:11:36,930 –> 00:11:43,680
and Black women’s tones, in particular, are often policed, for how they interact with each other
00:11:43,680 –> 00:11:49,260
and comport themselves in conversation— often told that they’re being overly agitated or angry.
00:11:49,480 –> 00:11:58,930
Now, if the Amazon Halo is trained on general human interactions— or what its programmers and designers think of as “general human interaction”—
00:11:59,170 –> 00:12:07,870
the host of assumptions about what kind of tone is the “right kind” of tone to strike in conversation is *inherently* cultural,
00:12:08,440 –> 00:12:15,070
and if the people who design and program this tool don’t take into account the kinds of inherent biases that there are
00:12:15,070 –> 00:12:24,700
towards certain types of comportment, expression, lived experience, and behavior, those things will then replicate in terms of
00:12:24,730 –> 00:12:32,500
the Amazon Halo suggesting to Black people that, “hey, maybe you want to calm down,” when they’re just having a normal conversation.
00:12:33,820 –> 00:12:48,100
Instantiating the microaggression of the “angry Black man” or “angry Black woman” into a systemic, culture-wide device that everyone has monitoring their speech at all times.
00:12:51,250 –> 00:12:52,270
So what does all this mean?
00:12:53,470 –> 00:13:03,580
This is a meme that I made. [Laughs] It’s Zoidberg from the show ‘Futurama,’ sitting in an opera house in a tuxedo, and he’s yelling,
00:13:03,760 –> 00:13:08,050
“Your AI and Algorithmic Facial Recognition Applications Are Bad, and You Should Feel Bad!”
00:13:08,990 –> 00:13:16,910
All of these tools replicate and instantiate the lived experiences and the perspectives and the assumptions of the people who program them.
00:13:17,330 –> 00:13:27,830
They instantiate and iterate upon the assumptions and the values of the people who have commissioned them, who have programmed them, who have trained them,
00:13:28,050 –> 00:13:36,540
and all of the interactions that these systems have when they’re out in the world form components of the data on which they learn how to be
00:13:36,540 –> 00:13:39,540
and how to do what they are meant to do in the world.
00:13:41,070 –> 00:13:52,470
So what this means is that what we have to do here is ensure that there is no work done in these realms, without the perspectives of marginalized individuals
00:13:52,680 –> 00:14:02,760
being not just tokenistically “included,” not just polled and mined for perspectives or opinions about the way that these systems come to be,
00:14:03,020 –> 00:14:13,130
but actively engaged and put at the forefront of the conversations we have and the development we do around AI and algorithmic systems.
00:14:14,720 –> 00:14:17,990
This isn’t the first time we’ve had these kinds of conversations.
00:14:18,020 –> 00:14:23,510
These conversations have been at play throughout the history of technology and science.
00:14:23,960 –> 00:14:28,850
And we can see it in the lives and the lived experiences and the contributions of many different people.
00:14:29,300 –> 00:14:32,960
This page is a raft of seven different pictures.
00:14:33,380 –> 00:14:46,010
We have at the top left an image of Dr. Ruha Benjamin, whose work on algorithmic justice and the nature of carceral surveillance
00:14:46,000 –> 00:14:52,360
and certain types of abolitionist perspectives regarding AI and facial recognition systems
00:14:52,450 –> 00:15:00,070
argues that, ultimately, certain things maybe just shouldn’t be developed, because there’s no just way to develop them in the world.
00:15:01,230 –> 00:15:15,150
Her work is fundamental to this idea that some things are just impossible to do in a way that is without real, lasting, meaningful harm.
00:15:16,470 –> 00:15:23,190
Next we have Wendy Carlos, the trans woman whose work is at the forefront of all electronic music,
00:15:23,370 –> 00:15:29,850
who was instrumental in developing the tools to translate music into an electronic format.
00:15:31,440 –> 00:15:37,350
Next, going to the right we have Dr. Ashley Shew, whose work on technology and disability,
00:15:37,350 –> 00:15:41,940
on the lives of disabled people and how they interface with their technologies on a day to day basis,
00:15:41,940 –> 00:15:54,420
is doing real kind of long-lasting investigations into what is available to the disabled community versus what the disabled community,
00:15:54,420 –> 00:16:01,650
and members of the disabled community individually, say that they need from technology to live their lives.
00:16:02,830 –> 00:16:13,480
Below Dr. Shew’s image we have the image of Dr. Alondra Nelson, who now coordinates social science for the White House Office of Science and Technology Policy, under the Biden administration.
00:16:14,260 –> 00:16:26,170
Her work is crucial in thinking about the ways that social implications of Science and Technology need to be interrogated and understood,
00:16:27,370 –> 00:16:32,440
thought about at the outset, rather than as an after-the-fact, post hoc consideration.
00:16:33,320 –> 00:16:43,640
Next, going to the left, we have Dr. Anna Lauren Hoffmann, whose work focuses on the ways that technology and gender collide,
00:16:43,630 –> 00:16:53,350
and specifically, one of the things that Dr. Hoffmann is talking about is this notion of “data violence” and the ways that perspectives on trans lived experience,
00:16:53,000 –> 00:17:05,330
transgender individuals’ experience with technology in the world, is kind of predicated upon other people’s assumptions about what a transgender lived experience ought to be.
00:17:05,540 –> 00:17:12,110
You see this in everything from just day-to-day life to things like TSA body scanners, and the kinds of assumptions that get made by the
00:17:12,270 –> 00:17:16,680
human individuals at work, there, but also the algorithmic systems at work, there.
00:17:17,190 –> 00:17:23,490
And then next to Dr. Hoffmann, we have Katherine Johnson, whose work on the Apollo Project got human beings to the moon,
00:17:23,490 –> 00:17:29,550
who was famously almost completely excluded from being able to work in that space because of her race.
00:17:29,940 –> 00:17:33,810
In the center of all of this we have seven members of the team who are known as the Gallaudet 11.
00:17:34,140 –> 00:17:47,550
This is a team of Deaf individuals who were brought in by NASA to test the effects of weightlessness and disorientation on individuals who didn’t have typical inner-ear function.
00:17:47,000 –> 00:17:57,560
For those of you who don’t know, Gallaudet University is a Deaf university in Washington, DC, and all of the students there are Deaf or hard of hearing.
00:17:57,680 –> 00:18:08,330
So the Gallaudet 11 were eleven Deaf and hard-of-hearing men whose experiences with the inner ear were drastically different from those of individuals who hear “normally.”
00:18:09,830 –> 00:18:20,480
And as a result, they were, according to NASA, prime subjects for being able to (sorry about that), being able to, y’know, test these notions.
00:18:21,440 –> 00:18:30,740
You know, ask the question about, “what kind of life in space, in weightlessness— what kind of disorientation might human beings suffer?”
00:18:32,720 –> 00:18:34,850
There has never been a Deaf astronaut.
00:18:36,220 –> 00:18:44,710
Deaf people have been used to train astronauts, data from Deaf individuals has been used to train astronauts, but there has never been a Deaf astronaut.
00:18:50,150 –> 00:18:59,180
At the very end of all of this, this question of why AI research needs disabled and marginalized perspectives,
00:18:59,600 –> 00:19:09,020
comes down to this notion of “whose perspectives, whose lived experiences animate the technology that we make, and to which we are all made subject?”
00:19:09,000 –> 00:19:20,220
As AI research increases its reach and its depth and its breadth and its power, we need to be ensuring that the perspectives, the values that get encoded into these systems,
00:19:20,400 –> 00:19:34,410
are values and perspectives that will not just, again, post hoc accommodate or repair or seek to “include” in a tokenistic manner, the experiences of marginalized individuals,
00:19:34,620 –> 00:19:43,020
but take those perspectives into account at the outset. Because those perspectives have something to teach us that is otherwise inaccessible to us.
00:19:45,390 –> 00:19:50,760
We have to ensure that the perspectives and lived experiences of marginalized people are heeded in this conversation
00:19:51,630 –> 00:19:58,410
about the design and implementation of algorithmic applications, even and perhaps *especially* when those perspectives make us uncomfortable.
00:19:58,990 –> 00:20:05,710
The perspectives and lived experiential knowledge of women, disabled people, trans and gender-nonconforming individuals,
00:20:05,920 –> 00:20:13,450
Black people, Indigenous people, other marginalized identities are, in large part, informed by being made subject to
00:20:13,450 –> 00:20:17,650
the worst excesses of technology, up to and including AI.
00:20:18,790 –> 00:20:26,320
Putting them at the forefront of our conversations about AI may require us to radically rethink our founding assumptions about what AI and automation are for.
00:20:27,470 –> 00:20:32,930
But for millions of people, doing this will very literally mean the difference between life and death.
00:20:35,630 –> 00:20:48,050
I have here a whole host of resources, papers, videos, articles. I highly recommend taking them and spending some time with them,
00:20:48,000 –> 00:20:52,470
and thinking about the ways that we animate our conversations about this.
- Ahmed, Sara. The Cultural Politics of Emotion. New York: Routledge, 2004.
- “Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots.” Jacob Snow, Technology & Civil Liberties Attorney, ACLU of Northern California. July 26, 2018. https://www.aclu.org/blog/privacy-technology/surveillance-technologies/amazons-face-recognition-falsely-matched-28.
- aoun, sarah; Ahmed, Nasma. “Don’t Include Us, Thank You” (2018) https://livestream.com/internetsociety/ttw18/videos/174091941.
- Benjamin, Ruha. 2019. Race after technology: Abolitionist tools for the new Jim code. Cambridge: Polity.
- Bennett, Cynthia L., and Os Keyes. “What is the Point of Fairness? Disability, AI and The Complexity of Justice.” 2019. https://arxiv.org/abs/1908.01024.
- Braun, Lundy. Breathing Race into the Machine: The Surprising Career of the Spirometer from Plantation to Genetics. Minneapolis, MN: University of Minnesota Press, 2014. doi:10.5749/minnesota/9780816683574.001.0001.
- Brown, Lydia X. Z., Michelle Richardson, Ridhi Shetty, Andrew Crawford. “Report: Challenging the Use of Algorithm-driven Decision-making in Benefits Determinations Affecting People with Disabilities.” Center For Democracy and Technology. October 2020. https://cdt.org/insights/report-challenging-the-use-of-algorithm-driven-decision-making-in-benefits-determinations-affecting-people-with-disabilities/.
- Browne, Simone. Dark Matters: On the Surveillance of Blackness. (Durham: Duke University Press, 2015)
- Buolamwini, Joy, and Timnit Gebru. “Gender shades: Intersectional accuracy disparities in commercial gender classification.” In Conference on fairness, accountability and transparency, pp. 77-91. 2018.
- Caliskan, Aylin; Bryson, Joanna J.; Narayanan, Arvind. “Semantics Derived Automatically From Language Corpora Contain Human-Like Biases.” 14 Apr 2017 : 183-186. http://science.sciencemag.org/content/356/6334/183.full.
- Cave, Stephen, and Kanta Dihal. “The Whiteness of AI.” Philosophy & Technology 33, no. 4 (2020): 685-703. https://doi.org/10.1007/s13347-020-00415-6.
- del Barco, Mandalit. “How Kodak’s Shirley Cards Set Photography’s Skin-Tone Standard.” November 13, 2014. NPR https://www.npr.org/2014/11/13/363517842/for-decades-kodak-s-shirley-cards-set-photography-s-skin-tone-standard.
- Eubanks, Virginia. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York, NY : St. Martin’s Press, 2018.
- Farivar, Cyrus. “Central Londoners to be subjected to facial recognition test this week.” Ars Technica. December 17, 2018. https://arstechnica.com/tech-policy/2018/12/londons-police-will-be-testing-facial-recognition-in-public-for-2-days/
- Hamraie, Aimi, & Fritsch, Kelly. “Crip technoscience manifesto.” Catalyst: Feminism, Theory, Technoscience, 5(1), 1-34. 2019. http://www.catalystjournal.org | ISSN: 2380-3312 https://catalystjournal.org/index.php/catalyst/article/view/29607/24771.
- Hao, Karen. “Can You Make an AI That Isn’t Ableist?” MIT Technology Review. November 28, 2018. https://www.technologyreview.com/s/612489/can-you-make-an-ai-that-isnt-ableist/.
- Hoffman, Kelly M. & Sophie Trawalter, Jordan R. Axt, M. Norman Oliver. “Racial bias in pain assessment.” Proceedings of the National Academy of Sciences. April 2016, 113 (16) 4296-4301; DOI: 10.1073/pnas.1516047113.
- Joseph, George and Lipp, Kenneth. “IBM Used NYPD Surveillance Footage to Develop Technology That Lets Police Search by Skin Color.” The Intercept. September 6, 2018. https://theintercept.com/2018/09/06/nypd-surveillance-camera-skin-tone-search/.
- Keyes, Os. “Automating autism: Disability, discourse, and Artificial Intelligence.” Journal of Sociotechnical Critique, 1(1), 2020, 1-31. https://doi.org/10.25779/89bj-j396
- Noble, Safiya U. Algorithms of Oppression: How Search Engines Reinforce Racism. New York : New York University Press, 2018.
- “The Overlooked Reality of Police Violence Against Disabled Black Americans,” The Takeaway. WNYC. June 15, 2020. https://www.wnycstudios.org/podcasts/takeaway/segments/police-violence-disabled-black-americans.
- “The Perpetual Line-up: Unregulated Police Face Recognition in America;” Garvie, Clare; Bedoya, Alvaro; Frankle, Jonathan. Georgetown Law’s Center for Privacy & Technology. https://www.law.georgetown.edu/privacy-technology-center/publications/the-perpetual-line-up/.
- Rose, Adam. “Are Face-Detection Cameras Racist?” Time. January 22, 2010. http://content.time.com/time/business/article/0,8599,1954643,00.html.
- Sauder, Kim. “When Celebrating Accessible Technology is Just Reinforcing Ableism.” Crippled Scholar. July 4, 2015. https://crippledscholar.com/2015/07/04/when-celebrating-accessible-technology-is-just-reinforcing-ableism/
- Seiberth, Sophi; Yoshioka, Jeremy; and Smith, Daniel (2017). “Physiognomy.” Measuring Prejudice: Race Sciences of the 18th-19th Centuries. http://scalar.usc.edu/works/measuring-prejudice/blank.
- Shew, Ashley. (2020). “Ableism, Technoableism, and Future AI.” IEEE Technology and Society Magazine, 39(1), 40-85. doi: 10.1109/MTS.2020.2967492.
- Spivak, Gayatri Chakravorty. “Can the Subaltern Speak?” 1988.
- Stein, Melissa N. Measuring Manhood: Race and the Science of Masculinity, 1830–1934. University of Minnesota Press, 2015. https://jstor.org/stable/10.5749/j.ctt189ttgm.
- Washington, Harriet A. Medical Apartheid: The Dark History of Medical Experimentation on Black Americans from Colonial Times to the Present. 1st ed. New York: Doubleday, 2006.
- Wells-Jensen, Sheri. “The Case for Disabled Astronauts.” Scientific American: Observations. May 30, 2018. https://blogs.scientificamerican.com/observations/the-case-for-disabled-astronauts/
- Williams, Damien Patrick. “Technology, Disability, & Human Augmentation.” A Future Worth Thinking About. April 15, 2017. https://afutureworththinkingabout.com/?p=5162
- “What It’s Like To Be a Bot,” Real Life Magazine, May 7, 2018. http://reallifemag.com/what-its-like-to-be-a-bot/
- “Consciousness and Conscious Machines: What’s At Stake?” appearing in Papers of the 2019 Towards Conscious AI Systems Symposium, co-located with the Association for the Advancement of Artificial Intelligence 2019 Spring Symposium Series (AAAI SSS-19), Stanford, CA, March 25-27, 2019. http://ceur-ws.org/Vol-2287/paper5.pdf
- “Heavenly Bodies: Why It Matters That Cyborgs Have Always Been About Disability, Mental Health, and Marginalization.” (June 8, 2019). Available at SSRN: https://ssrn.com/abstract=3401342 or http://doi.org/10.2139/ssrn.3401342
- “Fitting the Description: Historical and Sociotechnical Elements of Facial Recognition and Anti-Black Surveillance,” appearing in The Journal of Responsible Innovation, edited by Shannon N. Conley, Erik Fisher, and Emily York; published by Taylor and Francis. https://doi.org/10.1080/23299460.2020.1831365.
- Williams, Rua Mae. (2019). “Metaeugenics and Metaresistance: from manufacturing the ‘includable body’ to walking away from the broom closet,” Canadian Journal of Children’s Rights, (in press).
- (2018). “Autonomously Autistic: exposing the locus of autistic pathology,” Canadian Journal of Disability Studies, vol. 7, no. 2, pp. 60–82.
- With Gilbert, J. E. (2019). “‘Nothing About Us Without Us’: Transforming Participatory Research and Ethics in Human Systems Engineering” in Diversity, Inclusion, and Social Justice in Human Systems Engineering. Human Factors and Ergonomics Society. (In press)
00:20:52,800 –> 00:21:05,790
Who is in the room when we make these decisions? Who is driving these questions that we ask? And who is shaping the answers that we give?
00:21:06,690 –> 00:21:11,970
Not just at the end of the day, but at the very beginning of the day.
00:21:14,190 –> 00:21:15,210
Thank you very much.
00:21:16,080 –> 00:21:23,550
This is where you can find me online. This is my email. If you have any questions, I will be happy to answer them at the end.