{"id":5442,"date":"2019-08-20T15:09:12","date_gmt":"2019-08-20T19:09:12","guid":{"rendered":"https:\/\/afutureworththinkingabout.com\/?p=5442"},"modified":"2019-10-16T00:52:33","modified_gmt":"2019-10-16T04:52:33","slug":"audio-transcripts-and-slides-from-any-sufficiently-advanced-neglect-is-indistinguishable-from-malice","status":"publish","type":"post","link":"https:\/\/afutureworththinkingabout.com\/?p=5442","title":{"rendered":"Audio, Transcripts, and Slides from &#8220;Any Sufficiently Advanced Neglect is Indistinguishable from Malice&#8221;"},"content":{"rendered":"<div>Below are the slides, audio, and transcripts for my talk &#8216;&#8221;Any Sufficiently Advanced Neglect is Indistinguishable from Malice&#8221;: Assumptions and Bias in Algorithmic Systems,&#8217; given at the <a href=\"http:\/\/www.spt2019.org\/\">21st Conference of the Society for Philosophy and Technology<\/a>, back in May 2019.<\/div>\n<p>(Cite as: Williams, Damien P. &#8216;&#8221;Any Sufficiently Advanced Neglect is Indistinguishable from Malice&#8221;: Assumptions and Bias in Algorithmic Systems;&#8217; talk given at the 21st Conference of the Society for Philosophy and Technology; May 2019)<\/p>\n<p>Now, I&#8217;ve got a chapter coming out about this, soon, which I can provide as a preprint draft if you ask, and can be cited as &#8220;Constructing Situated and Social Knowledge: Ethical, Sociological, and Phenomenological Factors in Technological Design,&#8221; appearing in <i>Philosophy And Engineering: Reimagining Technology And Social Progress<\/i>. Guru Madhavan, Zachary Pirtle, and David Tomblin, eds. Forthcoming from Springer, 2019. 
But I wanted to get the words I said in this talk up onto some platforms where people can read them, as soon as possible, for a\u00a0 couple of reasons.<\/p>\n<p>First, the Current Occupants of the Oval Office have very recently <a href=\"https:\/\/www.revealnews.org\/article\/can-algorithms-be-racist-trumps-housing-department-says-no\/\">taken the policy position that algorithms can&#8217;t be racist<\/a>, something which they&#8217;ve done <b><i>in direct response to<\/i><\/b> things like <a href=\"https:\/\/futurism.com\/the-byte\/google-hate-speech-ai-biased\">Google\u2019s Hate Speech-Detecting AI being biased against black people<\/a>, and <a href=\"https:\/\/www.cnbc.com\/2019\/08\/14\/amazon-says-its-facial-recognition-can-now-identify-fear.html\">Amazon claiming that its facial recognition can identify fear<\/a>, without ever accounting for, i dunno, cultural and individual differences in fear expression?<\/p>\n<p><div style=\"width: 442px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"tl-email-image\" src=\"https:\/\/cdn.pixabay.com\/photo\/2018\/03\/23\/08\/09\/flat-3252983_960_720.png\" alt=\"\" width=\"432\" height=\"360\" \/><p class=\"wp-caption-text\">[Free vector image of a white, female-presenting person, from head to torso, with biometric facial recognition patterns on her face; incidentally, go try finding images\u2014even illustrations\u2014of a non-white person in a facial recognition context.]<\/p><\/div><br \/>\nAll these things taken together are what made me finally go ahead and get the transcript of that talk done, and posted, because these are events and policy decisions about which I a) have been speaking and writing for years, and b) have specific inputs and recommendations about, and which are, c) frankly wrongheaded, and outright hateful.<\/p>\n<p>And I want to spend time on it because I think what doesn&#8217;t get through in many of our discussions is that it&#8217;s not just about how 
Artificial Intelligence, Machine Learning, or Algorithmic <b><i>instances<\/i><\/b> get trained, but the <b><i>processes<\/i><\/b> for how and the <b><i>cultural environments in which HUMANS<\/i><\/b> are increasingly taught\/shown\/environmentally encouraged\/socialized to think is the &#8220;right way&#8221; to build and train said systems.<\/p>\n<p>That includes classes and instruction, it includes the institutional culture of the companies, it includes the policy landscape in which decisions about funding get made, because that drives how people have to talk and write and <b><i>think<\/i><\/b> about the work they&#8217;re doing, and that constrains what they will even <b><i>attempt<\/i><\/b> to do or even understand.<\/p>\n<p>All of this is cumulative, accreting into institutional epistemologies of algorithm creation. It is a structural and <b><i>institutional<\/i><\/b> problem.<\/p>\n<p>So here are the <strong>Slides<\/strong>:<\/p>\n<p style=\"text-align: center;\"><iframe loading=\"lazy\" src=\"https:\/\/docs.google.com\/presentation\/d\/e\/2PACX-1vRFHICRrFJXPJ-lVIW6Bt4H-zOZ0soQ7kfTDJcMDI9po2nu-gvq1qJPVGr5H-VgvsCnjOH8HecZNleD\/embed?start=false&amp;loop=false&amp;delayms=3000\" width=\"605\" height=\"471\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"><span style=\"display: inline-block; width: 0px; overflow: hidden; line-height: 0;\" 
data-mce-type=\"bookmark\" class=\"mce_SELRES_start\">\ufeff<\/span><\/iframe><\/p>\n<p>The <strong>Audio<\/strong>:<\/p>\n<audio class=\"wp-audio-shortcode\" id=\"audio-5442-1\" preload=\"none\" style=\"width: 100%;\" controls=\"controls\"><source type=\"audio\/mpeg\" src=\"https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2019\/08\/NeglectMaliceBiasAlgorithms.mp3?_=1\" \/><a href=\"https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2019\/08\/NeglectMaliceBiasAlgorithms.mp3\">https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2019\/08\/NeglectMaliceBiasAlgorithms.mp3<\/a><\/audio>\n<p>[<a href=\"https:\/\/drive.google.com\/open?id=1C-aHTW1oxz_8oNONzdX4d5jzvXlX9G-u\">Direct Link to Mp3<\/a>]<\/p>\n<p>And the <strong>Transcript<\/strong> is here below the cut:<\/p>\n<p><!--more--><\/p>\n<p><strong>0:00<\/strong><br \/>\n\u2026algorithmic bias, machine learning, artificial intelligence, and it is called \u201cAny Sufficiently Advanced Neglect Is Indistinguishable From Malice.\u201d<\/p>\n<p>This title comes from a repurposing of Clarke\u2019s Law, \u201cany sufficiently advanced technology is indistinguishable from magic,\u201d repurposed by Dr. Debbie Chachra. She\u2019s a material science engineer at Olin College of Engineering, up in Boston.<\/p>\n<p>This idea that any sufficiently advanced neglect is indistinguishable from malice means that we\u2019re talking about instances in which a hazard was known, or at least was foreseen by certain groups, was warned about and was warned about persistently enough in relation to either a system put in place, a technology created, or both, but was ignored\u2014that the position of those who forward this knowledge went unheeded. And it then created what were, for those groups, entirely foreseeable harms. 
Those harms then have, in turn, persisted long enough, with enough people raising a cry about them, who then go on to also be subsequently ignored, that those who claim ignorance to them are not really meaningfully distinguished from those who would actively seek to harm.<\/p>\n<p>A harm created through persistent ignorance\u2014through willful ignorance <strong><em>of<\/em><\/strong> harm raised\u2014is not necessarily very different <em><strong>from<\/strong> <\/em>harm intentionally done.<\/p>\n<p>To talk about this, I want to go through a few case studies\u2014a couple of them are going to be very familiar, because we\u2019ve literally just heard about them\u2014but a couple of them will hopefully be new to you. And I\u2019m going to give the case studies of them and I\u2019m going to go ahead and give some of their backgrounds as well.<\/p>\n<p>Various resume sorting algorithms currently exist, which have been trained to sort resumes, based on applicant pools that have problems with things like &#8220;women sounding&#8221; names, or \u201cBlack sounding\u201d names. Those resumes, even when controlled for exactly the same credentials, training, background, and education go on to be rated lower than names that are &#8220;white sounding&#8221; or &#8220;male sounding.&#8221; This has been a persistent problem in resume sorting by humans for a very long time, and when the resume sorting training was given to various machine learning algorithms, those biases made their way into those systems.<\/p>\n<p>Many are trying to find ways around this. Anything from as simple as just removing names from the applicant pool and doing a more \u201cblind\u201d review, to actively antagonizing against that bias, for a different kind of bias to the other end of things. Natural Language Processing has a lot to do with this. 
There\u2019s a case that came out in 2016 that said that biases in natural language corpora, like email caches that are used to train natural language processing for machine learning algorithms, then managed to pass on biases like gender differentials, to those machine learning algorithms and those natural language processing systems.<\/p>\n<p>There\u2019s a reason for this.<\/p>\n<p>Do you know what the largest cache of openly available natural language [corpus] is?<\/p>\n<p>&nbsp;<\/p>\n<p><strong>[AUDIENCE MEMBER]<br \/>\n<\/strong>Google search?<\/p>\n<p><strong>[DPW]<br \/>\n<\/strong>No. It\u2019s the Enron emails.<\/p>\n<p>So. A group of emails, comprising hundreds of thousands, if not millions of lines of natural language text between a very specific class and category of people, who talk in very specific, gendered, powered, and racialized ways about the topics under discussion. This is what\u2019s used because the Enron emails were entered into public domain when Enron was put on trial, and so now it is publicly and freely available to all to use to train their algorithms.<\/p>\n<p>Bail and sentencing algorithms we literally just heard about, but I want to talk about a little bit of a different case. This is from the Broward County, Florida uses of Compas. You see pretty much exactly the same things that Clinton was talking about, though. On the left, we have Bernard Parker, one prior offense for resisting arrest without violence\u2014that is most likely he probably moved his arm while an officer was putting his cuffs on (that is \u201cresisting arrest,\u201d by the way)\u2014zero subsequent offenses until the time at which he was re-entered into the system and rated a high risk. 
Dylann Fugett: one prior offense for attempted burglary, three subsequent offenses after that, for drug possession, rated according to Compas as a low risk.<\/p>\n<p>Same sentencing county, obviously: Broward County, Florida; same, high likelihood of being the same judge doing the sentencing; and these very drastically different outcomes. You can get this information and you can see the Compas algorithm, all of its metrics, looking at roughly 11,000 to 12,000 different subjects in Broward County, Florida; you can find this in ProPublica&#8217;s investigation, looking at, you know, the way that Compas operates.<\/p>\n<p>Self-driving cars: Machine vision, LIDAR, detection of self-driving cars, and how self-driving cars operate on the road. Self-driving cars and the algorithms that allow them to see the road are trained based off of pattern recognition, matching algorithms that are designed to teach it how to see what it sees. What self-driving cars cannot see very well\u2026are Black people and wheelchairs. Especially if a person using a wheelchair does what many people do when they use wheelchairs. I don\u2019t know how many of you know wheelchair users in your lives. Sometimes instead of what we think of as standard sitting-in-a-chair-and-pushing-the-wheels-forward kind of use, people in wheelchairs\u2014wheelchair users tend to push backwards off of things. Because it increases speed and power and the ability to maneuver.<\/p>\n<p>Yeah, a self-driving car has no idea what to do with that. And the likelihood of a self-driving car hitting a wheelchair user who is using a wheelchair in what it considers to be a \u201cnon-standard\u201d way is roughly 90%.<\/p>\n<p>Speaking of imaging systems not being able to see Black people: Google had a very persistent problem of not being able to properly categorize Black people in its image search. In fact, when it was given images of Black individuals, it returned pictures of gorillas or chimpanzees. 
This is not because somebody went in and taught it that Black people are gorillas or chimpanzees. This is because it was not taught anything and it tried to match its \u201cbest fit.\u201d Because <strong><em>no one<\/em><\/strong> in the <strong><em>team<\/em><\/strong> doing the initial training and data collection thought to give it a better way of understanding or seeing\u2026people with darker skin tones. This has a long, <em><strong>long<\/strong> <\/em>history in image collection and production.<\/p>\n<p>We\u2019ll talk about that again in a second. But before we get there, I also want to talk about the fact that Nikon cameras\u2026ask questions like this:<\/p>\n<p>They look at individuals with Asian phenotypes, who happened to be smiling in their image and facial recognition, and ask, \u201cDid somebody blink?\u201d Which\u2014in case you weren\u2019t aware, somehow\u2014is <strong><em>super racist<\/em><\/strong>. But because nobody on that team, nobody on the development, the design, the process, the training, thought that, \u201cHey, there are certain metrics that we are coding for, or assuming to be universal across the board,\u201d this didn\u2019t come up until this was out in the field.<\/p>\n<p>In addition, we have problems like automatic sinks, soap dispensers, and paper towel dispensers not being able to see darker skin tones. I had this problem yesterday in the YMCA building on the fourth floor. 
Had to walk downstairs to use the sink.<\/p>\n<p>You have HP developing a motion-sensitive camera that\u2019s supposed to track faces\u2014to keep the face in the center of the frame, regardless of where the user sits\u2014not being able to see Black people.<\/p>\n<p>You have a long racialized history of darker skin tones not being able to be properly rendered in photographic equipment, that has been digitized and translated into digital camera technologies, and encoded as tools and techniques that get rendered and used in technologies.<\/p>\n<p>Photographic technology was designed and developed for the use of affluent white people. That sounds reductive, but it\u2019s just the fact of the matter. When it was developed, that&#8217;s who would get to use it. When you were asking, \u201cHow do we make sure the details are renderable for the people in this picture,\u201d the details you were looking at were the faces of the white people being photographed. That required the use of certain tools of optics, certain techniques of chemistry, to make sure that the contrast was properly allocated, so that those people <strong><em>could be seen<\/em><\/strong>. What that <em><strong>turned into<\/strong><\/em> was clear detail on light colors\u2026 and almost <em><strong>impossible-<\/strong><\/em>to-render details on anything darker.<\/p>\n<p>If you look at pictures, photography from the 19th and early 20th centuries, what you will see very often\u2014if there happens to somehow for some reason be a Black person in that picture\u2014is a dark blur. These tools and techniques were rendered and reinforced and re-inscribed, over and over and over again, until such point as they became the accepted way of doing photography. That \u201caccepted way\u201d of doing photography then became the techniques and tools, the <em><strong>accepted<\/strong> <\/em>methodology, by which digital camera technologies were trained. 
Even today, a digital camera will \u201cwhite balance\u201d on the lightest thing in the frame before it tries to balance anything darker. This comes from Kodak\u2019s Shirley Cards, which was literally a white woman named Shirley and you would use it to balance the white image for the picture, and you would try to like, get as clear an image of her as you could.<\/p>\n<p>Again, this is not to say that somebody said, \u201cYou know who I don\u2019t want to take pictures of? Black people!\u201d I mean, somebody probably said that, but like, photography <strong><em>as a whole<\/em><\/strong> didn\u2019t say that. Right? What happened is that a series of assumptions, a series of biases about the way things are, and the way things probably would continue to be were inscribed as assumed knowledge. And that re-inscription and re-assumption, once again, got encoded for hundreds of years.<\/p>\n<p>It has made its way into surveillance technology.<\/p>\n<p>One of the weird offshoots of facial recognition not being able to see Black people very well is that, ironically and paradoxically, it gets used with greater frequency, on communities of color. Communities of color are very much more often subject to over-policing, on the assumption that the person \u201cfits the description,\u201d brought in by police, and, in many cases, just outright harassed, because they again \u201cfit the description.\u201d Facial recognition systems that cannot see darker skin tones are still much more likely to be rendered off of mug shots from databases that include those people who \u201cfit the description,\u201d but match no specific features <strong><em>of<\/em><\/strong> description. What this means, ultimately, is that facial recognition is going to be the least accurate on the population on whom it is used the most often.<\/p>\n<p>You can look at the Georgetown Center For Privacy and Technology study, \u201cThe Perpetual Lineup,\u201d from 2016; they talk a lot about exactly this. 
It\u2019s a very long, very detailed and thoroughgoing report. It is fantastic.<\/p>\n<p>However, many people think, &#8220;well the answer to this is inclusion, right? We diversify the teams who are doing the training, we diversify the applicant pool, we diversify the training pool.&#8221; However. That doesn\u2019t work for everybody. In their 2018 talk called \u201cDon\u2019t Include Us, Thank You,\u201d sarah aoun and Nasma Ahmed look at Simone Browne\u2019s <em>Dark Matters<\/em>, talking about the history of facial recognition and photographic technologies as we recently discussed them, and they talk about the idea that, even if we were to make these technologies more accurate <strong><em>for<\/em><\/strong> the people who are most likely to be subject <strong><em>to<\/em><\/strong> them, it\u2019s not going to make them <strong><em>less often used<\/em><\/strong> on communities of color; it will in fact, provide an <em><strong>excuse<\/strong> <\/em>to use them more often on communities of color. Taylor Stone\u2019s talk yesterday about street lights and surveillance made me think a lot about this, when I was thinking about the idea of people who do surveillance on communities that they <em><strong>expect<\/strong><\/em> to be trouble. This is that exact same kind of problem.<\/p>\n<p>Moving away from facial recognition, but back into Google, we can talk about Dylann Roof\u2019s Google history. Dylann Roof killed nine people in a church in Charleston, South Carolina. He did so because he had a moment during the Trayvon Martin\/George Zimmerman trial in which he thought that he was understanding something about the nature of crime and inequality in America. 
And he was driven on his own account, for reasons that he couldn\u2019t quite articulate, to search \u201cBlack on white crime.\u201d We have no way to know exactly what he was returned, but the way that Google Search works (and you can look up exactly how Google Search works), it\u2019s highly likely that the very first thing he saw was\u2014based on his ISP, based on his location, and based on the searches in his area\u2014white supremacist propaganda about statistics about Black on white crime.<\/p>\n<p>I\u2019ve got a short primer basically on how to change your Google Search settings, by the way, if anybody wants to take a look at that later. You can make it not take your results from your surrounding area, and you can make it remove as much about you as possible.<\/p>\n<p>A more recent, tragic example, the Tree of Life synagogue shooting in Pittsburgh, just this last year. A white supremacist wandered into the synagogue and murdered a dozen people. Days\u2014literally <strong><em>days<\/em><\/strong>\u2014after this happened, Facebook\u2019s online ad metric architecture, which has been given wide berth to create categories for advertising on its own remit, generated a category for the white supremacist conspiracy theory, \u201cwhite genocide.\u201d It said to someone who was talking about Jewish life and the hassles and hazards of Jewish life, an investigator from The Intercept, \u201cHey, this post that you\u2019re making seems to fit with the ideas of these people who are interested in &#8216;white genocide,&#8217; there\u2019s about 180,000 people on Facebook who are interested in \u2018white genocide,\u2019 and if you add that word or that phrase in, your post will reach a wider audience.\u201d<\/p>\n<p>Facebook\u2019s ad mechanism did this on its own. It was trained to do this, to find those patterns and to generate and develop ad categories <em><strong>on its own<\/strong><\/em>. 
A week later, Amazon did the same thing.<\/p>\n<p>So how does this happen?<\/p>\n<p>Like I said, it happens because the data sets these things are given, the code that they\u2019re trained with, and assumptions at base\u2014assumptions of things like objectivity or neutrality, or shared knowledge and experience of the world. In each of these cases, again, a community of individuals spoke up in advance and very clearly said, \u201cHey, maybe don\u2019t do that. Maybe don\u2019t create the algorithm to do these things. Maybe think about the outcome of these technologies. Because these technologies have a history, prior to this, in such a way that it is highly likely that they will continue to reproduce systems of oppression, and bias, and bigotry. So maybe rethink what you\u2019re doing.\u201d And in each case, they were not heeded. Why?<\/p>\n<p>Because of what we count as knowledge in the first place. What we think counts as knowledge doesn\u2019t often include things like lived experience, and if it does include lived experience, the person whose lived experience that includes is often not those who <em><strong>have been marginalized<\/strong><\/em> by overarching systems of knowledge, assumption, authority, and expertise. We tend to preference <em><strong>systematized<\/strong> <\/em>knowledge, but systems based on what? Systems based on what kinds of inputs? It&#8217;s a question that we do not ask often enough. And the question that we very rarely ever ask is, \u201cWhat about both of these things in tandem? 
What about people who, through their lived experience, have developed systems of knowledge that are not exactly reproducible for anyone who has not had that lived experience, directly?\u201d<\/p>\n<p>Who gets to know, who gets to lay claim to knowledge, to expertise, to have that knowledge and that expertise heeded and recognized by the wider world?<\/p>\n<p>Sorry for the wall of text.<\/p>\n<p>A few fundamental points, here:<\/p>\n<p>\u201cDifferent phenomenological and post-phenomenological experiences produce different pictures of the world, different systems of knowledge by which to navigate that world.<\/p>\n<p>\u201cCode is not neutral, it is a language and like with any language, translation is an issue; we are translating our knowledge\u2014our lived experience gained from perspective\u2014into technoscientific language [that] systems can understand.<\/p>\n<p>\u201cPeople inscribe their values, their perspectives, into every single tool and system they create <strong><em>and<\/em><\/strong> into how they use them.\u201d<\/p>\n<p>We need to think intersectionally and <strong><em>intersubjectively<\/em><\/strong> about the construction of our knowledge. We need to think about those people who have not been included in the conversation about what it is we ought to be thinking about in the first place, what systems we ought, and ought <strong><em>not<\/em><\/strong>, to be trying to create, and how we ought to be, and ought not to be, deploying them.<\/p>\n<p>For this we can think about Donna Haraway\u2019s notion of the subaltern*; we can think about again, Kimberl\u00e9 Williams Crenshaw\u2019s notion of intersectionality of oppression; we can think about the idea that this is not about some kind of Oppression Olympics, it\u2019s about the idea that different Locuses [sic] of Power, different identities, different subjectivities, will be impressed upon and subjectified in different ways, depending upon the societies in which they live.<\/p>\n<p>I usually ask these 
questions at the outset of my talks, but I feel like this is a good place for them. These foundational questions of things like, how do you travel home? When you travel home outside of a car, where are your keys? What do you do when a police officer pulls you over? What kinds of things about your body do you struggle with whether and when to tell a new romantic partner? If you are able to stand, for how long? How do you prepare your hair on any given morning? What strategies do you have for keeping yourself out of institutional mental care? Without looking, how many exits to the lobby are there, and how fast can you reach them, encountering the fewest people possible? What\u2019s the highest you can reach, unassisted? What\u2019s the best way to reject someone\u2019s romantic advances such that it is less likely that they will physically assault you?<\/p>\n<p>Each and every one of these questions represents a category of lived experience and a system of knowledge developed around a way of behaving and interacting with and predicting in the world\u2014developed around real, everyday lived experiences for trying to survive and save one\u2019s own life.<\/p>\n<p>It matters who gets to know, to be known, and to translate their knowledge into technoscientific systems and devices.<\/p>\n<p>Thank you.<\/p>\n<p><strong>[Begin Question and Answer Portion] 23:06<\/strong><br \/>\nI have here at the end, quite a long list of references to all the things that I was talking about, in case you want to look them up. They are increasing every day.<\/p>\n<p>Questions? Matt.<\/p>\n<p>&nbsp;<\/p>\n<p><strong>[Dr Matthew Brown] 23:30<br \/>\n<\/strong>So, I sort of feel like the beginning of your talk\u2026 so the initial framing of the talk and the end of the talk are kind of competing to interpret the examples in the middle. 
So what I heard in the beginning was a kind of discussion about foreseeable harms and the failure to take into account foreseeable harms\u2014I would, you know, I would use the language of like \u201cmoral recklessness\u201d and \u201cmoral negligence\u201d to think about, and then I would then I would interpret the examples then as a sort of straightforward ethical failure, a failure to take into account the risk in an appropriate way. The end of the talk is sort of epistemologically framed, and it\u2019s about situated knowledges and the failure is, is a failure, kind of, of the knowledge system, right? Rather than being a sort of values or ethics-oriented failure. So I was, I was hoping that you could say something to kind of bring it together.<\/p>\n<p>&nbsp;<\/p>\n<p><strong>[DPW] 24:46<br \/>\n<\/strong>Not \u201crather than;\u201d \u201cbecause of.\u201d The failure in the values and the moral system is <strong><em>because of <\/em><\/strong>the failure in knowledge, <strong><em>because of<\/em><\/strong> the failure in epistemological reckoning, and a recognition of the epistemological <strong><em>status<\/em><\/strong> <strong><em>of<\/em><\/strong> those who might otherwise have prevented these moral failures, or at least mitigated them. That, in and of itself, is <strong><em>also<\/em><\/strong> a moral failure. It\u2019s a systemic moral failure. It\u2019s a moral failure to recognize those people <strong><em>as<\/em><\/strong> holders of knowledge, as caretakers of expertise, to recognize what they have as expertise and systems of knowledge <strong><em>as such<\/em><\/strong>, because they are not presented in very specific, structured ways. And by that mechanism\u2014by that metric\u2014they are then discounted as potential sites and sources of knowledge. 
And in so <strong><em>doing<\/em><\/strong>, we lose access to the moral status, the values framework on which we might have made better choices.<\/p>\n<p>&nbsp;<\/p>\n<p><strong>[MB]<br \/>\n<\/strong>So foresight is the linking piece?<\/p>\n<p>&nbsp;<\/p>\n<p><strong>[DPW]<br \/>\n<\/strong>Yes. Yeah, Gordon.<\/p>\n<p>&nbsp;<\/p>\n<p><strong>[Prof. Gordon Hull] 25:57<\/strong><br \/>\nYes, there\u2019s two ways to construe the claim here, I think, I just want you to tell me if I\u2019m out of it, or which one is more focal point of the thing. So one way is, we say, \u201cwell there\u2019s a problem with the failure to include different kinds of people in these systems; so, if the training data would include Black people, or taken into account that there was over-policing, then it would do a lot better,\u201d right? That\u2019s one way. The other way, and I think this is probably\u2014I don\u2019t remember the Simone Browne book that well, but somehow the idea of quantification and statistics, <strong><em>itself<\/em><\/strong>, is in several ways tied to racism, regarding the slave ship manifests.<\/p>\n<p>&nbsp;<\/p>\n<p><strong>[DPW] 26:38<br \/>\n<\/strong>Yes. Yes.<\/p>\n<p>&nbsp;<\/p>\n<p><strong>[GH] 26:41<br \/>\n<\/strong>Am I reading you right that you sort of favor the Simone Browne Argument?<\/p>\n<p>&nbsp;<\/p>\n<p><strong>[DPW]<br \/>\n<\/strong>Yes. More\u2014Moreover, like it is, it is clear to be able to say that, \u201cyes, we <strong><em>could<\/em><\/strong> do a better job of measuring and quantifying certain categories of people,\u201d but it is also clear that\u2014and this is Browne\u2019s argument, this is Safiya Noble\u2019s, this is Nasma Ahmed\u2019s and sarah aoun\u2019s argument\u2014that, <strong><em>when we have done so in the past<\/em><\/strong>, we have used it specifically to oppress and harm others.<\/p>\n<p>Blood quantum for Native Americans, the you know, measuring of breath and physiognomical capacity for African Americans. 
All of these have histories of very precise and inscribed and careful measurement being used to make the lives of certain groups of people hell.<\/p>\n<p>Yeah.<\/p>\n<p><strong>[Unknown Questioner] 27:36<br \/>\n<\/strong>So, following on Gordon\u2019s question, I had a similar question in mind about which of the sort of critiques you are looking at? Stuff like Joy Buolamwini\u2019s work, for instance, says that we need, technology needs to be more inclusive\u2026<\/p>\n<p>&nbsp;<\/p>\n<p><strong>[DPW] 27:49<br \/>\n<\/strong>Yeah, I meant to, I meant to include Joy\u2019s work in here because the Algorithm Justice Project [sic] is amazing, in terms of her remedy, here.<\/p>\n<p>&nbsp;<\/p>\n<p><strong>[UQ] 27:54<br \/>\n<\/strong>Yeah, it is, though, like it pitches more toward what Gordon framed as the first horn, right? Where you sort of say like, \u201cWe need to make these predictions more accurate across the board,\u201d rather than the latter critique that says \u201cthis, this kind of predictive apparatus is going to reproduce the kinds of projects that we\u2019re worried about, no matter what,\u201d right? So, I also think both are really interesting critiques; I hold with you in thinking that the latter critique, the more radical critique, is more promising. But then my question is, like, given what I think you laid out as like really useful sets of questions that can prime us to get outside of our normal, normative epistemological ways of solving these problems, like, what lesson do we take from that for further developing this radical critique? Do we take this as like\u2014Like, obviously, this is not an argument for, like, more diverse tech teams, right? 
Like, that\u2019s not going to do anything.<\/p>\n<p>&nbsp;<\/p>\n<p><strong>[DPW] 28:52<br \/>\n<\/strong>No, I meant, I meant to put a picture of Stanford\u2019s, like, the AI ethics team that was, like, 120 white people.<\/p>\n<p>&nbsp;<\/p>\n<p><strong>[UQ] 29:01<br \/>\n<\/strong>Well, even so, there are independent arguments for diversifying tech. But, like, this is not one of them. So do we think of this as, like, helping us articulate constraints that we should place on the kinds of predictions that we think these systems should be allowed to make; do we think of it as an argument for abolishing these kinds of systems entirely, and coming up with new ways of doing this kind of work? Like, what\u2019s the normative lesson?<\/p>\n<p>&nbsp;<\/p>\n<p><strong>[DPW] 29:26<\/strong><br \/>\nMy normative lesson is, \u201cHeed Marginalized People.\u201d Fundamentally and foundationally. And, like, don\u2019t include them necessarily in your training data, but include them in the questions that you ask at the outset, and who you think to ask about what you ought to do. I don\u2019t think that there\u2019s going to be (and this has come up a couple of times in our previous conversations)\u2014it\u2019s not going to be a one-size-fits-all answer for what we do and when we do it, in every single kind of case; there\u2019s going to be a matrix [of] shifting, dynamic engagement of needs, stakeholders, and, ultimately, power dynamics that need to be redressed. And the only way that we\u2019re going to be able to do that in a way that harkens a bit towards justice is going to be some way that allows us to say, \u201cOkay, who have we not included?\u201d<\/p>\n<p>So to ask that question, \u201cWho have we not thought about; whose harms, whose needs, whose voice has been, perhaps, speaking, but unheeded, for a very long time? 
And how do we ensure that the things that they have called out as potential sites of failure don\u2019t go unremarked, don\u2019t go unaddressed, for such a long time that we one day turn around and go, \u2018whoever could have thought that this camera or this facial recognition technology might in some several ways be racist?\u2019\u201d Except for all of the hundreds of people who told you that, for the past several decades.<\/p>\n<p>I do recognize, by the way, that it\u2019s really close up on our break time\u2014it\u2019s actually <strong><em>past<\/em><\/strong> the start of our break time, so if y\u2019all want to get some coffee, I definitely understand that and I don\u2019t hold that against you. But if you want to stay here and keep talking, I\u2019m also willing to do that.<\/p>\n<p>Josh.<\/p>\n<p><strong>[Joshua Earle] 31:10<br \/>\n<\/strong>So\u2026are there systems, or, I guess, categories of systems, or categories of things, in this, that we should just, as [unheard name] asked in his question, just not do, or just not <strong><em>try<\/em><\/strong> to do?<\/p>\n<p>&nbsp;<\/p>\n<p><strong>[DPW] 31:33<br \/>\n<\/strong>Yeah, I mean, there have been several instances in the very recent past that, you know, everybody looks at and goes, \u201c\u2026Why would you do that? Like, why would you? Why would you try to make that technology? 
It would fundamentally be used for oppressive structures,\u201d and facial recognition technology in the service of surveillance is one of them, primarily. So, Ruha Benjamin talked about this a couple of weeks ago at the Gender, Bodies, and Technology conference, down in Roanoke; and she was talking about this idea that, you know, looking at the prison industrial complex, looking at the carceral justice system, there are certain technologies that are only ever going to forward the oppressive aims <strong><em>of<\/em><\/strong> carceral justice; that are never going to help us to overturn that oppressive use of power; and that facial recognition for surveillance is primary among them.<\/p>\n<p>So I think that, yeah, we can see instances in which things like that, things like that facial recognition that purports to be able to tell you, you know, if somebody\u2019s gay, like\u2026 When there are systems in the world, there are political systems and regimes in the world, right now, who want to be able to better identify gay people, so that they can <strong><em>kill them<\/em><\/strong>\u2026 Why would you make that? Why would you even prove that concept? Or attempt or <strong><em>purport<\/em><\/strong> to prove that concept, because, by the way, their methodology is <strong><em>garbage<\/em><\/strong>, and they prove nothing. It proves, it proves nothing at all. Why would you even move towards something like it, though? Why would you give someone the tool to be able to say, \u201cI have a[n] \u2018objective\u2019 mathematical system that proves that certain people are gay\u201d?<\/p>\n<p><strong>[Prof. Chlo\u00e9 S. Georas(?)] 33:41<\/strong><br \/>\nI think partly what\u2019s interesting about the technologies that are based on racist biometric assumptions is that they are part of that long cultural history of criminal anthropology. 
It has no scientific\u2014all of that has been debunked, historically, but now it\u2019s re-emerging with this legitimacy, and sort of scientific aura, and being sold and used, for \u201cnational security,\u201d \u201cborder control,\u201d etc. And it\u2019s a <strong><em>reenactment<\/em><\/strong> of that history.<\/p>\n<p><strong>[DPW] 34:08<\/strong><\/p>\n<p>Yes. It\u2019s one of the, one of the links that I put forward\u2014and I think I put it in my reference slides\u2014it\u2019s just physiognomy, all over again; it\u2019s physiognomy and phrenology, again. It\u2019s this idea that there is a particular type of bodily metric that we can make and make \u201cfit.\u201d And that necessarily elides disabled bodies, that necessarily elides fat bodies, it necessarily elides any non-normative body that we want to say is not right, not the right, quote unquote, kind of person, right. And that is exactly what\u2019s being repurposed. But it\u2019s being given, again, this air of this kind of scientific veneer.<\/p>\n<p>It\u2019s being said, \u201cOh, you know, the math says it\u2019s okay.\u201d \u201cThe math\u201d is just another system that we\u2019ve realized biases into; you can make it do anything. You just have to know the system well enough to make it do anything. But that doesn\u2019t mean it\u2019s \u201cobjective.\u201d Doesn\u2019t mean it\u2019s \u201cbias free,\u201d somehow. There\u2019s no such thing.<\/p>\n<p><strong>[DPW] 35:19<br \/>\n<\/strong>Thank y\u2019all. 
Really appreciate you being here.<\/p>\n<p>[*This concept was actually first explored, in this way, by Gayatri Spivak, in her 1988 lecture, \u201cCan the Subaltern Speak?\u201d]<\/p>\n<hr \/>\n<p>Until Next Time.<\/p>\n","protected":false}}