{"id":5600,"date":"2021-10-08T01:21:37","date_gmt":"2021-10-08T05:21:37","guid":{"rendered":"https:\/\/afutureworththinkingabout.com\/?p=5600"},"modified":"2023-01-14T22:23:54","modified_gmt":"2023-01-15T03:23:54","slug":"im-not-afraid-of-ai-overlords-im-afraid-of-whoevers-training-them-to-think-that-way","status":"publish","type":"post","link":"https:\/\/afutureworththinkingabout.com\/?p=5600","title":{"rendered":"I\u2019m Not Afraid of AI Overlords\u2014 I\u2019m Afraid of Whoever&#8217;s Training Them To Think That Way"},"content":{"rendered":"<p><strong><span style=\"text-decoration: underline;\">I\u2019m Not Afraid of AI Overlords\u2014 I\u2019m Afraid of Whoever&#8217;s Training Them To Think That Way<\/span><\/strong><\/p>\n<p>by Damien P. Williams<\/p>\n<p>I want to let you in on a secret: According to Silicon Valley\u2019s AI&#8217;s, <a href=\"https:\/\/www.npr.org\/2021\/09\/04\/1034368231\/facebook-apologizes-ai-labels-black-men-primates-racial-bias\">I\u2019m not human<\/a>.<\/p>\n<p>Well, maybe they think I\u2019m human, but they don\u2019t think I\u2019m <a href=\"https:\/\/www.aclu.org\/blog\/privacy-technology\/surveillance-technologies\/amazons-face-recognition-falsely-matched-28\">me<\/a>. Or, if they think I\u2019m me and that I\u2019m human, they think I don\u2019t deserve <a href=\"https:\/\/www.statnews.com\/2020\/06\/17\/racial-bias-skews-algorithms-widely-used-to-guide-patient-care\/\">expensive medical care<\/a>. Or that I pose a <a href=\"https:\/\/www.propublica.org\/article\/machine-bias-risk-assessments-in-criminal-sentencing\">higher risk of criminal recidivism<\/a>. 
Or that my <a href=\"https:\/\/www.technologyreview.com\/2020\/08\/07\/1006132\/software-algorithms-proctoring-online-tests-ai-ethics\/\">fidgeting behaviours<\/a> or <a href=\"https:\/\/www.nytimes.com\/2020\/04\/04\/us\/politics\/coronavirus-zoom-college-classes.html\">culturally-perpetuated shame about my living situation<\/a> or my <a href=\"https:\/\/www.theverge.com\/2021\/4\/8\/22374386\/proctorio-racial-bias-issues-opencv-facial-detection-schools-tests-remote-learning\">race<\/a> mean I\u2019m more likely to be cheating on a test. Or that I want to see morally repugnant posts that my friends have commented on to call morally repugnant. Or that I shouldn\u2019t be given a <a href=\"https:\/\/www.cbsnews.com\/news\/redlining-what-is-history-mike-bloomberg-comments\/\">home loan<\/a> or a <a href=\"https:\/\/www.reuters.com\/article\/global-tech-ai-hiring\/analysis-ai-is-taking-over-job-hiring-but-can-it-be-racist-idUSL5N2NF5ZC\">job interview<\/a> or <a href=\"https:\/\/cdt.org\/insights\/report-challenging-the-use-of-algorithm-driven-decision-making-in-benefits-determinations-affecting-people-with-disabilities\">the benefits I need to stay alive<\/a>.<\/p>\n<p>Now, to be clear, \u201cAI\u201d is a misnomer, for several reasons, but we don\u2019t have time, here, to really dig into all the thorny discussion of values and beliefs about what it means to think, or <a href=\"http:\/\/reallifemag.com\/what-its-like-to-be-a-bot\/\">to be<\/a> a pow3r<a href=\"https:\/\/afutureworththinkingabout.com\/?p=5347\">mind<\/a>\u2014 especially because we need to take our time talking about why values and beliefs matter to conversations about \u201cAI,\u201d at all. So instead of \u201cAI,\u201d let\u2019s talk specifically about algorithms, and machine learning.<\/p>\n<p>Machine Learning (ML) is the name for a set of techniques for systematically reinforcing patterns, expectations, and desired outcomes in various computer systems. 
These techniques allow those systems to make sought after predictions based on the datasets they\u2019re trained on. ML systems learn the patterns in these datasets and then extrapolate them to model a range of statistical likelihoods of future outcomes.<\/p>\n<p>Algorithms are sets of instructions which, when run, perform functions such as searching, matching, sorting, and feeding the outputs of any of those processes back in on themselves, so that a system can learn from and refine itself. This feedback loop is what allows algorithmic machine learning systems to provide carefully curated search responses or newsfeed arrangements or facial recognition results to consumers like me and you and your friends and family and the police and the military. And while there are many different types of algorithms which can be used for the above purposes, they all remain sets of encoded instructions to perform a function.<\/p>\n<p>And so, in these systems\u2019 defense, it\u2019s no surprise that they think the way they do: That\u2019s exactly how we\u2019ve told them to think.<\/p>\n<div id=\"attachment_5602\" style=\"width: 637px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-5602\" class=\"wp-image-5602 size-large\" src=\"https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2021\/10\/POI_FB_0201-1024x576.webp\" alt=\"\" width=\"627\" height=\"353\" srcset=\"https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2021\/10\/POI_FB_0201-1024x576.webp 1024w, https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2021\/10\/POI_FB_0201-300x169.webp 300w, https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2021\/10\/POI_FB_0201-768x432.webp 768w, https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2021\/10\/POI_FB_0201.webp 1280w\" sizes=\"auto, (max-width: 627px) 100vw, 627px\" \/><p id=\"caption-attachment-5602\" class=\"wp-caption-text\">[Image of Michael Emerson as 
Harold Finch, in season 2, episode 1 of the show <em>Person of Interest<\/em>, &#8220;The Contingency.&#8221; His face is framed by a box of dashed yellow lines, the words &#8220;Admin&#8221; to the top right, and &#8220;Day 1&#8221; in the lower right corner.]<\/p><\/div>\n<p><!--more--><\/p>\n<p>The fact of the matter is, this isn\u2019t much of a secret. That\u2019s because these are not new issues and these problems and their root causes have been occurring for decades, or even centuries in some cases\u2014 a fact which makes their persistence more shocking and dismaying, rather than less. For instance, though Facebook\u2019s is the most recent instance, the problem of facial recognition systems miscategorizing Black people as non-human primates goes back at least as far as <a href=\"https:\/\/www.forbes.com\/sites\/mzhang\/2015\/07\/01\/google-photos-tags-two-african-americans-as-gorillas-through-facial-recognition-software\/?sh=3ff08fc8713d\">Google\u2019s 2015 incident<\/a> of the same type. And that problem, itself, is connected to the fact that the history of photography hasn\u2019t done well by darker skin tones <a href=\"https:\/\/www.tandfonline.com\/doi\/full\/10.1080\/23299460.2020.1831365\">until relatively recently<\/a>.<\/p>\n<p>In that same vein, the problems with Facebook\u2019s ranking and delivery algorithms for their newsfeed go back to well before the Cambridge Analytica incident, and their intentional manipulation of emotional, mental, and social states has been known to be baked into their <a href=\"https:\/\/afutureworththinkingabout.com\/?attachment_id=5563\">advertising revenue<\/a> and other profit models, even before whistleblower Frances Haugen unveiled these most recent <a href=\"https:\/\/www.wsj.com\/articles\/facebook-knows-instagram-is-toxic-for-teen-girls-company-documents-show-11631620739\">scandals<\/a> to the <a href=\"https:\/\/www.wsj.com\/articles\/the-facebook-files-11631713039\">world<\/a>. 
Not to mention, the attendant harms of unmoderated comment sections have been known for decades\u2014 which of course hasn\u2019t stopped Facebook and Twitter from using those same logics to drive clicks and \u201cengagement.\u201d<\/p>\n<p>So I\u2019ll say it again: It is no surprise that these systems reproduce bad prejudicial social outcomes, when they have been repeatedly and consistently designed, built, trained, and taught to operate with these prejudicial values in mind.<\/p>\n<p>All of these bad outcomes are still happening because the people in charge of commissioning, designing, building, and training the algorithmic systems fundamentally refuse to look at the prejudicially biased contexts we swim in\u2014 contexts which then foster the values we all hold, which creators then imbue into their creations.<\/p>\n<p>I am by no means the only person to say this. I\u2019m just one among the host of people like <a href=\"http:\/\/gendershades.org\/\">Timnit Gebru and Joy Buolamwini<\/a> and <a href=\"https:\/\/techanddisability.com\/\">Ashley Shew<\/a> and <a href=\"https:\/\/www.macfound.org\/fellows\/class-of-2021\/safiya-noble\">Safiya Noble<\/a> and <a href=\"https:\/\/www.ruhabenjamin.com\/race-after-technology\">Ruha Benjamin<\/a> and <a href=\"https:\/\/medium.com\/s\/story\/data-violence-and-how-bad-engineering-choices-can-damage-society-39e44150e1d4\">Anna Lauren Hoffman<\/a> and <a href=\"https:\/\/virginia-eubanks.com\/books\/\">Virginia Eubanks<\/a> and <a href=\"https:\/\/hashtagcauseascene.com\/\">Kim Crayton<\/a> and many, <a href=\"https:\/\/docs.google.com\/document\/d\/1-TbSYMiFhJbuswzfWJ3q8ET2H3ohhuIK\/edit?usp=sharing&amp;ouid=112030251199838137436&amp;rtpof=true&amp;sd=true\">many others<\/a> who have highlighted the individual and cultural harms of algorithmic and other technological tools and systems. 
The problem is, if corporations, regulators, and the general public heed these voices at all, it\u2019s only when something has already gone wrong\u2014 and even then only to <a href=\"https:\/\/www.degruyter.com\/document\/doi\/10.1515\/9789048550180-016\/html\">ethics-wash<\/a> their internal procedures.<\/p>\n<p>And the people who have spent their lives studying the social implications of technologies are also the same ones who keep trying to tell you that letting corporations regulate themselves\u2014 or letting them set up disingenuous \u201c<a href=\"https:\/\/www.theatlantic.com\/international\/archive\/2021\/05\/facebook-oversight-board-trump-problem\/618809\/\">oversight boards<\/a>\u201d to do it for them\u2014 is a terrible idea. Powerful corporations will obfuscate and outright lie about the harms they cause, and without real regulatory oversight, nothing will stop them. And so what all of this demonstrates is that calls for \u201cAlgorithmic Transparency\u201d are only one part of how to address these problems. Intelligibility of those algorithms to the general public has to be another part, and meaningful accountability for the harms these systems and their parent corporations perpetrate, another alongside both of those.<\/p>\n<p>That is to say, knowing how these <a href=\"https:\/\/www.propublica.org\/article\/breaking-the-black-box-how-machines-learn-to-be-racist\">intentionally blackboxed<\/a> systems learn what they learn and do what they do is important, especially as companies move to <a href=\"https:\/\/themarkup.org\/citizen-browser\/2021\/09\/21\/facebook-rolls-out-news-feed-change-that-blocks-watchdogs-from-gathering-data\">eliminate even what little access researchers have managed to scrape together<\/a>. 
But that knowledge is nothing if <a href=\"https:\/\/en.panoptykon.org\/algorithms-of-trauma\">we can&#8217;t meaningfully enforce changes to the design, construction, and implementation<\/a> of these companies\u2019 systems.<\/p>\n<p>How can anyone claim to be surprised that Facebook knew exactly how much psychosocial body image damage Instagram has been doing to everyone, especially young women and girls, and how much cultural damage their main newsfeed algorithms have perpetrated, even as they tried to deflect and minimize questions about it? Not only has Facebook obfuscated and outright lied about its deadly impacts before, such as during and after the <a href=\"https:\/\/www.bbc.com\/news\/world-asia-55929654\">Rohingya Genocide in Myanmar<\/a>, but in so doing, as now, they followed the model of large, harmful corporations like Philip Morris (<a href=\"https:\/\/news.stanford.edu\/pr\/2007\/pr-proctor-021407.html\">and Big Tobacco as a whole<\/a>), pulling the same tactics <a href=\"https:\/\/twitter.com\/jomc\/status\/1440529505907404805\">to a T<\/a>.<\/p>\n<p>Which, if anything like \u201cgood\u201d can be said to come from all of this, at least we know that we can add criminal conspiracy and racketeering charges in on top of our antitrust and monopoly complaints, in the wake of the <a href=\"https:\/\/www.reuters.com\/legal\/litigation\/shareholder-firms-fight-lead-facebook-privacy-derivative-suit-delaware-2021-08-11\/\">simply massive civil suit opened in August of 2021<\/a>.<\/p>\n<p>In light of this landscape of increasing Big Tech pushback, some corporations are currently trying a \u201cget out ahead of it\u201d strategy.
They either make pre-emptive changes to their algorithms before specific grievances can be made, as <a href=\"https:\/\/www.npr.org\/2021\/09\/29\/1041493544\/youtube-vaccine-misinformation-ban\">YouTube has recently done<\/a> with vaccine misinformation, or they performatively acknowledge that information \u201cmay come to light,\u201d as <a href=\"https:\/\/www.nytimes.com\/2021\/10\/02\/technology\/whistle-blower-facebook-memo.html\">Facebook has done with regard<\/a> to its ongoing whistleblower situation. The problem with these strategies is that they don\u2019t get at the heart of the real problems.<\/p>\n<p>Because these corporations\u2019 algorithms are, in fact, responsible for these damages. As noted above, these platforms have pioneered ML techniques for content weighting, preference ranking, and <a href=\"https:\/\/slate.com\/technology\/2014\/06\/facebook-unethical-experiment-it-made-news-feeds-happier-or-sadder-to-manipulate-peoples-emotions.html\">sentiment manipulation<\/a>, all of which have been learned, gamed, and emulated by everyone from rival algorithmically-mediated platforms to malicious <a href=\"https:\/\/www.technologyreview.com\/2021\/09\/16\/1035851\/facebook-troll-farms-report-us-2020-election\/\">bad actors <b><i>on<\/i><\/b> all of these platforms<\/a>. Facebook in particular has spurred an ecosystem which rewards those who spread\u2014 while actively disincentivizing anyone else\u2019s <b><i>understanding of<\/i><\/b>\u2014 emotionally charged, affectively resonant content.<\/p>\n<p>We know all of this.
The question is, what are we going to <b><i>do<\/i><\/b> about it?<\/p>\n<p><div id=\"attachment_5612\" style=\"width: 304px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-5612\" class=\"wp-image-5612 size-medium\" src=\"https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2021\/10\/NoFBing-1-294x300.png\" alt=\"\" width=\"294\" height=\"300\" srcset=\"https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2021\/10\/NoFBing-1-294x300.png 294w, https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2021\/10\/NoFBing-1.png 300w\" sizes=\"auto, (max-width: 294px) 100vw, 294px\" \/><p id=\"caption-attachment-5612\" class=\"wp-caption-text\">[Image of a red circle with a diagonal slash through it, around a blue Facebook logo on a black background, the bottom of the \u201cf\u201d rendered as cigarette ash]<\/p><\/div>For one thing, we may have to accept that, ultimately, some technological interventions might just need to be stopped, as a whole, until we can seriously reckon with their implications and consequences. Or that certain platforms have created so much harm, that they need to be broken up and fundamentally restructured. The former, at least, is a position with which the United Nations seems to agree, their Office of the High Commissioner for Human Rights having just <a href=\"https:\/\/www.ohchr.org\/en\/2021\/09\/artificial-intelligence-risks-privacy-demand-urgent-action-bachelet\">called for<\/a> \u201ca moratorium on the sale and use of artificial intelligence (AI) systems that pose a serious risk to human rights until adequate safeguards are put in place.\u201d<\/p>\n<p>Now, there are plenty of good arguments for moratoria on development of AI\/ML systems, and the \u201cadequate safeguards [to protect human rights]\u201d language is good, but even this effort needs more specificity.
Other moratoria and calls for abolition have clearly laid out the actual and potential harms to various communities, with an eye toward specific redress, whereas this call leaves a great deal of undefined space. While we can always benefit from leaving space for more research, when thinking about rights and justice, rather than merely considering probabilities and risk ratios, we need to clearly outline and discuss our values.<\/p>\n<p>The full text of that <a href=\"https:\/\/www.ohchr.org\/EN\/HRBodies\/HRC\/RegularSessions\/Session48\/Documents\/A_HRC_48_31_AdvanceEditedVersion.docx\">UN AI Human Rights Report<\/a> contains not one use of the word \u201cvalues,\u201d and most uses of \u201cjustice\u201d are in the context of the carceral system. There are in fact no explicit namings of racism, sexism, ableism, transphobia, or homophobia, here, despite guidance that,<\/p>\n<blockquote><p>Particular attention should be paid to disproportionate impacts on women and girls, lesbian, gay, bisexual, transgender and queer individuals, persons with disabilities, persons belonging to minorities, older persons, persons in poverty and other persons who are in a vulnerable situation. (See section IV, subsection B, on page 13.)<\/p><\/blockquote>\n<p>Instead, it leans on \u201cbias,\u201d a word which many take to be synonymous with \u201cprejudice,\u201d but is in fact much more like \u201chabits of thought,\u201d in that there is no way to be a thinking, perceiving person, without developing some. Two instances in which our biases may become a problem are when we don\u2019t investigate them, or when they\u2019re bigoted\u2014or both.<\/p>\n<p>But bias also carries connotations of individual action and responsibility\u2014 something a person holds or does, and something that can be countered with specific, discrete changes. 
By focusing on \u201cbias\u201d and \u201cdiscrimination,\u201d the UN Report, like many other discussions of AI and tech regulation, takes what could be a clear discussion of institutional injustice, and veers away at the last second into the realm of individual instances and personal choices. This kind of language leaves room for harms both individual and systemic, potentially letting abusers take advantage of the public\u2019s expectations.<\/p>\n<p>Now, this isn\u2019t to say there\u2019s no merit in either the UN\u2019s report itself or their follow-up call for an AI moratorium; in fact, I think these are extremely important first steps. Instead, what I\u2019m saying is that we need to be sincerely willing to explicitly name the problems we\u2019re trying to face. Whose values are involved? Which rights are at stake? Why exactly are these important? How long have these situations been going on, and what cultural assumptions are they built on?<\/p>\n<p>Because it\u2019s these kinds of questions that let us clarify exactly whose perspectives we need to bring in and what kind of work we need to do.<\/p>\n<p>A large part of the \u201cAI\u201d work being done by current algorithmic and ML systems is bound up in human values of and thinking around things like power, punishment, and oppression\u2014 a situation numerous people seem to think can be solved by yet more algorithmic ML systems. But seeking <a href=\"https:\/\/en.wikipedia.org\/wiki\/Technological_fix\">techno-fixes<\/a> to values problems is a losing proposition, because all you will do is shift where and how those same bad values get worked in. 
That is, if we don\u2019t tackle the foundational questions of what we believe in and hope to achieve, then each new technological fix will just reproduce old harms in new ways, while letting us believe we\u2019ve \u201csolved\u201d the problem.<\/p>\n<p>Yes, the bad outcomes of algorithmic systems and tools are about how they replicate, reinforce, and iterate upon racism and sexism and ableism and transphobia and homophobia and fatphobia, but all of these human-created systems also express human values about having power-over other people, whether through medical classifications, capitalist valuation of human labour, or otherwise. If technological projects are undertaken without first examining these oppressive logics at their root, then all they are likely to change is who wields the whip.<\/p>\n<p>A great deal of lip service is given to the idea of a \u201ccorporate culture,\u201d but not enough attention and genuine intent are paid to <a href=\"https:\/\/most.oercommons.org\/courseware\/lesson\/686\/overview\">what a \u201cculture\u201d is<\/a>. A culture comprises beliefs, practices, values, rules, expectations, and assumptions about the way the world works. A culture is inherently social, and so a human culture has to factor in questions of human social understandings.
This means that if you seriously want to change corporate tech\u2019s culture so that we can put a stop to the racist, sexist, transphobic, ableist, and otherwise bigoted and oppressive outcomes of algorithms, then you need to change:<\/p>\n<p>Your training courses;<\/p>\n<p>Your data sets;<\/p>\n<p>Your dev teams;<\/p>\n<p>Your managers;<\/p>\n<p>Your CEOs;<\/p>\n<p>Your funding sources;<\/p>\n<p>Your research questions;<\/p>\n<p>Your aims;<\/p>\n<p>Your Beliefs;<\/p>\n<p>Your Values.<\/p>\n<p>And doing all of that means more than hiring team after team of ethicists to serve on an \u201cEthics Board\u201d which answers to no one but you, whose recommendations you can ignore as you see fit, and whom <a href=\"https:\/\/www.wired.com\/story\/prominent-ai-ethics-researcher-says-google-fired-her\/\">you can then fire<\/a> when they <a href=\"https:\/\/www.bbc.com\/news\/technology-56135817\">give you news you don\u2019t like<\/a>. Making these changes means integrating perspectives from academic disciplines like disability studies, philosophy, sociology, and science and technology studies, bringing them in from the ground up, rather than as an afterthought once something goes wrong.<\/p>\n<p>But this isn\u2019t just about the values at play in Silicon Valley or corporate cultures\u2014 it\u2019s about all of our values, as we engage with technology. Bringing the right perspectives into the creation and regulation of tech requires our whole culture to be forethoughtful about potential harms, and to recognise and clearly state when reform isn\u2019t possible, and we must instead consider abolishing a tool or system, or breaking up a company.
To do that, we must value those people with deep knowledge of how science, technology, ethics, justice, and human values all intersect\u2014 people who very often happen to be among the most marginalized and disregarded, when it comes to the truth of their own lived experience.<\/p>\n<p>Women, disabled people, LGBTQIA individuals, PoC, the neurodivergent, and other marginalized and minoritized groups are often most expert at thinking about the harmful and unjust ways a technology will be used, because they or other members of their community have directly experienced those or similar harms. At a societal level, we have to be willing both to recognize this lived experience for the expertise it is, placing it in conversation with the expertise of researchers and theorists\u2014 some of whom might be <a href=\"https:\/\/disabilityvisibilityproject.com\/podcast-2\/\">the same people<\/a>\u2014 and to put all of those experts in positions of meaningful, high-level oversight and authority.<\/p>\n<p>Specifically, these critical experts must be in C-Suites\u2014 let\u2019s call them \u201c<a href=\"https:\/\/twitter.com\/Wolven\/status\/1445934244027584517\">Chief Social Implications Officers<\/a>\u201d or &#8220;Chief Values Integration Officers&#8221; or something along those lines\u2014 directly advising companies\u2019 boards, including giving orders to stop work on certain products, or split projects off from the main body, if harm will be done by continuing to grow and develop them.
But even before that, these experts must also be on governmental regulatory oversight boards, providing expert-level public testimony, and guiding public policy, including recommending things like making changes to laws around the increase of shareholder value, even if it results in social and ethical damage.<\/p>\n<p>All of this, rather than placing social science experts in positions where they&#8217;re forced to merely pass nebulous and easily \u201clost\u201d recommendations up a corporate or bureaucratic chain. Without this kind of vehement, unequivocal commitment to recognizing, valuing, and empowering the social sciences, humanities, and lived experience as realms of expertise, we\u2019re likely to continue making technologies which reflect only the values we unconsciously and accidentally embed into them, rather than the ones we\u2019d prefer. Thankfully, there is some evidence that these kinds of adjustments are already being made, possibly indicating that we can achieve even more meaningful change.<\/p>\n<p>In 2021, the Biden Presidential Administration has already named Alondra Nelson to the position of <a href=\"https:\/\/www.nature.com\/articles\/d41586-021-00159-z\">Deputy Director for Science and Society in the White House Office of Science and Technology Policy<\/a>, and nominated <a href=\"https:\/\/www.law.georgetown.edu\/news\/georgetown-laws-alvaro-bedoya-nominated-as-ftc-commissioner\/\">Alvaro M. Bedoya<\/a> to serve as a member of the Federal Trade Commission. Nelson\u2019s extensive body of work is situated in the history of race and medicine, with a focus on how genomics has been constructed along racialized lines.
In his role as the founding director of the Georgetown Center for Privacy and Technology, Bedoya worked to highlight many of the vast problems of algorithmic surveillance, including co-authoring a <a href=\"https:\/\/www.law.georgetown.edu\/privacy-technology-center\/publications\/the-perpetual-line-up\/\">massive report on the racial prejudices embedded in the police use of facial recognition<\/a>, led by <a href=\"https:\/\/www.law.georgetown.edu\/privacy-technology-center\/people\/\">Clare Garvie<\/a>.<\/p>\n<p><div style=\"width: 2010px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" class=\"img-responsive\" src=\"https:\/\/www.perpetuallineup.org\/sites\/default\/files\/MainHeaderImage.jpg\" alt=\"Photo of people being recognized by facial recognition software\" width=\"2000\" height=\"900\" \/><p class=\"wp-caption-text\">[Image of people being recognized by facial recognition software, from The Perpetual Line Up.]<\/p><\/div>These two individuals are not tech industry insiders, but careful, critical scholars working at the intersection of the social sciences, the humanities, science, and technology. Both Nelson, a Black woman, and Bedoya, a Latinx man, are experts at both the technical aspects of their work, and at thinking about the sociocultural implications of scientific advancements. Placing them in these high-profile federal positions sends a clear signal about the values we hold and directions we want to head, when it comes to science and technology policy, in the US.
It also provides a template for the kind of regulatory oversight Big Tech needs to be willing to undergo.<\/p>\n<p>Because judging by statements from <a href=\"https:\/\/www.facebook.com\/4\/posts\/i-wanted-to-share-a-note-i-wrote-to-everyone-at-our-company-hey-everyone-its-bee\/10113961365418581\/\">Mark Zuckerberg<\/a> and <a href=\"https:\/\/www.facebook.com\/yann.lecun\/posts\/10157927949077143\">Yann LeCun<\/a> in response to Haugen\u2019s statements to the US Congress and the press, technology\u2019s designers and CEOs still refuse to acknowledge either culpability for what they\u2019ve built, or the recommendations of social science and humanities experts who tried to prevent the situation in which we all find ourselves.<\/p>\n<p>Unless and until Silicon Valley, the tech industry, and western society as a whole make these changes about which experts and what kinds of expertise they need to incorporate\u2014 or honestly, even admit that these changes need making\u2014 these bigoted and oppressive algorithmic and technological outcomes will keep happening. And that is because those bigoted and oppressive social values will continue to comprise the water we all swim in\u2014 the invisible architecture of the structures in which we live, and which we all then seek to build for each other.<\/p>\n<p>STS theorist <a href=\"https:\/\/en.wikipedia.org\/wiki\/Melvin_Kranzberg\">Melvin Kranzberg famously said<\/a>, \u201cTechnology is neither good nor bad; nor is it neutral.\u201d What he meant by this is that, depending on the context and the values brought to bear, the implications for a technological invention can vary wildly.
This means that we must always be thinking not just of what might be done with the scientific discoveries we make and technologies we create, but of what we bring with us as we start.<\/p>\n<p>Or, to <a href=\"https:\/\/twitter.com\/MCHammer\/status\/1363908982289559553\">paraphrase MC Hammer<\/a>, when we measure, we must not forget to measure the measurer.<\/p>\n<p>Technological and scientific projects are always already social and philosophical ones, no matter how much some people like to pretend otherwise. The question we have to ask is, since values and social implications will be embedded in any technological tool or system a person creates for the use of other people, wouldn&#8217;t we rather create, administer, and regulate them with help from experts who know how human lives, human values, and human-made technologies intersect?<\/p>\n<p>At least that way we might have a hope of creating those tools and systems well\u2014 and of ceasing to perpetuate and expand on the kinds of prejudicial biases and systemic injustices which put the most marginalized among us at risk and in fear of our literal lives.<\/p>\n<hr \/>\n<p><em>This essay was started on the 11th of September, 2021, and has been updated and amended throughout the unfolding of the Frances Haugen whistleblower scandal. Coverage is moving fast, and I felt that this needed to get published sooner rather than later.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>I\u2019m Not Afraid of AI Overlords\u2014 I\u2019m Afraid of Whoever&#8217;s Training Them To Think That Way by Damien P. Williams I want to let you in on a secret: According to Silicon Valley\u2019s AI&#8217;s, I\u2019m not human. Well, maybe they think I\u2019m human, but they don\u2019t think I\u2019m me. 