I’m Not Afraid of AI Overlords— I’m Afraid of Whoever’s Training Them To Think That Way
by Damien P. Williams
I want to let you in on a secret: According to Silicon Valley’s AIs, I’m not human.
Well, maybe they think I’m human, but they don’t think I’m me. Or, if they think I’m me and that I’m human, they think I don’t deserve expensive medical care. Or that I pose a higher risk of criminal recidivism. Or that my fidgeting behaviours or culturally-perpetuated shame about my living situation or my race mean I’m more likely to be cheating on a test. Or that I want to see morally repugnant posts that my friends have commented on to call morally repugnant. Or that I shouldn’t be given a home loan or a job interview or the benefits I need to stay alive.
Now, to be clear, “AI” is a misnomer, for several reasons, but we don’t have time, here, to really dig into all the thorny discussion of values and beliefs about what it means to think, or to be a mind— especially because we need to take our time talking about why values and beliefs matter to conversations about “AI,” at all. So instead of “AI,” let’s talk specifically about algorithms, and machine learning.
Machine Learning (ML) is the name for a set of techniques for systematically reinforcing patterns, expectations, and desired outcomes in various computer systems. These techniques allow those systems to make sought-after predictions based on the datasets they’re trained on. ML systems learn the patterns in these datasets and then extrapolate them to model a range of statistical likelihoods of future outcomes.
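To make that mechanism concrete, here is a minimal sketch in Python of what “learning the patterns in a dataset” looks like in practice. It is not any company’s actual system; the loan data, feature names, and group labels are all invented for illustration. The point is only that the model’s “predictions” are extrapolations of whatever patterns, including prejudicial ones, already sit in its training data.

```python
# A minimal sketch of supervised machine learning: the model has no notion of
# fairness or context; it simply reproduces whatever statistical patterns exist
# in the data it is trained on. All data below is invented for illustration.
from sklearn.linear_model import LogisticRegression

# Hypothetical historical loan decisions: each row is
# [income_in_thousands, group], and each label is 1 if the loan was approved,
# 0 if it was denied. Group 1 applicants were historically denied at incomes
# where group 0 applicants were approved.
X_train = [
    [40, 0], [55, 0], [70, 0], [85, 0],   # group 0 applicants
    [40, 1], [55, 1], [70, 1], [85, 1],   # group 1 applicants
]
y_train = [1, 1, 1, 1, 0, 0, 1, 1]        # past (prejudiced) outcomes

model = LogisticRegression().fit(X_train, y_train)

# Two new applicants with identical incomes, differing only by group:
print(model.predict_proba([[60, 0], [60, 1]]))
# The predicted "likelihood of approval" differs between them, because the
# pattern in the historical data, not any judgment about the person, is what
# the model has learned and now extrapolates.
```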
Algorithms are sets of instructions which, when run, perform functions such as searching, matching, sorting, and feeding the outputs of any of those processes back in on themselves, so that a system can learn from and refine itself. This feedback loop is what allows algorithmic machine learning systems to provide carefully curated search responses or newsfeed arrangements or facial recognition results to consumers like me and you and your friends and family and the police and the military. And while there are many different types of algorithms which can be used for the above purposes, they all remain sets of encoded instructions to perform a function.
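As a rough illustration of that feedback loop, the sketch below (again in Python, with entirely made-up items and engagement numbers) ranks a toy newsfeed by an engagement score and then folds each round’s reactions back into the next round’s ranking. Under the assumption, made purely for the sake of the example, that charged content draws more reactions, the loop steadily amplifies it. Nothing here reflects any platform’s real code, only the general shape of the loop.

```python
# A toy "newsfeed" feedback loop: items are ranked by an engagement score, and
# user reactions to what was shown are fed back into that score. The items,
# scores, and engagement model are all hypothetical.
import random

feed_items = {
    "calm local news story": 1.0,
    "emotionally charged outrage post": 1.0,   # both start with equal scores
}

def simulated_engagement(item: str) -> float:
    """Stand-in for real user behaviour: charged content gets more reactions."""
    base = 0.7 if "outrage" in item else 0.3
    return base + random.uniform(-0.1, 0.1)

for round_number in range(10):
    # 1. Rank: show items in order of their current score.
    ranked = sorted(feed_items, key=feed_items.get, reverse=True)
    # 2. Measure engagement on what was shown (the top item gets the most exposure).
    for exposure, item in enumerate(ranked):
        engagement = simulated_engagement(item) / (exposure + 1)
        # 3. Feed the output back in: engagement raises the item's future rank.
        feed_items[item] += engagement

print(feed_items)
# After a few rounds, the charged post's score has pulled far ahead: the loop
# rewards whatever already gets reactions, which is the curation dynamic
# described above.
```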
And so, in these systems’ defense, it’s no surprise that they think the way they do: That’s exactly how we’ve told them to think.
The fact of the matter is, this isn’t much of a secret, because these are not new issues: these problems and their root causes have been recurring for decades, or even centuries in some cases— a fact which makes their persistence more shocking and dismaying, rather than less. For instance, though Facebook’s is the most recent instance, the problem of facial recognition systems miscategorizing Black people as non-human primates goes back at least as far as Google’s 2015 incident of the same type. And that problem, itself, is connected to the fact that the history of photography hasn’t done well by darker skin tones until relatively recently.
In that same vein, the problems with Facebook’s ranking and delivery algorithms for their newsfeed go back to well before the Cambridge Analytica incident, and their intentional manipulation of emotional, mental, and social states has been known to be baked into their advertising revenue and other profit models, even before whistleblower Frances Haugen unveiled these most recent scandals to the world. Not to mention, the attendant harms of unmoderated comment sections have been known for decades— which of course hasn’t stopped Facebook and Twitter from using those same logics to drive clicks and “engagement.”
So I’ll say it again: It is no surprise that these systems reproduce bad prejudicial social outcomes, when they have been repeatedly and consistently designed, built, trained, and taught to operate with these prejudicial values in mind.
All of these bad outcomes are still happening because the people in charge of commissioning, designing, building, and training the algorithmic systems fundamentally refuse to look at the prejudicially biased contexts we swim in— contexts which then foster the values we all hold, which creators then imbue into their creations.
I am by no means the only person to say this. I’m just one among the host of people like Timnit Gebru and Joy Buolamwini and Ashley Shew and Safiya Noble and Ruha Benjamin and Anna Lauren Hoffmann and Virginia Eubanks and Kim Crayton and many, many others who have highlighted the individual and cultural harms of algorithmic and other technological tools and systems. The problem is, if corporations, regulators, and the general public heed these voices at all, it’s only when something has already gone wrong— and even then only to ethics-wash their internal procedures.
And the people who have spent their lives studying the social implications of technologies are also the same ones who keep trying to tell you that letting corporations regulate themselves— or letting them set up disingenuous “oversight boards” to do it for them— is a terrible idea. Powerful corporations will obfuscate and outright lie about the harms they cause, and without real regulatory oversight, nothing will stop them. And so what all of this demonstrates is that calls for “Algorithmic Transparency” are only one part of how to address these problems. Intelligibility of those algorithms to the general public has to be another part, and meaningful accountability for the harms these systems and their parent corporations perpetrate has to be a third, alongside both of those.
That is to say, knowing how these intentionally blackboxed systems learn what they learn and do what they do is important, especially as companies move to eliminate even what little access researchers have managed to scrape together. But that knowledge is nothing if we can’t meaningfully enforce changes to the design, construction, and implementation of these companies’ systems.
How can anyone claim to be surprised that Facebook knew exactly how much psychosocial body image damage Instagram has been doing to everyone, especially young women and girls, and how much cultural damage their main newsfeed algorithms have perpetrated, even as they tried to deflect and minimize questions about it? Not only has Facebook obfuscated and outright lied about its deadly impacts before, such as during and after the Rohingya Genocide in Myanmar, but in so doing, then as now, it followed the playbook of large, harmful corporations like Philip Morris (and Big Tobacco as a whole) to a T.
Which, if anything like “good” can be said to come from all of this, at least we know that we can add criminal conspiracy and racketeering charges on top of our antitrust and monopoly complaints, in the wake of the simply massive civil suit opened in August of 2021.
In light of this landscape of increasing Big Tech pushback, some corporations are currently trying a “get out ahead of it” strategy. They either make pre-emptive changes to their algorithms before specific grievances can be made, as YouTube has recently done with vaccine misinformation, or they performatively acknowledge that information “may come to light,” as Facebook has done with regard to its ongoing whistleblower situation. The problem with these strategies is that they don’t get at the heart of the real problems.
Because these corporations’ algorithms are, in fact, responsible for these damages. As noted above, these platforms have pioneered ML techniques for content weighting, preference ranking, and sentiment manipulation, all of which have been learned, gamed, and emulated by everyone from rival algorithmically-mediated platforms to malicious bad actors on all of these platforms. Facebook in particular has spurred an ecosystem which rewards those who spread— while actively disincentivizing anyone else’s understanding of— emotionally charged, affectively resonant content.
We know all of this. The question is, what are we going to do about it?
For one thing, we may have to accept that, ultimately, some technological interventions might just need to be stopped, as a whole, until we can seriously reckon with their implications and consequences. Or that certain platforms have created so much harm that they need to be broken up and fundamentally restructured. The former, at least, is a position with which the United Nations seems to agree, their Office of the High Commissioner for Human Rights having just called for “a moratorium on the sale and use of artificial intelligence (AI) systems that pose a serious risk to human rights until adequate safeguards are put in place.”

Now, there are plenty of good arguments for moratoria on the development of AI/ML systems, and the “adequate safeguards [to protect human rights]” language is good, but even this effort needs more specificity. Other moratoria and calls for abolition have clearly laid out the actual and potential harms to various communities, with an eye toward specific redress, whereas this call leaves a great deal of undefined space. While we can always benefit from leaving space for more research, when thinking about rights and justice, rather than merely considering probabilities and risk ratios, we need to clearly outline and discuss our values.
The full text of that UN AI Human Rights Report contains not one use of the word “values,” and most uses of “justice” are in the context of the carceral system. There is in fact no explicit naming of racism, sexism, ableism, transphobia, or homophobia here, despite guidance that,
Particular attention should be paid to disproportionate impacts on women and girls, lesbian, gay, bisexual, transgender and queer individuals, persons with disabilities, persons belonging to minorities, older persons, persons in poverty and other persons who are in a vulnerable situation. (See section IV, subsection B, on page 13.)
Instead, it leans on “bias,” a word which many take to be synonymous with “prejudice,” but is in fact much more like “habits of thought,” in that there is no way to be a thinking, perceiving person, without developing some. Two instances in which our biases may become a problem are when we don’t investigate them, or when they’re bigoted—or both.
But bias also carries connotations of individual action and responsibility— something a person holds or does, and something that can be countered with specific, discrete changes. By focusing on “bias” and “discrimination,” the UN Report, like many other discussions of AI and tech regulation, takes what could be a clear discussion of institutional injustice, and veers away at the last second into the realm of individual instances and personal choices. This kind of language leaves room for harms both individual and systemic, potentially letting abusers take advantage of the public’s expectations.
Now, this isn’t to say there’s no merit in either the UN’s report itself or their follow-up call for an AI moratorium; in fact, I think these are extremely important first steps. Instead, what I’m saying is that we need to be sincerely willing to explicitly name the problems we’re trying to face. Whose values are involved? Which rights are at stake? Why exactly are these important? How long have these situations been going on, and what cultural assumptions are they built on?
Because it’s these kinds of questions that let us clarify exactly whose perspectives we need to bring in and what kind of work we need to do.
A large part of the “AI” work being done by current algorithmic and ML systems is bound up with human values and thinking around things like power, punishment, and oppression— a situation numerous people seem to think can be solved by yet more algorithmic ML systems. But seeking techno-fixes to values problems is a losing proposition, because all you will do is shift where and how those same bad values get worked in. That is, if we don’t tackle the foundational questions of what we believe in and hope to achieve, then each new technological fix will just reproduce old harms in new ways, while we tell ourselves we’ve “solved” the problem.
Yes, the bad outcomes of algorithmic systems and tools are about how they replicate, reinforce, and iterate upon racism and sexism and ableism and transphobia and homophobia and fatphobia, but all of these human-created systems also express human values about having power-over other people, whether through medical classifications, capitalist valuation of human labour, or otherwise. If technological projects are undertaken without first examining these oppressive logics at their root, then all they are likely to change is who wields the whip.
A great deal of lip service is given to the idea of a “corporate culture,” but not enough attention and genuine intent are paid to what a “culture” is. A culture is made up of beliefs, practices, values, rules, expectations, and assumptions about the way the world works. A culture is inherently social, and so a human culture has to factor in questions of human social understandings. This means that if you seriously want to change corporate tech’s culture so that we can put a stop to the racist, sexist, transphobic, ableist, and otherwise bigoted and oppressive outcomes of algorithms, then you need to change:
Your training courses;
Your data sets;
Your dev teams;
Your managers;
Your CEOs;
Your funding sources;
Your research questions;
Your aims;
Your Beliefs;
Your Values.
And doing all of that means more than hiring team after team of ethicists to serve on an “Ethics Board” which answers to no one but you, whose recommendations you can ignore as you see fit, and whom you can then fire when they give you news you don’t like. Making these changes means integrating perspectives from academic disciplines like disability studies, philosophy, sociology, and science and technology studies, bringing them in from the ground up, rather than as an afterthought once something goes wrong.
But this isn’t just about the values at play in Silicon Valley or corporate cultures— it’s about all of our values, as we engage with technology. Bringing the right perspectives into the creation and regulation of tech requires our whole culture to be forethoughtful about potential harms, and to recognise and clearly state when reform isn’t possible, and we must instead consider abolishing a tool or system, or breaking up a company. To do that, we must value those people with deep knowledge of how science, technology, ethics, justice, and human values all intersect— people who very often happen to be among the most marginalized and disregarded, when it comes to the truth of their own lived experience.
Women, disabled people, LGBTQIA individuals, PoC, the neurodivergent, and other marginalized and minoritized groups are often most expert at thinking about the harmful and unjust ways a technology will be used, because they or other members of their community have directly experienced those similar harms. At a societal level, we have to be willing both to recognize this lived experience for the expertise it is, placing it in conversation with the expertise of researchers and theorists— some of whom might be the same people— and to put all of those experts in positions of meaningful, high-level oversight and authority.
Specifically, these critical experts must be in C-Suites— let’s call them “Chief Social Implications Officers” or “Chief Values Integration Officers” or something along those lines— directly advising companies’ boards, including giving orders to stop work on certain products, or to split projects off from the main body, if harm will be done by continuing to grow and develop them. But even before that, these experts must also be on governmental regulatory oversight boards, providing expert-level public testimony, and guiding public policy, including recommending things like changes to laws around the increase of shareholder value, even when that increase results in social and ethical damage.
All of this, rather than placing social science experts in positions where they’re forced to merely pass nebulous and easily “lost” recommendations up a corporate or bureaucratic chain. Without this kind of vehement, unequivocal commitment to recognizing, valuing, and empowering the social sciences, humanities, and lived experience as realms of expertise, we’re likely to continue making technologies which reflect only the values we unconsciously and accidentally embed into them, rather than the ones we’d prefer. Thankfully, there is some evidence that these kinds of adjustments are already being made, possibly indicating that we can achieve even more meaningful change.
In 2021, the Biden Administration named Alondra Nelson to the position of Deputy Director for Science and Society in the White House Office of Science and Technology Policy, and nominated Alvaro M. Bedoya to serve as a member of the Federal Trade Commission. Nelson’s extensive body of work is situated in the history of race and medicine, with a focus on how genomics has been constructed along racialized lines. In Bedoya’s role as the founding director of the Georgetown Center on Privacy and Technology, he worked to highlight many of the vast problems of algorithmic surveillance, including co-authoring a massive report, led by Clare Garvie, on the racial prejudices embedded in police use of facial recognition.
These two individuals are not tech industry insiders, but careful, critical scholars working at the intersection of the social sciences, the humanities, science, and technology. Both Nelson, a Black woman, and Bedoya, a Latinx man, are experts both in the technical aspects of their work and in thinking about the sociocultural implications of scientific advancements. Placing them in these high-profile federal positions sends a clear signal about the values we hold and the directions we want to head, when it comes to science and technology policy, in the US. It also provides a template for the kind of regulatory oversight Big Tech needs to be willing to undergo.

Because judging by statements from Mark Zuckerberg and Yann LeCun in response to Haugen’s statements to the US Congress and the press, technology’s designers and CEOs still refuse to acknowledge either culpability for what they’ve built, or the recommendations of the social science and humanities experts who tried to prevent the situation in which we all find ourselves.
Unless and until Silicon Valley, the tech industry, and western society as a whole make these changes about which experts and what kinds of expertise they need to incorporate— or honestly, even admit that these changes need making— these bigoted and oppressive algorithmic and technological outcomes will keep happening. And that is because those bigoted and oppressive social values will continue to comprise the water we all swim in— the invisible architecture of the structures in which we live, and which we all then seek to build for each other.
STS theorist Melvin Kranzberg famously said, “Technology is neither good nor bad; nor is it neutral.” What he meant by this is that, depending on the context and the values brought to bear, the implications for a technological invention can vary wildly. This means that we must always be thinking not just of what might be done with the scientific discoveries we make and technologies we create, but of what we bring with us as we start.
Or, to paraphrase MC Hammer, when we measure, we must not forget to measure the measurer.
Technological and scientific projects are always already social and philosophical ones, no matter how much some people like to pretend otherwise. The question we have to ask is, since values and social implications will be embedded in any technological tool or system a person creates for the use of other people, wouldn’t we rather create, administer, and regulate them with help from experts who know how human lives, human values, and human-made technologies intersect?
At least that way we might have a hope of creating those tools and systems well— and of ceasing to perpetuate and expand on the kinds of prejudicial biases and systemic injustices which put the most marginalized among us at risk and in fear of our literal lives.
This essay was started on the 11th of September, 2021, and has been updated and amended throughout the unfolding of the Frances Haugen whistleblower scandal. Coverage is moving fast, and I felt that this needed to get published sooner rather than later.