
I’m Not Afraid of AI Overlords— I’m Afraid of Whoever’s Training Them To Think That Way

by Damien P. Williams

I want to let you in on a secret: According to Silicon Valley’s AIs, I’m not human.

Well, maybe they think I’m human, but they don’t think I’m me. Or, if they think I’m me and that I’m human, they think I don’t deserve expensive medical care. Or that I pose a higher risk of criminal recidivism. Or that my fidgeting behaviours or culturally-perpetuated shame about my living situation or my race mean I’m more likely to be cheating on a test. Or that I want to see morally repugnant posts that my friends have commented on to call morally repugnant. Or that I shouldn’t be given a home loan or a job interview or the benefits I need to stay alive.

Now, to be clear, “AI” is a misnomer, for several reasons, but we don’t have time, here, to really dig into all the thorny discussion of values and beliefs about what it means to think, or to be a mind— especially because we need to take our time talking about why values and beliefs matter to conversations about “AI,” at all. So instead of “AI,” let’s talk specifically about algorithms, and machine learning.

Machine Learning (ML) is the name for a set of techniques for systematically reinforcing patterns, expectations, and desired outcomes in various computer systems. These techniques allow those systems to make sought-after predictions based on the datasets they’re trained on. ML systems learn the patterns in these datasets and then extrapolate them to model a range of statistical likelihoods of future outcomes.

Algorithms are sets of instructions which, when run, perform functions such as searching, matching, sorting, and feeding the outputs of any of those processes back in on themselves, so that a system can learn from and refine itself. This feedback loop is what allows algorithmic machine learning systems to provide carefully curated search responses or newsfeed arrangements or facial recognition results to consumers like me and you and your friends and family and the police and the military. And while there are many different types of algorithms which can be used for the above purposes, they all remain sets of encoded instructions to perform a function.
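To make that feedback loop concrete, here is a minimal, hypothetical Python sketch (not any real recommender, newsfeed, or search system; all item names and “engagement” numbers are invented for illustration) of how a system that learns from its own curated outputs reinforces whatever patterns its starting data already contains:

```python
# Hypothetical sketch: a toy "curation" loop that retrains on its own outputs.
from collections import Counter

# Invented starting data: past "engagement" counts per item.
# The initial skew stands in for whatever bias exists in a real training set.
engagement = Counter({"item_a": 6, "item_b": 5, "item_c": 1})

def rank(counts):
    """The system's 'prediction': rank items by observed engagement."""
    return [item for item, _ in counts.most_common()]

def simulate_clicks(ranking):
    """Crudely simulate users: higher-ranked items get seen, and clicked, more."""
    return Counter({item: len(ranking) - position
                    for position, item in enumerate(ranking)})

# The feedback loop: the system's outputs become its next round of training data.
for step in range(5):
    ranking = rank(engagement)
    engagement += simulate_clicks(ranking)  # learn from our own curation
    print(step, ranking, dict(engagement))
```

Run it and the items favored at the start only pull further ahead with each pass; nothing in the loop ever asks whether that starting skew was fair, accurate, or just.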

And so, in these systems’ defense, it’s no surprise that they think the way they do: That’s exactly how we’ve told them to think.

[Image of Michael Emerson as Harold Finch, in season 2, episode 1 of the show Person of Interest, “The Contingency.” His face is framed by a box of dashed yellow lines, the words “Admin” to the top right, and “Day 1” in the lower right corner.]


Hello Everyone.

Here is my prerecorded talk for the NC State R.L. Rabb Symposium on Embedding AI in Society.

There are captions in the video already, but I’ve also gone ahead and C/P’d the SRT text here, as well.
[2024 Note: Something in GDrive video hosting has broken the captions, but I’ve contacted them and hopefully they’ll be fixed soon.]

There were also two things I meant to mention, but failed to in the video:

1) The history of facial recognition and carceral surveillance being used against Black and Brown communities ties into work from Lundy Braun, Melissa N. Stein, Seiberth et al., and myself on the medicalization and datafication of Black bodies without their consent, down through history. (Cf. me, here: “Fitting the description: historical and sociotechnical elements of facial recognition and anti-black surveillance.”)

2) Not only does GPT-3 fail to write about humanities-oriented topics with respect, it still can’t write about ISLAM AT ALL without writing in connotations of violence and hatred.

Also I somehow forgot to describe the slide with my email address and this website? What the hell Damien.

Anyway.

I’ve embedded the content of the resource slides in the transcript, but those are by no means all of the resources on this, just the most pertinent.

All of that begins below the cut.

[Image: A Black man with a mohawk and glasses, wearing a black button-up shirt, a red paisley tie, a light grey check suit jacket, and black jeans, stands in front of two tall bookshelves full of books, one thin & red, one of wide untreated pine, and a large monitor with a printer and papers on the stand beneath it.]

[First conference of the year; figured i might as well get gussied up.]


[Cite as Williams, Damien P., Heavenly Bodies: Why It Matters That Cyborgs Have Always Been About Disability, Mental Health, and Marginalization (June 8, 2019). Available at SSRN: https://ssrn.com/abstract=3401342 or http://dx.doi.org/10.2139/ssrn.3401342]

 

INTRODUCTION

The history of biotechnological intervention on the human body has always been tied to conceptual frameworks of disability and mental health, but certain biases and assumptions have forcibly altered and erased the public awareness of that understanding. As humans move into a future of climate catastrophe, space travel, and constantly shifting understandings of our place in the world, we will be increasingly confronted with concerns over who will be used as research subjects, concerns over whose stakeholder positions will be acknowledged and preferenced, and concerns over the kinds of changes that human bodies will necessarily undergo as they adapt to their changing environments, be they terrestrial or interstellar. Who will be tested, and how, so that we can better understand what kinds of bodyminds will be “suitable” for our future modes of existence?[1] How will we test the effects of conditions like pregnancy and hormone replacement therapy (HRT) in space, and what will happen to our bodies and minds after extended exposure to low light, zero gravity, high-radiation environments, or the increasing warmth and wetness of our home planet?

During the June 2018 “Decolonizing Mars” event at the Library of Congress in Washington, DC, several attendees discussed the fact that the bodyminds of disabled folx might be better suited to space life, already being oriented to pushing off of surfaces and orienting themselves to the world in different ways, and that the integration of body and technology wouldn’t be anything new for many people with disabilities. In that context, I submit that cyborgs and space travel are, always have been, and will continue to be about disability and marginalization, but that Western society’s relationship to disabled people has created a situation in which many people do everything they can to conceal that fact from the popular historical narratives about what it means for humans to live and explore. In order to survive and thrive, into the future, humanity will have to carefully and intentionally take this history up, again, and consider the present-day lived experience of those beings—human and otherwise—whose lives are and have been most impacted by the socioethical contexts in which we talk about technology and space.

[Image of Mars as seen from space, via JPL]

This paper explores some history and theories about cyborgs—humans with biotechnological interventions which allow them to regulate their own internal bodily processes—and how those compare to the realities of how we treat and consider currently-living people who are physically enmeshed with technology. I’ll explore several ways in which the above-listed considerations have been alternately overlooked and taken up by various theorists, and some of the many different strategies and formulations for integrating these theories into what will likely become everyday concerns in the future. In fact, by exploring responses from disability studies scholars and artists who have interrogated and problematized the popular vision of cyborgs, the future, and life in space, I will demonstrate that our clearest path toward the future of living with biotechnologies is a reengagement with the everyday lives of disabled and other marginalized persons, today.

CYBORGS AND MENTAL HEALTH[2]

The idea of systematically using technological and biochemical interventions to help a human person regulate their bodily processes to adapt to life in space takes its start from the work of two men: Manfred E. Clynes and Nathan S. Kline. Even at a young age, Clynes seemed always to be engaged with feedback and modulation and the interplay of different systems in his work, and a correspondence with physicist Albert Einstein encouraged him and gave him the social cachet to build a music career.[3] But in 1956, Clynes by chance met Dr. Nathan S. Kline, who was then the Director of the Research Center of Rockland State Hospital, where they both worked.[4] It was through this meeting that the two would become longtime collaborators and would come to coin one of the most resonant technoscientific imaginaries and conceptual tools of the twentieth and twenty-first centuries: The Cyborg.

Oddly, neither Kline’s New York Times (NYT) obituary nor his biographies at The Nathan S. Kline Institute for Psychiatric Research and the International Network For The History Of Neuropsychopharmacology mention his role in coining the term “cyborg,” even though Clynes was drawn to Kline in large part due to the latter’s work on neurochemistry and the groundbreaking development of antidepressants.[5] Kline’s work on antidepressants was specifically about humans’ use of exterior sources of neurochemicals to regulate their own systems, and it was precisely this work which informed the later dream of unconscious chemical adaptation to any environment. Kline’s work on antidepressants stemmed from his recognition that certain neurotransmitters were not being produced by the brains of patients with certain types of depression. Kline sought a way to either induce the production of these chemicals, or to provide them from an outside source, in doses and at intervals which would allow them to more easily integrate into a human body.[6] Kline envisioned this process as one of the intentional regulation of organic bodily systems, through specific chemical interventions.

Clynes and others saw much wider potential value in this work, especially as, in the late 1950s and early 1960s, the United States of America (USA) and the Union of Soviet Socialist Republics (USSR) were in the throes of both the Cold War and the Space Race. Each superpower sought to become the first to put a human being into space, to claim it for their nations and the people therein, and it was in this context that Clynes and Kline wrote the paper “Cyborgs and Space,” and coined the term “Cyborg.”[7] In this paper, Clynes and Kline described a cyborg (a portmanteau of “cybernetic organism”) as a being which would have the means to regulate and alter previously autonomic bodily processes, through the use of chemical alterations, in a cybernetic feedback loop. The paper was largely a theoretical exploration of how we might use chemical biotechnological interventions to regulate autonomic nervous and pulmonary function, and also to make unconscious certain intentional functions and processes.[8] A cyborg would be able to survive the rigors of space travel—such as increased gravitational forces and radiation, long lightless stretches, and bodily degradation—by regulating the chemical processes of their body to adapt to each new situation, as necessary.

But as the Space Race wore on, and more and more humans actually went into space, the focus shifted away from the alterations and adaptations that would be necessary to survive in space, and greater public emphasis was placed on a narrative of triumphant human will and ingenuity. The narrative regarding humans in space became primarily about those who had “the right stuff,” rather than a question of what we would have to do in order to adapt and thrive, and so the image of the cyborg fell away and was altered. And a whole suite of possibilities for how we might have understood—and treated—different kinds of embodiment was altered along with it.

As Alison Kafer discusses in her book Feminist, Queer, Crip, feminist and ecological discourses in the 1970s, ’80s, and ’90s gave rise to widely-read theorists such as Donna Haraway, who use the “cyborg” concept in a rhetorical mode that locks it to particular ideals about bodily integrity and outdated notions like “Severe Handicaps.”[9] (Below, we’ll discuss how, in her Robo Sapiens Japanicus, Jennifer Robertson relates this to the Japanese concept of Gotai, or “Five Body.”) Kafer links this discussion to the history of how Kline and Clynes’ work on neurochemical antidepressants at the Rockland Institute was very likely predicated on testing and treating patients—both via drugs and instrumental interventions—against their will. Additionally, throughout the 1940s, ’50s, and ’60s, Rockland was subject to multiple accusations of patient mistreatment and mismanagement, including physical abuse, malnourishment, and even rape.[10] While it certainly might not have been the case that any of those more horrifying things happened under Kline and Clynes’ direction, they did happen on their watch, and the culture of testing institutionalized patients without their consent was widespread in the United States, well into the 1970s.[11]

[Portrait Image of Nathan S Kline: A monochrome photograph of an older white man with frizzy white hair and a white beard wearing glasses and a white lab coat, white shirt, and cross-hatch patterned tie]

Throughout the long history of eugenics in the United States, ideas about what constitutes the “right kind” of person—be that on the basis of ethnicity, gender, physical or mental ability, or all of the above—led to events in which people institutionalized against their will were forcibly sterilized due to claims of their reduced “fitness” and mental faculties. People with uteri were given forcible hysterectomies and people with testes were chemically or physically castrated, and certain people were simply put to death because they were seen as “unfit” to ever reintegrate with society. All of these things happened starting at very early ages, and, as one might guess, issues of race complicated every facet of them. As Harriet Washington notes in her book Medical Apartheid:

Unfortunately, a black child is more likely than a white one to have his parent completely removed from the informed-consent equation. Black children are far more likely than whites to be institutionalized, in which case the parents are often unable to consent freely or are not consulted at all.[12]

Often, and to this day, children judged as even possibly having a higher likelihood of “mental unfitness” are just aborted outright, as in the case of Iceland’s and the Netherlands’ use of in utero imaging technologies to determine whether or not a child has Down Syndrome.[13]

Kafer’s discussion of the Rockland Institute’s depredations serves to illustrate that even some of the most foundational work and well-respected researchers have been party to monstrous practices in order to make their discoveries. Even as Clynes and Kline’s ideal of the cyborg developed out of a concern for mental health and recognition that the human body was not developed to fit the niche of outer space, they did their work in a context built upon the degradation and predation of disabled and forcibly institutionalized persons. This understanding of marginalized persons as resources to be used or situated embodiments to be emulated has been unfortunately persistent, and it has changed the way we think about what a cyborg ought to be. Rather than being about recognizing that, as researcher of the intersection of philosophy, technology, and disability Dr. Ashley Shew has put it, we would all be disabled in space, and so would all need some form of cybernetic system of interventions to survive, a myth of the elite, perfectible human took root.[14] There are a number of reasons why that is, and many implications for what it’s come to mean.

CYBORGS AND DISABILITY[15]

In the last three decades, as things like the internet captured the public imagination, more theorists explored the image of the human as increasingly entangled with technology. Theorists of biology, sociology, and scientific history, such as Donna Haraway, came to use the notion of the cyborg as a way to describe how human lives may become entangled with nonhuman entities and systems.[16] At the same time, the understanding of a cyborg as some superhuman fusion of human and machine was being taken up and reinforced by popular culture, creating what Shew, above, calls “Technoableism” and what anthropologist of robotics and Japanese culture Jennifer Robertson calls “Cyborg-ableism.”[17] In Techno- or Cyborg-Ableism, technologized bodies are ostensibly lauded as “superior,” but only by still being marked out as other.

Shew’s paper “Up-Standing, Norms, Technology, and Disability” explores how ableism, expectations, and particularities of language serve to marginalize disabled bodies.[18] Shew takes her title from the fact that most technological “solutions” designed for people who don’t use their legs are intended to facilitate their engaging the world as if they did. Many if not most things in human societies are designed to be used within a certain range of height that assumes the user is standing; if your default mode is sitting, then your engagement with the vast majority of the world will be radically different. This is just one example of what is known as the social construction model of disability, which says that it’s not the physiological differences themselves which disable, but rather the ways that spaces, architectures, and simple basic societal assumptions limit how a person is expected to intersect with the world and what kind of bodymind they “should” have.[19] Shew notes that, while we tend to think of cyborgs as some seamless integration of technology and bodies, wheelchair and crutch users consider their chairs as fairly integral extensions and interventions, as a part of themselves. The problem is that the majority of societies assume different things about these different modes. Shew mentions a friend of hers:

She’s an amputee who no longer uses a prosthetic leg, but she uses forearm crutches and a wheelchair. (She has a hemipelvectomy, so prosthetics are a real pain for her to get a good fit and there aren’t a lot of options.) She talks about how people have these different perceptions of devices. When she uses her chair people treat her differently than when she uses her crutches, but the determination of which she uses has more to do with the activities she expects for the day, rather than her physical wellbeing.

But people tend to think she’s recovering from something when she moves from chair to sticks.

She has been an [amputee] for 18 years.

She has/is as recovered as she can get.[20]

Shew is one of many researchers who have discussed that a large number of paraplegics and other wheelchair users do not want exoskeletons, and that those fancy stair-climbing wheelchairs aren’t covered by health insurance, because they’re classed not as assistive devices, but as vehicles. Shew says what most people who don’t have use of their legs want is to have access to the same things that people who do have the use of their legs have. Because ultimately, in around the time it takes for Apple to come out with a new iPhone—about eighteen months—a person who has developed a disability—lost the use of their legs, the use of their sight, the use of their hearing, the use of their arms, whatever—will come to engage and to adapt to that new lived physical reality as normal. Many societies think about disability as a life-altering, world-changing thing—something that lasts forever and nothing will ever be the same for you—but the fact of the matter is that humans are plastic, adaptable, and malleable. We learn how to live around what we are, and we learn it very quickly.

All of this comes back down and around to the idea of biases ingrained into social institutions. Our expectations of what a “normal functioning body” is get imposed by the collective society, as a whole, and placed as restrictions and demands on the bodies of those whom we deem to be “malfunctioning.” As Shew says, “There’s such a pressure to get the prosthesis as if that solves all the problems of maintenance and body and infrastructure. And the pressure is for very expensive tech at that.”

Humans became seen as those creatures which self-analyze and then alter and adapt themselves based upon said self-analysis. Many philosophers of technology have argued that we are always technologically mediated, and that that mediation shapes and is shaped by our physiological and sociocultural experiences, and elsewhere, I’ve explored the questions of identity that come along with Ship-of-Theseus-like questions of bodily integrity that do not quite fit into this work.[21] Suffice it to say that even as promises of becoming “more than” human have flooded the public imagination, they have been met with equally ardent cries of “but if you lose a part of your body, you’re not really you!” Either of these positions serves only to erase and marginalize the real lived experiences of disabled people, for the sake of some assumption about what the human bodymind “should” or even just might be. Even into the twenty-first century, cyborgologists such as Amber Case, a self-described “Cyborg Anthropologist,” have argued that, thanks to augmented reality, smart phone devices, and the generally ubiquitous integration of technology into the daily life of the modern human being, “We Are All Cyborgs Now.”[22] But something crucial gets lost, here, when we obfuscate or elide the real experiences of people with disabilities from the conversation about cyborgs and cybernetics.

In her pieces “Dawn of the Tryborg” and “Common Cyborg,” Jillian Weise specifically homes in on a great deal of the foundation for the modern mythology of cyborg experience, including that which comes out of perspectives like Haraway’s and Case’s.[23] The idea that anyone with a smartphone or with a particular conceptual relationship to the world is automatically a cyborg, Weise says, does violence to the very real lived experience of people with prosthetics or artificial organs or implants that keep them alive. Those latter interventions need maintenance to keep them functional in the face of damage, to prevent life-threatening infection, and to adjust them for day-to-day changes, and while they are not necessarily “sexy,” they are a truer example of what the term’s originators thought it would mean to be a cyborg. “Tryborgs,” on Weise’s view, are those people who want all the glitz and glory of being interconnected with technology, without any of the practical implications. They are the transhumanists who believe that we will all be able to upload our consciousnesses and change our shape, at will, with no muss and no fuss. They want to be the inspirational figures, without having to suffer any losses or do any of the messy upkeep and maintenance, to get there. And they exist in many cultures.

[Port-A-Cath Chemo Port (Images from Cancer.gov {Left} and MySamanthaJane.com {Right})]

Jennifer Robertson’s Robo Sapiens Japanicus consists of a close investigation of Japan’s historical cultural engagement with robots, and her sixth chapter, “Cyborg-Ableism Beyond The Uncanny (Valley),” deals specifically with Japanese notions of disability, mental health, and cyborg-ableism.[24] Though she doesn’t directly consider the roots of the cyborg concept, beyond Haraway and back to Kline and Clynes, Robertson delves into things like the removal of disabled veterans from the streets for the 1964 Olympics, the creation of the first Paralympics in 1948, the fact that one out of six people in Asia and the Pacific is born with some form of disability, and the fact that Japan only ratified the UN’s disability rights convention and drafted its own disability protection legislation after many years and a great deal of foreign pressure.[25] And even with that pressure, Robertson says, it was only with the 2016 enforcement of these laws that all governmental institutions and private-sector businesses were required to remove the social barriers for people with disabilities.[26] Before then, many disabled athletes weren’t allowed to train with able-bodied teammates, and had to raise their own money to purchase prostheses, at which point many of them, if they were successful, got accused of exploiting their disability for monetary gain.[27] In this way, Robertson highlights a cultural indifference to or dismissal of disabled people, even as governments and businesses focused on and developed robotic prostheses.[28]

In a cultural sense, the desires to either fit in or to use technology to become “more” and “better than” are what tend to drive cyborg-ableist concerns. Robertson discusses Tobin Siebers and the concept of able-bodied passing, comparing it to queer folx and “straight passing;” in each case there are transitive and intransitive forms of passing, where one is either actively effacing their difference/otherness, or merely benefitting from outside observers simply not recognizing said difference.[29] To that end, many may choose to make their disability (or their queerness, or both) unignorable by way of stylized prostheses; in fact, much in line with Shew’s assertions above, while people who’ve recently lost a limb may start off wanting a lifelike replacement, they tend to shift to wanting something that works and feels better, rather than just looking a particular way.[30] So are stylized prostheses better understood as empowering or distracting? On the one hand, there is something empowering about the use of a prosthetic to reshape and change the way the outside world can understand you; on the other hand, “prosthetics can divert attention from the disabled limb to its replacement.”[31] But this replacement, in itself, can be a source of discomfort for able-bodied folx.

In the section “What is (and is not) the uncanny valley?” Robertson explores Masahiro Mori’s concept of bukimi no tani, which Robertson translates as “the valley of eerie feeling,” rather than the more familiar “uncanny valley.”[32] Paired with shinwakan no tani or “familiar feeling valley,” Mori describes this as a kind of suddenly and shockingly frustrated expectation, when one is in the process of encountering and reinforcing increasingly familiar things. This concept depends heavily on Mori’s assumptions about what would constitute an “average, healthy, person” and what Robertson labels his “almost callous indifference toward disabled persons.”[33] In Mori’s graphs and descriptions of the Valley, he includes sick and disabled people as on the upward curve of the “eerie,” moving away from corpses, zombies, and prosthetic hands.[34]

While many people have taken the uncanny valley as some kind of gospel law, Robertson contends we should, rather, expect that the constitution or even presence of an uncanny valley would be a highly subjective thing, based on factors such as “physical and cognitive abilities, age, sex, gender, sexuality, ethnicity, education, religion, and cultural background;” and, indeed, Mori himself has said that it was meant only as an “impressionistic” guide.[35] Humans can adjust to and come to accept and embrace the unfamiliar, designers can avoid the uncanny valley, and many people on earth live in situations where injury, illness, and death are not “sudden and unfamiliar” or “eerie,” but rather are unfortunately everyday occurrences. But Mori’s response, and much of what is seen in the Japanese exoskeleton market, is just another example of Gotai, the traditional Japanese understanding that a “whole” or “normal” body is made of five constituent parts in combination: either the head, two arms, and two legs, or the head, neck, torso, arms, and legs.[36] This theory holds that anything that breaks this form breaks the person, a perspective which firmly binds these notions of “completeness” to notions of mental health.

Hirotada Ototake’s book Gotai Fumanzoku or “incomplete/unsatisfactory body” (English title: “No One’s Perfect”) is an autobiography about his tetra-amelia syndrome, which prevented his arms and legs from developing during his gestation; in it, he stresses his “normality” and his desire to be treated equally.[37] But, Robertson notes, the kinds of whole-body exoskeletons championed by Japanese culture are not ways for people like Ototake to regain Gotai, and there’s a difference between prosthetics that replace a limb and those that “enhance” an existing but disabled one.[38] Robertson, here, in a move similar to but not directly referential of Kafer, touches on Haraway’s use of the cyborg as a metaphor for relationality and reflexivity, and offers a critique of Haraway’s seeming to conceive of “disability” as a singular category rather than the multiform variable conditions that can be linked under this label.[39] This, along with transhumanists like Max More and Natasha Vita-More’s ableist notions of what the “perfect” body should be, feeds into narratives that comprise this vision of cyborgs as a somehow “perfected” humanity.

But cyborgs were conceived as a means for humans to live in space, a situation which, again, would demand constantly dangerous processes of keeping close track of minute changes in the bodyminds of the astronauts and their relationship to their environment—processes that are already well-known to, e.g., diabetics or people with peripheral neuropathy. For a person within those lived experiences, always being aware of the state, position, and integrity of their body is always already a life-or-death scenario, in ways that have to be learned and mimicked by people who are otherwise able-bodied. Had we maintained disabled people’s stories as a part of the mythology of the cyborg, from the beginning, Western societies might now have a better relationship with concepts of disability and mental health. This relationship might have easily arisen from the recognition that most if not all disabled people are cyborgs, just as all spacefaring humans must become cyborgs, and that this, as Clynes and Kline understood, is precisely because all spacefaring humans will become disabled by the very act of existing in space. Which means that, in essence, spacefaring humans currently do and will continue to experience the social construction of disability.

[Members of the Gallaudet Eleven chat in the zero-g aircraft that flew out of Naval Air Station in Pensacola, Fla.; Credits: U.S. Navy/Gallaudet University collection]

But since we have not, in fact, reinforced that chain of understanding, contemporary theorists would be well served to explore, in the present, the situated and lived experiences of people with different configurations of bodyminds, and to listen to what they know about themselves. As Shew has noted, those people who have experience with orienting themselves to the world via pushing off of surfaces or using their arms as primary means of propulsion would be better positioned to move in weightless environments and to teach others new strategies to do the same. Because, ultimately, people with disabilities are often already interwoven with their technologies, in ways idealized by technologists, but their lived experience is not recognized and appreciated for what it is. If we take these lived experiences and incorporate the people who embody them, in conjunction with the original intent of the notion of the cyborg, we might have the beginning of a system by which we can rehabilitate the notion of the cyborg—but overcoming the historical trends that have led us here will take a great deal of work.

CYBORGS AND MARGINALIZATION

While it has long been assumed that the future of humanity would have to adapt both its forms and conceptual relations to multiform and multimodal embodiments, through our explorations we have come to understand how the category of the cyborg, which should have made fertile ground for this expanded understanding, has instead become a site of disenfranchisement. As we’ve seen, Kafer’s project in Feminist, Queer, Crip aims to reframe disabled people as cyborgs because of their political practices rather than their bodies, and that enframing of politics, embodiment, and biotechnological intervention has roots and mirrors in other persistent forms of marginalization. Those other roots of racism and misogyny give rise to several questions such as, “Whose bodies will we make subject to or deign to include in tests for space exploration?” More to the point, if we are meant to understand the cyborg in terms of people whose embodiments are already technopolitically mediated, then who can and should we understand as cyborgs, now? Because there is a crucial difference between a group of people who have “disnormalized” themselves, and a group which has been othered by people who don’t know or understand their lived experience.

Again, there are multiple sites of marginalization which can be demonstrated as having a force-multiplying effect on how people with implants, prostheses, or biochemical injection or ingestion regimens are either accepted or disenfranchised by the society in which they live. We can borrow, here, the framework of Kimberlé Williams Crenshaw’s Intersectionality theory, to help make sense of this:

…problems of exclusion cannot be solved simply by including Black women within an already established analytical structure. Because the intersectional experience is greater than the sum of racism and sexism, any analysis that does not take intersectionality into account cannot sufficiently address the particular manner in which Black women are subordinated. (Emphasis added.)[40]

Crenshaw centers Black women, here, but this isn’t to say that only Black women can be intersectional subjects. Rather, she uses Black women as an example of how groups of people that have been cast as only one kind of identity (Black, Woman) would be far better understood as the center of an intersectional process. Might we think of trans folx, who sit at the center of their identities, of biomedical technologies such as hormone replacement therapy (HRT) or binders or packers, of society’s expectations about how their bodies ought to present and behave, and of public technologies such as airport scanners, as cyborgs?[41] If so, they would have vastly different valences of legibility and operation than, say, a diabetic with an insulin pump—though similar ones to a person with an ostomy bag.[42] If we work to understand people in an intersectional way, we can recognize the many vectors for different kinds of oppression, in the world, and understand that even those intersectional subjects with shared component roots will have different particularities of expression and avenues by which we might redress their needs—a recognition that has been sorely and consistently lacking in much of our public discourse, to date.

When we again explore the histories of eugenics and medicalization, we find that even up to this point in the 21st century, there are well-regarded researchers and even textbooks on biomedical ethics which barely touch on these issues, let alone on understanding them through a lens of intersectionality of oppression. For instance, Francis L. Macrina’s Scientific Integrity is in its fourth edition, and yet still seems to lack any substantive contextual discussion of changes made in the history of research ethics standards and practices—such as what actually happened in the Tuskegee syphilis trials. Macrina mentions that the trials took place, and even the nature of the population on which they were conducted, but he does not at any point mention the fact that researchers targeted the study’s population because they were Black, and were therefore conceptualized as resources.[43] While it is, perhaps, unfair to expect Macrina to touch on every nuanced concern of every human subject trial, the assumption that social features are not worthy of discussion serves to reinforce a whole host of other assumptions about things like the objectivity of testing criteria or the clarity of explanations in gaining informed consent. These assumptions, if ever scrutinized at all, would simply not hold up. At the very least it is clear that the Tuskegee patients, like Henrietta Lacks, were not seen or understood as being worthy of clear explanations of what was being done to them. After all, if they understood, they might have said “no.”

Focusing on the history of biomedical experimentation on populations of the forcibly institutionalized or systemically disenfranchised, and African American or female-presenting bodies, in particular, would do wonders to highlight the fact that the long-term effects of the trials were more than just some blanket distrust of medical experimentation, throughout American society. The trials in Tuskegee, Alabama fit into a longstanding pattern of treating Black bodies as resources to be used and as objects to be othered, dehumanized, and intervened upon in whatever ways the dominant society at the time has happened to see fit. And Black bodies are not the only ones. Imagine if textbook writers such as Macrina more often took the time to discuss and contextualize events like how the government and medical providers tricked Black people in Mississippi into receiving vaccinations, or the forced sterilization of Black women, or how the intersection of mental health and institutionalization of women in general led to their being experimented on and sterilized at higher rates, or the long-term ethical and social implications of classifying certain people as “morons.”

More and more, the effects of these kinds of historical objectification are understood as linked to lowered health outcomes, higher rates of chronic illness, and greater morbidity for Black people and women in the United States, as well as to a longstanding history of thinking of the neurodivergent and people with mental disabilities as “less than.” The omission of these discussions from textbooks and other broad public discourse exemplifies a persistent failure to fully contextualize the history and implications of these events. That this failure presents in so many ethical sub-disciplines might help to explain how people have so often managed to convince themselves that testing on marginalized populations without their informed consent can be said to serve the “greater good.” More often than not, “professional ethics training” or any other kind of take on the humanities within business or the so-called hard sciences becomes synonymous with a particular understanding of how not to get sued. The perspectives that get passed along are those of experts in the field in question, be it business, technology, medicine, or what-have-you. Leaving the social science and humanities training of students to people who were only ever trained in this narrow, subdisciplinary fashion is precisely what leads to the continual dismissal of ethical, moral, and sociopolitical considerations, and said dismissal then, in turn, gives rise to Technoableism.


[“Bladerunner” Created by Oliver Wetter / Ars Fantasio]

If various groups want to change bodily forms and embodiments, or even just change the way that we all interact with the planet on which we currently live so that we might survive the next 30 years, then they will have to radically reconsider how our sociopolitical forces and the elements of our lived experience impact the decisions we make about the science we do and tools we create. The historical positioning of the lived experiences of marginalized people in terms of race, gender, disability, and so on has meant that while we are more than happy to test and degrade certain people for their embodiments, we have been less than willing to allow those same people to shape and direct the technoscientific discourse of which they have forcibly been made a part. This distinction, though unarticulated, matters a great deal, and its effects and implications run rampant throughout every facet of our society.

CONCLUSION

If humans do manage a future in which they travel into and live in space, they will need to change the kinds of embodiments and relations they have in order to survive; to do this, they will need to think in vastly different ways about the nature of the technological and scientific projects they undertake. Our societal future imaginings are rife with assumptions about what kind of people are best suited to exist, and these assumptions have been shaped by the historical positioning and treatment of many marginalized groups. Left unexamined, these assumptions and precedents will likely mutate and iterate into each new environment into which humans spread, and affect every engagement of human and nonhuman relationships. But, if we bring a careful, thorough, and intentional consideration to bear on the project of weaving together biomedical, interpersonal, sociopolitical, and technomoral concerns, then we might be better suited to both do right by those we’ve previously oppressed and agilely adapt to the kinds of concerns that will face us, in the future.

As Haraway discusses in her (flawed but possibly still salvageable) “Cyborg Manifesto,” the language of the cybernetic feedback loop does not belong only to humanity as a way to describe its own processes—cybernetic theory and the myth of the cyborg are also frameworks which can be used to describe the cycles and processes of nature, as a whole.[44] Through this understanding, Haraway and others have argued that all of nature is involved in an integrated process of adaptation, augmentation, and implementation which, far from being a simple division between the biological and technological, is, instead, a reflexive, co-productive process. Using the theorists and examples above, I’ve argued for an understanding of biotechnological intervention and integration as the truth of our existence with and within technology. Our bodies and minds are shaped by each other and exist as bodyminds, and those bodyminds dictate and are shaped by the technologies with which they interact.

In order to carefully construct and live within vastly complex systems, it will be crucial to understand the lived experiences of those whose embodiments and bodyminds have placed them at a higher likelihood of being marginalized by those who demand a “right kind” of lived experience. Only by allowing them to create a world out of the lessons of their lived experience will we be better able to intentionally craft what this system and its components will learn and how they will develop. What should characterize our understanding of the cyborg, then, is the reflexive, adaptive relationship between the sociotechnical, sociopolitical, ethical, individual, symbolic, and philosophical valences of our various lived experiences.

The point in saying that “Cyborgs Have Always Been About Disability, Mental Health, and Marginalization” is not to say that the category of the cyborg should be closed off to cyborg anthropologists and philosophers who say “we have always been cyborgs.” Rather, it’s about highlighting the fact that a category which was invented specifically to address the lived experiences of marginalized and oppressed people has been co-opted and transformed into a tool by which to erase the experiences of those very same people. We can, and indeed should, still make use of the Harawayan cyborg, the metaphor for entanglement and enmeshment, both as individuals and communities, but we must do so in a way that honours both the original meaning and the evolution of the concept. We must recognize that disabled people, the neurodivergent, trans folx, Black lives, women, queer individuals, and those who sit at the intersection of any number of those components comprise individual lives and communities of experience which are already attuned to changing and adapting to suddenly hostile environments, and it is these kinds of lives which should stand at the vanguard of how we understand what it means to be a cyborg, moving forward. Because the concept of the cyborg was never about a perfectible ideal; it was always about survivability, about coming into a new relational mode with ourselves, our society, and our world.

[1] “Bodyminds” comes from Margaret Price’s “The Bodymind Problem and the Possibilities of Pain.” in Hypatia 30, 2015.

[2] Parts adapted from Williams, Damien Patrick, “A Brief Historical Overview of Cybernetics and Cyborgs,” written for History of STS, Spring 2018

[3] Clynes, Manfred. (1955-10-02). “Simple analytic method for linear feedback system dynamics”. Transactions of the American Institute of Electrical Engineers.

[4] Madrigal, Alexis C. “The Man Who First Said ‘Cyborg,’ 50 Years Later.” The Atlantic, 2010.

[5] Gruson, Lindsey. “Nathan Kline, Developer of Antidepressants, Dies.” The New York Times. February 14, 1983. https://www.nytimes.com/1983/02/14/obituaries/nathan-kline-developer-of-antidepressants-dies.html; Blackwell, Barry. “Nathan S. Kline.” International Network For The History Of Neuropsychopharmacology, June 13, 2013. http://inhn.org/profiles/nathan-s-kline.html.

[6] Ibid.

[7] Clynes, Manfred E. and Kline, Nathan S. “Cyborgs and Space.” Astronautics (September 1960), 26-27, 74-76. http://web.mit.edu/digitalapollo/Documents/Chapter1/cyborgs.pdf

[8] Clynes and Kline “Cyborgs and Space.”

[9] Kafer, Alison. Feminist, Queer, Crip. Bloomington: Indiana University Press, 2013. pg. 105, 111—115; Haraway, Donna. “The Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century.” Simians, Cyborgs and Women: The Reinvention of Nature. New York: Routledge, 1991.

[10] Kafer, Alison. Feminist, Queer, Crip. pg. 126—128

[11] Cf. Washington, Harriet. Medical Apartheid; Tuskegee University, “About the USPHS Syphilis Study.” https://www.tuskegee.edu/about-us/centers-of-excellence/bioethics-center/about-the-usphs-syphilis-study; Skloot, Rebecca. The Immortal Life of Henrietta Lacks. New York: Crown Publishers, 2010

[12] Washington, Harriet. Medical Apartheid: The Dark History of Medical Experimentation on Black Americans from Colonial Times to the Present. New York: Doubleday, 2006. pg. 293

[13] Verbeek, Peter-Paul. “Obstetric Ultrasound and the Technological Mediation of Morality: A Postphenomenological Analysis.” Human Studies, Vol. 31, No. 1, Postphenomenology Research (Mar., 2008), pp. 11-26; Springer. http://www.jstor.org/stable/40270638

[14] Shew, Ashley. “Technoableism, Cyborg Bodies, and Mars.” Technology and Disability. November 11, 2017. https://techanddisability.com/2017/11/11/technoableism-cyborg-bodies-and-mars/.

[15] Parts adapted from Williams, Damien Patrick, “Technology, Disability, & Human Augmentation,” https://afutureworththinkingabout.com/?p=5162; “On the Ins and Outs of Human Augmentation,” https://afutureworththinkingabout.com/?p=5087.

[16] Haraway, Donna. “The Cyborg Manifesto”

[17] Shew, Ashley. “Technoableism, Cyborg Bodies, and Mars”; Robertson, Jennifer. Robo Sapiens Japanicus: Robots, Gender, Family, and the Japanese Nation. Oakland, CA: University of California Press, 2018.

[18] Shew, Ashley. “Up-Standing Norms.” IEEE Conference on Ethics and Technology, 2016.

[19] See Rosenberger, Robert. “The Philosophy of Hostile Architecture: Spiked Ledges, Bench Armrests, Hydrant Locks, Restroom Stall Design, Etc.” 2018.

[20] Shew, Ashley, in correspondence, 2016.

[21] Cf. Don Ihde, Albert Borgmann, Peter-Paul Verbeek, Evan Selinger, and other post-phenomenologists; Williams, Damien Patrick “Technology, Disability, & Human Augmentation,” “On the Ins and Outs of Human Augmentation,” “Go Upgrade Yourself,” appearing in Futurama and Philosophy, Courtland D. Lewis ed.

[22] Case, Amber. “We are all cyborgs now.” TED Talks. December 2010. http://www.ted.com/talks/amber_case_we_are_all_cyborgs_now.html

[23] Weise, Jillian “The Dawn of the ‘Tryborg.’” November 30, 2016. NEW YORK TIMES. https://www.nytimes.com/2016/11/30/opinion/the-dawn-of-the-tryborg.html?_r=1#story-continues-1; “Common Cyborg.” Sep 24, 2018. GRANTA. https://granta.com/common-cyborg/; Also Cf. Joshua Earle’s “Cyborg Maintenance: A Phenomenology of Upkeep” presented at the 21st Conference of the Society for Philosophy and Technology.

[24] Robertson, Jennifer. Robo Sapiens Japanicus: Robots, Gender, Family, and the Japanese Nation. pg. 146—174

[25] Robertson. pg. 146

[26] Ibid. pg. 148

[27] Robertson. pg. 149

[28] Ibid.

[29] Ibid. pg. 150

[30] Ibid. pg. 152

[31] Robertson. pg. 150

[32] Ibid. pg. 153—154

[33] Ibid. pg. 155—156

[34] Ibid. pg. 157

[35] Ibid.

[36] Robertson. pg. 168—169

[37] Ototake, Hirotada. Gotai Fumanzoku (“Incomplete Body”). Tokyo: Kodansha. 1998; No One’s Perfect. Tokyo: Kodansha. 2003.

[38] Robertson. pg. 170—171

[39] Ibid.

[40] Crenshaw, Kimberlé Williams. “Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics.” University of Chicago Legal Forum, 1989. https://philpapers.org/rec/CREDTI

[41] Hoffman, Anna Lauren. “Data, Technology, and Gender: Thinking About (and From) Trans Lives”

[42] Dowd, Maureen. “Stripped of Dignity.” New York Times. April 19, 2011. https://www.nytimes.com/2011/04/20/opinion/20dowd.html; Crawford, Alison. “Disabled passengers complain of treatment by airport security staff.” CBC News. Sept. 27, 2016.  https://www.cbc.ca/news/politics/catsa-airport-travellers-complaints-security-1.3779312.

[43] Macrina, Francis L. Scientific Integrity: Text and Cases in Responsible Conduct of Research. (Third Edition). Washington, D.C.: ASM Press, 2005. pg. 92.

[44] Haraway, Donna. “The Cyborg Manifesto.” Simians, Cyborgs And Women: The Reinvention Of Nature.

 

I have a review of Ashley Shew’s Animal Constructions and Technological Knowledge, over at the Social Epistemology Research and Reply Collective: “Deleting the Human Clause.”

From the essay:

Animal Constructions and Technological Knowledge is Ashley Shew’s debut monograph and in it she argues that we need to reassess and possibly even drastically change the way in which we think about and classify the categories of technology, tool use, and construction behavior. Drawing from the fields of anthropology, animal studies, and philosophy of technology and engineering, Shew demonstrates that there are several assumptions made by researchers in all of these fields—assumptions about intelligence, intentionality, creativity and the capacity for novel behavior…

Shew says that we consciously and unconsciously appended a “human clause” to all of our definitions of technology, tool use, and intelligence, and this clause’s presumption—that it doesn’t really “count” if humans aren’t the ones doing it—is precisely what has to change.

I am a huge fan of this book and of Shew’s work, in general. Click through to find out a little more about why.

Until Next Time.

So by now you’re likely to have encountered something about the NYT Op-Ed Piece calling for a field of study that focuses on the impact of AI and algorithmic systems, a stance that elides the existence of not only communications and media studies people who focus on this work, but the whole entire disciplines of Philosophy of Technology and STS (rendered variously as “Science and Technology Studies” or “Science Technology and Society,” depending on a number of factors, but if you talk about STS, you’ll get responses from all of the above, about the same topics). While Dr. O’Neil has since tried to reframe this editorial as a call for businesses, governments, and the public to pay more attention to those people and groups, many have observed that such an argument exists nowhere in the article itself. Instead, what we have are lines claiming that academics (seemingly especially those in the humanities) are “asleep at the wheel.”

Instead of “asleep at the wheel” try “painfully awake on the side of the road at 5am in a part of town lyft and uber won’t come to, trying to flag down a taxi driver or hitchhike or any damn thing just please let me make this meeting so they can understand some part of what needs to be done.”* The former ultimately frames the humanities’ and liberal arts’ lack of currency and access as “well why aren’t you all speaking up more.” The latter gets more to the heart of “I’m sorry we don’t fund your departments or engage with your research or damn near ever heed your recommendations that must be so annoying for you oh my gosh.”

But Dr. O’Neil is not the only one to write or say something along these lines—that there is somehow no one out here doing the work of investigating algorithmic bias, or infrastructure/engineering ethics, or any number of other things that people in philosophy of technology and STS are definitely already out here talking about, or that there ought to be someone doing it. So I figured this would be, at the least, a good opportunity to share with you something discussing the relationship between science and technology, STS practitioners’ engagement with the public, and the public’s engagement of technoscience. Part 1 of who knows how many.

[Cover of the journal Techné: Research in Philosophy and Technology]

The relationship between technology and science is one in which each intersects with, flows into, shapes, and affects the other. Not only this, but both science and technology shape and are shaped by the culture in which they arise and take part. Viewed through the lens of the readings we’ll discuss, it becomes clear that many scientists and investigators at one time desired a clear-cut relationship between science and technology in which one flows from the other, with the properties of the subcategory being fully determined by those of the framing category, and in which sociocultural concerns play no part.

Many investigators still want this clarity and certainty, but in the time since sociologists, philosophers, historians, and other investigators from the humanities and so-called soft sciences began looking at the history and contexts of the methods of science and technology, it has become clear that these latter activities do not work in an even and easily rendered way. When we look at the work of Sergio Sismondo, Trevor J. Pinch and Wiebe E. Bijker, Madeleine Akrich, and Langdon Winner, we can see that the social dimensions and intersections of science, culture, technology, and politics are and always have been crucially entwined.

In Winner’s seminal “Do Artifacts Have Politics?” (1980), we can see what counts as a major step forward along the path toward a model which takes seriously the social construction of science and technology, and the way in which we go about embedding our values, beliefs, and politics into the systems we make. On page 127, Winner states,

The things we call “technologies” are ways of building order in our world… Consciously or not, deliberately or inadvertently, societies choose structures for technologies that influence how people are going to work, communicate, travel, consume, [etc.]… In the processes by which structuring decisions are made, different people … possess unequal degrees of power [and] levels of awareness.

By this, Winner means to say that everything we do in the construction of the culture of scientific discovery and technological development is modulated by the sociocultural considerations that get built into them, and those constructed things go on to influence the nature of society, in turn. As a corollary to this, we can see a frame in which the elements within the frame—including science and technology—will influence and modulate each other, in the process of generating and being generated by the sociopolitical frame. Science will be affected by the tools it uses to make its discoveries, and the tools we use will be modulated and refined as our understandings change.

Pinch and Bijker write very clearly about the multidirectional interactions of science, technology, and society in their 1987 piece, [The Social Construction of Technological Systems,] using the history of the bicycle as their object of study. Through their investigation of the messy history of bicycles, “safety bicycles,” inflated rubber tires, bicycle racing, and PR ad copy, Pinch and Bijker show that science and technology aren’t clearly distinguished anymore, if they ever were. They show how scientific studies of safety were less influential on bicycle construction and adoption than the social perception [of] the devices, meaning that politics and public perception play a larger role in what gets studied, created, and adopted than we used to admit.

They go on to highlight a kind of multidirectionality and interpretive flexibility, which they say we achieve by looking at the different social groups that intersect with the technology, and the ways in which they do so (pg. 34). When we do this, we will see that each component group is concerned with different problems and solutions, and that each innovation made to address these concerns alters the landscape of the problem space. How we define the problem dictates the methods we will use and the technology that we create to seek a solution to it.

[Black and white figures comparing the frames of a Whippet Spring Frame bicycle (left) and a Singer Xtraordinary bicycle (right), from “The Social Construction of Facts and Artifacts: Or How the Sociology of Science and the Sociology of Technology Might Benefit Each Other” by Trevor J. Pinch and Wiebe E. Bijker, 1987]


Akrich’s 1997 “The De-Scription of Technical Objects” (published, perhaps unsurprisingly, in a volume coedited by Bijker) engages the moral valences of technological intervention, and the distance between intent in design and “on the ground” usage. In her investigation of how people in Burkina Faso, French Polynesia, and elsewhere make use of technology such as generators and light boxes, we again see a complex interplay between the development of a scientific or technological process and the public adoption of it. On page 221 Akrich notes, “…the conversion of sociotechnical facts into facts pure and simple depends on the ability to turn technical objects into black boxes. In other words, as they become indispensable, objects also have to efface themselves.” That is, in order for the public to accept the scientific or technological interventions, those interventions had to become an invisible part of the framework of the public’s lives. Only when the public no longer had to think about these interventions did they become, paradoxically, “seen” and understood as “good” science and technology.

In Sismondo’s “Science and Technology Studies and an Engaged Program” (2008) he spends some time discussing the social constructivist position that we’ve begun laying out, above—the perspective that everything we do and all the results we obtain from the modality of “the sciences” are constructed in part by that mode. Again, this would mean that “constructed” would describe both the data we organize out of what we observe, and what we initially observe at all. From page 15, “Not only data but phenomena themselves are constructed in laboratories—laboratories are places of work, and what is found in them is not nature but rather the product of much human effort.”

But Sismondo also says that this is only one half of the picture, then going on to discuss the ways in which funding models, public participation, and regulatory concerns can and do alter the development and deployment of science and technology. On page 19 he discusses a model developed in Denmark in the 1980s:

Experts and stakeholders have opportunities to present information to the panel, but the lay group has full control over its report. The consensus conference process has been deemed a success for its ability to democratize technical decision-making without obviously sacrificing clarity and rationality, and it has been extended to other parts of Europe, Japan, and the United States…

This all merely highlights the fact that, if the public is going to be engaged, then the public ought to be as clear and critical as possible in its understanding of the exchanges that give rise to the science and technology on which they are asked to comment.

The non-scientific general public’s understanding of the relationship between science and technology is often characterized much as I described at the beginning of this essay. That is, it is often said that the public sees the relationship as a clear and clean move from scientific discoveries or breakthroughs to a device or other application of those principles. However, this casting does not take into account the variety of things that the public will often call technology, such as the Internet, mobile phone applications, autonomous cars, and more.

While there are scientific principles at play within each of those technologies, it still seems a bit bizarre to cast them merely as “applied science.” They are not all devices or other single physical instantiations of that application, and even those that are singular are the applications of multiple sciences, and also concrete expressions of social functions. Those concretions have particular psychological impacts, and philosophical implications, which need to be understood by both their users and their designers. Every part affects every other part, and each of those parts is necessarily filtered through human perspectives.

The general public needs to understand that every technology humans create will necessarily carry within it the hallmarks of human bias. Regardless of whether there is an objective reality at which science points, the sociocultural and sociopolitical frameworks in which science gets done will influence what gets investigated. Those same sociocultural and sociopolitical frameworks will shape the tools and instruments and systems—the technology—used to do that science. What gets done will then become a part of the scientific and technological landscape to which society and politics will then have to react. In order for the public to understand this, we have to educate about the history of science, the nature of social scientific methods, and the impact of implicit bias.

My own understanding of the relationship between science and technology is as I have outlined: A messy, tangled, multivalent interaction in which each component influences and is influenced by every other component, in near simultaneity. This framework requires a willingness to engage multiple perspectives and disciplines, and to perhaps reframe the normative project of science and technology to one that appreciates and encourages a multiplicity of perspectives, and no single direction of influence between science, technology, and society. Once people understand this—that science and technology generate each other while influencing and being influenced by society—we can do the work of engaging them in a nuanced and mindful way, working together to prevent the most egregious depredations of technoscientific development, or at least to agilely respond to them, as they arise.

But to do this, researchers in the humanities need to be heeded. In order to be heeded, people need to know that we exist, and that we have been doing this work for a very, very long time. The named field of Philosophy of Technology has been around for 70 years, and it in large part foregrounded the concerns taken up and explored by STS. Here are just a few names of people to look at in this extensive history: Martin Heidegger, Bruno Latour, Don Ihde, Ian Hacking, Joe Pitt, and more recently, Ashley Shew, Shannon Vallor, Robin Zebrowski, John P. Sullins, John Flowers, Matt Brown, Shannon Conley, Lee Vinsel, Jacques Ellul, Andrew Feenberg, Batya Friedman, Geoffrey C. Bowker and Susan Leigh Star, Rob Kling, Phil Agre, Lucy Suchman, Joanna Bryson, David Gunkel, and so many others. Langdon Winner published “Do Artifacts Have Politics” 37 years ago. This episode of the You Are Not So Smart podcast has Shannon Vallor, Alistair Croll, and me talking about the public impact of the aforementioned.

What I’m saying is that many of us are trying to do the work, out here. Instead of pretending we don’t exist, try using large platforms (like the NYT opinion page, and well-read blogs) to highlight the very real work being attempted. I know for a fact the NYT has received article submissions about philosophy of tech and STS. Engage them. Discuss these topics in public, and know that there are many voices trying to grapple with and understand this world, and we have been, for a really damn long time.

So you see that we are still talking about learning and thinking in public. About how we go about getting people interested and engaged in the work of the technology that affects their lives. But there is a lot at the base of all this about what people think of as “science” or “expertise” and where they think that comes from, and what they think of those who engage in or have it. If we’re going to do this work, we have to be able to have conversations with people who not only don’t value what we do, but who think what we value is wrongheaded, or even evil. There is a lot going on in the world, right now, in regards to science and knowability. For instance, late last year there was a revelation about the widespread use of Dowsing by UK water firms (though if you ask anybody in the US, you’ll find it’s still in use, here, too).

And then this guy was trying to use systems of fluid dynamics and aeronautics to launch himself in a rocket to prove that the earth is flat and that science isn’t real. Yeah. And while there’s a much deeper conversation to be had here about whether the social construction of the category of “science” can be understood as distinct from a set of methodologies and formulae, I really don’t think this guy is talking about having that conversation.

So let’s also think about the nature of how laboratory science is constructed, and what it can do for us.

In his 1983 “Give Me a Laboratory and I Will Move The World,” Bruno Latour makes the claim that labs have their own agency. What Latour is asserting, here, is that the forces which coalesce within the framework of a lab become active agents in their own right. They are not merely subject to the social and political forces that go into their creation, but they are now active participants in the framing and reframing of those forces. He believes that the nature of inscription—the combined processes of condensing, translating, and transmitting methods, findings, and pieces of various knowledges—is a large part of what gives the laboratory this power, and he highlights this when he says:

The strength gained in the laboratory is not mysterious. A few people much weaker than epidemics can become stronger if they change the scale of the two actors—making the microbes big, and the epizootic small—and others dominate the events through the inscription devices that make each of the steps readable. The change of scale entails an acceleration in the number of inscriptions you can get. …[In] a year Pasteur could multiply anthrax outbreaks. No wonder that he became stronger than veterinarians. For every statistic they had, he could mobilize ten of them. (pg. 163—164)

This process of inscription is crucial for Latour, not just for the sake of what the laboratory can do of its own volition, but also because it is the mechanism by which scientists may come to understand and translate the values and concerns of another, which is, for him, the utmost value of science. In rendering the smallest things such as microbes and diseases legible on a large scale, and making large-scale patterns individually understandable and reproducible, the presupposed distinctions of “macro” and “micro” are shown to be illusory. Latour believes that it is only through laboratory engagement that we can come to fully understand the complexities of these relationships (pg. 149).

When Latour begins laying out his project, he says sociological methods can offer science the tools to more clearly translate human concerns into a format with which science can grapple. “He who is able to translate others’ interests into his own language carries the day” (pg. 144). However, in the process of detailing what it is that Pasteurian laboratory scientists do in engaging the various stakeholders in farms, agriculture, and veterinary medicine, it seems that he has only described half of the project. Rather than merely translating the interests of others into our own language, evidence suggests that we must also translate our interests back into the language of our interlocutor.

So perhaps we can recast Latour’s statement as, “whomsoever is able to translate others’ interests into their own language and is equally able to translate their own interests into the language of another, carries the day.” Thus we see that the work done in the lab should allow scientists and technicians to increase the public’s understanding both of what it is that technoscience actually does and why it does it, by presenting material that can speak to many sets of values.

Karin Knorr-Cetina’s assertion in her 1995 article “Laboratory Studies: The Cultural Approach to the Study of Science” is that the laboratory is an “enhanced” environment. In many ways this follows directly from Latour’s conceptualization of labs. Knorr-Cetina says that the constructed nature of the lab ‘“improves upon” the natural order,’ because said natural order is, in itself, malleable, and capable of being understood and rendered in a multiplicity of ways (pg. 9). If laboratories are never engaging the objects they study “as they occur in nature,” this means that labs are always in the process of shaping what they study, in order to better study it (ibid). This framing of the engagement of laboratory science is clarified when she says:

Detailed description [such as that done in laboratories] deconstructs—not out of an interest in critique but because it cannot but observe the intricate labor that goes into the creation of a solid entity, the countless nonsolid ingredients from which it derives, the confusion and negotiation that often lie at its origin, and the continued necessity of stabilizing and congealing. Constructionist studies have revealed the ordinary working of things that are black-boxed as “objective” facts and “given” entities, and they have uncovered the mundane processes behind systems that appear monolithic, awe inspiring, inevitable. (pg. 12)

Thus, the laboratory is one place in which the irregularities and messiness of the “natural world” are ordered in such a way as to be able to be studied at all. However, Knorr-Cetina clarifies that “nothing epistemically special” is happening in a lab (pg. 16). That is, while a laboratory helps us to better recognize nonhuman agents (“actants”) and forces at play in the creation of science, this is merely a fact of construction; everything that a scientist does in a lab is open to scrutiny and capable of being understood. If this is the case, then the “enhancement” gained via the conditions of the laboratory environment is merely a change in degree, rather than a difference in kind, as Latour seems to assert.

[Stock photo image of hundreds of scallops and two scallop fishers on the deck of a boat in the St Brieuc Bay.]

In addition to the above explorations of what the field of laboratory studies has to offer, we can also look at the works of Michel Callon and Sharon Traweek. Though primarily concerned with describing the network of actors and their concerns in the St Brieuc Bay scallop-fishing and -farming industries, Callon’s investigation can be seen as an example of Latour’s principle of bringing the laboratory out into the world, both in terms of the subjects of Callon’s investigation and the methods of those subjects. While Callon himself might disagree with this characterization, we can trace the process of selection and enframing of subjects and the investigation of their translation procedures, which we can see on page 20, when he says,

We know that the ingredients of controversies are a mixture of considerations concerning both Society and Nature. For this reason we require the observer to use a single repertoire when they are described. The vocabulary chosen for these descriptions and explanations can be left to the discretion of the observer. He cannot simply repeat the analysis suggested by the actors he is studying. (Callon, 1984)

In this way, we can better understand how laboratory techniques have become a component even of the study and description of laboratories.

When we look at a work like Sharon Traweek’s Beamtimes and Lifetimes, we can see that she finds value in bringing ethnographic methodologies into laboratory studies, and perhaps even laboratory settings. She discusses the history of the laboratory’s influence, arcing back to WWI and WWII, when scientists were tasked with coming up with more and better weapons, their successes being used to push an ever-escalating arms race. As this process continued, the characteristics of what made a “good lab scientist” were defined and then continually reinforced, as being “someone who did science like those people over there.” In the building of the laboratory community, certain traits and behaviours become seen as ideal, and those who do not match those traits and expectations are regarded as necessarily doing inferior work. She says,

The field worker’s goal, then, is to find out what the community takes to be knowledge, sensible action, and morality, as well as how its members account for unpredictable information, disturbing actions, and troubling motives. In my fieldwork I wanted to discover the physicists’ “common sense” world view, what everyone in the community knows, and what every newcomer needs to learn in order to act in a sensible way, in order to be taken seriously. (pg. 8)

And this is also the danger of focusing too closely on the laboratory: the potential for myopia, for thinking that the best or perhaps even only way to study the work of scientists is to render that work through the lens of the lab.

While the lab is a fantastic tool and studies of it provide great insight, we must remember that we can learn a great deal about science and technology via contexts other than that of the lab. While Latour argues that laboratory science actually destabilizes the inside-the-lab/outside-the-lab distinction by showing that the tools and methods of the lab can be brought anywhere out into the world, it can be said that the distinction is reinstantiated by our focusing on laboratories as the sole path to understanding scientists. Much the same can be said for the insistence that systems engineers are the sole best examples of how to engage technological development. Thinking that labs are the only resource we have means that we will miss the behavior of researchers at conferences, retreats, in journal articles, and other places where the norms of the scientific community are inscribed and reinforced. It might not be the case that scientists understand themselves as creating community rules, in these fora, but this does not necessarily mean that they are not doing so.

The kinds of understandings a group has about themselves will not always align with the observations and descriptions that might be gleaned from another’s investigation of that group, but this doesn’t mean that one of those has to be “right” or “true” while the other is “wrong” and “false.” The interest in studying a discipline should come not from that group’s “power” to “correctly” describe the world, but from wanting to understand more about what it is that makes the group under investigation what it is. Rather than seeking a single correct perspective, we should instead embrace the idea that a multiplicity of perspectives might all be useful and beneficial, and then ask “To What End?”

We’re talking about Values, here. We’re talking about the question of why whatever it is that matters to you, matters to you. And how you can understand that other people have different values from each other, and we can all learn to talk about what we care about in a way that helps us understand each other. That’s not neutral, though. Even that can be turned against us, when it’s done in bad faith. And we have to understand why someone would want to do that, too.

[Direct link to Mp3]

[09/22/17: This post has been updated with a transcript, courtesy of Open Transcripts]

Back on March 13th, 2017, I gave an invited guest lecture, titled:

TECHNOLOGY, DISABILITY, AND HUMAN AUGMENTATION

‘Please join Dr. Ariel Eisenberg’s seminar, “American Identities: Disability,” and [the] Interdisciplinary Studies Department for an hour-long conversation with Damien Williams on disability and the normalization of technology usage, “means-well” technological innovation, “inspiration porn,” and other topics related to disability and technology.’

It was kind of an extemporaneous riff on my piece “On the Ins and Outs of Human Augmentation,” and it gave me the opportunity to namedrop Ashley Shew, Natalie Kane, and Rose Eveleth.

The outline looked a little like this:

  • Foucault and Normalization
    • Tech and sociological pressures to adapt to the new
      • Starts with Medical tech but applies Everywhere; Facebook, Phones, Etc.
  • Zoltan Istvan: In the Transhumanist Age, We Should Be Repairing Disabilities Not Sidewalks
  • All Lead To: Ashley Shew’s “Up-Standing Norms”
    • Listening to the Needs and Desires of people with disabilities.
      • See the story Shew tells about her engineering student, as related in the AFWTA Essay
    • Inspiration Porn: What is cast by others as “Triumphing” over “Adversity” is simply adapting to new realities.
      • Placing the burden on the disabled to be an “inspiration” is dehumanizing;
      • means those who struggle “have no excuse;”
      • creates conditions for a “who’s got it worse” competition
  • John Locke‘s Empiricism: Primary and Secondary Qualities
    • Primary qualities of biology and physiology lead to secondary qualities of society and culture
      • Gives rise to Racism and Ableism, when it later combines with misapplied Darwinism to be about the “Right Kinds” of bodies and minds.
        • Leads to Eugenics: Forced sterilization, medical murder, operating and experimenting on people without their knowledge or consent.
          • “Fixing” people to make them “normal, again”
  • Natalie Kane‘s “Means Well Technology”
    • Design that doesn’t take into account the way that people will actually live with and use new tech.
      • The way tech normalizes is never precisely the way designers want it to
        • William Gibson’s quote “The street finds its own uses for things.”
  • Against Locke: Embrace Phenomenological Ethics and Epistemology (Feminist Epistemology and Ethics)
    • Lived Experience and embodiment as crucial
    • The interplay of Self and Society
  • Ship of Theseus: Identity, mind, extensions, and augmentations change how we think of ourselves and how society thinks of us
    • See the story Shew tells about her friend with the hemipelvectomy, as related in the aforementioned AFWTA Essay

The whole thing went really well (though, thinking back, I’m not super pleased with my deployment of Dennett). Including Q&A, we got about an hour and forty minutes of audio, available at the embed and link above.

Also, I’m apparently the guy who starts off every talk with some variation on “This is a really convoluted interplay of ideas, but bear with me; it all comes together.”

The audio transcript is below the cut. Enjoy.

Continue Reading

There’s increasing reportage about IBM using Watson to correlate medical data. We’ve talked before about the potential hazards of this:

Do you know someone actually had the temerity to ask [something like] “What Does Google Having Access to Medical Records Mean For Patient Privacy?” [Here] Like…what the fuck do you think it means? Nothing good, you idiot!

Disclosures and knowledges can still make certain populations intensely vulnerable both to predation and to social pressures and judgements, and until that isn’t the case, anymore, we need to be very careful about the work we do to try to bring those patients’ records into a sphere where they’ll be accessed and scrutinized by people who don’t have to take an oath to hold that information in confidence.

We are more and more often at the intersection of our biological humanity and our technological augmentation, and the integration of our mediated outboard memories only further complicates the matter. As it stands, we don’t quite yet know how to deal with the question posed by Motherboard, some time ago (“Is Harm to a Prosthetic Limb Property Damage or Personal Injury?”), but as we build on implantable technologies, advanced prostheses, and offloaded memories and augmented capacities, we’re going to have to start blurring the line between our bodies, our minds, and our concept of our selves. That is, we’ll have to start intentionally blurring it, because the vast majority of us already blur it, without consciously realising that we do. At least, those without prostheses don’t realise it.

Dr Ashley Shew, out of Virginia Tech, works at the intersection of philosophy, tech, and disability. I first encountered her work at the 2016 IEEE Ethics Conference in Vancouver, where she presented her paper “Up-Standing, Norms, Technology, and Disability,” a discussion of how ableism, expectations, and language use marginalise disabled bodies. Dr Shew is, herself, disabled, having had her left leg removed due to cancer, and she gave her talk not on the raised dais, but at floor-level, directly in front of the projector. Her reason? “I don’t walk up stairs without hand rails, or stand on raised platforms without guards.”

Dr Shew notes that wheelchair users consider their chairs to be fairly integral extensions and interventions: a part of them. This is reflected in the kinds of lawsuits engaged when, for instance, airlines damage their chairs, which happens a great deal. While we tend to think of the advents of technology as allowing for the seamless integration of our technology and bodies, the fact is that well-designed mechanical prostheses, today, are capable of becoming integrated into the personal morphic sphere of a person, the longer they use them. And this extended sensing can be transferred from one device to another. Shew mentions a friend of hers:

She’s an amputee who no longer uses a prosthetic leg, but she uses forearm crutches and a wheelchair. (She has a hemipelvectomy, so prosthetics are a real pain for her to get a good fit and there aren’t a lot of options.) She talks about how people have these different perceptions of devices. When she uses her chair people treat her differently than when she uses her crutches, but the determination of which she uses has more to do with the activities she expects for the day, rather than her physical wellbeing.

But people tend to think she’s recovering from something when she moves from chair to sticks.

She has been an [amputee] for 18 years.

She has/is as recovered as she can get.

In her talk at IEEE, Shew discussed the fact that a large number of paraplegics and other wheelchair users do not want exoskeletons, and those fancy stair-climbing wheelchairs aren’t covered by health insurance. They’re classed as vehicles. She said that when she brought this up in the class she taught, one of the engineers left the room looking visibly distressed. He came back later and said that he’d gone home to talk to his brother with spina bifida, who was the whole reason he was working on exoskeletons. He asked his brother, “Do you even want this?” And the brother said, basically, “It’s cool that you’re into it but… No.” So, Shew asks, why are these technologies being developed? Transhumanists and the military. Framing this discussion as “helping our vets” makes it a noble cause, without drawing too much attention to the fact that they’ll be using them on the battlefield as well.

All of this comes back down and around to the idea of biases ingrained into social institutions. Our expectations of what a “normal functioning body” is are imposed by the collective society, as a whole, and placed as restrictions and demands on the bodies of those whom we deem to be “malfunctioning.” As Shew says, “There’s such a pressure to get the prosthesis as if that solves all the problems of maintenance and body and infrastructure. And the pressure is for very expensive tech at that.”

So we are going to have to accept—in a rare instance where Robert Nozick is proven right about how property and personhood relate—that the answer is “You are damaging both property and person, because this person’s property is their person.” But this is true for reasons Nozick probably would not think to consider, and those same reasons put us on weirdly tricky grounds. There’s a lot, in Nozick, of the notion of property as equivalent to life and liberty, in the pursuance of rights, but those ideas don’t play out, here, in the same way as they do in conservative and libertarian ideologies.  Where those views would say that the pursuit of property is intimately tied to our worth as persons, in the realm of prosthetics our property is literally simultaneously our bodies, and if we don’t make that distinction, then, as Kirsten notes, we can fall into “money is speech” territory, very quickly, and we do not want that.

Because our goal is to be looking at quality of life, here—talking about the thing that allows a person to feel however they define “comfortable,” in the world. That is, the thing(s) that lets a person intersect with the world in the ways that they desire. And so, in damaging the property, you damage the person. This is all the more true if that person is entirely made of what we are used to thinking of as property.

And all of this is before we think about the fact that implantable and bone-bonded tech will need maintenance. It will wear down and glitch out, and you will need to be able to access it, when it does. This means that the range of ability for those with implantables? Sometimes it’s less than that of folks with more “traditional” prostheses. But because they’re inside, or more easily made to look like the “original” limb, we observers are so much more likely to forget that there are crucial differences at play in the ownership and operation of these bodies.

There’s long been a fear that, the closer we get to being able to easily and cheaply modify humans, the more likely we’ll be to think of humanity as “perfectable.” That the myth of progress—some idealized endpoint—will be so seductive as to become completely irresistible. We’ve seen this before, in the eugenics movement, and it’s reared its head in the transhumanist and H+ communities of the 20th and 21st centuries, as well. But what if, instead of demanding that there be some kind of universally-applicable “baseline,” we focused intently on recognizing the fact that, just as different humans have different biochemical and metabolic needs, processes, capabilities, preferences, and desires, different beings and entities which might be considered persons may be drastically different from us, but no less persons?

Because human beings are different. Is there a general framework, a loosely-defined line around which we draw a conglomeration of traits, within which lives all that we mark out as “human”—a kind of species-wide butter zone? Of course. That’s what makes us a fucking species. But the kind of essentialist language and thinking towards which we tend, after that, is reductionist and dangerous. Our language choices matter, because connotative weight alters what people think and in what context, and, again, we have a habit of moving rapidly from talking about a generalized framework of humanness to talking about “The Right Kind Of Bodies,” and the “Right Kind Of Lifestyle.”

And so, again, again, again, we must address problems such as normalized expectations of “health” and “Ability.” Trying to give everyone access to what they might consider their “best” selves is a brilliant goal, sure, whatever, but by even forwarding the project, we run the risk of colouring an expectation of both what that “best” is and what we think it “Ought To” look like.

Some people need more protein, some people need less choline, some people need higher levels of phosphates, some people have echolocation, some can live to be 125, and every human population has different intestinal bacterial colonies from every other. When we combine all these variables, we will not necessarily find that each and every human being has the same molecular and atomic distribution in the same PPM/B ranges, nor will we necessarily find that our mixing and matching will ensure that everyone gets to be the best combination of everything. It would be fantastic if we could, but everything we’ve ever learned about our species says that “healthy human” is a constantly shifting target, and not a static one.

We are still at a place where the general public reacts with visceral aversion to technological advances and especially anything like an immediated technologically-augmented humanity, and this is at least in part because we still skirt the line of eugenics language, to this day. Because we talk about naturally occurring bio-physiological Facts as though they were in any way indicative of value, without our input. Because we’re still terrible at ethics, continually screwing up at 100mph, then looking back and going, “Oh. Should’ve factored that in. Oops.”

But let’s be clear, here: I am not a doctor. I’m not a physiologist or a molecular biologist. I could be wrong about how all of these things come together in the human body, and maybe there will be something more than a baseline, some set of all species-wide factors which, in the right configuration, say “Healthy Human.” But what I am is someone with a fairly detailed understanding of how language and perception affect people’s acceptance of possibilities, their reaction to new (or hauntingly-familiar-but-repackaged) ideas, and their long-term societal expectations and valuations of normalcy.

And so I’m not saying that we shouldn’t augment humanity, via either mediated or immediated means. I’m not saying that IBM’s Watson and Google’s DeepMind shouldn’t be tasked with searching patient records and correlating data. But I’m also not saying that either of these is an unequivocal good. I’m saying that it’s actually shocking how much correlative capability is indicated by the achievements of both IBM and Google. I’m saying that we need to change the way we talk about and think about what it is we’re doing. We need to ask ourselves questions about informed patient consent, and the notions of opting into the use of data; about the assumptions we’re making in regards to the nature of what makes us humans, and the dangers of rampant, unconscious scientistic speciesism. Then, we can start to ask new questions about how to use these new tools we’ve developed.

With this new perspective, we can begin to imagine what would happen if we took Watson and DeepMind’s ability to put data into context—to turn around, in seconds, millions upon millions (billions? trillions?) of permutations and combinations. And then we can ask them to work on tailoring genome-specific health solutions and individualized dietary plans. What if we asked these systems to catalogue literally everything we currently know about every kind of disease presentation, in every ethnic and regional population, and the differentials for various types of people with different histories, risk factors, and current statuses? We already have nanite delivery systems, so what if we used Google and IBM’s increasingly ridiculous complexity to figure out how to have those nanobots deliver a payload of perfectly-crafted medical remedies?

But this is fraught territory. If we step wrong, here, we are not simply going to miss an opportunity to develop new cures and devise interesting gadgets. No; to go astray, on this path, is to begin to see categories of people that “shouldn’t” be “allowed” to reproduce, or “to suffer.” A misapprehension of what we’re about, and why, is far fewer steps away from forced sterilization and medical murder than any of us would like to countenance. And so we need to move very carefully, indeed, always being aware of our biases, and remembering to ask those affected by our decisions what they need and what it’s like to be them. And remembering, when they provide us with their input, to believe them.