bioethics


Previously, I told you about The Human Futures and Intelligent Machines Summit at Virginia Tech, and now that it’s over, I wanted to go ahead and put the full rundown of the events all in one place.

The goals for this summit were to start looking at the ways in which issues of algorithms, intelligent machine systems, human biotech, religion, surveillance, and more will intersect and affect us in the social, academic, and political spheres. The big challenge in all of this was seen as how we get better at dealing with these questions in the university and public policy sectors, in America, rather than seemingly worse, as we have so far.

Here’s the schedule. Full notes, below the cut.

Friday, June 8, 2018

  • Josh Brown on “the distinction between passive and active AI.”
  • Daylan Dufelmeier on “the potential ramifications of using advanced computing in the criminal justice arena…”
  • Mario Khreiche on the effects of automation, Amazon’s Mechanical Turk, and the Microlabor market.
  • Aaron Nicholson on how technological systems are used to support human social outcomes, specifically through the lens of policing in the city of Atlanta.
  • Ralph Hall on “the challenges society will face if current employment and income trends persist into the future.”
  • Jacob Thebault-Spieker on “how pro-urban and pro-wealth biases manifest in online systems, and how this likely influences the ‘education’ of AI systems.”
  • Hani Awni on the sociopolitical implications of excluding ‘relational’ knowledge from AI systems.

Saturday, June 9, 2018

  • Chelsea Frazier on rethinking our understandings of race, biocentrism, and intelligence in relation to planetary sustainability and in the face of increasingly rapid technological advancement.
  • Ras Michael Brown on using the religious technologies of West Africa and the West African Diaspora to reframe how we think about “hybrid humanity.”
  • Damien Williams on how best to use interdisciplinary frameworks in the creation of machine intelligence and human biotechnological interventions.
  • Sara Mattingly-Jordan on the implications of the current global landscape in AI ethics regulation.
  • Kent Myers on several ways in which the intelligence community is engaging with human aspects of AI, from surveillance to sentiment analysis.
  • Emma Stamm on the datafication of the self, and what about us might be uncomputable.
  • Joshua Earle on “Morphological Freedom.”


This weekend, Virginia Tech’s Center for the Humanities is hosting The Human Futures and Intelligent Machines Summit, and there is a link for the video cast of the events. You’ll need to download and install Zoom, but other than that, it should be pretty straightforward.

You’ll find the full schedule, below the cut.


So, many of you may remember that back in June of 2016, I was invited to the Brocher Institute in Hermance, Switzerland, on the shores of Lake Geneva, to take part in the Frankenstein’s Shadow Symposium sponsored by Arizona State University’s Center for Science and the Imagination as part of their Frankenstein Bicentennial project.

While there, I and a great many other thinkers in art, literature, history, biomedical ethics, philosophy, and STS got together to discuss the history and impact of Mary Shelley’s Frankenstein. Since that experience, the ASU team has compiled and released a book project: a version of Mary Shelley’s seminal work, filled with annotations and essays, and billed as being “For Scientists, Engineers, and Creators of All Kinds.”

[Image of the cover of the 2017 edited, annotated edition of Mary Shelley’s Frankenstein, “Annotated for Scientists, Engineers, and Creators of All Kinds.”]

Well, a few months ago, I was approached by the organizers and asked to contribute to a larger online interactive version of the book—to provide an annotation on some aspect of the book I deemed crucial and important to understand. As of now, there is a fully functional live beta version of the website, and you can see my contribution and the contributions of many others, there.

From the About Page:

Frankenbook is a collective reading and collaborative annotation experience of the original 1818 text of Frankenstein; or, The Modern Prometheus, by Mary Wollstonecraft Shelley. The project launched in January 2018, as part of Arizona State University’s celebration of the novel’s 200th anniversary. Even two centuries later, Shelley’s modern myth continues to shape the way people imagine science, technology, and their moral consequences. Frankenbook gives readers the opportunity to trace the scientific, technological, political, and ethical dimensions of the novel, and to learn more about its historical context and enduring legacy.

To learn more about Arizona State University’s celebration of Frankenstein’s bicentennial, visit frankenstein.asu.edu.

You’ll need to have JavaScript enabled and ad-blockers disabled to see the annotations, but it works quite well. Moving forward, there will be even more features added, including a series of videos. Frankenbook.org will be the place to watch for all updates and changes.

I am deeply honoured to have been asked to be a part of this amazing project, over the past two years, and I am so very happy that I get to share it with all of you, now. I really hope you enjoy it.

Until Next Time.

There’s increasing reportage about IBM using Watson to correlate medical data. We’ve talked before about the potential hazards of this:

Do you know someone actually had the temerity to ask [something like] “What Does Google Having Access to Medical Records Mean For Patient Privacy?” Like…what the fuck do you think it means? Nothing good, you idiot!

Disclosures and knowledges can still make certain populations intensely vulnerable to both predation and social pressures and judgements, and until that isn’t the case, anymore, we need to be very careful about the work we do to try to bring those patients’ records into a sphere where they’ll be accessed and scrutinized by people who don’t have to take an oath to hold that information in confidence.

We are more and more often at the intersection of our biological humanity and our technological augmentation, and the integration of our mediated outboard memories only further complicates the matter. As it stands, we don’t quite yet know how to deal with the question posed by Motherboard, some time ago (“Is Harm to a Prosthetic Limb Property Damage or Personal Injury?”), but as we build on implantable technologies, advanced prostheses, and offloaded memories and augmented capacities, we’re going to have to start blurring the line between our bodies, our minds, and our concept of our selves. That is, we’ll have to start intentionally blurring it, because the vast majority of us already blur it, without consciously realising that we do. At least, those without prostheses don’t realise it.

Dr Ashley Shew, out of Virginia Tech, works at the intersection of philosophy, tech, and disability. I first encountered her work at the 2016 IEEE Ethics Conference in Vancouver, where she presented her paper “Up-Standing, Norms, Technology, and Disability,” a discussion of how ableism, expectations, and language use marginalise disabled bodies. Dr Shew is, herself, disabled, having had her left leg removed due to cancer, and she gave her talk not on the raised dais, but at floor-level, directly in front of the projector. Her reason? “I don’t walk up stairs without hand rails, or stand on raised platforms without guards.”

Dr Shew notes that wheelchair users consider their chairs to be fairly integral extensions of and interventions in themselves—a part of them—and that this understanding shapes the kinds of lawsuits engaged when, for instance, airlines damage their chairs, which happens a great deal. While we tend to think of the advent of new technology as allowing for the seamless integration of our technology and our bodies, the fact is that a well-designed mechanical prosthesis, today, is capable of becoming integrated into the personal morphic sphere of a person, the longer they use it. And this extended sensing can be transferred from one device to another. Shew mentions a friend of hers:

She’s an amputee who no longer uses a prosthetic leg, but she uses forearm crutches and a wheelchair. (She has a hemipelvectomy, so prosthetics are a real pain for her to get a good fit and there aren’t a lot of options.) She talks about how people have these different perceptions of devices. When she uses her chair people treat her differently than when she uses her crutches, but the determination of which she uses has more to do with the activities she expects for the day, rather than her physical wellbeing.

But people tend to think she’s recovering from something when she moves from chair to sticks.

She has been an [amputee] for 18 years.

She has/is as recovered as she can get.

In her talk at IEEE, Shew discussed the fact that a large number of paraplegics and other wheelchair users do not want exoskeletons, and that those fancy stair-climbing wheelchairs aren’t covered by health insurance. They’re classed as vehicles. She said that when she brought this up in the class she taught, one of the engineers left the room looking visibly distressed. He came back later and said that he’d gone home to talk to his brother with spina bifida, who was the whole reason he was working on exoskeletons. He asked his brother, “Do you even want this?” And the brother said, basically, “It’s cool that you’re into it but… No.” So, Shew asks, why are these technologies being developed? For transhumanists and the military. Framing this discussion as “helping our vets” makes it a noble cause, without drawing too much attention to the fact that they’ll be using them on the battlefield as well.

All of this comes back down and around to the idea of biases ingrained into social institutions. Our expectation of what a “normal functioning body” is gets imposed by the collective society, as a whole, and placed as restrictions and demands on the bodies of those whom we deem to be “malfunctioning.” As Shew says, “There’s such a pressure to get the prosthesis as if that solves all the problems of maintenance and body and infrastructure. And the pressure is for very expensive tech at that.”

So we are going to have to accept—in a rare instance where Robert Nozick is proven right about how property and personhood relate—that the answer is “You are damaging both property and person, because this person’s property is their person.” But this is true for reasons Nozick probably would not think to consider, and those same reasons put us on weirdly tricky grounds. There’s a lot, in Nozick, of the notion of property as equivalent to life and liberty, in the pursuance of rights, but those ideas don’t play out, here, in the same way as they do in conservative and libertarian ideologies. Where those views would say that the pursuit of property is intimately tied to our worth as persons, in the realm of prosthetics our property is literally simultaneously our bodies, and if we don’t make that distinction, then, as Kirsten notes, we can fall into “money is speech” territory, very quickly, and we do not want that.

Because our goal is to be looking at quality of life, here—talking about the thing that allows a person to feel however they define “comfortable,” in the world. That is, the thing(s) that lets a person intersect with the world in the ways that they desire. And so, in damaging the property, you damage the person. This is all the more true if that person is entirely made of what we are used to thinking of as property.

And all of this is before we think about the fact that implantable and bone-bonded tech will need maintenance. It will wear down and glitch out, and you will need to be able to access it, when it does. This means that the range of ability for those with implantables? Sometimes it’s less than that of folks with more “traditional” prostheses. But because they’re inside, or more easily made to look like the “original” limb, we observers are so much more likely to forget that there are crucial differences at play in the ownership and operation of these bodies.

There’s long been a fear that, the closer we get to being able to easily and cheaply modify humans, the more likely we’ll be to think of humanity as “perfectible.” That the myth of progress—some idealized endpoint—will be so seductive as to become completely irresistible. We’ve seen this before, in the eugenics movement, and it’s reared its head in the transhumanist and H+ communities of the 20th and 21st centuries, as well. But what if, instead of demanding that there be some kind of universally-applicable “baseline,” we intently focused on recognizing the fact that, just as different humans have different biochemical and metabolic needs, processes, capabilities, preferences, and desires, different beings and entities which might be considered persons can be drastically different from us, but no less persons?

Because human beings are different. Is there a general framework, a loosely-defined line around which we draw a conglomeration of traits, within which lives all that we mark out as “human”—a kind of species-wide butter zone? Of course. That’s what makes us a fucking species. But the kind of essentialist language and thinking towards which we tend, after that, is reductionist and dangerous. Our language choices matter, because connotative weight alters what people think and in what context, and, again, we have a habit of moving rapidly from talking about a generalized framework of humanness to talking about “The Right Kind Of Bodies,” and the “Right Kind Of Lifestyle.”

And so, again, again, again, we must address problems such as normalized expectations of “health” and “Ability.” Trying to give everyone access to what they might consider their “best” selves is a brilliant goal, sure, whatever, but by even forwarding the project, we run the risk of colouring an expectation of both what that “best” is and what we think it “Ought To” look like.

Some people need more protein, some people need less choline, some people need higher levels of phosphates, some people have echolocation, some can live to be 125, and every human population has different intestinal bacterial colonies from every other. When we combine all these variables, we will not necessarily find that each and every human being has the same molecular and atomic distribution in the same PPM/B ranges, nor will we necessarily find that our mixing and matching will ensure that everyone gets to be the best combination of everything. It would be fantastic if we could, but everything we’ve ever learned about our species says that “healthy human” is a constantly shifting target, and not a static one.

We are still at a place where the general public reacts with visceral aversion to technological advances and especially anything like an immediated technologically-augmented humanity, and this is at least in part because we still skirt the line of eugenics language, to this day. Because we talk about naturally occurring bio-physiological Facts as though they were in any way indicative of value, without our input. Because we’re still terrible at ethics, continually screwing up at 100mph, then looking back and going, “Oh. Should’ve factored that in. Oops.”

But let’s be clear, here: I am not a doctor. I’m not a physiologist or a molecular biologist. I could be wrong about how all of these things come together in the human body, and maybe there will be something more than a baseline, some set of all species-wide factors which, in the right configuration, say “Healthy Human.” But what I am is someone with a fairly detailed understanding of how language and perception affect people’s acceptance of possibilities, their reaction to new (or hauntingly-familiar-but-repackaged) ideas, and their long-term societal expectations and valuations of normalcy.

And so I’m not saying that we shouldn’t augment humanity, via either mediated or immediated means. I’m not saying that IBM’s Watson and Google’s DeepMind shouldn’t be tasked with searching patient records and correlating data. But I’m also not saying that either of these is an unequivocal good. I’m saying that it’s actually shocking how much correlative capability is indicated by the achievements of both IBM and Google. I’m saying that we need to change the way we talk about and think about what it is we’re doing. We need to ask ourselves questions about informed patient consent, and the notion of opting into the use of data; about the assumptions we’re making in regards to the nature of what makes us human, and the dangers of rampant, unconscious scientistic speciesism. Then, we can start to ask new questions about how to use these new tools we’ve developed.

With this new perspective, we can begin to imagine what would happen if we took Watson and DeepMind’s ability to put data into context—to turn around, in seconds, millions upon millions (billions? trillions?) of permutations and combinations. And then we can ask them to work on tailoring genome-specific health solutions and individualized dietary plans. What if we asked these systems to catalogue literally everything we currently know about every kind of disease presentation, in every ethnic and regional population, and the differentials for various types of people with different histories, risk factors, and current statuses? We already have nanite delivery systems, so what if we used Google’s and IBM’s increasingly ridiculous complexity to figure out how to have those nanobots deliver a payload of perfectly-crafted medical remedies?

But this is fraught territory. If we step wrong, here, we are not simply going to miss an opportunity to develop new cures and devise interesting gadgets. No; to go astray, on this path, is to begin to see categories of people that “shouldn’t” be “allowed” to reproduce, or “to suffer.” A misapprehension of what we’re about, and why, is far fewer steps away from forced sterilization and medical murder than any of us would like to countenance. And so we need to move very carefully, indeed, always being aware of our biases, and remembering to ask those affected by our decisions what they need and what it’s like to be them. And remembering, when they provide us with their input, to believe them.

The Nature

Ted Hand recently linked me to this piece by Steven Pinker, in which Pinker claims that, in contemporary society, the only job of Bioethics—and, following his argument to its conclusion, of technological ethics as a whole—is to “get out of the way” of progress. You can read the whole exchange between Ted, myself, and others by clicking through that link, if you want, and the journal Nature also has a pretty good breakdown of some of the arguments against Pinker, if you want to check them out, but I’m going to take some time to break it all down and expound upon it, here.

Because the fact of the matter is we have to find some third path between the likes of Pinker saying “No limits! WOO!” and Hawking saying “Never do anything! BOOOO!”—a Middle Way of Augmented Personhood, if you will. As Deb Chachra said, “It doesn’t have to be a dichotomy.”

But the problem is that, while I want to blend the best and curtail the worst of both impulses, I have all this vitriol, here. Like, sure, Dr Pinker, it’s not like humans ever met a problem we couldn’t immediately handle, right? We’ll just sort it all out when we get there! We’ve got this global warming thing completely in hand and we know exactly how to regard the status of the now-enhanced humans we previously considered “disabled,” and how to respect the alterity of autistic/neuroatypical minds! Or even just differently-pigmented humans! Yeah, no, that’s all perfectly sorted, and we did it all in situ!

So no need to worry about what it’ll be like as we further immediate and integrate biotechnological advances! SCIENCE’LL FIX THAT FOR US WHEN IT HAPPENS! Why bother figuring out how to get a wider society to think about what “enhancement” means to them, BEFORE they begin to normalize upgrading to the point that other modes of existence are processed out, entirely? Those phenomenological models can’t have anything of VALUE to teach us, otherwise SCIENCE would’ve figured it all out and SHOWN it to us, by now!

Science would’ve told us what benefit blindness may be. Science would’ve TOLD us if we could learn new ways of thinking and understanding by thinking about a thing BEFORE it comes to be! After all, this isn’t some set of biased and human-created Institutions and Modalities, here, folks! It’s SCIENCE!

…And then I flip 37 tables. In a row.

The Lessons
“…Johns Hopkins, syphilis, and Guatemala. Everyone *believes* they are doing right.” —Deb Chachra

As previously noted in “Object Lessons in Freedom,” there is no one in the history of the world who has undertaken a path for anything other than reasons they value. We can get into ideas of meta-valuation and second-order desires, later, but for the sake of having a shorthand, right now: Your motivations motivate you, and whatever you do, you do because you are motivated to do it. You believe that you’re either doing the right thing, or the wrong thing for the right reasons, which is ultimately the same thing. This process has not exactly always brought us to the best of outcomes.

From Tuskegee, to Thalidomide, to dozens of other cases, there have always been instances where people who think they know what’s in the public’s best interest loudly lobby (or secretly conspire) to be allowed to do whatever they want, without oversight or restriction. In a sense, the abuse of persons in the name of “progress” is synonymous with the history of the human species, and so a case might be made that we wouldn’t be where and what we are, right now, if we didn’t occasionally (often) disregard ethics and just do what “needed doing.” But let’s put that another way:

We wouldn’t be where and what we are, if we didn’t occasionally (often) disregard ethics and just do what “needed doing.”

As a species, we are more often shortsighted than not, and much ink has been spilled, and many more pixels have been formed, in the effort to interrogate that fact. We tend to think about a very small group of people connected to ourselves, and we focus our efforts on how to make sure that we and they survive. And so competition becomes selected for, in the face of finite resources, and is tied up with a pleasurable sense of “Having More Than.” But this is just a descriptor of what is, not of the way things “have to be.” We’ve seen where we get when we work together, and we’ve seen where we get when we compete, but the evolutionarily- and sociologically-ingrained belief that we can and will “win” keeps us doing the latter over the former, even though this competition is clearly fucking us all into the ground.

…And then having the descendants of whatever survives digging up that ground millions of years later in search of the kinds of resources that can only be renewed one way: by time and pressure crushing us all to paste.

The Community: Head and Heart

Keeping in mind the work we do, here, I think it can be taken as read that I’m not one for a policy of “gently-gently, slowly-slowly,” when it comes to technological advances, but when basic forethought is equated with Luddism—that is, when we’re told that “PROGRESS Is The Only Way!”™—when long-term implications and unintended consequences are no bother ‘t’all, Because Science, and when people place the fonts of this dreck as the public faces of the intersections of Philosophy and Science? Well then, to put it politely, we are All Fucked.

If we had Transmetropolitan-esque Farsight Reservations, then I would 100% support the going to there and doing of that, but do you know what it takes to get to Farsight? It takes planning and (funnily enough) FORESIGHT. We have to do the work of thinking through the problems, implications, dangers, and literal existential risks of what it is we’re trying to make.

And then we have to take all of what we’ve thought through, and decide to figure out a way to do it all anyway. What I’m saying is that some of this shit can’t be Whoopsed through—we won’t survive it to learn a post hoc lesson. But that doesn’t mean we shouldn’t be trying. This is about saying, “Yeah, let’s DO this, but let’s have thought about it, first.” And to achieve that, we’ll need to be thinking faster and more thoroughly. Many of us have been trying to have this conversation—the basic framework and complete implications of all of this—for over a decade now; the wider conversation’s just now catching up.

But it seems that Steven Pinker wants to drive forward without ever actually learning the principles of driving (though some do propose that we could learn the controls as we go), and Stephen Hawking never wants us to get in the car at all. Neither of these is particularly sustainable, in the long term. Our desires to see a greater field of work done, and for biomedical advancements to be made—for the sake of increasing all of our options, to the benefit of the long-term health of our species, and toward the unfucking of our relationship with the planet—all of these desires make many of us understandably impatient, and in some cases, near-desperately anxious to get underway. But that doesn’t mean that we have to throw ethical considerations out the window.

Starting from either place of “YES ALWAYS DO ALL THE SCIENCE” or “NO NEVER DO THESE SCIENCES” doesn’t get us to the point of understanding why we’re doing the science we’re doing, and what we hope to achieve by it (“increased knowledge” is an acceptable answer, but be prepared to show your work), and what we’ll do if we accidentally start Eugenics-ing all up in this piece, again. Tech and Biotech ethics isn’t about stopping us from exploring. It’s about asking why we want to explore at all, and coming to terms with the real and often unintended consequences that exploration might have on our lives and future generations.

This is a Propellerheads and Shirley Bassey Reference

In an ideal timeline, we’ll have already done all of this thinking in advance (again: what do you think this project is?), but even if not, then we can at least stay a few steps ahead of the tumult.

I feel like I spend a lot of time repeating myself, these days, but if it means we’re mindful and aware of our works, before and as we undertake them, rather than flailing-ly reacting to our aftereffects, then it’s ultimately pretty worth it. We can place ourselves into the kind of mindset that seeks to be constantly considering the possibilities inherent in each new instance.

We don’t engage in ethics to prevent us from acting. We do ethics in order to make certain that, when we do act, it’s because we understand what it means to act and we still want to. Not just driving blindly forward because we literally cannot conceive of any other way.