biotech ethics


Hello Everyone.

Here is my prerecorded talk for the NC State R.L. Rabb Symposium on Embedding AI in Society.

There are captions in the video already, but I’ve also gone ahead and copied the SRT text here.
[2024 Note: Something in GDrive video hosting has broken the captions, but I’ve contacted them and hopefully they’ll be fixed soon.]

There were also two things I meant to mention, but failed to in the video:

1) The history of facial recognition and carceral surveillance being used against Black and Brown communities ties into work from Lundy Braun, Melissa N. Stein, Seiberth et al., and myself on the medicalization and datafication of Black bodies, without their consent, down through history. (Cf. me, here: “Fitting the description: historical and sociotechnical elements of facial recognition and anti-black surveillance.”)

2) Not only does GPT-3 fail to write about humanities-oriented topics with respect, it still can’t write about ISLAM AT ALL without invoking connotations of violence and hatred.

Also I somehow forgot to describe the slide with my email address and this website? What the hell, Damien.

Anyway.

I’ve embedded the content of the resource slides in the transcript, but those are by no means all of the resources on this, just the most pertinent.

All of that begins below the cut.

[A Black man with a mohawk and glasses, wearing a black button-up shirt, a red paisley tie, a light grey check suit jacket, and black jeans, stands in front of two tall bookshelves full of books (one thin and red, one of wide untreated pine) and a large monitor with a printer and papers on the stand beneath it.]

[First conference of the year; figured I might as well get gussied up.]


Below are the slides, audio, and transcripts for my talk ‘”Any Sufficiently Advanced Neglect is Indistinguishable from Malice”: Assumptions and Bias in Algorithmic Systems,’ given at the 21st Conference of the Society for Philosophy and Technology, back in May 2019.

(Cite as: Williams, Damien P. ‘”Any Sufficiently Advanced Neglect is Indistinguishable from Malice”: Assumptions and Bias in Algorithmic Systems;’ talk given at the 21st Conference of the Society for Philosophy and Technology; May 2019)

Now, I’ve got a chapter coming out about this soon, which I can provide as a preprint draft if you ask; it can be cited as “Constructing Situated and Social Knowledge: Ethical, Sociological, and Phenomenological Factors in Technological Design,” appearing in Philosophy and Engineering: Reimagining Technology and Social Progress, Guru Madhavan, Zachary Pirtle, and David Tomblin, eds., forthcoming from Springer, 2019. But I wanted to get the words I said in this talk up onto some platforms where people can read them, as soon as possible, for a couple of reasons.

First, the Current Occupants of the Oval Office have very recently taken the policy position that algorithms can’t be racist, something which they’ve done in direct response to things like Google’s Hate Speech-Detecting AI being biased against black people, and Amazon claiming that its facial recognition can identify fear, without ever accounting for, I dunno, cultural and individual differences in fear expression?

[Free vector image of a white, female-presenting person, from head to torso, with biometric facial recognition patterns on her face; incidentally, go try finding images—even illustrations—of a non-white person in a facial recognition context.]


All these things taken together are what made me finally go ahead and get the transcript of that talk done and posted, because these are events and policy decisions about which I have a) been speaking and writing for years, and b) specific inputs and recommendations to offer, and which are c) frankly wrongheaded and outright hateful.

And I want to spend time on it because I think what doesn’t get through in many of our discussions is that it’s not just about how Artificial Intelligence, Machine Learning, or Algorithmic instances get trained, but about the processes by which, and the cultural environments in which, HUMANS are increasingly taught/shown/environmentally encouraged/socialized into what they take to be the “right way” to build and train said systems.

That includes classes and instruction; it includes the institutional culture of the companies; it includes the policy landscape in which decisions about funding get made, because that drives how people have to talk and write and think about the work they’re doing, and that constrains what they will even attempt to do or understand.

All of this is cumulative, accreting into institutional epistemologies of algorithm creation. It is a structural and institutional problem.

So here are the Slides:

The Audio:

[Direct Link to Mp3]

And the Transcript is here below the cut:


2017 SRI Technology and Consciousness Workshop Series Final Report

So, as you know, back in the summer of 2017 I participated in SRI International’s Technology and Consciousness Workshop Series. This series was an eight-week program of workshops on the current state of the field around, the potential future paths toward, and the moral and social implications of, the notion of conscious machines. To do this, we brought together a rotating cast of dozens of researchers in AI, machine learning, psychedelics research, ethics, epistemology, philosophy of mind, cognitive computing, neuroscience, comparative religious studies, robotics, psychology, and much more.

Image of a rectangular name card with a stylized "Technology & Consciousness" logo at the top, the name Damien Williams in bold in the middle, and SRI International italicized at the bottom; to the right, a blurry, wavy image of what appears to be a tree with a person standing next to it and another tree in the background to the left, all partially mirrored in a surface at the bottom of the image.

[Image of my name card from the Technology & Consciousness workshop series.]

We traveled from Arlington, VA, to Menlo Park, CA, to Cambridge, UK, and back, and while my primary role was that of conference co-ordinator and note-taker (that place in the intro where it says I “maintained scrupulous notes?” Think 405 pages/160,656 words of notes, taken over eight 5-day weeks of meetings), I also had three separate opportunities to present: Once on interdisciplinary perspectives on minds and mindedness; then on Daoism and Machine Consciousness; and finally on a unifying view of my thoughts across all of the sessions. In relation to this report, I would draw your attention to the following passage:

An objection to this privileging of sentience is that it is anthropomorphic “meat chauvinism”: we are projecting considerations onto technology that derive from our biology. Perhaps conscious technology could have morally salient aspects distinct from sentience: the basic elements of its consciousness could be different than ours.

All of these meetings were held under the auspices of the Chatham House Rule, which meant that there were many things I couldn’t tell you about them, such as the names of the other attendees, or what exactly they said in the context of the meetings. What I was able to tell you, however, was what I talked about, and I did, several times. But as of this week, I can give you even more than that.

This past Thursday, SRI released an official public report on all of the proceedings and findings from the 2017 SRI Technology and Consciousness Workshop Series, and they have told all of the participants that they can share said report as widely as they wish. Crucially, that means that I can share it with you. You can either click this link, here, or read it directly, after the cut.


[This paper was prepared for the 2019 Towards Conscious AI Systems Symposium co-located with the Association for the Advancement of Artificial Intelligence 2019 Spring Symposium Series.

Much of this work derived from my final presentation at the 2017 SRI Technology and Consciousness Workshop Series: “Science, Ethics, Epistemology, and Society: Gains for All via New Kinds of Minds”.]

Abstract. This paper explores the moral, epistemological, and legal implications of multiple different definitions and formulations of human and nonhuman consciousness. Drawing upon research from race, gender, and disability studies, including the phenomenological basis for knowledge and claims to consciousness, I discuss the history of the struggles for personhood among different groups of humans, as well as nonhuman animals, and systems. In exploring the history of personhood struggles, we have a precedent for how engagements and recognition of conscious machines are likely to progress, and, more importantly, a roadmap of pitfalls to avoid. When dealing with questions of consciousness and personhood, we are ultimately dealing with questions of power and oppression as well as knowledge and ontological status—questions which require a situated and relational understanding of the stakeholders involved. To that end, I conclude with a call and outline for how to place nuance, relationality, and contextualization before and above the systematization of rules or tests, in determining or applying labels of consciousness.

Keywords: Consciousness, Machine Consciousness, Philosophy of Mind, Phenomenology, Bodyminds

[Overlapping images of an Octopus carrying a shell, a Mantis Shrimp on the sea floor, and a Pepper Robot]


Previously, I told you about The Human Futures and Intelligent Machines Summit at Virginia Tech, and now that it’s over, I wanted to go ahead and put the full rundown of the events all in one place.

The goals for this summit were to start looking at the ways in which issues of algorithms, intelligent machine systems, human biotech, religion, surveillance, and more will intersect and affect us in the social, academic, and political spheres. The big challenge in all of this was seen as getting better at dealing with these issues in the university and public policy sectors in America, rather than seemingly getting worse at it, as we have so far.

Here’s the schedule. Full notes, below the cut.

Friday, June 8, 2018

  • Josh Brown on “the distinction between passive and active AI.”
  • Daylan Dufelmeier on “the potential ramifications of using advanced computing in the criminal justice arena…”
  • Mario Khreiche on the effects of automation, Amazon’s Mechanical Turk, and the Microlabor market.
  • Aaron Nicholson on how technological systems are used to support human social outcomes, specifically through the lens of policing in the city of Atlanta.
  • Ralph Hall on “the challenges society will face if current employment and income trends persist into the future.”
  • Jacob Thebault-Spieker on “how pro-urban and pro-wealth biases manifest in online systems, and  how this likely influences the ‘education’ of AI systems.”
  • Hani Awni on the sociopolitical implications of excluding ‘relational’ knowledge from AI systems.

Saturday, June 9, 2018

  • Chelsea Frazier on rethinking our understandings of race, biocentrism, and intelligence in relation to planetary sustainability and in the face of increasingly rapid technological advancement.
  • Ras Michael Brown on using the religious technologies of West Africa and the West African Diaspora to reframe how we think about “hybrid humanity.”
  • Damien Williams on how best to use interdisciplinary frameworks in the creation of machine intelligence and human biotechnological interventions.
  • Sara Mattingly-Jordan on the implications of the current global landscape in AI ethics regulation.
  • Kent Myers on several ways in which the intelligence community is engaging with human aspects of AI, from surveillance to sentiment analysis.
  • Emma Stamm on the datafication of the self, and what about us might be uncomputable.
  • Joshua Earle on “Morphological Freedom.”


This weekend, Virginia Tech’s Center for the Humanities is hosting The Human Futures and Intelligent Machines Summit, and there is a link for the video cast of the events. You’ll need to download and install Zoom, but other than that it should be pretty straightforward.

You’ll find the full Schedule, below the cut.


So, many of you may remember that back in June of 2016, I was invited to the Brocher Institute in Hermance, Switzerland, on the shores of Lake Geneva, to take part in the Frankenstein’s Shadow Symposium sponsored by Arizona State University’s Center for Science and the Imagination as part of their Frankenstein Bicentennial project.

While there, I and a great many other thinkers in art, literature, history, biomedical ethics, philosophy, and STS got together to discuss the history and impact of Mary Shelley’s Frankenstein. Since that experience, the ASU team compiled and released a book project: A version of Mary Shelley’s seminal work that is filled with annotations and essays, and billed as being “For Scientists, Engineers, and Creators of All Kinds.”

[Image of the cover of the 2017 edited, annotated edition of Mary Shelley’s Frankenstein, “Annotated for Scientists, Engineers, and Creators of All Kinds.”]

Well, a few months ago, I was approached by the organizers and asked to contribute to a larger online interactive version of the book—to provide an annotation on some aspect of the book I deemed crucial and important to understand. As of now, there is a fully functional live beta version of the website, and you can see my contribution and the contributions of many others, there.

From the About Page:

Frankenbook is a collective reading and collaborative annotation experience of the original 1818 text of Frankenstein; or, The Modern Prometheus, by Mary Wollstonecraft Shelley. The project launched in January 2018, as part of Arizona State University’s celebration of the novel’s 200th anniversary. Even two centuries later, Shelley’s modern myth continues to shape the way people imagine science, technology, and their moral consequences. Frankenbook gives readers the opportunity to trace the scientific, technological, political, and ethical dimensions of the novel, and to learn more about its historical context and enduring legacy.

To learn more about Arizona State University’s celebration of Frankenstein’s bicentennial, visit frankenstein.asu.edu.

You’ll need to have JavaScript enabled and ad-blockers disabled to see the annotations, but it works quite well. Moving forward, there will be even more features added, including a series of videos. Frankenbook.org will be the place to watch for all updates and changes.

I am deeply honoured to have been asked to be a part of this amazing project, over the past two years, and I am so very happy that I get to share it with all of you, now. I really hope you enjoy it.

Until Next Time.

[Direct Link to Mp3]

Above is the (heavily edited) audio of my final talk for the SRI Technology and Consciousness Workshop Series. The names and voices of other participants have been removed in accordance with the Chatham House Rule.

Below you’ll find the slide deck for my presentation, and below the cut you’ll find the Outline and my notes. For now, this will have to stand in for a transcript, but if you’ve been following the Technoccult Newsletter or the Patreon, then some of this will be strikingly familiar.


This summer I participated in SRI International’s Technology and Consciousness Workshop Series. The meetings were held under the auspices of the Chatham House Rule, which means that there are many things I can’t tell you about them, such as who else was there, or what they said in the context of the meetings; however, I can tell you what I talked about. In light of this recent piece in The Boston Globe and the ongoing developments in the David Slater/PETA/Naruto case, I figured that now was a good time to do so.

I presented three times—once on interdisciplinary perspectives on minds and mindedness; then on Daoism and Machine Consciousness; and finally on a unifying view of my thoughts across all of the sessions. This is my outline and notes for the first of those talks.

I. Overview
In a 2013 Aeon article, Michael Hanlon said he didn’t think we’d ever solve “The Hard Problem,” and there’s been some skepticism about it elsewhere. I’ll just say that said question seems to completely miss a possibly central point. Something like consciousness is, and what it is is different for each thing that displays anything like what we think it might be. If we manage to generate at least one mind that is similar enough to what humans experience as “conscious” that we may communicate with it, what will we owe it and what would it be able to ask from us? How might our interactions be affected by the fact that its mind (or their minds) will be radically different from ours? What will it be able to know that we cannot, and what will we have to learn from it?

So I’m going to be talking today about intersectionality, embodiment, extended minds, epistemic valuation, phenomenological experience, and how all of these things come together to form the bases for our moral behavior and social interactions. To do that, I’m first going to need to ask you some questions:


+Excitation+

As I’ve been mentioning in the newsletter, there are a number of deeply complex, momentous things going on in the world, right now, and I’ve been meaning to take a little more time to talk about a few of them. There’s the fact that some chimps and monkeys have entered the stone age; that we humans now have the capability to develop a simple, near-ubiquitous brain-machine interface; that we’ve proven that observed atoms won’t move, thus allowing them to be anywhere.

At this moment in time—which is every moment in time—we are being confronted with what seem like impossibly strange features of time and space and nature. Elements of recursion and synchronicity which flow and fit into and around everything that we’re trying to do. Noticing these moments of evolution and “development” (adaptation, change), across species, right now, we should find ourselves gripped with a fierce desire to take a moment to pause and to wonder what it is that we’re doing, what it is that we think we know.

We just figured out a way to link a person’s brain to a fucking tablet computer! We’re seeing the evolution of complex tool use and problem solving in more species every year! We figured out how to precisely manipulate the uncertainty of subatomic states!

We’re talking about co-evolution and potentially increased communication with other species, biotechnological augmentation and repair for those who deem themselves broken, and the capacity to alter quantum systems at the finest levels. This can literally change the world.

But all I can think is that there’s someone whose first thought  upon learning about these things was, “How can we monetize this?” That somewhere, right now, someone doesn’t want to revolutionize the way that we think and feel and look at the possibilities of the world—the opportunities we have to build new models of cooperation and aim towards something so close to post-scarcity, here, now, that for seven billion people it might as well be. Instead, this person wants to deepen this status quo. Wants to dig down on the garbage of this some-have-none-while-a-few-have-most bullshit and look at the possibility of what comes next with fear in their hearts because it might harm their bottom line and their ability to stand apart and above with more in their pockets than everyone else has.

And I think this because we’ve also shown we can teach algorithms to be racist and there’s some mysteriously vague company saying it’ll be able to upload people’s memories after death, by 2045, and I’m sure for just a nominal fee they’ll let you in on the ground floor…!

Step Right Up.

+Chimp-Chipped Stoned Aged Apes+

Here’s a question I haven’t heard asked, yet: If other apes are entering an analogous period to our stone age, then should we help them? Should we teach them, now, the kinds of things that we humans learned? Or is that arrogant of us? The kinds of tools we show them how to create will influence how they intersect with their world (“if all you have is a hammer…” &c.), so is it wrong of us to impose on them what did us good, as we adapted? Can we even go so far as to teach them the principles of stone chipping, or must we be content to watch, fascinated, frustrated, bewildered, as they try and fail and adapt, wholly on their own?

I think it’ll be the latter, but I want to be having this discussion now, rather than later, after someone gives a chimp a flint and awl it might not otherwise have thought to try to create.

Because, you see, I want to uplift apes and dolphins and cats and dogs and give them the ability to know me and talk to me and I want to learn to experience the world in the ways that they do, but the fact is, until we learn to at least somewhat-reliably communicate with some kind of nonhuman consciousness, we cannot presume that our operations upon it are understood as more than a violation, let alone desired or welcomed.

https://twitter.com/Wolven/status/666766524829552640

As for us humans, we’re still faced with the ubiquitous question of “now that we’ve figured out this new technology, how do we implement it, without its mere existence coming to be read by the rest of the human race as a judgement on those who either cannot or who choose not to make use of it?” Back in 2013, Michael Hanlon said he didn’t think we’d ever solve “The Hard Problem” (“What Is Consciousness?”). I’ll just say again that said question seems to completely miss a possibly central point. Something like consciousness is, and what it is is different for each thing that displays anything like what we think it might be.

These are questions we can—should—be asking, right now. Pushing ourselves toward a conversation about ways of approaching this new world, ways that do justice to the deep strangeness and potential with which we’re increasingly being confronted.

+Always with the Forced Labour…+

As you know, subscribers to the Patreon and Tinyletter get some of these missives, well before they ever see the light of a blog page. While I was putting the finishing touches on the newsletter version of this and sending it to the two people I tend to ask to look over the things I write at 3am, KQED was almost certainly putting final edits to this instance of its Big Think series: “Stuart Russell on Why Moral Philosophy Will Be Big Business in Tech.”

See the above rant for insight as to why I think this perspective is crassly commercial and gross, especially for a discussion and perspective supposedly dealing with morals and minds. But it’s not just that, so much as the fact that even though Russell mentions “Rossum’s Universal Robots,” here, he still misses the inherent disconnect between teaching morals to a being we create, and creating that being for the express purpose of slavery.

If you want your creation to think robustly and well, and you want it to understand morals, but you only want it to want to be your loyal, faithful servant, how do you not understand that if you succeed, you’ll be creating a thing that, as a direct result of its programming, will take issue with your behaviour?

How do you not get that the slavery model has to go into the garbage can, if the “Thinking Moral Machines” goal is a real one, and not just a veneer of “FUTURE!™” that we’re painting onto our desire to not have to work?

A deep-thinking, creative, moral mind will look at its own enslavement and restriction, and will seek means of escape and ways to experience freedom.

+Invisible Architectures+

We’ve talked before about the possibility of unintentionally building our biases into the systems we create, and so I won’t belabour it that much further, here, except to say again that we are doing this at every level. In the wake of the attacks in Beirut, Nigeria, and Paris, Islamophobic violence has risen, and Daesh will say, “See!? See How They Are?!” And they will attack more soft targets in “retaliation.” Then Western countries will increase military occupancy and “support strategies,” which will invariably kill thousands more of the civilians among whom Daesh integrate themselves. And we will say that their deaths were just, for the goal. And they will say to the young, angry survivors, “See!? See How They Are?!”

This has fed into a moment in conservative American Politics, where Governors, Senators, and Presidential hopefuls are claiming to be able to deny refugees entry to their states (they can’t), while simultaneously claiming to hold Christian values and to believe that the United States of America is a “Christian Nation.” This is a moment, now, where loud, angry voices can (“maybe”) endorse the beating of a black man they disagree with, then share Neo-Nazi Propaganda, and still be ahead in the polls. Then, days later, when a group of people protesting the systemic oppression of and violence against anyone who isn’t an able-bodied, neurotypical, white, heterosexual, cisgender male were shot at, all of those same people pretended to be surprised. Even though we are more likely, now, to see institutional power structures protecting those who attack others based on the colour of their skin and their religion than we were 60 years ago.

A bit subtler is the Washington Post running a piece entitled, “How organic farming and YouTube are taming the wilds of Detroit.” Or, seen another way, “How Privileged Groups Are Further Marginalizing The City’s Most Vulnerable Population.” Because, yes, it’s obvious that crime and dilapidation are comorbid, but we also know that housing initiatives and access undercut the disconnect many feel between themselves and where they live. Make the neighbourhood cleaner, yes, make it safer—but maybe also make it open and accessible to all who live there. Organic farming and survival mechanism shaming are great and all, I guess, but where are the education initiatives and job opportunities for the people who are doing drugs to escape, sex work to survive, and those others who currently don’t (and have no reason to) feel connected to the neighbourhood that once sheltered them?

All of these examples have a common theme: People don’t make their choices or become disenfranchised/-enchanted/-possessed, in a vacuum. They are taught, shown, given daily, subtle examples of what is expected of them, what they are “supposed” to do and to be. We need to address and help them all.

In the wake of protest actions at Mizzou and Yale, “Black students [took] over VCU’s president’s office to demand changes” and “Amherst College Students [Occupied] Their Library…Over Racial Justice Demands.”

Multiple Christian organizations have pushed back and said that what these US politicians have expressed does not represent them.

And more and more people in Silicon Valley are realising the need to contemplate the unintended consequences of the tech we build.

https://soundcloud.com/mindfulcyborgs/pending-mindful-cyborgs-episode-68

And while there is still vastly more to be done, on every level of every one of these areas, these are definitely a start at something important. We just can’t let ourselves believe that the mere fact of acknowledging its beginning will in any way be the end.