ethics

All posts tagged ethics

Previously, I told you about The Human Futures and Intelligent Machines Summit at Virginia Tech, and now that it’s over, I wanted to go ahead and put the full rundown of the events all in one place.

The goals for this summit were to start looking at the ways in which issues of algorithms, intelligent machine systems, human biotech, religion, surveillance, and more will intersect and affect us in the social, academic, and political spheres. The big challenge in all of this was seen as getting better at dealing with these issues in the university and public policy sectors in America, rather than worse, as we seemingly have so far.

Here’s the schedule. Full notes, below the cut.

Friday, June 8, 2018

  • Josh Brown on “the distinction between passive and active AI.”
  • Daylan Dufelmeier on “the potential ramifications of using advanced computing in the criminal justice arena…”
  • Mario Khreiche on the effects of automation, Amazon’s Mechanical Turk, and the Microlabor market.
  • Aaron Nicholson on how technological systems are used to support human social outcomes, specifically through the lens of policing in the city of Atlanta.
  • Ralph Hall on “the challenges society will face if current employment and income trends persist into the future.”
  • Jacob Thebault-Spieker on “how pro-urban and pro-wealth biases manifest in online systems, and how this likely influences the ‘education’ of AI systems.”
  • Hani Awni on the sociopolitical implications of excluding ‘relational’ knowledge from AI systems.

Saturday, June 9, 2018

  • Chelsea Frazier on rethinking our understandings of race, biocentrism, and intelligence in relation to planetary sustainability and in the face of increasingly rapid technological advancement.
  • Ras Michael Brown on using the religious technologies of West Africa and the West African Diaspora to reframe how we think about “hybrid humanity.”
  • Damien Williams on how best to use interdisciplinary frameworks in the creation of machine intelligence and human biotechnological interventions.
  • Sara Mattingly-Jordan on the implications of the current global landscape in AI ethics regulation.
  • Kent Myers on several ways in which the intelligence community is engaging with human aspects of AI, from surveillance to sentiment analysis.
  • Emma Stamm on the datafication of the self, and on what about us might be uncomputable.
  • Joshua Earle on “Morphological Freedom.”

Continue Reading

I talked with Hewlett Packard Enterprise’s Curt Hopkins, for their article “4 obstacles to ethical AI (and how to address them).” We spoke about the kinds of specific tools and techniques by which people who populate or manage artificial intelligence design teams can incorporate expertise from the humanities and social sciences. We also talked about compelling reasons why they should do this, other than the fact that they’re just, y’know, very good ideas.

From the Article:

To “bracket out” bias, Williams says, “I have to recognize how I create systems and code my understanding of the world.” That means making an effort early on to pay attention to the data entered. The more diverse the group, the less likely an AI system is to reinforce shared bias. Those issues go beyond gender and race; they also encompass what you studied, the economic group you come from, your religious background, all of your experiences.

That becomes another reason to diversify the technical staff, says Williams. This is not merely an ethical act. The business strategy may produce more profit because the end result may be a more effective AI. “The best system is the one that best reflects the wide range of lived experiences and knowledge in the world,” he says.

[Image of two blank, white, eyeless faces, partially overlapping each other.]

To be clear, this is an instance in which I tried to find capitalist reasons that would convince capitalist people to do the right thing. To that end, you should imagine that all of my sentences start with “Well if we’re going to continue to be stuck with global capitalism until we work to dismantle it…” Because they basically all did.

I get how folx might think that framing would be a bit of a buzzkill for a tech industry audience, but I do want to highlight and stress something: Many of the ethical problems we’re concerned with mitigating or ameliorating are direct products of the capitalist system in which we are making these choices and building these technologies.

All of that being said, I’m not the only person there with something interesting to say, and you should go check out the rest of my and other people’s comments.

Until Next Time.

Last week, I talked to The Atlantic’s Ed Yong about new research in crowd sentiment tipping points, how it could give hope and dread for those working for social change, and how it might be used by bad actors to create/enhance already-extant sentiment-manipulation factories.

From the article:

…“You see this clump of failures below 25 percent and this clump of successes above 25 percent,” Centola says. “Mathematically, we predicted that, but seeing it in a real population was phenomenal.”

“What I think is happening at the threshold is that there’s a pretty high probability that a noncommitted actor”—a person who can be swayed in any direction—“will encounter a majority of committed minority actors, and flip to join them,” says Pamela Oliver, a sociologist at the University of Wisconsin at Madison. “There is therefore a good probability that enough non-committed actors will all flip at the same time that the whole system will flip.”

We talked about a lot, and much of it didn’t make it into the article, but one of the things that matters most about all of this is that we’re going to have to be increasingly mindful and intentional about the information we take in. We now know that we have the ability to move the needle of conversation, with not too much effort, and with this knowledge we can make progressive social change. We can use this to fight against the despair that can so easily creep into this work of spreading compassion and trying to create a world where we can all flourish.
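To make the threshold dynamic concrete, here’s a toy sketch in Python of a committed-minority “naming game,” loosely in the spirit of the tipping-point research. To be clear, this is my own simplified illustration, not the study’s actual experimental protocol or model, and all parameter values here are invented:

```python
import random

def naming_game(n_agents=50, committed_frac=0.3, steps=30000, seed=1):
    """Toy committed-minority naming game (an illustration, not the study's model).

    Committed agents always say "B" and never change their minds; everyone
    else starts with "A" in memory and updates through pairwise exchanges.
    Returns the fraction of non-committed agents whose memory has collapsed
    to {"B"} by the end.
    """
    rng = random.Random(seed)
    n_committed = int(n_agents * committed_frac)
    committed = set(range(n_committed))
    memory = {i: {"B"} if i in committed else {"A"} for i in range(n_agents)}

    for _ in range(steps):
        speaker, listener = rng.sample(range(n_agents), 2)
        # Committed agents always push "B"; others say a word they remember.
        word = "B" if speaker in committed else rng.choice(sorted(memory[speaker]))
        if word in memory[listener]:
            # Successful exchange: both collapse to the shared word
            # (committed agents never update).
            if speaker not in committed:
                memory[speaker] = {word}
            if listener not in committed:
                memory[listener] = {word}
        else:
            # Failed exchange: the listener just remembers the new word.
            memory[listener].add(word)

    converts = [i for i in range(n_agents)
                if i not in committed and memory[i] == {"B"}]
    return len(converts) / max(1, n_agents - n_committed)
```

Run with a committed fraction well below the tipping point and the old convention largely holds; run it well above, and the whole population tends to flip. The exact crossover in a toy like this won’t match the study’s 25 percent, which is exactly why it’s only a sketch.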


[Argentina’s Mt Tronador Casaño Overa glacier, by McKay Savage]

But we have to know that there will also be those who see this as a target number to hit so that they might better disrupt and destabilize groups and beliefs. We already know that many such people are hard at work, trying to sow doubt and mistrust. We already have evidence that these actors will make other people’s lives unpleasant for the sake of it. With this new research, they’ll be encouraged, as well. As I said to Ed Yong:

“There are already a number of people out there who are gaming group dynamics in careful ways… If they know what target numbers they have to hit, it’s easy to see how they could take this information and create [or increase the output of the existing] sentiment-manipulation factory.”

The infiltration of progressive groups to move them toward chaos and internal strife is not news, just like the infiltration (and origin) of police and military groups by white supremacists is not news.

And so, while I don’t want to add to a world in which people feel like they have to continually mistrust each other, we do have to be intentional about the work we do, and how we do it, and we have to be mindful of who is trying to get us to believe what, and why they want us to believe it. Especially if we want to get others to believe and value as we do.

This research gives us a useful set of tools and a good place to start.

Until Next Time.

This weekend, Virginia Tech’s Center for the Humanities is hosting The Human Futures and Intelligent Machines Summit, and there is a link for the video cast of the events. You’ll need to download and install Zoom, but other than that it should be pretty straightforward.

You’ll find the full schedule, below the cut.

Continue Reading

My piece “Cultivating Technomoral Interrelations,” a review of Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting, has been up over at the Social Epistemology Review and Reply Collective for a few months now, so I figured I should post something about it, here.

As you’ll read, I was extremely taken with Vallor’s book, and think it is a part of some very important work being done. From the piece:

Additionally, her crucial point seems to be that through intentional cultivation of the self and our society, through our personally grappling with these tasks, we can move the world. This stance leaves out, for instance, notions of potential socioeconomic or political resistance to these moves. There are those with a vested interest in not having a more mindful and intentional technomoral ethos, because that would undercut how they make their money. However, it may be that this is Vallor’s intent.

The audience and goal for this book seems to be ethicists who will be persuaded to become philosophers of technology, who will then take up this book’s understandings and go speak to policy makers and entrepreneurs, who will then make changes in how they deal with the public. If this is the case, then there will already be a shared conceptual background between Vallor and many of the other scholars whom she hopes will help her do the hard work of changing how people think about their values. But those philosophers will need a great deal more power, oversight authority, and influence to effectively advocate for and implement what Vallor suggests, here, and we’ll need sociopolitical mechanisms for making those valuative changes, as well.

[Image of the front cover of Shannon Vallor’s TECHNOLOGY AND THE VIRTUES. Circuit pathways in the shapes of trees.]

This is, as I said, one part of a larger, crucial project of bringing philosophy, the humanities, and social sciences into wide public conversation with technoscientific fields and developers. While there have always been others doing this work, it is increasingly the case that these folks are being both heeded and given institutional power and oversight authority.

As we continue the work of building these systems, and in the wake of all these recent events, more and more like this will be necessary.

Shannon Vallor’s Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting is out in paperback, June 1st, 2018. Read the rest of “Cultivating Technomoral Interrelations: A Review of Shannon Vallor’s Technology and the Virtues” at the Social Epistemology Review and Reply Collective.

A few weeks ago I had a conversation with David McRaney of the You Are Not So Smart podcast, for his episode on Machine Bias. As he says on the blog:

Now that algorithms are everywhere, helping us to both run and make sense of the world, a strange question has emerged among artificial intelligence researchers: When is it ok to predict the future based on the past? When is it ok to be biased?

“I want a machine-learning algorithm to learn what tumors looked like in the past, and I want it to become biased toward selecting those kind of tumors in the future,” explains philosopher Shannon Vallor at Santa Clara University. “But I don’t want a machine-learning algorithm to learn what successful engineers and doctors looked like in the past and then become biased toward selecting those kinds of people when sorting and ranking resumes.”

We talk about this, sentencing algorithms, the notion of how to raise and teach our digital offspring, and more. You can listen to it all here:

[Direct Link to the Mp3 Here]

If and when it gets a transcript, I will update this post with a link to that.

Until Next Time.

[Direct Link to Mp3]

Above is the (heavily edited) audio of my final talk for the SRI Technology and Consciousness Workshop Series. The names and voices of other participants have been removed in accordance with the Chatham House Rule.

Below you’ll find the slide deck for my presentation, and below the cut you’ll find the Outline and my notes. For now, this will have to stand in for a transcript, but if you’ve been following the Technoccult Newsletter or the Patreon, then some of this will be strikingly familiar.

Continue Reading

[Direct link to Mp3]

My second talk for the SRI International Technology and Consciousness Workshop Series was about how nonwestern philosophies such as Buddhism, Hinduism, and Daoism can help mitigate various kinds of bias in machine minds and increase compassion, by allowing programmers and designers to think from within a non-zero-sum matrix of win conditions for all living beings. That means engaging multiple tokens and types of minds, outside of the assumed human “default” of straight, white, cis, ablebodied, neurotypical male. I don’t have a transcript yet; I’ll update this post when I make one. But for now, here are my slides and some thoughts.

A Discussion on Daoism and Machine Consciousness (Slides as PDF)

(The translations of the Daoist texts referenced in the presentation are available online: The Burton Watson translation of the Chuang Tzu and the Robert G. Hendricks translation of the Tao Te Ching.)

A zero-sum system is one in which there are finite resources, but more than that, it is one in which what one side gains, another loses. So by “A non-zero-sum matrix of win conditions” I mean a combination of all of our needs and wants and resources in such a way that everyone wins. Basically, we’re talking here about trying to figure out how to program a machine consciousness that’s a master of wu-wei and limitless compassion, or metta.
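As a minimal illustration of the distinction (with invented payoff values, not drawn from any cited source), here it is in Python: in a zero-sum game every outcome’s payoffs cancel out, while a non-zero-sum game can contain outcomes where everyone wins.

```python
# Each entry maps a pair of choices to (payoff_for_A, payoff_for_B).
# These numbers are made up purely to illustrate the structural difference.

# Zero-sum: whatever one side gains, the other loses.
zero_sum = {
    ("cooperate", "cooperate"): (0, 0),
    ("cooperate", "defect"):    (-1, 1),
    ("defect",    "cooperate"): (1, -1),
    ("defect",    "defect"):    (0, 0),
}

# Non-zero-sum: at least one outcome exists where both sides gain.
non_zero_sum = {
    ("cooperate", "cooperate"): (2, 2),   # everyone wins
    ("cooperate", "defect"):    (-1, 1),
    ("defect",    "cooperate"): (1, -1),
    ("defect",    "defect"):    (0, 0),
}

def is_zero_sum(game):
    """True if every outcome's payoffs sum to zero."""
    return all(a + b == 0 for a, b in game.values())

def has_mutual_win(game):
    """True if some outcome leaves every player better off."""
    return any(a > 0 and b > 0 for a, b in game.values())
```

The `has_mutual_win` check is a formal version of what I mean by a win condition for all: the design question becomes whether the system we’re building even contains such an outcome, and whether it steers toward it.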

The whole week was about phenomenology and religion and magic and AI, and it helped me think through some problems, like how even the framing of exercises like asking Buddhist monks to talk about the Trolley Problem will miss so much that the results are meaningless. That is, trolley problem cases tend to assume from the outset that someone on the tracks has to die, and so they don’t take into account that an entire other mode of reasoning about sacrifice, death, and “acceptable losses” would have someone throw themselves under the wheels, or jam their body into the gears, to try to stop the trolley before it got that far. Again: there are entire categories of nonwestern reasoning that don’t accept zero-sum thought as anything but lazy, and which search for ways by which everyone can win, so we’ll need to learn to program for contradiction, not just as a tolerated state but as an underlying component. These systems assume infinitude, and non-zero-sum matrices in which every being involved can win.

Continue Reading

This summer I participated in SRI International’s Technology and Consciousness Workshop Series. The meetings were held under the auspices of the Chatham House Rule, which means that there are many things I can’t tell you about them, such as who else was there, or what they said in the context of the meetings; however, I can tell you what I talked about. In light of this recent piece in The Boston Globe and the ongoing developments in the David Slater/PETA/Naruto case, I figured that now was a good time to do so.

I presented three times—once on interdisciplinary perspectives on minds and mindedness; then on Daoism and Machine Consciousness; and finally on a unifying view of my thoughts across all of the sessions. This is my outline and notes for the first of those talks.

I. Overview
In a 2013 Aeon article, Michael Hanlon said he didn’t think we’d ever solve “The Hard Problem,” and there’s been some skepticism about it elsewhere. I’ll just say that the question seems to miss a possibly central point. Something like consciousness is, and what it is is different for each thing that displays anything like what we think it might be. If we manage to generate at least one mind that is similar enough to what humans experience as “conscious” that we may communicate with it, what will we owe it, and what would it be able to ask of us? How might our interactions be affected by the fact that its mind (or their minds) will be radically different from ours? What will it be able to know that we cannot, and what will we have to learn from it?

So I’m going to be talking today about intersectionality, embodiment, extended minds, epistemic valuation, phenomenological experience, and how all of these things come together to form the bases for our moral behavior and social interactions. To do that, I’m first going to need to ask you some questions:

Continue Reading

(This was originally posted over at Medium, though parts of it first appeared in the newsletter; I wanted it somewhere I could more easily manage.)


Hey.

I just wanna say (and you know who you are): I get you were scared of losing your way of life — the status quo was changing all around you. Suddenly it wasn’t okay anymore to say or do things that the world previously told you were harmless. People who didn’t “feel” like you were suddenly loudly everywhere, and no one just automatically believed what you or those you believed in had to say, anymore. That must have been utterly terrifying.

But here’s the thing: People are really scared now. Not just of obsolescence, or of being ignored. They’re terrified for their lives. They’re not worried about “the world they knew.” They’re worried about whether they’ll be rounded up and put in camps or shot or beaten in the street. Because, you see, many of the people who voted for this, and things like it around the world, see many of us — women, minorities, immigrants, LGBTQIA folks, disabled folks, neurodivergent folks — as less than “real” people, and want to be able to shut us up using whatever means they deem appropriate, including death.

The vice president elect thinks gay people can be “retrained,” and that we should attempt it via the same methods that make us side-eye dog owners. The man tapped to be a key advisor displays and has cultivated an environment of white supremacist hatred. The president-elect is said to be “mulling over” a registry for Muslim people in the country. A registry. Based on your religion.

My own cousin had food thrown at her in a diner, right before the election. And things haven’t exactly gotten better, since then.

Certain hateful elements want many of us dead or silent and “in our place,” now, just as much as ever. And all we want and ask for is equal respect, life, and justice.

I said it on election night and I’ll say it again: there’s no take-backsies, here. I’m speaking to those who actively voted for this, or didn’t actively plant yourselves against it (and you know who you are): You did this. You cultivated it. And I know you did what you thought you had to, but people you love are scared, because their lives are literally in danger, so it’s time to wake up now. It’s time to say “No.”

We’re all worried about jobs and money and “enough,” because that’s what this system was designed to make us worry about. Your Muslim neighbour, your gay neighbour, your trans neighbour, your immigrant neighbour, your NEIGHBOUR IS NOT YOUR ENEMY. The system that tells you to hate and fear them is. And if you bought into that system because you couldn’t help being afraid then I’m sorry, but it’s time to put it down and Wake Up. Find it in yourself to ask forgiveness of yourself and of those you’ve caused mortal terror. If you call yourself Christian, that should ring really familiar. But other faiths (and nonfaiths) know it too.

We do better together. So it’s time to gather up, together, work, together, and say “No,” together.

So snap yourself out of it, and help us. If you’re in the US, please call your representatives, federal and local. Tell them what you want, tell them why you’re scared. Tell them that these people don’t represent our values and the world we wish to see:
http://www.house.gov/representatives/find/
http://www.senate.gov/senators/contact/

Because this, right here, is the fundamental difference between fearing the loss of your way of life, and the fear of losing your literal life.

Be with the people you love. Be by their side and raise their voices if they can’t do it for themselves, for whatever reason. Listen to them, and create a space where they feel heard and loved, and where others will listen to them as well.

And when you come around, don’t let your pendulum swing so far that you fault those who can’t move forward, yet. Please remember that there is a large contingent of people who, for many various reasons, cannot be out there protesting. Shaming people who have anxiety, depression, crippling fear of their LIVES, or are trying to not get arrested so their kids can, y’know, EAT FOOD? Doesn’t help.

So show some fucking compassion. Don’t shame those who are tired and scared and just need time to collect themselves. Urge and offer assistance where you can, and try to understand their needs. Just do what you can to help us all believe that we can get through this. We may need to lean extra hard on each other for a while, but we can do this.

You know who you are. We know you didn’t mean to. But this is where we are, now. Shake it off. Start again. We can do this.


If you liked this article, consider dropping something into the A Future Worth Thinking About Tip Jar