We do a lot of work and have a lot of conversations around here with people working on the social implications of technology, but some folx sometimes still don’t quite get what I mean when I say that our values get embedded in our technological systems, and that the values of most internet companies, right now, are capitalist brand engagement and marketing. To that end, I want to take a minute to talk to you about something that happened this week. Just a heads-up: this conversation is going to mention sexual assault and the sexually predatory behaviour of men toward young girls.
As you already know, we went to the second Juvet A.I. Retreat, back in September. If you want to hear several of us talk about what we got up to at the retreat, then you’re in luck, because here are several conversations conducted by Ben Byford of the Machine Ethics Podcast.
I am deeply grateful to Ben Byford for asking me to sit down and talk about this with him. I talk a great deal, and am surprisingly able to (cogently?) get on almost all of my bullshit—technology and magic and the occult, nonhuman personhood, the sham of gender and race and other social constructions of expected lived categories, the invisible architecture of bias, neurodiversity, and philosophy of mind—in a rather short window of time.
So that’s definitely something…
Kirsten and I spent the week between the 17th and the 21st of September with 18 other utterly amazing people having Chatham House Rule-governed conversations about the Future of Artificial Intelligence.
We were in Norway, in the Juvet Landscape Hotel, which is where they filmed a lot of the movie Ex Machina, and it is even more gorgeous in person. None of the rooms shown in the film share a single building space. It’s astounding as a place of both striking architectural sensibility and natural integration: they built every structure in the winter, letting the dormancy cycles of the plants and animals dictate when and where they could build, rather than cutting anything down.
And on our first full day here, Two Ravens flew directly over my and Kirsten’s heads.
Yes.

[Image of a rainbow rising over a bend in a river across a patchy overcast sky, with the river going between two outcropping boulders, trees in the foreground and on either bank and stretching off into the distance, and absolutely enormous mountains in the background]
I am a fortunate person. I am a person who has friends and resources and a bloody-minded stubbornness that means that when I determine to do something, it will more likely than not get fucking done, for good or ill.
I am a person who has been given opportunities to be in places many people will never get to see, and have conversations with people who are often considered legends in their fields, and start projects that could very well alter the shape of the world on a massive scale.
Yeah, that’s a bit of a grandiose statement, but you’re here reading this, and so you know where I’ve been and what I’ve done.
I am a person who tries to pay forward what I have been given and to create as many spaces for people to have the opportunities that I have been able to have.
I am not a monetarily wealthy person, measured against my society, but my wealth and fortune are things that strike me still and make me take stock of it all and what it can mean and do, all over again, at least once a week, if not once a day, as I sit in tension with who I am, how the world perceives me, and what amazing and ridiculous things I have had, been given, and created the space to do, because and in violent spite of it all.
So when I and others come together and say we’re going to have to talk about how intersectional oppression and the lived experiences of marginalized peoples affect, effect, and are affected and effected BY the wider technoscientific/sociotechnical/sociopolitical/socioeconomic world and what that means for how we design, build, train, rear, and regard machine minds, then we are going to have to talk about how intersectional oppression and the lived experiences of marginalized peoples affect, effect, and are affected and effected by the wider technoscientific/sociotechnical/sociopolitical/socioeconomic world and what that means for how we design, build, train, rear, and regard machine minds.
So let’s talk about what that means.
Previously, I told you about The Human Futures and Intelligent Machines Summit at Virginia Tech, and now that it’s over, I wanted to go ahead and put the full rundown of the events all in one place.
The goals for this summit were to start looking at the ways in which issues of algorithms, intelligent machine systems, human biotech, religion, surveillance, and more will intersect and affect us in the social, academic, and political spheres. The big challenge in all of this was seen as getting better at dealing with these issues in the university and public policy sectors in America, rather than worse, as we seemingly have so far.
Here’s the schedule. Full notes, below the cut.
Friday, June 8, 2018
- Josh Brown on “the distinction between passive and active AI.”
- Daylan Dufelmeier on “the potential ramifications of using advanced computing in the criminal justice arena…”
- Mario Khreiche on the effects of automation, Amazon’s Mechanical Turk, and the Microlabor market.
- Aaron Nicholson on how technological systems are used to support human social outcomes, specifically through the lens of policing in the city of Atlanta.
- Ralph Hall on “the challenges society will face if current employment and income trends persist into the future.”
- Jacob Thebault-Spieker on “how pro-urban and pro-wealth biases manifest in online systems, and how this likely influences the ‘education’ of AI systems.”
- Hani Awni on the sociopolitical implications of excluding ‘relational’ knowledge from AI systems.
Saturday, June 9, 2018
- Chelsea Frazier on rethinking our understandings of race, biocentrism, and intelligence in relation to planetary sustainability and in the face of increasingly rapid technological advancement.
- Ras Michael Brown on using the religious technologies of West Africa and the West African Diaspora to reframe how we think about “hybrid humanity.”
- Damien Williams on how best to use interdisciplinary frameworks in the creation of machine intelligence and human biotechnological interventions.
- Sara Mattingly-Jordan on the implications of the current global landscape in AI ethics regulation.
- Kent Myers on several ways in which the intelligence community is engaging with human aspects of AI, from surveillance to sentiment analysis.
- Emma Stamm on the datafication of the self and what about us might be uncomputable.
- Joshua Earle on “Morphological Freedom.”
I talked with Hewlett Packard Enterprise’s Curt Hopkins, for their article “4 obstacles to ethical AI (and how to address them).” We spoke about the kinds of specific tools and techniques by which people who populate or manage artificial intelligence design teams can incorporate expertise from the humanities and social sciences. We also talked about compelling reasons why they should do this, other than the fact that they’re just, y’know, very good ideas.
From the Article:
To “bracket out” bias, Williams says, “I have to recognize how I create systems and code my understanding of the world.” That means making an effort early on to pay attention to the data entered. The more diverse the group, the less likely an AI system is to reinforce shared bias. Those issues go beyond gender and race; they also encompass what you studied, the economic group you come from, your religious background, all of your experiences.
…
That becomes another reason to diversify the technical staff, says Williams. This is not merely an ethical act. The business strategy may produce more profit because the end result may be a more effective AI. “The best system is the one that best reflects the wide range of lived experiences and knowledge in the world,” he says.
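To make that “bias in, bias out” point a bit more concrete, here’s a small, purely hypothetical sketch (this is mine, not from the article, and the data and numbers are entirely invented) of how a system trained on historically skewed hiring decisions will happily reproduce that skew:

```python
# Toy illustration only: invented data, invented numbers. The point is that a
# model fit to biased historical decisions reproduces that bias in its scores.
import numpy as np

rng = np.random.default_rng(0)

# Simulated "historical" hiring data: skill is the genuinely relevant feature,
# group is a demographic marker (0 or 1). Past decisions favoured group 0,
# even at identical skill.
n = 1000
skill = rng.uniform(0, 1, n)
group = rng.integers(0, 2, n)
hired = (skill + 0.3 * (group == 0) + rng.normal(0, 0.1, n)) > 0.6

# Fit a simple linear scoring model to those historical labels.
X = np.column_stack([skill, group, np.ones(n)])
weights, *_ = np.linalg.lstsq(X, hired.astype(float), rcond=None)

# Two candidates with identical skill, differing only in group membership:
candidate_a = np.array([0.7, 0, 1.0])  # group 0
candidate_b = np.array([0.7, 1, 1.0])  # group 1
print(candidate_a @ weights, candidate_b @ weights)
# The group-0 candidate scores higher: the model has "learned" the old bias,
# and it will keep applying that bias to every resume it ranks from here on.
```

And note that simply deleting the group column wouldn’t save you, since plenty of other features can stand in as proxies for it; which is part of why the diversity of the people building and auditing these systems matters so much.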

[Image of two blank, white, eyeless faces, partially overlapping each other.]
I get how folx might think that framing would be a bit of a buzzkill for a tech industry audience, but I do want to highlight and stress something: Many of the ethical problems we’re concerned with mitigating or ameliorating are direct products of the capitalist system in which we are making these choices and building these technologies.
All of that being said, I’m not the only person there with something interesting to say, and you should go check out the rest of my and other people’s comments.
Until Next Time.
This weekend, Virginia Tech’s Center for the Humanities is hosting The Human Futures and Intelligent Machines Summit, and there is a link for the video cast of the events. You’ll need to download and install Zoom, but it should be pretty straightforward, other than that.
You’ll find the full schedule, below the cut.
My piece “Cultivating Technomoral Interrelations,” a review of Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting, has been up over at the Social Epistemology Review and Reply Collective for a few months, now, so I figured I should post something about it, here.
As you’ll read, I was extremely taken with Vallor’s book, and think it is a part of some very important work being done. From the piece:
Additionally, her crucial point seems to be that through intentional cultivation of the self and our society, or that through our personally grappling with these tasks, we can move the world, a stance which leaves out, for instance, notions of potential socioeconomic or political resistance to these moves. There are those with a vested interest in not having a more mindful and intentional technomoral ethos, because that would undercut how they make their money. However, it may be that this is Vallor’s intent.
The audience and goal for this book seems to be ethicists who will be persuaded to become philosophers of technology, who will then take up this book’s understandings and go speak to policy makers and entrepreneurs, who will then make changes in how they deal with the public. If this is the case, then there will already be a shared conceptual background between Vallor and many of the other scholars whom she intends to make help her to do the hard work of changing how people think about their values. But those philosophers will need a great deal more power, oversight authority, and influence to effectively advocate for and implement what Vallor suggests, here, and we’ll need sociopolitical mechanisms for making those valuative changes, as well.

[Image of the front cover of Shannon Vallor’s TECHNOLOGY AND THE VIRTUES. Circuit pathways in the shapes of trees.]
As we continue the work of building these systems, and in the wake of all these recent events, more and more work like this will be necessary.
Shannon Vallor’s Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting is out in paperback, June 1st, 2018. Read the rest of “Cultivating Technomoral Interrelations: A Review of Shannon Vallor’s Technology and the Virtues” at the Social Epistemology Review and Reply Collective.
Earlier this month I was honoured to have the opportunity to sit and talk to Douglas Rushkoff on his TEAM HUMAN podcast. If you know me at all, you know this isn’t by any means the only team for which I play, or even the only way I think about the construction of our “teams,” and that comes up in our conversation. We talk a great deal about algorithms, bias, machine consciousness, culture, values, language, and magick, and the ways in which the nature of our categories deeply affect how we treat each other, human and nonhuman alike. It was an absolutely fantastic time.
From the page:
In this episode, Williams and Rushkoff look at the embedded biases of technology and the values programmed into our mediated lives. How has a conception of technology as “objective” blurred our vision to the biases normalized within these systems? What ethical interrogation might we apply to such technology? And finally, how might alternative modes of thinking, such as magick, the occult, and the spiritual help us to bracket off these systems for pause and critical reflection? This conversation serves as a call to vigilance against runaway systems and the prejudices they amplify.
As I put it in the conversation: “Our best interests are at best incidental to [capitalist systems] because they will keep us alive long enough for us to buy more things from them.” Following from that is the fact that we build algorithmic systems out of those capitalistic principles, and when you iterate out from there—considering all attendant inequalities of these systems on the merely human scale—we’re in deep trouble, fast.
Check out the rest of this conversation to get a fuller understanding of how it all ties in with language and the occult. It’s a pretty great ride, and I hope you enjoy it.
Until Next Time.
A few weeks ago I had a conversation with
Now that algorithms are everywhere, helping us to both run and make sense of the world, a strange question has emerged among artificial intelligence researchers: When is it ok to predict the future based on the past? When is it ok to be biased?
“I want a machine-learning algorithm to learn what tumors looked like in the past, and I want it to become biased toward selecting those kind of tumors in the future,” explains philosopher Shannon Vallor at Santa Clara University. “But I don’t want a machine-learning algorithm to learn what successful engineers and doctors looked like in the past and then become biased toward selecting those kinds of people when sorting and ranking resumes.”
We talk about this, sentencing algorithms, the notion of how to raise and teach our digital offspring, and more. You can listen to it all here:
If and when it gets a transcript, I will update this post with a link to that.
Until Next Time.
Above is the (heavily edited) audio of my final talk for the SRI Technology and Consciousness Workshop Series. The names and voices of other participants have been removed in accordance with the Chatham House Rule.
Below you’ll find the slide deck for my presentation, and below the cut you’ll find the Outline and my notes. For now, this will have to stand in for a transcript, but if you’ve been following the Technoccult Newsletter or the Patreon, then some of this will be strikingly familiar.