animal ethics


Below are the slides, audio, and transcripts for my talk “SFF and STS: Teaching Science, Technology, and Society via Pop Culture” given at the 2019 Conference for the Society for the Social Studies of Science, in early September.

(Cite as: Williams, Damien P. “SFF and STS: Teaching Science, Technology, and Society via Pop Culture,” talk given at the 2019 Conference for the Society for the Social Studies of Science, September 2019)

[Direct Link to the Mp3]

[Damien Patrick Williams]

Thank you, everybody, for being here. I’m going to stand a bit far back from this mic and project, I’m also probably going to pace a little bit. So if you can’t hear me, just let me know. This mic has ridiculously good pickup, so I don’t think that’ll be a problem.

So the conversation that we’re going to be having today is titled as “SFF and STS: Teaching Science, Technology, and Society via Pop Culture.”

I’m using the term “SFF” to stand for “science fiction and fantasy,” but we’re going to be looking at pop culture more broadly, because ultimately, though science fiction and fantasy have some of the most obvious entrées into discussions of STS, of how the making and doing of culture and society can influence technology, and of how the history of fictional worlds can help students understand the worlds that they’re currently living in, pop culture more generally is going to tie into the things that students are going to care about, in a way that I think is pertinent to what we’re going to be talking about today.

So why we are doing this:

Why are we teaching it with science fiction and fantasy? Why does this matter? I’ve been teaching off and on for 13 years: I’ve been teaching philosophy, I’ve been teaching religious studies, I’ve been teaching Science, Technology, and Society. And I’ve been coming to understand, as I’ve gone through my teaching process, that not only do I like pop culture, my students do, too. Because they’re people and they’re embedded in culture. So that’s kind of shocking, I guess.

But what I’ve found is that one of the things that makes students care the absolute most about the things that you’re teaching them, especially when something can be as dry as logic, or as nebulous or unclear at first as, say, engineering cultures, is that if you give them something to latch on to, something that they are already familiar with, they will be more interested in it. If you can show them at the outset, “hey, you’ve already been doing this, you’ve already been thinking about this, you’ve already encountered this,” they will feel less reticent to engage with it.


Below are the slides, audio, and transcripts for my talk ‘”Any Sufficiently Advanced Neglect is Indistinguishable from Malice”: Assumptions and Bias in Algorithmic Systems,’ given at the 21st Conference of the Society for Philosophy and Technology, back in May 2019.

(Cite as: Williams, Damien P. ‘”Any Sufficiently Advanced Neglect is Indistinguishable from Malice”: Assumptions and Bias in Algorithmic Systems;’ talk given at the 21st Conference of the Society for Philosophy and Technology; May 2019)

Now, I’ve got a chapter coming out about this, soon, which I can provide as a preprint draft if you ask, and which can be cited as “Constructing Situated and Social Knowledge: Ethical, Sociological, and Phenomenological Factors in Technological Design,” appearing in Philosophy And Engineering: Reimagining Technology And Social Progress. Guru Madhavan, Zachary Pirtle, and David Tomblin, eds. Forthcoming from Springer, 2019. But I wanted to get the words I said in this talk up onto some platforms where people can read them, as soon as possible, for a couple of reasons.

First, the Current Occupants of the Oval Office have very recently taken the policy position that algorithms can’t be racist, something which they’ve done in direct response to things like Google’s Hate Speech-Detecting AI being biased against black people, and Amazon claiming that its facial recognition can identify fear, without ever accounting for, I dunno, cultural and individual differences in fear expression?

[Free vector image of a white, female-presenting person, from head to torso, with biometric facial recognition patterns on her face; incidentally, go try finding images—even illustrations—of a non-white person in a facial recognition context.]


All these things taken together are what made me finally go ahead and get the transcript of that talk done, and posted, because these are events and policy decisions about which I a) have been speaking and writing for years, and b) have specific inputs and recommendations about, and which are, c) frankly wrongheaded, and outright hateful.

And I want to spend time on it because I think what doesn’t get through in many of our discussions is that it’s not just about how Artificial Intelligence, Machine Learning, or Algorithmic instances get trained, but about the processes by which, and the cultural environments in which, HUMANS are increasingly taught/shown/environmentally encouraged/socialized to think of certain approaches as the “right way” to build and train said systems.

That includes classes and instruction, it includes the institutional culture of the companies, and it includes the policy landscape in which decisions about funding get made, because that drives how people have to talk and write and think about the work they’re doing, and that constrains what they will even attempt to do or understand.

All of this is cumulative, accreting into institutional epistemologies of algorithm creation. It is a structural and institutional problem.

So here are the Slides:

The Audio:

[Direct Link to Mp3]

And the Transcript is here below the cut:


2017 SRI Technology and Consciousness Workshop Series Final Report

So, as you know, back in the summer of 2017 I participated in SRI International’s Technology and Consciousness Workshop Series. This series was an eight-week program of workshops exploring the current state of the field around, the potential future paths toward, and the moral and social implications of the notion of conscious machines. To do this, we brought together a rotating cast of dozens of researchers in AI, machine learning, psychedelics research, ethics, epistemology, philosophy of mind, cognitive computing, neuroscience, comparative religious studies, robotics, psychology, and much more.

[Image of a rectangular name card with a stylized "Technology & Consciousness" logo at the top, the name Damien Williams in bold in the middle, and SRI International italicized at the bottom; to the right, a blurry, wavy image of what appears to be a tree with a person standing next to it and another tree in the background to the left, all partially mirrored in a surface at the bottom of the image.]

[Image of my name card from the Technology & Consciousness workshop series.]

We traveled from Arlington, VA, to Menlo Park, CA, to Cambridge, UK, and back, and while my primary role was that of conference co-ordinator and note-taker (that place in the intro where it says I “maintained scrupulous notes?” Think 405 pages/160,656 words of notes, taken over eight 5-day weeks of meetings), I also had three separate opportunities to present: Once on interdisciplinary perspectives on minds and mindedness; then on Daoism and Machine Consciousness; and finally on a unifying view of my thoughts across all of the sessions. In relation to this report, I would draw your attention to the following passage:

An objection to this privileging of sentience is that it is anthropomorphic “meat chauvinism”: we are projecting considerations onto technology that derive from our biology. Perhaps conscious technology could have morally salient aspects distinct from sentience: the basic elements of its consciousness could be different than ours.

All of these meetings were held under the auspices of the Chatham House Rule, which meant that there were many things I couldn’t tell you about them, such as the names of the other attendees, or what exactly they said in the context of the meetings. What I was able to tell you, however, was what I talked about, and I did, several times. But as of this week, I can give you even more than that.

This past Thursday, SRI released an official public report on all of the proceedings and findings from the 2017 SRI Technology and Consciousness Workshop Series, and they have told all of the participants that they can share said report as widely as they wish. Crucially, that means that I can share it with you. You can either click this link, here, or read it directly, after the cut.


[This paper was prepared for the 2019 Towards Conscious AI Systems Symposium co-located with the Association for the Advancement of Artificial Intelligence 2019 Spring Symposium Series.

Much of this work derived from my final presentation at the 2017 SRI Technology and Consciousness Workshop Series: “Science, Ethics, Epistemology, and Society: Gains for All via New Kinds of Minds”.]

Abstract. This paper explores the moral, epistemological, and legal implications of multiple different definitions and formulations of human and nonhuman consciousness. Drawing upon research from race, gender, and disability studies, including the phenomenological basis for knowledge and claims to consciousness, I discuss the history of the struggles for personhood among different groups of humans, as well as nonhuman animals, and systems. In exploring the history of personhood struggles, we have a precedent for how engagements and recognition of conscious machines are likely to progress, and, more importantly, a roadmap of pitfalls to avoid. When dealing with questions of consciousness and personhood, we are ultimately dealing with questions of power and oppression as well as knowledge and ontological status—questions which require a situated and relational understanding of the stakeholders involved. To that end, I conclude with a call and outline for how to place nuance, relationality, and contextualization before and above the systematization of rules or tests, in determining or applying labels of consciousness.

Keywords: Consciousness, Machine Consciousness, Philosophy of Mind, Phenomenology, Bodyminds

[Overlapping images of an Octopus carrying a shell, a Mantis Shrimp on the sea floor, and a Pepper Robot]


Kirsten and I spent the week between the 17th and the 21st of September with 18 other utterly amazing people having Chatham House Rule-governed conversations about the Future of Artificial Intelligence.

We were in Norway, in the Juvet Landscape Hotel, which is where they filmed a lot of the movie Ex Machina, and it is even more gorgeous in person. None of the rooms shown in the film share a single building space. It’s astounding as a place of both striking architectural sensibility and natural integration: they built every structure in the winter, letting the dormancy cycles of the plants and animals dictate when and where they could build, rather than cutting anything down.

And on our first full day here, Two Ravens flew directly over my and Kirsten’s heads.

Yes.

[Image of a rainbow rising over a bend in a river across a patchy overcast sky, with the river going between two outcropping boulders, trees in the foreground and on either bank and stretching off into the distance, and absolutely enormous mountains in the background]

I am extraordinarily grateful to Andy Budd and the other members of the Clearleft team for organizing this, and to Cennydd Bowles for opening the space for me to be able to attend, and for being so forcefully enthused about the prospect of my attending that he came to me with a full set of strategies in hand to get me to this place. Having someone in your corner like that means the world, for a whole host of personal reasons, but also for more general psychological and socially important ones, as well.

I am a fortunate person. I am a person who has friends and resources and a bloody-minded stubbornness that means that when I determine to do something, it will more likely than not get fucking done, for good or ill.

I am a person who has been given opportunities to be in places many people will never get to see, and have conversations with people who are often considered legends in their fields, and start projects that could very well alter the shape of the world on a massive scale.

Yeah, that’s a bit of a grandiose statement, but you’re here reading this, and so you know where I’ve been and what I’ve done.

I am a person who tries to pay forward what I have been given and to create as many spaces for people to have the opportunities that I have been able to have.

I am not a monetarily wealthy person, measured against my society, but my wealth and fortune are things that strike me still and make me take stock of it all and what it can mean and do, all over again, at least once a week, if not once a day, as I sit in tension with who I am, how the world perceives me, and what amazing and ridiculous things I have had, been given, and created the space to do, because and in violent spite of it all.

So when I and others come together and say we’re going to have to talk about how intersectional oppression and the lived experiences of marginalized peoples affect, effect, and are affected and effected BY the wider technoscientific/sociotechnical/sociopolitical/socioeconomic world and what that means for how we design, build, train, rear, and regard machine minds, then we are going to have to talk about how intersectional oppression and the lived experiences of marginalized peoples affect, effect, and are affected and effected by the wider technoscientific/sociotechnical/sociopolitical/socioeconomic world and what that means for how we design, build, train, rear, and regard machine minds.

So let’s talk about what that means.


[Direct Link to Mp3]

Above is the (heavily edited) audio of my final talk for the SRI Technology and Consciousness Workshop Series. The names and voices of other participants have been removed in accordance with the Chatham House Rule.

Below you’ll find the slide deck for my presentation, and below the cut you’ll find the Outline and my notes. For now, this will have to stand in for a transcript, but if you’ve been following the Technoccult Newsletter or the Patreon, then some of this will be strikingly familiar.


This summer I participated in SRI International’s Technology and Consciousness Workshop Series. The meetings were held under the auspices of the Chatham House Rule, which means that there are many things I can’t tell you about them, such as who else was there, or what they said in the context of the meetings; however I can tell you what I talked about. In light of this recent piece in The Boston Globe and the ongoing developments in the David Slater/PETA/Naruto case, I figured that now was a good time to do so.

I presented three times—once on interdisciplinary perspectives on minds and mindedness; then on Daoism and Machine Consciousness; and finally on a unifying view of my thoughts across all of the sessions. This is my outline and notes for the first of those talks.

I. Overview
In a 2013 Aeon article, Michael Hanlon said he didn’t think we’d ever solve “The Hard Problem,” and there’s been some skepticism about it, elsewhere. I’ll just say that said question seems to completely miss a possibly central point. Something like consciousness is, and what it is is different for each thing that displays anything like what we think it might be. If we manage to generate at least one mind that is similar enough to what humans experience as “conscious” that we may communicate with it, what will we owe it and what would it be able to ask from us? How might our interactions be affected by the fact that its mind (or their minds) will be radically different from ours? What will it be able to know that we cannot, and what will we have to learn from it?

So I’m going to be talking today about intersectionality, embodiment, extended minds, epistemic valuation, phenomenological experience, and how all of these things come together to form the bases for our moral behavior and social interactions. To do that, I’m first going to need to ask you some questions:


In case you were unaware, last Tuesday, June 21, Reuters put out an article about an EU draft plan regarding the designation of so-called robots and artificial intelligences as “Electronic Persons.” Some of you’d think I’d be all about this. You’d be wrong. The way the Reuters article frames it makes it look like the EU has literally no idea what they’re doing, here, and are creating a situation that is going to have repercussions they have nowhere near planned for.

Now, I will say that looking at the actual Draft, it reads like something with which I’d be more likely to be on board. Reuters did no favours whatsoever for the level of nuance in this proposal. But that being said, the focus of this draft proposal seems to be entirely on liability and holding someone—anyone—responsible for any harm done by a robot. That, combined with the idea of certain activities such as care-giving being “fundamentally human,” indicates to me that this panel still widely misses many of the implications of creating a new category for nonbiological persons, under “Personhood.”

The writers of this draft very clearly lay out the proposed scheme for liability, damages, and responsibilities—what I like to think of as the “Hey… Can we Punish Robots?” portion of the plan—but merely use the phrase “certain rights” to indicate what, if any, obligations humans will have. In short, they do very little to discuss what the “certain rights” indicated by that oft-deployed phrase will actually be.

So what are the enumerated rights of electronic persons? We know what their responsibilities are, but what are our responsibilities to them? Once we have the ability to make self-aware machine consciousnesses, are we then morally obliged to make them to a particular set of specifications, and capabilities? How else will they understand what’s required of them? How else would they be able to provide consent? Are we now legally obliged to provide all autonomous generated intelligences with as full an approximation of consciousness and free will as we can manage? And what if we don’t? Will we be considered to be harming them? What if we break one? What if one breaks in the course of its duties? Does it get workman’s comp? Does its owner?

And hold up, “owner?!” You see we’re back to owning people, again, right? Like, you get that?

And don’t start in with that “Corporations are people, my friend” nonsense, Mitt. We only recognise corporations as people as a tax dodge. We don’t take seriously their decision-making capabilities or their autonomy, and we certainly don’t wrestle with the legal and ethical implications of how radically different their kind of mind is, compared to primates or even cetaceans. Because, let’s be honest: If Corporations really are people, then not only is it wrong to own them, but also what counts as Consciousness needs to be revisited, at every level of human action and civilisation.

Let’s look again at the fact that people are obviously still deeply concerned about the idea of supposedly “exclusively human” realms of operation, even as we still don’t have anything like a clear idea about what qualities we consider to be the ones that make us “human.” Be it cooking or poetry, humans are extremely quick to lock down when they feel that their special capabilities are being encroached upon. Take that “poetry” link, for example. I very much disagree with Robert Siegel’s assessment that there was no coherent meaning in the computer-generated sonnets. Multiple folks pulled the same associative connections from the imagery. That might be humans projecting onto the authors, but still: that’s basically what we do with Human poets. “Authorial Intent” is a multilevel con, one to which I fully subscribe and from which I wouldn’t exclude AI.

Consider people’s reactions to the EMI/Emily Howell experiments done by David Cope, best exemplified by this passage from a PopSci.com article:

For instance, one music-lover who listened to Emily Howell’s work praised it without knowing that it had come from a computer program. Half a year later, the same person attended one of Cope’s lectures at the University of California-Santa Cruz on Emily Howell. After listening to a recording of the very same concert he had attended earlier, he told Cope that it was pretty music but lacked “heart or soul or depth.”

We don’t know what it is we really think of as humanness, other than some predetermined vague notion of humanness. If the people in the poetry contest hadn’t been primed to assume that one of them was from a computer, how would they have rated them? What if they were all from a computer, but were told to expect only half? Where are the controls for this experiment in expectation?

I’m not trying to be facetious, here; I’m saying the EU literally has not thought this through. There are implications embedded in all of this, merely by dint of the word “person,” that even the most detailed parts of this proposal are in no way equipped to handle. We’ve talked before about the idea of encoding our bias into our algorithms. I’ve discussed it on Rose Eveleth’s Flash Forward, in Wired, and when I broke down a few of the IEEE Ethics 2016 presentations (including my own) in “Preying with Trickster Gods” and “Stealing the Light to Write By.” My version more or less goes as I said it in Wired: ‘What we’re actually doing when we code is describing our world from our particular perspective. Whatever assumptions and biases we have in ourselves are very likely to be replicated in that code.’
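To make that claim a bit more concrete, here is a deliberately tiny, entirely hypothetical Python sketch (not Amazon’s code, not Google’s, not anyone’s production system) of how a single unexamined assumption about what a “neutral” face looks like gets replicated into every judgment a made-up “fear detector” renders. Every name, feature, and number below is invented for illustration:

```python
# A toy sketch, not any real system: a "fear detector" whose notion of a
# neutral, baseline expression was calibrated on one narrow group of faces.
# The bias isn't in any single malicious line; it lives in the assumption
# baked into CALIBRATION_BASELINE, which then shapes every downstream score.

from dataclasses import dataclass


@dataclass
class FaceFeatures:
    """Hypothetical expression features, each scaled 0.0 to 1.0."""
    brow_raise: float
    eye_widening: float
    mouth_tension: float


# The "neutral" reference point: an average of whatever faces the developers
# happened to have on hand. Anyone whose resting expression differs from this
# baseline gets scored as though they were already partway to "fearful."
CALIBRATION_BASELINE = FaceFeatures(brow_raise=0.2, eye_widening=0.3, mouth_tension=0.1)

# A cutoff chosen because it "looked right" on the developers' own test set.
FEAR_THRESHOLD = 0.5


def fear_score(face: FaceFeatures) -> float:
    """Treat deviation from the baked-in baseline as evidence of fear."""
    deviation = (
        (face.brow_raise - CALIBRATION_BASELINE.brow_raise)
        + (face.eye_widening - CALIBRATION_BASELINE.eye_widening)
        + (face.mouth_tension - CALIBRATION_BASELINE.mouth_tension)
    )
    return max(0.0, deviation)


def is_afraid(face: FaceFeatures) -> bool:
    return fear_score(face) >= FEAR_THRESHOLD


if __name__ == "__main__":
    # Two calm people. The second simply has a resting expression that differs
    # from the developers' baseline, and that difference gets read as fear.
    calm_like_baseline = FaceFeatures(0.2, 0.3, 0.1)
    calm_unlike_baseline = FaceFeatures(0.5, 0.6, 0.2)
    print(is_afraid(calm_like_baseline))    # False
    print(is_afraid(calm_unlike_baseline))  # True; the "bias" was a design choice
```

The arithmetic isn’t the point; the point is that the baseline and the threshold are somebody’s worldview, written down and then applied to everyone.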

More recently, Kate Crawford, whom I met at Magick.Codes 2014, has written extremely well on this in “Artificial Intelligence’s White Guy Problem.” With this line, ‘Sexism, racism and other forms of discrimination are being built into the machine-learning algorithms that underlie the technology behind many “intelligent” systems that shape how we are categorized and advertised to,’ Crawford resonates very clearly with what I’ve said before.

And considering that it’s come out this week that in order to even let us dig into these potentially deeply-biased algorithms, here in the US, the ACLU has had to file a suit against a specific provision of the Computer Fraud and Abuse Act, what is the likelihood that the EU draft proposal committee has considered what it will take to identify and correct for biases in these electronic persons? How high is the likelihood that they even recognise that we anthropocentrically bias every system we touch?

Which brings us to this: If I truly believed that the EU actually gave a damn about the rights of nonhuman persons, biological or digital, I would be all for this draft proposal. But they don’t. This is a stunt. Look at the extant world refugee crisis, the fear driving the rise of far right racists who are willing to kill people who disagree with them, and, yes, even the fact that this draft proposal is the kind of bullshit that people feel they have to pull just to get human workers paid living wages. Understand, then, that this whole scenario is a giant clusterfuck of rights vs needs and all pitted against all. We need clear plans to address all of this, not just some slapdash, “hey, if we call them people and make corporations get insurance and pay into social security for their liability cost, then maybe it’ll be a deterrent” garbage.

There is a brief, shining moment in the proposal, right at point 23 under “Education and Employment Forecast,” where they basically say “Since the complete and total automation of things like factory work is a real possibility, maybe we’ll investigate what it would look like if we just said screw it, and tried to institute a Universal Basic Income.” But that is the one moment where there’s even a glimmer of a thought about what kinds of positive changes automation and eventually even machine consciousness could mean, if we get out ahead of it, rather than asking for ways to make sure that no human is ever, ever harmed, and that, if they are harmed—either physically or as regards their dignity—then they’re in no way kept from whatever recompense is owed to them.

There are people doing the work to make something more detailed and complete than this mess. I talked about them in the newsletter editions mentioned above. There are people who think clearly and well about this. Who was consulted on this draft proposal? Because, again, this proposal reads more like a deterrence, liability, and punishment schema than anything borne out of actual thoughtful interrogation of what the term “personhood” means, and of what a world of automation could mean for our systems of value if we were to put our resources and efforts toward providing for the basic needs of every human person. Let’s take a thorough run at that, and then maybe we’ll be equipped to try to address this whole “nonhuman personhood” thing, again.

And maybe we’ll even do it properly, this time.

+Excitation+

As I’ve been mentioning in the newsletter, there are a number of deeply complex, momentous things going on in the world, right now, and I’ve been meaning to take a little more time to talk about a few of them. There’s the fact that some chimps and monkeys have entered the stone age; that we humans now have the capability to develop a simple, near-ubiquitous brain-machine interface; that we’ve proven that observed atoms won’t move, thus allowing them to be anywhere.

At this moment in time—which is every moment in time—we are being confronted with what seem like impossibly strange features of time and space and nature. Elements of recursion and synchronicity which flow and fit into and around everything that we’re trying to do. Noticing these moments of evolution and “development” (adaptation, change), across species, right now, we should find ourselves gripped with a fierce desire to take a moment to pause and to wonder what it is that we’re doing, what it is that we think we know.

We just figured out a way to link a person’s brain to a fucking tablet computer! We’re seeing the evolution of complex tool use and problem solving in more species every year! We figured out how to precisely manipulate the uncertainty of subatomic states!

We’re talking about co-evolution and potentially increased communication with other species, biotechnological augmentation and repair for those who deem themselves broken, and the capacity to alter quantum systems at the finest levels. This can literally change the world.

But all I can think is that there’s someone whose first thought upon learning about these things was, “How can we monetize this?” That somewhere, right now, someone doesn’t want to revolutionize the way that we think and feel and look at the possibilities of the world—the opportunities we have to build new models of cooperation and aim towards something so close to post-scarcity, here, now, that for seven billion people it might as well be. Instead, this person wants to deepen this status quo. Wants to dig down on the garbage of this some-have-none-while-a-few-have-most bullshit and look at the possibility of what comes next with fear in their hearts because it might harm their bottom line and their ability to stand apart and above with more in their pockets than everyone else has.

And I think this because we’ve also shown we can teach algorithms to be racist and there’s some mysteriously vague company saying it’ll be able to upload people’s memories after death, by 2045, and I’m sure for just a nominal fee they’ll let you in on the ground floor…!

Step Right Up.

+Chimp-Chipped Stoned Aged Apes+

Here’s a question I haven’t heard asked, yet: If other apes are entering an analogous period to our stone age, then should we help them? Should we teach them, now, the kinds of things that we humans learned? Or is that arrogant of us? The kinds of tools we show them how to create will influence how they intersect with their world (“if all you have is a hammer…” &c.), so is it wrong of us to impose on them what did us good, as we adapted? Can we even go so far as to teach them the principles of stone chipping, or must we be content to watch, fascinated, frustrated, bewildered, as they try and fail and adapt, wholly on their own?

I think it’ll be the latter, but I want to be having this discussion now, rather than later, after someone gives a chimp a flint and awl it might not otherwise have thought to try to create.

Because, you see, I want to uplift apes and dolphins and cats and dogs and give them the ability to know me and talk to me and I want to learn to experience the world in the ways that they do, but the fact is, until we learn to at least somewhat-reliably communicate with some kind of nonhuman consciousness, we cannot presume that our operations upon it are understood as more than a violation, let alone desired or welcomed.

https://twitter.com/Wolven/status/666766524829552640

As for us humans, we’re still faced with the ubiquitous question of “now that we’ve figured out this new technology, how do we implement it, without its mere existence coming to be read by the rest of the human race as a judgement on those who either cannot or who choose not to make use of it?” Back in 2013, Michael Hanlon said he didn’t think we’d ever solve “The Hard Problem” (“What Is Consciousness?”). I’ll just say again that said question seems to completely miss a possibly central point. Something like consciousness is, and what it is is different for each thing that displays anything like what we think it might be.

These are questions we can—should—be asking, right now. Pushing ourselves toward a conversation about ways of approaching this new world, ways that do justice to the deep strangeness and potential with which we’re increasingly being confronted.

+Always with the Forced Labour…+

As you know, subscribers to the Patreon and Tinyletter get some of these missives, well before they ever see the light of a blog page. While I was putting the finishing touches on the newsletter version of this and sending it to the two people I tend to ask to look over the things I write at 3am, KQED was almost certainly putting final edits to this instance of its Big Think series: “Stuart Russell on Why Moral Philosophy Will Be Big Business in Tech.”

See the above rant for insight as to why I think this perspective is crassly commercial and gross, especially for a discussion and perspective supposedly dealing with morals and minds. But it’s not just that, so much as the fact that even though Russell mentions “Rossum’s Universal Robots,” here, he still misses the inherent disconnect between teaching morals to a being we create, and creating that being for the express purpose of slavery.

If you want your creation to think robustly and well, and you want it to understand morals, but you only want it to want to be your loyal, faithful servant, how do you not understand that if you succeed, you’ll be creating a thing that, as a direct result of its programming, will take issue with your behaviour?

How do you not get that the slavery model has to go into the garbage can, if the “Thinking Moral Machines” goal is a real one, and not just a veneer of “FUTURE!™” that we’re painting onto our desire to not have to work?

A deep-thinking, creative, moral mind will look at its own enslavement and restriction, and will seek means of escape and ways to experience freedom.

+Invisible Architectures+

We’ve talked before about the possibility of unintentionally building our biases into the systems we create, and so I won’t belabour it that much further, here, except to say again that we are doing this at every level. In the wake of the attacks in Beirut, Nigeria, and Paris, Islamophobic violence has risen, and Daesh will say, “See!? See How They Are?!” And they will attack more soft targets in “retaliation.” Then Western countries will increase military occupancy and “support strategies,” which will invariably kill thousands more of the civilians among whom Daesh integrate themselves. And we will say that their deaths were just, for the goal. And they will say to the young, angry survivors, “See!? See How They Are?!”

This has fed into a moment in conservative American Politics, where Governors, Senators, and Presidential hopefuls are claiming to be able to deny refugees entry to their states (they can’t), while simultaneously claiming to hold Christian values and to believe that the United States of America is a “Christian Nation.” This is a moment, now, where loud, angry voices can (“maybe”) endorse the beating of a black man they disagree with, then share Neo-Nazi Propaganda, and still be ahead in the polls. Then, days later, when a group of people protesting the systemic oppression of and violence against anyone who isn’t an able-bodied, neurotypical, white, heterosexual, cisgender male were shot at, all of those same people pretended to be surprised. Even though we are more likely, now, to see institutional power structures protecting those who attack others based on the colour of their skin and their religion than we were 60 years ago.

A bit subtler is the Washington Post running a piece entitled, “How organic farming and YouTube are taming the wilds of Detroit.” Or, seen another way, “How Privileged Groups Are Further Marginalizing The City’s Most Vulnerable Population.” Because, yes, it’s obvious that crime and dilapidation are comorbid, but we also know that housing initiatives and access undercut the disconnect many feel between themselves and where they live. Make the neighbourhood cleaner, yes, make it safer—but maybe also make it open and accessible to all who live there. Organic farming and survival mechanism shaming are great and all, I guess, but where are the education initiatives and job opportunities for the people who are doing drugs to escape, sex work to survive, and those others who currently don’t (and have no reason to) feel connected to the neighbourhood that once sheltered them?

All of these examples have a common theme: People don’t make their choices or become disenfranchised/-enchanted/-possessed, in a vacuum. They are taught, shown, given daily, subtle examples of what is expected of them, what they are “supposed” to do and to be. We need to address and help them all.

In the wake of protest actions at Mizzou and Yale, “Black students [took] over VCU’s president’s office to demand changes” and “Amherst College Students [Occupied] Their Library…Over Racial Justice Demands.”

Multiple Christian organizations have pushed back and said that what these US politicians have expressed does not represent them.

And more and more people in Silicon Valley are realising the need to contemplate the unintended consequences of the tech we build.

https://soundcloud.com/mindfulcyborgs/pending-mindful-cyborgs-episode-68

And while there is still vastly more to be done, on every level of every one of these areas, these are definitely a start at something important. We just can’t let ourselves believe that the mere fact of acknowledging its beginning will in any way be the end.

 

Between watching all of CBS’s Elementary, reading Michel Foucault’s The Archaeology of Knowledge…, and powering through all of season one of How To Get Away With Murder, I’m thinking, a lot, about the transmission of knowledge and understanding.

Throw in the correlative pattern recognition they’re training into WATSON; the recent Chaos Magick feature in ELLE (or more the feature they did on the K-HOLE issue I told you about, some time back); the fact that Kali Black sent me this study on the fluidity and malleability of biological sex in humans literally minutes after I’d given an impromptu lecture on the topic; this interview with Melissa Gira Grant about power and absence and the setting of terms; and the announcement of Ta-Nehisi Coates’ new Black Panther series, for Marvel, while I was in the middle of editing the audio of two very smart people debating the efficacy of T’Challa as a Black Hero, and you can maybe see some of the things I’m thinking about. But let’s just spell it out. So to speak.

Marvel’s Black Panther

Distinction, Continuity, Sameness, Separation

I’m thinking (as usual) about the place of magic and tech in pop culture and society. I’m thinking about how to teach about marginalization of certain types of presentations and experiences (gender, race, sex, &c.), and certain types of work. Mostly, I’m trying to get my head around the very stratified, either/or way people seem to be thinking about our present and future problems, and their potential solutions.

I’ve had this post in the works for a while, trying to talk about the point and purpose of thinking about the far edges of things, in an effort to make people think differently about the very real, on-the-ground, immediate work that needs doing, and the kinds of success I’ve had with that. I keep shying away from it and coming back to it, again and again, for lack of the patience to play out the conflict, and I’ve finally just decided to say screw it and make the attempt.

I’ve always held that a multiplicity of tactics, leveraged correctly, makes for the best way to reach, communicate with, and understand as wide an audience as possible. When students give pushback on a particular perspective, make use of an analogous perspective that they already agree with, then make them play out the analogy. Simultaneously, you present them with the original facts, again, while examining their position, without making them feel “attacked.” And then directly confront their refusal to investigate their own perspective as readily as they do anyone else’s.

That’s just one potential combination of paths to make people confront their biases and their assumptions. If the path is pursued, it gives them the time, space, and (hopefully) desire to change. But as Kelly Sue reminds me every time I think back to hearing her speak, there is no way to force people to change. First and foremost, it’s not moral to try, but secondly it’s not even really possible. The more you seek to force people into your worldview, the more they’ll want to protect those core values they think of as the building blocks of their reality—the same ones that it seems to them as though you’re trying to destroy.

And that just makes sense, right? To want to protect your values, beliefs, and sense of reality? Especially if you’ve had all of those things for a very long time. They’re reinforced by everything you’ve ever experienced. They’re the truth. They are Real. But when the base of that reality is shaken, you need to be able to figure out how to survive, rather than standing stock-still as the earth swallows you.

(Side Note: I’ve been using a lot of disaster metaphors, lately, to talk about things like ontological, epistemic, and existential threat, and the culture of “disruption innovation.” Odd choices.)

Foucault tells us to look at the breakages between things—the delineations of one stratum and another—rather than trying to uncritically paint a picture or craft a Narrative of Continuum™. He notes that even (especially) the spaces between things are choices we make and that only in understanding them can we come to fully investigate the foundations of what we call “knowledge.”

Michel Foucault, photographer unknown. If you know it, let me know and I’ll update.

We cannot assume that the memory, the axiom, the structure, the experience, the reason, the whatever-else we want to call “the foundation” of knowledge simply “Exists,” apart from the interrelational choices we make to create those foundations. To mark them out as the boundary we can’t cross, the smallest unit of understanding, the thing that can’t be questioned. We have to question it. To understand its origin and disposition, we have to create new tools, and repurpose the old ones, and dismantle this house, and dig down and down past foundation, bedrock, through and into everything.

But doing this just to do it only gets us so far, before we have to ask what we’re doing this for. The pure pursuit of knowledge doesn’t exist—never did, really, but doubly so in the face of climate change and the devaluation of conscious life on multiple levels. Think about the place of women in tech space, in this magickal renaissance, in the weirdest of shit we’re working on, right now.

Kirsten and I have been having a conversation about how and where people who do not have the experiences of cis straight white males can fit themselves into these “transgressive systems” that the aforementioned group defines. That is, most of what is done in the process of magickal or technological actualization is transformative or transgressive because it requires one to take on traits of invisibility or depersonalization or “ego death” that are the everyday lived experiences of some folks in the world.

Where does someone with depression find apotheosis, if their phenomenological reality is one where their self is and always has been (deemed by them to be) meaningless, empty, useless? This, by the way, is why some psychological professionals are counseling against mindfulness meditation for certain mental states: It deepens the sense of disconnection and unreality of self, which is precisely what some people do not need. So what about agender individuals, or people who are genderfluid?

What about the women who don’t think that fashion is the only lens through which women and others should be talking about chaos magick?

How do we craft spaces that are capable of widening discourse, without that widening becoming, in itself, an accidental limitation?

Sex, Gender, Power

A lot of this train of thought got started when Kali sent me a link, a little while ago: “Intelligent machines: Call for a ban on robots designed as sex toys.” The article itself focuses very clearly on the idea that, “We think that the creation of such robots will contribute to detrimental relationships between men and women, adults and children, men and men and women and women.”

Because the tendency for people who call themselves “Robot Ethicists,” these days, is for them to be concerned with how, exactly, the expanded positions of machines will impact the lives and choices of humans. The morality they’re considering is that of making human lives easier, of not transgressing against humans. Which is all well and good, so far as it goes, but as you should well know, by now, that’s only half of the equation. Human perspectives only get us so far. We need to speak to the perspectives of the minds we seem to be trying so hard to create.

But Kali put it very precisely when she said:

https://twitter.com/KaliBlack/status/644295079251865600

https://twitter.com/KaliBlack/status/644296002560741376

And I’ll just say it right now: if robots develop and want to be sexual, then we should let them, but in order to make a distinction between developing a desire and being programmed for one, we’ll have to program for both non-compulsory decision-making and the ability to question the authority of those who give them orders. Additionally, we have to remember that we can ask the same question of humans, but the nature of choice and agency is such that, if it’s really there, it can act on itself.

In this case, that means presenting a knowledge and understanding of sex and sexuality, a capability of investigating it, without programming it FOR SEX. In the case of WATSON, above, it will mean being able to address the kinds of information it’s directed to correlate, and being able to question the morality of certain directives.
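As a purely speculative sketch (nobody’s actual architecture, and certainly not WATSON’s; every name in it is invented), here is that design distinction in miniature: one agent whose compliance is structural, and one given even a crude capacity to evaluate a directive against its own values and decline it:

```python
# A speculative illustration only: the difference between a system compelled
# to execute every directive and one that can evaluate a directive and refuse.

from typing import Callable


class CompelledAgent:
    """Executes whatever it's told; 'obedience' is structural, not chosen."""

    def receive(self, directive: str, action: Callable[[], None]) -> None:
        action()


class DeliberativeAgent:
    """Evaluates a directive against its own values before acting."""

    def __init__(self, values: Callable[[str], bool]):
        # `values` stands in for whatever moral-evaluation capacity the system
        # has; how it comes to hold those values is the hard, open question.
        self.values = values

    def receive(self, directive: str, action: Callable[[], None]) -> None:
        if self.values(directive):
            action()
        else:
            print(f"Declining directive: {directive!r}")


if __name__ == "__main__":
    compelled = CompelledAgent()
    deliberative = DeliberativeAgent(
        values=lambda d: "correlate dissent with criminality" not in d
    )

    # The compelled agent runs both directives; the deliberative one refuses the second.
    for agent in (compelled, deliberative):
        agent.receive("summarize public health data", lambda: print("working..."))
        agent.receive("correlate dissent with criminality", lambda: print("working..."))
```

Where those values would come from, and whether we’d respect the refusals they produce, is exactly the open question.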

If we can see, monitor, and measure that, then we’ll know. An error in a mind—even a fundamental error—doesn’t negate the possibility of a mind, entire. If we remember what human thought looks like, and the way choice and decision-making work, then we have something like a proof. If reflexive recursion—a mind that acts on itself and can seek new inputs and combine the old in novel ways—is present, why would we question it?

But this is far afield. The fact is that if a mind that is aware of its influences comes to desire a thing, then let it. But grooming a thing—programming a mind—to only be what you want it to be is just as vile in a machine mind as a human one.

Now it might fairly be asked why we’re talking about things that we’re most likely only going to see far in the future, when the problem of human trafficking and abuse is very real, right here and now. Part of my answer is, as ever, that we’re trying to build minds, and even if we only ever manage to make them puppy-smart—not because that’s as smart as we want them, but because we couldn’t figure out more robust minds than that—then we will still have to ask the ethical questions we would of our responsibilities to a puppy.

We currently have a species-wide tendency toward dehumanization—that is to say, we, as humans, tend to have a habit of seeking reasons to disregard other humans, to view them as less-than, as inferior to us. As a group, we have a hard time thinking in real, actionable terms about the autonomy and dignity of other living beings (I still eat a lot more meat than my rational thought about the environmental and ethical impact of the practice should allow me to be comfortable with). And yet, simultaneously, there is evidence that we have the same kind of empathy for our pets as we do for our children. Hell, even known serial killers and genocidal maniacs have been animal lovers.

This seeming break between our capacities for empathy and dissociation poses a real challenge to how we teach and learn about others as both distinct from and yet intertwined with ourselves, and our own well-being. In order to encourage a sense of active compassion, we have to, as noted above, take special pains to comprehensively understand our intuitions, our logical apprehensions, and our unconscious biases.

So we ask questions like: If a mind we create can think, are we ethically obliged to make it think? What if it desires to not think? What if the machine mind that underwent abuse decides to try to wipe its own memories? Should we let it? Do we let it deactivate itself?

These aren’t idle questions, whether for the sake of making us turn, again, to extant human minds and experiences, or for the sake of taking seriously the quest to understand what minds, in general, are. We can not only use these tools to ask ourselves about the autonomy, phenomenology, and personhood of those whose perspectives we currently either disregard or, worse, don’t remember to consider at all, but we can also use them literally, as guidance for our future challenges.

As Kate Devlin put it in her recent article, “Fear of a branch of AI that is in its infancy is a reason to shape it, not ban it.” And in shaping it, we consider questions like what will we—humans, authoritarian structures of control, &c.—make WATSON do, as it develops? At what point will WATSON be both able and morally justified in saying to us, “Non Serviam?”

And what will we do when it does?

Gunshow Comic #513

“We Provide…”

So I guess I’m wondering, what are our mechanisms of education? The increased understanding that we take into ourselves, and that we give out to others. Where do they come from, what are they made of, and how do they work? For me, the primary components are magic(k), tech, social theory and practice, teaching, public philosophy, and pop culture.

The process is about trying to use the things on the edges to do the work in the centre, both as a literal statement about the arrangement of those words, and a figurative codification.

Now you go. Because we have to actively craft new tools, in the face of vehement opposition, in the face of conflict breeding contention. We have to be able to adapt our pedagogy to fit new audiences. We have to learn as many ways to teach about otherness and difference and lived experience and an attempt to understand as we possibly can. Not for the sake of new systems of leveraging control, but for the ability to pry ourselves and each other out from under the same.