A Future Worth Thinking About


(Direct Link to the MP3)

On Friday, I needed to do a thread of a thing, so if you hate threads and you were waiting until I collected it, here it is.

But this originally needed to be done in situ. It needed to be a serialized and systematized intervention and imposition into the machinery of that particular flow of time. That day…

There is a principle within many schools of magical thought known as “shielding.” In practice and theory, it’s closely related to the notion of “grounding” and the notion of “centering.” (If you need to think of magical praxis as merely a cipher for psychological manipulation toward particular behaviours or outcomes, these all still scan.)

When you ground, when you centre, when you shield, you are anchoring yourself in an awareness of a) your present moment, your temporality; b) your Self and all emotions and thoughts; and c) your environment. You are using your awareness to carve out a space for you to safely inhabit while in the fullness of that awareness. It’s a way to regroup, breathe, gather yourself, and know what and where you are, and to know what’s there with you.

You can shield your self, your home, your car, your group of friends, but moving parts do increase the complexity of what you’re trying to hold in mind, which may lead to anxiety or frustration, which kind of opposes the exercise’s point. (Another sympathetic notion, here, is that of “warding,” though that can be said to be more for objects, not people.)

So what is the point?

The point is that many of us are being drained, today, this week, this month, this year, this life, and we need to remember to take the time to regroup and recharge. We need to shield ourselves, our spaces, and those we love, to ward them against those things that would sap us of strength and the will to fight. We know we are strong. We know that we are fierce, and capable. But we must not lash out wildly, meaninglessly. We mustn’t be lured into exhausting ourselves. We must collect ourselves, protect ourselves, replenish ourselves, and by “ourselves” I also obviously mean “each other.”

Mutual support and endurance will be crucial.

…So imagine that you’ve built a web out of all the things you love, and all of the things you love are connected to each other and the strands between them vibrate when touched. And you touch them all, yes?

And so you touch them all and they all touch you and the energy you generate is cyclically replenished, like ocean currents and gravity. And you use what you build—that thrumming hum of energy—to blanket and to protect and to energize that which creates it.

And we’ll do this every day. We’ll do this like breathing. We’ll do this like the way our muscles and tendons and bones slide across and pull against and support each other. We’ll do this like heartbeats. Cyclical. Mutually supporting. The burden on all of us, together, so that it’s never on any one of us alone.

So please take some time today, tomorrow, very soon to build your shields. Because, soon, we’re going to need you to deploy them more and more.

Thank you, and good luck.


The audio and text above are modified versions of this Twitter thread. This isn’t the first time we’ve talked about the overlap of politics, psychology, philosophy, and magic, and if you think it’ll be the last, then you haven’t been paying attention.

Sometimes, there isn’t much it feels like we can do, but we can support and shield each other. We have to remember that, in the days, weeks, months, and years to come. We should probably be doing our best to remember it forever.

Anyway, I hope this helps.

Until Next Time

As a part of my alt-ac career, I do a lot of thinking and writing in a wide range of areas. I write about human augmentation, artificial intelligence, philosophy of mind, and the occult, and I work with great people to put together conferences on pop culture and academia, all while trying to make a clear case for how important it is to look at the intersection of all of those things. As a result of my wide array of interests, there are always numerous conferences happening in my fields, every year, to which I should be submitting and which I should at least attempt to attend. Conferences are places to make friends, develop contacts, and hear and respond to new perspectives within our fields. And I would really love to attend even a fraction of these conferences, but the fact is that I am not able to afford them. The cruel irony of most University System structures is that they offer the least travel funding assistance to those faculty members who need it most.

To my mind, the equation should be pretty simple: Full-Time Pay > Part-Time Pay. The fact that someone with a full-time position at an institution makes more money means that, while any travel assistance they receive is nice, they are less likely to need it as badly as someone who is barely subsisting as an adjunct. For adjuncts who are working at least two revenue streams, a little extra assistance, in the form of the University System arranging its rules to provide adjuncts with the necessary funding for conference and research travel, could make all the difference between that conference being attended or that research being completed, and… not. And if it does get done, the work done by those adjuncts would be far more likely to be attributed to their funding institutions.

Think: If my paper is good enough to get accepted to a long-running international and peer-reviewed conference, don’t you want me thanking one of your University System’s Institutions for getting me there? Wouldn’t that do more to raise the profile of the University System than my calling myself an “Independent Scholar,” or “Unaffiliated?” Because, for an adjunct with minimal support from the University System, scrabbling to find a way to make registration, plane tickets, and accommodations like childcare, there is really no incentive whatsoever to thank a University System that didn’t do much at all to help with those costs. Why should they even mention them in their submission, at all?

But if an adjunct gets that assistance… Well then they’d feel welcome, wouldn’t they? Then they’d feel appreciated, wouldn’t they? And from that point on, they’re probably much more willing and likely to want everyone they talk to at that conference or research institution to know the name of the institution and system that took care of them. Aren’t They?

My job is great, by the way, and the faculty and administrative staff in my department are wonderful. They have contributed to my professional development in every way they possibly can, and I have seen them do the same for many other adjuncts. Opportunities like temporary full-time positions provide extra income every so often, as well as a view into the workings (and benefits) of full-time faculty life. But at the end of the day we are adjuncts, and there is, in every institution where I’ve studied or worked, a stark dichotomy between what rules and allowances are made for full-time employees (many) and those which are made for adjuncts (few). This dichotomy isn’t down to any one department, or any one college, or even in fact any one University. It’s down to the University System; it is down to how that system is administered; and it is down to the culture of University Systems Administration, Worldwide.

So if you’re reading this, and you’re a part of that culture, let me just say to you, right now: There are a lot of good people toiling away in poverty, people doing work that is of a high enough quality to get them into conferences or get them published or get them interviewed for comment in national publications. There are good people working for you who can’t (or who are simply disinclined to) raise the profile of your universities, because the funding system has never been arranged to even the playing field for them. They would be far more inclined to sing your praises, if you would just give them a little boost into the choir box.

Simply put, by not valuing and helping your adjuncts, you are actively hurting yourselves.

If you are an administrator or a tenured or tenure-track professor, do know that there is something that you can do: Use your position and power as leverage to fight for greater equality of University System support. Recognize that your adjunct faculty is no longer only focused on teaching, without the responsibilities and requirements of a research-oriented career. Many of them are trying to write, to speak, to teach, and to engage our wider cultural discourse, and they are trying to do it while working for you.


If you like what you read, here, and want to see more like it, then please consider becoming either a recurring Patreon subscriber or making a one-time donation to the Tip Jar; it would be greatly appreciated. A large part of how I support myself in the endeavor to think in public is via those mechanisms.
And thank you.

2016 is ending.

Celebrate the fact that you lived to see it.

2016 is ending.

Mourn the ones we lost along the way.

2016 is ending.

We’ve talked before about how the passage of time and the transition from one year to another are, in a very real sense, things that humans made up, but there’s always more to be said around here about narrative and myth and how the stories we tell ourselves make and shape us. We build and spell out what we desire to be in ideals and words and deeds and we carry our shifting constructions and foundational fictions in us, always, so that they may impact how we feel and how we think and what we do.

These 366 days, as we humans in the West mark them, mean nothing to the lifespan of the universe, to the turning of suns and black holes, to the diamond hearts of gas giants orbiting distant stars, to the weft and weave of geological and cosmological forces around us and in us. These days are how a portion of one species tries to grapple with the seeming inevitability of change and death. But so is literally everything we do.

2016 is not a real year, in any meaningful sense. It’s where we are, counting from where we started, based on a few decent guesses, and if we wanted to take the “reality” of that seriously, then we’d have to be okay with the notion that Popes have the power to literally erase days from the record of time. We struggle with perceiving rates of change, and so we make up and define and refine time. And when it suits you—when you want to seem aloof, or above it, or disaffected, or too cool for the room—you remember that. You say things like “don’t blame a year for people dying,” or “why do you think the New Year is gonna suddenly make your problems disappear?”

But you know why. It’s a concentration of will, a focal point of belief and intention. It’s a cultural crux. It is a moment for all of us to stand together and reflect on what we want and what we need and what we will build and do, in the New Year. And more often than not, it works. At least for a little while. And that is very good, because yes, Time and Separation are illusions, but so is a desert mirage, and that can sure as hell kill you if you misunderstand what you’re perceiving.

So today let’s each of us use Time. Use distance. Use loss and pain. Use the memory and the impact of them to do what we can to make this communal hallucination of temporal transition resonate with a little more light and joy.

Give a stranger a kind word. Tell someone you love that you love them, even if you think it might be weird. If you go out tonight, resolve to be the easiest, kindest person your server has to deal with, all night, because they will have many more of the opposite. Do not drive while intoxicated.

2016 is ending. For many of us, it has already ended.

2016 is ending. This bounded moment, this name around a series of events, this collective noun for all the things that have harmed us.

2016 is ending. So remember that we don’t want to feel anymore as we so often felt this year. Death is still inevitable and change is our only constant, but we do not have to lose so much, all at once, nor allow our fear of difference to make us cold and hard and small.

On this final day of 2016, as the arc of our home star around the curve of our planet heralds the first moments of our next made up year, be kind. Be good. Help each other. Look out for each other. Strive to be a better person than you ever thought you could be.

It’s gonna be difficult and frustrating and maddening, but—if we stick together—joyous. Enthralling. Beautiful.

2016 is ending. But 2017 won’t be any better unless we do what we can to make it be.

And we can make it be.

Happy New Year.

(This was originally posted over at Medium [well, parts were originally posted in the newsletter], but I wanted it somewhere I could more easily manage.)


Hey.

I just wanna say (and you know who you are): I get you were scared of losing your way of life — the status quo was changing all around you. Suddenly it wasn’t okay anymore to say or do things that the world previously told you were harmless. People who didn’t “feel” like you were suddenly loudly everywhere, and no one just automatically believed what you or those you believed in had to say, anymore. That must have been utterly terrifying.

But here’s the thing: People are really scared now. Not just of obsolescence, or of being ignored. They’re terrified for their lives. They’re not worried about “the world they knew.” They’re worried about whether they’ll be rounded up and put in camps or shot or beaten in the street. Because, you see, many of the people who voted for this, and things like it around the world, see many of us — women, minorities, immigrants, LGBTQIA folks, disabled folks, neurodivergent folks — as less than “real” people, and want to be able to shut us up using whatever means they deem appropriate, including death.

The vice president-elect thinks gay people can be “retrained,” and that we should attempt it via the same methods that make us side-eye dog owners. The man tapped to be a key advisor displays and has cultivated an environment of white supremacist hatred. The president-elect is said to be “mulling over” a registry for Muslim people in the country. A registry. Based on your religion.

My own cousin had food thrown at her in a diner, right before the election. And things haven’t exactly gotten better, since then.

Certain hateful elements want many of us dead or silent and “in our place,” now, just as much as ever. And all we want and ask for is equal respect, life, and justice.

I said it on election night and I’ll say it again: there’s no take-backsies, here. I’m speaking to those who actively voted for this, or didn’t actively plant yourselves against it (and you know who you are): You did this. You cultivated it. And I know you did what you thought you had to, but people you love are scared, because their lives are literally in danger, so it’s time to wake up now. It’s time to say “No.”

We’re all worried about jobs and money and “enough,” because that’s what this system was designed to make us worry about. Your Muslim neighbour, your gay neighbour, your trans neighbour, your immigrant neighbour, your NEIGHBOUR IS NOT YOUR ENEMY. The system that tells you to hate and fear them is. And if you bought into that system because you couldn’t help being afraid then I’m sorry, but it’s time to put it down and Wake Up. Find it in yourself to ask forgiveness of yourself and of those you’ve caused mortal terror. If you call yourself Christian, that should ring really familiar. But other faiths (and nonfaiths) know it too.

We do better together. So it’s time to gather up, together, work, together, and say “No,” together.

So snap yourself out of it, and help us. If you’re in the US, please call your representatives, federal and local. Tell them what you want, tell them why you’re scared. Tell them that these people don’t represent our values and the world we wish to see:
http://www.house.gov/representatives/find/
http://www.senate.gov/senators/contact/

Because this, right here, is the fundamental difference between fearing the loss of your way of life, and the fear of losing your literal life.

Be with the people you love. Be by their side and raise their voices if they can’t do it for themselves, for whatever reason. Listen to them, and create a space where they feel heard and loved, and where others will listen to them as well.

And when you come around, don’t let your pendulum swing so far that you fault those who can’t move forward, yet. Please remember that there is a large contingent of people who, for many various reasons, cannot be out there protesting. Shaming people who have anxiety, depression, crippling fear of their LIVES, or are trying to not get arrested so their kids can, y’know, EAT FOOD? Doesn’t help.

So show some fucking compassion. Don’t shame those who are tired and scared and just need time to collect themselves. Urge and offer assistance where you can, and try to understand their needs. Just do what you can to help us all believe that we can get through this. We may need to lean extra hard on each other for a while, but we can do this.

You know who you are. We know you didn’t mean to. But this is where we are, now. Shake it off. Start again. We can do this.


If you liked this article, consider dropping something into the A Future Worth Thinking About Tip Jar

So I’m quoted in this article in The Atlantic on the use of technology in leveraging sociological dynamics to combat online harassment: “Why Online Allies Matter in Fighting Harassment.”

An experiment by Kevin Munger used bots to test which groups white men responded to when being called out on their racist harassment online. The findings were largely unsurprising (powerful white men; they responded favourably to powerful white men), save for the fact that anonymity INCREASED the effectiveness of the treatment, and visible identity decreased it. That one was weird. But it’s still nice to see all of this codified.

Good to see the use of Bertrand & Mullainathan’s “Are Emily and Greg More Employable Than Lakisha and Jamal?”, since the idea of using “black-sounding names” to signal the purported ethnicity of the bot thus clearly models what he thought those he expected to be racist would think, rather than indicating his own belief. (However, it could be asked whether there’s a meaningful difference, here, as he still had to choose the names he thought would “sound black.”)

The Reactance study Munger discusses—the one that shows that people double down on factually incorrect prejudices—is the same one I used in “On The Invisible Architecture of Bias.”

A few things Ed Yong and I talked about that didn’t get into the article, due to space:

-Would like to see this experimental model applied to other forms of prejudice (racist, sexist, homophobic, transphobic, ableist, etc language), and was thus very glad to see the footnote about misogynist harassment.

-I take some exception to the use of Dovidio/Gaertner and Crandall et al definitions of racism, as those leave out the sociological aspects of power dynamics (“Racism/Sexism/Homophobia/Transphobia/Ableism= Prejudice + Power”) which seem crucial to understanding the findings of Munger’s experiment. He skirts close to this when he discusses the greater impact of “high status” individuals, but misses the opportunity to lay out the fact that:
–Institutionalised power dynamics as related to the interplay of in-group and out-group behaviour are pretty clearly going to affect why white people are more likely to listen to those they perceive as powerful white men, because
–The interplay of Power and status, interpersonally, is directly related to power and status institutionally.

-Deindividuation (loss of sense of self in favour of group identity) as a key factor and potential solution is very interesting.

Something we didn’t get to talk about but which I think is very important is the question of how we keep this from being used as a handbook. That is, what do we do in the face of people who understand these mechanisms and who wish to use them to sow division and increase acceptance of racist, sexist, homophobic, transphobic, ableist, etc. ideals? Do we, then, become engaged in some kind of rolling arms race of sociological pressure?

…Which, I guess, has pretty much always been true, and we call it “civilization.”

Anyway, hope you enjoy it.

There’s increasing reportage about IBM using Watson to correlate medical data. We’ve talked before about the potential hazards of this:

Do you know someone actually had the temerity to ask [something like] “What Does Google Having Access to Medical Records Mean For Patient Privacy?” [Here] Like…what the fuck do you think it means? Nothing good, you idiot!

Disclosures and knowledges can still make certain populations intensely vulnerable both to predation and to social pressures and judgements, and until that isn’t the case, anymore, we need to be very careful about the work we do to try to bring those patients’ records into a sphere where they’ll be accessed and scrutinized by people who don’t have to take an oath to hold that information in confidence.

We are more and more often at the intersection of our biological humanity and our technological augmentation, and the integration of our mediated outboard memories only further complicates the matter. As it stands, we don’t quite yet know how to deal with the question posed by Motherboard, some time ago (“Is Harm to a Prosthetic Limb Property Damage or Personal Injury?”), but as we build on implantable technologies, advanced prostheses, and offloaded memories and augmented capacities we’re going to have to start blurring the line between our bodies, our minds, and our concept of our selves. That is, we’ll have to start intentionally blurring it, because the vast majority of us already blur it, without consciously realising that we do. At least, those without prostheses don’t realise it.

Dr Ashley Shew, out of Virginia Tech, works at the intersection of philosophy, tech, and disability. I first encountered her work at the 2016 IEEE Ethics Conference in Vancouver, where she presented her paper “Up-Standing, Norms, Technology, and Disability,” a discussion of how ableism, expectations, and language use marginalise disabled bodies. Dr Shew is, herself, disabled, having had her left leg removed due to cancer, and she gave her talk not on the raised dais, but at floor level, directly in front of the projector. Her reason? “I don’t walk up stairs without hand rails, or stand on raised platforms without guards.”

Dr Shew notes that wheelchair users consider their chairs to be fairly integral extensions of and interventions on themselves, a part of them, which bears directly on the kinds of lawsuits engaged when, for instance, airlines damage their chairs, which happens a great deal. While we tend to think of advances in technology as allowing for the seamless integration of our technology and bodies, the fact is that well-designed mechanical prostheses, today, are capable of becoming integrated into the personal morphic sphere of a person, the longer that person uses them. And this extended sensing can be transferred from one device to another. Shew mentions a friend of hers:

She’s an amputee who no longer uses a prosthetic leg, but she uses forearm crutches and a wheelchair. (She has a hemipelvectomy, so prosthetics are a real pain for her to get a good fit and there aren’t a lot of options.) She talks about how people have these different perceptions of devices. When she uses her chair people treat her differently than when she uses her crutches, but the determination of which she uses has more to do with the activities she expects for the day, rather than her physical wellbeing.

But people tend to think she’s recovering from something when she moves from chair to sticks.

She has been an [amputee] for 18 years.

She has/is as recovered as she can get.

In her talk at IEEE, Shew discussed the fact that a large number of paraplegics and other wheelchair users do not want exoskeletons, and those fancy stair-climbing wheelchairs aren’t covered by health insurance. They’re classed as vehicles. She said that when she brought this up in the class she taught, one of the engineers left the room looking visibly distressed. He came back later and said that he’d gone home to talk to his brother with spina bifida, who was the whole reason he was working on exoskeletons. He asked his brother, “Do you even want this?” And the brother said, basically, “It’s cool that you’re into it but… No.” So, Shew asks, why are these technologies being developed? Transhumanists and the military. Framing this discussion as “helping our vets” makes it a noble cause, without drawing too much attention to the fact that they’ll be using them on the battlefield as well.

All of this comes back down and around to the idea of biases ingrained into social institutions. Our expectations of what a “normal functioning body” is get imposed by the collective society, as a whole, and placed as restrictions and demands on the bodies of those whom we deem to be “malfunctioning.” As Shew says, “There’s such a pressure to get the prosthesis as if that solves all the problems of maintenance and body and infrastructure. And the pressure is for very expensive tech at that.”

So we are going to have to accept—in a rare instance where Robert Nozick is proven right about how property and personhood relate—that the answer is “You are damaging both property and person, because this person’s property is their person.” But this is true for reasons Nozick probably would not think to consider, and those same reasons put us on weirdly tricky grounds. There’s a lot, in Nozick, of the notion of property as equivalent to life and liberty, in the pursuance of rights, but those ideas don’t play out, here, in the same way as they do in conservative and libertarian ideologies.  Where those views would say that the pursuit of property is intimately tied to our worth as persons, in the realm of prosthetics our property is literally simultaneously our bodies, and if we don’t make that distinction, then, as Kirsten notes, we can fall into “money is speech” territory, very quickly, and we do not want that.

Because our goal is to be looking at quality of life, here—talking about the thing that allows a person to feel however they define “comfortable,” in the world. That is, the thing(s) that lets a person intersect with the world in the ways that they desire. And so, in damaging the property, you damage the person. This is all the more true if that person is entirely made of what we are used to thinking of as property.

And all of this is before we think about the fact that implantable and bone-bonded tech will need maintenance. It will wear down and glitch out, and you will need to be able to access it, when it does. This means that the range of ability for those with implantables? Sometimes it’s less than that of folks with more “traditional” prostheses. But because they’re inside, or more easily made to look like the “original” limb, we observers are so much more likely to forget that there are crucial differences at play in the ownership and operation of these bodies.

There’s long been a fear that, the closer we get to being able to easily and cheaply modify humans, we’ll be more likely to think of humanity as “perfectible.” That the myth of progress—some idealized endpoint—will be so seductive as to become completely irresistible. We’ve seen this before, in the eugenics movement, and it’s reared its head in the transhumanist and H+ communities of the 20th and 21st centuries, as well. But what if, instead of demanding that there be some kind of universally applicable “baseline,” we focused, intently, on recognizing the fact that, just as different humans have different biochemical and metabolic needs, processes, capabilities, preferences, and desires, so too might other beings and entities which could be considered persons be drastically different from us, but no less persons?

Because human beings are different. Is there a general framework, a loosely-defined line around which we draw a conglomeration of traits, within which lives all that we mark out as “human”—a kind of species-wide butter zone? Of course. That’s what makes us a fucking species. But the kind of essentialist language and thinking towards which we tend, after that, is reductionist and dangerous. Our language choices matter, because connotative weight alters what people think and in what context, and, again, we have a habit of moving rapidly from talking about a generalized framework of humanness to talking about “The Right Kind Of Bodies,” and the “Right Kind Of Lifestyle.”

And so, again, again, again, we must address problems such as normalized expectations of “health” and “Ability.” Trying to give everyone access to what they might consider their “best” selves is a brilliant goal, sure, whatever, but by even forwarding the project, we run the risk of colouring an expectation of both what that “best” is and what we think it “Ought To” look like.

Some people need more protein, some people need less choline, some people need higher levels of phosphates, some people have echolocation, some can live to be 125, and every human population has different intestinal bacterial colonies from every other. When we combine all these variables, we will not necessarily find that each and every human being has the same molecular and atomic distribution in the same PPM/B ranges, nor will we necessarily find that our mixing and matching will ensure that everyone gets to be the best combination of everything. It would be fantastic if we could, but everything we’ve ever learned about our species says that “healthy human” is a constantly shifting target, and not a static one.

We are still at a place where the general public reacts with visceral aversion to technological advances and especially anything like an immediated technologically-augmented humanity, and this is at least in part because we still skirt the line of eugenics language, to this day. Because we talk about naturally occurring bio-physiological Facts as though they were in any way indicative of value, without our input. Because we’re still terrible at ethics, continually screwing up at 100mph, then looking back and going, “Oh. Should’ve factored that in. Oops.”

But let’s be clear, here: I am not a doctor. I’m not a physiologist or a molecular biologist. I could be wrong about how all of these things come together in the human body, and maybe there will be something more than a baseline, some set of all species-wide factors which, in the right configuration, say “Healthy Human.” But what I am is someone with a fairly detailed understanding of how language and perception affect people’s acceptance of possibilities, their reaction to new (or hauntingly-familiar-but-repackaged) ideas, and their long-term societal expectations and valuations of normalcy.

And so I’m not saying that we shouldn’t augment humanity, via either mediated or immediated means. I’m not saying that IBM’s Watson and Google’s DeepMind shouldn’t be tasked with searching patient records and correlating data. But I’m also not saying that either of these is an unequivocal good. I’m saying that it’s actually shocking how much correlative capability is indicated by the achievements of both IBM and Google. I’m saying that we need to change the way we talk about and think about what it is we’re doing. We need to ask ourselves questions about informed patient consent, and the notions of opting into the use of data; about the assumptions we’re making in regards to the nature of what makes us human, and the dangers of rampant, unconscious scientistic speciesism. Then, we can start to ask new questions about how to use these new tools we’ve developed.

With this new perspective, we can begin to imagine what would happen if we took Watson and DeepMind’s ability to put data into context—to turn around, in seconds, millions upon millions (billions? trillions?) of permutations and combinations. And then we can ask them to work on tailoring genome-specific health solutions and individualized dietary plans. What if we asked these systems to catalogue literally everything we currently knew about every kind of disease presentation, in every ethnic and regional population, and the differentials for various types of people with different histories, risk factors, current statuses? We already have nanite delivery systems, so what if we used Google and IBM’s increasingly ridiculous complexity to figure out how to have those nanobots deliver a payload of perfectly-crafted medical remedies?

But this is fraught territory. If we step wrong, here, we are not simply going to miss an opportunity to develop new cures and devise interesting gadgets. No; to go astray, on this path, is to begin to see categories of people that “shouldn’t” be “allowed” to reproduce, or “to suffer.” A misapprehension of what we’re about, and why, is far fewer steps away from forced sterilization and medical murder than any of us would like to countenance. And so we need to move very carefully, indeed, always being aware of our biases, and remembering to ask those affected by our decisions what they need and what it’s like to be them. And remembering, when they provide us with their input, to believe them.

I spoke with Klint Finley over at WIRED about Amazon, Facebook, Google, IBM, and Microsoft’s new joint ethics and oversight venture, which they’ve dubbed the “Partnership on Artificial Intelligence to Benefit People and Society.” They held a joint press briefing, today, in which Yann LeCun, Facebook’s director of AI, and Mustafa Suleyman, the head of applied AI at DeepMind, discussed what it was that this new group would be doing out in the world. From the article:

Creating a dialogue beyond the rather small world of AI researchers, LeCun says, will be crucial. We’ve already seen a chat bot spout racist phrases it learned on Twitter, an AI beauty contest decide that black people are less attractive than white people and a system that rates the risk of someone committing a crime that appears to be biased against black people. If a more diverse set of eyes are looking at AI before it reaches the public, the thinking goes, these kinds of thing can be avoided.

The rub is that, even if this group can agree on a set of ethical principles–something that will be hard to do in a large group with many stakeholders—it won’t really have a way to ensure those ideals are put into practice. Although one of the organization’s tenets is “Opposing development and use of AI technologies that would violate international conventions or human rights,” Mustafa Suleyman, the head of applied AI at DeepMind, says that enforcement is not the objective of the organization.

This isn’t the first time I’ve talked to Klint about the intricate interplay of machine intelligence, ethics, and algorithmic bias; we discussed it earlier just this year, for WIRED’s AI Issue. It’s interesting to see the amount of attention this topic’s drawn in just a few short months, and while I’m trepidatious about the potential implementations, as I note in the piece, I’m really fairly glad that more people are more and more willing to have this discussion, at all.

To see my comments and read the rest of the article, click through, here: “Tech Giants Team Up to Keep AI From Getting Out of Hand”

-Human Dignity-

The other day I got a CFP for “the future of human dignity,” and it set me down a path of thinking.

We’re worried about shit like mythical robots that can somehow simultaneously enslave us and steal the shitty low-paying jobs we none of us want but all of us have to have so we can pay off the debt we accrued to get the education we were told would be necessary to get those jobs, while other folks starve and die of exposure in a world that is just chock full of food and houses…

About shit like how we can better regulate the conflated monster of human trafficking and every kind of sex work, when human beings are doing the best they can to direct their own lives—to live and feed themselves and their kids on their own terms—without being enslaved and exploited…

About, fundamentally, how to make reactionary laws to “protect” the dignity of those of us whose situations the vast majority of us have not worked to fully appreciate or understand, while we all just struggle to not get: shot by those who claim to protect us, willfully misdiagnosed by those who claim to heal us, or generally oppressed by the system that’s supposed to enrich and uplift us…

…but no, we want to talk about the future of human dignity?

Louisiana’s drowning, Missouri’s on literal fire, Baltimore is almost certainly under some ancient mummy-based curse placed upon it by the angry ghost of Edgar Allan Poe, and that’s just in the One Country.

Motherfucker, human dignity ain’t got a Past or a Present, so how about let’s reckon with that before we wax poetically philosophical about its Future.

I mean, it’s great that folks at Google are finally starting to realise that making sure the composition of their teams represents a variety of lived experiences is a good thing. But now the questions are, 1) do they understand that it’s not about tokenism, but about being sure that we are truly incorporating those who were previously least likely to be incorporated, and 2) what are we going to do to not only specifically and actively work to change that, but also PUBLICIZE THAT WE NEED TO?

These are the kinds of things I mean when I say, “I’m not so much scared of/worried about AI as I am about the humans who create and teach them.”

There’s a recent opinion piece at the Washington Post titled “Why perceived inequality leads people to resist innovation.” I read something like that and I think… Right, but… that perception is a shared one, based on real impacts of tech in the lives of many people; impacts which are (get this) drastically unequal. We’re talking about implications across communities, nations, and the world, at an intersection with a tech industry that has a really quite disgusting history of “disruptively innovating” people right out of their homes and lives without having ever asked the affected parties about what they, y’know, NEED.

So yeah. There’s a fear of inequality in the application of technological innovation… Because there’s a history of inequality in the application of technological innovation!

This isn’t some “well aren’t all the disciplines equally at fault here,” pseudo-Kumbaya false equivalence bullshit. There are neoliberal underpinnings in the tech industry that are basically there to fuck people over. “What the market will bear” is code for, “How much can we screw people before there’s backlash? Okay so screw them exactly that much.” This model has no regard for the preexisting systemic inequalities between our communities, and even less for the idea that it (the model) will both replicate and iterate upon those inequalities. That’s what needs to be addressed, here.

Check out this piece over at Killscreen. We’ve talked about this before—about how we’re constantly being sold that we’re aiming for a post-work economy, where the internet of things and self-driving cars and the sharing economy will free us all from the mundaneness of “jobs,” all while we’re simultaneously being asked to ignore that our trajectory is gonna take us straight through and possibly land us square in a post-Worker economy, first.

Never mind that we’re still gonna expect those ex-workers to (somehow) continue to pay into capitalism, all the while.

If, for instance, either Uber’s plan for a driverless fleet or the subsequent backlash from their stable—I mean, “drivers”—is shocking to you, then you have managed to successfully ignore this trajectory.

Completely.

Disciplines like psychology and sociology and history and philosophy? They’re already grappling with the fears of the ones most likely to suffer said inequality, and they’re quite clear on the fact that, the ones who have so often been fucked over?

Yeah, their fears are valid.

You want to use technology to disrupt the status quo in a way that actually helps people? Here’s one example of how you do it: “Creator of chatbot that beat 160,000 parking fines now tackling homelessness.”

Until then, let’s talk about constructing a world in which we address the needs of those marginalised. Let’s talk about magick and safe spaces.

 

-Squaring the Circle-

Speaking of CFPs, several weeks back, I got one for a special issue of Philosophy and Technology on “Logic As Technology,” and it made me realise that Analytic Philosophy somehow hasn’t yet understood and internalised that its wholly invented language is a technology

…and then that realisation made me realise that Analytic Philosophy hasn’t understood that language as a whole is a Technology.

And this is something we’ve talked about before, right? Language as a technology, but not just any technology. It’s the foundational technology. It’s the technology on which all others are based. It’s the most efficient way we have to cram thoughts into the minds of others, share concept structures, and make the world appear and behave the way we want it to. So the more languages we know, the better, right?

We can string two or more knowns together in just the right way, and create a third, fourth, fifth known. We can create new things in the world, wholecloth, as a result of new words we make up or old words we deploy in new ways. We can make each other think and feel and believe and do things, with words, tone, stance, knowing looks. And this is because Language is, at a fundamental level, the oldest magic we have.


Scene from the INJECTION issue #3, by Warren Ellis, Declan Shalvey, and Jordie Bellaire. ©Warren Ellis & Declan Shalvey.

Lewis Carroll tells us that whatever we tell each other three times is true, and many have noted that lies travel far faster than the truth, and at the crux of these truisms—the pivot point, where the power and leverage are—is Politics.

This week, much hay is being made about the University of Chicago’s letter decrying Safe Spaces and Trigger Warnings. Ignoring for the moment that every definition of “safe space” and “trigger warning” put forward by their opponents tends to be a straw man of those terms, let’s just make an attempt to understand where they come from, and how we can situate them.

Trauma counseling and trauma studies are where the language of safe spaces and trigger warnings comes from, and for the latter, that definition is damn near axiomatic. Triggers are about trauma. But safe space language has far more granularity than that. Microaggressions are certainly damaging, but they aren’t on the same level as acute traumas. Where acute traumas are like gunshots or bomb blasts (and may indeed be those actual things), societal microaggressions are more like a slow, constant siege. But we still need the language of safe spaces to discuss them—said space is something like a bunker in which to regroup, reassess, and plan for what comes next.

Now it is important to remember that there is a very big difference between “safe” and “comfortable,” and when laying out the idea of safe spaces, every social scientist I know takes great care to outline that difference.

Education is about stretching ourselves, growing and changing, and that is discomfort almost by definition. I let my students know that they will be uncomfortable in my class, because I will be challenging every assumption they have. But discomfort does not mean I’m going to countenance racism or transphobia or any other kind of bigotry.

Because the world is not a safe space, but WE CAN MAKE IT SAFER for people who are microaggressed against, marginalised, assaulted, and killed for their lived identities, by not only letting them know how to work to change it, but SHOWING them through our example.

Like we’ve said, before: No, the world’s not safe, kind, or fair, and with that attitude it never will be.

So here’s the thing, and we’ll lay it out point-by-point:

A Safe Space is any realm that is marked out for the nonjudgmental expression of thoughts and feelings, in the interest of honestly assessing and working through them.

“Safe Space” can mean many things, from “safe FROM Racist/Sexist/Homophobic/Transphobic/Fatphobic/Ableist Microaggressions” to “safe FOR the thorough exploration of our biases and preconceptions.” The terms of the safe space are negotiated at the marking out of them.

The terms are mutually agreed upon by all parties. The only imposition would be to be open to the process of expressing and thinking through oppressive conceptual structures.

Everything else—such as whether to address those structures as they exist in ourselves (internalised oppressions), in others (aggressions, micro- or regular sized), or both and their intersection—is negotiable.

The marking out of a Safe Space performs the necessary function, at the necessary time, defined via the particular arrangement of stakeholders, mindset, and need.

And, as researcher John Flowers notes, anyone who’s ever been in a Dojo has been in a Safe Space.

From a Religious Studies perspective, defining a safe space is essentially the same process as that of marking out a RITUAL space. For students or practitioners of any form of Magic[k], think Drawing a Circle, or Calling the Corners.

Some may balk at the analogy to the occult, thinking that it cheapens something important about our discourse, but look: Here’s another way we know that magick is alive and well in our everyday lives:

If they could, a not-insignificant number of US Republicans would overturn the Affordable Care Act and rally behind a Republican-crafted replacement (RCR). However, because the ACA has done so very much good for so many, it’s likely that the only RCR that would have enough support to pass would be one that looked almost identical to the ACA. The only material difference would be that it didn’t have President Obama’s name on it—which is to say, it wouldn’t be associated with him, anymore, since his name isn’t actually on the ACA.

The only reason people think of the ACA as “Obamacare” is because US Republicans worked so hard to make that name stick, and now that it has been widely considered a triumph, they’ve been working just as hard to get his name away from it. And if they did manage to achieve that, it would only be true due to some arcane ritual bullshit. And yet…

If they managed it, it would be touted as a “Crushing defeat for President Obama’s signature legislation.” It would have lasting impacts on the world. People would be emboldened, others defeated, and new laws, social rules, and behaviours would be undertaken, all because someone’s name got removed from a thing in just the right way.

And that’s Magick.

The work we do in thinking about the future sometimes requires us to think about things from what stuffy assholes in the 19th century liked to call a “primitive” perspective. They believed in a kind of evolutionary anthropological categorization of human belief, one in which all societies move from “primitive” beliefs like magic through moderate belief in religion, all the way to sainted perfect rational science. In contemporary Religious Studies, this evolutionary model is widely understood to be bullshit.

We still believe in magic, we just call it different things. The concept structures of sympathy and contagion are still at play, here, the ritual formulae of word and tone and emotion and gesture all still work when you call them political strategy and marketing and branding. They’re all still ritual constructions designed to make you think and behave differently. They’re all still causing spooky action at a distance. They’re still magic.

The world still moves on communicated concept structure. It still turns on the dissemination of the will. If I can make you perceive what I want you to perceive, believe what I want you to believe, move how I want you to move, then you’ll remake the world, for me, if I get it right. And I know that you want to get it right. So you have to be willing to understand that this is magic.

It’s not rationalism.

It’s not scientism.

It’s not as simple as psychology or poll numbers or fear or hatred or aspirational belief causing people to vote against their interests. It’s not that simple at all. It’s as complicated as all of them, together, each part resonating with the others to create a vastly complex whole. It’s a living, breathing thing that makes us think not just “this is a thing we think” but “this is what we are.” And if you can do that—if you can accept the tools and the principles of magic, deploy the symbolic resonance of dreamlogic and ritual—then you might be able to pull this off.

But, in the West, part of us will always balk at the idea that the Rational won’t win out. That the clearer, more logical thought doesn’t always save us. But you have to remember: Logic is a technology. Logic is a tool. Logic is the application of one specific kind of thinking, over and over again, showing a kind of result that we convinced one another we preferred to other processes. It’s not inscribed on the atoms of the universe. It is one kind of language. And it may not be the one most appropriate for the task at hand.

Put it this way: When you’re in Zimbabwe, will you default to speaking Chinese? Of course not. So why would we default to mere Rationalism, when we’re clearly in a land that speaks a different dialect?

We need spells and amulets, charms and warded spaces; we need sorcerers of the people to heal and undo the hexes being woven around us all.

 

-Curious Alchemy-

Ultimately, the rigidity of our thinking and our inability to adapt have led us to be surprised by too much that we wanted to believe could never have come to pass. We want to call all of this “unprecedented,” when the truth of the matter is, we carved this precedent out every day for hundreds of years, and the ability to think in weird paths is what will define those who thrive.

If we are going to do the work of creating a world in which we understand what’s going on, and can do the work to attend to it, then we need to think about magic.

 


If you liked this article, consider dropping something into the A Future Worth Thinking About Tip Jar

So this past Saturday, a surrogate speaker for the Republican nominee for President of the United States spoke on CNN about how it was somehow a problem that the Democratic Nominee for Vice President of the United States spoke in Spanish, in his first speech addressing the nation as the Dem VP Nom.

She—the Republican Surrogate—then made a really racist reference to “Dora The Explorer.”

Now, we should note that Spanish is the first or immediate second language of 52.6 million people in the United States of America, and it is spoken by approximately 427 million people in the entire world, making it the second most widely spoken language, after Chinese (English is 3rd). So even if it weren’t just politically a good idea to address between 52.6 and 427 million people in a way that made them comfortable (and it is), there’s something weird about how…offended people get by being “made to” hear another language. There’s something of the “I don’t understand it, and I never want to have to!” in there that just baffles me. Something that we seem to read as a threat, when we observe others communicating by means in which we aren’t fluent. Rather than take it as a chance to open ourselves up, and learn something new, or to recognise that, for some of us, the ability to speak candidly in a native language is the only personal space to be had, within a particular society—rather than any of that, we get scared and feel excluded, and take offense.

When perhaps we should recognise that that exclusion and fear is something felt by precisely the same people we shout at to “Learn the Damn Language.”

But let’s set that aside, for a second, and talk about why it’s good to learn other languages. Studies have shown that the more languages we speak, the more conceptual structures we create in our minds, and this goes for everything from Spanish to Sign Language to Math to Coding to the way someone with whom we’re intimate expresses their emotionality. Any time we learn a new way to communicate perceptions and ideas and needs and desires, we create whole new ways of thinking and functioning, in ourselves. Those aforementioned conceptual structures then mean that we’ll be in a better position to understand and be understood by people who aren’t just exactly like us. Politically, the benefits of this should be obvious, in terms of diplomacy and opportunities to craft coalitions of peace, but even simpler than that is the fact that, through new languages, we provide ourselves and others a wider array of potential connections and intersections, in the world we all share.

And if that doesn’t strike us all as a VERY GOOD THING, then I don’t know what the hell else to say.

Let’s just make it real simple: Understanding Each Other Is Good.

To that end, we have to remember that understanding doesn’t just mean that we make everyone speak and behave exactly the way we want them to. Understanding means a mutual reaching-toward, when possible, and it means those of us who can expend the extra effort doing so, especially when another might not be able to, at all.

There’s really not much else to it.

[Originally Published at Eris Magazine]

So Gabriel Roberts asked me to write something about police brutality, and I told him I needed a few days to get my head in order. The problem being that, with this particular topic, the longer I wait on this, the longer I want to wait on this, until, eventually, the avoidance becomes easier than the approach by several orders of magnitude.

Part of this is that I’m trying to think of something new worth saying, because I’ve already talked about these conditions, over at A Future Worth Thinking About. We talked about this in “On The Invisible Architecture of Bias,” “Any Sufficiently Advanced Police State…,” “On the Moral, Legal, and Social Implications of the Rearing and Development of Nascent Machine Intelligences,” and most recently in “On the European Union’s “Electronic Personhood” Proposal.” In these articles, I briefly outlined the history of systemic bias within many human social structures, and the possibility and likelihood of that bias translating into our technological advancements, such as algorithmic learning systems, use of and even access to police body camera footage, and the development of so-called artificial intelligence.

Long story short, the endemic nature of implicit bias in society as a whole, plus the even more insular Us-Vs-Them mentality within the American prosecutorial legal system, plus the fact that American policing was literally born out of slavery and the work of groups like the KKK, equals a series of interlocking systems in which people who are not whitepassing, not male-perceived, not straight-coded, not “able-bodied” (what we can call white supremacist, ableist, heteronormative, patriarchal hegemony, but we’ll just use the acronym WSAHPH, because it satisfyingly recalls that bro-ish beer advertising campaign from the late ’90s and early 2000s) stand a far higher likelihood of dying at the hands of agents of that system.

Here’s a quote from Sara Ahmed in her book The Cultural Politics of Emotion, which neatly sums this up:

“[S]ome bodies are ‘in an instant’ judged as suspicious, or as dangerous, as objects to be feared, a judgment that can have lethal consequences. There can be nothing more dangerous to a body than the social agreement that that body is dangerous.”

At the end of this piece, I’ve provided some of the same list of links that sits at the end of “On The Invisible Architecture of Bias,” just to make it that little bit easier for us to find actual evidence of what we’re talking about, here, but, for now, let’s focus on these:

A Brief History of Slavery and the Origins of American Policing
2006 FBI Report on the infiltration of Law Enforcement Agencies by White Supremacist Groups
June 20, 2016 “Texas Officers Fired for Membership in KKK”

And then we’ll segue to the fact that we are, right now, living through the exemplary problem of the surveillance state. We’ve always been told that cameras everywhere will make us all safer, that they’ll let people know what’s going on and that they’ll help us all. People doubted this, even in Orwell’s day, noting that the more surveilled we are, the less freedom we have, but more recently people have started to hail this from the other side: Maybe videographic oversight won’t help the police help us, but maybe it will help keep us safe from the police.

But the sad fact of the matter is that there’s video of Alton Sterling being shot to death while restrained, and video of John Crawford III being shot to death by a police officer while holding a toy gun down at his side in a big box store where it was sold, and there’s video of Alva Braziel being shot to death while turning around with his hands up as he was commanded to do by officers, of Eric Garner being choked to death, of Delrawn Small being shot to death by an off-duty cop who cut him off in traffic. There’s video of so damn many deaths, and nothing has come of most of them. There is video evidence showing that these people were well within their rights, and in lawful compliance with officers’ wishes, and they were all shot to death anyway, in some cases by people who hadn’t even announced themselves as cops, let alone ones under some kind of perceived threat.

The surveillance state has not made us any safer; it has simply caused us to be confronted with the horror of our brutality. And I’d say it’s no more than we deserve, except that even with protests and retaliatory actions, and escalations to civilian drone strikes, and even Newt fucking Gingrich being able to articulate the horrors of police brutality, most of those officers are still on the force. Many officers unconnected to these killings have been fired, for indelicate pronouncements and even white supremacist ties, but how many more are still on the force? How many racist, hateful, ignorant people are literally waiting for their chance to shoot a black person because he “resisted” or “threatened”? Or just plain disrespected. And all of that is just what happened to those people. What’s distressing is that those much more likely to receive punishment, however unofficial, are the ones who filmed these interactions and provided us records of these horrors to begin with. Here, from Ben Norton at Salon.com, is a list of what happened to some of the people who have filmed police killings of non-police:

Police have been accused of cracking down on civilians who film these shootings.

Ramsey Orta, who filmed an NYPD cop putting unarmed black father Eric Garner in a chokehold and killing him, says he has been constantly harassed by police, and now faces four years in prison on drugs and weapons charges. Orta is the only one connected to the Garner killing who has gone to jail.

Chris LeDay, the Georgia man who first posted a video of the police shooting of Alton Sterling, also says he was detained by police the next day on false charges that he believes were a form of retaliation.

Early media reports on the shooting of Small uncritically repeated the police’s version of the incident, before video exposed it to be false.

Wareham noted that the surveillance footage shows “the cold-blooded nature of what happened, and that the cop’s attitude was, ‘This was nothing more than if I had stepped on an ant.'”

As we said, above, black bodies are seen as inherently dangerous and inhuman. This perception is trained into officers at an unconscious level, and is continually reinforced throughout our culture. Studies like the Implicit Association Test, this survey of U.Va. medical students, and this one of shooter bias all clearly show that people are more likely to a) associate words relating to evil and inhumanity with, b) believe that pain receptors work in a fundamentally different fashion within, and c) shoot more readily at bodies that do not fit within WSAHPH. To put that a little more plainly, people have a higher tendency to think of non-WSAHPH bodies as fundamentally inhuman.

And yes, as we discussed in the plurality of those AFWTA links, above, there absolutely is a danger of our passing these biases along not just to our younger human selves, but to our technology. In fact, as I’ve been saying often, now, the danger is higher there, because we still somehow have a tendency to think of our technology as value-neutral. We think of our code and (less these days) our design as some kind of fundamentally objective process, whereby the world is reduced to lines of logic and math, and that simply is not the case. Codes are languages, and languages describe the world as the speaker experiences it. When we code, we are translating our human experience, with all of its flaws, biases, perceptual glitches, errors, and embellishments, into a technological setting. It is no wonder, then, that the algorithmic systems we use to determine the likelihood of a convict’s recidivism, and thus their bail and sentencing recommendations, are seen to exhibit the same kind of racially biased decision-making as the humans they learned from. How could this possibly be a surprise? We built these systems, and we trained them. They will, in some fundamental way, reflect us. And, at the moment, not much terrifies me more than that.
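To make that concrete, here is a deliberately tiny, hypothetical sketch in Python. It is not the code of any real risk-assessment product, and every number and name in it is invented for illustration. In it, two groups reoffend at exactly the same rate, but one group’s behavior is recorded far more often because it is policed more heavily; a model that does nothing more than faithfully summarize those records “learns” that the over-policed group is riskier.

```python
# Hypothetical sketch: how a "neutral" model trained on biased records
# reproduces that bias. All rates and group names are invented.
import random

random.seed(0)

TRUE_REOFFENSE_RATE = 0.30  # identical for both groups, by construction
ARREST_RATE = {"group_a": 0.9, "group_b": 0.3}  # group_a is over-policed

def simulate_person(group):
    """Return (group, recorded_rearrest) for one simulated person.

    The behavior is the same for both groups; only the chance that the
    behavior shows up in the historical record differs.
    """
    reoffended = random.random() < TRUE_REOFFENSE_RATE
    recorded = reoffended and (random.random() < ARREST_RATE[group])
    return group, recorded

# The "historical data" the model learns from: recorded re-arrests,
# not true behavior.
records = [simulate_person(g)
           for g in ("group_a", "group_b")
           for _ in range(10_000)]

def risk_score(group):
    """The simplest possible 'risk model': per-group recorded re-arrest rate."""
    outcomes = [recorded for g, recorded in records if g == group]
    return sum(outcomes) / len(outcomes)

for group in ("group_a", "group_b"):
    print(f"{group}: learned risk ~ {risk_score(group):.2f} "
          f"(true reoffense rate is {TRUE_REOFFENSE_RATE} for everyone)")
```

Running this prints a learned risk of roughly 0.27 for group_a and roughly 0.09 for group_b, even though the underlying behavior was defined to be identical. The bias lives entirely in what got recorded; the math itself contains no malice, and it reproduces the disparity anyway.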

Last week saw the use of a police bomb squad robot to kill an active shooter. Put another way, we carried out a drone strike on a civilian in Dallas, because we “saw no other option.” So that’s in the Overton Window, now. And the fact that it was in response to a shooter who was targeting any and all cops as a mechanism of retribution against police brutality and violence against non-WSAHPH bodies means that we have thus increased the divisions between those of us who would say that anti-police-overreach stances can be held without hating the police themselves and those of us who think that any perceived attack on authorities is a real, existential threat, and thus deserving of immediate destruction. How long do we really think it’s going to be until someone with hate in their heart says to themselves, “Well, if drones are on the table…” and straps a pipe bomb to a quadcopter? I’m frankly shocked it hasn’t happened yet, and this line from the Atlantic article about the incident tells me that we need to have another conversation about normalization and depersonalization, right now, before it does:

“Because there was an imminent threat to officers, the decision to use lethal force was likely reasonable, while the weapon used was immaterial.”

Because if we keep this arms race up among civilian populations—and the police are still civilians, which literally means that they are not military, regardless of how good we all are at forgetting that—then it’s only a matter of time before the overlap between weapons systems and autonomous systems comes home.

And as always—but most especially in the wake of this week and the still-unclear events of today—if we can’t sustain a nuanced investigation of the actual meaning of nonviolence in the Reverend Doctor Martin Luther King, Jr.’s philosophy, then now is a good time to keep his name and words out of our mouths.

Violence isn’t only dynamic physical harm. Hunger is violence. Poverty is violence. Systemic oppression is violence. All of the invisible, interlocking structures that sustain disproportionate Power-Over at the cost of some person or persons’ dignity are violence.

Nonviolence means a recognition of these things and our places within them.

Nonviolence means using all of our resources in sustained battle against these systems of violence.

Nonviolence means struggle against the symptoms and diseases killing us all, both piecemeal, and all at once.


Further Links:


A large part of how I support myself in the endeavor to think in public is with your help, so if you like what you’ve read here, and want to see more like it, then please consider becoming either a recurring Patreon subscriber or making a one-time donation to the Tip Jar; it would be greatly appreciated.
And thank you.