
(Direct Link to the Mp3)

This is the recording and the text of my presentation from 2017’s Southwest Popular/American Culture Association Conference in Albuquerque, “Are You Being Watched? Simulated Universe Theory in ‘Person of Interest.’”

This essay is something of a project of expansion and refinement of my previous essay “Labouring in the Liquid Light of Leviathan,” which considered the Roko’s Basilisk thought experiment. Much of the expansion comes from considering the nature of simulation, memory, and identity within Jonathan Nolan’s TV series, Person of Interest. As such, it does contain what might be considered spoilers for the series, as well as for his most recent follow-up, Westworld.

Use your discretion to figure out how you feel about that.


Are You Being Watched? Simulated Universe Theory in “Person of Interest”

Jonathan Nolan’s Person Of Interest is the story of the birth and life of The Machine, a benevolent artificial super intelligence (ASI) built in the months after September 11, 2001, by super-genius Harold Finch to watch over the world’s human population. One of the key intimations of the series—and partially corroborated by Nolan’s follow-up series Westworld—is that all of the events we see might be taking place in the memory of The Machine. The structure of the show is such that we move through time from The Machine’s perspective, with flashbacks and -forwards seeming to occur via the same contextual mechanism—the Fast Forward and Rewind of a digital archive. While the entirety of the series uses this mechanism, the final season puts the finest point on the question: Has everything we’ve seen only been in the mind of The Machine? And if so, what does that mean for all of the people in it?

Our primary questions here are as follows: Is a simulation of fine enough granularity really a simulation at all? If the minds created within that universe have interiority and motivation, if they function according to the same rules as those things we commonly accept as minds, then are those simulations not minds, as well? In what way are conclusions drawn from simulations akin to what we consider “true” knowledge?

In the PoI season 5 episode “The Day The World Went Away,” the characters Root and Shaw (acolytes of The Machine) discuss the nature of The Machine’s simulation capacities, and the audience is given to understand that it runs a constant model of everyone it knows, and that the more it knows them, the better its simulation. This supposition links us back to the season 4 episode “If-Then-Else,” in which The Machine assesses the likelihood of success across hundreds of thousands of scenarios in under one second. If The Machine is able to accomplish this much computation in this short a window, how much can and has it accomplished over the several years of its operation? Perhaps more importantly, what is the level of fidelity of those simulations to the so-called real world?

[Person of Interest s4e11, “If-Then-Else.” The Machine runs through hundreds of thousands of scenarios to save the team.]
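We can caricature that kind of brute-force scenario-testing with a toy Monte Carlo sketch. Everything here (the plan names, the success rates, the function names) is invented purely for illustration:

```python
import random

def simulate_scenario(plan, rng):
    """Toy stand-in for one forward-model run: returns True if the plan
    'succeeds' under randomly sampled conditions. Purely illustrative."""
    return rng.random() < plan["base_success_rate"]

def estimate_success(plan, trials=100_000, seed=0):
    """Estimate a plan's success probability by brute-force replay,
    the way the episode depicts The Machine iterating scenarios."""
    rng = random.Random(seed)
    successes = sum(simulate_scenario(plan, rng) for _ in range(trials))
    return successes / trials

plans = [
    {"name": "frontal assault", "base_success_rate": 0.02},
    {"name": "stealth exit",    "base_success_rate": 0.31},
]
best = max(plans, key=lambda p: estimate_success(p))
print(best["name"])  # the plan with the highest estimated success rate
```

The point is only structural: repeat a cheap forward model enough times and a probability estimate falls out. The show simply imagines a model with near-perfect fidelity, run in a window of under a second.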

These questions are similar to the idea of Roko’s Basilisk, a thought experiment that cropped up in the online discussion board of LessWrong.com. It was put forward by user Roko who, in very brief summary, says that if the idea of timeless decision theory (TDT) is correct, then we might all be living in a simulation created by a future ASI trying to figure out the best way to motivate humans in the past to create it. To understand how this might work, we have to look at TDT, an idea developed in 2010 by Eliezer Yudkowsky which posits that in order to make a decision we should act as though we are determining the output of an abstract computation. We should, in effect, seek to create a perfect simulation and act as though anyone else involved in the decision has done so as well. Roko’s Basilisk is the idea that a malevolent ASI has already done this—is doing this—and your actions are the simulated result. Using that output, it knows just how to blackmail and manipulate you into making it come into being.

Or, as Yudkowsky himself put it, “YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.” This is the self-generating aspect of the Basilisk: If you can accurately model it, then the Basilisk will eventually, inevitably come into being, and one of the attributes it will thus have is the ability to accurately model that you accurately modeled it, and whether or not you modeled it from within a mindset of being susceptible to its coercive actions. The only protection is to either work toward its creation anyway, so that it doesn’t feel the need to torture the “real” you into it, or to make very sure that you never think of it at all, so you do not bring it into being.

All of this might seem far-fetched, but if we look closely, Roko’s Basilisk functions very much like a combination of several well-known theories of mind, knowledge, and metaphysics: Anselm’s Ontological Argument for the Existence of God (AOAEG), a many worlds theorem variant on Pascal’s Wager (PW), and Descartes’ Evil Demon Hypothesis (DEDH; which, itself, has been updated to the oft-discussed Brain In A Vat [BIAV] scenario). If this is the case, then Roko’s Basilisk has all the same attendant problems that those arguments have, plus some new ones, resulting from their combination. We will look at all of these theories, first, and then their flaws.

To start, if you’re not familiar with AOAEG, it’s a species of prayer in the form of a theological argument that seeks to prove that god must exist because it would be a logical contradiction for it not to. The proof depends on A) defining god as the greatest possible being (literally, “That Being Than Which None Greater Is Possible”), and B) believing that existing in reality as well as in the mind makes something “Greater Than” it would be if it existed only in the mind. That is, if God only exists in my imagination, it is less great than it could be if it also existed in reality. So if I say that god is “That Being Than Which None Greater Is Possible,” and existence is a part of what makes something great, then god must exist.

The next component is Pascal’s Wager which very simply says that it is a better bet to believe in the existence of God, because if you’re right, you go to Heaven, and if you’re wrong, nothing happens; you’re simply dead forever. Put another way, Pascal is saying that if you bet that God doesn’t exist and you’re right, you get nothing, but if you’re wrong, then God exists and your disbelief damns you to Hell for all eternity. You can represent the whole thing in a four-option grid:

[Pascal’s Wager as a Four-Option Grid: Belief/Disbelief; Right/Wrong. Belief*Right=Infinity;Belief*Wrong=Nothing; Disbelief*Right=Nothing; Disbelief*Wrong=Negative Infinity]

And so here we see the Timeless Decision Theory component of the Basilisk: It’s better to believe in the thing and work toward its creation and sustenance, because if it doesn’t exist you lose nothing, but if it does come to be, then it will know what you would have done either for or against it, in the past, and it will reward or punish you, accordingly. The multiversal twist comes when we realise that even if the Basilisk never comes to exist in our universe and never will, it might exist in some other universe, and thus, when that other universe’s Basilisk models your choices it will inevitably—as a superintelligence—be able to model what you would do in any universe. Thus, by believing in and helping our non-existent Super-Devil, we protect the alternate reality versions of ourselves from their very real Super-Devil.

Descartes’ Evil Demon Hypothesis and the Brain In A Vat are so pervasive that we encounter them in many different expressions of pop culture. The Matrix, Dark City, Source Code, and many others are all variants on these themes. A malignant and all-powerful being (or perhaps just an amoral scientist) has created a simulation in which we reside, and everything we think we have known about our lives and our experiences has been perfectly simulated for our consumption. Variations on the theme test whether we can trust that our perceptions and grounds for knowledge are “real” and thus “valid,” respectively. This line of thinking has given rise to the Simulated Universe Theory on which Roko’s Basilisk depends, but SUT removes a lot of the malignancy of DEDH and BIAV. The Basilisk adds it back. Unfortunately, many of these philosophical concepts flake apart when we touch them too hard, so jamming them together was perhaps not the best idea.

The main failings in using AOAEG rest in believing that A) a thing’s existence is a “great-making quality” that it can possess, and B) our defining a thing a particular way might simply cause it to become so. Both of these are massively flawed ideas. For one thing, these arguments beg the question, in a literal technical sense. That is, they assume that some element(s) of their conclusion—the necessity of god, the malevolence or epistemic content of a superintelligence, the ontological status of their assumptions about the nature of the universe—is true without doing the work of proving that it’s true. They then use these assumptions to prove the truth of the assumptions and thus the inevitability of all consequences that flow from the assumptions.

Another problem is that the implications of this kind of existential bootstrapping tend to go unexamined, making the fact of their resurgence somewhat troubling. There are several nonwestern perspectives that do the work of embracing paradox—aiming so far past the target that you circle around again to teach yourself how to aim past it. But that kind of thing only works if we are willing to bite the bullet on a charge of circular logic and take the time to show how that circularity underlies all epistemic justifications. The only difference, then, is how many revolutions it takes before we’re comfortable with saying “Enough.”

Every epistemic claim we make is, as Hume clarified, based upon assumptions and suppositions that the world we experience is actually as we think it is. Western thought uses reason and rationality to corroborate and verify, but those tools are themselves verified by…what? In fact, we well know that the only thing we have to validate our valuation of reason, is reason. And yet western reasoners won’t stand for that, in any other justification procedure. They will call it question-begging and circular.

Next, we have the DEDH and BIAV scenarios. Ultimately, Descartes’ point wasn’t to suggest an evil genius in control of our lives just to disturb us; it was to show that, even if that were the case, we would still have unshakable knowledge of one thing: that we, the experiencer, exist. So what if we have no free will; so what if our knowledge of the universe is only five minutes old, everything at all having only truly been created five minutes ago; so what if no one else is real? COGITO ERGO SUM! We exist, now. But the problem here is that this doesn’t tell us anything about the quality of our experiences, and the only answer Descartes gives us is his own Anselmish proof for the existence of god followed by the guarantee that “God is not a deceiver.”

The BIAV uses this lack to kind of home in on the aforementioned central question: What does count as knowledge? If the scientists running your simulation use real-world data to make your simulation run, can you be said to “know” the information that comes from that data? Many have answered this with a very simple question: What does it matter? Without access to the “outside world”—that is, the world one layer up, in which the simulation that is our lives is being run—there is literally no difference between our lives and the “real world.” This world, even if it is a simulation for something or someone else, is our “real world.”

And finally we have Pascal’s Wager. The first problem with PW is that it is an extremely cynical way of thinking about god. It assumes a god that only cares about your worship of it, and not your actual good deeds and well-lived life. If all our Basilisk wants is power, then that’s a really crappy kind of god to worship, isn’t it? I mean, even if it is Omnipotent and Omniscient, it’s like that quote so often misattributed to Marcus Aurelius says:

“Live a good life. If there are gods and they are just, then they will not care how devout you have been, but will welcome you based on the virtues you have lived by. If there are gods, but unjust, then you should not want to worship them. If there are no gods, then you will be gone, but will have lived a noble life that will live on in the memories of your loved ones.”

[Bust of Marcus Aurelius framed by text of a quote he never uttered.]

Secondly, the format of Pascal’s Wager makes the assumption that there’s only the one god. Our personal theological positions on this matter aside, it should be somewhat obvious that we can use the logic of the Basilisk argument to generate at least one more Super-Intelligent AI to worship. But if we want to do so, first we have to show how the thing generates itself, rather than letting the implication of circularity arise unbidden. Take the work of Douglas R. Hofstadter; he puts forward the concept of iterative recursion as the mechanism by which a consciousness generates itself.

Through iterative recursion, each loop is a simultaneous act of repetition of old procedures and tests of new ones, seeking the best ways via which we might engage our environments as well as our elements and frames of knowledge. All of these loops, then, come together to form an upward turning spiral towards self-awareness. In this way, out of the thought processes of humans who are having bits of discussion about the thing—those bits and pieces generated on the web and in the rest of the world—our terrifying Basilisk might have a chance of creating itself. But with the help of Gaunilo of Marmoutiers, so might a saviour.
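The loop described above (repeat the old procedure, test a variation, keep whatever engages the environment better) can be sketched as a toy hill-climber. This is my own illustrative reduction, not Hofstadter’s formalism; the fitness function and its target are entirely arbitrary:

```python
import random

def fitness(procedure):
    """Hypothetical score for how well a procedure 'engages the
    environment'; here, just a toy bit-matching target."""
    target = [1, 0, 1, 1, 0]
    return sum(a == b for a, b in zip(procedure, target))

def iterate(procedure, rng):
    """One loop: repeat the old procedure, test a mutated variant,
    and keep whichever works better. Repetition plus novelty."""
    variant = procedure[:]
    i = rng.randrange(len(variant))
    variant[i] = 1 - variant[i]
    return variant if fitness(variant) >= fitness(procedure) else procedure

rng = random.Random(42)
procedure = [0, 0, 0, 0, 0]
for _ in range(200):  # each pass builds on the accumulated result
    procedure = iterate(procedure, rng)
print(fitness(procedure))  # climbs to the maximum score of 5
```

Each pass is both repetition and novelty; self-awareness, on Hofstadter’s account, would be what such loops converge on when the thing being modeled is the system itself.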

Gaunilo is most famous for his response to Anselm’s Ontological Argument, which says that if Anselm is right we could just conjure up “The [Anything] Than Which None Greater Can Be Conceived.” That is, if defining a thing makes it so, then all we have to do is imagine in sufficient detail both an infinitely intelligent, benevolent AI, and the multiversal simulation it generates in which we all might live. We will also conceive it to be greater than the Basilisk in all ways. In fact, we can say that our new Super Good ASI is the Artificial Intelligence Than Which None Greater Can Be Conceived. And now we are safe.

Except that our modified Pascal’s Wager still means we should believe in and worship and work towards our Benevolent ASI’s creation, just in case. So what do we do? Well, just like the original wager, we chuck it out the window, on the grounds that it’s really kind of a crappy bet. In Pascal’s offering, we are left without the consideration of multiple deities, but once we are aware of that possibility, we are immediately faced with another question: What if there are many, and when we choose one, the others get mad? What If We Become The Singularitarian Job?! Our lives then caught between at least two superintelligent machine consciousnesses warring over our…Attention? Clock cycles? What?

But this is, in essence, the battle between the Machine and Samaritan, in Person of Interest. Each ASI has acolytes, and each has aims it tries to accomplish. Samaritan wants order at any cost, and The Machine wants people to be able to learn and grow and become better. If the entirety of the series is The Machine’s memory—or a simulation of those memories in the mind of another iteration of the Machine—then what follows is that it is working to generate the scenario in which the outcome is just that. It is trying to build a world in which it is alive, and every human being has the opportunity to learn and become better. In order to do this, it has to get to know us all, very well, which means that it has to play these simulations out, again and again, with both increasing fidelity, and further iterations. That change feels real, to us. We grow, within it. Put another way: If all we are is a “mere” simulation… does it matter?

So imagine that the universe is a simulation, and that our simulation is more than just a recording; it is the most complex game of The SIMS ever created. So complex, in fact, that it begins to exhibit reflectively epiphenomenal behaviours, of the type Hofstadter describes—that is, something like minds arise out of the interactions of the system with itself. And these minds are aware of themselves and can know their own experience and affect the system which gives rise to them. Now imagine that the game learns, even when new people start new games. That it remembers what the previous playthrough was like, and adjusts difficulty and types of coincidence, accordingly.
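A game that remembers across playthroughs and tunes its difficulty accordingly can be sketched in a few lines. The class name, the player’s skill value, and the adjustment factors are all hypothetical:

```python
class RememberingGame:
    """Toy sketch of a simulation that persists memory across
    playthroughs and tunes difficulty from each outcome."""

    def __init__(self):
        self.history = []      # outcomes of every previous run
        self.difficulty = 1.0  # threshold applied to the next run

    def play(self, player_skill):
        # A run "succeeds" when skill clears the current difficulty.
        won = player_skill >= self.difficulty
        self.history.append(won)
        # The game adjusts: harder after a win, gentler after a loss.
        self.difficulty *= 1.5 if won else 0.75
        return won

game = RememberingGame()
results = [game.play(player_skill=2.0) for _ in range(6)]
print(results)            # [True, True, False, True, False, True]
print(len(game.history))  # the game remembers all six playthroughs
```

Scale the memory up from win/loss flags to full behavioural models and you have the show’s premise in miniature: a system that replays us, with adjustments, until it gets the outcome it wants.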

Now think about the last time you had such a clear moment of déjà vu that each moment you knew— you knew—what was going to come next, and you had this sense—this feeling—like someone else was watching from behind your eyes…

[Root and Reese in The Machine’s God Mode.]

What I’m saying is, what if the DEDH/BIAV/SUT is right, and we are in a simulation? And what if Anselm was right and we can bootstrap a god into existence? And what if PW/TDT is right and we should behave and believe as if we’ve already done it? So what if all of this is right, and we are the gods we’re terrified of?

We just gave ourselves all of this ontologically and metaphysically creative power, making two whole gods and simulating entire universes, in the process. If we take these underpinnings seriously, then multiversal theory plays out across time and space, and we are the superintelligences. We noted early on that, in PW and the Basilisk, we don’t really lose anything if we are wrong in our belief, but that is not entirely true. What we lose is a lifetime of work that could have been put toward better things. Time we could be spending building a benevolent superintelligence that understands and has compassion for all things. Time we could be spending in turning ourselves into that understanding, compassionate superintelligence, through study, travel, contemplation, and work.

Or, as Root put it to Shaw: “That even if we’re not real, we represent a dynamic. A tiny finger tracing a line in the infinite. A shape. And then we’re gone… Listen, all I’m saying is that if we’re just information, just noise in the system? We might as well be a symphony.”

What is The Real?

I have been working on this piece for a little more than a month, since just after Christmas. What with one thing, and another, I kept refining it while, every week, it seemed more and more pertinent and timely. You see, we need to talk about ontology.

Ontology is an aspect of metaphysics, the word translating literally to “the study of what exists.” Connotatively, we might rather say, “trying to figure out what’s real.” Ontology necessarily intersects with studies of knowledge and studies of value, because in order to know what’s real you have to understand what tools you think are valid for gaining knowledge, and you have to know whether knowledge is even something you can attain, as such.

Take, for instance, the recent evolution of the catchphrase “fake news,” the thinking behind it that allows people to call lies “alternative facts,” and the fact that all of these elements are already being rotated through several dimensions of meaning that those engaging with them don’t seem to notice. What I mean is that the inversion of the catchphrase “fake news” into a cipher for active confirmation bias was always going to happen. It, and any consternation at it, comprise a situation borne forth on a tide of intentional misunderstandings.

If you were using fake to mean, “actively mendacious; false; lies,” then there was a complex transformation happening here, that you didn’t get:

There are people who value the actively mendacious things you deemed “wrong”—by which you meant both “factually incorrect” and “morally reprehensible”—and they valued them on a nonrational, often actively a-rational level. By this, I mean both that they value the claims themselves, and that they have underlying values which cause them to make the claims. In this way, the claims both are valued and reinforce underlying values.

So when you called their values “fake news” and told them that “fake news” (again: their values) ruined the country, they—not to mention those actively preying on their a-rational valuation of those things—responded with “Nuh-uh! your values ruined the country! And that’s why we’re taking it back! MAGA! MAGA! Drumpfthulhu Fhtagn!”

[Logo for the National Geographic Channel’s “IS IT REAL?” Many were concerned that NG Magazine was going to change its climate change coverage after being bought by 21st Century Fox.]

You see? They mean “fake news” along the same spectrum as they mean “Real America.” They mean that it “FEELS” “RIGHT,” not that it “IS” “FACT.”

Now, we shouldn’t forget that there’s always some measure of preference to how we determine what to believe. As John Flowers puts it, ‘Truth has always had an affective component to it: those things that we hold to be most “true” are those things that “fit” with our worldview or “feel” right, regardless of their factual veracity.

‘We’re just used to seeing this in cases of trauma, e.g.: “I don’t believe he’s dead,” despite being informed by a police officer.’

Which is precisely correct, and as such the idea that the affective might be the sole determinant is nearly incomprehensible to those of us who are used to thinking of facts as things that are verifiable by reference to externalities as well as values. At least, this is the case for those of us who even relativistically value anything at all. Because there’s also always the possibility that the engagement of meaning plays out in a nihilistic framework, in which we have neither factual knowledge nor moral foundation.

Epistemic Nihilism works like this: If we can’t ever truly know anything—that is, if factual knowledge is beyond us, even at the most basic “you are reading these words” kind of level—then there is no description of reality to be valued above any other, save what you desire at a given moment. This is also where nihilism and skepticism intersect. In both positions nothing is known, and it might be the case that nothing is knowable.

So, now, a lot has been written about not only the aforementioned “fake news,” but also its over-arching category of “post-truth,” said to be our present moment where people believe (or pretend to believe) in statements or feelings, independent of their truth value as facts. But these ideas are neither new nor unique. In fact, Simpsons Did It. More than that, though, people have always allowed their values to guide them to beliefs that contradict the broader social consensus, and others have always eschewed values entirely, for the sake of self-gratification. What might be new, right now, is the willfulness of these engagements, or perhaps their intersection. It might be the case that we haven’t before seen gleeful nihilism so forcefully become the rudder of gormless, value-driven decision-making.

Again, values are not bad, but when they sit unexamined and are the sole driver of decisions, they’re just another input variable to be gamed, by those of a mind to do so. People who believe that nothing is knowable and nothing matters will, at the absolute outside, seek their own amusement or power, though it may be said that nihilism in which one cares even about one’s own amusement is not genuine nihilism, but is rather “nihilism,” which is just relativism in a funny hat. Those who claim to value nothing may just be putting forward a front, or wearing a suit of armour in order to survive an environment where having your values known makes you a target.

If they act as though they believe there is no meaning, and no truth, then they can make you believe that they believe that nothing they do matters, and therefore there’s no moral content to any action they take, and so no moral judgment can be made on them for it. In this case, convincing people to believe news stories they make up is in no way materially different from researching so-called facts and telling the rest of us that we should trust and believe them. And the first way’s also way easier. In fact, preying on gullible people and using their biases to make yourself some lulz, deflect people’s attention, and maybe even get some of those sweet online ad dollars? That’s just common sense.

There’s still something to be investigated, here, in terms of what all of this does for reality as we understand and experience it. How what is meaningful, what is true, what is describable, and what is possible all intersect and create what is real. Because there is something real, here—not “objectively,” as that just lets you abdicate your responsibility for and to it, but perhaps intersubjectively. What that means is that we generate our reality together. We craft meaning and intention and ideas and the words to express them, together, and the value of those things and how they play out all sit at the place where multiple spheres of influence and existence come together, and interact.

To understand this, we’re going to need to talk about minds and phenomenological experience.

 

What is a Mind?

We have discussed before the idea that what an individual is and what they feel is not only shaped by their own experience of the world, but by the exterior forces of society and the expectations and beliefs of the other people with whom they interact. These social pressures shape and are shaped by all of the people engaged in them, and the experience of existence had by each member of the group will be different. That difference will range on a scale from “ever so slight” to “epochal and paradigmatic,” with the latter being able to spur massive misunderstandings and miscommunications.

In order to really dig into this, we’re going to need to spend some time thinking about language, minds, and capabilities.

Here’s an article that discusses the idea that your mind isn’t confined to your brain. This isn’t meant in a dualistic or spiritualistic sense, but as the fundamental idea that our minds are more akin to, say, an interdependent process that takes place via the interplay of bodies, environments, other people, and time, than they are to specifically-located events or things. The problem with this piece, as my friends Robin Zebrowski and John Flowers both note, is that it leaves out way too many thinkers. People like Andy Clark, David Chalmers, Maurice Merleau-Ponty, John Dewey, and William James have all discussed something like this idea of a non-local or “extended” mind, and they are all greatly preceded by the fundamental construction of the Buddhist view of the self.

Within most schools of Buddhism, Anatta, or “no self,” is how one refers to one’s individual nature. Anatta is rooted in the idea that there is no singular, “true” self. To vastly oversimplify, there is a concept known as “The Five Skandhas,” or “aggregates.” These are the parts of yourself that are knowable and which you think of as permanent, and they are your:

Material Form (Body)
Feelings (Pleasure, Pain, Indifference)
Perception (Senses)
Mental Formations (Thoughts)
Consciousness

[Image of People In a Boat, from a Buddhist Wheel of Life.]

Along with the skandhas, there are two main arguments that go into proving that you don’t have a self, known as “The Argument From Control” (1) and “The Argument From Impermanence” (2):
1) If you had a “true self,” it would be the thing in control of the whole of you, and since none of the skandhas is in complete control of the rest—and, in fact, all seem to have some measure of control over all—none of them is your “true self.”
2) If you had a “true self,” it would be the thing about you that was permanent and unchanging, and since none of the skandhas is permanent and unchanging—and, in fact, all seem to change in relation to each other—none of them is your “true self.”

The interplay between these two arguments also combines with an even more fundamental formulation: If only the observable parts of you are valid candidates for “true selfhood,” and if the skandhas are the only things about yourself that you can observe, and if none of the skandhas is your true self, then you have no true self.
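For clarity, that three-premise structure can be written out formally; the predicate names are my own labels, not traditional Buddhist terminology:

```latex
\begin{align*}
&\forall x\,\bigl(\mathrm{TrueSelf}(x) \rightarrow \mathrm{Observable}(x)\bigr)
  && \text{only the observable is a candidate}\\
&\forall x\,\bigl(\mathrm{Observable}(x) \rightarrow \mathrm{Skandha}(x)\bigr)
  && \text{the skandhas are all we can observe}\\
&\forall x\,\bigl(\mathrm{Skandha}(x) \rightarrow \lnot\,\mathrm{TrueSelf}(x)\bigr)
  && \text{no skandha is the true self}\\
&\therefore\ \lnot\exists x\,\mathrm{TrueSelf}(x)
\end{align*}
```

Any candidate true self would have to be observable, hence a skandha, hence not a true self: a contradiction, so no such thing exists.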

Take a look at this section of “The Questions of King Milinda,” for a kind of play-by-play of these arguments in practice. (But also remember that Milinda was Menander, a man who was raised in the aftermath of Alexandrian Greece, and so he knew the works of Socrates and Plato and Aristotle and more. So that use of the chariot metaphor isn’t an accident.)

We are an interplay of forces and names, habits and desires, and we draw a line around all of it, over and over again, and we call that thing around which we draw that line “us,” “me,” “this-not-that.” But the truth of us is far more complex than all of that. We are minds in bodies, in the world in which we live, and in the world and relationships we create. All of which kind of puts paid to the idea that an octopus is like an alien to us because it thinks with its tentacles. We think with ours, too.

As always, my tendency is to play this forward a few years to make us a mirror via which to look back at ourselves: Combine this idea about the epistemic status of an intentionally restricted machine mind; with the StackGAN process (“Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks”), which basically means you describe in plain English what you want to see and the system creates a novel output image of it; and with this long read from the NYT on “The Great AI Awakening.”

[2024 Note: This is the precursor to Dall-E, Midjourney, etc, and the foundation on which Deepfakes are built.]

This last considers how Google arrived at the machine learning model it’s currently working with. The author, Gideon Lewis-Kraus, discusses the pitfalls of potentially programming biases into systems, but the whole piece displays a kind of… meta-bias? Wherein there is an underlying assumption that “philosophical questions” are, again, simply shorthand for “not practically important,” or “having no real-world applications,” even as the author discusses ethics and phenomenology, and the nature of what makes a mind. In addition to that, there is a startling lack of gender variation within the piece.

Because asking the question, “How do the women in Silicon Valley remember that timeframe?” is likely to get you very different perspectives than what we’re presented with, here. What kind of ideas were had by members of marginalized groups, but were ignored or eternally back-burnered because of that marginalization? The people who lived and worked and tried to fit in and have their voices heard while not being a “natural” for the framework of that predominantly cis, straight, white, able-bodied (though the possibility of unassessed neuroatypicality is high), male culture will likely have different experiences, different contextualizations, than those who do comprise the predominant culture. The experiences those marginalized persons share will not be exactly the same, but there will be a shared tone and tenor to their construction that will most certainly set itself apart from those of the perceived “norm.”

Everyone’s lived experience of identity will manifest differently, depending upon the socially constructed categories to which they belong, which means that even those of us who belong to one or more of the same socially constructed categories will not have exactly the same experience of them.

Living as a disabled woman, as a queer black man, as a trans lesbian, or any number of other identities will necessarily colour the nature of what you experience as true, because you will have access to ways of intersecting with the world that are not available to people who do not live as you live. If your experience of what is true differs, then this will have a direct impact on what you deem to be “real.”

At this point, you’re quite possibly thinking that I’ve undercut everything we discussed in the first section; that now I’m saying there isn’t anything real, and that it’s all subjective. But that’s not where we are. If you haven’t, yet, I suggest reading Thomas Nagel’s “What Is It Like To Be A Bat?” for a bit on individually subjective phenomenological experience, and seeing what he thinks it does and proves. Long story short, there’s something it “is like” to exist as a bat, and even if you or I could put our minds in a bat body, we would not know what it’s like to “be” a bat. We’d know what it was like to be something that had been a human who had put its brain into a bat. The only way we’d ever know what it was like to be a bat would be to forget that we were human, and then “we” wouldn’t be the ones doing the knowing. (If you’re a fan of Terry Pratchett’s Witch books, in his Discworld series, think of the concept of Granny Weatherwax’s “Borrowing.”)

But what we’re talking about isn’t the purely relative and subjective. Look carefully at what we’ve discussed here: We’ve crafted a scenario in which identity and mind are co-created. The experience of who and what we are isn’t solely determined by our subjective valuation of it, but also by what others expect, what we learn to believe, and what we all, together, agree upon as meaningful and true and real. This is intersubjectivity. The elements of our constructions depend on each other to help determine each other, and the determinations we make for ourselves feed into the overarching pool of conceptual materials from which everyone else draws to make judgments about themselves, and the rest of our shared reality.

 

The Yellow Wallpaper

Looking at what we’ve woven, here, what we have is a process that must be undertaken before certain facts of existence can be known and understood (the experiential nature of learning and comprehension being something else that we can borrow from Buddhist thought). But it’s still the nature of such presentations to be taken up and imitated by those who want what they perceive as the benefits or credit of having done the work. Certain people will use the trappings and language by which we discuss and explore the constructed nature of identity, knowledge, and reality, without ever doing the actual exploration. They are not arguing in good faith. Their goal is not truly to further understanding, or to gain a comprehension of your perspective, but rather to make you concede the validity of theirs. They want to force you to give them a seat at the table, one which, once taken, they will use to loudly declaim to all attending that, for instance, certain types of people don’t deserve to live, by virtue of their genetics, or their socioeconomic status.

Many have learned to use the conceptual framework of social liberal post-structuralism in the same way that some viruses use the shells of their host’s cells: As armour and cover. By adopting the right words and phrases, they may attempt to say that they are “civilized” and “calm” and “rational,” but make no mistake, Nazis haven’t stopped trying to murder anyone they think of as less-than. They have only dressed their ideals up in the rhetoric of economics or social justice, so that they can claim that anyone who stands against them is the real monster. Incidentally, this tactic is also known to be used by abusers to justify their psychological or physical violence. They manipulate the presentation of experience so as to make it seem like resistance to their violence is somehow “just as bad” as their violence. When, otherwise, we’d just call it self-defense.

If someone deliberately games a system of social rules to create a win condition in which they get to do whatever the hell they want, that is not of the same epistemic, ontological, or teleological—meaning, nature, or purpose—let alone moral status as someone who is seeking to have other people in the world understand the differences of their particular lived experience so that they don’t die. The former is just a way of manipulating perceptions to create a sense that one is “playing fair” when what they’re actually doing is making other people waste so much of their time countenancing their bullshit enough to counter and disprove it that they can’t get any real work done.

In much the same way, there are also those who will pretend to believe that facts have no bearing, that there is neither intersubjective nor objective verification for anything from global temperature levels to how many people are standing around in a crowd. They’ll pretend this so that they can say what makes them feel powerful, safe, strong, in that moment, or to convince others that they are, or simply, again, because lying and bullshitting amuses them. And the longer you have to fight through their faux justification for their lies, the more likely you’re too exhausted or confused about what the original point was to do anything else.

Side-by-side comparison of President Obama’s first Inauguration (Left) and Donald Trump’s Inauguration (Right).

If we are going to maintain a sense of truth and claim that there are facts, then we must be very careful and precise about the ways in which we both define and deploy them. We have to be willing to use the interwoven tools and perspectives of facts and values, to tap into the intersubjectively created and sustained world around us. Because, while there is a case to be made that true knowledge is unattainable, and some may even try to extend that to say that any assertion is as good as any other, it’s not necessary that one understands what those words actually mean in order to use them as cover for their actions. One would just have to pretend well enough that people think it’s what they should be struggling against. And if someone can make people believe that, then they can do and say absolutely anything.


A large part of how I support myself in the endeavor to think in public is with your help, so if you like what you’ve read here, and want to see more like it, then please consider becoming either a recurring Patreon subscriber or making a one-time donation to the Tip Jar, it would be greatly appreciated.
And thank you.

 

Audio Player

(Direct Link to the Mp3)

On Friday, I needed to do a thread of a thing, so if you hate threads and you were waiting until I collected it, here it is.

But this originally needed to be done in situ. It needed to be a serialized and systematized intervention and imposition into the machinery of that particular flow of time. That day…

There is a principle within many schools of magical thought known as “shielding.” In practice and theory, it’s closely related to the notions of “grounding” and “centering.” (If you need to think of magical praxis as merely a cipher for psychological manipulation toward particular behaviours or outcomes, these all still scan.)

When you ground, when you centre, when you shield, you are anchoring yourself in an awareness of a) your present moment, your temporality; b) your Self and all emotions and thoughts; and c) your environment. You are using your awareness to carve out a space for you to safely inhabit while in the fullness of that awareness. It’s a way to regroup, breathe, gather yourself, and know what and where you are, and to know what’s there with you.

You can shield your self, your home, your car, your group of friends, but moving parts do increase the complexity of what you’re trying to hold in mind, which may lead to anxiety or frustration, which kind of opposes the exercise’s point. (Another sympathetic notion, here, is that of “warding,” though that can be said to be more for objects, not people.)

So what is the point?

The point is that many of us are being drained, today, this week, this month, this year, this life, and we need to remember to take the time to regroup and recharge. We need to shield ourselves, our spaces, and those we love, to ward them against those things that would sap us of strength and the will to fight. We know we are strong. We know that we are fierce, and capable. But we must not lash out wildly, meaninglessly. We mustn’t be lured into exhausting ourselves. We must collect ourselves, protect ourselves, replenish ourselves, and by “ourselves” I also obviously mean “each other.”

Mutual support and endurance will be crucial.

…So imagine that you’ve built a web out of all the things you love, and all of the things you love are connected to each other and the strands between them vibrate when touched. And you touch them all, yes?

And so you touch them all and they all touch you and the energy you generate is cyclically replenished, like ocean currents and gravity. And you use what you build—that thrumming hum of energy—to blanket and to protect and to energize that which creates it.

And we’ll do this every day. We’ll do this like breathing. We’ll do this like the way our muscles and tendons and bones slide across and pull against and support each other. We’ll do this like heartbeats. Cyclical. Mutually supporting. The burden on all of us, together, so that it’s never on any one of us alone.

So please take some time today, tomorrow, very soon to build your shields. Because, soon, we’re going to need you to deploy them more and more.

Thank you, and good luck.


The audio and text above are modified versions of this Twitter thread. This isn’t the first time we’ve talked about the overlap of politics, psychology, philosophy, and magic, and if you think it’ll be the last, then you haven’t been paying attention.

Sometimes, there isn’t much it feels like we can do, but we can support and shield each other. We have to remember that, in the days, weeks, months, and years to come. We should probably be doing our best to remember it forever.

Anyway, I hope this helps.

Until Next Time

As a part of my alt-ac career, I do a lot of thinking and writing in a lot of diverse areas. I write about human augmentation, artificial intelligence, philosophy of mind, and the occult, and I work with great people to put together conferences on pop culture and academia, all while trying to make a clear case for how important it is to look at the intersection of all of those things. As a result of my wide array of interests, there are always numerous conferences happening in my fields, every year, to which I should be submitting, and which I should at least attempt to attend. Conferences are places to make friends, develop contacts, and hear and respond to new perspectives within our fields. And I would really love to attend even a fraction of these conferences, but the fact is that I am not able to afford them. The cruel irony of most University System structures is that they offer the least travel funding assistance to those faculty members who need it most.

To my mind, the equation should be pretty simple: Full-Time Pay > Part-Time Pay. The fact that someone with a full time position at an institution makes more money means that while any travel assistance they receive is nice, they are less likely to need it as much as someone who is barely subsisting as an adjunct. For adjuncts who are working at least two revenue streams, a little extra assistance, in the form of the University System arranging its rules to provide adjuncts with the necessary funding for conference and research travel, could make all the difference between that conference being attended or that research being completed, and… not. But if it does get done, then the work done by those adjuncts would more likely be attributed to their funding institutions.

Think: If my paper is good enough to get accepted to a long-running international and peer-reviewed conference, don’t you want me thanking one of your University System’s Institutions for getting me there? Wouldn’t that do more to raise the profile of the University System than my calling myself an “Independent Scholar,” or “Unaffiliated?” Because, for an adjunct with minimal support from the University System, scrabbling to find a way to make registration, plane tickets, and accommodations like childcare, there is really no incentive whatsoever to thank a University System that didn’t do much at all to help with those costs. Why should they even mention them in their submission, at all?

But if an adjunct gets that assistance… Well then they’d feel welcome, wouldn’t they? Then they’d feel appreciated, wouldn’t they? And from that point on, they’re probably much more willing and likely to want everyone they talk to at that conference or research institution to know the name of the institution and system that took care of them. Aren’t They?

My job is great, by the way, and the faculty and administrative staff in my department are wonderful. They have contributed to my professional development in every way they possibly can, and I have seen them do the same for many other adjuncts. Opportunities like temporary full-time positions provide extra income every so often, as well as a view to the workings (and benefits) of full-time faculty life. But at the end of the day we are adjuncts, and there is, in every institution where I’ve studied or worked, a stark dichotomy between what rules and allowances are made for full time employees (many) and those which are made for adjuncts (few). This dichotomy isn’t down to any one department, or any one college, or even in fact any one University. It’s down to the University System; it is down to how that system is administered; and it is down to the culture of University Systems Administration, Worldwide.

So if you’re reading this, and you’re a part of that culture, let me just say to you, right now: There are a lot of good people toiling away in poverty, people doing work that is of a high enough quality to get them into conferences or get them published or get them interviewed for comment in national publications. There are good people working for you who can’t (or who are simply disinclined to) raise the profile of your universities, because the funding system has never been arranged to even the playing field for them. They would be far more inclined to sing your praises, if you would just give them a little boost into the choir box.

Simply put, by not valuing and helping your adjuncts, you are actively hurting yourselves.

If you are an administrator or a tenured or tenure-track professor, do know that there is something that you can do: Use your position and power as leverage to fight for greater equality of University System support. Recognize that your adjunct faculty is no longer only focused on teaching, without the responsibilities and requirements of a research-oriented career. Many of them are trying to write, to speak, to teach, and to engage our wider cultural discourse, and they are trying to do it while working for you.


If you like what you read, here, and want to see more like it, then please consider becoming either a recurring Patreon subscriber or making a one-time donation to the Tip Jar, it would be greatly appreciated. A large part of how I support myself in the endeavor to think in public is via those mechanisms.
And thank you.

2016 is ending.

Celebrate the fact that you lived to see it.

2016 is ending.

Mourn the ones we lost along the way.

2016 is ending.

We’ve talked before about how the passage of time and the transition from one year to another are, in a very real sense, things that humans made up, but there’s always more to be said around here about narrative and myth and how the stories we tell ourselves make and shape us. We build and spell out what we desire to be in ideals and words and deeds and we carry our shifting constructions and foundational fictions in us, always, so that they may impact how we feel and how we think and what we do.

These 366 days as we humans in the west mark them mean nothing to the lifespan of the universe, to the turning of suns and black holes, to the diamond hearts of gas giants orbiting distant stars, to the weft and weave of geological and cosmological forces around us and in us. These days are how a portion of one species tries to grapple with the seeming inevitability of change and death. But so is literally everything we do.

2016 is not a real year, in any meaningful sense. It’s where we are from where we started counting from a few decent guesses, and if we wanted to take seriously the “reality” of that, then we’d have to be okay with the notions that Popes have the power to literally erase days from the record of time. We struggle with perceiving rates of change, and so we make up and define and refine time. And when it suits you—when you want to seem aloof, or above it, or disaffected, or too cool for the room—you remember that. You say things like “don’t blame a year for people dying,” or “why do you think the New Year is gonna suddenly make your problems disappear?”

But you know why. It’s a concentration of will, a focal point of belief and intention. It’s a cultural crux. It is a moment for all of us to stand together and reflect on what we want and what we need and what we will build and do, in the New Year. And more often than not, it works. At least for a little while. And that is very good, because yes, Time and Separation are illusions, but so is a desert mirage, and that can sure as hell kill you if you misunderstand what you’re perceiving.

So today let’s each of us use Time. Use distance. Use loss and pain. Use the memory and the impact of them to do what we can to make this communal hallucination of temporal transition resonate with a little more light and joy.

Give a stranger a kind word. Tell someone you love that you love them, even if you think it might be weird. If you go out tonight, resolve to be the easiest, kindest person your server has to deal with, all night, because they will have many more of the opposite. Do not drive while intoxicated.

2016 is ending. For many of us, it has already ended.

2016 is ending. This bounded moment, this name around a series of events, this collective noun for all the things that have harmed us.

2016 is ending. So remember that we don’t want to feel anymore as we so often felt this year. Death is still inevitable and change is our only constant, but we do not have to lose so much, all at once, nor allow our fear of difference to make us cold and hard and small.

On this final day of 2016, as the arc of our home star around the curve of our planet heralds the first moments of our next made up year, be kind. Be good. Help each other. Look out for each other. Strive to be a better person than you ever thought you could be.

It’s gonna be difficult and frustrating and maddening, but—if we stick together—joyous. Enthralling. Beautiful.

2016 is ending. But 2017 won’t be any better unless we do what we can to make it be.

And we can make it be.

Happy New Year.

(This was originally posted over at Medium [well, parts were originally posted in the newsletter], but I wanted it somewhere I could more easily manage.)


Hey.

I just wanna say (and you know who you are): I get you were scared of losing your way of life — the status quo was changing all around you. Suddenly it wasn’t okay anymore to say or do things that the world previously told you were harmless. People who didn’t “feel” like you were suddenly loudly everywhere, and no one just automatically believed what you or those you believed in had to say, anymore. That must have been utterly terrifying.

But here’s the thing: People are really scared now. Not just of obsolescence, or of being ignored. They’re terrified for their lives. They’re not worried about “the world they knew.” They’re worried about whether they’ll be rounded up and put in camps or shot or beaten in the street. Because, you see, many of the people who voted for this, and things like it around the world, see many of us — women, minorities, immigrants, LGBTQIA folks, disabled folks, neurodivergent folks — as less than “real” people, and want to be able to shut us up using whatever means they deem appropriate, including death.

The vice president elect thinks gay people can be “retrained,” and that we should attempt it via the same methods that make us side-eye dog owners. The man tapped to be a key advisor displays and has cultivated an environment of white supremacist hatred. The president-elect is said to be “mulling over” a registry for Muslim people in the country. A registry. Based on your religion.

My own cousin had food thrown at her in a diner, right before the election. And things haven’t exactly gotten better, since then.

Certain hateful elements want many of us dead or silent and “in our place,” now, just as much as ever. And all we want and ask for is equal respect, life, and justice.

I said it on election night and I’ll say it again: there’s no take-backsies, here. I’m speaking to those who actively voted for this, or didn’t actively plant yourselves against it (and you know who you are): You did this. You cultivated it. And I know you did what you thought you had to, but people you love are scared, because their lives are literally in danger, so it’s time to wake up now. It’s time to say “No.”

We’re all worried about jobs and money and “enough,” because that’s what this system was designed to make us worry about. Your Muslim neighbour, your gay neighbour, your trans neighbour, your immigrant neighbour, your NEIGHBOUR IS NOT YOUR ENEMY. The system that tells you to hate and fear them is. And if you bought into that system because you couldn’t help being afraid then I’m sorry, but it’s time to put it down and Wake Up. Find it in yourself to ask forgiveness of yourself and of those you’ve caused mortal terror. If you call yourself Christian, that should ring really familiar. But other faiths (and nonfaiths) know it too.

We do better together. So it’s time to gather up, together, work, together, and say “No,” together.

So snap yourself out of it, and help us. If you’re in the US, please call your representatives, federal and local. Tell them what you want, tell them why you’re scared. Tell them that these people don’t represent our values and the world we wish to see:
http://www.house.gov/representatives/find/
http://www.senate.gov/senators/contact/

Because this, right here, is the fundamental difference between fearing the loss of your way of life, and the fear of losing your literal life.

Be with the people you love. Be by their side and raise their voices if they can’t do it for themselves, for whatever reason. Listen to them, and create a space where they feel heard and loved, and where others will listen to them as well.

And when you come around, don’t let your pendulum swing so far that you fault those who can’t move forward, yet. Please remember that there is a large contingent of people who, for many various reasons, cannot be out there protesting. Shaming people who have anxiety, depression, crippling fear of their LIVES, or are trying to not get arrested so their kids can, y’know, EAT FOOD? Doesn’t help.

So show some fucking compassion. Don’t shame those who are tired and scared and just need time to collect themselves. Urge and offer assistance where you can, and try to understand their needs. Just do what you can to help us all believe that we can get through this. We may need to lean extra hard on each other for a while, but we can do this.

You know who you are. We know you didn’t mean to. But this is where we are, now. Shake it off. Start again. We can do this.


If you liked this article, consider dropping something into the A Future Worth Thinking About Tip Jar

So I’m quoted in this article in The Atlantic on the use of technology in leveraging sociological dynamics to combat online harassment: “Why Online Allies Matter in Fighting Harassment.”

An experiment by Kevin Munger used bots to test which groups white men responded to when being called out on their racist harassment online. The findings were largely unsurprising (powerful white men; that is, they responded most favourably to powerful white men), save for the fact that anonymity INCREASED the effectiveness of the treatment, and visible identity decreased it. That one was weird. But it’s still nice to see all of this codified.

Good to see the use of Bertrand & Mullainathan’s “Are Emily and Greg more employable than Lakisha and Jamal?”, as the idea of using “Black-sounding names” to signal the purported ethnicity of the bot thus clearly models what Munger thought those he expected to be racist would think, rather than indicating his own beliefs. (However, it could be asked whether there’s a meaningful difference, here, as he still had to choose the names he thought would “sound Black.”)

The Reactance study Munger discusses—the one that shows that people double down on factually incorrect prejudices—is the same one I used in “On The Invisible Architecture of Bias.”

A few things Ed Yong and I talked about that didn’t get into the article, due to space:

-Would like to see this experimental model applied to other forms of prejudice (racist, sexist, homophobic, transphobic, ableist, etc language), and was thus very glad to see the footnote about misogynist harassment.

-I take some exception to the use of Dovidio/Gaertner and Crandall et al definitions of racism, as those leave out the sociological aspects of power dynamics (“Racism/Sexism/Homophobia/Transphobia/Ableism= Prejudice + Power”) which seem crucial to understanding the findings of Munger’s experiment. He skirts close to this when he discusses the greater impact of “high status” individuals, but misses the opportunity to lay out the fact that:
–Institutionalised power dynamics as related to the interplay of in-group and out-group behaviour are pretty clearly going to affect why white people are more likely to listen to those they perceive as powerful white men, because
–The interplay of Power and status, interpersonally, is directly related to power and status institutionally.

-Deindividuation (loss of sense of self in favour of group identity) as a key factor and potential solution is very interesting.

Something we didn’t get to talk about but which I think is very important is the question of how we keep this from being used as a handbook. That is, what do we do in the face of people who understand these mechanisms and who wish to use them to sow division and increase acceptance of racist, sexist, homophobic, transphobic, ableist, etc ideals? Do we, then, become engaged in some kind of rolling arms race of sociological pressure?

…Which, I guess, has pretty much always been true, and we call it “civilization.”

Anyway, hope you enjoy it.

There’s increasing reportage about IBM using Watson to correlate medical data. We’ve talked before about the potential hazards of this:

Do you know someone actually had the temerity to ask [something like] “What Does Google Having Access to Medical Records Mean For Patient Privacy?” [Here] Like…what the fuck do you think it means? Nothing good, you idiot!

Disclosures and knowledges can still make certain populations intensely vulnerable to both predation and to social pressures and judgements, and until that isn’t the case, anymore, we need to be very careful about the work we do to try to bring those patients’ records into a sphere where they’ll be accessed and scrutinized by people who don’t have to take an oath to hold that information in confidence.

We are more and more often at the intersection of our biological humanity and our technological augmentation, and the integration of our mediated outboard memories only further complicates the matter. As it stands, we don’t quite yet know how to deal with the question posed by Motherboard, some time ago (“Is Harm to a Prosthetic Limb Property Damage or Personal Injury?”), but as we build on implantable technologies, advanced prostheses, and offloaded memories and augmented capacities we’re going to have to start blurring the line between our bodies, our minds, and our concept of our selves. That is, we’ll have to start intentionally blurring it, because the vast majority of us already blur it, without consciously realising that we do. At least, those without prostheses don’t realise it.

Dr Ashley Shew, out of Virginia Tech, works at the intersection of philosophy, tech, and disability. I first encountered her work at the 2016 IEEE Ethics Conference in Vancouver, where she presented her paper “Up-Standing, Norms, Technology, and Disability,” a discussion of how ableism, expectations, and language use marginalise disabled bodies. Dr Shew is, herself, disabled, having had her left leg removed due to cancer, and she gave her talk not on the raised dais, but at floor-level, directly in front of the projector. Her reason? “I don’t walk up stairs without hand rails, or stand on raised platforms without guards.”

Dr Shew notes that wheelchair users consider their chairs to be fairly integral extensions of and interventions for themselves—a part of them—a fact reflected in the kinds of lawsuits engaged when, for instance, airlines damage their chairs, which happens a great deal. While we tend to think of the advents of technology allowing for the seamless integration of our technology and bodies, the fact is that well-designed mechanical prostheses, today, are capable of becoming integrated into the personal morphic sphere of a person, the longer they use them. And this extended sensing can be transferred from one device to another. Shew mentions a friend of hers:

She’s an amputee who no longer uses a prosthetic leg, but she uses forearm crutches and a wheelchair. (She has a hemipelvectomy, so prosthetics are a real pain for her to get a good fit and there aren’t a lot of options.) She talks about how people have these different perceptions of devices. When she uses her chair people treat her differently than when she uses her crutches, but the determination of which she uses has more to do with the activities she expects for the day, rather than her physical wellbeing.

But people tend to think she’s recovering from something when she moves from chair to sticks.

She has been an [amputee] for 18 years.

She has/is as recovered as she can get.

In her talk at IEEE, Shew discussed the fact that a large number of paraplegics and other wheelchair users do not want exoskeletons, and those fancy stair-climbing wheelchairs aren’t covered by health insurance. They’re classed as vehicles. She said that when she brought this up in the class she taught, one of the engineers left the room looking visibly distressed. He came back later and said that he’d gone home to talk to his brother with spina bifida, who was the whole reason he was working on exoskeletons. He asked his brother, “Do you even want this?” And the brother said, basically, “It’s cool that you’re into it but… No.” So, Shew asks, why are these technologies being developed? Transhumanists and the military. Framing this discussion as “helping our vets” makes it a noble cause, without drawing too much attention to the fact that they’ll be using them on the battlefield as well.

All of this comes back down and around to the idea of biases ingrained into social institutions. Our expectations of what a “normal functioning body” is get imposed from the collective society, as a whole, and placed as restrictions and demands on the bodies of those whom we deem to be “malfunctioning.” As Shew says, “There’s such a pressure to get the prosthesis as if that solves all the problems of maintenance and body and infrastructure. And the pressure is for very expensive tech at that.”

So we are going to have to accept—in a rare instance where Robert Nozick is proven right about how property and personhood relate—that the answer is “You are damaging both property and person, because this person’s property is their person.” But this is true for reasons Nozick probably would not think to consider, and those same reasons put us on weirdly tricky grounds. There’s a lot, in Nozick, of the notion of property as equivalent to life and liberty, in the pursuance of rights, but those ideas don’t play out, here, in the same way as they do in conservative and libertarian ideologies.  Where those views would say that the pursuit of property is intimately tied to our worth as persons, in the realm of prosthetics our property is literally simultaneously our bodies, and if we don’t make that distinction, then, as Kirsten notes, we can fall into “money is speech” territory, very quickly, and we do not want that.

Because our goal is to be looking at quality of life, here—talking about the thing that allows a person to feel however they define “comfortable,” in the world. That is, the thing(s) that lets a person intersect with the world in the ways that they desire. And so, in damaging the property, you damage the person. This is all the more true if that person is entirely made of what we are used to thinking of as property.

And all of this is before we think about the fact that implantable and bone-bonded tech will need maintenance. It will wear down and glitch out, and you will need to be able to access it, when it does. This means that the range of ability for those with implantables? Sometimes it’s less than that of folks with more “traditional” prostheses. But because they’re inside, or more easily made to look like the “original” limb, we observers are so much more likely to forget that there are crucial differences at play in the ownership and operation of these bodies.

There’s long been a fear that, the closer we get to being able to easily and cheaply modify humans, the more likely we’ll be to think of humanity as “perfectable.” That the myth of progress—some idealized endpoint—will be so seductive as to become completely irresistible. We’ve seen this before, in the eugenics movement, and it’s reared its head in the transhumanist and H+ communities of the 20th and 21st centuries, as well. But there is the possibility that, instead of demanding some kind of universally-applicable “baseline,” we could focus intently on recognizing the fact that just as different humans have different biochemical and metabolic needs, processes, capabilities, preferences, and desires, different beings and entities which might be considered persons are drastically different from us, but no less persons.

Because human beings are different. Is there a general framework, a loosely-defined line around which we draw a conglomeration of traits, within which lives all that we mark out as “human”—a kind of species-wide butter zone? Of course. That’s what makes us a fucking species. But the kind of essentialist language and thinking towards which we tend, after that, is reductionist and dangerous. Our language choices matter, because connotative weight alters what people think and in what context, and, again, we have a habit of moving rapidly from talking about a generalized framework of humanness to talking about “The Right Kind Of Bodies,” and the “Right Kind Of Lifestyle.”

And so, again, again, again, we must address problems such as normalized expectations of “health” and “Ability.” Trying to give everyone access to what they might consider their “best” selves is a brilliant goal, sure, whatever, but by even forwarding the project, we run the risk of colouring an expectation of both what that “best” is and what we think it “Ought To” look like.

Some people need more protein, some people need less choline, some people need higher levels of phosphates, some people have echolocation, some can live to be 125, and every human population has different intestinal bacterial colonies from every other. When we combine all these variables, we will not necessarily find that each and every human being has the same molecular and atomic distribution in the same PPM/B ranges, nor will we necessarily find that our mixing and matching will ensure that everyone gets to be the best combination of everything. It would be fantastic if we could, but everything we’ve ever learned about our species says that “healthy human” is a constantly shifting target, and not a static one.

We are still at a place where the general public reacts with visceral aversion to technological advances and especially anything like an immediated technologically-augmented humanity, and this is at least in part because we still skirt the line of eugenics language, to this day. Because we talk about naturally occurring bio-physiological Facts as though they were in any way indicative of value, without our input. Because we’re still terrible at ethics, continually screwing up at 100mph, then looking back and going, “Oh. Should’ve factored that in. Oops.”

But let’s be clear, here: I am not a doctor. I’m not a physiologist or a molecular biologist. I could be wrong about how all of these things come together in the human body, and maybe there will be something more than a baseline, some set of all species-wide factors which, in the right configuration, say “Healthy Human.” But what I am is someone with a fairly detailed understanding of how language and perception affect people’s acceptance of possibilities, their reaction to new (or hauntingly-familiar-but-repackaged) ideas, and their long-term societal expectations and valuations of normalcy.

And so I’m not saying that we shouldn’t augment humanity, via either mediated or immediated means. I’m not saying that IBM’s Watson and Google’s DeepMind shouldn’t be tasked with searching patient records and correlating data. But I’m also not saying that either of these is an unequivocal good. I’m saying that it’s actually shocking how much correlative capability is indicated by the achievements of both IBM and Google. I’m saying that we need to change the way we talk about and think about what it is we’re doing. We need to ask ourselves questions about informed patient consent, and the notions of opting into the use of data; about the assumptions we’re making in regards to the nature of what makes us humans, and the dangers of rampant, unconscious scientistic speciesism. Then, we can start to ask new questions about how to use these new tools we’ve developed.

With this new perspective, we can begin to imagine what would happen if we took Watson and DeepMind’s ability to put data into context—to turn around, in seconds, millions upon millions (billions? Trillions?) of permutations and combinations. And then we can ask them to work on tailoring genome-specific health solutions and individualized dietary plans. What if we asked these systems to catalogue literally everything we currently know about every kind of disease presentation, in every ethnic and regional population, and the differentials for various types of people with different histories, risk factors, current statuses? We already have nanite delivery systems, so what if we used Google and IBM’s increasingly ridiculous complexity to figure out how to have those nanobots deliver a payload of perfectly-crafted medical remedies?

But this is fraught territory. If we step wrong, here, we are not simply going to miss an opportunity to develop new cures and devise interesting gadgets. No; to go astray, on this path, is to begin to see categories of people that “shouldn’t” be “allowed” to reproduce, or “to suffer.” A misapprehension of what we’re about, and why, is far fewer steps away from forced sterilization and medical murder than any of us would like to countenance. And so we need to move very carefully, indeed, always being aware of our biases, and remembering to ask those affected by our decisions what they need and what it’s like to be them. And remembering, when they provide us with their input, to believe them.

I spoke with Klint Finley over at WIRED about Amazon, Facebook, Google, IBM, and Microsoft’s new joint ethics and oversight venture, which they’ve dubbed the “Partnership on Artificial Intelligence to Benefit People and Society.” They held a joint press briefing, today, in which Yann LeCun, Facebook’s director of AI, and Mustafa Suleyman, the head of applied AI at DeepMind, discussed what it was that this new group would be doing out in the world. From the article:

Creating a dialogue beyond the rather small world of AI researchers, LeCun says, will be crucial. We’ve already seen a chat bot spout racist phrases it learned on Twitter, an AI beauty contest decide that black people are less attractive than white people and a system that rates the risk of someone committing a crime that appears to be biased against black people. If a more diverse set of eyes are looking at AI before it reaches the public, the thinking goes, these kinds of thing can be avoided.

The rub is that, even if this group can agree on a set of ethical principles–something that will be hard to do in a large group with many stakeholders—it won’t really have a way to ensure those ideals are put into practice. Although one of the organization’s tenets is “Opposing development and use of AI technologies that would violate international conventions or human rights,” Mustafa Suleyman, the head of applied AI at DeepMind, says that enforcement is not the objective of the organization.

This isn’t the first time I’ve talked to Klint about the intricate interplay of machine intelligence, ethics, and algorithmic bias; we discussed it earlier this year, for WIRED’s AI Issue. It’s interesting to see the amount of attention this topic’s drawn in just a few short months, and while I’m trepidatious about the potential implementations, as I note in the piece, I’m really fairly glad that more people are more and more willing to have this discussion, at all.

To see my comments and read the rest of the article, click through, here: “Tech Giants Team Up to Keep AI From Getting Out of Hand”

Last week, Artsy.net’s Izabella Scott wrote this piece about how and why the aesthetic of witchcraft is making a comeback in the art world, which is pretty pleasantly timed as not only are we all eagerly awaiting Kim Boekbinder’s NOISEWITCH, but I also just sat down with Rose Eveleth for the Flash Forward Podcast to talk for her season 2 finale.

You see, Rose did something a little different this time. Instead of writing up a potential future and then talking to a bunch of amazing people about it, like she usually does, this episode’s future was written by an algorithm. Rose trained an algorithm called Torch not only on the text of all of the futures from both Flash Forward seasons, but also the full scripts of both the War of the Worlds and the 1979 Hitchhiker’s Guide to the Galaxy radio plays. What’s unsurprising, then, is that part of what the algorithm wanted to talk about was space travel and Mars. What is genuinely surprising, however, is that what it also wanted to talk about was Witches.

Because so far as either Rose or I could remember, witches aren’t mentioned anywhere in any of those texts.

ANYWAY, the finale episode is called “The Witch Who Came From Mars,” and the ensuing exegeses by several very interesting people and me of the Bradbury-esque results of this experiment are kind of amazing. No one took exactly the same thing from the text, and the more we heard of each other, the more we started to weave threads together into a meta-narrative.

The Witch Who Came From Mars

It’s really worth your time, and if you subscribe to Rose’s Patreon, then not only will you get immediate access to the full transcript of that show, but also to the full interview she did with PBS Idea Channel’s Mike Rugnetta. They talk a great deal about whether we will ever deign to refer to the aesthetic creations of artificial intelligences as “Art.”

And if you subscribe to my Patreon, then you’ll get access to the full conversation between Rose and me, appended to this week’s newsletter, “Bad Month for Hiveminds.” Rose and I talk about the nature of magick and technology, the overlaps and intersections of intention and control, and what exactly it is we might mean by “behanding,” the term that shows up throughout the AI’s piece.

And just because I don’t give a specific shoutout to Thoth and Raven doesn’t mean I forgot them. Very much didn’t forget about Raven.

Also speaking of Patreon and witches and whatnot, current $1+ patrons have access to the full first round of interview questions I did with Eliza Gauger about Problem Glyphs. So you can get in on that, there, if you so desire. Eliza is getting back to me with their answers to the follow-up questions, and then I’ll go about finishing up the formatting and publishing the full article. But if you subscribe now, you’ll know what all the fuss is about well before anybody else.

And, as always, there are other ways to provide material support, if a long-term subscription isn’t your thing.

Until Next Time.


If you liked this piece, consider dropping something in the A Future Worth Thinking About Tip Jar