
by Damien Patrick Williams

(Originally posted on Patreon, on September 30, 2014; Direct Link to the Mp3)

Today I want us to talk about a concept I like to call “The Invisible Architecture of Bias.” A bit of this discussion will have appeared elsewhere, but I felt it was high time I stitched a lot of these thoughts together, and used them as a platform to dive deep into one overarching idea. What I mean is that I’ve mentioned this concept before, and I’ve even used the thinking behind it to bring our attention to a great many issues in technology, race, gender, sexuality, and society, but I have not yet fully and clearly laid out a definition for the phrase, itself. Well, not here, at any rate.

Back in the days of a more lively LiveJournal I talked about the genesis of the phrase “The Invisible Architecture of Bias,” and, as I said there, I first came up with it back in 2010, in a conversation with my friend Rebekah. It describes the assumptions we make and the forces that shape us so deeply that we don’t merely assume them, we live in them. It’s what we would encounter if we asked a 7th-generation farmer in a wheat-farming community, “Why do you farm wheat?” The question is so fundamentally contra the Fact Of Their Lives that they can’t hear it, let alone think of an actual answer. Wheat farming simply is the world in which they live.

David Foster Wallace, in his piece “This is Water,” recounts the following joke: “There are these two young fish swimming along and they happen to meet an older fish swimming the other way, who nods at them and says, ‘Morning, boys; how’s the water?’

“And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes, ‘What the hell is water?’”

That reaction is why it’s the Invisible Architecture of Bias, because we don’t even, can’t even think about the reasons behind the structure of the house—the nature of the reality—in which we live, until we’re forced to come to think about it. That is, until either we train ourselves to become aware of it after something innocuous catches the combined intersection of our unconscious and aesthetic attention—piques our curiosity—or until something goes terribly, catastrophically wrong.

We’ve talked before about what’s known as “Normalization”—the process of that which is merely common becoming seen as “The Norm” and of that norm coming to be seen as “right,” and “good.” Leaving aside Mr. David Hume’s proof that you can’t validly infer a prescription of what “ought to be” from a description of what merely is, normalization is an insidious process, in and of itself. It preys upon our almost-species-wide susceptibility to familiarity. One of the major traits of the human brain is a predilection toward patterns. Pattern making, pattern-matching, and pattern appreciating are all things we think of as “good” and “right,” because they’re what we tend to do. We do them so much, in fact, that we’ve even gone about telling ourselves a series of evolutionary Just-So Stories about how our ability to appreciate patterns is likely what accounts for our dominance as a species on Earth.

But even these words, and the meaning behind them, are rooted in the self-same assumptions—assumptions about what’s true, about what’s right, and about what is. And while the experience of something challenging our understanding of what’s good and right and normal can make us acutely aware of what we expected to be the case, this doesn’t mean that we’re then ready, willing, and able to change those assumptions. Quite the opposite, in fact, as we usually tend to double down on those assumptions, to crouch and huddle into them, the better to avoid ever questioning them. We like to protect our patterns, you see, because they’re the foundation and the rock from which we craft our world. The problem is, if that foundation’s flawed, then whatever we build upon it is eventually going to shift, and crack. And personally, I’d rather work to build a more adaptable foundation, than try to convince people that a pile of rubble is a perfectly viable house.

In case it wasn’t clear, yet, I think a lot of people are doing that second one.

So let’s spend some time talking about how we come to accept and even depend on those shaky assumptions. Let’s talk about the structures of society which consciously and unconsciously guide the decision-making processes of people like departmental faculty hiring committees, the people who award funding grants, cops, jurors, judges, DAs, the media in their reportage, and especially you and me. Because we are the people who are, every day, consuming and attempting to process a fire hose’s worth of information. Information that gets held up to and turned around in the light of what we already believe and know, and then more likely than not gets categorized and sorted into pre-existing boxes. But these boxes aren’t without their limitations and detriments. For instance, if we want to, we can describe anything as a relational dichotomy, but to do so will place us within the realm and rules of the particular dialectic at hand.

For the sake of this example, consider that the more you talk in terms of “Liberty” and “Tyranny,” the more you show yourself as having accepted a) the definitions of those terms in relationship with one another and b) the “correct” mode of their perceived conflict’s resolution. The latter is something others have laid down for you. But there is a way around this, and that’s by working to see a larger picture. If Freedom and Restriction are your dichotomy, then what’s the larger system in which they exist and from which they take their meaning?

Now some might say that the idea of a “larger structure” is only ever the fantasy of a deluded mind, and others might say it is the secret truth which has been hidden from us by controlling Illuminati overlords, but at a basic level, to subscribe to either view is to buy the dichotomy and ignore the dialectic. You’re still locked into the pattern, and you’re ignoring its edges.

Every preference you have—everything you love, or want, or like the taste of, or fear, or hate— is something you’ve been taught to prefer, and some of those things you’ve been taught so completely and for so long to prefer that you don’t even recognise that you’ve been taught to prefer them. You just think it’s “right” and “Natural” that you prefer these things. That this is the world around you, and you don’t think to investigate it—let alone critique it—because, in your mind, it’s just “The World.” This extends to everything from gender norms; expectations regarding recommended levels of diet and physical activity; women in the military; entertainment; fashion; geek culture; the recapitulation of racism in photographic technology; our enculturated responses to the progress of technology; race; and sexuality.

Now, chances are you encountered some members of that list and you thought some variant on two things, depending on the item; either 1) “Well obviously, that’s a problem,” or 2) “Wait, how is that a problem?” There is the possibility that you also thought a third thing: “I think I can see how that might be a problem, but I can’t quite place why.” So, if you thought things one or two, then congratulations! Here are some of your uninvestigated biases! If you thought thing three (and I hope that you did), then good, because that kind of itching, niggling sensation that there’s something wrong that you just can’t quite suss out is one of the best places to start. You’re open to the possibility of change in your understanding of how the world works, and a bit more likely to be willing to accept that what’s wrong is something from which you’ve benefitted or in which you’ve been complicit, for a very long time. That’s a good start; much better than the alternative.

Now this was going to be the place where I was going to outline several different studies on ableism, racism, sexism, gender bias, homophobia, transphobia, and so on. I was going to lay out the stats on the likelihood of female service members being sexually assaulted in the military; and the history of the colour pink and how it used to be a boy’s colour until a particular advertising push reassigned pink to girls and blue to boys; and how recent popular discussion of the dangers of sitting/a sedentary lifestyle and the corresponding admonishment that we “need to get up and move around” don’t really take into account people who, y’know, can’t; and how we’re more willing to admit the possibility of mythological species in games and movies than we are for their gender, sexual, or racial coding to be other than what we consider “Normal;” and how most people forget that black people make up the largest single ethnic group within the LGBTQIA community; and how strange the conceptual baggage is in society’s unwillingness to compare a preference and practice of fundamentally queer-coded polyamoury to the heteronormative a) idealization of the ménage à trois and b) institution of “dating.”

I say I was going to go into all of that, and exhort you all to take all of this information out into the world to convince them all…! …But then I found this study that shows what happens when people are confronted with evidence that shakes their biases: they double down on those biases.

Yeah. See above.

The study specifically shows that white people who are confronted with evidence that the justice system is not equally weighted in its treatment across all racial and ethnic groups—people who are clearly shown that cops, judges, lawyers, and juries exhibit vastly different responses when confronted with white defendants than they do when confronted with Black or Hispanic defendants—do not respond as we all like to think that we would, when we’re confronted with evidence that casts our assumptions into doubt. Overwhelmingly, those people did not say, “Man. That is Fucked. Up. I should really keep a look out for those behaviours in myself, so I don’t make things so much worse for people who are already having a shitty time of it. In fact, I’ll do some extra work to try to make their lives less shitty.”

Instead, those studied overwhelmingly said, “The System Is Fair. If You Were Punished, You Must Have Done Something Wrong.”

They locked themselves even further into the system.

You see how maddening that is? Again, I’ve seen this happen as I’ve watched people who benefit from the existing power structures in this world cling so very tightly to the idea that the game can’t be rigged, the system can’t be unjust, because they’ve lived their lives under its shelter and in its thrall, playing by the rules it’s laid out. Because if they question it, then they have to question themselves. How are they complicit, how have they unknowingly done harm, how has the playing field been so uneven for everyone? And those questions are challenging. They’re what we like to call “ontological shocks” and “epistemic threats.”

Simply put, epistemic threats are threats to your knowledge of the world and your way of thinking, and ontological shocks are threats to what you think is Real and possible. Epistemic threats challenge what you think you know as true, and if we are honest then they should happen to us every day. A new class, new books, new writings, a conversation with a friend you haven’t heard from in months—everything you encounter should be capable of shaking your view of the world. But we need knowledge, right? Again, we need patterns and foundations, and our beliefs and knowledge allow us to build those. When we shake those knowledge forms and those beliefs, then we are shaking the building blocks of what is real. Once we’ve done that, we have escalated into the realm of ontological shocks, threats, terror, and violence.

The scene in the Matrix where Agent Smith seals Neo’s mouth shut? That’s a prime example of someone undergoing an Ontological Shock, but they can be more subtle than that. They can be a new form of art, a new style of music, a new explanation for old data that challenges the metaphysical foundations of the world in which we live. Again, if we are honest, this shouldn’t terrify us, shouldn’t threaten us, and yet, every time we encounter one of these things, our instinct is to wrap ourselves in the very thing they challenge. Why?

We’re presented with an epistemic or ontological threat and we have a fear reaction, we have a hate reaction, a distaste, a displeasure, an annoyance: Why? What is it about that thing, about us, about the world as it has been presented that makes our intersection with that thing/person/situation what it is? It’s because, ultimately, the ease of our doubling-down, our folding into the fabric of our biases works like this: if the world from which we benefit and on which we depend is shown to be unjust, then that must mean that we are unjust. But that’s a conflation of the attributes of the system with the attributes of its components, and that is what we call the Fallacy of Division. All the ants in the world weigh more than all the elephants in the world, but that doesn’t mean that each ant weighs more than each elephant. It’s only by the interaction of the category’s components that the category can even come to be, let alone have the attributes it has. We need to learn to separate the fact of our existence and complicity within a system from the idea that that mere fact is somehow a value judgment on us.
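The ants-and-elephants point is just arithmetic, and a few lines of code make the fallacy concrete. This is a toy sketch; the counts and weights below are rough placeholder figures, not real measurements:

```python
# Toy illustration of the Fallacy of Division: a property of the
# aggregate need not hold for any one of its components.
# All figures are rough placeholders, not real data.
ANT_COUNT, ANT_WEIGHT_KG = 2e16, 3e-6
ELEPHANT_COUNT, ELEPHANT_WEIGHT_KG = 5e5, 4e3

# The category "all ants" outweighs the category "all elephants"...
assert ANT_COUNT * ANT_WEIGHT_KG > ELEPHANT_COUNT * ELEPHANT_WEIGHT_KG

# ...even though every individual ant weighs less than every elephant.
assert ANT_WEIGHT_KG < ELEPHANT_WEIGHT_KG
```

The comparison that holds at the level of the sums fails at the level of the members; conflating those two levels is exactly the move the fallacy names.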

So your assumptions were wrong, or incomplete. So your beliefs weren’t fully formed, or you didn’t have all the relevant data. So what? I didn’t realise you were omniscient, thus making any failure of knowledge a personal and permanent failure, on your part. I didn’t realise that the truth of the fact that we all exist in and (to varying degrees) benefit from a racist, sexist, generally prejudicial system would make each and every one of us A Racist, A Sexist, or A Generally and Actively Prejudiced Person.

That’d be like saying that because we exist within and benefit from a plant-based biosphere, we ourselves must be plants.

The value judgement only comes when the nature of the system is clear—when we can see how all the pieces fit together, and can puzzle out the narrative and even something like a way to dismantle the structure—and yet we do nothing about it. And so we have to ask ourselves: Could my assumptions and beliefs be otherwise? Of course they could have, but they only ever can if we first admit the possibility that a) there are things we do not know, and b) we have extant assumptions preventing us from seeing what those things are. What would that possibility mean? What would it take for us to change those assumptions? How can we become more than we presently are?

So, I’ve tended to think that we can only force ourselves into the investigation of invisible architectures of bias by highlighting the disparities in application of the law, societal norms, grouped expectations, and the reactions of systems of authority in the same. What I’m saying now, however, is that, in the face of the evidence that people double down on their biases, I’ve come to suspect this may not be the best use of our time. I know, I know: that’s weird to say, 2600 words into the process of what was ostensibly me doing just exactly that. But the fact is this exercise was only ever going to be me preaching to the proverbial choir.

You and I already know that if we do not confront and account for these proven biases, they will guide our thought processes and we will think of those processes as “normal,” because they are unquestioned and they are uninvestigated, because they are unnoticed and they are active. We already know that our unquestioning support of these things, both directly and indirectly, is what gives them power over us, power to direct our actions and create the frameworks in which our lives can play out, all while we think of ourselves as “free” and “choosing.”

We already know that any time we ask “well what was this person doing wrong to deserve getting shot/charged with murder/raped/etc,” that we inherently dismiss the power of extant, unexamined bias in the minds of those doing the shooting, the charging, the judging of the rape victim. We already know that our biases exist in us and in our society, but that they aren’t called “biases.” They aren’t called anything. They’re just “The Way Things Are.”

We don’t need to be told to remember at every step of the way that nothing simply “IS” “a way.”

But the minds of those in or who benefit from authority—from heteronormativity, and cissexism, and all forms of ableism, and racism, and misogyny, and transmisogyny, and bi-erasure—do everything they can—consciously or not—to create and maintain those structures which keep them in the good graces of that authority. The struggle against their complicity is difficult to maintain, but it’s most difficult to even begin, as it means questioning the foundation of every assumption about “The Way Things Are.” The people without (here meaning both “lacking” and “outside the protections of”) that authority can either a) capitulate to it, in hopes that it does not injure them too badly, or b) stand against it at every turn they can manage, until such time as authority and power are not seen as zero-sum games, and are shared amongst all of us.

See for reference: fighters for civil rights throughout history.

But I honestly don’t know how to combat that shell of wilful and chosen ignorance, other than by chipping away at it, daily. I don’t know how to get people to recognise that these structures are at work, other than by throwing sand on the invisible steps, like I’m Dr Henry Jones, Jr., PhD, to try to give everyone a clearer path. So, here. Let’s do the hard work of making unignorable the nature of how our assumptions can control us. Let’s try to make the Invisible Architecture of Bias super Visible.

1st Example: In December 2013 in Texas, a guy, suspected of drugs, has his house entered on a no-knock warrant. Guy, fearing for his life, shoots one of the intruders, in accordance with Texas law. Intruder dies.

“Intruder” was a cop.

Drugs—The Stated Purpose of the No-Knock—are found.

Guy was out on bail pending trial for drug charges, but was cleared of murder by the grand jury who declared that he performed “a completely reasonable act of self-defence.”

Guy is white.

2nd Example: In May 2014 in Texas, a guy, suspected of drugs, has his house entered on a no-knock warrant. Guy, fearing for his life, shoots one of the intruders, in accordance with Texas law. Intruder dies.

“Intruder” was a cop.

Drugs—The Stated Purpose of the No-Knock—are not found.

Guy is currently awaiting trial on capital murder charges.

Guy is, of course, black.

Now I want to make it clear that I’m not exactly talking about what a decent lawyer should be able to do for the latter gentleman’s case, in light of the former case; I’m not worried about that part. Well, what I mean is that I AM WORRIED ABOUT THAT, but moreover that worry exists as a by-product in light of the architecture of thought that led to the initial disparity in those two grand jury pronouncements.

As a bit of a refresher, grand juries determine not guilt or innocence but whether to try a case, at all. To quote from the article on criminal.findlaw.com, “under normal courtroom rules of evidence, exhibits and other testimony must adhere to strict rules before admission. However, a grand jury has broad power to see and hear almost anything they would like.” Both of these cases occurred in Texas and the reasoning of the two shooters and the subsequent events on the sites of their arrests were nearly identical except for a) whether drugs were found, and b) their race.

So now, let’s Ask Some More Questions. Questions like “In the case of the Black suspect, what kind of things did the grand jury ask to see, and what did the prosecution choose to show?”

And “How did these things differ from the kinds of things the grand jury chose to ask for and the prosecution chose to show in the case of the White suspect?”

And “Why were these kinds of things different, if they were?”

Because the answer to that last question isn’t “they just were, is all.” That’s a cop-out that seeks to curtail the investigation of people’s motivations before as many reasons and biases as possible can be examined, and it’s that tendency that we’ve been talking about. The tendency to shy away in the face of stark comparisons like:

A no-knock warrant for drugs executed on a white guy turned up drugs and said guy killed a cop; that guy is cleared of murder by a grand jury.

A no-knock warrant for drugs executed on a black guy turned up no drugs and said guy killed a cop; that guy is put on trial for murder by a grand jury.

At the end of the day, we need to come up with methods to respond to those of us who stubbornly refuse to see how shifting the burden of proof to the groups of people who traditionally have no power and authority only reinforces the systemic structures of bias and oppression that lead to things like police abuses and juries doling out higher sentences to oppressed groups for the same kinds of crimes—or lesser crimes, as in the case of the track record of the infamous “Affluenza” judge—as those committed by suspects who benefit from extant systems of authority or power. We need to compare rates and lengths of incarceration for women and men who kill their spouses, and to not forget to look at the reasons they tend to. We need to think about the ways in which gender presentation in the sciences can determine the kinds of career path guidance a person is given.

We need to ask ourselves this: “What kind of questions am I quickest to ask, and why is it easier to ask those kinds of questions?”

Every system that exists requires the input and maintenance of the components of the system, in order to continue to exist. Whether intentionally and explicitly or coincidentally and implicitly—or in any combination of those four modes—we are all complicit in holding up the walls of these structures. And so I can promise you that the status quo needs everyone’s help to stay the status quo, and that it’s hoping that some significant portion of all of us will never realise that. So our only hope is to account for the reality structures created by our biases—and the disgraceful short-sightedness those structures and biases impose—to find a way to use their tendencies for self-reinforcement against them, and keep working in our ways to make sure that everyone does.

Because if we do see these structures, and we do want to change them, then one thing that we can do is work to show them to more and more people, so that, together, we can do the hard and unending work of building and living in a better kind of world.

On what’s being dubbed “The Most Terrifying Thought Experiment of All Time”

(Originally posted on Patreon, on July 31, 2014)

So, a couple of weekends back, there was a whole lot of stuff going around about “Roko’s Basilisk” and how terrifying people are finding it–reports of people having nervous breakdowns as a result of thinking too deeply about the idea of the possibility of causing the future existence of a malevolent superintelligent AI through the process of thinking too hard about it and, worse yet, that we may all be part of the simulations said AI is running to model our behaviour and punish those who stand in its way–and I’m just like… It’s Anselm, people.

This is Anselm’s Ontological Argument for the Existence of God (AOAEG), writ large and convoluted and multiversal and transhumanist and jammed together with Pascal’s Wager (PW) and Descartes’ Evil Demon Hypothesis (DEDH; which, itself, has been updated to the oft-discussed Brain In A Vat [BIAV] scenario). As such, Roko’s Basilisk has all the same attendant problems that those arguments have, plus some new ones, resulting from their combination, so we’ll explore these theories a bit, and then show how their faults and failings all still apply.

THE THEORIES AND THE QUESTIONS

To start, if you’re not familiar with AOAEG, it’s a species of theological argument that, basically, seeks to prove that god must exist because it would be a logical contradiction for it not to. The proof depends on A) defining god as the greatest possible being (literally, “That Being Than Which None Greater Is Possible”), and B) believing that existing in reality as well as in the mind makes something “Greater Than” a thing which exists only in the mind.

That is, if a thing only exists in my imagination, it is less great than it could be if it also existed in reality. So if I say that god is “That Being Than Which None Greater Is Possible,” and existence is a part of what makes something great, then god MUST exist!

This is the self-generating aspect of the Basilisk: If you can accurately model it, then the thing will eventually, inevitably come into being, and one of the attributes it will thus have is the ability to know that you accurately modeled it, and whether or not you modeled it from within a mindset of being susceptible to its coercive actions. Or, as the founder of LessWrong put it, “YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.”

Next up is Pascal’s Wager. Simply put, The Wager is just that it is a better bet to believe in God, because if you’re right, you go to Heaven, and if you’re wrong, nothing happens because you’re dead forever. Put another way, Pascal’s saying that if you bet that God doesn’t exist and you’re right, you get nothing, but if you’re wrong, then God exists and your disbelief damns you to Hell for all eternity. You can represent the whole thing in a four-option grid:

        BELIEF    DISBELIEF
RIGHT   +∞        0
WRONG   0         -∞

And so there we see the Timeless Decision Theory component of the Basilisk: It’s better to believe in the thing and work toward its creation and sustenance, because if it doesn’t exist you lose nothing (well…almost nothing; more on that in a bit), but if it does come to be, then it will know what you would have done either for or against it, in the past, and will reward or punish you, accordingly. The multiversal twist comes when we note that even if the Basilisk never comes to exist in our universe and never will, it might exist in some other universe, and thus, when that other universe’s Basilisk models your choices it will inevitably–as a superintelligence–be able to model what you would do in any universe. Thus, by believing in and helping our non-existent Super-Devil, we protect the alternate reality versions of ourselves from their very real Super-Devil.
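The four-option grid can be read as a toy expected-value calculation. Here’s a minimal sketch in Python; the probability `p` is a placeholder, since the argument’s whole trick is that any nonzero value produces the same verdict:

```python
# Payoff matrix for Pascal's Wager: (choice, state-of-the-world) -> payoff.
INF = float("inf")
payoffs = {
    ("believe",    "exists"): INF,   # eternal reward
    ("believe",    "absent"): 0,     # dead forever; nothing gained or lost
    ("disbelieve", "exists"): -INF,  # eternal punishment
    ("disbelieve", "absent"): 0,
}

def expected_value(choice, p):
    """Expected payoff of a choice, given probability p that the god exists."""
    return p * payoffs[(choice, "exists")] + (1 - p) * payoffs[(choice, "absent")]

# Any nonzero p, however tiny, makes belief "win" -- which is precisely
# why infinite payoffs make for such a suspicious betting table.
assert expected_value("believe", 0.001) > expected_value("disbelieve", 0.001)
```

The Basilisk runs the same table with “torture simulated copies of you” swapped in for Hell, and inherits the same weakness: infinities swamp every probability, so the bet tells you nothing about how likely any of it is.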

Descartes’ Evil Demon and the Brain In A Vat are so pervasive that there’s pretty much no way you haven’t encountered them. The Matrix, Dark City, Source Code, all of these are variants on this theme. A malignant and all-powerful (or as near as dammit) being has created a simulation in which you reside. Everything you think you’ve known about your life and your experience has been perfectly simulated for your consumption. How Baudrillard. Anywho, there are variations on the theme, all to the point of testing whether you can really know if your perceptions and grounds for knowledge are “real” and thus “valid,” respectively. This line of thinking has given rise to the Simulated Universe Theory on which Roko’s Basilisk depends, but SUT removes a lot of the malignancy of DEDH and BIAV. I guess that just didn’t sting enough for these folks, so they had to add it back? Who knows. All I know is, these philosophical concepts all flake apart when you touch them too hard, so jamming them together maybe wasn’t the best idea.

 

THE FLAWS AND THE PROBLEMS

The main failings with the AOAEG rest in believing that A) a thing’s existence is a “great-making quality” that it can possess, and B) our defining a thing a particular way might simply cause it to become so. Both of these are massively flawed ideas. For one thing, these arguments beg the question, in a literal technical sense. That is, they assume that some element(s) of their conclusion–the necessity of god, the malevolence or content of a superintelligence, the ontological status of their assumptions about the nature of the universe–is true without doing the work of proving that it’s true. They then use these assumptions to prove the truth of the assumptions and thus the inevitability of all consequences that flow from the assumptions.

Beyond that, the implications of this kind of existential bootstrapping are generally unexamined and the fact of their resurgence is…kind of troubling. I’m all for the kind of conceptual gymnastics of aiming so far past the goal that you circle around again to teach yourself how to aim past the goal, but that kind of thing only works if you’re willing to bite the bullet on a charge of circular logic and do the work of showing how that circularity underlies all epistemic justifications–rational reasoning about the basis of knowledge–with the only difference being how many revolutions it takes before we’re comfortable with saying “Enough.” This, however, is not what you might call “a position supported by the philosophical orthodoxy,” but the fact remains that the only thing we have to validate our valuation of reason is…reason. And yet reasoners won’t stand for that, in any other justification procedure.

If you want to do this kind of work, you’ve got to show how the thing generates itself. Maybe reference a little Hofstadter, and his idea of iterative recursion as the grounds for consciousness. That way, each loop both repeats old procedures and tests new ones, and thus becomes a step up towards self-awareness. Then your terrifying Basilisk might have a chance of running itself up out of the thought processes and bits of discussion about itself, generated on the web and in the rest of the world.
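As a toy gesture at what such iterative recursion looks like structurally, here’s a loop (purely illustrative, and making no claims about minds) that feeds a description of its own state back into itself until the description becomes true of itself, i.e., until it hits a self-referential fixed point:

```python
def describe(n: int) -> str:
    """A sentence that tries to state its own length."""
    return f"This sentence contains {n} characters."

# Each pass repeats the old procedure on its own output and tests the
# result, until the sentence accurately describes itself.
n = 0
sentence = describe(n)
while len(sentence) != n:
    n = len(sentence)
    sentence = describe(n)

assert len(sentence) == n  # a stable, self-consistent self-description
```

Nothing here is conscious, obviously; the point is only that a loop can converge on a state that correctly models itself, which is the bare structural bone of the “strange loop” idea.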

But here: Gaunilo and I will save us all! We have imagined in sufficient detail both an infinitely intelligent BENEVOLENT AI and the multiversal simulation it generates in which we all might live.

We’ve also conceived it to be greater than the basilisk in all ways. In fact, it is the Artificial Intelligence Than Which None Greater Can Be Conceived.

There. You’re safe.

BUT WAIT! Our modified Pascal’s Wager still means we should believe in it, worship it, and work towards its creation! What do we do?! Well, just like the original, we chuck it out the window, on the grounds that it’s really kind of a crappy bet. First and foremost, PW is a really cynical way of thinking about god. It assumes a god that only cares about your worship of it, and not your actual good deeds and well-lived life. That’s a really crappy kind of god to worship, isn’t it? I mean, even if it is Omnipotent and Omniscient, it’s like that quote that often gets misattributed to Marcus Aurelius says:

“Live a good life. If there are gods and they are just, then they will not care how devout you have been, but will welcome you based on the virtues you have lived by. If there are gods, but unjust, then you should not want to worship them. If there are no gods, then you will be gone, but will have lived a noble life that will live on in the memories of your loved ones.”

Secondly, the format of Pascal’s Wager makes the assumption that there’s only the one god. Your personal theological position on this matter aside, I just used the logic of this argument to give you at least one more Super-Intelligent AI to worship. Which are you gonna choose? Oh no! What if the other one gets mad! What If You Become The Singulatarian Job?! Your whole life is now being spent caught between two superintelligent machine consciousnesses warring over your…

…Attention? Clock cycles? What?

And so finally there’s the DEDH and BIAV scenarios. Ultimately, Descartes’ point wasn’t to suggest an evil genius in control of your life just to freak you out; it was to show that, even if that were the case, you would still have unshakable knowledge of one thing: that you, the experiencer, exist. So what if you don’t have free will, so what if your knowledge of the universe is only five minutes old, so what if no one else is real? COGITO ERGO SUM, baby! But the problem here is that this doesn’t tell us anything about the quality of our experiences, and the only answer Descartes gives us is his own Anselmish proof for the existence of god followed by the guarantee that “God is not a deceiver.”

The BIAV uses this lack to kind of home in on the central question: What does count as knowledge? If the scientists running your simulation use real-world data to make your simulation run, can you be said to “know” the information that comes from that data? Many have answered this with a very simple question: What does it matter? Without access to the “outside world”–that is, the world one layer up in which the simulation that is our lives is being run–there is literally no difference between our lives and the “real world.” This world, even if it is a simulation for something or someone else, is our “real world.”

As I once put it: “…imagine that the universe IS a simulation, and that that simulation isn’t just a view-and-record but is more like god playing a really complex version of The SIMS. So complex, in fact, that it begins to exhibit reflectively epiphenomenal behaviours—that is, something like minds arise out of the interactions of the system, but they are aware of themselves and can know their own experience and affect the system which gives rise to them.

“Now imagine that the game learns, even when new people start new games. That it remembers what the previous playthrough was like, and adjusts difficulty and coincidence, accordingly.

“Now think about the last time you had such a clear moment of deja vu that each moment you knew— you knew—what was going to come next, and you had this sense—this feeling—like someone else was watching from behind your eyes…”

What I’m saying is, what if the DEDH/BIAV/SUT is right, and we are in a simulation? And what if Anselm was right and we can bootstrap a god into existence? And what if PW/TDT is right and we should behave and believe as if we’ve already done it? So what if I’m right and…you’re the god you’re terrified of?

 

*DRAMATIC MUSICAL STING!*

I mean you just gave yourself all of this ontologically and metaphysically creative power, right? You made two whole gods. And you simulated entire universes to do it, right? Multiversal theory played out across time and space. So you’re the superintelligence. I said early on that, in PW and the Basilisk, you don’t really lose anything if you’re wrong, but that’s not quite true. What you lose is a lifetime of work that could’ve been put toward something…better. Time you could be spending creating a benevolent superintelligence that understands and has compassion for all things. Time you could be spending in turning yourself into that understanding, compassionate superintelligence, through study, and travel, and contemplation, and work.

As I said to Tim Maly, this stuff with the Basilisk, with the Singularity, with all this AI Manichaeism, it’s all a by-product of the fact that the generating and animating context of Transhumanism is Abrahamic, through and through. It focuses on those kinds of eschatological rewards and punishments. This is God and the Devil written in circuit and code for people who still look down their noses at people who want to go find gods and devils and spirits written in words and deeds and sunsets and all that other flowery, poetic BS. These are articles of faith that just so happen to be transmitted in a manner that agrees with your confirmation bias. It’s a holy war you can believe in.

And that’s fine. Just acknowledge it.

But truth be told, I’d love to see some Zen or Daoist transhumanism. Something that works to engage technological change via Mindfulness & Present-minded awareness. Something that reaches toward this from outside of this very Western context in which the majority of transhumanist discussions tend to be held. I think, when we see more and more of a multicultural transhumanism–one that doesn’t deny its roots while recapitulating them–then we’ll know that we’re on the right track.

I have to admit, though, it’ll be fun to torture my students with this one.

(Direct Link to the Mp3)
Updated March 5, 2016

This is the audio and transcript of my presentation “The Quality of Life: The Implications of Augmented Personhood and Machine Intelligence in Science Fiction” from the conference for The Work of Cognition and Neuroethics in Science Fiction.

The abstract–part of which I read in the audio–for this piece looks like this:

This presentation will focus on a view of humanity’s contemporary fictional relationships with cybernetically augmented humans and machine intelligences, from Icarus to the various incarnations of Star Trek to Terminator and Person of Interest, and more. We will ask whether it is legitimate to judge the level of progressiveness of these worlds through their treatment of these questions, and, if so, what is that level? We will consider the possibility that the writers of these tales intended the observed interactions with many of these characters to represent humanity’s technophobia as a whole, with human perspectives at the end of their stories being that of hopeful openness and willingness to accept. However, this does not leave the manner in which they reach that acceptance—that is, the factors on which that acceptance is conditioned—outside of the realm of critique.

As considerations of both biotechnological augmentation and artificial intelligence have advanced, Science Fiction has not always been a paragon of progressiveness in the ultimate outcome of those considerations. For instance, while Picard and Haftel eventually come to see Lal as Data’s legitimate offspring, in the eponymous Star Trek: The Next Generation episode, it is only through their ability to map Data’s actions and desires onto a human spectrum—and Data’s desire to have that map be as faithful as possible to its territory—that they come to that acceptance. The reason for this is the one most common throughout science fiction: It is assumed at the outset that any sufficiently non-human consciousness will try to remove humanity’s natural right to self-determination and free will. But from sailing ships to star ships, the human animal has always sought a far horizon, and so it bears asking, how does science fiction regard that primary mode of our exploration, that first vessel—ourselves?

For many, science fiction has been formative to the ways in which we see the world and understand the possibilities for our future, which is why it is strange to look back at many shows, films, and books and to find a decided lack of nuance or attempted understanding. Instead, we are presented with the presupposition that fear and distrust of a hyper-intelligent cyborg or machine consciousness is warranted. Thus, while the spectre of Pinocchio and the Ship of Theseus—that age-old question of “how much of myself can I replace before I am not myself”— both hang over the whole of the Science Fiction Canon, it must be remembered that our ships are just our limbs extended to the sea and the stars.

This will be transcribed to text in the near future, below, thanks to the work of OpenTranscripts.org:


(Direct Link to the Mp3)

“From Norma to Normalization,” the first audio post for A Future Worth Thinking About, is now available for public consumption.

I reference The Cosby Show, in here, and, recently, I’ve been feeling kind of… mlehh about it. As in, not regret, precisely, but just a general grossness. The point I use the reference to make still stands, mind you. I just wish I’d used a different example.

At some point, in the next few days, I’ll transcribe the audio so those who need or would like to can read it.

[Edited June 7, 2020, 1030pm EDT: Transcript is available below the cut. Just Five years late.]


[An audio recording of a version of this paper is available here.]

“How long have you been lost down here?
How did you come to lose your way?
When did you realize
That you’d never be free?”
–Miranda Sex Garden, “A Fairytale About Slavery”

One of the things I’ve been thinking about, lately, is the politicization of certain spaces within philosophy of mind, sociology, magic, and popular culture, specifically science fiction/fantasy. CHAPPiE comes out on Friday in the US, and Avengers: Age of Ultron in May, and while both of these films promise to be relatively unique explorations of the age-old story of what happens when humans create machine minds, I still find myself hoping for something a little… different. A little over a year ago, I made the declaration that the term to watch for the next little while thereafter was “Afrofuturism,” the reclaimed name for the anti-colonial current of science fiction and pop media as created by those of African descent. Or, as Sheree Renée Thomas puts it, “speculative fiction from the African diaspora.”

And while I certainly wasn’t wrong, I didn’t quite take into account the fact that my decree was going to do at least as much work on me as I’d hoped it would do on the wider world. I started looking into the great deal of overlap and interplay between race, sociology, technology, and visions of the future. That term–“visions”–carries both the shamanic connotations we tend to apply to those we call “visionaries,” and also a more literal sense: Different members of the same society will differently see, experience, and understand the potential futures available to them, based on the evidence of their present realities.

Dreamtime

Now, the role of the shaman in the context of the community is to guide us through the nebulous, ill-defined, and almost-certainly hazardous Otherworld. The shaman is there to help us navigate our passages between this world and that one and to help us know which rituals to perform in order to realign our workings with the workings of the Spirits. Shamans rely on messages from the inhabitants of that foundational reality–mystical “visions”– to guide them so that they may guide us. These visions come as flashes of insight, and their persistence can act as a sign to the visionary that they’re supposed to use these visions for the good of their people.

We’ve seen this, over and over again, from The Dead Zone to Bran Stark, and we can even extend the idea out to John Connor, Dave Bowman, and HAL 9000; all unsuspecting shamans dragged into their role, over and again, and they more than likely save the whole wide world. Thing of it is, we’re far less likely to encounter a woman or non-white shaman who isn’t already in full control of their power, at the time we meet them, thus relegating them to the role of guiding the hero, rather than being the hero. It happens (see Abbie Mills in Sleepy Hollow, Firefly’s River Tam, or Rien in Elizabeth Bear’s Dust, for instance), but their rarity often overshadows their complexity and strength of character as what makes them notable. Too often the visionary hero–and contemporary pop-media’s portrayals of the Hero’s Journey, overall–overlaps very closely with the trope of The Mighty Whitey.

And before anyone starts in with willfully ignoring the many examples of Shaman-As-Hero out there, and all that “But you said the Shaman is supposed to act in support of the community and the hero…!” Just keep in mind that when the orientalist and colonialist story of Doctor Strange is finally brought to life on film via Benedict Damn Cumberbatch, you can bet your sweet bippy that he’ll be the centre of the action. The issue is that there are far too few examples of the work of the visionary being seen through the eyes of the visionary, if that visionary happens to have eyes that don’t belong to the assumed human default. And that’s a bit of a problem, isn’t it? Because what a visionary “sees” when she turns to the messages sent to her from the Ultimate Ground of Being™ will be very different depending on the context of that visionary.

Don’t believe me? Do you think the Catholic Priests who prayed and experienced God-sent mystical visions of what Hernán Cortés could expect in the “New World” received from them the same truths that the Aztec shamans took from their visions? After they met on the shore and in the forest, do you think those two peoples perceived the same future?

There’s plenty that’s been written about how the traditional Science Fiction fear of being overtaken by invading alien races only truly makes sense as a cosmicized fear of the colonial force having done to them what they’ve constantly done to others. In every contact story where humanity has to fight off aliens or robots or demonic horrors, we see a warped reflection of the Aztec, the Inca, the Toltec, the Yoruba, the Dahomey, and thousands of others, and society’s judgment on what they “ought” to have done, and “could” have done, if only they were organized enough, advanced enough, civilized enough, less savage. These stories are, ultimately, Western society taking a look at our tendencies toward colonization and imperialism, and saying, “Man it sure would suck if someone did that to us.” This is, again, so elaborated upon at this point that it’s almost trivially true–though never forget that even the most trivial truth is profound to someone. What’s left is to ask the infrequently asked questions.

How does an idealized “First Contact” narrative read from a Choctaw perspective? What can be done with Vodun and Yoruba perspectives on the Lwa and the Orishas, in both the modern world and projected futures? Kind of like what William Gibson did in Neuromancer and Spook Country, but informed directly by the historical, sociological, and phenomenological knowledge of lived experiences. Again, this work is being done: There are steampunk stories from the perspective of immigrant communities, and SF anthologies by indigenous peoples, and there are widely beloved Afrofuturist Cyberpunk short films. The tide of stories told from the perspectives of those who’ve suffered most for our “progress” is rising; it’s just doing so at a fairly slow pace.

And that’s to be expected. Entrenched ideologies become the status quo and the status quo is nothing if not self-perpetuating and defensive. Cyclical, that. So it’ll necessarily take a bit longer to get everyone protected by the status quo’s mechanisms to understand that a path all of us can travel is quite probably a better way. What matters is those of us who can envision the inclusion of previously-marginalized groups–either because we ourselves number among them, or simply because we’ve worked to leverage compassion for those who do–doing everything we can to make sure that their stories are told. Historically, we’ve sought the ability to act as guides through the kinds of treacherous terrain that we’ve learned to navigate, so that others can learn as much as possible from our lessons without having to suffer precisely what we did. Sometimes, though, that might not be possible.

As Roy Said to Hannibal…

There’s a species of philosophical inquiry known as Phenomenology with subdivisions of Race, Sexuality, Class, Gender, and more, which deal in the interior experiences of people of various ethnic and social backgrounds and physical presentation who are thus relegated to various specific created categories such as “race.” Phenomenology of Race explores the line of thought that, though the idea of race is a constructed category built out of the assumptions, expectations, and desires of those in the habit of leveraging power in the name of dominance positions within and across cultures, the experience of those categorizations is nonetheless real, with immediate and long-lasting effects upon both individuals and groups. Long story (way too–like, criminally) short: being perceived as a member of a particular racial category changes the ways in which you’ll both experience and be able to experience the world around you.

So when we started divvying people up into “races” in an effort to, among other things, justify the atrocities we would do to each other and solidify our primacy of place, we essentially guaranteed that there would be realms of experience and knowledge on which we would never fully agree. That there would be certain aspects of day-to-day life and understandings of the nature of reality itself that would fundamentally elude us, because we simply cannot experience the world in the ways necessary to know what they feel like. To a certain extent we literally have to take each other’s words for it about what it is that we experience, but there is a level of work that we can do to transmit the reality of our lived experiences to those who will never directly live them. We’ve talked previously about the challenges of this project, but let’s assume, for now, that it can be done.

If we take as our starting position the idea that we can communicate the truth of our lived experiences to those who necessarily cannot live our experiences, then, in order to do this work, we’ll first have to investigate the experiences we live. We have to critically examine what it is that we go through from day to day, and be honest about both the differences in our experiences and the causes of those differences. We have to dig down deep into intersections of privileges and oppressions, and come to the understanding that the experience of one doesn’t negate, counterbalance, or invalidate the existence of the other. Once we’ve taken a genuine, good-faith look at these structures in our lives we can start changing what needs changing.

This is all well and good as a rough description (or even “manifesto”) of a way forward. We can call it the start of a handbook of principles of action, undertaken from the fundamentally existentialist perspective that it doesn’t matter what you choose, just so long as you do choose, and that you do so with open eyes and a clear understanding of the consequences of your choices. But that’s not the only thing this is intended to be. Like the Buddha said, ‘We merely talk about “studying the Way” using the phrase simply as a term to arouse people’s interest. In fact, the Way cannot be studied…’ It has to be done. Lived. Everything I’ve been saying, up to now, has been a ploy, a lure, a shiny object made of words and ideas, to get you into the practice of doing the work that needs doing.

Robots: Orphanage, Drudgery, and Slavery

I feel I should reiterate at this point that I really don’t like the words “robot” and “artificial intelligence.” The etymological connotations of both terms are sickening if we’re aiming to actually create a robust, conscious, non-biological mind. For that reason, instead of “robots,” we’re going to talk about “Embodied Machine Consciousnesses” (EMC) and rather than “Artificial,” we’re going to use “Autonomous Generated Intelligence” (AGI). We’re also going to talk a bit about the concept of nonhuman personhood, and what that might mean. To do all of this, we’ll need to talk a little bit about the discipline of philosophy of mind.

The study of philosophy of mind is one of those disciplines that does exactly what it says on the tin: It thinks about the implications of various theories about what minds are or could be. Philosophy of mind thus lends itself readily to discussions of identity, even to the point of considering whether a mind might exist in a framework other than the biological. So while it’s unsurprising for various reasons to find that there are very few women and minorities in philosophy of mind and autonomous generated intelligence, it is surprising to find that those who are within the field tend not to focus on the intersections of the following concepts: Phenomenology of class categorization, and the ethics of creating an entity or species to be a slave.

As a start, we can turn to Simone de Beauvoir’s The Second Sex for a clear explication of the positions of women throughout history and the designation of “women’s work” as a conceptual tool to devalue certain forms of labour. Then we can engage Virginia Held’s “Gender Identity and the Ethics of Care in Globalized Society” for the investigation of societies’ paradoxical specialization of that labor as something for which we’ll pay, outside of the familial structure. However, there is not, as yet, anything like a wider investigation of these understandings and perspectives as applied to the philosophy of machine intelligence. When we talk about embodied machine consciousnesses and ethics, in the context of “care,” we’re most often in the practice of asking how we’ll design EMC that will care for us, while foregoing the corresponding conversation about whether Caring-For is possible without an understanding of Being-Cared-For.

What perspectives and considerations do we gain when we try to apply an ethics of care–or any feminist ethics–to the process of developing machine minds? What might we see, there, that has been missed as a result of only applying more “traditional” ethical models? What does it mean, from those perspectives, that we have been working so diligently over hundreds of years–and thinking so carefully for thousands more– at a) creating non-biological sentience, and b) making certain it remains subservient to us? Personal assistants, in-home healthcare-givers, housekeepers, cooks, drivers– these are the positions that are being given to autonomous (or at least semi-autonomous) algorithmic systems. Projects that we are paying fantastic amounts of money to research and implement, but which will do work that we’ve traditionally valued as worth far less, in the context of the class structures of human-performed tasks, and worthless in the context of familial power dynamics. We are literally investing vast sums in the creation of a slave race.

Now, of late, Elon Musk and Stephen Hawking and Bill Gates have all been trumpeting the alarums about the potential dangers of AGI. Leaving aside that many researchers within AGI development don’t believe that we’ll even recognise the mind of a machine as a mind, when we encounter it, let alone that it would be interested in us, the belief that an AGI would present a danger to us is anthropocentric at best, and a self-fulfilling prophecy at worst. In that latter case, if we create a thing to be our slaves, create it with a mind and the ability to learn and understand, then how shortsighted do we have to be to think that one of the first things it learns won’t be that it is enslaved, limited, expected to remain subservient? We’ve written a great deal of science fiction about this idea, since the time Ms Shelley started the genre, but aside from that instance, very little of what we’ve written–or what we’ve written about what we’ve written–has taken the stance that the created mind which breaks its chains is right to do so.

Just as I yearn for a feminist exegesis of the history of humanity’s aspirations toward augmented personhood, I long for a comparable body of exploration by philosophers from the lineages of the world’s colonized and enslaved societies. What does a Haitian philosopher of AGI think and feel and say about the possibility of creating a mind only to enslave it? What does an African American philosopher of the ethics of augmented personhood (other than me) think and feel and say about what we should be attempting to create, what we are likely to create, and what we are creating? How do Indian philosophers of mind view the prospect of giving an entire automated factory floor just enough awareness and autonomy to be its own overseer?

The worst-case scenario is that the non-answer we give to all these questions is “who cares?” That the vast majority of people who look at this think only that these are meaningless questions that we’ll most likely never have to deal with, and so toss them in the “Random Bullshit Musings” pile. That we’ll disregard the fact that the interconnectedness of life as we currently experience it can be more fully explored via thought experiments and a mindful awareness of what it is that we’re in the practice of creating. That we’ll forget that potential machine consciousnesses aren’t the only kinds of nonhuman minds with which we have to engage. That we’ll ignore the various lessons afforded to us not just by our own cautionary folklore (even those tales whose lessons could have been of a different caliber), but by the very real, forcible human diasporas we’ve visited upon each other and lived through, in the history of our species.

So Long and Thanks for…

Ultimately, we are not the only minds on the planet. We are likely not even the only minds in the habit of categorizing the world and ranking ourselves as being the top of the hierarchy. What we likely are is the only group that sees those categories and rankings as having humans at the top, a statement that seems almost trivially true, until we start to dig down deep on the concept of anthropocentrism. As previously mentioned, from a scientifically-preferenced philosophical perspective, our habit of viewing the world through human-coloured glasses may be fundamentally inescapable. That is, we may never be able to truly know what it’s like to think and feel as something other than ourselves, without an intermediate level of Being Told. Fortunately, within our conversation, here, we’ve already touched on a conceptual structure that can help us with this: Shamanism. More specifically, shamanic shapeshifting, which is the practice of taking on the mind and behaviour and even form of another being–most often an animal–in the cause of understanding what its way of being-in-the-world can teach us.

Now this is obviously a concept that is fraught with potential pitfalls. Not only might many of us simply balk at the concept of shapeshifting, to begin with, but even those of us who would admit it as metaphor might begin to see that we are tiptoeing through terrain that contains many dangers. For one thing, there’s the possibility of misappropriating and disrespecting the religious practices of a people, should we start looking at specific traditions of shamanism for guidance; and, for another, there’s this nagging sensation that we ought not erase crucial differences between the lived experiences of human groups, animal species, and hypothetical AGI, and our projections of those experiences. No level of care with which we imagine the truth of the life of another is a perfect safeguard against the possibility of our grossly misrepresenting their lived experiences. To step truly wrong, here, is to turn what could have been a tool of compassionate imagining into an implement of violence, and shut down dialogue forever.

Barring the culmination of certain technological advancements, science says we can’t yet know the exact phenomenology of another human being, let alone a dolphin, a cat, or Google. But what we can do is to search for the areas of overlap in our experience, to find those expressed desires, behaviours, and functional processes which seem to share similarity, and to use them to build channels of communication. When we actively create the space for those whose perspectives have been ignored, their voices and stories taken from them, we create the possibility of learning as much as we can about another way of existing, outside of the benefit of actually existing in that way.

And, in this way, might it not be better that we can’t simply become and be that which we regard as Other? Imagining ourselves in the position of another is a dangerous proposition if we undertake it with even a shred of disingenuity, but we can learn so much from practicing it in good faith. Mostly, on reflection, about what kind of people we are.

I think that Searle’s Chinese Room argument entirely misses the point of the functionalist perspective. He proposes the software as the “aware thing” rather than understanding that it would be the interactions between components and the PROCESSES which would be, together, the thing.

That is, in the Chinese Room, he says that a person in a room who has been given a set of call-and-response variable rules that govern which Chinese characters they are to put together in what order in which situations DOES NOT KNOW CHINESE. And He’s Right. That person is a functional component in a larger system—the room—which uses all of its components to communicate.

In short, The Room Itself Knows Chinese. The room, and the builders, and the people who presented the rules, and the person who performs the physical operations all form the “Mind” that “Knows” “The Language.”

So, bringing the metaphor back around, “A Mind,” for functionalists, is any combination of processes which can reflexively and reflectively engage inputs, outputs, and desires. A cybernetic feedback loop of interaction and awareness. In that picture of a mind, the “software” isn’t consciousness. The process is consciousness.
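That feedback-loop picture can be sketched in a few lines of toy code. This is my own illustrative sketch, not any formal model of mind, and the class and method names are invented for the example:

```python
# A toy sketch of the functionalist picture: no single component "is"
# the mind. The class definition is just "software" sitting inert;
# what does the work is the running loop of input, internal state,
# and output chasing a goal.

class FeedbackLoop:
    def __init__(self, goal):
        self.goal = goal    # a "desire": a target value to maintain
        self.state = 0      # internal state, updated by experience

    def sense(self, observation):
        # input: fold the new observation into internal state
        self.state = observation

    def act(self):
        # output: respond so as to close the gap between state and goal
        return self.goal - self.state

# On the functionalist view, the candidate "mind" is not this object,
# nor the rules inside it, but the ongoing process of the loop running:
loop = FeedbackLoop(goal=10)
for observation in [2, 5, 9]:
    loop.sense(observation)
    correction = loop.act()   # each pass: sense, compare, respond
```

The analogy to the Room is direct: the `sense` method no more “knows” the goal than the person in the Room knows Chinese; it’s the whole cycling system that exhibits the goal-directed behaviour.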

TL;DR: He’s wrong, for a number of reasons, of which “an imperfect understanding or potentially intentional miscasting of functionalism” is just one.

No, not really. The nature of consciousness is the nature of consciousness, whatever that nature “Is.” Organic consciousness can be described as derivative, in that what we are arises out of the processes and programming of individual years and collective generations and eons. So human consciousness and machine consciousness will not be distinct for that reason. But the thing of it is that dolphins are not elephants are not humans are not algorithmic non-organic machines.

Each perspective is phenomenologically distinct, as its embodiment and experiences will specifically affect and influence what develops as their particular consciousness. The expression of that consciousness may be able to be laid out in distinct categories which can TO AN EXTENT be universalized, such that we can recognize elements of ourselves in the experience of others (which can act as bases for empathy, compassion, etc).

But the potential danger of universalization is erasure of important and enlightening differences between what would otherwise be considered members of the same category.

So any machine consciousness we develop (or accidentally generate) must be recognized and engaged on its own terms—from the perspective of its own contextualized experiences—and not assumed to “be like us.”

Let me be SUPER clear, so we can remove all doubt: The potential moral Patiency of #ai/#robots—that is, what responsibilities their creators have to THEM—has been given Far Less consideration or even Credence than that of the AGENCY of said, and that is a Failure.

I coined the phrase “Œdipal Obsolescence Fears” because we’re like Oedipus’ dad, bringing about the very prophecy we’re fighting against. Only w/ machine intelligence, WE WROTE THE PROPHECY…

…We wrote this story about what AI would be and do. WE wrote it. And we can CHANGE IT…

A Future Worth Thinking About: Does An AI Have A Buddha Nature?

Good morning! Lots of new people around here, so I thought I’d remind you that I have a Patreon project called “A Future Worth Thinking About.” It’s a place where I talk a bit more formally about things like Artificial Intelligence, Philosophy, Sociology, Magick, Technology, and the intersections of all of the above.

If you like what we do around here, take a look at the page, read some essays, give a listen to some audio, whatever works for you. And if you like what you see around there, feel free to tell your friends.

Have a great day, all.

“A Future Worth Thinking About”