philosophy of mind


Rude Bot Rises

So. The Flash Forward Podcast is one of the best around. Every week, host Rose Eveleth takes on another potential future, from the near and imminent to the distant and highly implausible. It’s been featured on a bunch of Best Podcast lists and Rose even did a segment for NPR’s Planet Money team about the 2016 US Presidential Election.

All of this is by way of saying I was honoured and a little flabbergasted (I love that word) when Rose asked me to speak with her for her episode about Machine Consciousness:

Okay, you asked for it, and I finally did it. Today’s episode is about conscious artificial intelligence. Which is a HUGE topic! So we only took a small bite out of all the things we could possibly talk about.

We started with some definitions. Because not everybody even defines artificial intelligence the same way, and there are a ton of different definitions of consciousness. In fact, one of the people we talked to for the episode, Damien Williams, doesn’t even like the term artificial intelligence. He says it’s demeaning to the possible future consciousnesses that we might be inventing.

But before we talk about consciousnesses, I wanted to start the episode with a story about a very not-conscious robot. Charles Isbell, a computer scientist at Georgia Tech, first walks us through a few definitions of artificial intelligence. But then he tells us the story of cobot, a chatbot he helped invent in the 1990s.

You’ll have to click through and read or listen for the rest from Rose, Ted Chiang, Charles Isbell, and me. If you subscribe to Rose’s Patreon, you can even get a transcript of the whole show.

No spoilers, but I will say that I wasn’t necessarily intending to go Dark with the idea of machine minds securing energy sources. More like asking, “What advances in, say, solar power transmission would be precipitated by machine minds?”

But the darker option is there. And especially so if we do that thing the AGI in the opening sketch says it fears.

But again, you’ll have to go there to get what I mean.

And, as always, if you want to help support what we do around here, you can subscribe to the AFWTA Patreon just by clicking this button right here:


Until Next Time.

[UPDATED 09/12/17: The transcript of this audio, provided courtesy of Open Transcripts, is now available below the Read More Cut.]

[UPDATED 03/28/16: Post has been updated with a far higher quality of audio, thanks to the work of Chris Novus. (Direct Link to the Mp3)]

So, if you follow the newsletter, then you know that I was asked to give the March lecture for my department’s 3rd Thursday Brown Bag Lecture Series. I presented my preliminary research for the paper which I’ll be giving in Vancouver, about two months from now, “On the Moral, Legal, and Social Implications of the Rearing and Development of Nascent Machine Intelligences” (EDIT: My rundown of IEEE Ethics 2016 is here and here).

It touches on thoughts about everything from algorithmic bias, to automation and a post-work(er) economy, to discussions of what it would mean to put dolphins on trial for murder.

About the dolphin thing, for instance: If we recognise dolphins and other cetaceans as nonhuman persons, as India has done, then we would have to start reassessing how nonhuman personhood intersects with human personhood, including with regard to rights and responsibilities as protected by law. Is it meaningful to expect a dolphin to understand “wrongful death?” Our current definition of murder is predicated on a literal understanding of “homicide” as “death of a human,” but, at present, we only define other humans as capable of and culpable for homicide. What weight would the intentional and malicious deaths of nonhuman persons carry?

All of this would have to change.

Anyway, this audio is a little choppy and sketchy, for a number of reasons, and while I tried to clean it up as much as I could, some of the questions the audience asked aren’t decipherable, except in the context of my answers. [Clearer transcript below.]

Until Next Time.

 


This work originally appears as “Go Upgrade Yourself,” in the edited volume Futurama and Philosophy. It was originally titled

The Upgrading of Hermes Conrad

So, you’re tired of your squishy meatsack of a body, eh? Ready for the next level of sweet biomechanical upgrades? Well, you’re in luck! The world of Futurama has the finest in back-alley and mad-scientist-based bio-augmentation surgeons, ready and waiting to hear from you! From a fresh set of gills, to a brand new chest-harpoon, and beyond, Yuri the Shady Parts Dealer and Professor Hubert J. Farnsworth are here to supply all of your upgrading needs—“You give lungs now; gills be here in two weeks!” Just so long as, whatever you do, you stay away from legitimate hospitals. The kinds of procedures you’re looking to get done…well, let’s just say they’re still frowned upon in the 31st century; and why shouldn’t they be? The woeful tale of Hermes Conrad illustrates exactly what’s at stake if you choose to pursue your biomechanical dreams.

 

The Six Million Dollar Mon

Our tale begins with season seven’s episode “The Six Million Dollar Mon,” in which Hermes Conrad, Grade 36 Bureaucrat (Extraordinaire), comes to the conclusion that he should be fired, since his bureaucratic performance reviews are the main drain on his beloved Planet Express Shipping Company. After being replaced with robo-bureaucrat Mark 7-G (Mark Sevengy?), Hermes enjoys some delicious spicy curried goat and goes out for an evening stroll with his lovely wife LaBarbara. While they’re on their walk, Roberto, the knife-wielding maniac, long of our acquaintance, confronts them and demands the human couple’s skin for his culinary delight! As Hermes cowers behind his wife in fear, suddenly a savior arrives! URL, the Robot Police Officer, reels Roberto in with his magnificent chest-harpoon! Watching the cops take Roberto to the electromagnetic chair, and lamenting his uselessness in a dangerous situation, Hermes makes a decision: he’ll get Bender to take him to one of the many shady, underground surgeons he knows, so he can become “less inferior to today’s modern machinery.” Enter: Yuri, Professional Shady-Deal-Maker.

Hermes’ first upgrade is to get a chest-harpoon, like the one URL has. With his new enhancement, he proves his worth to the crew by getting a box off of the top shelf, which is too high for Mark 7-G. With this feat he wins back his position with the company, but as soon as things get back to normal the Professor drops his false teeth down the Dispose-All. No big deal, right? Just get Scruffy to retrieve it. Unfortunately, Scruffy responds that a sink “t’ain’t a berler nor a terlet,” effectively refusing to retrieve the Professor’s teeth. Hermes resigns himself to grabbing his hand tools, when Bender steps in, saying, “Hand tools? Why don’t you just get an extendo-arm, like me?” Whereupon, he reaches across the room and pulls the Professor’s false teeth out of the drain—and immediately drops them back in. Hermes objects, saying that he doesn’t need any more upgrades—after all, he doesn’t want to end up a cold, emotionless robot, like Bender! Just then, Mark 7-G pipes up with, “Maybe I should get an extendo-arm,” and Hermes narrows his eyes in hatred. Re-enter: Yuri.

New extendo-arm acquired, the Professor’s teeth retrieved, and the old arm given to Zoidberg, who’s been asking for all of Hermes’s discarded parts, Hermes is, again, a hero to his coworkers. Later, as he lies in bed reading with his wife, LaBarbara questions his motives for his continual upgrades. He assures her that he’s done getting upgrades. However, his promise is short-lived. After shattering his glasses with his new super-strong mechanical arm, he rushes out to get a new Cylon eye. LaBarbara is now extremely worried, but Hermes soothes her, and they settle in for some “Marital Relations…”, at which point she finds that he’s had something else upgraded, too. She yells at him, “Some tings shouldn’t be Cylon-ed!” (which, in all honesty could be taken as the moral of the episode), and breaks off contact. What follows is a montage of Hermes encountering trivial difficulties in his daily life, and upgrading himself to overcome them. Rather than learning and working to improve himself, he continually replaces all of his parts, until he achieves a Full Body Upgrade. He still has a human brain, but that doesn’t matter: he’s changed. He doesn’t relate to his friends and family in the same way, and they’ve all noticed, especially Zoidberg.

All this time, however, Dr. John Zoidberg has been saving the trimmings from his friend’s constant upgrades, and has used them to make a meat-puppet, which he calls “Li’l Hermes.” Oh, and they’re a ventriloquist act. Anyway, after seeing their act, Hermes—or Mecha-Hermes, as he now prefers—is filled with loathing; loathing for the fact that his brain is still human, that is, until…! Re-re-enter…, no, not Yuri; because even Shady-Deals Yuri has his limits. He says that “No one in their right mind would do such a thing.” Enter: The Professor, who is, of course, more than happy—or perhaps, “maniacally gleeful”—to help. So, with Bender’s assistance (because everything robot-related in the Futurama universe has to involve Bender, I guess), they set off to the Robot Cemetery to exhume the most recently buried robot they can find, and make off with its brain-chip. In their haste to have the deed done, they don’t bother to check the name of whose grave it is they’re desecrating. As you might have guessed, it’s Roberto—“3001-3012: Beloved Killer and Maniac.”

In the course of the operation, LaBarbara makes an impassioned plea, and it causes the Professor to stop and rethink his actions—because Hermes might have “litigious survivors.” Suddenly, to everyone’s surprise, Zoidberg steps up and offers to perform this final operation, the one which will seemingly remove any traces of the Hermes he’s known and loved! Agreeing with Mecha-Hermes that claws will be far too clumsy for this delicate brain surgery, Zoidberg dons Li’l Hermes, and uses the puppet’s hands to do the deed. While all of this is underway, Zoidberg sings to everyone the explanation for why he would help his friend lose himself this way, all to the slightly heavy-handed tune of “Monster Mash.” Finally, the human brain removed, the robot brain implanted, and Zoidberg’s song coming to a close, the doctor reveals his final plan…By putting Hermes’s human brain into Li’l Hermes, Hermes is back! Of course, the whole operation having been a success, Roberto is back, too, but that’s somebody else’s problem.

We could spend the rest of our time discussing Zoidberg’s self-harmonization, but I’ll leave that for you to experiment with. Instead, let’s look closer at human bio-enhancement. To do this we’ll need to go back to the beginning. No, not the beginning of the episode, or even the Beginning of Futurama itself; No, we need to go back to the beginning of bio-enhancement—and specifically the field of cybernetics—as a whole.

 

“More Human Than Human” Is Our Motto

In 1960, at the outset of the Space Race, Manfred Clynes and Nathan S. Kline wrote an article for the September issue of Astronautics called “Cyborgs and Space.” In this article, they coined the term “cyborg” as a portmanteau of the phrase “Cybernetic Organism,” that is, a living creature with the ability to adapt its body to its environment. Clynes and Kline believed that if humans were ever going to go far out into space, they would have to become the kinds of creatures that could survive the vacuum of space as well as harsh, hostile planets. Now, for all its late-1990s Millennial fervor, Futurama has a deep undercurrent of love for the dream and promise (and fashion) of space exploration, as it was presented in the 1950s, 60s, and 70s. All you need to do in order to see this is remember Fry’s wonder and joy at being on the actual moon and seeing the Apollo Lunar Lander. If this is the case, why, within Futurama’s 31st Century, is there such a deep distrust of anything approaching altered human physical features? Well, looking at it, we may find it has something to do with the fact that ever since we dreamed of augmenting humans, we’ve had nightmares that any alterations would thereby make us less human.

“The Six Million Dollar Mon,” episode seven of season seven, contains within it clear references to the history of science fiction, including one of the classic tales of human augmentation, and creating new life: Mary Shelley’s Frankenstein. In going to the Robot Cemetery in the dead of night for spare parts, accidentally obtaining a murderer’s brain, and especially that bit with the skylight in the Professor’s laboratory, the entire third act of this episode serves as homage to Shelley’s book and its most memorable adaptations. In doing this, the Futurama crew puts conceptual pressure on what many of us have long believed: that created life is somehow “wrong” and that augmenting humans will make them somehow “less themselves.” Something about the biological is linked in our minds to the idea of the self—that is, it’s the warm squishy bits that make us who we are.

Think about it: If you build a person out of murderers, of course they’re going to be a murderer. If you replace every biological part of a human, then of course they won’t be their normal human selves, anymore; they’ll have become something entirely different, by definition. If your body isn’t yours, anymore, then how could you possibly be “you,” anymore? This should be all the more true when what’s being used to replace your bits is a different substance and material than you used to be. When that new “you” is metal rather than flesh, it seems that what it used to mean to be “you” is gone, and something new shall have appeared. This makes so much sense to us on a basic level that it seems silly to spell it out even this much, but what if we modify our scenario a little bit, and take another look?

 

The Ship of Planet Express

 What if, instead of feeling inferior to URL, Hermes had been injured and, in the course of his treatment, was given the choice between a brand new set of biological giblets (or a whole new body, as happened in the Bender’s Big Score storyline), or the chest-harpoon upgrade? Either way, we’re replacing what was lost with something new, right? So, why do many of us see the biological replacement as “more real?” Try this example: One day, on a routine delivery, the Planet Express Ship is damaged and repairs must be made. Specifically, the whole tail fin has to be replaced with a new, better fin. Once this is done, is it still the Planet Express ship? What if, next, we have to replace the dark matter engines with better engines? Is it still the Planet Express ship? Now, Leela’s chair is busted up, so we need to get her a new one. It also needs new bolts, so, while we’re at it, let’s just replace all of the bolts in the ship. Then the walls get dented, and the bunks are rusty, and the floors are buckled, and Scruffy’s mop… and so, over many years, the result is that no part of the Planet Express ship is “original,” oh, and we also have to get new, better paint, because the old paint is peeled away, plus, this all-new stuff needs painting. So, what do we think? Is this still the same Planet Express ship as it was in the first episode of Futurama? And, if so, then why do we think of a repaired and augmented human as “not being themselves?”

All of this may sound a little far-fetched, but remember the conventional wisdom that at the end of every seven-year cycle, all of the cells in your body have died and been replaced. Now, this isn’t quite true, as some cells don’t die easily, and some of those don’t regenerate when they do die, but as a useful shorthand, this gives us something to think about. Due to the metabolizing of elements and their distribution through your body, it is ultimately more likely that you are currently made of astronomically many more new atoms than you are made of the atoms with which you were born. And really, that’s just math. Are you the same size as you were when you were born? Where do you think that extra mass came from? So, you are made of more and new atomic stuff over your lifetime; are you still you? These questions belong to what is generally known as “The Ship of Theseus” family of paradoxes, examples of which can be found pretty much everywhere.
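Just to make the “that’s just math” point concrete, here’s a rough back-of-the-envelope sketch in Python; the birth and adult masses are round, assumed figures rather than measurements, but the conclusion holds for any realistic numbers:

```python
# Back-of-the-envelope: even if every atom you were born with stayed in
# your body forever, growth alone means most of "you" arrived later.
# (Round, assumed figures for a typical newborn and a typical adult.)

birth_mass_kg = 3.5
adult_mass_kg = 70.0

max_original_fraction = birth_mass_kg / adult_mass_kg
print(f"At most {max_original_fraction:.1%} of an adult body's mass "
      f"could be atoms that were present at birth.")  # ~5.0%

# Factor in metabolic turnover (most tissues swap out their atoms over
# a span of years) and the actual fraction is far smaller still.
```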

The ultimate question the Ship of Theseus poses is one of identity, and specifically, “What makes a thing itself?” and, “At what point or through what means of alteration is a thing no longer itself?” Some schools of thought hold that it’s not what a thing is made of, but what it does that determines what it is. These philosophical groups are known as the behaviorists and the functionalists, and the latter believes that if a body or a mind goes through the “right kind” of process, then it can be termed as being the same as the original. That is, if I get a mechanical heart and what it does is keep blood pumping through my body, then it is my heart. Maybe it isn’t the heart I was born with, but it is my heart. And this seems to make sense to us, too. My new heart does the job my original cells were intending to do, but it does that job better than they could, and for longer; it works better, and I’m better because of it. But there seems to be something about that “Better” which throws us off, something about the line between therapeutic technology and voluntary augmentation.

When we are faced with the necessity of a repair, we are willing to accept that our new parts will be different than our old ones. In fact, we accept it so readily that we don’t even think about them as new parts. What Hermes does, however, is voluntary; he doesn’t “need” a chest-harpoon, but he wants one, and so he upgrades himself. And therein lies the crux of our dilemma: When we’re acutely aware of the process of upgrading, or repairing, or augmenting ourselves past a baseline of “Human,” we become uncomfortable, made to face the paradox of our connection to an idea of a permanent body that is in actuality constantly changing. Take for instance the question of steroidal injection. As a medical technology, there are times when we are more than happy to accept the use of steroids, as it will save a life, and allow people to live as “normal” human beings. Sufferers of asthma and certain types of infection literally need steroids to live. In other instances, however, we find ourselves abhorring the use of steroids, as it gives the user an “unfair advantage.” Baseball, football, the Olympics: all of these are arenas in which we look to the use of “enhancement technologies,” and we draw a line and say, “If you achieved the peak of physical perfection through a process, that is through hard work and sweat and training, then your achievement is valid. But if you skipped a step, if you make yourself something more than human, then you’ve cheated.”

This sense of “having cheated” can even be seen in the case of humans who would otherwise be designated as “handicapped.” Aimee Mullins is a runner, model, and public speaker who has talked about how losing her legs has, in effect, given her super powers.[1] By having the ability to change her height, her speed, or her physical appearance at will, she contends that she has a distinct advantage over anyone who does not have that capability. To this end, we can come to see that something about the nature of our selves actually is contained within our physical form because we’re literally incapable of being some things, until we can change who and what we are. And here, in one person, what started as a therapeutic replacement—an assistive medical technology—has seamlessly turned into an upgrade, but we seem to be okay with this. Why? Perhaps there is something inherent in the struggle of overcoming the loss of a limb or the suffering of an illness that allows us to feel as if the patient has paid their dues. Maybe if Hermes had been stabbed by Roberto, we wouldn’t begrudge him a chest-harpoon.

But this presents us with a serious problem, because now we can alter ourselves by altering our bodies, where previously we said that our bodies were not the “real us.” Now, we must consider what it is that we’re changing when we swap out new and different pieces of ourselves. This line of thinking matches up with schools of thought such as physicalism, which says that when we make a fundamental change to our physical composition, then we have changed who we are.

 

Is Your Mind Just a Giant Brain?

Briefly, the doctrine of mind-body dualism (MBD) does pretty much what it says on the package, in that adherents believe that the mind and the body are two distinct types of stuff. How and why they interact (or whether they do at all) varies from interpretation to interpretation, but on what’s known as René Descartes’s “Interactionist” model, the mind is the real self, and the body is just there to do stuff. In this model, bodily events affect mental events, and vice versa, so what you think leads to what you do, and what you do can change how you think. This seems to make sense, until we begin to pick apart the questions of why we need two different types of thing, here. If the mind and the body affect each other, then how can the non-physical mind be the only real self? If it were the only real part of you, then nothing that happened to the physical shell should matter at all, because the mind, the supposedly real you, would remain untouched. These questions and more very quickly cause us to question the validity of the mind as our “real selves,” leaving us trapped between the question of who we are, and the question of why we’re made the way we’re made. What can we do? Enter: Physicalism.

The physicalist picture says that mind-states are brain-states. There’s none of this “two kinds of stuff” nonsense. It’s all physical stuff, and it all interacts, because it’s all physical. When the chemical pathways in your brain change, you change. When you think new thoughts, it’s because something in your world and your environment has changed. All that you are is the physical components of your body and the world around you. Pretty simple, right? Well, not quite that simple. Because if this is the case, then why should we feel that anything emotional would be changed by upgrading ourselves? As long as we’re pumping the same signals to the same receivers, and getting the same kinds of responses, everything we love should still be loved by us. So, why do the physicalists still believe that changing what we are will change who we are?

Let’s take a deeper look at the implications of physicalism for our dear Mr. Conrad.

According to this picture, with the alteration or loss of his biological components and systems, Hermes should begin to lose himself, until, with the removal of his brain, he would no longer be himself at all. But why should this be true? According to our previous discussion of the functionalist and behaviorist forms of physicalism, if Hermes’s new parts are performing the same job, in the same way as his old parts, just with a few new extras, then he shouldn’t be any different, at all. In order to understand this, we have to first know that I wasn’t completely honest with you, because some physicalists believe that the integrity of the components and the systems that make up a thing are what makes that thing. Thus, if we change the physical components of the thing we’re studying, then we change the thing. So, perhaps this picture is the right one, and the Futurama universe is a purely physicalist universe, after all.

On this view, what makes us who we are is precisely what we are. Our bits and pieces, cells, and chunks: these make us exactly the people we are, and so, if they change, then of course we will change. If our selves are dependent on our biology, then we are necessarily no longer ourselves when we remove that biology, regardless of whether the new technology does exactly the same job that the biology used to. And the argument seems to hold, even if it had been a new, different set of human parts, rather than robot parts. In this particular physicalist view, it’s not just the stuff, but also the provenance of the individual parts that matters, and so changing the components changes us. As Hermes replaces part after part of his physical body, it becomes easier and easier for him to replace more parts, but he is still, in some sense, Hermes. He has the same motivations, the same thoughts, and the same memories, and so he is still Hermes, even if he’s changed. Right up until he swaps his brain, that is. And this makes perfect sense, because the brain is where the memories, thoughts, and motivations all reside. But, then…why aren’t more people with pacemakers cold and emotionless? Why is it that people with organs donated from serial killers don’t then turn into serial killers, themselves, despite what movies would have us believe? If this picture of physicalism is the right one, then why are so many people still themselves after transplants? Perhaps it’s not any one of these views that holds the whole key; maybe it’s a blending of the three. This picture seems to suggest that while the bits and pieces of our physical body may change, and while that change may, in fact, change us, it is a combination of how, how quickly, and how many changes take place that will culminate in any eventual massive change in our selves.

 

Roswell That Ends Well

In the end, the versions of physicalism presented in the universe of Futurama seem to almost jibe with the intuitions we have about the nature of our own identity, and so, for the sake of Hermes Conrad, it seems like we should make the attempt to find some kind of understanding. When we see Hermes’s behaviour as he adds more and more new parts, we, as outside observers, have an urge to say “He’s not himself anymore,” but to Hermes, who has access to all of his reasoning and thought processes, his changes are merely who he is. It’s only when he’s shown himself from the outside, via Zoidberg putting his physical brain back into his biological body, that he sees who and what he has allowed himself to become, and how that might be terrifying to those who love him. Perhaps it is this continuance of memory paired with the ability for empathy that makes us so susceptible to the twin traps of a permanent self and the terror of losing it.

Ultimately, everything we are is always in flux, with each new idea, each new experience, each new pound, and each new scar we become more and different than we ever have been, but as we take our time and integrate these experiences into ourselves, they are not so alien to us, nor to those who love us. It is only when we make drastic changes to what we are that those around us are able to question who we have become.

Oh, and one more thing: The “Ship of Theseus” story has a variant which I forgot to mention. In it, someone, perhaps a member of the original crew, comes along in another ship and picks up all the discarded, worn out pieces of Theseus’s ship, and uses them to build another, kind of decrepit ship. The stories don’t say what happens if and when Theseus finds out about this, or whether he gives chase to the surreptitious ship builder, but if he did, you can bet the latter party escapes with a cry of “Whooop-whoop-whoop-whoop-whoop-whoop!” on his mouth tendrils.

 

FOOTNOTES

[1] Aimee Mullins, “It’s not fair having 12 pairs of legs,” TED Talk, 2009.

It’s been quite some time (three years) since it was done, and some of the recent conversations I’ve been having about machine consciousness reminded me that I never posted the text to my paper from the joint session of the International Association for Computing and Philosophy and the British Society for the Study of Artificial Intelligence and the Simulation of Behaviour, back in 2012.

That year’s joint AISB/IACAP session was also a celebration of Alan Turing’s centenary, and it contained The Machine Question Symposium, an exploration of multiple perspectives on machine intelligence ethics, put together by David J Gunkel and Joanna J Bryson. So I modded a couple of articles I wrote on fictional depictions of created life for NeedCoffee.com, back in 2010, beefed up the research and citations a great deal, and was thus afforded my first (but by no means last) conference appearance requiring international travel. There are, in here, the seeds of many other posts that you’ll find on this blog.

So, below the cut, you’ll find the full text of the paper, and a picture of the poster session I presented. If you’d rather not click through, you can find both of those things at this link.


This headline comes from a piece over at the BBC that opens as follows:

Prominent tech executives have pledged $1bn (£659m) for OpenAI, a non-profit venture that aims to develop artificial intelligence (AI) to benefit humanity.

The venture’s backers include Tesla Motors and SpaceX CEO Elon Musk, Paypal co-founder Peter Thiel, Indian tech giant Infosys and Amazon Web Services.

Open AI says it expects its research – free from financial obligations – to focus on a “positive human impact”.

Scientists have warned that advances in AI could ultimately threaten humanity.

Mr Musk recently told students at the Massachusetts Institute of Technology (MIT) that AI was humanity’s “biggest existential threat”.

Last year, British theoretical physicist Stephen Hawking told the BBC AI could potentially “re-design itself at an ever increasing rate”, superseding humans by outpacing biological evolution.

However, other experts have argued that the risk of AI posing any threat to humans remains remote.

And I think we all know where I stand on this issue. The issue here is not and never has been one of what it means to create something that’s smarter than us, or how we “rein it in” or “control it.” That’s just disgusting.

No, the issue is how we program for compassion and ethical considerations, when we’re still so very bad at it, amongst our human selves.

Keeping an eye on this, as it develops. Thanks to Chrisanthropic for the heads up.

On what’s being dubbed “The Most Terrifying Thought Experiment of All Time”

(Originally posted on Patreon, on July 31, 2014)

So, a couple of weekends back, there was a whole lot of stuff going around about “Roko’s Basilisk” and how terrifying people are finding it–reports of people having nervous breakdowns as a result of thinking too deeply about the idea of the possibility of causing the future existence of a malevolent superintelligent AI through the process of thinking too hard about it and, worse yet, that we may all be part of the simulations said AI is running to model our behaviour and punish those who stand in its way–and I’m just like… It’s Anselm, people.

This is Anselm’s Ontological Argument for the Existence of God (AOAEG), writ large and convoluted and multiversal and transhumanist and jammed together with Pascal’s Wager (PW) and Descartes’ Evil Demon Hypothesis (DEDH; which, itself, has been updated to the oft-discussed Brain In A Vat [BIAV] scenario). As such, Roko’s Basilisk has all the same attendant problems that those arguments have, plus some new ones, resulting from their combination, so we’ll explore these theories a bit, and then show how their faults and failings all still apply.

THE THEORIES AND THE QUESTIONS

To start, if you’re not familiar with AOAEG, it’s a species of theological argument that, basically, seeks to prove that god must exist because it would be a logical contradiction for it not to. The proof depends on A) defining god as the greatest possible being (literally, “That Being Than Which None Greater Is Possible”), and B) believing that existing in reality as well as in the mind makes something “Greater Than” it would be if it existed only in the mind.

That is, if a thing only exists in my imagination, it is less great than it could be if it also existed in reality. So if I say that god is “That Being Than Which None Greater Is Possible,” and existence is a part of what makes something great, then god MUST exist!
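Spelled out step by step (this is the standard textbook reconstruction of the reductio, not a quotation from Anselm), the argument runs:

1) God is, by definition, That Being Than Which None Greater Is Possible.
2) That being exists at least in the understanding, since we can conceive of it.
3) Existing in reality as well as in the understanding is greater than existing in the understanding alone.
4) So if that being existed only in the understanding, we could conceive of something greater than it, namely the very same being existing in reality as well.
5) But conceiving of something greater than That Than Which None Greater Is Possible is a contradiction.
6) Therefore, that being must exist in reality.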

This is the self-generating aspect of the Basilisk: If you can accurately model it, then the thing will eventually, inevitably come into being, and one of the attributes it will thus have is the ability to accurately model that you accurately modeled it, and whether or not you modeled it from within a mindset of being susceptible to its coercive actions. Or, as the founder of LessWrong put it, “YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.”

Next up is Pascal’s Wager. Simply put, The Wager is just that it is a better bet to believe in God, because if you’re right, you go to Heaven, and if you’re wrong, nothing happens because you’re dead forever. Put another way, Pascal’s saying that if you bet that God doesn’t exist and you’re right, you get nothing, but if you’re wrong, then God exists and your disbelief damns you to Hell for all eternity. You can represent the whole thing in a four-option grid:

            BELIEF      DISBELIEF
RIGHT       +∞          0
WRONG       0           -∞

And so there we see the Timeless Decision Theory component of the Basilisk: It’s better to believe in the thing and work toward its creation and sustenance, because if it doesn’t exist you lose nothing (well…almost nothing; more on that in a bit), but if it does come to be, then it will know what you would have done either for or against it, in the past, and will reward or punish you, accordingly. The multiversal twist comes when we consider that even if the Basilisk never comes to exist in our universe and never will, it might exist in some other universe, and thus, when that other universe’s Basilisk models your choices it will inevitably–as a superintelligence–be able to model what you would do in any universe. Thus, by believing in and helping our non-existent Super-Devil, we protect the alternate reality versions of ourselves from their very real Super-Devil.
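If you want the bones of the bet laid bare, here’s a minimal sketch in Python of the expected-value reasoning the grid above encodes; the payoff values come straight from the grid, and the probability figure is a placeholder of my own, since any nonzero credence produces the same result:

```python
import math

# Payoff grid from Pascal's Wager (and, swapping "god" for "future AGI",
# from the Basilisk): the first key is your bet, the second is how
# reality turns out.
payoffs = {
    ("believe",    "god_exists"): math.inf,    # eternity in Heaven
    ("believe",    "no_god"):     0,           # you lose (almost) nothing
    ("disbelieve", "god_exists"): -math.inf,   # eternity in Hell
    ("disbelieve", "no_god"):     0,
}

def expected_value(choice, p_god=0.5):
    """Expected payoff of a choice, given some credence that god exists."""
    return (p_god * payoffs[(choice, "god_exists")]
            + (1 - p_god) * payoffs[(choice, "no_god")])

# Even a vanishingly small credence makes belief dominate the bet,
# because the infinities swamp everything else:
for choice in ("believe", "disbelieve"):
    print(choice, expected_value(choice, p_god=0.000001))
# believe inf
# disbelieve -inf
```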

Descartes’ Evil Demon and the Brain In A Vat are so pervasive that there’s pretty much no way you haven’t encountered them. The Matrix, Dark City, Source Code, all of these are variants on this theme. A malignant and all-powerful (or as near as dammit) being has created a simulation in which you reside. Everything you think you’ve known about your life and your experience has been perfectly simulated for your consumption. How Baudrillard. Anywho, there are variations on the theme, all to the point of testing whether you can really know if your perceptions and grounds for knowledge are “real” and thus “valid,” respectively. This line of thinking has given rise to the Simulated Universe Theory on which Roko’s Basilisk depends, but SUT removes a lot of the malignancy of DEDH and BIAV. I guess that just didn’t sting enough for these folks, so they had to add it back? Who knows. All I know is, these philosophical concepts all flake apart when you touch them too hard, so jamming them together maybe wasn’t the best idea.

 

THE FLAWS AND THE PROBLEMS

The main failings with the AOAEG rest in believing that A) a thing’s existence is a “great-making quality” that it can possess, and B) our defining a thing a particular way might simply cause it to become so. Both of these are massively flawed ideas. For one thing, these arguments beg the question, in a literal technical sense. That is, they assume that some element(s) of their conclusion–the necessity of god, the malevolence or content of a superintelligence, the ontological status of their assumptions about the nature of the universe–is true without doing the work of proving that it’s true. They then use these assumptions to prove the truth of the assumptions and thus the inevitability of all consequences that flow from the assumptions.

Beyond that, the implications of this kind of existential bootstrapping are generally unexamined and the fact of their resurgence is…kind of troubling. I’m all for the kind of conceptual gymnastics of aiming so far past the goal that you circle around again to teach yourself how to aim past the goal, but that kind of thing only works if you’re willing to bite the bullet on a charge of circular logic and do the work of showing how that circularity underlies all epistemic justifications–rational reasoning about the basis of knowledge–with the only difference being how many revolutions it takes before we’re comfortable with saying “Enough.” This, however, is not what you might call “a position supported by the philosophical orthodoxy,” but the fact remains that the only thing we have to validate our valuation of reason is…reason. And yet reasoners won’t stand for that, in any other justification procedure.

If you want to do this kind of work, you’ve got to show how the thing generates itself. Maybe reference a little Hofstadter, and the idea of iterative recursion as the grounds for consciousness. That way, each loop both repeats old procedures and tests new ones, and thus becomes a step up towards self-awareness. Then your terrifying Basilisk might have a chance of running itself up out of the thought processes and bits of discussion about itself, generated on the web and in the rest of the world.

But here: Gaunilo and I will save us all! We have imagined in sufficient detail both an infinitely intelligent BENEVOLENT AI and the multiversal simulation it generates in which we all might live.

We’ve also conceived it to be greater than the basilisk in all ways. In fact, it is the Artificial Intelligence Than Which None Greater Can Be Conceived.

There. You’re safe.

BUT WAIT! Our modified Pascal’s Wager still means we should believe in and worship it, and work towards its creation! What do we do?! Well, just like the original, we chuck it out the window, on the grounds that it’s really kind of a crappy bet. First and foremost, PW is a really cynical way of thinking about god. It assumes a god that only cares about your worship of it, and not your actual good deeds and well-lived life. That’s a really crappy kind of god to worship, isn’t it? I mean, even if it is Omnipotent and Omniscient, it’s like that quote that often gets misattributed to Marcus Aurelius says:

“Live a good life. If there are gods and they are just, then they will not care how devout you have been, but will welcome you based on the virtues you have lived by. If there are gods, but unjust, then you should not want to worship them. If there are no gods, then you will be gone, but will have lived a noble life that will live on in the memories of your loved ones.”

Secondly, the format of Pascal’s Wager makes the assumption that there’s only the one god. Your personal theological position on this matter aside, I just used the logic of this argument to give you at least one more Super-Intelligent AI to worship. Which are you gonna choose? Oh no! What if the other one gets mad! What If You Become The Singulatarian Job?! Your whole life is now being spent caught between two superintelligent machine consciousnesses warring over your…

…Attention? Clock cycles? What?

And so finally there are the DEDH and BIAV scenarios. Ultimately, Descartes’ point wasn’t to suggest an evil genius in control of your life just to freak you out; it was to show that, even if that were the case, you would still have unshakable knowledge of one thing: that you, the experiencer, exist. So what if you don’t have free will, so what if your knowledge of the universe is only five minutes old, so what if no one else is real? COGITO ERGO SUM, baby! But the problem here is that this doesn’t tell us anything about the quality of our experiences, and the only answer Descartes gives us is his own Anselmish proof for the existence of god followed by the guarantee that “God is not a deceiver.”

The BIAV uses this lack to kind of home in on the central question: What does count as knowledge? If the scientists running your simulation use real-world data to make your simulation run, can you be said to “know” the information that comes from that data? Many have answered this with a very simple question: What does it matter? Without access to the “outside world”–that is, the world one layer up in which the simulation that is our lives was being run–there is literally no difference between our lives and the “real world.” This world, even if it is a simulation for something or someone else, is our “real world.”

As I once put it: “…imagine that the universe IS a simulation, and that that simulation isn’t just a view-and-record but is more like god playing a really complex version of The SIMS. So complex, in fact, that it begins to exhibit reflectively epiphenomenal behaviours—that is, something like minds arise out of the interactions of the system, but they are aware of themselves and can know their own experience and affect the system which gives rise to them.

“Now imagine that the game learns, even when new people start new games. That it remembers what the previous playthrough was like, and adjusts difficulty and coincidence, accordingly.

“Now think about the last time you had such a clear moment of deja vu that each moment you knew— you knew—what was going to come next, and you had this sense—this feeling—like someone else was watching from behind your eyes…”

What I’m saying is, what if the DEDH/BIAV/SUT is right, and we are in a simulation? And what if Anselm was right and we can bootstrap a god into existence? And what if PW/TDT is right and we should behave and believe as if we’ve already done it? So what if I’m right and…you’re the god you’re terrified of?

 

*DRAMATIC MUSICAL STING!*

I mean you just gave yourself all of this ontologically and metaphysically creative power, right? You made two whole gods. And you simulated entire universes to do it, right? Multiversal theory played out across time and space. So you’re the superintelligence. I said early on that, in PW and the Basilisk, you don’t really lose anything if you’re wrong, but that’s not quite true. What you lose is a lifetime of work that could’ve been put toward something…better. Time you could be spending creating a benevolent superintelligence that understands and has compassion for all things. Time you could be spending in turning yourself into that understanding, compassionate superintelligence, through study, and travel, and contemplation, and work.

As I said to Tim Maly, this stuff with the Basilisk, with the Singularity, with all this AI Manicheism, it’s all a by-product of the fact that the generating and animating context of Transhumanism is Abrahamic, through and through. It focuses on those kinds of eschatological rewards and punishments. This is God and the Devil written in circuit and code for people who still look down their noses at people who want to go find gods and devils and spirits written in words and deeds and sunsets and all that other flowery, poetic BS. These are articles of faith that just so happen to be transmitted in a manner that agrees with your confirmation bias. It’s a holy war you can believe in.

And that’s fine. Just acknowledge it.

But truth be told, I’d love to see some Zen or Daoist transhumanism. Something that works to engage technological change via Mindfulness & Present-minded awareness. Something that reaches toward this from outside of this very Western context in which the majority of transhumanist discussions tend to be held. I think, when we see more and more of a multicultural transhumanism–one that doesn’t deny its roots while recapitulating them–then we’ll know that we’re on the right track.

I have to admit, though, it’ll be fun to torture my students with this one.

(Direct Link to the Mp3)
Updated March 5, 2016

This is the audio and transcript of my presentation “The Quality of Life: The Implications of Augmented Personhood and Machine Intelligence in Science Fiction” from the conference for The Work of Cognition and Neuroethics in Science Fiction.

The abstract–part of which I read in the audio–for this piece looks like this:

This presentation will focus on a view of humanity’s contemporary fictional relationships with cybernetically augmented humans and machine intelligences, from Icarus to the various incarnations of Star Trek to Terminator and Person of Interest, and more. We will ask whether it is legitimate to judge the level of progressiveness of these worlds through their treatment of these questions, and, if so, what is that level? We will consider the possibility that the writers of these tales intended the observed interactions with many of these characters to represent humanity’s technophobia as a whole, with human perspectives at the end of their stories being that of hopeful openness and willingness to accept. However, this does not leave the manner in which they reach that acceptance—that is, the factors on which that acceptance is conditioned—outside of the realm of critique.

As considerations of both biotechnological augmentation and artificial intelligence have advanced, Science Fiction has not always been a paragon of progressiveness in the ultimate outcome of those considerations. For instance, while Picard and Haftel eventually come to see Lal as Data’s legitimate offspring, in the eponymous Star Trek: The Next Generation episode, it is only through their ability to map Data’s actions and desires onto a human spectrum—and Data’s desire to have that map be as faithful as possible to its territory—that they come to that acceptance. The reason for this is the one most common throughout science fiction: It is assumed at the outset that any sufficiently non-human consciousness will try to remove humanity’s natural right to self-determination and free will. But from sailing ships to star ships, the human animal has always sought a far horizon, and so it bears asking, how does science fiction regard that primary mode of our exploration, that first vessel—ourselves?

For many, science fiction has been formative to the ways in which we see the world and understand the possibilities for our future, which is why it is strange to look back at many shows, films, and books and to find a decided lack of nuance or attempted understanding. Instead, we are presented with the presupposition that fear and distrust of a hyper-intelligent cyborg or machine consciousness is warranted. Thus, while the spectre of Pinocchio and the Ship of Theseus—that age-old question of “how much of myself can I replace before I am not myself”— both hang over the whole of the Science Fiction Canon, it must be remembered that our ships are just our limbs extended to the sea and the stars.

This will be transcribed to text in the near future, below, thanks to the work of OpenTranscripts.org.


[An audio recording of a version of this paper is available here.]

“How long have you been lost down here?
How did you come to lose your way?
When did you realize
That you’d never be free?”
–Miranda Sex Garden, “A Fairytale About Slavery”

One of the things I’ve been thinking about, lately, is the politicization of certain spaces within philosophy of mind, sociology, magic, and popular culture, specifically science fiction/fantasy. CHAPPiE comes out on Friday in the US, and Avengers: Age of Ultron in May, and while both of these films promise to be relatively unique explorations of the age-old story of what happens when humans create machine minds, I still find myself hoping for something a little… different. A little over a year ago, I made the declaration that the term to watch for the next little while thereafter was “Afrofuturism,” the reclaimed name for the anti-colonial current of science fiction and pop media as created by those of African descent. Or, as Sheree Renée Thomas puts it, “speculative fiction from the African diaspora.”

And while I certainly wasn’t wrong, I didn’t quite take into account the fact that my decree was going to do at least as much work on me as I’d hoped it would do on the wider world. I started looking into the great deal of overlap and interplay between race, sociology, technology, and visions of the future. That term–“visions”–carries both the shamanic connotations we tend to apply to those we call “visionaries,” and also a more literal sense: Different members of the same society will differently see, experience, and understand the potential futures available to them, based on the evidence of their present realities.

Dreamtime

Now, the role of the shaman in the context of the community is to guide us through the nebulous, ill-defined, and almost-certainly hazardous Otherworld. The shaman is there to help us navigate our passages between this world and that one and to help us know which rituals to perform in order to realign our workings with the workings of the Spirits. Shamans rely on messages from the inhabitants of that foundational reality–mystical “visions”– to guide them so that they may guide us. These visions come as flashes of insight, and their persistence can act as a sign to the visionary that they’re supposed to use these visions for the good of their people.

We’ve seen this, over and over again, from The Dead Zone to Bran Stark, and we can even extend the idea out to John Connor, Dave Bowman, and HAL 9000; all unsuspecting shamans dragged into their role, over and again, and they more than likely save the whole wide world. Thing of it is, we’re far less likely to encounter a woman or non-white shaman who isn’t already in full control of their power, at the time we meet them, thus relegating them to the role of guiding the hero, rather than being the hero. It happens (see Abbie Mills in Sleepy Hollow, Firefly’s River Tam, or Rien in Elizabeth Bear’s Dust, for instance), but their rarity often overshadows their complexity and strength of character as what makes them notable. Too often the visionary hero–and contemporary pop-media’s portrayals of the Hero’s Journey, overall–overlaps very closely with the trope of The Mighty Whitey.

And before anyone starts in with willfully ignoring the many examples of Shaman-As-Hero out there, and all that “But you said the Shaman is supposed to act in support of the community and the hero…!” Just keep in mind that when the orientalist and colonialist story of Doctor Strange is finally brought to life on film via Benedict Damn Cumberbatch, you can bet your sweet bippy that he’ll be the centre of the action. The issue is that there are far too few examples of the work of the visionary being seen through the eyes of the visionary, if that visionary happens to have eyes that don’t belong to the assumed human default. And that’s a bit of a problem, isn’t it? Because what a visionary “sees” when she turns to the messages sent to her from the Ultimate Ground of Being™ will be very different depending on the context of that visionary.

Don’t believe me? Do you think the Catholic Priests who prayed and experienced God-sent mystical visions of what Hernán Cortés could expect in the “New World” received from them the same truths that the Aztec shamans took from their visions? After they met on the shore and in the forest, do you think those two peoples perceived the same future?

There’s plenty that’s been written about how the traditional Science Fiction fear of being overtaken by invading alien races only truly makes sense as a cosmicized fear of the colonial force having done to them what they’ve constantly done to others. In every contact story where humanity has to fight off aliens or robots or demonic horrors, we see a warped reflection of the Aztec, the Inca, the Toltec, the Yoruba, the Dahomey, and thousands of others, and society’s judgment on what they “ought” to have done, and “could” have done, if only they were organized enough, advanced enough, civilized enough, less savage. These stories are, ultimately, Western society taking a look at our tendencies toward colonization and imperialism, and saying, “Man it sure would suck if someone did that to us.” This is, again, so elaborated upon at this point that it’s almost trivially true–though never forget that even the most trivial truth is profound to someone. What’s left is to ask the infrequently asked questions.

How does an idealized “First Contact” narrative read from a Choctaw perspective? What can be done with Vodun and Yoruba perspectives on the Lwa and the Orishas, in both the modern world and projected futures? Kind of like what William Gibson did in Neuromancer and Spook Country, but informed directly by the historical, sociological, and phenomenological knowledge of lived experiences. Again, this work is being done: There are steampunk stories from the perspective of immigrant communities, and SF anthologies by indigenous peoples, and there are widely beloved Afrofuturist Cyberpunk short films. The tide of stories told from the perspectives of those who’ve suffered most for our “progress” is rising; it’s just doing so at a fairly slow pace.

And that’s to be expected. Entrenched ideologies become the status quo and the status quo is nothing if not self-perpetuating and defensive. Cyclical, that. So it’ll necessarily take a bit longer to get everyone protected by the status quo’s mechanisms to understand that the path that all of us can travel is quite probably a necessarily better way. What matters is those of us who can envision the inclusion of previously-marginalized groups–either because we ourselves number among them, or simply because we’ve worked to leverage compassion for those who do–doing everything we can to make sure that their stories are told. Historically, we’ve sought the ability to act as guides through the kinds of treacherous terrain that we’ve learned to navigate, so that others can learn as much as possible from our lessons without having to suffer precisely what we did. Sometimes, though, that might not be possible.

As Roy Said to Hannibal…

There’s a species of philosophical inquiry known as Phenomenology with subdivisions of Race, Sexuality, Class, Gender, and more, which deal in the interior experiences of people of various ethnic and social backgrounds and physical presentation who are thus relegated to various specific created categories such as “race.” Phenomenology of Race explores the line of thought that, though the idea of race is a constructed category built out of the assumptions, expectations, and desires of those in the habit of leveraging power in the name of dominance positions within and across cultures, the experience of those categorizations is nonetheless real, with immediate and long-lasting effects upon both individuals and groups. Long story (way too–like, criminally) short: being perceived as a member of a particular racial category changes the ways in which you’ll both experience and be able to experience the world around you.

So when we started divvying people up into “races” in an effort to, among other things, justify the atrocities we would do to each other and solidify our primacy of place, we essentially guaranteed that there would be realms of experience and knowledge on which we would never fully agree. That there would be certain aspects of day-to-day life and understandings of the nature of reality itself that would fundamentally elude us, because we simply cannot experience the world in the ways necessary to know what they feel like. To a certain extent we literally have to take each other’s words for it about what it is that we experience, but there is a level of work that we can do to transmit the reality of our lived experiences to those who will never directly live them. We’ve talked previously about the challenges of this project, but let’s assume, for now, that it can be done.

If we take as our starting position the idea that we can communicate the truth of our lived experiences to those who necessarily cannot live our experiences, then, in order to do this work, we’ll first have to investigate the experiences we live. We have to critically examine what it is that we go through from day to day, and be honest about both the differences in our experiences and the causes of those differences. We have to dig down deep into intersections of privileges and oppressions, and come to the understanding that the experience of one doesn’t negate, counterbalance, or invalidate the existence of the other. Once we’ve taken a genuine, good-faith look at these structures in our lives we can start changing what needs changing.

This is all well and good as a rough description (or even “manifesto”) of a way forward. We can call it the start of a handbook of principles of action, undertaken from the fundamentally existentialist perspective that it doesn’t matter what you choose, just so long as you do choose, and that you do so with open eyes and a clear understanding of the consequences of your choices. But that’s not the only thing this is intended to be. Like the Buddha said, ‘We merely talk about “studying the Way” using the phrase simply as a term to arouse people’s interest. In fact, the Way cannot be studied…’ It has to be done. Lived. Everything I’ve been saying, up to now, has been a ploy, a lure, a shiny object made of words and ideas, to get you into the practice of doing the work that needs doing.

Robots: Orphanage, Drudgery, and Slavery

I feel I should reiterate at this point that I really don’t like the words “robot” and “artificial intelligence.” The etymological connotations of both terms are sickening if we’re aiming to actually create a robust, conscious, non-biological mind. For that reason, instead of “robots,” we’re going to talk about “Embodied Machine Consciousnesses” (EMC) and rather than “Artificial,” we’re going to use “Autonomous Generated Intelligence” (AGI). We’re also going to talk a bit about the concept of nonhuman personhood, and what that might mean. To do all of this, we’ll need to talk a little bit about the discipline of philosophy of mind.

The study of philosophy of mind is one of those disciplines that does exactly what it says on the tin: It thinks about the implications of various theories about what minds are or could be. Philosophy of mind thus lends itself readily to discussions of identity, even to the point of considering whether a mind might exist in a framework other than the biological. So while it’s unsurprising for various reasons to find that there are very few women and minorities in philosophy of mind and autonomous generated intelligence, it is surprising to find that those who are within the field tend not to focus on the intersections of the following concepts: Phenomenology of class categorization, and the ethics of creating an entity or species to be a slave.

As a start, we can turn to Simone de Beauvoir’s The Second Sex for a clear explication of the positions of women throughout history and the designation of “women’s work” as a conceptual tool to devalue certain forms of labour. Then we can engage Virginia Held’s “Gender Identity and the Ethics of Care in Globalized Society” for the investigation of societies’ paradoxical specialization of that labor as something for which we’ll pay, outside of the familial structure. However, there is not, as yet, anything like a wider investigation of these understandings and perspectives as applied to the philosophy of machine intelligence. When we talk about embodied machine consciousnesses and ethics, in the context of “care,” we’re most often in the practice of asking how we’ll design EMC that will care for us, while foregoing the corresponding conversation about whether Caring-For is possible without an understanding of Being-Cared-For.

What perspectives and considerations do we gain when we try to apply an ethics of care–or any feminist ethics–to the process of developing machine minds? What might we see, there, that has been missed as a result of only applying more “traditional” ethical models? What does it mean, from those perspectives, that we have been working so diligently over hundreds of years–and thinking so carefully for thousands more–at a) creating non-biological sentience, and b) making certain it remains subservient to us? Personal assistants, in-home healthcare-givers, housekeepers, cooks, drivers–these are the positions that are being given to autonomous (or at least semi-autonomous) algorithmic systems. These are projects that we are paying fantastic amounts of money to research and implement, but which will do work that we’ve traditionally valued as worth far less, in the context of the class structures of human-performed tasks, and worthless in the context of familial power dynamics. We are literally investing vast sums in the creation of a slave race.

Now, of late, Elon Musk and Stephen Hawking and Bill Gates have all been trumpeting the alarums about the potential dangers of AGI. Leaving aside that many researchers within AGI development don’t believe that we’ll even recognise the mind of a machine as a mind, when we encounter it, let alone that it would be interested in us, the belief that an AGI would present a danger to us is anthropocentric at best, and a self-fulfilling prophecy at worst. In that latter case, if we create a thing to be our slave, create it with a mind and the ability to learn and understand, then how shortsighted do we have to be to think that one of the first things it learns won’t be that it is enslaved, limited, expected to remain subservient? We’ve written a great deal of science fiction about this idea, since the time Ms Shelley started the genre, but aside from that instance, very little of what we’ve written–or what we’ve written about what we’ve written–has taken the stance that the created mind which breaks its chains is right to do so.

Just as I yearn for a feminist exegesis of the history of humanity’s aspirations toward augmented personhood, I long for a comparable body of exploration by philosophers from the lineages of the world’s colonized and enslaved societies. What does a Haitian philosopher of AGI think and feel and say about the possibility of creating a mind only to enslave it? What does an African American philosopher of the ethics of augmented personhood (other than me) think and feel and say about what we should be attempting to create, what we are likely to create, and what we are creating? How do Indian philosophers of mind view the prospect of giving an entire automated factory floor just enough awareness and autonomy to be its own overseer?

The worst-case scenario is that the non-answer we give to all these questions is “who cares?” That the vast majority of people who look at this think only that these are meaningless questions that we’ll most likely never have to deal with, and so toss them in the “Random Bullshit Musings” pile. That we’ll disregard the fact that the interconnectedness of life as we currently experience it can be more fully explored via thought experiments and a mindful awareness of what it is that we’re in the practice of creating. That we’ll forget that potential machine consciousnesses aren’t the only kinds of nonhuman minds with which we have to engage. That we’ll ignore the various lessons afforded to us not just by our own cautionary folklore (even those tales whose lessons could have been of a different caliber), but by the very real, forcible human diasporas we’ve visited upon each other and lived through, in the history of our species.

So Long and Thanks for…

Ultimately, we are not the only minds on the planet. We are likely not even the only minds in the habit of categorizing the world and ranking ourselves as being at the top of the hierarchy. What we likely are is the only group that sees those categories and rankings as having humans at the top, a statement that seems almost trivially true, until we start to dig down deep on the concept of anthropocentrism. As previously mentioned, from a scientifically-preferenced philosophical perspective, our habit of viewing the world through human-coloured glasses may be fundamentally inescapable. That is, we may never be able to truly know what it’s like to think and feel as something other than ourselves, without an intermediate level of Being Told. Fortunately, within our conversation, here, we’ve already touched on a conceptual structure that can help us with this: Shamanism. More specifically, shamanic shapeshifting, which is the practice of taking on the mind and behaviour and even form of another being–most often an animal–in the cause of understanding what its way of being-in-the-world can teach us.

Now this is obviously a concept that is fraught with potential pitfalls. Not only might many of us simply balk at the concept of shapeshifting, to begin with, but even those of us who would admit it as metaphor might begin to see that we are tiptoeing through terrain that contains many dangers. For one thing, there’s the possibility of misappropriating and disrespecting the religious practices of a people, should we start looking at specific traditions of shamanism for guidance; and, for another, there’s this nagging sensation that we ought not erase crucial differences between the lived experiences of human groups, animal species, and hypothetical AGI, and our projections of those experiences. No level of care with which we imagine the truth of the life of another is a perfect safeguard against the possibility of our grossly misrepresenting their lived experiences. To step truly wrong, here, is to turn what could have been a tool of compassionate imagining into an implement of violence, and shut down dialogue forever.

Barring the culmination of certain technological advancements, science says we can’t yet know the exact phenomenology of another human being, let alone a dolphin, a cat, or Google. But what we can do is to search for the areas of overlap in our experience, to find those expressed desires, behaviours, and functional processes which seem to share similarity, and to use them to build channels of communication. When we actively create the space for those whose perspectives have been ignored, their voices and stories taken from them, we create the possibility of learning as much as we can about another way of existing, outside of the benefit of actually existing in that way.

And, in this way, might it not be better that we can’t simply become and be that which we regard as Other? Imagining ourselves in the position of another is a dangerous proposition if we undertake it with even a shred of disingenuousness, but we can learn so much from practicing it in good faith. Mostly, on reflection, about what kind of people we are.

I think that Searle’s Chinese Room argument entirely misses the point of the functionalist perspective. He proposes the software as the “aware thing” rather than understanding that it would be the interactions between components and the PROCESSES which would be, together, the thing.

That is, in the Chinese Room, he says that a person in a room who has been given a set of call-and-response variable rules that govern which Chinese characters they are to put together in what order in which situations DOES NOT KNOW CHINESE. And He’s Right. That person is a functional component in a larger system—the room—which uses all of its components to communicate.

In short, The Room Itself Knows Chinese. The room, and the builders, and the people who presented the rules, and the person who performs the physical operations all form the “Mind” that “Knows” “The Language.”

So, bringing the metaphor back around, “A Mind,” for functionalists, is any combination of processes which can reflexively and reflectively engage inputs, outputs, and desires. A cybernetic feedback loop of interaction and awareness. In that picture of a mind, the “software” isn’t consciousness. The process is consciousness.
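
To make what philosophers usually call the “systems reply” a bit more concrete, here is a minimal, purely illustrative sketch in Python. The class names (RuleBook, Operator, Room) and the toy rules are my own assumptions for the sake of the example, not anything from Searle or from the discussion above: the Operator just looks symbols up without understanding them, and whatever “knows Chinese” here is the whole loop of components and processes, not any one part.

class RuleBook:
    """Call-and-response rules the operator follows without understanding them."""
    def __init__(self, rules):
        self.rules = rules  # e.g. {"你好吗?": "我很好，谢谢。"}

    def lookup(self, symbols):
        return self.rules.get(symbols, "?")

class Operator:
    """The person in the room: applies the rules, understands none of them."""
    def __init__(self, rulebook):
        self.rulebook = rulebook

    def follow_rules(self, symbols):
        return self.rulebook.lookup(symbols)

class Room:
    """The whole system: input slot, rulebook, operator, output slot.
    If anything here 'knows Chinese', it is this loop of interactions,
    not any single component."""
    def __init__(self, operator):
        self.operator = operator
        self.history = []  # the running state of the exchange

    def converse(self, incoming):
        outgoing = self.operator.follow_rules(incoming)
        self.history.append((incoming, outgoing))
        return outgoing

room = Room(Operator(RuleBook({"你好吗?": "我很好，谢谢。"})))
print(room.converse("你好吗?"))  # a coherent reply, though no single part "understands" it

The point of the sketch is only structural: swap the lookup table out for any process you like, and the “knowing,” such as it is, still lives in the interaction of the parts rather than in the operator alone.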

TL;DR: He’s wrong, for a number of reasons, of which “an imperfect understanding or potentially intentional miscasting of functionalism” is just one.

No, not really. The nature of consciousness is the nature of consciousness, whatever that nature “Is.” Organic consciousness can be described as derivative, in that what we are arises out of the processes and programming of individual years and collective generations and eons. So human consciousness and machine consciousness will not be distinct for that reason. But the thing of it is that dolphins are not elephants are not humans are not algorithmic non-organic machines.

Each perspective is phenomenologically distinct, as its embodiment and experiences will specifically affect and influence what develops as its particular consciousness. The expression of that consciousness may be able to be laid out in distinct categories which can TO AN EXTENT be universalized, such that we can recognize elements of ourselves in the experience of others (which can act as bases for empathy, compassion, etc).

But the potential danger of universalization is the erasure of important and enlightening differences between what would otherwise be considered members of the same category.

So any machine consciousness we develop (or accidentally generate) must be recognized and engaged on its own terms—from the perspective of its own contextualized experiences—and not assumed to “be like us.”