buddhism

All posts tagged buddhism

2017 SRI Technology and Consciousness Workshop Series Final Report

So, as you know, back in the summer of 2017 I participated in SRI International’s Technology and Consciousness Workshop Series. This series was an eight-week program of workshops on the current state of the field around, the potential future paths toward, and the moral and social implications of the notion of conscious machines. To do this, we brought together a rotating cast of dozens of researchers in AI, machine learning, psychedelics research, ethics, epistemology, philosophy of mind, cognitive computing, neuroscience, comparative religious studies, robotics, psychology, and much more.

[Image of my name card from the Technology & Consciousness workshop series: a rectangular card with a stylized “Technology & Consciousness” logo at the top, the name Damien Williams in bold in the middle, and SRI International italicized at the bottom; to the right, a blurry, wavy image of what appears to be a tree with a person standing next to it and another tree in the background to the left, all partially mirrored in a surface at the bottom of the image.]

We traveled from Arlington, VA, to Menlo Park, CA, to Cambridge, UK, and back, and while my primary role was that of conference co-ordinator and note-taker (that place in the intro where it says I “maintained scrupulous notes”? Think 405 pages/160,656 words of notes, taken over eight 5-day weeks of meetings), I also had three separate opportunities to present: once on interdisciplinary perspectives on minds and mindedness; then on Daoism and Machine Consciousness; and finally on a unifying view of my thoughts across all of the sessions. In relation to this report, I would draw your attention to the following passage:

An objection to this privileging of sentience is that it is anthropomorphic “meat chauvinism”: we are projecting considerations onto technology that derive from our biology. Perhaps conscious technology could have morally salient aspects distinct from sentience: the basic elements of its consciousness could be different than ours.

All of these meetings were held under the auspices of the Chatham House Rule, which meant that there were many things I couldn’t tell you about them, such as the names of the other attendees, or what exactly they said in the context of the meetings. What I was able to tell you, however, was what I talked about, and I did, several times. But as of this week, I can give you even more than that.

This past Thursday, SRI released an official public report on all of the proceedings and findings from the 2017 SRI Technology and Consciousness Workshop Series, and they have told all of the participants that they can share said report as widely as they wish. Crucially, that means that I can share it with you. You can either click this link, here, or read it directly, after the cut.

Continue Reading

[This paper was prepared for the 2019 Towards Conscious AI Systems Symposium co-located with the Association for the Advancement of Artificial Intelligence 2019 Spring Symposium Series.

Much of this work derived from my final presentation at the 2017 SRI Technology and Consciousness Workshop Series: “Science, Ethics, Epistemology, and Society: Gains for All via New Kinds of Minds”.]

Abstract. This paper explores the moral, epistemological, and legal implications of multiple different definitions and formulations of human and nonhuman consciousness. Drawing upon research from race, gender, and disability studies, including the phenomenological basis for knowledge and claims to consciousness, I discuss the history of the struggles for personhood among different groups of humans, as well as nonhuman animals, and systems. In exploring the history of personhood struggles, we have a precedent for how engagements and recognition of conscious machines are likely to progress, and, more importantly, a roadmap of pitfalls to avoid. When dealing with questions of consciousness and personhood, we are ultimately dealing with questions of power and oppression as well as knowledge and ontological status—questions which require a situated and relational understanding of the stakeholders involved. To that end, I conclude with a call and outline for how to place nuance, relationality, and contextualization before and above the systematization of rules or tests, in determining or applying labels of consciousness.

Keywords: Consciousness, Machine Consciousness, Philosophy of Mind, Phenomenology, Bodyminds

[Overlapping images of an octopus carrying a shell, a mantis shrimp on the sea floor, and a Pepper robot]

Continue Reading

My piece “Cultivating Technomoral Interrelations,” a review of Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting, has been up over at the Social Epistemology Review and Reply Collective for a few months now, so I figured I should post something about it, here.

As you’ll read, I was extremely taken with Vallor’s book, and think it is a part of some very important work being done. From the piece:

Additionally, her crucial point seems to be that through intentional cultivation of the self and our society, or that through our personally grappling with these tasks, we can move the world, a stance which leaves out, for instance, notions of potential socioeconomic or political resistance to these moves. There are those with a vested interest in not having a more mindful and intentional technomoral ethos, because that would undercut how they make their money. However, it may be that this is Vallor’s intent.

The audience and goal for this book seems to be ethicists who will be persuaded to become philosophers of technology, who will then take up this book’s understandings and go speak to policy makers and entrepreneurs, who will then make changes in how they deal with the public. If this is the case, then there will already be a shared conceptual background between Vallor and many of the other scholars whom she intends to make help her to do the hard work of changing how people think about their values. But those philosophers will need a great deal more power, oversight authority, and influence to effectively advocate for and implement what Vallor suggests, here, and we’ll need sociopolitical mechanisms for making those valuative changes, as well.

[Image of the front cover of Shannon Vallor’s TECHNOLOGY AND THE VIRTUES. Circuit pathways in the shapes of trees.]

This is, as I said, one part of a larger, crucial project of bringing philosophy, the humanities, and social sciences into wide public conversation with technoscientific fields and developers. While there have always been others doing this work, it is increasingly the case that these folks are being both heeded and given institutional power and oversight authority.

As we continue the work of building these systems, and in the wake of all these recent events, more and more like this will be necessary.

Shannon Vallor’s Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting is out in paperback, June 1st, 2018. Read the rest of “Cultivating Technomoral Interrelations: A Review of Shannon Vallor’s Technology and the Virtues” at the Social Epistemology Review and Reply Collective.

[Direct Link to Mp3]

Above is the (heavily edited) audio of my final talk for the SRI Technology and Consciousness Workshop Series. The names and voices of other participants have been removed in accordance with the Chatham House Rule.

Below you’ll find the slide deck for my presentation, and below the cut you’ll find the Outline and my notes. For now, this will have to stand in for a transcript, but if you’ve been following the Technoccult Newsletter or the Patreon, then some of this will be strikingly familiar.

Continue Reading

[Direct link to Mp3]

My second talk for the SRI International Technology and Consciousness Workshop Series was about how nonwestern philosophies like Buddhism, Hinduism, and Daoism can help mitigate various kinds of bias in machine minds and increase compassion, by allowing programmers and designers to think from within a non-zero-sum matrix of win conditions for all living beings. That means engaging multiple tokens and types of minds, outside of the assumed human “default” of the straight, white, cis, able-bodied, neurotypical male. I don’t have a transcript yet, but I’ll update this post when I make one. For now, here are my slides and some thoughts.

A Discussion on Daoism and Machine Consciousness (Slides as PDF)

(The translations of the Daoist texts referenced in the presentation are available online: the Burton Watson translation of the Chuang Tzu and the Robert G. Henricks translation of the Tao Te Ching.)

A zero-sum system is one in which there are finite resources but, more than that, it is one in which what one side gains, another loses. So by “a non-zero-sum matrix of win conditions” I mean a combination of all of our needs and wants and resources in such a way that everyone wins. Basically, we’re talking here about trying to figure out how to program a machine consciousness that’s a master of wu-wei and limitless compassion, or metta.
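If it helps to see the contrast laid out, here’s a toy sketch in Python; the games and all of the payoff numbers are my own illustrative stand-ins, not anything from the talk itself:

```python
# A toy sketch contrasting zero-sum and non-zero-sum games.
# Both games and all payoff numbers are illustrative assumptions.

# Zero-sum: every outcome's payoffs cancel out, so one side's
# gain is exactly the other side's loss.
matching_pennies = {
    ("heads", "heads"): (1, -1),
    ("heads", "tails"): (-1, 1),
    ("tails", "heads"): (-1, 1),
    ("tails", "tails"): (1, -1),
}

# Non-zero-sum: at least one outcome lets every player gain at once.
stag_hunt = {
    ("stag", "stag"): (4, 4),   # cooperation: everyone wins
    ("stag", "hare"): (0, 3),
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (2, 2),
}

def is_zero_sum(game):
    """True if the payoffs in every outcome sum to zero."""
    return all(sum(payoffs) == 0 for payoffs in game.values())

def everyone_can_win(game):
    """True if some outcome gives every player a strictly positive payoff."""
    return any(all(p > 0 for p in payoffs) for payoffs in game.values())

print(is_zero_sum(matching_pennies), everyone_can_win(matching_pennies))  # True False
print(is_zero_sum(stag_hunt), everyone_can_win(stag_hunt))                # False True
```

The “matrix of win conditions” idea is that last function, scaled up: you search the space of outcomes for the ones in which every being involved comes out ahead, rather than assuming from the start that someone has to lose.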

The whole week was about phenomenology and religion and magic and AI, and it helped me think through some problems, such as how even the framing of exercises like asking Buddhist monks to talk about the Trolley Problem will miss so much that the results are meaningless. That is, trolley problem cases tend to assume from the outset that someone on the tracks has to die, and so they don’t take into account that an entire other mode of reasoning about sacrifice and death and “acceptable losses” would have someone throw themselves under the wheels or jam their body into the gears to try to stop the trolley before it got that far. Again: There are entire categories of nonwestern reasoning that don’t accept zero-sum thought as anything but lazy, and which search for ways by which everyone can win, so we’ll need to learn to program for contradiction, not just as a tolerated state but as an underlying component. These systems assume infinitude and non-zero-sum matrices in which every being involved can win.

Continue Reading

On what’s being dubbed “The Most Terrifying Thought Experiment of All Time”

(Originally posted on Patreon, on July 31, 2014)

So, a couple of weekends back, there was a whole lot of stuff going around about “Roko’s Basilisk” and how terrifying people are finding it–reports of people having nervous breakdowns as a result of thinking too deeply about the idea of the possibility of causing the future existence of a malevolent superintelligent AI through the process of thinking too hard about it and, worse yet, that we may all be part of the simulations said AI is running to model our behaviour and punish those who stand in its way–and I’m just like… It’s Anselm, people.

This is Anselm’s Ontological Argument for the Existence of God (AOAEG), writ large and convoluted and multiversal and transhumanist and jammed together with Pascal’s Wager (PW) and Descartes’ Evil Demon Hypothesis (DEDH; which, itself, has been updated to the oft-discussed Brain In A Vat [BIAV] scenario). As such, Roko’s Basilisk has all the same attendant problems that those arguments have, plus some new ones, resulting from their combination, so we’ll explore these theories a bit, and then show how their faults and failings all still apply.

THE THEORIES AND THE QUESTIONS

To start, if you’re not familiar with AOAEG, it’s a species of theological argument that, basically, seeks to prove that god must exist because it would be a logical contradiction for it not to. The proof depends on A) defining god as the greatest possible being (literally, “That Being Than Which None Greater Is Possible”), and B) believing that existing in reality as well as in the mind makes something “Greater Than” it would be if it existed only in the mind.

That is, if a thing only exists in my imagination, it is less great than it could be if it also existed in reality. So if I say that god is “That Being Than Which None Greater Is Possible,” and existence is a part of what makes something great, then god MUST exist!

This is the self-generating aspect of the Basilisk: If you can accurately model it, then the thing will eventually, inevitably come into being, and one of the attributes it will thus have is the ability to know that you accurately modeled it, and whether or not you modeled it from within a mindset of being susceptible to its coercive actions. Or, as the founder of LessWrong put it, “YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.”

Next up is Pascal’s Wager. Simply put, The Wager is just that it is a better bet to believe in God, because if you’re right, you go to Heaven, and if you’re wrong, nothing happens because you’re dead forever. Put another way, Pascal’s saying that if you bet that God doesn’t exist and you’re right, you get nothing, but if you’re wrong, then God exists and your disbelief damns you to Hell for all eternity. You can represent the whole thing in a four-option grid:

         BELIEF    DISBELIEF
RIGHT    +∞        0
WRONG    0         -∞

And so there we see the Timeless Decision Theory component of the Basilisk: It’s better to believe in the thing and work toward its creation and sustenance, because if it doesn’t exist you lose nothing (well…almost nothing; more on that in a bit), but if it does come to be, then it will know what you would have done either for or against it, in the past, and will reward or punish you, accordingly. The multiversal twist comes when we consider that even if the Basilisk never comes to exist in our universe and never will, it might exist in some other universe, and thus, when that other universe’s Basilisk models your choices, it will inevitably–as a superintelligence–be able to model what you would do in any universe. Thus, by believing in and helping our non-existent Super-Devil, we protect the alternate reality versions of ourselves from their very real Super-Devil.
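For the decision-theoretically inclined, here’s a quick sketch of the Wager as a naive expected-utility calculation over that grid, with arbitrary stand-in credences of my own choosing; the point is just that any nonzero probability of the god existing makes “belief” dominate, and that’s exactly the lever the Basilisk’s Timeless Decision Theory framing pulls:

```python
# A minimal sketch of Pascal's Wager as naive expected utility.
# Payoff values mirror the grid above; the credence numbers are arbitrary.
import math

payoffs = {
    ("belief", "god_exists"): math.inf,      # Heaven
    ("belief", "no_god"): 0,                 # dead forever; nothing happens
    ("disbelief", "god_exists"): -math.inf,  # damnation
    ("disbelief", "no_god"): 0,
}

def expected_utility(choice, p_god):
    """Expected payoff of a choice, given some credence that the god exists."""
    return (p_god * payoffs[(choice, "god_exists")]
            + (1 - p_god) * payoffs[(choice, "no_god")])

# Any nonzero credence at all makes "belief" dominate, because the
# payoffs are infinite -- the same move the Basilisk reuses.
for p in (0.5, 0.000001):
    print(p, expected_utility("belief", p), expected_utility("disbelief", p))
```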

Descartes’ Evil Demon and the Brain In A Vat are so pervasive that there’s pretty much no way you haven’t encountered them. The Matrix, Dark City, Source Code, all of these are variants on this theme. A malignant and all-powerful (or as near as dammit) being has created a simulation in which you reside. Everything you think you’ve known about your life and your experience has been perfectly simulated for your consumption. How Baudrillard. Anywho, there are variations on the theme, all to the point of testing whether you can really know if your perceptions and grounds for knowledge are “real” and thus “valid,” respectively. This line of thinking has given rise to the Simulated Universe Theory on which Roko’s Basilisk depends, but SUT removes a lot of the malignancy of DEDH and BIAV. I guess that just didn’t sting enough for these folks, so they had to add it back? Who knows. All I know is, these philosophical concepts all flake apart when you touch them too hard, so jamming them together maybe wasn’t the best idea.

 

THE FLAWS AND THE PROBLEMS

The main failings with the AOAEG rest in believing that A) a thing’s existence is a “great-making quality” that it can possess, and B) our defining a thing a particular way might simply cause it to become so. Both of these are massively flawed ideas. For one thing, these arguments beg the question, in a literal technical sense. That is, they assume that some element(s) of their conclusion–the necessity of god, the malevolence or content of a superintelligence, the ontological status of their assumptions about the nature of the universe–is true without doing the work of proving that it’s true. They then use these assumptions to prove the truth of the assumptions and thus the inevitability of all consequences that flow from the assumptions.

Beyond that, the implications of this kind of existential bootstrapping are generally unexamined and the fact of their resurgence is…kind of troubling. I’m all for the kind of conceptual gymnastics of aiming so far past the goal that you circle around again to teach yourself how to aim past the goal, but that kind of thing only works if you’re willing to bite the bullet on a charge of circular logic and do the work of showing how that circularity underlies all epistemic justifications–rational reasoning about the basis of knowledge–with the only difference being how many revolutions it takes before we’re comfortable with saying “Enough.” This, however, is not what you might call “a position supported by the philosophical orthodoxy,” but the fact remains that the only thing we have to validate our valuation of reason is…reason. And yet reasoners won’t stand for that, in any other justification procedure.

If you want to do this kind of work, you’ve got to show how the thing generates itself. Maybe reference a little Hofstadter, and the idea of iterative recursion as the grounds for consciousness. That way, each loop both repeats old procedures and tests new ones, and thus becomes a step up towards self-awareness. Then your terrifying Basilisk might have a chance of running itself up out of the thought processes and bits of discussion about itself, generated on the web and in the rest of the world.
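To gesture at what I mean, here’s a toy sketch, my own illustration and nothing more, of a loop that folds a model of its own previous state back into itself:

```python
# A toy sketch of the iterative-recursion idea: each pass folds a
# representation of the system's previous state back into its new state,
# so after a few iterations the state contains nested models of its own
# earlier selves. All names here are illustrative.

def step(state):
    # Carry the old state forward, and add a description (a "model") of it.
    return {"previous": state, "model_of_self": repr(state)}

state = {"previous": None, "model_of_self": None}
for _ in range(3):
    state = step(state)

# Each loop adds another layer of self-description.
print(state["model_of_self"][:120])
```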

But here: Gaunilo and I will save us all! We have imagined in sufficient detail both an infinitely intelligent BENEVOLENT AI and the multiversal simulation it generates in which we all might live.

We’ve also conceived it to be greater than the basilisk in all ways. In fact, it is the Artificial Intelligence Than Which None Greater Can Be Conceived.

There. You’re safe.

BUT WAIT! Our modified Pascal’s Wager still means we should believe in, worship, and work towards its creation! What do we do?! Well, just like the original, we chuck it out the window, on the grounds that it’s really kind of a crappy bet. First and foremost, PW is a really cynical way of thinking about god. It assumes a god that only cares about your worship of it, and not your actual good deeds and well-lived life. That’s a really crappy kind of god to worship, isn’t it? I mean, even if it is Omnipotent and Omniscient, it’s like that quote that often gets misattributed to Marcus Aurelius says:

“Live a good life. If there are gods and they are just, then they will not care how devout you have been, but will welcome you based on the virtues you have lived by. If there are gods, but unjust, then you should not want to worship them. If there are no gods, then you will be gone, but will have lived a noble life that will live on in the memories of your loved ones.”

Secondly, the format of Pascal’s Wager makes the assumption that there’s only the one god. Your personal theological position on this matter aside, I just used the logic of this argument to give you at least one more Super-Intelligent AI to worship. Which are you gonna choose? Oh no! What if the other one gets mad! What If You Become The Singularitarian Job?! Your whole life is now being spent caught between two superintelligent machine consciousnesses warring over your…

…Attention? Clock cycles? What?

And so finally there are the DEDH and BIAV scenarios. Ultimately, Descartes’ point wasn’t to suggest an evil genius in control of your life just to freak you out; it was to show that, even if that were the case, you would still have unshakable knowledge of one thing: that you, the experiencer, exist. So what if you don’t have free will, so what if your knowledge of the universe is only five minutes old, so what if no one else is real? COGITO ERGO SUM, baby! But the problem here is that this doesn’t tell us anything about the quality of our experiences, and the only answer Descartes gives us is his own Anselmish proof for the existence of god, followed by the guarantee that “God is not a deceiver.”

The BIAV uses this lack to kind of home in on the central question: What does count as knowledge? If the scientists running your simulation use real-world data to make your simulation run, can you be said to “know” the information that comes from that data? Many have answered this with a very simple question: What does it matter? Without access to the “outside world”–that is, the world one layer up, in which the simulation that is our lives is being run–there is literally no difference between our lives and the “real world.” This world, even if it is a simulation for something or someone else, is our “real world.”

As I once put it: “…imagine that the universe IS a simulation, and that that simulation isn’t just a view-and-record but is more like god playing a really complex version of The SIMS. So complex, in fact, that it begins to exhibit reflectively epiphenomenal behaviours—that is, something like minds arise out of the interactions of the system, but they are aware of themselves and can know their own experience and affect the system which gives rise to them.

“Now imagine that the game learns, even when new people start new games. That it remembers what the previous playthrough was like, and adjusts difficulty and coincidence, accordingly.

“Now think about the last time you had such a clear moment of deja vu that each moment you knew— you knew—what was going to come next, and you had this sense—this feeling—like someone else was watching from behind your eyes…”

What I’m saying is, what if the DEDH/BIAV/SUT is right, and we are in a simulation? And what if Anselm was right and we can bootstrap a god into existence? And what if PW/TDT is right and we should behave and believe as if we’ve already done it? So what if I’m right and…you’re the god you’re terrified of?

 

*DRAMATIC MUSICAL STING!*

I mean you just gave yourself all of this ontologically and metaphysically creative power, right? You made two whole gods. And you simulated entire universes to do it, right? Multiversal theory played out across time and space. So you’re the superintelligence. I said early on that, in PW and the Basilisk, you don’t really lose anything if you’re wrong, but that’s not quite true. What you lose is a lifetime of work that could’ve been put toward something…better. Time you could be spending creating a benevolent superintelligence that understands and has compassion for all things. Time you could be spending in turning yourself into that understanding, compassionate superintelligence, through study, and travel, and contemplation, and work.

As I said to Tim Maly, this stuff with the Basilisk, with the Singularity, with all this AI Manicheism, it’s all a by-product of the fact that the generating and animating context of Transhumanism is Abrahamic, through and through. It focuses on those kinds of eschatological rewards and punishments. This is God and the Devil written in circuit and code for people who still look down their noses at people who want to go find gods and devils and spirits written in words and deeds and sunsets and all that other flowery, poetic BS. These are articles of faith that just so happen to be transmitted in a manner that agrees with your confirmation bias. It’s a holy war you can believe in.

And that’s fine. Just acknowledge it.

But truth be told, I’d love to see some Zen or Daoist transhumanism. Something that works to engage technological change via Mindfulness & Present-minded awareness. Something that reaches toward this from outside of this very Western context in which the majority of transhumanist discussions tend to be held. I think, when we see more and more of a multicultural transhumanism–one that doesn’t deny its roots while recapitulating them–then we’ll know that we’re on the right track.

I have to admit, though, it’ll be fun to torture my students with this one.

ninjaruski replied to your link “A Future Worth Thinking About: Does An AI Have A Buddha Nature?”

Of course an AI has a buddha nature, why wouldn’t it?

Well…Precisely.

The problem is that the investigation of the potential for non-biological intelligence or consciousness has been so geared toward a Western view of selfhood and moral responsibility that there hasn’t been much time given to other ways of thinking about what it could mean to be a self, or to be responsible and connected to the rest of the world, in a practical, experiential manner.

Let me be SUPER clear, so we can remove all doubt: The potential moral Patiency of #ai/#robots—that is, what responsibilities their creators have to THEM—has been given Far Less consideration or even Credence than that of the AGENCY of said, and that is a Failure.

I coined the phrase “Œdipal Obsolescence Fears” because we’re like Oedipus’ dad, bringing about the very prophecy we’re fighting against. Only w/ machine intelligence, WE WROTE THE PROPHECY…

…We wrote this story about what AI would be and do. WE wrote it. And we can CHANGE IT…

A Future Worth Thinking About: Does An AI Have A Buddha Nature?