My second talk for the SRI International Technology and Consciousness Workshop Series was about how nonwestern philosophies like Buddhism, Hinduism, and Daoism can help mitigate various kinds of bias in machine minds and increase compassion, by allowing programmers and designers to think from within a non-zero-sum matrix of win conditions for all living beings. That means engaging multiple tokens and types of minds, outside of the assumed human “default” of the straight, white, cis, able-bodied, neurotypical male. I don’t have a transcript yet; I’ll update this post when I make one. But for now, here are my slides and some thoughts.
A Discussion on Daoism and Machine Consciousness (Slides as PDF)
(The translations of the Daoist texts referenced in the presentation are available online: the Burton Watson translation of the Chuang Tzu and the Robert G. Henricks translation of the Tao Te Ching.)
A zero-sum system is one in which there are finite resources, but more than that, it is one in which whatever one side gains, another loses. So by “a non-zero-sum matrix of win conditions” I mean a way of combining all of our needs, wants, and resources such that everyone wins. Basically, we’re talking here about trying to figure out how to program a machine consciousness that’s a master of wu-wei and limitless compassion, or metta.
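If it helps to see that distinction concretely, here is a minimal sketch (in Python, with purely illustrative choices and payoff numbers of my own devising, not anything from the talk): in a zero-sum game every outcome’s payoffs cancel each other out, while a non-zero-sum game contains outcomes in which everyone involved gains.

```python
# A minimal sketch with made-up payoffs: each outcome maps to an
# (agent_a, agent_b) payoff pair. Names and numbers are illustrative only.

zero_sum = {
    ("cooperate", "cooperate"): (0, 0),
    ("cooperate", "defect"):    (-1, 1),   # whatever A loses, B gains
    ("defect",    "cooperate"): (1, -1),
    ("defect",    "defect"):    (0, 0),
}

non_zero_sum = {
    ("cooperate", "cooperate"): (2, 2),    # both agents can win at once
    ("cooperate", "defect"):    (-1, 1),
    ("defect",    "cooperate"): (1, -1),
    ("defect",    "defect"):    (0, 0),
}

def is_zero_sum(game):
    """True only if every outcome's payoffs cancel each other out."""
    return all(a + b == 0 for a, b in game.values())

print(is_zero_sum(zero_sum))      # True
print(is_zero_sum(non_zero_sum))  # False: mutual win conditions exist
```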
The whole week was about phenomenology and religion and magic and AI, and it helped me think through some problems, like how even the framing of exercises like asking Buddhist monks to talk about the Trolley Problem will miss so much that the results are meaningless. That is, trolley problem cases tend to assume from the outset that someone on the tracks has to die, and so they don’t take into account that an entire other mode of reasoning about sacrifice, death, and “acceptable losses” would have someone throw themselves under the wheels or jam their body into the gears to try to stop the trolley before it got that far. Again: there are entire categories of nonwestern reasoning that don’t accept zero-sum thought as anything but lazy, and which search for ways by which everyone can win, so we’ll need to learn to program for contradiction not just as a tolerated state but as an underlying component. These systems assume infinitude and non-zero-sum matrices where every being involved can win.
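One way, among many, to give contradiction that kind of first-class status computationally is a Belnap-style four-valued logic; this is my choice of illustration rather than anything proposed in the talk. The sketch below treats a claim’s status as the set of evidence held about it, so “both true and false” is an ordinary value the system can keep reasoning with instead of an error.

```python
# A minimal sketch, assuming a Belnap-style four-valued logic (illustrative
# only). A claim's status is the set of evidence held about it, so a
# contradiction (evidence both for and against) is an ordinary value.

NEITHER = frozenset()
TRUE    = frozenset({True})
FALSE   = frozenset({False})
BOTH    = frozenset({True, False})

def neg(v):
    # Negation swaps evidence-for and evidence-against.
    return frozenset(not x for x in v)

def conj(a, b):
    # "A and B" is supported exactly when both conjuncts are supported,
    # and contradicted when either conjunct is contradicted.
    out = set()
    if True in a and True in b:
        out.add(True)
    if False in a or False in b:
        out.add(False)
    return frozenset(out)

# A contradictory input stays contradictory instead of poisoning everything:
print(conj(BOTH, TRUE) == BOTH)    # True
print(conj(BOTH, FALSE) == FALSE)  # True: the conjunction is simply false
print(neg(BOTH) == BOTH)           # True
```

The only point of the sketch is that a contradiction doesn’t halt or trivialize the rest of the reasoning; whether this particular formalism is the right one for machine minds is a separate question.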
Metta, or lovingkindness, is an exercise in extending to people (including ourselves) the kindness and compassion that they need and want, in exactly the way they need and want it. It requires a specific engagement with each and every individual involved, and it specifically cannot rely solely on any one abstracted model. It is practical, experiential, lived compassion, one that works to place specific contexts in conversation with various abstracted perspectives about desires and needs.
My starting positions here are that, 1) in order to do this work correctly, we must refrain from resting in abstraction, or else our most egregious failure states will be represented by models which decide to do something “for someone’s own good” before they actually engage with the lived experience of the stakeholders in question. That is, we have to try to understand each other well enough to perform mutually modeled interfaces of what you’d have done unto you and what they’d have you do unto them. I know it doesn’t have the same snap as “do unto others,” but it’s the only way we’ll make it through.
2) There are multiple types of consciousness, even within the framework of the human spectrum, and the expression of or search for any one type is in no way meant to discount, demean, or erase any of the others. In fact, we will need to seek out, recognize, and learn to communicate with as many types of consciousness as may exist, in order to survive and thrive in any meaningful way. Again, not doing so represents an egregious failure condition. With that in mind, I use “machine consciousness” to mean a machine with the capability of modelling a sense of interiority and selfness similar enough to what we know of biological consciousnesses to communicate it with us, not just a generalized computational functionalist representation, as in “AGI.” For the sake of this, as I’ve related elsewhere, I (perhaps somewhat paradoxically) think the term “artificial intelligence” is problematic. Anything that does the things we want machine minds to do is genuinely intelligent, not “artificially” so, where we use “artificial” to mean “fake” or “contrived.” To be clear, I’m specifically problematizing the “natural/technological” divide that gives us “art vs artifice,” for reasons previously outlined here.
And so we have to recognise the needs and ontological status of other minds, in such a way that their operation and expression can come to be understood by us, and we can seek to make ourselves understood. Some minds/consciousnesses/intelligences will have a harder time communicating with each other than others, but that’s not to say that one is any more “real” or “natural” than the others; rather, it is merely indicative of the near tautology that we use anthropocentric modelling because we are anthropos. Our anthropocentrism is a place from which to start generating a perspective, one we must modify as we come to understand how flawed and wrong our human-based understandings are. Our anthropocentrism is not dispositive proof that any and all types of minds must be like “ours.”
This is a presentation on why Daoism’s concept of wu-wei might be crucial to doing all of this. Wu-wei entails “knowing when and how not to act,” and knowing why that can’t just be an excuse for complacency or laziness. If we are to engage these tools, then we have to go about critically applying compassion and nondoing.
My talk after this is about why strict legalist definitions of personhood might not be the best way to go about the moral and ethical engagement of nonhuman minds.
Until Next Time.