{"id":5204,"date":"2017-10-21T15:35:41","date_gmt":"2017-10-21T19:35:41","guid":{"rendered":"https:\/\/afutureworththinkingabout.com\/?p=5204"},"modified":"2017-12-23T02:05:15","modified_gmt":"2017-12-23T07:05:15","slug":"a-discussion-on-daoism-and-machine-consciousness","status":"publish","type":"post","link":"https:\/\/afutureworththinkingabout.com\/?p=5204","title":{"rendered":"Audio &#038; Presentation: A Discussion on Daoism and Machine Consciousness"},"content":{"rendered":"<audio class=\"wp-audio-shortcode\" id=\"audio-5204-1\" preload=\"none\" style=\"width: 100%;\" controls=\"controls\"><source type=\"audio\/mpeg\" src=\"https:\/\/s3-us-west-1.amazonaws.com\/patreon-posts\/13067997299595990810.mp3?_=1\" \/><a href=\"https:\/\/s3-us-west-1.amazonaws.com\/patreon-posts\/13067997299595990810.mp3\">https:\/\/s3-us-west-1.amazonaws.com\/patreon-posts\/13067997299595990810.mp3<\/a><\/audio>\n<p>[<a href=\"https:\/\/s3-us-west-1.amazonaws.com\/patreon-posts\/13067997299595990810.mp3\">Direct link to Mp3<\/a>]<\/p>\n<p>My second talk for the SRI International Technology and Consciousness Workshop Series was about how nonwestern philosophies like Buddhism, Hinduism, and Daoism can help mitigate various kinds of bias in machine minds and increase compassion by allowing programmers and designers to think from within a non-zero-sum matrix of win conditions for all living beings, meaning engaging multiple tokens and types of minds, outside of the assumed human &#8220;default&#8221; of straight, white, cis, ablebodied, neurotypical male. I don&#8217;t have a transcript, yet, and I&#8217;ll update it when I make one. 
But for now, here are my slides and some thoughts.<br \/>\n<iframe loading=\"lazy\" src=\"https:\/\/docs.google.com\/presentation\/d\/e\/2PACX-1vQw52jM7hOoaovnzQx5Isyu3BLvWLXBPq-h4Ezr4-HVVDzg1F7_hxZh20sPIlyWTn6-TZGoNrx0nNYS\/embed?start=false&amp;loop=false&amp;delayms=3000\" width=\"640\" height=\"500\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"><\/iframe><\/p>\n<p style=\"text-align: center;\"><a href=\"https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2017\/10\/A-Discussion-on-Daoism-and-Machine-Consciousness.pdf\">A Discussion on Daoism and Machine Consciousness (Slides as PDF)<\/a><\/p>\n<p>(The translations of the Daoist texts referenced in the presentation are available online: The <a href=\"https:\/\/terebess.hu\/english\/chuangtzu.html\">Burton Watson translation of the Chuang Tzu<\/a>\u00a0and the <a href=\"https:\/\/terebess.hu\/english\/tao\/henricks.html\">Robert G. Henricks translation of the Tao Te Ching<\/a>.)<\/p>\n<p>A zero-sum system is one in which there are finite resources, but more than that, it is one in which what one side gains, another loses. So by &#8220;a non-zero-sum matrix of win conditions&#8221; I mean a combination of all of our needs and wants and resources in such a way that everyone wins. Basically, we&#8217;re talking here about trying to figure out how to program a machine consciousness that&#8217;s a master of wu-wei and limitless compassion, or <i>metta<\/i>.<\/p>\n<p>The whole week was about phenomenology and religion and magic and AI, and it helped me think through some problems, like how even the <b><i>framing<\/i><\/b> of exercises like <a href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2017\/06\/how-do-buddhist-monks-think-about-the-trolley-problem\/532092\/\">asking Buddhist monks to talk about the Trolley Problem<\/a> will miss so much that the results are meaningless. 
That is, the trolley problem cases tend to assume from the outset that someone on the tracks has to die, and so they don&#8217;t take into account that an entire other mode of reasoning about sacrifice and death and &#8220;acceptable losses&#8221; would have someone throw themselves under the wheels or <a href=\"https:\/\/www.google.com\/search?q=snowpiercer&amp;ie=utf-8&amp;oe=utf-8\">jam their body into the gears<\/a> to try to stop it before it got that far. Again: There are entire categories of nonwestern reasoning that don&#8217;t accept zero-sum thought as anything but lazy, and which search for ways by which everyone can win, so we&#8217;ll need to learn to program for contradiction not just as a tolerated state but as an underlying component. These systems assume infinitude and non-zero-sum matrices where every being involved can win.<\/p>\n<p><!--more--><\/p>\n<p>Metta, or loving-kindness compassion, is an exercise in extending to people (including ourselves) the kindness and compassion that they need and want, in exactly the way they need and want it. It <b><i>requires<\/i><\/b> a specific engagement with each and every individual involved, and specifically <b><i>cannot<\/i><\/b> solely rely on any one abstracted model. It is practical, experiential, lived compassion that works to place specific contexts in conversation with various abstracted perspectives about desires and needs.<\/p>\n<p>My starting positions, here, are that: 1) in order to do the work correctly, we must refrain from resting in abstraction, or else our most egregious failure states will be represented by models which decide to do something \u201cfor someone\u2019s own good\u201d before they actually engage with the lived experience of the stakeholders in question. 
That is, we have to try to understand each other well enough to perform mutually modeled interfaces of what you\u2019d have done unto you and what they\u2019d have you do unto them. I know it doesn&#8217;t have the same snap as &#8220;do unto others,&#8221; but it&#8217;s the only way we&#8217;ll make it through.<\/p>\n<p><div id=\"attachment_5211\" style=\"width: 278px\" class=\"wp-caption aligncenter\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-5211\" class=\"size-medium wp-image-5211\" src=\"https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2017\/10\/Yin-YangRing2-268x300.jpg\" alt=\"\" width=\"268\" height=\"300\" srcset=\"https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2017\/10\/Yin-YangRing2-268x300.jpg 268w, https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2017\/10\/Yin-YangRing2-768x860.jpg 768w, https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2017\/10\/Yin-YangRing2.jpg 889w\" sizes=\"auto, (max-width: 268px) 100vw, 268px\" \/><p id=\"caption-attachment-5211\" class=\"wp-caption-text\">[An image of a traditional Yin-Yang carved in a silver ring]<\/p><\/div>2) There are multiple types of consciousness, even within the framework of the human spectrum, and the expression of or search for any one type is in <b><i>no way<\/i><\/b> meant to discount, demean, or erase any of the others. In fact, it is the case that we will need to seek to recognize and learn to communicate with as many types of consciousness as may exist, in order to survive and thrive in any meaningful way. Again, not doing so represents an egregious failure condition. 
With that in mind, I use &#8220;machine consciousness&#8221; to mean a machine with the capability of modelling a sense of interiority and selfness similar enough to what we know of biological consciousnesses to communicate it with us, not just a generalized computational functionalist representation, as in &#8220;AGI.&#8221;<\/p>\n<p>For the sake of this, as I&#8217;ve related elsewhere, I (perhaps somewhat paradoxically) think the term &#8220;artificial intelligence&#8221; is problematic. Anything that does the things we want machine minds to do is <em><strong>genuinely<\/strong><\/em> intelligent, not &#8220;artificially&#8221; so, where we use &#8220;artificial&#8221; to mean &#8220;fake&#8221; or &#8220;contrived.&#8221; To be clear, I&#8217;m specifically problematizing the &#8220;natural\/technological&#8221; divide that gives us &#8220;art vs artifice,&#8221; for reasons previously outlined <a href=\"http:\/\/scholarworks.gsu.edu\/philosophy_theses\/37\/\">here<\/a>.<\/p>\n<p>And so, we have to recognise the needs and ontological status of <a href=\"https:\/\/afutureworththinkingabout.com\/?p=5182\">other minds<\/a>, in such a way that their operation and expression can come to be understood by us, and we can seek to make ourselves understood. Some minds\/consciousnesses\/intelligences will have a harder time communicating with each other than others, but that&#8217;s not to say that one is any more &#8220;real&#8221; or &#8220;natural&#8221; than the others; rather, it is merely indicative of the near tautology that we use anthropocentric modelling because we are <em>anthropos.<\/em> Our anthropocentrism is a place to start generating a perspective, one which we <b><i>must<\/i><\/b> modify as we come to understand how flawed and wrong our human-based understandings are. 
Our anthropocentrism is <em><strong>not<\/strong><\/em> a dispositive proof that any and all types of minds must be like &#8220;ours.&#8221;<\/p>\n<p>This is a presentation on why Daoism&#8217;s concept of <em>wu-wei<\/em> might be crucial to doing all of this. It entails &#8220;knowing when and how not to act&#8221; and knowing why that can&#8217;t just be an excuse for complacency or laziness. If we are to engage these tools, then we have to go about critically applying compassion and nondoing.<\/p>\n<p>My talk, after this, is about why strict legalist definitions of personhood might not be the best way to go about the moral and ethical engagement of nonhuman minds.<\/p>\n<p>Until Next Time.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>https:\/\/s3-us-west-1.amazonaws.com\/patreon-posts\/13067997299595990810.mp3 [Direct link to Mp3] My second talk for the SRI International Technology and Consciousness Workshop Series was about how nonwestern philosophies like Buddhism, Hinduism, and Daoism can help mitigate various kinds of bias in machine minds and increase compassion by allowing programmers and designers to think from within a non-zero-sum matrix of win conditions 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[1],"tags":[8,959,73,1198,85,1004,101,135,1221,186,1129,190,218,245,1216,271,278,1220,1218,1215,492,493,496,1214,944,560,584,1217,627,628,1026,701,730,803,1030,1223,1222,1219],"class_list":["post-5204","post","type-post","status-publish","format-standard","hentry","category-uncategorized","tag-a-future-worth-thinking-about","tag-algorithmic-systems","tag-artificial-intelligence","tag-audio","tag-autonomous-created-intelligence","tag-autonomously-creative-intelligence","tag-bias","tag-buddhism","tag-chuang-tzu","tag-comparative-religion","tag-compassion","tag-consciousness","tag-daoism","tag-distributed-machine-consciousness","tag-eastern-philosophy","tag-embodied-machine-consciousness","tag-ethics","tag-lao-tzu","tag-laozi","tag-loving-kindness","tag-machine-consciousness","tag-machine-ethics","tag-machine-minds","tag-metta","tag-my-voice","tag-my-words","tag-nonhuman-personhood","tag-nonwestern-philosophy","tag-philosophy","tag-philosophy-of-mind","tag-philosophy-of-technology","tag-religious-studies","tag-scholar-of-comparative-religion","tag-taoism","tag-technological-ethics","tag-technology-and-religion","tag-wu-wei","tag-zhuangzi"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p5WByP-1
lW","jetpack_likes_enabled":true,"jetpack-related-posts":[{"id":5375,"url":"https:\/\/afutureworththinkingabout.com\/?p=5375","url_meta":{"origin":5204,"position":0},"title":"2017 SRI Technology and Consciousness Workshop Series Final Report","author":"Damien P. Williams","date":"March 8, 2019","format":false,"excerpt":"So, as you know, back in the summer of 2017 I participated in SRI International\u2019s Technology and Consciousness Workshop Series. This series was an eight week program of workshops the current state of the field around, the potential future paths toward, and the moral and social implications of the notion\u2026","rel":"","context":"In \"A Future Worth Thinking About\"","block_context":{"text":"A Future Worth Thinking About","link":"https:\/\/afutureworththinkingabout.com\/?tag=a-future-worth-thinking-about"},"img":{"alt_text":"Image of a rectangular name card with a stylized \"Technology & Consciousness\" logo, at the top, the name Damien Williams in bold in the middle, and SRI International italicized at the bottom; to the right a blurry wavy image of what appears to be a tree with a person standing next to it and another tree in the background to the left., all partially mirrored in a surface at the bottom of the image.","src":"https:\/\/i0.wp.com\/afutureworththinkingabout.com\/wp-content\/uploads\/2019\/03\/20190308_021228.jpg?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/afutureworththinkingabout.com\/wp-content\/uploads\/2019\/03\/20190308_021228.jpg?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/afutureworththinkingabout.com\/wp-content\/uploads\/2019\/03\/20190308_021228.jpg?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/afutureworththinkingabout.com\/wp-content\/uploads\/2019\/03\/20190308_021228.jpg?resize=700%2C400&ssl=1 2x, https:\/\/i0.wp.com\/afutureworththinkingabout.com\/wp-content\/uploads\/2019\/03\/20190308_021228.jpg?resize=1050%2C600&ssl=1 3x, 
https:\/\/i0.wp.com\/afutureworththinkingabout.com\/wp-content\/uploads\/2019\/03\/20190308_021228.jpg?resize=1400%2C800&ssl=1 4x"},"classes":[]},{"id":5316,"url":"https:\/\/afutureworththinkingabout.com\/?p=5316","url_meta":{"origin":5204,"position":1},"title":"My Appearance on The Machine Ethics Podcast&#8217;s A.I. Retreat Episode","author":"Damien P. Williams","date":"October 23, 2018","format":false,"excerpt":"As you already know, we went to the second Juvet A.I. Retreat, back in September. If you want to hear several of us talk about what we got up to at the then you're in luck because here are several conversations conducted by Ben Byford of the Machine Ethics Podcast.\u2026","rel":"","context":"In \"algorithmic bias\"","block_context":{"text":"algorithmic bias","link":"https:\/\/afutureworththinkingabout.com\/?tag=algorithmic-bias"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/img.youtube.com\/vi\/ownE2zxTN2U\/0.jpg?resize=350%2C200","width":350,"height":200},"classes":[]},{"id":5249,"url":"https:\/\/afutureworththinkingabout.com\/?p=5249","url_meta":{"origin":5204,"position":2},"title":"&#8220;We Built Them From Us&#8221;: My Appearance on the TEAM HUMAN Podcast","author":"Damien P. Williams","date":"February 22, 2018","format":false,"excerpt":"Earlier this month I was honoured to have the opportunity to sit and talk to Douglas Rushkoff on his TEAM HUMAN podcast. If you know me at all, you know this isn't by any means the only team for which I play, or even the only way I think about\u2026","rel":"","context":"In \"algorithmic bias\"","block_context":{"text":"algorithmic bias","link":"https:\/\/afutureworththinkingabout.com\/?tag=algorithmic-bias"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":4966,"url":"https:\/\/afutureworththinkingabout.com\/?p=4966","url_meta":{"origin":5204,"position":3},"title":"BBC: &#8220;Tech giants pledge $1bn for &#8216;altruistic AI&#8217; venture, OpenAI&#8221;","author":"Damien P. 
Williams","date":"December 12, 2015","format":false,"excerpt":"This headline comes from a piece over at the BBC that opens as follows: Prominent tech executives have pledged $1bn (\u00a3659m) for OpenAI, a non-profit venture that aims to develop artificial intelligence (AI) to benefit humanity. The venture's backers include Tesla Motors and SpaceX CEO Elon Musk, Paypal co-founder Peter\u2026","rel":"","context":"In \"A Future Worth Thinking About\"","block_context":{"text":"A Future Worth Thinking About","link":"https:\/\/afutureworththinkingabout.com\/?tag=a-future-worth-thinking-about"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":5276,"url":"https:\/\/afutureworththinkingabout.com\/?p=5276","url_meta":{"origin":5204,"position":4},"title":"Nonhuman and Nonbiological Phenomenology","author":"Damien P. Williams","date":"May 15, 2018","format":false,"excerpt":"Late last month, I was at Theorizing the Web, in NYC, to moderate Panel B3, \"Bot Phenomenology,\" in which I was very grateful to moderate a panel of people I was very lucky to be able to bring together. Johnathan Flowers, Emma Stamm, and Robin Zebrowski were my interlocutors in\u2026","rel":"","context":"In \"alterity\"","block_context":{"text":"alterity","link":"https:\/\/afutureworththinkingabout.com\/?tag=alterity"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":4812,"url":"https:\/\/afutureworththinkingabout.com\/?p=4812","url_meta":{"origin":5204,"position":5},"title":"Someone Asked &#8220;I think I read on your tumblr recently that there would probably be a difference between human consciousness and machine consciousness.  Would this be due to the immanent nature of human consciousness and the derivative nature of a machines consciousness?&#8221;","author":"Damien P. Williams","date":"February 9, 2015","format":false,"excerpt":"No, not really. 
The nature of consciousness is the nature of consciousness, whatever that nature \u201cIs.\u201d Organic consciousness can be described as derivative, in that what we are arises out of the processes and programming of individual years and collective generations and eons. So human consciousness and machine consciousness will\u2026","rel":"","context":"In \"A Future Worth Thinking About\"","block_context":{"text":"A Future Worth Thinking About","link":"https:\/\/afutureworththinkingabout.com\/?tag=a-future-worth-thinking-about"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]}],"_links":{"self":[{"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/posts\/5204","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=5204"}],"version-history":[{"count":10,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/posts\/5204\/revisions"}],"predecessor-version":[{"id":5230,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/posts\/5204\/revisions\/5230"}],"wp:attachment":[{"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=5204"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=5204"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=5204"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}