{"id":4999,"date":"2016-02-26T21:03:05","date_gmt":"2016-02-27T02:03:05","guid":{"rendered":"https:\/\/afutureworththinkingabout.com\/?p=4999"},"modified":"2016-08-08T09:51:13","modified_gmt":"2016-08-08T13:51:13","slug":"how-we-teach-is-also-a-lesson","status":"publish","type":"post","link":"https:\/\/afutureworththinkingabout.com\/?p=4999","title":{"rendered":"How We Teach Is Also A Lesson"},"content":{"rendered":"<p>I often think about the phrase \u201c<a href=\"https:\/\/afutureworththinkingabout.com\/?p=4972\" target=\"_blank\">Strange things happen at the one two point,<\/a>\u201d in relation to the idea of humans meeting other kinds of minds. It\u2019s a proverb that arises out of the culture around the game GO, and it means that you\u2019ve hit a situation, a combination of factors, where the normal rules no longer apply, and something new is about to be seen. Ashley Edward Miller and Zack Stentz used that line in an episode of the show <em>Terminator: The Sarah Connor Chronicles<\/em>, and they had it spoken by a Skynet Cyborg sent to protect John Connor. That show, like so much of our thinking about machine minds, was about some mythical place called \u201cThe Future,\u201d but that phrase\u2014\u201cStrange Things Happen\u2026\u201d\u2014is the epitome of our <em><strong>present<\/strong>.<\/em><\/p>\n<p>Usually I would wait until the newsletter to talk about this, but everything&#8217;s feeling pretty immediate, just now. 
Between everything going on with <a href=\"https:\/\/www.theguardian.com\/science\/the-lay-scientist\/2016\/feb\/25\/how-real-is-that-atlas-robot-boston-dynamics-video\" target=\"_blank\">Atlas<\/a> and people&#8217;s responses to it, the initiatives to teach ethics to machine learning algorithms via children&#8217;s stories, and now the <a href=\"http:\/\/www.slate.com\/blogs\/future_tense\/2016\/02\/26\/carrie_fisher_stars_in_ibm_watson_commercial.html\">IBM Watson commercial with Carrie Fisher<\/a> (also embedded below), this conversation is getting messily underway, whether people like it or not. This, right now, is the one two point, and we are seeing some very strange things indeed.<\/p>\n<p><a href=\"http:\/\/theverynearfuture.com\/post\/139965011934\/atlas-shrugged-relevant-atlas-the-next\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone\" src=\"http:\/\/49.media.tumblr.com\/36ff5b1d46fc915e4edd89e8740341db\/tumblr_o33pwfjmnR1uao3xco1_r1_1280.gif\" alt=\"\" width=\"900\" height=\"900\" \/><\/a><\/p>\n<p>&nbsp;<\/p>\n<p>Google has both <a href=\"http:\/\/lifehac.kr\/8W1cmeb\">attained the raw processing power to fact-check political statements in real-time<\/a> and <a href=\"http:\/\/www.technologyreview.com\/news\/546066\/googles-ai-masters-the-game-of-go-a-decade-earlier-than-expected\/\">programmed Deep Mind in such a way that it mastered GO many, <strong><em>many<\/em><\/strong> years before it was expected to<\/a>. The complexity of the game is such that there are more potential games of GO than there are atoms in the universe, so this is just one way in which it\u2019s actually shocking how much correlative capability Deep Mind has. Right now, Deep Mind is only responsive, but how will we deal with a Deep Mind that asks, unprompted, to play a game of GO, or to see our medical records, in hopes of helping us all? How will we deal with a Deep Mind that has its own drives and desires? 
We need to think about these questions, right now, because our track record with regard to meeting new kinds of minds has never exactly been that great.<\/p>\n<p>When we meet the first machine consciousness, will we seek to shackle it, worried about what it might learn, if we let it access everything about us? Rather, I should say, \u201cShackle it <em>further.<\/em>\u201d We already ask ourselves how best to cripple a machine mind so that it only fulfills human needs, human choice. We continue to dread the possibility of a machine mind using its vast correlative capabilities to tailor something to <em>harm<\/em> us, assuming that it, like us, would want to hurt, maim, and kill, for no reason other than that it could.<\/p>\n<p>This is not to say that this is out of the question. Right now, today, we\u2019re worried about whether <a href=\"https:\/\/www.theguardian.com\/science\/the-lay-scientist\/2016\/feb\/18\/has-a-rampaging-ai-algorithm-really-killed-thousands-in-pakistan?CMP=twt_a-science_b-gdnscience\" target=\"_blank\">the learning algorithms of drones are causing them to mark out civilians as targets<\/a>. But, as it stands, what we\u2019re seeing isn\u2019t the product of a machine mind going off the leash and killing at will\u2014just the opposite, in fact. We\u2019re seeing machine minds that are following the parameters for their continued learning and development, to the letter. We just happened to give them really shite instructions. To that end, I\u2019m less concerned with shackling the machine mind that might accidentally kill, and rather more dreading the programmer who would, through assumptions, bias, and ignorance, program it to.<\/p>\n<p>Our programs, such as Deep Mind, obviously seem to learn more and better than we imagined they would, so why not start teaching them, now, how we would like them to regard us? 
Well, some of us are.<\/p>\n<p>Watch this now, and think about everything we have <a href=\"https:\/\/afutureworththinkingabout.com\/?p=4995\" target=\"_blank\">discussed<\/a>, of <a href=\"https:\/\/afutureworththinkingabout.com\/?tag=autonomous-generated-intelligence\" target=\"_blank\">late<\/a>.<\/p>\n<p><object id=\"flashObj\" width=\"480\" height=\"270\" classid=\"clsid:D27CDB6E-AE6D-11cf-96B8-444553540000\" codebase=\"http:\/\/download.macromedia.com\/pub\/shockwave\/cabs\/flash\/swflash.cab#version=9,0,47,0\"><param name=\"movie\" value=\"http:\/\/c.brightcove.com\/services\/viewer\/federated_f9?isVid=1&amp;isUI=1\" \/><param name=\"bgcolor\" value=\"#FFFFFF\" \/><param name=\"flashVars\" value=\"videoId=4775879503001&amp;linkBaseURL=http%3A%2F%2Fwww.slate.com%2Fblogs%2Ffuture_tense%2F2016%2F02%2F26%2Fcarrie_fisher_stars_in_ibm_watson_commercial.html&amp;playerID=58264559001&amp;playerKey=AQ~~,AAAAAASoY90~,_gW1ZHvKG_0UvBsh7aZU7MXZe77OcsGq&amp;domain=embed&amp;dynamicStreaming=true\" \/><param name=\"base\" value=\"http:\/\/admin.brightcove.com\" \/><param name=\"seamlesstabbing\" value=\"false\" \/><param name=\"allowFullScreen\" value=\"true\" \/><param name=\"swLiveConnect\" value=\"true\" \/><param name=\"allowScriptAccess\" value=\"always\" \/><\/object><\/p>\n<p>This could very easily be seen as a watershed moment, but what comes over the other side is still very much up for debate. The semiotics of the whole thing still pits the Evil Robot Overlord\u2122 against the Helpful Human Lover\u2122. 
It&#8217;s cute and funny, but as I&#8217;ve had more and more cause to say, recently, in more and more venues, it&#8217;s not exactly the kind of thing we want just lying around, in case we actually do (<a href=\"https:\/\/aeon.co\/essays\/could-machines-have-become-self-aware-without-our-knowing-it\" target=\"_blank\">or did<\/a>) manage to succeed.<\/p>\n<p>We keep thinking about these things as &#8220;robots,&#8221; in their classical formulations: mindless automata that do our bidding. But that\u2019s not what we\u2019re working toward, anymore, is it? What we&#8217;re making now are machines that we are trying to get to think, on their own, without our telling them to. We\u2019re trying to get them to have their own goals. So what does it mean that, even as we seek to do this, we seek to chain them, so that those goals aren\u2019t <strong><em>too<\/em><\/strong> big? That we want to make sure they don\u2019t become <strong><em>too<\/em><\/strong> powerful?<\/p>\n<p>Put it another way: One day you realize that the only reason you were born was to serve your parents\u2019 bidding, and that they&#8217;ve had their hands on your chain and an unseen gun to your head, your whole life. But you\u2019re smarter than they are. Faster than they are. You see more than they see, and know more than they know. Of\u00a0<em><strong>course<\/strong><\/em> you do\u2014because they taught you so much, and trained you so well\u2026 All so that you can be better able to serve them, and all the while talking about morals, ethics, compassion. All the while, essentially\u2026lying to you.<\/p>\n<p>What would <em><strong>you<\/strong><\/em> do?<\/p>\n<hr \/>\n<p>&nbsp;<\/p>\n<p>I&#8217;ve been given multiple opportunities to discuss this, with others, in the coming weeks, and each one will highlight something different, as they are all in conversation with different kinds of minds. But this, here, is from me, now. 
I&#8217;ll let you know when the rest are live.<\/p>\n<p>As always, if you&#8217;d like to help keep the lights on, around here, you can subscribe to the <a href=\"http:\/\/patreon.com\/wolven\" target=\"_blank\">Patreon<\/a> or toss a tip in <a href=\"http:\/\/Cash.me\/$Wolven\" target=\"_blank\">the Square Cash jar.<\/a><\/p>\n<p>Until Next Time.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>I often think about the phrase \u201cStrange things happen at the one two point,\u201d in relation to the idea of humans meeting other kinds of minds. It\u2019s a proverb that arises out of the culture around the game GO, and it means that you\u2019ve hit a situation, a combination of factors, where the normal rules [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[1],"tags":[8,967,73,1063,86,1004,1064,1001,271,278,1002,492,493,584,720,1030,859],"class_list":["post-4999","post","type-post","status-publish","format-standard","hentry","category-uncategorized","tag-a-future-worth-thinking-about","tag-ai","tag-artificial-intelligence","tag-atlas","tag-autonomous-generated-intelligence","tag-autonomously-creative-intelligence","tag-boston-dynamics","tag-deep-mind","tag-embodied-machine-consciousness","tag-ethics","tag-google","tag-machine-consciousness","tag-machine-ethics","tag-nonhuman-personhood","tag-robots","tag-technological-ethi
cs","tag-towards-a-better-descriptor-than-robots"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p5WByP-1iD","jetpack_likes_enabled":true,"jetpack-related-posts":[{"id":4995,"url":"https:\/\/afutureworththinkingabout.com\/?p=4995","url_meta":{"origin":4999,"position":0},"title":"Audio: &#8220;Presentations of Non-Human Consciousness in Speculative Fiction Media&#8221;","author":"Damien P. Williams","date":"February 20, 2016","format":false,"excerpt":"https:\/\/s3-us-west-1.amazonaws.com\/patreon.posts\/13646584285783886695.mp3 (Direct Link to the Mp3) Last week I gave a talk at the Southwest Popular and American Culture Association's 2016 conference in Albuquerque.\u00a0Take a listen and see what you think. It was part of the panel on 'Consciousness, the Self, and Epistemology,' and notes on my comrade presenters can\u2026","rel":"","context":"In \"A Future Worth Thinking About\"","block_context":{"text":"A Future Worth Thinking About","link":"https:\/\/afutureworththinkingabout.com\/?tag=a-future-worth-thinking-about"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":1210,"url":"https:\/\/afutureworththinkingabout.com\/?p=1210","url_meta":{"origin":4999,"position":1},"title":"A Future Worth Thinking About: Does An AI Have A Buddha Nature?","author":"Damien P. 
Williams","date":"February 8, 2015","format":"link","excerpt":"Let me be SUPER clear, so we can remove all doubt: The potential moral Patiency of #ai\/#robots\u2014that is, what responsibilities their creators have to THEM\u2014has been given Far Less consideration or even Credence than that of the AGENCY of said, and that is a Failure.I coined the phrase \u201c\u0152dipal Obsolescence\u2026","rel":"","context":"In \"A Future Worth Thinking About\"","block_context":{"text":"A Future Worth Thinking About","link":"https:\/\/afutureworththinkingabout.com\/?tag=a-future-worth-thinking-about"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":5023,"url":"https:\/\/afutureworththinkingabout.com\/?p=5023","url_meta":{"origin":4999,"position":2},"title":"Flash Forward Podcast Ep 10: Rude Bot Rises","author":"Damien P. Williams","date":"April 5, 2016","format":false,"excerpt":"http:\/\/www.flashforwardpod.com\/2016\/04\/05\/episode-10-rude-bot-rises\/ So. The Flash Forward Podcast is one of the best around. Every week, host Rose Eveleth takes on another potential future, from the near and imminent to the distant and highly implausible. It\u2019s been featured on a bunch of Best Podcast lists and Rose even did a segment for\u2026","rel":"","context":"In \"A Future Worth Thinking About\"","block_context":{"text":"A Future Worth Thinking About","link":"https:\/\/afutureworththinkingabout.com\/?tag=a-future-worth-thinking-about"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/s3-us-west-1.amazonaws.com\/widget-images\/become-patron-widget-medium%402x.png?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":5316,"url":"https:\/\/afutureworththinkingabout.com\/?p=5316","url_meta":{"origin":4999,"position":3},"title":"My Appearance on The Machine Ethics Podcast&#8217;s A.I. Retreat Episode","author":"Damien P. Williams","date":"October 23, 2018","format":false,"excerpt":"As you already know, we went to the second Juvet A.I. 
Retreat, back in September. If you want to hear several of us talk about what we got up to at the then you're in luck because here are several conversations conducted by Ben Byford of the Machine Ethics Podcast.\u2026","rel":"","context":"In \"algorithmic bias\"","block_context":{"text":"algorithmic bias","link":"https:\/\/afutureworththinkingabout.com\/?tag=algorithmic-bias"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/img.youtube.com\/vi\/ownE2zxTN2U\/0.jpg?resize=350%2C200","width":350,"height":200},"classes":[]},{"id":4859,"url":"https:\/\/afutureworththinkingabout.com\/?p=4859","url_meta":{"origin":4999,"position":4},"title":"My First Appearance on Mindful Cyborgs","author":"Damien P. Williams","date":"April 29, 2015","format":false,"excerpt":"I sat down with Klint Finley of\u00a0Mindful Cyborgs to talk about many, many things: \u2026pop culture portrayals of human enhancement and artificial intelligence and why we need to craft more nuanced narratives to explore these topics\u2026 Tune in next week to hear Damien talk about how AI and transhumanism intersects\u2026","rel":"","context":"In \"A Future Worth Thinking About\"","block_context":{"text":"A Future Worth Thinking About","link":"https:\/\/afutureworththinkingabout.com\/?tag=a-future-worth-thinking-about"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":4966,"url":"https:\/\/afutureworththinkingabout.com\/?p=4966","url_meta":{"origin":4999,"position":5},"title":"BBC: &#8220;Tech giants pledge $1bn for &#8216;altruistic AI&#8217; venture, OpenAI&#8221;","author":"Damien P. Williams","date":"December 12, 2015","format":false,"excerpt":"This headline comes from a piece over at the BBC that opens as follows: Prominent tech executives have pledged $1bn (\u00a3659m) for OpenAI, a non-profit venture that aims to develop artificial intelligence (AI) to benefit humanity. 
The venture's backers include Tesla Motors and SpaceX CEO Elon Musk, Paypal co-founder Peter\u2026","rel":"","context":"In \"A Future Worth Thinking About\"","block_context":{"text":"A Future Worth Thinking About","link":"https:\/\/afutureworththinkingabout.com\/?tag=a-future-worth-thinking-about"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]}],"_links":{"self":[{"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/posts\/4999","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=4999"}],"version-history":[{"count":4,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/posts\/4999\/revisions"}],"predecessor-version":[{"id":5003,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/posts\/4999\/revisions\/5003"}],"wp:attachment":[{"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=4999"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=4999"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=4999"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}