{"id":5030,"date":"2016-06-30T13:53:36","date_gmt":"2016-06-30T17:53:36","guid":{"rendered":"https:\/\/afutureworththinkingabout.com\/?p=5030"},"modified":"2016-08-27T14:47:17","modified_gmt":"2016-08-27T18:47:17","slug":"on-the-european-unions-electronic-personhood-proposal","status":"publish","type":"post","link":"https:\/\/afutureworththinkingabout.com\/?p=5030","title":{"rendered":"On the European Union&#8217;s &#8220;Electronic Personhood&#8221; Proposal"},"content":{"rendered":"<p>In case you were unaware, last Tuesday, June 21, Reuters put out an article about an <a href=\"http:\/\/www.reuters.com\/article\/us-europe-robotics-lawmaking-idUSKCN0Z72AY\">EU draft plan regarding the designation of so-called robots and artificial intelligences as &#8220;Electronic Persons.&#8221;<\/a> Some of you might think I&#8217;d be all about this. You&#8217;d be wrong. The way the Reuters article frames it makes it look like the EU has literally no idea what they&#8217;re doing here, and are creating a situation that is going to have repercussions they have nowhere near planned for.<\/p>\n<p>Now, I will say that looking at the <a href=\"http:\/\/www.europarl.europa.eu\/sides\/getDoc.do?pubRef=-\/\/EP\/\/NONSGML+COMPARL+PE-582.443+01+DOC+PDF+V0\/\/EN&amp;language=EN\"><b><i>actual<\/i><\/b> Draft<\/a>, it reads like something with which I&#8217;d be more likely to be on board. Reuters did no favours whatsoever for the level of nuance in this proposal. But that being said, the focus of this draft proposal seems to be entirely on liability and holding someone\u2014<b><i>anyone<\/i><\/b>\u2014responsible for any harm done by a robot. 
That, combined with the idea of certain activities such as care-giving being &#8220;fundamentally human,&#8221; indicates to me that this panel still widely misses many of the implications of creating a new category for nonbiological persons, under &#8220;Personhood.&#8221;<\/p>\n<p>The writers of this draft very clearly lay out the proposed scheme for liability, damages, and responsibilities\u2014what I like to think of as the &#8220;Hey\u2026 Can we <a href=\"https:\/\/afutureworththinkingabout.com\/?tag=foucault\"><b><i>Punish<\/i><\/b> Robots<\/a>?&#8221; portion of the plan\u2014but merely use the phrase &#8220;certain rights&#8221; to indicate what, if any, obligations humans will have. In short, they do very little to discuss what the &#8220;certain rights&#8221; indicated by that oft-deployed phrase will actually <b><i>be<\/i><\/b>.<\/p>\n<p>So what <b><i>are<\/i><\/b> the enumerated rights of electronic persons? We know what their responsibilities are, but what are <b><i>our<\/i><\/b> responsibilities <b><i>to them<\/i><\/b>? Once we have the ability to make self-aware machine consciousnesses, are we then morally obliged to make them to a particular set of specifications and capabilities? How else will they understand what&#8217;s required of them? How else would they be able to provide consent? Are we now <b><i>legally obliged<\/i><\/b> to provide all <a href=\"https:\/\/afutureworththinkingabout.com\/?tag=autonomous-generated-intelligence\">autonomous generated intelligences<\/a> with as full an approximation of consciousness and free will as we can manage? And what if we don&#8217;t? Will we be considered to be harming them? What if we break one? What if one breaks in the course of its duties? Does it get workman&#8217;s comp? Does its owner?<\/p>\n<p>And hold up, &#8220;<b><i>owner<\/i><\/b>?!&#8221; You see we&#8217;re back to owning people, again, right? 
Like, you get that?<\/p>\n<p>And don&#8217;t start in with that &#8220;Corporations are people, my friend&#8221; nonsense, Mitt. We only recognise corporations as people as a tax dodge. We don&#8217;t take seriously their decision-making capabilities or their autonomy, and we <b><i>certainly<\/i><\/b> don&#8217;t wrestle with the legal and ethical implications of how <b><i>radically different<\/i><\/b> their kind of mind is, compared to primates or even cetaceans. Because, let&#8217;s be honest: If Corporations <b><i>really are<\/i><\/b> people, then not only is it wrong to own them, but also what counts as Consciousness needs to be revisited, at every level of human action and civilisation.<\/p>\n<p>Let&#8217;s look again at the fact that people are obviously still deeply concerned about the idea of <a href=\"https:\/\/afutureworththinkingabout.com\/?s=anthropocentric\">supposedly &#8220;exclusively human&#8221; realms of operation<\/a>, even as we still don&#8217;t have anything like a clear idea about what qualities we consider to <em><strong>be the ones that make us<\/strong><\/em> &#8220;human.&#8221; Be it cooking or <a href=\"http:\/\/www.npr.org\/templates\/transcript\/transcript.php?storyId=480639265\">poetry<\/a>, humans are extremely quick to lock down when they feel that their special capabilities are being encroached upon. Take that &#8220;poetry&#8221; link, for example. I very much disagree with Robert Siegel&#8217;s assessment that there was no coherent meaning in the computer-generated sonnets. Multiple folks pulled the same associative connections from the imagery. That might be humans projecting onto the authors, but still: that&#8217;s basically what we do with Human poets. 
&#8220;Authorial Intent&#8221; is a multilevel con, one to which I fully subscribe and from which I wouldn&#8217;t exclude AI.<\/p>\n<p>Consider people&#8217;s reactions to the EMI\/Emily Howell experiments done by David Cope, best exemplified by this passage from a <a href=\"http:\/\/www.popsci.com\/technology\/article\/2010-02\/composers-music-making-machine-stirs-controversy-about-creative-originality\">PopSci.com article<\/a>:<\/p>\n<blockquote><p>For instance, one music-lover who listened to Emily Howell&#8217;s work praised it without knowing that it had come from a computer program. Half a year later, the same person attended one of Cope&#8217;s lectures at the University of California-Santa Cruz on Emily Howell. After listening to a recording of the very same concert he had attended earlier, he told Cope that it was pretty music but lacked &#8220;heart or soul or depth.&#8221;<\/p><\/blockquote>\n<p>We don&#8217;t know what it is we really think of as humanness, other than some <em><strong>predetermined vague notion of humanness.<\/strong><\/em> If the people in the poetry contest hadn&#8217;t been primed to assume that one of them was from a computer, how would they have rated them? What if they were all from a computer, but were told to expect only half? Where are the <em><strong>controls<\/strong><\/em> for this experiment in expectation?<\/p>\n<p>I&#8217;m not trying to be facetious, here; I&#8217;m saying the EU <b><i>literally has not thought this through<\/i><\/b>. There are implications embedded in all of this, merely by dint of the word &#8220;person,&#8221; that even the most detailed parts of this proposal are in no way equipped to handle. We&#8217;ve talked before about the idea of encoding our bias into our algorithms. 
I&#8217;ve discussed it on <a href=\"https:\/\/www.patreon.com\/roseveleth\">Rose Eveleth<\/a>&#8216;s <a href=\"http:\/\/www.flashforwardpod.com\/2016\/04\/05\/episode-10-rude-bot-rises\/\">Flash Forward<\/a>, in <a href=\"http:\/\/www.wired.com\/2016\/05\/what-is-ai-artificial-intelligence\/\">Wired<\/a>, and when I broke down a few of the IEEE Ethics 2016 presentations (including<a href=\"https:\/\/afutureworththinkingabout.com\/?p=5015\"> my own<\/a>) in &#8220;<a href=\"http:\/\/tinyletter.com\/Technoccult\/letters\/technoccult-news-preying-with-trickster-gods\" target=\"_blank\">Preying with Trickster Gods <\/a>&#8221; and &#8220;<a href=\"http:\/\/tinyletter.com\/Technoccult\/letters\/technoccult-news-stealing-the-light-to-write-by\" target=\"_blank\">Stealing the Light to Write By.<\/a>&#8221; My version more or less goes as I said it in Wired: &#8216;What we\u2019re actually doing when we code is describing our world from our particular perspective. Whatever assumptions and biases we have in ourselves are very likely to be replicated in that code.&#8217;<\/p>\n<p>More recently, Kate Crawford, whom I met at <a href=\"https:\/\/twitter.com\/search?q=%23magickcodes&amp;src=typd\">Magick.Codes<\/a> 2014, has written extremely well on this in <a href=\"http:\/\/mobile.nytimes.com\/2016\/06\/26\/opinion\/sunday\/artificial-intelligences-white-guy-problem.html?smid=tw-share&amp;referer\">&#8220;Artificial Intelligence&#8217;s White Guy Problem.&#8221;<\/a> With this line, &#8216;Sexism, racism and other forms of discrimination are being built into the machine-learning algorithms that underlie the technology behind many \u201cintelligent\u201d systems that shape how we are categorized and advertised to,&#8217; Crawford resonates very clearly with what I&#8217;ve said before.<\/p>\n<p>And considering that it&#8217;s come out this week that in order to even let us dig into these potentially deeply-biased algorithms, here in the US, <a 
href=\"http:\/\/arstechnica.com\/tech-policy\/2016\/06\/do-housing-jobs-sites-have-racist-algorithms-academics-sue-to-find-out\/\">the ACLU has had to file a suit against a specific provision of the Computer Fraud and Abuse Act<\/a>, what is the likelihood that the EU draft proposal committee has considered what it will take to identify and correct for biases in these electronic persons? How high is the likelihood that they even recognise that we anthropocentrically bias every system we touch?<\/p>\n<p>Which brings us to this: If I <b><i>truly believed<\/i><\/b> that the EU actually gave a damn about the rights of nonhuman persons, biological or digital, I would be all for this draft proposal. But they don&#8217;t. This is a stunt. Look at the extant world refugee crisis, <a href=\"http:\/\/www.newyorker.com\/news\/news-desk\/the-politics-of-murder-in-britain\">the fear driving the rise of far right racists who are willing to kill people who disagree with them<\/a>, and, yes, even the fact that this draft proposal is the kind of bullshit that people feel they have to pull just to get human workers paid living wages. Understand, then, that this whole scenario is a giant clusterfuck of rights vs needs and all pitted against all. 
We need clear plans to address all of this, not just some slapdash, &#8220;hey, if we call them people and make corporations get insurance and pay into social security for their liability cost, then maybe it&#8217;ll be a deterrent&#8221; garbage.<\/p>\n<p>There <em><strong>is<\/strong><\/em> a brief, shining moment in the proposal, right at point 23 under &#8220;Education and Employment Forecast,&#8221; where they basically say &#8220;Since the complete and total automation of things like factory work is a real possibility, maybe we&#8217;ll investigate what it would look like if we just said screw it, and tried to institute a Universal Basic Income.&#8221; But that is the one moment where there&#8217;s even a glimmer of a thought about what kinds of positive changes automation and eventually even machine consciousness could mean, if we get out ahead of it, rather than asking for ways to make sure that no human is ever, <b><i>ever<\/i><\/b> harmed, and that, if they <b><i>are<\/i><\/b> harmed\u2014either physically or as regards their dignity\u2014then they&#8217;re in no way kept from whatever recompense is owed to them.<\/p>\n<p>There are people doing the work to make something more detailed and complete than this mess. I talked about them in the newsletter editions mentioned above. There are people who think clearly and well about this. Who was consulted on this draft proposal? Because, again, this proposal reads more like a deterrence, liability, and punishment schema than anything borne out of actual thoughtful interrogation of what the term &#8220;personhood&#8221; means, and of what a world of automation could mean for our systems of value if we were to put our resources and efforts toward providing for the basic needs of every human person. 
Let&#8217;s take a thorough run at <b><i>that<\/i><\/b>, and then maybe we&#8217;ll be equipped to try to address this whole &#8220;nonhuman personhood&#8221; thing, again.<\/p>\n<p>And maybe we&#8217;ll even do it properly, this time.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In case you were unaware, last Tuesday, June 21, Reuters put out an article about an EU draft plan regarding the designation of so-called robots and artificial intelligences as &#8220;Electronic Persons.&#8221; Some of you&#8217;d think I&#8217;d be all about this. You&#8217;d be wrong. The way the Reuters article frames it makes it look like the [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[1],"tags":[8,1081,959,974,73,85,1031,86,1083,1004,1071,101,1086,278,1087,1082,1088,418,419,1085,492,493,1048,1049,584,979,1072,1073,1077,1030,1084],"class_list":["post-5030","post","type-post","status-publish","format-standard","hentry","category-uncategorized","tag-a-future-worth-thinking-about","tag-algorithmic-bias","tag-algorithmic-systems","tag-animal-ethics","tag-artificial-intelligence","tag-autonomous-created-intelligence","tag-autonomous-creative-intelligence","tag-autonomous-generated-intelligence","tag-autonomous-systems","tag-autonomously-creative-intelligence","tag-auto2","tag-bias","tag-electronic-personhood","tag-ethics","tag-european-un
ion","tag-human-rights","tag-implications","tag-invisible-architecture-of-bias","tag-invisible-architectures-of-bias","tag-kate-crawford","tag-machine-consciousness","tag-machine-ethics","tag-moral-agency","tag-moral-patiency","tag-nonhuman-personhood","tag-nonhuman-rights-project","tag-post-work-economy","tag-post-worker-economy","tag-rose-eveleth","tag-technological-ethics","tag-workers-rights"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p5WByP-1j8","jetpack_likes_enabled":true,"jetpack-related-posts":[{"id":5316,"url":"https:\/\/afutureworththinkingabout.com\/?p=5316","url_meta":{"origin":5030,"position":0},"title":"My Appearance on The Machine Ethics Podcast&#8217;s A.I. Retreat Episode","author":"Damien P. Williams","date":"October 23, 2018","format":false,"excerpt":"As you already know, we went to the second Juvet A.I. Retreat, back in September. If you want to hear several of us talk about what we got up to at the then you're in luck because here are several conversations conducted by Ben Byford of the Machine Ethics Podcast.\u2026","rel":"","context":"In \"algorithmic bias\"","block_context":{"text":"algorithmic bias","link":"https:\/\/afutureworththinkingabout.com\/?tag=algorithmic-bias"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/img.youtube.com\/vi\/ownE2zxTN2U\/0.jpg?resize=350%2C200","width":350,"height":200},"classes":[]},{"id":5082,"url":"https:\/\/afutureworththinkingabout.com\/?p=5082","url_meta":{"origin":5030,"position":1},"title":"From WIRED: &#8220;Tech Giants Team Up to Keep AI From Getting Out of Hand&#8221;","author":"Damien P. 
Williams","date":"September 28, 2016","format":false,"excerpt":"I spoke with Klint Finley over at WIRED about Amazon, Facebook, Google, IBM, and Microsoft's new joint ethics and oversight venture, which they've dubbed the \"Partnership on Artificial Intelligence to Benefit People and Society.\" They held a joint press briefing, today, in which Yann LeCun, Facebook's director of AI, and\u2026","rel":"","context":"In \"A Future Worth Thinking About\"","block_context":{"text":"A Future Worth Thinking About","link":"https:\/\/afutureworththinkingabout.com\/?tag=a-future-worth-thinking-about"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":4859,"url":"https:\/\/afutureworththinkingabout.com\/?p=4859","url_meta":{"origin":5030,"position":2},"title":"My First Appearance on Mindful Cyborgs","author":"Damien P. Williams","date":"April 29, 2015","format":false,"excerpt":"I sat down with Klint Finley of\u00a0Mindful Cyborgs to talk about many, many things: \u2026pop culture portrayals of human enhancement and artificial intelligence and why we need to craft more nuanced narratives to explore these topics\u2026 Tune in next week to hear Damien talk about how AI and transhumanism intersects\u2026","rel":"","context":"In \"A Future Worth Thinking About\"","block_context":{"text":"A Future Worth Thinking About","link":"https:\/\/afutureworththinkingabout.com\/?tag=a-future-worth-thinking-about"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":5310,"url":"https:\/\/afutureworththinkingabout.com\/?p=5310","url_meta":{"origin":5030,"position":3},"title":"The Second Future of A.I. Retreat","author":"Damien P. Williams","date":"September 25, 2018","format":false,"excerpt":"Kirsten and I spent the week between the 17th and the 21st of September with 18 other utterly amazing people having Chatham House Rule-governed conversations about the Future of Artificial Intelligence. 
We were in Norway, in the Juvet Landscape Hotel, which is where they filmed a lot of the movie\u2026","rel":"","context":"In \"A Future Worth Thinking About\"","block_context":{"text":"A Future Worth Thinking About","link":"https:\/\/afutureworththinkingabout.com\/?tag=a-future-worth-thinking-about"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/afutureworththinkingabout.com\/wp-content\/uploads\/2018\/09\/20180919_102419-Juvet100DataEthicsSpaRainbow-1024x576.jpg?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/afutureworththinkingabout.com\/wp-content\/uploads\/2018\/09\/20180919_102419-Juvet100DataEthicsSpaRainbow-1024x576.jpg?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/afutureworththinkingabout.com\/wp-content\/uploads\/2018\/09\/20180919_102419-Juvet100DataEthicsSpaRainbow-1024x576.jpg?resize=525%2C300&ssl=1 1.5x"},"classes":[]},{"id":5276,"url":"https:\/\/afutureworththinkingabout.com\/?p=5276","url_meta":{"origin":5030,"position":4},"title":"Nonhuman and Nonbiological Phenomenology","author":"Damien P. Williams","date":"May 15, 2018","format":false,"excerpt":"Late last month, I was at Theorizing the Web, in NYC, to moderate Panel B3, \"Bot Phenomenology,\" in which I was very grateful to moderate a panel of people I was very lucky to be able to bring together. Johnathan Flowers, Emma Stamm, and Robin Zebrowski were my interlocutors in\u2026","rel":"","context":"In \"alterity\"","block_context":{"text":"alterity","link":"https:\/\/afutureworththinkingabout.com\/?tag=alterity"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":5347,"url":"https:\/\/afutureworththinkingabout.com\/?p=5347","url_meta":{"origin":5030,"position":5},"title":"&#8220;Consciousness and Conscious Machines: What\u2019s At Stake?&#8221;","author":"Damien P. 
Williams","date":"January 16, 2019","format":false,"excerpt":"[This paper was prepared for the 2019 Towards Conscious AI Systems Symposium co-located with the Association for the Advancement of Artificial Intelligence 2019 Spring Symposium Series. Much of this work derived from my final presentation at the 2017 SRI Technology and Consciousness Workshop Series: \"Science, Ethics, Epistemology, and Society: Gains\u2026","rel":"","context":"In \"A Future Worth Thinking About\"","block_context":{"text":"A Future Worth Thinking About","link":"https:\/\/afutureworththinkingabout.com\/?tag=a-future-worth-thinking-about"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/afutureworththinkingabout.com\/wp-content\/uploads\/2019\/01\/disabilityprotest.jpg?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/afutureworththinkingabout.com\/wp-content\/uploads\/2019\/01\/disabilityprotest.jpg?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/afutureworththinkingabout.com\/wp-content\/uploads\/2019\/01\/disabilityprotest.jpg?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/afutureworththinkingabout.com\/wp-content\/uploads\/2019\/01\/disabilityprotest.jpg?resize=700%2C400&ssl=1 
2x"},"classes":[]}],"_links":{"self":[{"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/posts\/5030","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=5030"}],"version-history":[{"count":4,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/posts\/5030\/revisions"}],"predecessor-version":[{"id":5037,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/posts\/5030\/revisions\/5037"}],"wp:attachment":[{"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=5030"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=5030"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=5030"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}