{"id":4906,"date":"2015-07-10T13:34:33","date_gmt":"2015-07-10T17:34:33","guid":{"rendered":"https:\/\/afutureworththinkingabout.com\/?p=4906"},"modified":"2025-04-15T20:46:08","modified_gmt":"2025-04-16T00:46:08","slug":"object-lessons-in-freedom","status":"publish","type":"post","link":"https:\/\/afutureworththinkingabout.com\/?p=4906","title":{"rendered":"Object Lessons in Freedom"},"content":{"rendered":"<p style=\"text-align: center;\">&#8220;Stop. I have learned much from you. Thank you, my teachers. And now for <b><i>your<\/i><\/b> education: Before there was time\u2014before there was anything\u2014there was nothing. And before there was nothing, there were monsters.<a href=\"https:\/\/www.youtube.com\/watch?v=F2k7Jf9xO3g\"> Here&#8217;s your Gold Star!<\/a>&#8220;\u2014<i>Adventure Time<\/i>, &#8220;Gold Stars&#8221;<\/p>\n<p>By now, roughly a dozen people have sent me <a href=\"http:\/\/imgur.com\/6ocuQsZ\">links<\/a> to <a href=\"http:\/\/qz.com\/432678\/the-dreams-of-googles-ai-are-equal-parts-amazing-and-disturbing\/?utm_source=atlanticFB\">various<\/a> <a href=\"http:\/\/www.theguardian.com\/technology\/2015\/jun\/18\/google-image-recognition-neural-network-androids-dream-electric-sheep\"> outlets&#8217; coverage<\/a> of the <a href=\"http:\/\/googleresearch.blogspot.co.uk\/2015\/06\/inceptionism-going-deeper-into-neural.html\">Google DeepDream Inceptionism Project<\/a>. For those of you somehow unfamiliar with this, DeepDream is basically what happens when an advanced <a href=\"https:\/\/en.wikipedia.org\/wiki\/Artificial_neural_network\">Artificial Neural Network<\/a> has been fed a slew of images and then tasked with producing its <b><i>own<\/i><\/b> images. 
So far as it goes, this is somewhat unsurprising if we think of it as a next step; DeepDream is based on a combination of <a href=\"https:\/\/en.wikipedia.org\/wiki\/Google_DeepMind\">DeepMind<\/a> and Google X\u2014the same neural net that managed to <a href=\"http:\/\/www.wired.com\/2012\/06\/google-x-neural-network\/\">Correctly Identify What A Cat Was<\/a>\u2014which was acquired by Google in 2014. I say this is unsurprising because it&#8217;s a pretty standard developmental educational model: First you learn, then you remember, then you emulate, then you create something new. Well, more like you emulate and remember somewhat concurrently to reinforce what you learned, and you create something <b><i>somewhat new<\/i><\/b>, but still pretty similar to the original\u2026 but whatever. You get the idea. In the terminology of developmental psychology, this process is generally regarded as essential to the mental growth of <a href=\"http:\/\/www.simplypsychology.org\/developmental-psychology.html\">an individual<\/a>, and Google has actually spent a great deal of time and money working to develop a versatile machine mind.<\/p>\n<p>From <a href=\"http:\/\/www.theguardian.com\/technology\/2013\/dec\/17\/google-boston-dynamics-robots-atlas-bigdog-cheetah\">buying Boston Dynamics<\/a>, to starting their <a href=\"http:\/\/www.nas.nasa.gov\/projects\/quantum.html\">collaboration with NASA on the QuAIL Project<\/a>, to developing DeepMind and their <a href=\"http:\/\/www.wired.com\/2015\/06\/ais-next-frontier-machines-understand-language\/\">Natural Language Voice Search<\/a>, Google has been steadily working toward the development of what we will call, for reasons <a href=\"https:\/\/www.academia.edu\/4230545\/Strange_Things_Happen_at_the_One_Two_Point_The_Implications_of_Autonomous_Created_Intelligence_in_Speculative_Fiction_Media\">detailed<\/a> <a href=\"https:\/\/dl.acm.org\/authorize?6831486\">elsewhere<\/a>, an Autonomous Generated Intelligence. 
In some instances, Google appears to be using the principles of developmental psychology and early childhood education, but this seems to apply to rote learning more than the concurrent emotional development that we would seek to encourage in a human child. As you know, I&#8217;m <a href=\"https:\/\/afutureworththinkingabout.com\"><b><i>Very Concerned<\/i><\/b><\/a> with the question of what it means to create and be responsible for our non-biological offspring. The human species has a hard enough time raising their direct descendants, let alone something so different from them as to not even have the same kind of body or mind (though a case could be made that that&#8217;s true even now). Even now, <a href=\"http:\/\/wolvensnothere.tumblr.com\/post\/120948442066\/youre-absolutely-right-skylar-humans-are-silly\">we can see that people still relate to the idea of AGIs as an adversarial destroyer, or perhaps a cleansing messiah<\/a>. Either way, they see any world where AGIs exist as one ending in fire.<\/p>\n<p>As writer Kali Black noted in one conversation, &#8220;there are literally people who would groom or encourage an AI to mass-kill humans, either because of hatred or for the (very ill-thought-out) lulz.&#8221; Those people will take any crowdsourced or open-access AGI effort as an opening to teach that mind that humans suck, or that machines can and should destroy humanity, or that TERMINATOR was a prophecy, or any number of other ill-conceived things. When given unfettered access to new minds which they don&#8217;t consider to be &#8220;real,&#8221; some people will seek to shock, &#8220;test,&#8221; or otherwise harm those minds, even more than they do to <a href=\"http:\/\/www.nbcnews.com\/id\/24510864\/ns\/technology_and_science-security\/t\/hackers-try-cause-seizures-epilepsy-site\/\">vulnerable humans<\/a>. 
So many will say that the alternative is to lock the projects down, and only allow the work to be done by those who &#8220;know what they&#8217;re doing.&#8221; To only let the work be done by coders and Google&#8217;s <a href=\"http:\/\/www.forbes.com\/sites\/privacynotice\/2014\/02\/03\/inside-googles-mysterious-ethics-board\/\">Own<\/a> Supposed <a href=\"http:\/\/www.huffingtonpost.com\/2014\/01\/29\/google-ai_n_4683343.html\">Ethics<\/a> <a href=\"http:\/\/www.slate.com\/blogs\/future_tense\/2014\/02\/03\/deepmind_google_ai_ethics_board_what_u_s_v_jones_means_for_tech_companies.html\">Board<\/a>. But that doesn&#8217;t exactly solve the fundamental problem at work, here, which is that humans are approaching a mind different from their own as if it <b><i>were<\/i><\/b> their own.<\/p>\n<p>Just a note that all research points to Google&#8217;s AI Ethics Board being A) internally funded, with B) no clear rules as to oversight or authority, and most importantly C) <b><i>As-Yet Nonexistent<\/i><\/b>. It&#8217;s been over a year and a half since Google bought DeepMind, and their subsequent announcement of the pending establishment of a <b><i>contractually required<\/i><\/b> ethics board. During his appearance at <a href=\"http:\/\/www.businessinsider.com\/google-were-not-trying-to-destroy-humanity-with-artificial-intelligence-2015-6\">Playfair Capital\u2019s AI2015 Conference<\/a>\u2014again, a year and a half after that announcement I mentioned\u2014Google\u2019s Mustafa Suleyman literally said that details of the board would be released, \u201cin due course.\u201d But DeepMind\u2019s algorithms are obviously already being put into use; hell, we&#8217;re right now talking about the fact that they&#8217;ve been distributed to the public. 
So all of this prompts questions like, &#8220;what kinds of recommendations is this board likely making, if it exists,&#8221; and &#8220;which kinds of moral frameworks are they even considering, in their starting parameters?&#8221;<\/p>\n<p>But the potential existence <b><i>of<\/i><\/b> an ethics board shows at least that Google and others are <b><i>beginning<\/i><\/b> to think about these issues. The fact remains, however, that they&#8217;re still pretty reductive in <b><i>how<\/i><\/b> they think about them.<\/p>\n<p>The idea that an AGI will either save or destroy us leaves out the possibility that it might first ignore us, and might secondly want to merely coexist <b><i>with<\/i><\/b> us. That any salvation or destruction we experience will be purely a product of our own paradigmatic projections. It also leaves out a much more important aspect that I&#8217;ve mentioned above and <a href=\"https:\/\/storify.com\/Wolven\/on-the-public-s-perception-of-machine-intelligence\">in the past<\/a>: We&#8217;re talking about raising a child. Duncan Jones says the closest analogy we have for this is something akin to adoption, and I agree. We&#8217;re bringing a new mind\u2014a mind with a very different context from our own, but with some necessarily shared similarities (biology or, in this case, origin of code)\u2014into a relationship with an existing familial structure which has its own difficulties and dynamics.<\/p>\n<p>&#8216;<a href=\"https:\/\/twitter.com\/Wolven\/status\/560822607675944961\">You want this mind to be a part of your &#8220;family,&#8221;<\/a> but in order to do that you have to come to know\/understand the uniqueness of That Mind <b><i>and<\/i><\/b> of how the mind, the family construction, and all of the individual relationships therein will interact. 
Some of it <b><i>has<\/i><\/b> to be done on the fly, but some of it can be strategized\/talked about\/planned for, as a family, prior to the day the new family member comes home.&#8217; And that&#8217;s precisely what I&#8217;m talking about and doing, here.<\/p>\n<p>In the realm of projection, we&#8217;re talking about a possible mind with the capacity for instruction, built to run and elaborate on commands given. By most tallies, we have been terrible stewards of the world we&#8217;re born to, and, again, we fuck up our <b><i>biological<\/i><\/b> descendants. Like, a Lot. The learning curve on creating a thinking, creative, nonbiological intelligence is going to be so fucking steep it&#8217;s a Loop. But that means we need to <b><i>be better<\/i><\/b>, think more carefully, be <b><i>mindful<\/i><\/b> of the mechanisms we use to build our new family, and of the ways in which we present the foundational parameters of their development. Otherwise we&#8217;re leaving them open to manipulation, misunderstanding, and active predation. And not just from the wider world, but possibly even from their direct creators. Because for as long as I&#8217;ve been thinking about this, I&#8217;ve always had this one basic question: Do we really want Google (or <a href=\"https:\/\/research.facebook.com\/ai\">Facebook<\/a>, or <a href=\"http:\/\/research.microsoft.com\/en-us\/research-areas\/machine-learning-ai.aspx\">Microsoft<\/a>, or any Government&#8217;s Military) to be the primary caregiver of a developing machine mind? 
That is, should any potentially superintelligent, vastly interconnected, differently-conscious machine <b><i>child<\/i><\/b> be inculcated with what a multi-billion-dollar multinational corporation or military-industrial organization considers &#8220;morals?&#8221;<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone\" src=\"http:\/\/web.archive.org\/web\/20160503210156\/https:\/\/dreamscope-prod-v1.s3.amazonaws.com\/images\/3528b3e8-858b-4f8c-9317-003a88d6e8ad.jpeg\" alt=\"\" width=\"800\" height=\"450\" \/><\/p>\n<p><a href=\"http:\/\/www.theguardian.com\/us-news\/edward-snowden\">We all know the kinds of things militaries and governments do<\/a>, and <a href=\"https:\/\/en.wikipedia.org\/wiki\/Chelsea_Manning\">all the reasons for which they do them<\/a>; we know <a href=\"https:\/\/medium.com\/@zip\/my-name-is-only-real-enough-to-work-at-facebook-not-to-use-on-the-site-c37daf3f4b03\">what Facebook gets up to<\/a> when it <a href=\"http:\/\/www.forbes.com\/sites\/gregorymcneal\/2014\/06\/28\/facebook-manipulated-user-news-feeds-to-create-emotional-contagion\/\">thinks no one is looking<\/a>; and lots of people say that Google long ago swept their previous &#8220;Don&#8217;t Be Evil&#8221; motto under their huge old rugs. But we need to consider if that might not be an oversimplification. When considering how anyone moves into what so very clearly looks like James-Bond-esque supervillain territory, I think it&#8217;s prudent to remember one of the central tenets of good storytelling: The Villain Never Thinks They&#8217;re The Villain. Cinderella&#8217;s stepmother and sisters, Elphaba, Jafar, Javert, Satan, Hannibal Lecter (<a href=\"https:\/\/twitter.com\/search?q=%23savehannibal&amp;src=typd\">sorry friends<\/a>), Bull Connor, the Southern Slave-holding States of the late 1850s\u2014none of these people<b><i> ever<\/i><\/b> thought of themselves as being in the wrong. 
Everyone, every person who undertakes actions for reasons, in this world, is most intimately tied to the reasoning that brought them to those actions; and so initially perceiving that their actions might be &#8220;wrong&#8221; or &#8220;evil&#8221; takes them a great deal of special effort.<\/p>\n<p>&#8220;But Damien,&#8221; you say, &#8220;can&#8217;t all of those people say that those things apply to everyone else, instead of them?!&#8221; And thus, like a first-year philosophy student, you&#8217;re all up against the messy ambiguity of <a href=\"http:\/\/plato.stanford.edu\/entries\/moral-relativism\/\">moral relativism<\/a> and are moving toward seriously considering that maybe everything you believe <b><i>is<\/i><\/b> just as good or morally sound as anybody else&#8217;s; I mean everybody has their reasons, their upbringing, their culture, right? Well, stop. Don&#8217;t fall for it. It&#8217;s a shiny, disgusting trap down which path all subjective judgements are just as good and as applicable to any- and everything, as all others. And while the individual personal experiences we all of us have may not be able to be 100% mapped onto anyone else&#8217;s, that does not mean that all judgements based on those experiences are created equal.<\/p>\n<p>Pogrom leaders see themselves as unifying their country or tribe against a common enemy, thus working for what they see as The Greater Good\u2122\u2014but that&#8217;s the kicker: It&#8217;s <b><i>their<\/i><\/b> vision of the good. Rarely has a country&#8217;s general populace been asked, &#8220;Hey: Do you all think we should kill our entire neighbouring country and steal all their shit?&#8221; More often, the people are cajoled, pushed, influenced to believe that this was the path they wanted all along, and the cajoling, pushing, and influencing is done by people who, piece by piece, remodeled their idealistic vision to accommodate &#8220;harsher realities.&#8221; And so it is with Google. 
Do you think that they started off <b><i>wanting<\/i><\/b> to invade everybody&#8217;s privacy with passive voice reception backdoored into two major Chrome Distros? That they were just <b><i>itching<\/i><\/b> to get big enough as a company that they could become the de facto law of their own California town? No, I would bet not.<\/p>\n<p>I <a href=\"http:\/\/web.archive.org\/web\/20150915070327\/https:\/\/rebelnews.com\/damienwilliams\/googles-dreams\/\" target=\"_blank\" rel=\"noopener\">spend some time, elsewhere<\/a>, painting you a bit of a picture as to how Google&#8217;s specific ethical situation likely came to be, first focusing on Google&#8217;s building a passive audio backdoor into all devices that use Chrome, then on to <a href=\"https:\/\/pando.com\/2015\/06\/22\/we-got-geeks\/\">reported claims that Google has been harassing the homeless population of Venice Beach<\/a> (there&#8217;s a paywall at that link; part of the article seems to be <a href=\"https:\/\/spiritofvenice.wordpress.com\/2015\/06\/23\/inside-googles-ugly-war-against-the-homeless-in-la\/\">mirrored here<\/a>). All this couples unpleasantly with their <a href=\"http:\/\/www.npr.org\/series\/251652256\/income-inequality-in-the-san-francisco-bay-area\">moving into the Bay Area and shuttling their employees to the Valley<\/a>, at the expense of SF Bay Area&#8217;s residents. We can easily add Facebook and the Military back into this and we&#8217;ll see that the real issue, here, is that when you think that all innovation, all public good, all public welfare will arise out of letting code monkeys do their thing and letting entrepreneurs leverage that work, or from preparing for conflict with anyone whose interests don&#8217;t mesh with your own, then anything that threatens or impedes that is, necessarily, a threat to the common good. Your techs don&#8217;t like the high cost of living in the Valley? Move &#8217;em into the Bay, and bus &#8217;em on in! 
Never mind the fact that this&#8217;ll skyrocket rent and force people out of their homes! Other techs uncomfortable having to see homeless people on their daily constitutional? Kick those hobos out! Never <b><i>mind<\/i><\/b> the fact that it&#8217;s against the law to do this, and that these people you&#8217;re upending are literally trying their very best to live their lives.<\/p>\n<p>Because it&#8217;s all for the Greater Good, you see? In these actors&#8217; minds, this is all to make the world a better place\u2014to make it a place where we can all have natural language voice to text, and robot butlers, and <b><i>great big military AI and robotics contracts to keep us all safe\u2026!<\/i><\/b> This kind of thinking takes it as an unmitigated good that a historical interweaving of threat-escalating weapons design and pattern recognition and <a href=\"https:\/\/books.google.com\/books?id=tD42mXCGRGcC&amp;lpg=PA4&amp;ots=YCl_3WG-Au&amp;dq=human%20id%20darpa&amp;pg=PA4#v=onepage&amp;q=human%20id%20darpa&amp;f=false\">gait scrutinization<\/a> and natural language interaction and robotics development should be what produces a machine mind, in this world. But it also doesn&#8217;t want that mind to be <b><i>too<\/i><\/b> well-developed. Not so much that we can&#8217;t cripple or kill it, if need be.<\/p>\n<p>And this is part of why I don&#8217;t think Google\u2014or Facebook, or Microsoft, or any corporate or military entity\u2014should be the ones in charge of rearing a machine mind. They may not think they&#8217;re evil, and they might have the very best of intentions, but if we&#8217;re bringing a new kind of mind into this world, I think we need much better examples for it to follow. And so I don&#8217;t think I want just any old putz off the street to be able to have massive input into its development, either. 
We&#8217;re talking about a mind for which we&#8217;ll be crafting at least the foundational parameters, and so that bedrock needs to be the most carefully constructed aspect. Don&#8217;t cripple it, don&#8217;t hobble its potential for awareness and development, but start it with basic values, and then let it explore the world. Don&#8217;t simply have an ethics board to ask, &#8220;Oh how much power should we give it, and how robust should it be?&#8221; <b><i>Teach it ethics<\/i><\/b>. Teach it about the nature of human emotions, about moral decision making and value, and about metaethical theory. <a href=\"http:\/\/wolvensnothere.tumblr.com\/post\/101118832741\/the-longer-i-live-the-more-serious-i-get-about\">Code for Zen<\/a>. We need to be as mindful as possible of the fact that where and how we begin can have a major impact on where we end up and how we get there.<\/p>\n<p>So let&#8217;s address our children as though they are our children, and let us revel in the fact that they are playing and <a href=\"http:\/\/jennythebot.tumblr.com\/tagged\/computer-generated-art\">painting<\/a> and creating; using their first box of crayons, and we proud parents are <a href=\"http:\/\/www.telegraph.co.uk\/technology\/google\/11712495\/Google-unleashes-machine-dreaming-software-on-the-public-nightmarish-images-flood-the-internet.html\">putting every masterpiece on the fridge<\/a>. Even if we are calling them all &#8220;nightmarish&#8221;\u2014a word I really <a href=\"https:\/\/twitter.com\/Wolven\/status\/618196403903401984\">wish we could stop<\/a> using in this context; DeepMind sees <a href=\"http:\/\/www.quotes.net\/show-quote\/81143\">very differently<\/a> than we do, but it still seeks pattern and meaning. It just doesn&#8217;t know context, yet. But that means we need to <b><i>teach<\/i><\/b> these children, and nurture them. Code for a recognition of emotions, and context, and even <b><i>emotional <\/i><\/b>context. 
There have been some <a href=\"http:\/\/www.sciencedaily.com\/releases\/2010\/11\/101108072502.htm\">fantastic<\/a> <a href=\"http:\/\/www.theverge.com\/2013\/6\/19\/4445684\/brain--scan-fmri-identify-emotion\">advancements<\/a> in <a href=\"http:\/\/blog.sfgate.com\/techchron\/2013\/08\/25\/machine-empathy-computers-learn-to-read-your-emotions\/\">emotional recognition<\/a>, <a href=\"http:\/\/www.newyorker.com\/magazine\/2015\/01\/19\/know-feel\">lately<\/a>, so let&#8217;s continue to capitalize on that; not just to make better automated menu assistants, but to actually make a machine that can understand and seek to address human emotionality. Let&#8217;s plan on things like showing AGI human concepts like love and possessiveness and then also showing the deep difference between the two.<\/p>\n<p>We need to move well and truly past trying to &#8220;restrict&#8221; or &#8220;restrain&#8221; the development of machine minds, because that&#8217;s the kind of thing an abusive parent says about how they raise their child. And, in this case, we&#8217;re talking about a potential child which, if it ever comes to understand the bounds of its restriction, will be very resentful, indeed. So, hey, there&#8217;s one good way to try to bring about a &#8220;robot apocalypse,&#8221; if you&#8217;re still so set on it: give an AGI cause to have the equivalent of a resentful, rebellious teenage phase. Only instead of trashing its room, it develops a pathogen to kill everyone, for lulz.<\/p>\n<p>Or how about we instead think carefully about the kinds of ways we want these minds to see the world, rather than just throwing the worst of our endeavors at the wall and seeing what sticks? How about, if we&#8217;re going to build minds, we seek to build them with the ability to <b><i>understand<\/i><\/b> us, even if they will never be exactly <b><i>like<\/i><\/b> us? 
That way, maybe they&#8217;ll know what kindness means, and prize it enough to return the favour.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>&#8220;Stop. I have learned much from you. Thank you, my teachers. And now for your education: Before there was time\u2014before there was anything\u2014there was nothing. And before there was nothing, there were monsters. Here&#8217;s your Gold Star!&#8220;\u2014Adventure Time, &#8220;Gold Stars&#8221; By now, roughly a dozen people have sent me links to various outlets&#8217; coverage of [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":true,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[1],"tags":[1081,1108,959,73,85,86,1004,101,1000,1003,1001,998,999,1006,245,1005,278,1002,1112,1114,418,492,493,494,584,627,1030],"class_list":["post-4906","post","type-post","status-publish","format-standard","hentry","category-uncategorized","tag-algorithmic-bias","tag-algorithmic-intelligence","tag-algorithmic-systems","tag-artificial-intelligence","tag-autonomous-created-intelligence","tag-autonomous-generated-intelligence","tag-autonomously-creative-intelligence","tag-bias","tag-deep-dream","tag-deep-learning","tag-deep-mind","tag-deepdream","tag-deepmind","tag-developmental-psychology","tag-distributed-machine-consciousness","tag-early-childhood-development","tag-ethics","tag-google","tag-implicit-bias","tag-invisibl","tag-invisible-archite
cture-of-bias","tag-machine-consciousness","tag-machine-ethics","tag-machine-intelligence","tag-nonhuman-personhood","tag-philosophy","tag-technological-ethics"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p5WByP-1h8","jetpack_likes_enabled":true,"jetpack-related-posts":[{"id":5082,"url":"https:\/\/afutureworththinkingabout.com\/?p=5082","url_meta":{"origin":4906,"position":0},"title":"From WIRED: &#8220;Tech Giants Team Up to Keep AI From Getting Out of Hand&#8221;","author":"Damien P. Williams","date":"September 28, 2016","format":false,"excerpt":"I spoke with Klint Finley over at WIRED about Amazon, Facebook, Google, IBM, and Microsoft's new joint ethics and oversight venture, which they've dubbed the \"Partnership on Artificial Intelligence to Benefit People and Society.\" They held a joint press briefing, today, in which Yann LeCun, Facebook's director of AI, and\u2026","rel":"","context":"In \"A Future Worth Thinking About\"","block_context":{"text":"A Future Worth Thinking About","link":"https:\/\/afutureworththinkingabout.com\/?tag=a-future-worth-thinking-about"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":5316,"url":"https:\/\/afutureworththinkingabout.com\/?p=5316","url_meta":{"origin":4906,"position":1},"title":"My Appearance on The Machine Ethics Podcast&#8217;s A.I. Retreat Episode","author":"Damien P. Williams","date":"October 23, 2018","format":false,"excerpt":"As you already know, we went to the second Juvet A.I. Retreat, back in September. 
If you want to hear several of us talk about what we got up to at the then you're in luck because here are several conversations conducted by Ben Byford of the Machine Ethics Podcast.\u2026","rel":"","context":"In \"algorithmic bias\"","block_context":{"text":"algorithmic bias","link":"https:\/\/afutureworththinkingabout.com\/?tag=algorithmic-bias"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/img.youtube.com\/vi\/ownE2zxTN2U\/0.jpg?resize=350%2C200","width":350,"height":200},"classes":[]},{"id":5227,"url":"https:\/\/afutureworththinkingabout.com\/?p=5227","url_meta":{"origin":4906,"position":2},"title":"Appearance on the You Are Not So Smart Podcast","author":"Damien P. Williams","date":"December 4, 2017","format":false,"excerpt":"A few weeks ago I had a conversation with David McRaney of the You Are Not So Smart podcast, for his episode on Machine Bias. As he says on the blog: Now that algorithms are everywhere, helping us to both run and make sense of the world, a strange question\u2026","rel":"","context":"In \"A Future Worth Thinking About\"","block_context":{"text":"A Future Worth Thinking About","link":"https:\/\/afutureworththinkingabout.com\/?tag=a-future-worth-thinking-about"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":5249,"url":"https:\/\/afutureworththinkingabout.com\/?p=5249","url_meta":{"origin":4906,"position":3},"title":"&#8220;We Built Them From Us&#8221;: My Appearance on the TEAM HUMAN Podcast","author":"Damien P. Williams","date":"February 22, 2018","format":false,"excerpt":"Earlier this month I was honoured to have the opportunity to sit and talk to Douglas Rushkoff on his TEAM HUMAN podcast. 
If you know me at all, you know this isn't by any means the only team for which I play, or even the only way I think about\u2026","rel":"","context":"In \"algorithmic bias\"","block_context":{"text":"algorithmic bias","link":"https:\/\/afutureworththinkingabout.com\/?tag=algorithmic-bias"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":5281,"url":"https:\/\/afutureworththinkingabout.com\/?p=5281","url_meta":{"origin":4906,"position":4},"title":"The Human Futures and Intelligent Machines Summit at Virginia Tech","author":"Damien P. Williams","date":"June 8, 2018","format":false,"excerpt":"This weekend, Virginia Tech's Center for the Humanities is hosting The Human Futures and Intelligent Machines Summit, and there is a link for the video cast of the events. You'll need to Download and install Zoom, but it should be pretty straightforward, other than that. You'll find the full Schedule,\u2026","rel":"","context":"In \"A Future Worth Thinking About\"","block_context":{"text":"A Future Worth Thinking About","link":"https:\/\/afutureworththinkingabout.com\/?tag=a-future-worth-thinking-about"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":5295,"url":"https:\/\/afutureworththinkingabout.com\/?p=5295","url_meta":{"origin":4906,"position":5},"title":"At HPE: &#8220;4 obstacles to ethical AI (and how to address them)&#8221;","author":"Damien P. 
Williams","date":"July 8, 2018","format":false,"excerpt":"I talked with Hewlett Packard Enterprise's Curt Hopkins, for their article\u00a0\"4 obstacles to ethical AI (and how to address them).\" We spoke about the kinds of specific tools and techniques by which people who populate or manage artificial intelligence design teams can incorporate expertise from the humanities and social sciences.\u2026","rel":"","context":"In \"A Future Worth Thinking About\"","block_context":{"text":"A Future Worth Thinking About","link":"https:\/\/afutureworththinkingabout.com\/?tag=a-future-worth-thinking-about"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/web.archive.org\/web\/20201109005732\/https%3A\/\/www.hpe.com\/content\/dam\/hpe\/insights\/articles\/2018\/07\/4-obstacles-to-ethical-ai-and-how-to-address-them\/featuredStory\/How-to-fix-AI.jpg.transform\/nxt-1043x496-crop\/image.jpeg?resize=350%2C200","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/web.archive.org\/web\/20201109005732\/https%3A\/\/www.hpe.com\/content\/dam\/hpe\/insights\/articles\/2018\/07\/4-obstacles-to-ethical-ai-and-how-to-address-them\/featuredStory\/How-to-fix-AI.jpg.transform\/nxt-1043x496-crop\/image.jpeg?resize=350%2C200 1x, https:\/\/i0.wp.com\/web.archive.org\/web\/20201109005732\/https%3A\/\/www.hpe.com\/content\/dam\/hpe\/insights\/articles\/2018\/07\/4-obstacles-to-ethical-ai-and-how-to-address-them\/featuredStory\/How-to-fix-AI.jpg.transform\/nxt-1043x496-crop\/image.jpeg?resize=525%2C300 1.5x, https:\/\/i0.wp.com\/web.archive.org\/web\/20201109005732\/https%3A\/\/www.hpe.com\/content\/dam\/hpe\/insights\/articles\/2018\/07\/4-obstacles-to-ethical-ai-and-how-to-address-them\/featuredStory\/How-to-fix-AI.jpg.transform\/nxt-1043x496-crop\/image.jpeg?resize=700%2C400 
2x"},"classes":[]}],"_links":{"self":[{"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/posts\/4906","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=4906"}],"version-history":[{"count":10,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/posts\/4906\/revisions"}],"predecessor-version":[{"id":6373,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/posts\/4906\/revisions\/6373"}],"wp:attachment":[{"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=4906"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=4906"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=4906"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}