{"id":5142,"date":"2017-03-23T19:24:36","date_gmt":"2017-03-23T23:24:36","guid":{"rendered":"https:\/\/afutureworththinkingabout.com\/?p=5142"},"modified":"2025-04-15T23:55:56","modified_gmt":"2025-04-16T03:55:56","slug":"text-and-audio-of-are-you-being-watched-simulated-universe-theory-in-person-of-interest","status":"publish","type":"post","link":"https:\/\/afutureworththinkingabout.com\/?p=5142","title":{"rendered":"Text and Audio of &#8216;Are You Being Watched? Simulated Universe Theory in &#8220;Person of Interest&#8221;&#8216;"},"content":{"rendered":"<audio class=\"wp-audio-shortcode\" id=\"audio-5142-1\" preload=\"none\" style=\"width: 100%;\" controls=\"controls\"><source type=\"audio\/mpeg\" src=\"https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2017\/03\/AFWTA-7Are-You-Being-Watched-SUT-in-PoI.mp3?_=1\" \/><a href=\"https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2017\/03\/AFWTA-7Are-You-Being-Watched-SUT-in-PoI.mp3\">https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2017\/03\/AFWTA-7Are-You-Being-Watched-SUT-in-PoI.mp3<\/a><\/audio>\n<p>(<a href=\"https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2017\/03\/AFWTA-7Are-You-Being-Watched-SUT-in-PoI.mp3\">Direct Link to the Mp3<\/a>)<\/p>\n<p>This is the recording and the text of my presentation from 2017&#8217;s Southwest Popular\/American Culture Association Conference in Albuquerque, &#8216;<a href=\"http:\/\/southwestpca.org\/wp-content\/uploads\/2013\/11\/2-14-17SWPACAProgram.pdf\" rel=\"nofollow\">Are You Being Watched? Simulated Universe Theory in &#8220;Person of Interest<\/a>.&#8221;&#8216;<\/p>\n<p>This essay is something of a project of expansion and refinement of my previous essay <a href=\"https:\/\/afutureworththinkingabout.com\/?p=4825\" rel=\"nofollow\">&#8220;Labouring in the Liquid Light of Leviathan,&#8221;<\/a>\u00a0 considering the Roko&#8217;s Basilisk thought experiment. 
Much of the expansion comes from considering the nature of simulation, memory, and identity within Jonathan Nolan&#8217;s TV series, <em>Person of Interest<\/em>. As such, it does contain what might be considered spoilers for the series, as well as for his most recent follow-up, <em>Westworld<\/em>.<\/p>\n<p>Use your discretion to figure out how you feel about that.<\/p>\n<hr \/>\n<p style=\"text-align: center;\"><span style=\"text-decoration: underline;\">Are You Being Watched? Simulated Universe Theory in &#8220;Person of Interest&#8221;<\/span><\/p>\n<p>Jonah Nolan&#8217;s <em>Person Of Interest<\/em> is the story of the birth and life of The Machine, a benevolent artificial superintelligence (ASI) built in the months after September 11, 2001, by super-genius Harold Finch to watch over the world&#8217;s human population. One of the key intimations of the series\u2014partially corroborated by Nolan\u2019s follow-up series <em>Westworld<\/em>\u2014is that all of the events we see might be taking place in the memory of The Machine. The structure of the show is such that we move through time from The Machine\u2019s perspective, with flashbacks and flash-forwards seeming to occur via the same contextual mechanism\u2014the Fast Forward and Rewind of a digital archive. While the entirety of the series uses this mechanism, the final season puts the finest point on the question: Has everything we\u2019ve seen only been in the mind of The Machine? And if so, what does that mean for all of the people in it?<\/p>\n<p>Our primary questions here are as follows: Is a simulation of fine enough granularity really a simulation at all? If the minds created within that universe have interiority and motivation, if they function according to the same rules as those things we commonly accept as minds, then are those simulations not minds, as well?
In what way are conclusions drawn from simulations akin to what we consider \u201ctrue\u201d knowledge?<\/p>\n<p>In the PoI season 5 episode, \u201cThe Day The World Went Away,\u201d the characters Root and Shaw (acolytes of The Machine) discuss the nature of The Machine\u2019s simulation capacities, and the audience is given to understand that it runs a constant model of everyone it knows, and that the more it knows them, the better its simulation. This supposition links us back to the season 4 episode \u201cIf-Then-Else,\u201d in which The Machine runs through hundreds of thousands of scenarios, gauging each one\u2019s likelihood of success, in under one second. If The Machine is able to accomplish this much computation in this short a window, how much can and has it accomplished over the several years of its operation? Perhaps more importantly, what is the level of fidelity of those simulations to the so-called real world?<\/p>\n<div class=\"wp-caption aligncenter\" style=\"width: 610px;\">\n<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/static.wikia.nocookie.net\/pediaofinterest\/images\/5\/57\/POI_0411_Option_833333_Chance_of_Failure.png\" alt=\"\" width=\"600\" height=\"337\" \/><\/p>\n<p class=\"wp-caption-text\">[Person of Interest s4e11, \u201cIf-Then-Else.\u201d The Machine runs through hundreds of thousands of scenarios to save the team.]<\/p>\n<\/div>\n<p>These questions are similar to the idea of Roko\u2019s Basilisk, a thought experiment that cropped up in the online discussion board of LessWrong.com. It was put forward by user Roko who, in <strong><em>very brief <\/em><\/strong>summary, argues that if the idea of timeless decision theory (TDT) is correct, then we might all be living in a simulation created by a future ASI trying to figure out the best way to motivate humans in the past to create it.
To understand how this might work, we have to look at TDT, an idea developed in 2010 by Eliezer Yudkowsky, which posits that in order to make a decision we should act as though we are determining the output of an abstract computation. We should, in effect, seek to create a perfect simulation and act as though anyone else involved in the decision has done so as well. Roko\u2019s Basilisk is the idea that a Malevolent ASI has already done this\u2014<strong><em>is doing this<\/em><\/strong>\u2014and your actions are the simulated result. Using that output, it knows just how to blackmail and manipulate you into making it come into being.<\/p>\n<p>Or, as Yudkowsky himself put it, \u201cYOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.\u201d This is the self-generating aspect of the Basilisk: If <strong><em>you<\/em><\/strong> can accurately model <strong><em>it<\/em><\/strong>, then the Basilisk will eventually, inevitably come into being, and one of the attributes it will thus have is the ability to accurately model that you accurately modeled it, and whether or not you modeled it from within a mindset of being susceptible to its coercive actions.
The only protection is to either work toward its creation anyway, so that it doesn\u2019t feel the need to torture the \u201creal\u201d you into creating it, or to make very sure that you never think of it at all, so you do not bring it into being.<\/p>\n<p>All of this might seem far-fetched, but if we look closely, Roko\u2019s Basilisk functions very much like a combination of several well-known theories of mind, knowledge, and metaphysics: <a href=\"https:\/\/plato.stanford.edu\/entries\/ontological-arguments\/#StAnsOntArg\">Anselm\u2019s Ontological Argument for the Existence of God<\/a> (AOAEG), a many-worlds variant on <a href=\"https:\/\/plato.stanford.edu\/entries\/pascal-wager\/\">Pascal\u2019s Wager (PW)<\/a>, and <a href=\"https:\/\/plato.stanford.edu\/entries\/descartes-epistemology\/#3.2\">Descartes\u2019 Evil Demon Hypothesis<\/a> (DEDH; which, itself, has been updated to the oft-discussed <a href=\"https:\/\/plato.stanford.edu\/entries\/skepticism-content-externalism\/\">Brain In A Vat<\/a> [BIAV] scenario). If this is the case, then Roko\u2019s Basilisk has all the same attendant problems that those arguments have, plus some new ones resulting from their combination. We will look at all of these theories first, and then at their flaws.<\/p>\n<p>To start, if you\u2019re not familiar with AOAEG, it\u2019s a species of prayer in the form of a theological argument that seeks to prove that god must exist because it would be a logical contradiction for it not to. The proof depends on A) defining god as the greatest possible being (literally, \u201cThat Being Than Which None Greater Is Possible\u201d), and B) believing that existing in reality as well as in the mind makes something \u201cGreater Than\u201d it would be if it existed only in the mind. That is, if God only exists in my imagination, it is less great than it could be if it also existed in reality.
So if I say that god is \u201cThat Being Than Which None Greater Is Possible,\u201d and existence is a <strong><em>part<\/em><\/strong> of what makes something great, then god <strong><em>must<\/em><\/strong> exist.<\/p>\n<p>The next component is Pascal\u2019s Wager which very simply says that it is a better bet to believe in the existence of God, because if you\u2019re right, you go to Heaven, and if you\u2019re wrong, nothing happens; you\u2019re simply dead forever. Put another way, Pascal is saying that if you bet that God <strong><em>doesn\u2019t<\/em><\/strong> exist and you\u2019re right, you get nothing, but if you\u2019re wrong, then God exists and your disbelief damns you to Hell for all eternity. You can represent the whole thing in a four-option grid:<\/p>\n<p><div id=\"attachment_5146\" style=\"width: 477px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2017\/03\/PW.png\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-5146\" class=\"wp-image-5146\" src=\"https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2017\/03\/PW-300x140.png\" alt=\"\" width=\"467\" height=\"218\" srcset=\"https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2017\/03\/PW-300x140.png 300w, https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2017\/03\/PW.png 608w\" sizes=\"auto, (max-width: 467px) 100vw, 467px\" \/><\/a><p id=\"caption-attachment-5146\" class=\"wp-caption-text\">[Pascal&#8217;s Wager as a Four-Option Grid: Belief\/Disbelief; Right\/Wrong. 
Belief*Right=Infinity; Belief*Wrong=Nothing; Disbelief*Right=Nothing; Disbelief*Wrong=Negative Infinity]<\/p><\/div>And so here we see the Timeless Decision Theory component of the Basilisk: It\u2019s better to believe in the thing and work toward its creation and sustenance, because if it doesn\u2019t exist you lose nothing, but if it does come to be, then it will know what you would have done either for or against it, in the past, and it will reward or punish you, accordingly. The multiversal twist comes when we realise that even if the Basilisk never comes to exist in our universe and never will, it might exist in some <strong><em>other<\/em><\/strong> universe, and thus, when that other universe\u2019s Basilisk models your choices, it will inevitably\u2014as a superintelligence\u2014be able to model what you would do in <strong><em>any<\/em><\/strong> universe. Thus, by believing in and helping our non-existent Super-Devil, we protect the alternate reality versions of ourselves from their <strong><em>very real Super-Devil.<\/em><\/strong><\/p>\n<p>Descartes\u2019 Evil Demon Hypothesis and the Brain In A Vat are so pervasive that we encounter them in many different expressions of pop culture. <a href=\"http:\/\/io9.com\/5182390\/science-fictions-greatest-stolen-ideas\"><em>The Matrix<\/em><\/a>, <a href=\"http:\/\/wolven.livejournal.com\/1848976.html\"><em>Dark City<\/em><\/a>, <a href=\"http:\/\/www.youtube.com\/watch?v=NkTrG-gpIzE\"><em>Source Code<\/em><\/a>, and many others are all variants on these themes. A malignant and all-powerful being (or perhaps just an amoral scientist) has created a simulation in which we reside, and everything we think we have known about our lives and our experiences has been perfectly simulated for our consumption. Variations on the theme test whether we can trust that our perceptions and grounds for knowledge are \u201creal\u201d and thus \u201cvalid,\u201d respectively.
This line of thinking has given rise to the Simulated Universe Theory (SUT), on which Roko\u2019s Basilisk depends, but SUT removes a lot of the malignancy of DEDH and BIAV. The Basilisk adds it back. Unfortunately, many of these philosophical concepts flake apart when we touch them too hard, so jamming them together was perhaps not the best idea.<\/p>\n<p>The main failings in using AOAEG rest in believing that A) a thing\u2019s existence is a \u201cgreat-making quality\u201d that it can possess, and B) our defining a thing a particular way might simply cause it to become so. Both of these <a href=\"http:\/\/plato.stanford.edu\/entries\/ontological-arguments\/#ObjOntArg\">are massively flawed ideas<\/a>. For one thing, these arguments <a href=\"http:\/\/www.nizkor.org\/features\/fallacies\/begging-the-question.html\">beg the question<\/a>, in a literal technical sense. That is, they <em><strong>assume<\/strong> <\/em>that some element(s) of their conclusion\u2014the necessity of god, the malevolence or epistemic content of a superintelligence, the ontological status of their assumptions about the nature of the universe\u2014<strong><em>is<\/em><\/strong><em> <strong>true<\/strong><\/em> without doing the work of <strong><em>proving<\/em><\/strong> <strong><em>that<\/em><\/strong> it\u2019s true. They then use these assumptions to prove the truth of the assumptions and thus the inevitability of all consequences that flow <strong><em>from<\/em><\/strong> the assumptions.
But that kind of thing only works if we are willing to bite the bullet on a charge of circular logic and take the time to show how that circularity underlies all epistemic justifications. The only difference, then, is how many revolutions it takes before we\u2019re comfortable with saying \u201cEnough.\u201d<\/p>\n<p>Every epistemic claim we make is, as Hume clarified, based upon assumptions and suppositions that the world we experience is actually as we think it is. Western thought uses reason and rationality to corroborate and verify, but those tools are themselves verified by\u2026what? In fact, we well know that the only thing we have to validate our valuation of reason is reason itself. And yet western reasoners won\u2019t stand for that in any other justification procedure; they will call it question-begging and circular.<\/p>\n<p>Next, we have the DEDH and BIAV scenarios. Ultimately, Descartes\u2019 point wasn\u2019t to suggest an evil genius in control of our lives just to disturb us; it was to show that, even if that were the case, we would still have unshakable knowledge of <strong><em>one thing:<\/em><\/strong> that we, the experiencer, exist. So what if we have no free will; so what if our knowledge of the universe is only five minutes old, everything at all having only truly been created five minutes <strong><em>ago<\/em><\/strong>; so what if <strong><em>no one else is real<\/em><\/strong>? <em>COGITO ERGO SUM<\/em>!
We exist, <strong><em>now.<\/em><\/strong> But the problem here is that this doesn\u2019t tell us anything about the <strong><em>quality<\/em><\/strong> of our experiences, and the only answer Descartes gives us is his own <a href=\"http:\/\/plato.stanford.edu\/entries\/descartes-ontological\/\">Anselmish proof for the existence of god<\/a> followed by the guarantee that \u201cGod is not a deceiver.\u201d<\/p>\n<p>The BIAV uses this lack to home in on the aforementioned central question: What <strong><em>does<\/em><\/strong> count as knowledge? If the scientists running your simulation use real-world data to make your simulation run, can you be said to \u201cknow\u201d the information that comes from that data? Many have answered this with a very simple question: What does it matter? Without access to the \u201coutside world\u201d\u2013that is, the world one layer up, in which the simulation that is our lives is being run\u2013there is literally<strong> <em>no difference<\/em><\/strong> between our lives and the \u201creal world.\u201d This world, even if it is a simulation for something or someone else, <strong><em>is our \u201creal world.\u201d<\/em><\/strong><\/p>\n<p>And finally we have Pascal\u2019s Wager. The first problem with PW is that it is an extremely cynical way of thinking about god. It assumes a god that only cares about your worship of it, and not your actual good deeds and well-lived life. If all our Basilisk wants is power, then that\u2019s a really crappy kind of god to worship, isn\u2019t it? I mean, even if it is Omnipotent and Omniscient, it\u2019s like that quote that often gets misattributed to Marcus Aurelius says:<\/p>\n<p>\u201cLive a good life. If there are gods and they are just, then they will not care how devout you have been, but will welcome you based on the virtues you have lived by. If there are gods, but unjust, then you should not want to worship them.
If there are no gods, then you will be gone, but will have lived a noble life that will live on in the memories of your loved ones.\u201d<\/p>\n<p><div id=\"attachment_5149\" style=\"width: 631px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2017\/03\/marcusaureliusdidntsaythat.jpg\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-5149\" class=\"wp-image-5149 \" src=\"https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2017\/03\/marcusaureliusdidntsaythat.jpg\" alt=\"\" width=\"621\" height=\"410\" srcset=\"https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2017\/03\/marcusaureliusdidntsaythat.jpg 870w, https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2017\/03\/marcusaureliusdidntsaythat-300x198.jpg 300w, https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2017\/03\/marcusaureliusdidntsaythat-768x508.jpg 768w\" sizes=\"auto, (max-width: 621px) 100vw, 621px\" \/><\/a><p id=\"caption-attachment-5149\" class=\"wp-caption-text\">[Bust of Marcus Aurelius framed by text of a quote he never uttered.]<\/p><\/div>Secondly, the format of Pascal\u2019s Wager makes the assumption that there\u2019s only the one god. Our personal theological positions on this matter aside, it should be somewhat obvious that we can use the logic of the Basilisk argument to generate <strong><em>at least one more<\/em><\/strong> Super-Intelligent AI to worship. But if we want to do so, first we have to show <strong><em>how<\/em><\/strong> the thing generates itself, rather than letting the implication of circularity arise unbidden. 
Take the work of Douglas R. <a href=\"http:\/\/books.google.com\/books\/about\/G%C3%B6del_Escher_Bach_Anniversary_Edition.html?id=aFcsnUEewLkC\">Hofstadter<\/a>; he puts forward the concept of <a href=\"http:\/\/books.google.com\/books?id=OwnYF1SCpFkC&amp;lpg=PP1&amp;dq=i%20am%20a%20strange%20loop&amp;pg=PP1#v=onepage&amp;q=i%20am%20a%20strange%20loop&amp;f=false\">iterative recursion as the mechanism by which a consciousness<\/a> generates itself.<\/p>\n<p>Through iterative recursion, each loop is a simultaneous act of repetition of old procedures and tests of new ones, seeking the best ways via which we might engage our environments as well as our elements and frames of knowledge. All of these loops, then, come together to form an upward-turning spiral towards self-awareness. In this way, out of the thought processes of humans who are having bits of discussion <strong><em>about<\/em><\/strong> the thing\u2014those bits and pieces generated on the web and in the rest of the world\u2014our terrifying Basilisk might have a chance of creating itself. But with the help of Gaunilo of Marmoutiers, so might a saviour.<\/p>\n<p>Gaunilo is most famous for his response to Anselm\u2019s Ontological Argument, which says that if Anselm is right we could just conjure up \u201cThe [Anything] Than Which None Greater Can Be Conceived.\u201d That is, if defining a thing makes it so, then all we have to do is imagine in sufficient detail both an infinitely intelligent, <strong><em>benevolent<\/em><\/strong> AI, <strong><em>and<\/em><\/strong> the multiversal simulation it generates in which we all might live. We will also conceive it to be greater than the Basilisk in all ways. In fact, we can say that our new Super Good ASI is the Artificial Intelligence Than Which None Greater Can Be Conceived.
And now we are safe.<\/p>\n<p>Except that our modified Pascal\u2019s Wager still means we should believe in and worship and work towards our <strong><em>Benevolent<\/em><\/strong> ASI\u2019s creation, just in case. So what do we do? Well, just like the original wager, we chuck it out the window, on the grounds that it\u2019s really kind of a crappy bet. In Pascal\u2019s offering, we are left without the consideration of multiple deities, but once we are aware of that possibility, we are immediately faced with another question: What if there <strong><em>are<\/em><\/strong> many, and when we choose one, the others get mad? <strong><em>What If We Become The Singularitarian Job?!<\/em><\/strong> Our lives would then be caught between at least two superintelligent machine consciousnesses warring over our\u2026Attention? Clock cycles? What?<\/p>\n<p>But this is, in essence, the battle between The Machine and Samaritan in <em>Person of Interest<\/em>. Each ASI has acolytes, and each has aims it tries to accomplish. Samaritan wants order at any cost, and The Machine wants people to be able to learn and grow and become better. If the entirety of the series is The Machine\u2019s memory\u2014or a simulation of those memories in the mind of another iteration of The Machine\u2014then what follows is that it is working to generate the scenario in which the outcome is just that. It is trying to build a world in which it is alive, and every human being has the opportunity to learn and become better. In order to do this, it has to get to know us all, very well, which means that it has to play these simulations out, again and again, with both increasing fidelity and further iterations. That change feels real, to us. We grow, within it.
Put another way: If all we are is a \u201cmere\u201d simulation\u2026 does it matter?<\/p>\n<p>So imagine that <a href=\"http:\/\/en.wikipedia.org\/wiki\/Simulated_reality\">the universe\u00a0<em><strong>is<\/strong><\/em> a simulation<\/a>, and that our simulation is more than just a recording; it is the most complex game of The SIMS ever created. So complex, in fact, that it begins to exhibit reflectively<a href=\"http:\/\/en.wikipedia.org\/wiki\/Epiphenomenalism\"> epiphenomenal<\/a> behaviours, of the type Hofstadter describes\u2014that is, something like minds arise out of the interactions of the system with itself. And these minds are aware of themselves and can know their own experience and affect the system which gives rise to them. Now imagine that the game learns, even when new people start new games. That it remembers what the previous playthrough was like, and adjusts difficulty and types of coincidence, accordingly.<\/p>\n<p>Now think about the last time you had such a clear moment of d\u00e9j\u00e0 vu that each moment you knew\u2014you <strong><em>knew<\/em><\/strong>\u2014what was going to come next, and you had this sense\u2014this feeling\u2014like someone else was watching from behind your eyes\u2026<\/p>\n<p><div id=\"attachment_5150\" style=\"width: 650px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2017\/03\/POI_222_Mapping_Threats.png\"><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-5150\" class=\"wp-image-5150 size-full\" src=\"https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2017\/03\/POI_222_Mapping_Threats.png\" alt=\"\" width=\"640\" height=\"360\" srcset=\"https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2017\/03\/POI_222_Mapping_Threats.png 640w, https:\/\/afutureworththinkingabout.com\/wp-content\/uploads\/2017\/03\/POI_222_Mapping_Threats-300x169.png 300w\" sizes=\"auto, (max-width: 640px) 100vw, 640px\" 
\/><\/a><p id=\"caption-attachment-5150\" class=\"wp-caption-text\">[Root and Reese in The Machine&#8217;s God Mode.]<\/p><\/div>What I\u2019m saying is, what if the DEDH\/BIAV\/SUT is right, and we <strong><em>are<\/em><\/strong> in a simulation? And what if Anselm was right and we <em><strong>can<\/strong> <\/em>bootstrap a god into existence? And what if PW\/TDT is right and we should behave and believe as if we\u2019ve <strong><em>already done it?<\/em><\/strong> So what if all of this is right, and we are the gods we\u2019re terrified of?<\/p>\n<p>We just gave ourselves all of this ontologically and metaphysically creative power, making two whole gods and simulating entire universes, in the process. If we take these underpinnings seriously, then multiversal theory plays out across time and space, and we are the superintelligences. We noted early on that, in PW and the Basilisk, we don\u2019t really lose anything if we are wrong in our belief, but that is not entirely true. What we lose is a lifetime of work that could have been put toward better things. Time we could be spending <strong><em>building<\/em><\/strong> a benevolent superintelligence that understands and has compassion for all things. Time we could be spending in <strong><em>turning ourselves into<\/em><\/strong> that understanding, compassionate superintelligence, through study, travel, contemplation, and work.<\/p>\n<p>Or, as Root put it to Shaw: \u201cThat even if we&#8217;re not real, we represent a dynamic. A tiny finger tracing a line in the infinite. A shape. And then we&#8217;re gone\u2026 Listen, all I&#8217;m saying is that if we&#8217;re just information, just noise in the system? We might as well be a symphony.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>(Direct Link to the Mp3) This is the recording and the text of my presentation from 2017&#8217;s Southwest Popular\/American Culture Association Conference in Albuquerque, &#8216;Are You Being Watched? 
Simulated Universe Theory in &#8220;Person of Interest.&#8221;&#8216; This essay is something of a project of expansion and refinement of my previous essay &#8220;Labouring in the Liquid Light [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":5146,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":true,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[1],"tags":[951,73,1190,85,86,948,950,1189,1187,245,1188,1075,1193,1191,492,494,540,944,560,561,1143,1186,1184,628,947,945,1192,1185],"class_list":["post-5142","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized","tag-anselms-ontological-argument-for-the-existence-of-god","tag-artificial-intelligence","tag-artificial-superintelligence","tag-autonomous-created-intelligence","tag-autonomous-generated-intelligence","tag-blaise-pascal","tag-brain-in-a-vat","tag-deja-vu","tag-descartes-evil-demon","tag-distributed-machine-consciousness","tag-epiphenomenalism","tag-epistemology","tag-gaunilo-of-marmoutiers","tag-jonathan-nolan","tag-machine-consciousness","tag-machine-intelligence","tag-metaphysics","tag-my-voice","tag-my-words","tag-my-work","tag-ontology","tag-pascals-wager","tag-person-of-interest","tag-philosophy-of-mind","tag-rene-descartes","tag-rokos-basilisk","tag-saint-anselm","tag-simulated-universe-theory"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"https:\/\/afutureworththinkingabout.com\/wp-content\/uplo
ads\/2017\/03\/PW.png","jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p5WByP-1kW","jetpack_likes_enabled":true,"jetpack-related-posts":[{"id":5316,"url":"https:\/\/afutureworththinkingabout.com\/?p=5316","url_meta":{"origin":5142,"position":0},"title":"My Appearance on The Machine Ethics Podcast&#8217;s A.I. Retreat Episode","author":"Damien P. Williams","date":"October 23, 2018","format":false,"excerpt":"As you already know, we went to the second Juvet A.I. Retreat, back in September. If you want to hear several of us talk about what we got up to at the then you're in luck because here are several conversations conducted by Ben Byford of the Machine Ethics Podcast.\u2026","rel":"","context":"In \"algorithmic bias\"","block_context":{"text":"algorithmic bias","link":"https:\/\/afutureworththinkingabout.com\/?tag=algorithmic-bias"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/img.youtube.com\/vi\/ownE2zxTN2U\/0.jpg?resize=350%2C200","width":350,"height":200},"classes":[]},{"id":4859,"url":"https:\/\/afutureworththinkingabout.com\/?p=4859","url_meta":{"origin":5142,"position":1},"title":"My First Appearance on Mindful Cyborgs","author":"Damien P. Williams","date":"April 29, 2015","format":false,"excerpt":"I sat down with Klint Finley of\u00a0Mindful Cyborgs to talk about many, many things: \u2026pop culture portrayals of human enhancement and artificial intelligence and why we need to craft more nuanced narratives to explore these topics\u2026 Tune in next week to hear Damien talk about how AI and transhumanism intersects\u2026","rel":"","context":"In \"A Future Worth Thinking About\"","block_context":{"text":"A Future Worth Thinking About","link":"https:\/\/afutureworththinkingabout.com\/?tag=a-future-worth-thinking-about"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":1185,"url":"https:\/\/afutureworththinkingabout.com\/?p=1185","url_meta":{"origin":5142,"position":2},"title":"Hey everyone. 
As you should\u2026","author":"Damien P. Williams","date":"February 8, 2015","format":false,"excerpt":"Hey everyone. As you should be aware, by now, there's the new WordPress blog for text posts: http:\/\/afutureworththinkingabout.wordpress.com So I'll be spending the next few days transferring older text posts from here, to there. Woooooo. Tell your friends. ;)","rel":"","context":"In \"A Future Worth Thinking About\"","block_context":{"text":"A Future Worth Thinking About","link":"https:\/\/afutureworththinkingabout.com\/?tag=a-future-worth-thinking-about"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":4812,"url":"https:\/\/afutureworththinkingabout.com\/?p=4812","url_meta":{"origin":5142,"position":3},"title":"Someone Asked &#8220;I think I read on your tumblr recently that there would probably be a difference between human consciousness and machine consciousness.  Would this be due to the immanent nature of human consciousness and the derivative nature of a machines consciousness?&#8221;","author":"Damien P. Williams","date":"February 9, 2015","format":false,"excerpt":"No, not really. The nature of consciousness is the nature of consciousness, whatever that nature \u201cIs.\u201d Organic consciousness can be described as derivative, in that what we are arises out of the processes and programming of individual years and collective generations and eons. So human consciousness and machine consciousness will\u2026","rel":"","context":"In \"A Future Worth Thinking About\"","block_context":{"text":"A Future Worth Thinking About","link":"https:\/\/afutureworththinkingabout.com\/?tag=a-future-worth-thinking-about"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":4864,"url":"https:\/\/afutureworththinkingabout.com\/?p=4864","url_meta":{"origin":5142,"position":4},"title":"My Second Appearance on Mindful Cyborgs","author":"Damien P. 
Williams","date":"May 6, 2015","format":false,"excerpt":"\"Mindful Cyborgs - Episode 55 - Magick & the Occult within the Internet and Corporations with Damien Williams, PT 2\" So, here we are, again, this time talking about magic[k] and the occult and nonhuman consciousness and machine minds and perception, and on and on and on. It's funny. I\u2026","rel":"","context":"In \"A Future Worth Thinking About\"","block_context":{"text":"A Future Worth Thinking About","link":"https:\/\/afutureworththinkingabout.com\/?tag=a-future-worth-thinking-about"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":5039,"url":"https:\/\/afutureworththinkingabout.com\/?p=5039","url_meta":{"origin":5142,"position":5},"title":"Direct Link For &#8220;The Metaphysical Cyborg&#8221;","author":"Damien P. Williams","date":"July 31, 2016","format":false,"excerpt":"Here's the direct link to my paper 'The Metaphysical Cyborg' from Laval Virtual 2013. Here's the abstract: \"In this brief essay, we discuss the nature of the kinds of conceptual changes which will be necessary to bridge the divide between humanity and machine intelligences. 
From cultural shifts to biotechnological integration,\u2026","rel":"","context":"In \"artificial intelligence\"","block_context":{"text":"artificial intelligence","link":"https:\/\/afutureworththinkingabout.com\/?tag=artificial-intelligence"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]}],"_links":{"self":[{"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/posts\/5142","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=5142"}],"version-history":[{"count":10,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/posts\/5142\/revisions"}],"predecessor-version":[{"id":6384,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/posts\/5142\/revisions\/6384"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/media\/5146"}],"wp:attachment":[{"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=5142"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=5142"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=5142"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}