{"id":5665,"date":"2022-12-03T21:37:56","date_gmt":"2022-12-04T02:37:56","guid":{"rendered":"https:\/\/afutureworththinkingabout.com\/?p=5665"},"modified":"2023-01-14T22:25:03","modified_gmt":"2023-01-15T03:25:03","slug":"further-thoughts-on-the-blueprint-for-the-ai-bill-of-rights","status":"publish","type":"post","link":"https:\/\/afutureworththinkingabout.com\/?p=5665","title":{"rendered":"Further Thoughts on the &#8220;Blueprint for an AI Bill of Rights&#8221;"},"content":{"rendered":"<p>So with the job of White House Office of Science and Technology Policy director having gone to Dr. Arati Prabhakar back in October, rather than Dr. Alondra Nelson, and the release of the &#8220;<a href=\"https:\/\/www.whitehouse.gov\/ostp\/ai-bill-of-rights\/\">Blueprint for an AI Bill of Rights<\/a>&#8221; (henceforth &#8220;BfaAIBoR&#8221; or &#8220;blueprint&#8221;) a few weeks after that, I am both very interested also pretty worried to see what direction research into &#8220;artificial intelligence&#8221; is actually going to take from here.<\/p>\n<p>To be clear, my fundamental problem with the &#8220;Blueprint for an AI bill of rights&#8221; is that while it pays pretty fine lip-service to the ideas of\u00a0 community-led oversight, transparency, and abolition of and abstaining from developing <a href=\"https:\/\/www.tandfonline.com\/doi\/full\/10.1080\/23299460.2020.1831365\">certain tools<\/a>, it <em><strong>begins<\/strong><\/em> with, and repeats throughout, the idea that sometimes law enforcement, the military, and the intelligence community might need to just\u2026 ignore these principles. Additionally, Dr. Prabhakar was director of DARPA for roughly five years, between 2012 and 2015, and considering what I know for a fact got funded within that window? Yeah.<\/p>\n<p>To put a finer point on it, 14 out of 16 uses of the phrase &#8220;law enforcement&#8221; and 10 out of 11 uses of &#8220;national security&#8221; in this blueprint are in direct reference to why those entities&#8217; or concept structures&#8217; needs might have to supersede the recommendations of the BfaAIBoR itself. The blueprint also doesn&#8217;t mention the depredations of extant military &#8220;AI&#8221; <b><i>at all<\/i><\/b>. 
<p>The blueprint also doesn't mention the depredations of extant military "AI" <b><i>at all</i></b>. Instead, it points to the idea that the Department of Defense (DoD) "has adopted [AI] Ethical Principles, and tenets for Responsible Artificial Intelligence specifically tailored to its [national security and defense] activities." And so with all of that being the case, there are several current "AI" projects in the pipe which a blueprint like this wouldn't cover, even if it ever became policy, and frankly that just fundamentally undercuts <b><i>much</i></b> of the real good a project like this could do.</p>

<p>For instance, at present, the DoD's ethical frames are entirely about transparency, <a href="https://onlinelibrary.wiley.com/doi/full/10.1002/ail2.61">explainability</a>, and some lip-service around equitability and "deliberate steps to minimize unintended bias in AI …" To understand a bit more of what I mean by this, here's the <a href="https://media.defense.gov/2022/Jun/22/2003022604/-1/-1/0/Department-of-Defense-Responsible-Artificial-Intelligence-Strategy-and-Implementation-Pathway.PDF">DoD's "Responsible Artificial Intelligence Strategy…" pdf</a> (which is not natively searchable and I had to OCR myself, so heads-up); and here's the <a href="https://intelligence.gov/principles-of-artificial-intelligence-ethics-for-the-intelligence-community">Office of the Director of National Intelligence's "ethical principles" for building AI</a>. Note that not once do they consider the moral status of the biases and values they have <b><i>intentionally</i></b> baked into their systems.</p>
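<p>(If you want to do that OCR step yourself rather than trust my copy, here's a rough sketch of one way to go about it, assuming you have the <code>pdf2image</code> library, which needs poppler, and <code>pytesseract</code>, which needs the Tesseract engine, installed; the file names below are placeholders. The <code>ocrmypdf</code> command-line tool will get you a searchable PDF instead, if that's what you're after.)</p>

<pre><code># Rough sketch of extracting text from a scanned, non-searchable PDF.
# Assumes pdf2image (needs poppler) and pytesseract (needs Tesseract)
# are installed; the file names are placeholders.
from pdf2image import convert_from_path
import pytesseract

# Render each PDF page to an image, then OCR it.
pages = convert_from_path("dod-responsible-ai-strategy.pdf", dpi=300)

with open("dod-responsible-ai-strategy.txt", "w", encoding="utf-8") as out:
    for number, page in enumerate(pages, start=1):
        out.write(f"--- page {number} ---\n")
        out.write(pytesseract.image_to_string(page))
        out.write("\n")
</code></pre>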
<div style="width: 610px" class="wp-caption aligncenter"><img loading="lazy" decoding="async" src="https://www.darpa.mil/ddm_gallery/xai-figure2-inline-graphic.png" alt="An 'Explainable AI' diagram from DARPA, showing two flowcharts, one on top of the other. The top one is labeled 'today' and has the top-level condition 'task' branching to both a confused-looking human user and a state called 'learned function,' which is determined by a previous state labeled 'machine learning process,' which is in turn determined by a state labeled 'training data.' 'Learned function' feeds 'decision or recommendation' to the human user, who has several questions about the model's behaviour, such as 'Why did you do that?' and 'When can I trust you?' The bottom one is labeled 'XAI' and has the top-level condition 'task' branching to both a happy- and confident-looking human user and a state called 'explainable model/explanation interface,' which is determined by a previous state labeled 'new machine learning process,' which is in turn determined by a state labeled 'training data.' 'Explainable model/explanation interface' feeds choices to the human user, who can feed responses BACK to the system, and who has several confident statements about the model's behaviour, such as 'I understand why' and 'I know when to trust you.'" width="600" height="338" /><p class="wp-caption-text">An "Explainable AI" diagram from DARPA</p></div>

<p>What I mean is, neither of these supposedly guiding, foundational documents considers questions such as how best to determine the ethical status of an event in which, e.g., someone is (or several someones are) killed by an autonomous or semi-autonomous "AI," but one whose inner goings-on we <em><strong>do</strong></em> observe and <em><strong>can</strong></em> explain; and the explanation for what we can observe turns out to be, y'know… that the system was built on the intensely <a href="https://vtechworks.lib.vt.edu/bitstream/handle/10919/111528/Williams_DP_D_2022.pdf#162">militarized</a> goals of fighting and killing a lot of people for variously spurious reasons. Like, that is what these systems are by and large <em><strong>intended to be for</strong></em>. Those are the questions which precipitate their commissioning, the situations they're designed to engage, and the data they're trained to learn from in order to do it.</p>

<p>And because the connections in this country between what the military does and what local civilian police want to do are always tighter than we would prefer: the San Francisco Police Department was <a href="https://apnews.com/article/police-san-francisco-government-and-politics-d26121d7f7afb070102932e6a0754aa5">recently granted</a>, and then <a href="https://apnews.com/article/police-san-francisco-a392e5a7c1aaac8f58387dde672a7fd1">subsequently at least temporarily blocked from exercising</a>, the ability to use <a href="https://www.eff.org/deeplinks/2022/11/let-them-know-san-francisco-shouldnt-arm-robots">semi-autonomous drones to kill people in "certain catastrophic, high-risk, high-threat, mass casualty events."</a> Now, I warned you <a href="https://afutureworththinkingabout.com/?p=5135">seven years ago</a> that this was going to happen, and since then some of my stances on things have changed in terms of degree, but the core elements unfortunately haven't.
That is, while I am <em><strong>more</strong></em> strident in my belief that certain technologies should be abolished and abstained from until our society gets its shit together, I am no less sure that we are a <em><strong>long</strong></em> way from said getting-together of our collective shit.</p>

<p>We are talking about giving institutions founded in racism the ability to deploy a semi-autonomous militarized tool to interface with and carry out the goals of a fundamentally racist technosystem. What could possibly go wrong? J/k, so so very much is gonna go wrong, holy fucking <em><strong>shit</strong></em> it's bad. (And yes, an interlocking system of systems doing precisely what its component parts were designed to do can, should, and must be described as "going wrong" when said system-of-systems' perfect functioning results in people's mass persecution and death.)</p>

<p>And then there's the fact that <a href="https://afutureworththinkingabout.com/?p=5558">Neuralink</a> is actively seeking FDA approval for their supposedly "AI"-controlled brain-computer interface chip (see above note about things DARPA has definitely funded). Now, at last check, 15 of 23 nonhuman primate test subjects died from being implanted with this chip. That's really simple math, and it comes out to a roughly <em><strong>sixty-five percent</strong></em> death rate. Now I don't know about you, but that sounds like an extremely shitty track record on which to start gunning for phase-1 human trials. And not only that, but look at the last month of Twitter; is that really who you want making and administering a piece of technology with a literal neural interface?</p>
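<p>(And for the record, here's that "really simple math" as a one-liner, if you want to run it yourself:)</p>

<pre><code># The "really simple math" on the reported primate deaths: 15 of 23.
deaths, subjects = 15, 23
print(f"{deaths / subjects:.1%}")  # prints "65.2%"
</code></pre>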
<p>Additionally, there are the teams at <a class="rm-stats-tracked" href="https://www.innereye.ai/" target="_blank" rel="noopener">InnerEye</a> and <a class="rm-stats-tracked" href="https://www.emotiv.com/about-emotiv/" target="_blank" rel="noopener">Emotiv</a>, out of Israel and Silicon Valley respectively, who are looking to get <a href="https://spectrum.ieee.org/neurotech-workplace-innereye-emotiv">BCI neurochips out to the public, and are specifically marketing them as on-the-job augmentations</a>. Now, as has <a href="https://law.northeastern.edu/event/systems-for-systems-against/">previously</a> been <a href="https://vtechworks.lib.vt.edu/bitstream/handle/10919/111528/Williams_DP_D_2022.pdf#86">discussed</a> by myself and <a href="https://cdt.org/insights/report-warning-bossware-may-be-hazardous-to-your-health/">others</a>, there are vast and dangerous implications to algorithmically mediated job-related surveillance, generally referred to as "bossware." Whether it's the chronic stresses of being surveilled, or the unequal pressures and oppression of said surveillance on the bodyminds of already-marginalized individuals and groups, bossware does real harm of both an immediate and long-term nature.</p>

<p>Now take the fact of all of that, and turn it into a chip implanted in your brain that monitors not only your uptime, but the direction of your gaze, your resting eye movements, and your endocrine response to certain stimuli. <em><strong>Now</strong></em> remember that these chips are claimed to be able not just to read brain-states, but to write them as well.</p>

<p>Here are three true things:</p>
<ol>
<li>BCIs have been a dream of mine since I was a small child. The idea of being able to connect to a computer <b><i>With My Mind?</i></b> Has always been appealing.</li>
<li>BCI could be an <b><i>amazing</i></b> benefit to a <b><i>lot</i></b> of people, in terms of being able to keep tabs on chronic disabilities, or even make connected implantable devices and limbs more directly operable.<br />
And</li>
<li>I will not willingly put a BCI built out of predatorily capitalist, disableist, racist, and elsewise bigoted values into my body, and any BCI built for profit and sold on the open market will have <b><i>at least</i></b> two of those biases, if not all of the above. BCI is already touted as a way to "fix" autistic people (i.e., to make them more "normal") and that is a harmful and dangerous mentality from which to undertake the development of a technology which is literally meant to rewire people's brains.</li>
</ol>

<p>Add to that the immediate and contemporary facts that each of these companies <a href="https://nothing.substack.com/p/big-tech-enslaves-and-murders-muslim">uses forced labor in China to make the shit they make</a> and <a href="https://20minutesintothefuture.substack.com/p/facebook-is-fanning-the-flames-of">makes their money by muddying and casting doubt on factual information and democratic processes</a>, plus the whole history of racist and misogynist medicalization, not to mention rampant transphobia, and it's a recipe for really bad shit to get trumpeted as miracles in WIRED or on CNET or wherever.</p>

<p>Further, we have companies like Facebook <a href="https://twitter.com/Abebab/status/1593588177117356034">deploying half-baked "AI" large language models and then claiming that the systems were "used incorrectly" when people stress-test</a> them to show exactly how systemically, prejudicially biased said models are.
Yann LeCun had a several-hour-long, multi-thread argument with several "AI" researchers and ethicists who were, quite frankly, extremely generous in explaining to him that when people put "Meta's" Galactica model through very simple paces to check for things like racism, antisemitism, ableism, misogyny, transphobia, or potential for abuse, that wasn't people "abusing" or "misrepresenting" the model. Rather, it was people using the model exactly as it was billed to them: as an all-in-one "AI" knowledge set designed to distill, compile, or compose novel scientific documents out of what it's been fed as training data and what it can search online.</p>

<p>That's how it was sold by LeCun and others at "Meta," that's how people thought of it as they tested it out, and those datasets and the operational search, sort, and generate algorithms used in that way are what produced a wide range of truly bad results, ranging from <a href="https://theconversation.com/the-galactica-ai-model-was-trained-on-scientific-knowledge-but-it-spat-out-alarmingly-plausible-nonsense-195445">just gibberish</a> to the <a href="https://twitter.com/interacciones/status/1593592265531920389">truly</a> and deeply <a href="https://twitter.com/interacciones/status/1593594309131075586">disturbing</a>.</p>

<p>So, <a href="https://afutureworththinkingabout.com/?p=5600">again</a>: When the most powerful interlocking corporations, militaries, intelligence agencies, and carceral systems on the planet refuse to even <em><strong>acknowledge</strong></em> the values and prejudices they've woven into their systems, and the potential policy guidance that would govern them gives them a free pass as long as they can <em><strong>show</strong></em> that they're an interlocking system of capital <em><strong>and</strong></em> cops, spies, and soldiers, then what can possibly be done to meaningfully correct their course? (And also let me ask again, as I've asked <a href="https://afutureworththinkingabout.com/?s=capital">many times</a> before: Even if these systems ever do anything like what their creators claim, rather than being nothing but a hype-soaked fever dream, are these <a href="https://www.wired.com/story/effective-altruism-artificial-intelligence-sam-bankman-fried/">really the people and groups</a> we <em><strong>want</strong></em> building these tools and systems to begin with?)</p>

<p>So, yes: The BfaAIBoR contains lots of the right-<b><i>sounding</i></b> words and concepts, and those words and concepts <em><strong>could</strong></em> facilitate some truly beneficial sociocultural and socioeconomic impacts, both within the "AI" industry and for the rest of us who are in relational context with, and subject to, the systems that industry produces. But unfortunately it contains none of the real and meaningfully actionable frameworks or mechanisms which would allow those impacts to come to fruition. So, like I said: I'm intrigued, but also worried.</p>

<p>And so, with all of that being said, it's important to note that this "Blueprint for an AI Bill of Rights" isn't law, or even a set of real intragovernmental policy directives, yet; it's just a (very) preliminary whitepaper.
And since that is the case, it means that two further things are true:</p>

<ol>
<li>As I learned in <a href="https://afutureworththinkingabout.com/?p=5375#more-5375">my time at SRI International</a>, many firms take whitepapers <b><i>very</i></b> seriously, as whitepapers give them direct lines into the kinds of concept structures their potential funding agencies are going to be looking for; that will also undoubtedly be the case with this blueprint. And so:</li>
<li>Now is absolutely the very best time for those of us who care about this sort of thing to <b><i>really</i></b> make some serious noise about it, to help enact meaningful changes to its structure and ideas, before it has a chance to <b><i>become</i></b> official policy.</li>
</ol>

<p>That is, it will be far easier to get beneficial, meaningful, and substantive oversight, regulatory, and even just values-based changes made now than it will be to try to make them once the BfaAIBoR, or something else very similar to it, is fully in place. And we need to do this as soon as possible because, as they've shown us all time and time again, these groups really absolutely cannot be counted on to adequately regulate and police themselves. Like I said: R&amp;D firms scope out even preliminary governmental whitepapers, looking for guidance as to how best to appeal to their potential funders; and the thing to remember about that is that lacunae and loopholes are a form of guidance, too.</p>

<p>A document like the Blueprint should be created, applied, enacted, and adjudicated under the supervision of those who know the shape of the damage these tools can cause; it should be at the direction of those who know what these systems can do and have done that the rules for building these systems are written. It should be in the care of those who have been most often subject to, and are thus most acutely aware of, the oppressive, marginalizing nature of the technosocial systems that make up our culture, and who work to envision them otherwise.</p>

<p><em>December 7, 2022: This post was updated with the results of the more recent vote by the San Francisco Board of Supervisors to "pause" the SFPD's use of semi-autonomous lethal drones.</em></p>

<hr />
<h5 style="text-align: left;"><em>A preliminary version of this post was originally published at the <a href="https://tinyletter.com/Technoccult/letters/technoccult-news-the-days-of-swine-and-roses">Technoccult Newsletter</a>, and a more refined but still early draft was published on <a href="https://www.patreon.com/posts/75446871">my Patreon</a>.</em></h5>