{"id":6422,"date":"2025-12-06T17:39:34","date_gmt":"2025-12-06T22:39:34","guid":{"rendered":"https:\/\/afutureworththinkingabout.com\/?p=6422"},"modified":"2026-01-17T15:08:27","modified_gmt":"2026-01-17T20:08:27","slug":"failures-of-ai-promise-critical-thinking-misinformation-prosociality-trust","status":"publish","type":"post","link":"https:\/\/afutureworththinkingabout.com\/?p=6422","title":{"rendered":"Failures of &#8220;AI&#8221; Promise: Critical Thinking, Misinformation, Prosociality, &#038; Trust"},"content":{"rendered":"<p>So, <a href=\"https:\/\/www.nature.com\/articles\/d41586-025-03733-x\">new research<\/a> shows that a) LLM-type &#8220;AI&#8221; chatbots are extremely persuasive and able to get voters to shift their positions, and that b) the more effective they are at that, the less they hew to factual reality.<\/p>\n<p>Which: Yeah. A bunch of us told you this.<\/p>\n<p>Again: the Purpose of LLM- type &#8220;AI&#8221; is not to tell you the truth or to lie to you, but to provide you with an answer-shaped something you are statistically determined to be more likely to accept, irrespective of facts\u2014 this is the reason I call them &#8220;<a href=\"https:\/\/www.youtube.com\/watch?v=9DpM_TXq2ws\">bullshit engines<\/a>.&#8221; And it&#8217;s what makes them perfect for accelerating dis- and misinformation and persuasive propaganda; perfect for authoritarian and fascist aims of destabilizing trust in expertise. Now, the fear here isn&#8217;t necessarily that candidate A gets elected over candidate B (see commentary from the paper authors, <a href=\"https:\/\/archive.is\/pjKqI\">here<\/a>). 
The real problem is the loss of even the willingness to <em><strong>try<\/strong><\/em> to build shared consensus reality\u2014 i.e., the &#8220;AI&#8221;-enabled epistemic crisis point we&#8217;ve been staring down for <a href=\"https:\/\/afutureworththinkingabout.com\/?p=5865\">about a decade<\/a>.<\/p>\n<p>Other <a href=\"https:\/\/arxiv.org\/abs\/2510.01395\">preliminary results<\/a> show that overreliance on &#8220;generative AI&#8221; actively harms critical thinking skills, degrading not just trust in, but the ability to critically engage with, determine the value of, categorize, and intentionally and sincerely consider new ways of organizing and understanding facts to produce knowledge. Further, users actively reject <em><strong>less<\/strong><\/em> sycophantic versions of &#8220;AI&#8221; and get increasingly hostile toward\/less likely to help or be helped by <em><strong>other actual humans<\/strong><\/em> because said humans aren&#8217;t as immediately sycophantic. And thus, taken together, these factors create cycles of psychological (and emotional) dependence on tools that <em><strong>Actively Harm Critical Thinking And Human Interaction<\/strong><\/em>.<\/p>\n<p>What better dirt in which for disinformation to grow?<\/p>\n<p>The design, cultural deployment, embedded values, and structural affordances of &#8220;AI&#8221; have also been repeatedly demonstrated to harm both critical skills development and <em><strong>now also<\/strong><\/em> the <a href=\"https:\/\/www.edweek.org\/technology\/rising-use-of-ai-in-schools-comes-with-big-downsides-for-students\/2025\/10\">structure and maintenance of the fabric of social relationships<\/a> in terms of mutual trust and the desire and ability to learn from each other. 
That is, students are more suspicious of teachers who use &#8220;AI,&#8221; and teachers are still, increasingly, on edge about the idea that their students <em><strong>might<\/strong><\/em> be using &#8220;AI,&#8221; and so, in the inimitable words and delivery of Kurt Russell:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"css-9pa8cd aligncenter\" draggable=\"false\" src=\"https:\/\/cdn.bsky.app\/img\/feed_fullsize\/plain\/did:plc:hiz3i44kravwkf6hmzg42okb\/bafkreia5rzux45474gfqegihm6lu23lskhsk77r4aoesjlelvkax4gsbyi@jpeg\" alt=\"Kurt Russell as MacReady from The Thing, a white man with shoulder-length hair and a long scruff beard, wearing grey and olive drab, looking exhausted and sitting next to a bottle of J&amp;B Rare Blend Scotch whisky and a pint glass 1\/3 full of the same, saying into a microphone, \u201cNobody trusts anybody now. And we\u2019re all very tired.\u201d\" width=\"2000\" height=\"1095\" \/><\/p>\n<p>Combine all of the above with what I&#8217;ve repeatedly argued about the impact of &#8220;AI&#8221; on the spread of <a href=\"https:\/\/www.youtube.com\/watch?v=THE5IwNJ9Fk\">dis- and misinformation, consensus knowledge-making, authoritarianism<\/a>, and the <a href=\"https:\/\/futurism.com\/artificial-intelligence\/openai-sora-stephen-hawking-brutalized\">eugenicist<\/a>, fascist, and generally bigoted tendencies embedded in all of it\u2014and well\u2026 It all sounds pretty anti-pedagogical and anti-social to me.<\/p>\n<p>And I really don&#8217;t think it&#8217;s asking too much to require that all of these demonstrated problems be seriously and <em><strong>meticulously<\/strong><\/em> addressed before anyone advocating for their implementation in educational and workplace settings is allowed to go through with it.<\/p>\n<p>Like\u2026 That just seems sensible, no?<\/p>\n<p>The current paradigm of &#8220;AI&#8221; encodes and recapitulates all of these things, but previous technosocial paradigms did too, and if these facts had been 
addressed <em><strong>back then<\/strong><\/em>, in the culture of technology specifically and our sociotechnical culture writ large, then it might not still be like that, today.<\/p>\n<p>But it also doesn&#8217;t have to stay like this. It genuinely does not.<\/p>\n<p>We can make these tools differently. We can train people earlier and more consistently to understand the current models of &#8220;AI,&#8221; reframing notions of &#8220;AI Literacy&#8221; away from &#8220;how to use it&#8221; and toward an understanding of how they function and what they actually can and cannot do. We can make it clear that what they produce is not truth, not facts, not even lies, but always bullshit, even when they seem to conform to factual reality. We can train people\u2014 <a href=\"https:\/\/afutureworththinkingabout.com\/?p=5899\">students<\/a>, yes, but also professionals, educators, and wider communities\u2014 to understand how bias confirmation and <a href=\"https:\/\/www.americanscientist.org\/article\/bias-optimizers\">optimization<\/a> work, how propaganda, <a href=\"https:\/\/muse.jhu.edu\/article\/916425\">marketing<\/a>, and psychological manipulation work.<\/p>\n<p>The more people learn about <a href=\"https:\/\/link.springer.com\/book\/10.1007\/978-3-032-02665-1\">what these systems do<\/a>, what they&#8217;re built from, how they&#8217;re trained, and the quite frankly alarming amount of water and energy it has taken and is projected to take to develop and maintain them, the more those same people resist the force and coercion that corporations and even universities and governments think pass for transparent, informed, meaningful consent.<\/p>\n<p>Like\u2026 researchers are highlighting that the <a href=\"https:\/\/www.nature.com\/articles\/s41893-025-01681-y\">current trajectory<\/a> of &#8220;AI&#8221; energy and water use will not only undo several years of tech sector climate gains, but will also prevent corporations such as Google, Amazon, and Meta from meeting 
carbon-neutral and water-positive goals. And that&#8217;s without considering the infrastructural capture of those resources in the process of building said data centers, in the first place (the authors list this as being outside their scope); with that data, the picture is worse.<\/p>\n<p>As <a href=\"https:\/\/doi.org\/10.18130\/03df-zn30\">many have noted<\/a>, environmental impacts are among the major concerns of those who say that they are reticent to use or engage with all things &#8220;artificial intelligence&#8221;\u2014 even sparking <a href=\"https:\/\/www.decaturish.com\/business\/data-center-regulations-deferred-for-more-community-input\/article_b84db27c-fdb5-4c08-a806-b0dd67e480fb.html\">public outcry<\/a> across the <a href=\"https:\/\/www.wsbradio.com\/news\/local\/dekalb-county-residents-raise-concerns-officials-draft-rules-proposed-data-centers\/LMIC7RZCRJCBFFAGIF25TZQUHU\/?outputType=amp\">country<\/a>, with more people joining calls that any and all new &#8220;AI&#8221; training processes and data centers be built to run on <a href=\"https:\/\/afutureworththinkingabout.com\/?p=6406\">existing and expanded renewables<\/a>. 
We are increasingly finding the general public wants their neighbours and institutions to engage in meaningful consideration of how we might remediate or even prevent &#8220;AI&#8217;s&#8221; potential social, environmental, and individual intellectual harms.<\/p>\n<p>But, also increasingly, we find that institutional pushes\u2014 including the conclusions of the Nature article <em><strong>on<\/strong><\/em> energy use trends\u2014 tend toward an &#8220;adoption and dominance at all costs&#8221; model of &#8220;AI,&#8221; which in turn seem to be founded on the circular reasoning that &#8220;we have to use &#8216;AI&#8217; so that and because it will be useful.&#8221; Recurrent directives from the federal government like the <a href=\"https:\/\/arstechnica.com\/tech-policy\/2025\/12\/republicans-once-again-thwart-trumps-push-to-block-state-ai-laws\/\">threat to sue any state that regulates &#8220;AI,&#8221;<\/a> the &#8220;<a href=\"https:\/\/web.archive.org\/web\/20260117024807\/https:\/\/www.whitehouse.gov\/articles\/2025\/07\/white-house-unveils-americas-ai-action-plan\/\" target=\"_blank\" rel=\"noopener\" data-saferedirecturl=\"https:\/\/www.google.com\/url?q=https:\/\/www.whitehouse.gov\/articles\/2025\/07\/white-house-unveils-americas-ai-action-plan\/&amp;source=gmail&amp;ust=1765143243901000&amp;usg=AOvVaw0lWf6P8MrQ4cr3u9_5OTd_\">AI Action Plan<\/a>,&#8221; and the Executive Order on &#8220;<a href=\"https:\/\/web.archive.org\/web\/20260117024738\/https:\/\/www.whitehouse.gov\/presidential-actions\/2025\/07\/preventing-woke-ai-in-the-federal-government\/\" target=\"_blank\" rel=\"noopener\" data-saferedirecturl=\"https:\/\/www.google.com\/url?q=https:\/\/www.whitehouse.gov\/presidential-actions\/2025\/07\/preventing-woke-ai-in-the-federal-government\/&amp;source=gmail&amp;ust=1765143243901000&amp;usg=AOvVaw0VMT2qv-zUTFZOLo9AGVV0\">Preventing Woke AI In The Federal Government<\/a>&#8221; use terms such as &#8220;woke&#8221; and &#8220;ideological bias&#8221; 
explicitly to mean &#8220;DEI,&#8221; &#8220;CRT,&#8221; &#8220;transgenderism,&#8221; and even the basic philosophical and sociological concept of intersectionality. Even the very idea of &#8220;Criticality&#8221; is increasingly conflated with mere &#8220;negativity,&#8221; rather than investigation, analysis, and understanding, and <a href=\"https:\/\/www.wired.com\/story\/inside-the-biden-administrations-unpublished-report-on-ai-safety\/\" target=\"_blank\" rel=\"noopener\" data-saferedirecturl=\"https:\/\/www.google.com\/url?q=https:\/\/www.wired.com\/story\/inside-the-biden-administrations-unpublished-report-on-ai-safety\/&amp;source=gmail&amp;ust=1765143243901000&amp;usg=AOvVaw3V5YIfPyAIagV7dUYs_od2\">standards-setting bodies&#8217; recommendations are shelved<\/a> before they see the light of day.<\/p>\n<p>All this even as what more and more people say they want and <em><strong>need<\/strong> <\/em>are processes which depend on and develop nuanced criticality\u2014 which allow and help them to figure out how to question when, how, and perhaps most crucially <em><strong>whether<\/strong><\/em> we should make and use &#8220;AI&#8221; tools, at all. 
Educators, both as <a href=\"https:\/\/openletter.earth\/open-letter-stop-the-uncritical-adoption-of-ai-technologies-in-academia-b65bba1e\" target=\"_blank\" rel=\"noopener\" data-saferedirecturl=\"https:\/\/www.google.com\/url?q=https:\/\/openletter.earth\/open-letter-stop-the-uncritical-adoption-of-ai-technologies-in-academia-b65bba1e&amp;source=gmail&amp;ust=1765143243901000&amp;usg=AOvVaw3Z9OD4ox8vJPK4j5nhg_pB\">individuals<\/a> and in <a href=\"https:\/\/www.aaup.org\/reports-publications\/aaup-policies-reports\/topical-reports\/artificial-intelligence-and-academic\" target=\"_blank\" rel=\"noopener\" data-saferedirecturl=\"https:\/\/www.google.com\/url?q=https:\/\/www.aaup.org\/reports-publications\/aaup-policies-reports\/topical-reports\/artificial-intelligence-and-academic&amp;source=gmail&amp;ust=1765143243901000&amp;usg=AOvVaw0982s4fcLaYqaju1P5E6Q7\">various professional associations<\/a>, seem to increasingly disapprove of the uncritical adoption of these same models and systems. And so far roughly 140 technology-related organizations have <a href=\"https:\/\/peoplesaiaction.com\/\" target=\"_blank\" rel=\"noopener\" data-saferedirecturl=\"https:\/\/www.google.com\/url?q=https:\/\/peoplesaiaction.com\/&amp;source=gmail&amp;ust=1765143243901000&amp;usg=AOvVaw0o1v9i3WC79tK6Wg427gxp\">joined a call<\/a> for a people- rather than business-centric model of AI development.<\/p>\n<p>Nothing about this current paradigm of &#8220;AI&#8221; is either inevitable or necessary. 
We can push for increased rather than decreased local, state, and national regulatory scrutiny and standards, and prioritize the development of standards, frameworks, and recommendations designed to prevent and repair the harms of &#8220;generative AI.&#8221; Working together, we can develop new paradigms of &#8220;AI&#8221; systems which are inherently integrated with and founded on different principles, like meaningful consent, sustainability, and deep understandings of the bias and harm that can arise in &#8220;AI,&#8221; even down to the sourcing and framing of training data.<\/p>\n<p>Again: Change can be made, here. When we engage as many people as possible, right at the point of their increasing resistance, in language and concepts which reflect their motivating values, we can gain ground towards new ways of building &#8220;AI&#8221; and other technologies.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>So, new research shows that a) LLM-type &#8220;AI&#8221; chatbots are extremely persuasive and able to get voters to shift their positions, and that b) the more effective they are at that, the less they hew to factual reality. Which: Yeah. A bunch of us told you this. 
Again: the Purpose of LLM- type &#8220;AI&#8221; is [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[1],"tags":[8,1118,1438,967,1081,1300,959,1109,73,1198,1547,101,1115,1551,1021,138,1129,190,1398,1554,1525,1543,1116,1117,1519,1350,1210,1556,1553,1555,1552,1075,1527,278,1111,1411,1529,1604,294,1131,1502,1002,1503,1545,1134,1112,1159,418,419,1557,1470,1157,1504,1544,493,1530,1531,1558,1518,1133,560,561,562,1207,942,1542,1548,978,624,1026,655,1161,678,684,1528,1229,1230,1132,1211,1235,1362,1124,1505,1233,960,1149,801,807,1030,811,1277,1135,1158,1597,1559],"class_list":["post-6422","post","type-post","status-publish","format-standard","hentry","category-uncategorized","tag-a-future-worth-thinking-about","tag-ableism","tag-actor-network-theory","tag-ai","tag-algorithmic-bias","tag-algorithmic-justice","tag-algorithmic-systems","tag-amazon","tag-artificial-intelligence","tag-audio","tag-authoritarianism","tag-bias","tag-biomedical-ethics","tag-biopolitics","tag-biotech-ethics","tag-bullshit","tag-compassion","tag-consciousness","tag-damien-patrick-williams","tag-data-centers","tag-data-science","tag-democracy","tag-disability","tag-disability-studies","tag-disinformation","tag-education","tag-embodied-cognition","tag-environment","tag-environmental-ethics","tag-environmental-impacts","tag-environmental-justice","tag-epistemology",
"tag-equity","tag-ethics","tag-facebook","tag-facial-recognition","tag-fairness","tag-fascism","tag-feminist-ethics","tag-gender","tag-generative-pre-trained-transformer","tag-google","tag-gpt","tag-harry-frankfurt","tag-homophobia","tag-implicit-bias","tag-intersubjectivity","tag-invisible-architecture-of-bias","tag-invisible-architectures-of-bias","tag-jess-reia","tag-justice","tag-knowledge","tag-large-language-models","tag-llm","tag-machine-ethics","tag-marginalization","tag-marginalized-lived-experiences","tag-mc-forelle","tag-misinformation","tag-misogyny","tag-my-words","tag-my-work","tag-my-writing","tag-neurodiversity","tag-neuroethics","tag-openai","tag-pedagogy","tag-personhood-rights","tag-phenomenology","tag-philosophy-of-technology","tag-prejudice","tag-propaganda","tag-race","tag-racism","tag-responsibility","tag-science-and-technology-studies","tag-science-technology-and-society","tag-sexism","tag-social-cognition","tag-social-construction-of-science","tag-social-construction-of-technology","tag-social-dynamics","tag-social-shaping-of-technology","tag-sts","tag-surveillance-culture","tag-systemic-disparity","tag-systems","tag-teaching","tag-technological-ethics","tag-technology","tag-technoscience","tag-transphobia","tag-values","tag-will-straw","tag-yingchong-wang"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p5WByP-1FA","jetpack_likes_enabled":true,"jetpack-related-posts":[{"id":5899,"url":"https:\/\/afutureworththinkingabout.com\/?p=5899","url_meta":{"origin":6422,"position":0},"title":"ChatGPT is Actively Marketing to Students During University Finals Season","author":"Damien P. Williams","date":"April 4, 2025","format":false,"excerpt":"It's really disheartening and honestly kind of telling that in spite of everything, ChatGPT is actively marketing itself to students in the run-up to college finals season. 
We've talked many (many) times before about the kinds of harm that can come from giving over too much epistemic and heuristic authority\u2026","rel":"","context":"In \"A Future Worth Thinking About\"","block_context":{"text":"A Future Worth Thinking About","link":"https:\/\/afutureworththinkingabout.com\/?tag=a-future-worth-thinking-about"},"img":{"alt_text":"Screenshot of ChatpGPT page:ChaptGPT Promo: 2 months free for students ChatGPT Plus is now free for college students through May Offer valid for students in the US and Canada [Buttons reading \"Claim offer\" and \"learn more\" An image of a pencil scrawling a scribbly and looping line] ChatGPT Plus is here to help you through finals","src":"https:\/\/cdn.bsky.app\/img\/feed_fullsize\/plain\/did:plc:ybkylffhwhn2an2ic2lxh76k\/bafkreidh6mhffosfxhbgnxx6aybjycvgj3c2ygzto2xhzvsohdsv3g6evm@jpeg","width":350,"height":200,"srcset":"https:\/\/cdn.bsky.app\/img\/feed_fullsize\/plain\/did:plc:ybkylffhwhn2an2ic2lxh76k\/bafkreidh6mhffosfxhbgnxx6aybjycvgj3c2ygzto2xhzvsohdsv3g6evm@jpeg 1x, https:\/\/cdn.bsky.app\/img\/feed_fullsize\/plain\/did:plc:ybkylffhwhn2an2ic2lxh76k\/bafkreidh6mhffosfxhbgnxx6aybjycvgj3c2ygzto2xhzvsohdsv3g6evm@jpeg 1.5x, https:\/\/cdn.bsky.app\/img\/feed_fullsize\/plain\/did:plc:ybkylffhwhn2an2ic2lxh76k\/bafkreidh6mhffosfxhbgnxx6aybjycvgj3c2ygzto2xhzvsohdsv3g6evm@jpeg 2x"},"classes":[]},{"id":5082,"url":"https:\/\/afutureworththinkingabout.com\/?p=5082","url_meta":{"origin":6422,"position":1},"title":"From WIRED: &#8220;Tech Giants Team Up to Keep AI From Getting Out of Hand&#8221;","author":"Damien P. 
Williams","date":"September 28, 2016","format":false,"excerpt":"I spoke with Klint Finley over at WIRED about Amazon, Facebook, Google, IBM, and Microsoft's new joint ethics and oversight venture, which they've dubbed the \"Partnership on Artificial Intelligence to Benefit People and Society.\" They held a joint press briefing, today, in which Yann LeCun, Facebook's director of AI, and\u2026","rel":"","context":"In \"A Future Worth Thinking About\"","block_context":{"text":"A Future Worth Thinking About","link":"https:\/\/afutureworththinkingabout.com\/?tag=a-future-worth-thinking-about"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":5227,"url":"https:\/\/afutureworththinkingabout.com\/?p=5227","url_meta":{"origin":6422,"position":2},"title":"Appearance on the You Are Not So Smart Podcast","author":"Damien P. Williams","date":"December 4, 2017","format":false,"excerpt":"A few weeks ago I had a conversation with David McRaney of the You Are Not So Smart podcast, for his episode on Machine Bias. As he says on the blog: Now that algorithms are everywhere, helping us to both run and make sense of the world, a strange question\u2026","rel":"","context":"In \"A Future Worth Thinking About\"","block_context":{"text":"A Future Worth Thinking About","link":"https:\/\/afutureworththinkingabout.com\/?tag=a-future-worth-thinking-about"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":5375,"url":"https:\/\/afutureworththinkingabout.com\/?p=5375","url_meta":{"origin":6422,"position":3},"title":"2017 SRI Technology and Consciousness Workshop Series Final Report","author":"Damien P. Williams","date":"March 8, 2019","format":false,"excerpt":"So, as you know, back in the summer of 2017 I participated in SRI International\u2019s Technology and Consciousness Workshop Series. 
This series was an eight week program of workshops the current state of the field around, the potential future paths toward, and the moral and social implications of the notion\u2026","rel":"","context":"In \"A Future Worth Thinking About\"","block_context":{"text":"A Future Worth Thinking About","link":"https:\/\/afutureworththinkingabout.com\/?tag=a-future-worth-thinking-about"},"img":{"alt_text":"Image of a rectangular name card with a stylized \"Technology & Consciousness\" logo, at the top, the name Damien Williams in bold in the middle, and SRI International italicized at the bottom; to the right a blurry wavy image of what appears to be a tree with a person standing next to it and another tree in the background to the left., all partially mirrored in a surface at the bottom of the image.","src":"https:\/\/i0.wp.com\/afutureworththinkingabout.com\/wp-content\/uploads\/2019\/03\/20190308_021228.jpg?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/afutureworththinkingabout.com\/wp-content\/uploads\/2019\/03\/20190308_021228.jpg?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/afutureworththinkingabout.com\/wp-content\/uploads\/2019\/03\/20190308_021228.jpg?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/afutureworththinkingabout.com\/wp-content\/uploads\/2019\/03\/20190308_021228.jpg?resize=700%2C400&ssl=1 2x, https:\/\/i0.wp.com\/afutureworththinkingabout.com\/wp-content\/uploads\/2019\/03\/20190308_021228.jpg?resize=1050%2C600&ssl=1 3x, https:\/\/i0.wp.com\/afutureworththinkingabout.com\/wp-content\/uploads\/2019\/03\/20190308_021228.jpg?resize=1400%2C800&ssl=1 4x"},"classes":[]},{"id":5316,"url":"https:\/\/afutureworththinkingabout.com\/?p=5316","url_meta":{"origin":6422,"position":4},"title":"My Appearance on The Machine Ethics Podcast&#8217;s A.I. Retreat Episode","author":"Damien P. Williams","date":"October 23, 2018","format":false,"excerpt":"As you already know, we went to the second Juvet A.I. Retreat, back in September. 
If you want to hear several of us talk about what we got up to at the then you're in luck because here are several conversations conducted by Ben Byford of the Machine Ethics Podcast.\u2026","rel":"","context":"In \"algorithmic bias\"","block_context":{"text":"algorithmic bias","link":"https:\/\/afutureworththinkingabout.com\/?tag=algorithmic-bias"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/img.youtube.com\/vi\/ownE2zxTN2U\/0.jpg?resize=350%2C200","width":350,"height":200},"classes":[]},{"id":5249,"url":"https:\/\/afutureworththinkingabout.com\/?p=5249","url_meta":{"origin":6422,"position":5},"title":"&#8220;We Built Them From Us&#8221;: My Appearance on the TEAM HUMAN Podcast","author":"Damien P. Williams","date":"February 22, 2018","format":false,"excerpt":"Earlier this month I was honoured to have the opportunity to sit and talk to Douglas Rushkoff on his TEAM HUMAN podcast. If you know me at all, you know this isn't by any means the only team for which I play, or even the only way I think about\u2026","rel":"","context":"In \"algorithmic bias\"","block_context":{"text":"algorithmic 
bias","link":"https:\/\/afutureworththinkingabout.com\/?tag=algorithmic-bias"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]}],"_links":{"self":[{"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/posts\/6422","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=6422"}],"version-history":[{"count":10,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/posts\/6422\/revisions"}],"predecessor-version":[{"id":6442,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/posts\/6422\/revisions\/6442"}],"wp:attachment":[{"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=6422"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=6422"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=6422"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}