{"id":6406,"date":"2025-05-21T22:04:24","date_gmt":"2025-05-22T02:04:24","guid":{"rendered":"https:\/\/afutureworththinkingabout.com\/?p=6406"},"modified":"2025-05-23T12:01:03","modified_gmt":"2025-05-23T16:01:03","slug":"reimagining-ais-environmental-and-sociotechnical-materialities","status":"publish","type":"post","link":"https:\/\/afutureworththinkingabout.com\/?p=6406","title":{"rendered":"Reimagining &#8220;AI&#8217;s&#8221; Environmental and Sociotechnical Materialities"},"content":{"rendered":"<p>There&#8217;s a new open-access book of collected essays called <a href=\"https:\/\/libraopen.lib.virginia.edu\/public_view\/3n203z326\"><em>Reimagining AI for Environmental Justice and Creativity<\/em><\/a>, and I happen to have an essay in it. The collection is made of contributions from participants in the October 2024 &#8220;Reimagining AI for Environmental Justice and Creativity&#8221; panels and workshops put on by Jess Reia, MC Forelle, and Yingchong Wang, and I&#8217;ve included my essay here, for you. That said, I highly recommend checking out the rest of the book, because all the contributions are fantastic.<\/p>\n<p>This work was co-sponsored by: The Karsh Institute Digital Technology for Democracy Lab, The Environmental Institute, and The School of Data Science, all at UVA. The videos for both days of the &#8220;Reimagining AI for Environmental Justice and Creativity&#8221; talks are now available, and you can find them <a href=\"https:\/\/karshinstitute.virginia.edu\/events\/reimagining-ai-environmental-justice-and-creativity\">at the Karsh Institute website<\/a>, and also below, before the text of my essay.<\/p>\n<p>All in all, I think these are some really great conversations on &#8220;AI&#8221; and environmental justice. 
They cover &#8220;AI&#8221;&#8217;s extremely material practical aspects, the deeply philosophical aspects, and the necessary and fundamental connections between the two, and these are crucial discussions to be having, especially right now.<\/p>\n<p>Hope you dig it.<\/p>\n<p><iframe loading=\"lazy\" title=\"Reimagining AI for Environmental Justice and Creativity: Part II\" width=\"627\" height=\"353\" src=\"https:\/\/www.youtube.com\/embed\/vgYOWd5o54g?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<p><iframe loading=\"lazy\" title=\"Reimagining AI for Environmental Justice and Creativity- Part I\" width=\"627\" height=\"353\" src=\"https:\/\/www.youtube.com\/embed\/Pqhj8PGfK30?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/p>\n<p><!--more--><\/p>\n<p style=\"text-align: center;\"><strong>Reimagining \u201cAI\u2019s\u201d Environmental and Sociotechnical Materialities<br \/>\n<\/strong><em>Damien P. Williams<br \/>\n<\/em>UNC Charlotte<\/p>\n<p>There are numerous assumptions bundled into the current thinking around what \u201cartificial intelligence\u201d does and is, and around whether we should even be using it and, if so, how. Those pushing \u201cAI\u201d adoption tend to presuppose it necessarily will be good for something\u2014 that it will be useful and solve some problem\u2014 without ever defining exactly what that problem might be. 
Often, we see that there are these pushes towards paradigms of efficiency and ease of work and \u201crote\u201d tasks being taken off our hands without anyone ever asking the fundamental follow-up question of \u201c\u2026okay but does it actually do any of that?\u201d Relatedly, it&#8217;s often assumed that \u201cartificial intelligence\u201d will become or will make other things \u201cbetter\u201d in some nebulous way if only we just keep pushing, just keep building, just keep moving towards the next model of it. If we keep doing that, then eventually, we&#8217;re assured, \u201cin just ten years,\u201d \u201cAI\u201d will turn into the version of itself that will solve all our problems. But this notion that in ten years, \u201cAI\u201d will be embedded in everything and will be inescapable and perfect is something we&#8217;ve been hearing for the past 50 years.<\/p>\n<p>This recurrent technosocial paradigm of \u201cAI Summer\u201d and \u201cAI Winter\u201d exists for a reason; these hype-cycles pushing towards automation, neural nets, big data, or algorithms over and over again represent externalities which must be addressed in a deeper way through questions like, \u201cWhat are the values of the people who push &#8216;AI&#8217;s&#8217; &#8216;inevitability,&#8217; and what are their actual goals?\u201d Because, while people might think they mean the same things when they say \u201cAI,\u201d or are indicating the same kinds of needs to be met, in truth, we&#8217;re very often talking past each other. Without a clear understanding of what it is we each and all actually think of as the \u201cgood\u201d of \u201cAI\u201d technology\u2014 without confronting that question in a very direct and intentional way\u2014 different groups will just keep pushing in different directions, and whoever has the predominant access to and control over the levers of power wins the right to define the problems that \u201cAI\u201d seeks to address. 
But in many cases, those are problems they and their vision of \u201cAI\u201d helped to create.<\/p>\n<p>Current estimates hold that water consumption increased ~34% in areas where Microsoft and Google placed datacenters for search and \u201cAI,\u201d and that every email&#8217;s worth of text you have an LLM \u201cAI\u201d write consumes a pint of water. Put another way, imagine if every time you composed 150 of your own words, you had to just take out a 16 oz water bottle, fill it up, and dump it in the trash. We&#8217;re not just talking about water for cooling servers, either. In thermal power plants, you need water to turn into steam to run turbines, and then to cool the systems which do that, as well. So the more energy needed, the more water used in production and cooling. And while many highlight that some systems only use this water once and then release it, even that is a process and a period of capturing that water, both removing the water from use, and potentially trapping and killing organisms living in it. Additionally, the water returned after the \u201conce through\u201d process has a significantly higher temperature than when it started. It should be said that the numbers in this discussion are estimates based on known figures for chip performance, electricity production, and whatever data&#8217;s been wrenched from \u201cAI\u201d corporations. They\u2019re estimated because these companies do not release their <strong><em>actual<\/em><\/strong> resource consumption numbers.<\/p>\n<p>Further, the data centers that support \u201cAI\u201d are oftentimes built in communities that are already resource scarce, and pulling water from or putting emissions into these communities ensures that \u201cAI&#8217;s\u201d harms are necessarily disproportionately enacted on the people who can least afford <strong><em>to <\/em><\/strong>bear them. 
Rather than rulemakers just paying lip-service to people\u2019s grievances, logging them in a repository somewhere, and making whatever rules they intended to make to begin with, both the creation and regulation of \u201cAI\u201d must be directed by those whom it&#8217;s most likely to harm. But while marginalized communities absolutely must have meaningful input when it comes to technologies which will be wielded against them, there also has to be a centralized response in the form of some standard-setting body. And, recursively, that standard-setting body will have to be meaningfully responsive to the needs of those most likely to be harmed if said regulations and standards go wrong.<\/p>\n<p>And so, we have to ask our questions: Who is most harmed by current uses of \u201cAI\u201d? What does the energy footprint of a data center <strong><em>actually<\/em><\/strong> look like? How much water and fossil fuel does it take to run \u201cAI&#8217;s\u201d servers and their computations? What are their carbon and waste heat emissions? Because the more we dig down on this, the more we truly confront the next questions: Should we be doing \u201cAI\u201d differently? What would it take to build \u201cAI\u201d in a different way? What would it take to <strong><em>power<\/em><\/strong> \u201cAI\u201d in a truly renewable way? And what and whom do we even want \u201cAI\u201d to be for? If it helps, you can try to think of it as a game:<\/p>\n<p>First major \u201cAI\u201d firm to use only renewable energy sources, an open source and radical consent model for the collection and use of training data, and a community partnership regulatory process which centers and heeds the needs of the most marginalized, wins.<\/p>\n<hr \/>\n<p style=\"text-align: center;\"><span style=\"text-decoration: underline;\"><i>Suggested Citation:<\/i><\/span><br \/>\nWilliams, Damien P. 
\u201cReimagining &#8216;AI\u2019s&#8217; Environmental and Sociotechnical Materialities,\u201d appearing in <em>Reimagining AI for Environmental Justice and Creativity<\/em>, Reia, J., Forelle, MC and Wang, Y. eds. Digital Technology for Democracy Lab, University of Virginia. 2025. <a href=\"https:\/\/doi.org\/10.18130\/03df-zn30\">https:\/\/doi.org\/10.18130\/03df-zn30<\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>There&#8217;s a new open-access book of collected essays called Reimagining AI for Environmental Justice and Creativity, and I happen to have an essay in it. The collection is made of contributions from participants in the October 2024 &#8220;Reimagining AI for Environmental Justice and Creativity&#8221; panels and workshops put on by Jess Reia, MC Forelle, and [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[1],"tags":[8,1585,1118,1560,967,1081,1300,959,1599,1109,1596,1591,1588,1590,1594,73,1547,101,1115,1551,1021,1562,1592,1603,1564,1583,1565,1602,1398,1580,1554,1525,1543,1572,1116,1117,1350,1568,1556,1553,1555,1552,1527,278,1111,1411,1529,294,304,1131,1502,1002,1503,1545,1134,1112,1159,418,419,1557,1593,1569,1566,1577,1470,1600,1157,1578,1504,1563,1544,493,1530,1531,1582,1598,1586,1558,1601,1573,1518,1595,560,561,562,1542,1575,1548,1570,1589,1026,655,1161,678,1579,684,1561,1584,1528,1229,1230,1571,
1587,1581,1211,1235,1362,1124,1505,1574,1233,960,1149,801,1576,807,1030,811,1277,1158,1597,1567,1559],"class_list":["post-6406","post","type-post","status-publish","format-standard","hentry","category-uncategorized","tag-a-future-worth-thinking-about","tag-aaron-martin","tag-ableism","tag-ahmed-alrawi","tag-ai","tag-algorithmic-bias","tag-algorithmic-justice","tag-algorithmic-systems","tag-amanda-wyatt-visconti","tag-amazon","tag-andre-sobral","tag-andrea-roberts","tag-andrew-mondschein","tag-anne-pasek","tag-anuti-shah","tag-artificial-intelligence","tag-authoritarianism","tag-bias","tag-biomedical-ethics","tag-biopolitics","tag-biotech-ethics","tag-blair-attard-frost","tag-bryn-seabrook","tag-caitlin-wylie","tag-celia-calhoun","tag-christine-mahoney","tag-coleen-carrigan","tag-damien-p-williams","tag-damien-patrick-williams","tag-danila-longo","tag-data-centers","tag-data-science","tag-democracy","tag-desiree-ho","tag-disability","tag-disability-studies","tag-education","tag-ella-duus","tag-environment","tag-environmental-ethics","tag-environmental-impacts","tag-environmental-justice","tag-equity","tag-ethics","tag-facebook","tag-facial-recognition","tag-fairness","tag-feminist-ethics","tag-foucault","tag-gender","tag-generative-pre-trained-transformer","tag-google","tag-gpt","tag-harry-frankfurt","tag-homophobia","tag-implicit-bias","tag-intersubjectivity","tag-invisible-architecture-of-bias","tag-invisible-architectures-of-bias","tag-jess-reia","tag-jessica-sewell","tag-jonah-fogel","tag-jonathan-colmer","tag-jonathan-kropko","tag-justice","tag-keren-weitzberg","tag-knowledge","tag-kyrill-kunakhovich","tag-large-language-models","tag-lauren-e-bridges","tag-llm","tag-machine-ethics","tag-marginalization","tag-marginalized-lived-experiences","tag-maria-lungu","tag-maria-villanueva","tag-martina-massari","tag-mc-forelle","tag-megan-wiessner","tag-mehan-jayasuriya","tag-misinformation","tag-mona-sloane","tag-my-words","tag-my-work","tag-my-writing","tag-openai","tag
-owen-kitzmann","tag-pedagogy","tag-pedro-augusto-p-francisco","tag-peter-norton","tag-philosophy-of-technology","tag-prejudice","tag-propaganda","tag-race","tag-rachel-leach","tag-racism","tag-rafael-alvarado","tag-raheem-manning","tag-responsibility","tag-science-and-technology-studies","tag-science-technology-and-society","tag-sergio-guillen-grillo","tag-shalini-misra","tag-siobhan-loughney","tag-social-cognition","tag-social-construction-of-science","tag-social-construction-of-technology","tag-social-dynamics","tag-social-shaping-of-technology","tag-steven-l-johnson","tag-sts","tag-surveillance-culture","tag-systemic-disparity","tag-systems","tag-tamara-kneese","tag-teaching","tag-technological-ethics","tag-technology","tag-technoscience","tag-values","tag-will-straw","tag-yasmin-curzi","tag-yingchong-wang"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p5WByP-1Fk","jetpack_likes_enabled":true,"jetpack-related-posts":[{"id":5899,"url":"https:\/\/afutureworththinkingabout.com\/?p=5899","url_meta":{"origin":6406,"position":0},"title":"ChatGPT is Actively Marketing to Students During University Finals Season","author":"Damien P. Williams","date":"April 4, 2025","format":false,"excerpt":"It's really disheartening and honestly kind of telling that in spite of everything, ChatGPT is actively marketing itself to students in the run-up to college finals season. 
We've talked many (many) times before about the kinds of harm that can come from giving over too much epistemic and heuristic authority\u2026","rel":"","context":"In \"A Future Worth Thinking About\"","block_context":{"text":"A Future Worth Thinking About","link":"https:\/\/afutureworththinkingabout.com\/?tag=a-future-worth-thinking-about"},"img":{"alt_text":"Screenshot of ChatpGPT page:ChaptGPT Promo: 2 months free for students ChatGPT Plus is now free for college students through May Offer valid for students in the US and Canada [Buttons reading \"Claim offer\" and \"learn more\" An image of a pencil scrawling a scribbly and looping line] ChatGPT Plus is here to help you through finals","src":"https:\/\/cdn.bsky.app\/img\/feed_fullsize\/plain\/did:plc:ybkylffhwhn2an2ic2lxh76k\/bafkreidh6mhffosfxhbgnxx6aybjycvgj3c2ygzto2xhzvsohdsv3g6evm@jpeg","width":350,"height":200,"srcset":"https:\/\/cdn.bsky.app\/img\/feed_fullsize\/plain\/did:plc:ybkylffhwhn2an2ic2lxh76k\/bafkreidh6mhffosfxhbgnxx6aybjycvgj3c2ygzto2xhzvsohdsv3g6evm@jpeg 1x, https:\/\/cdn.bsky.app\/img\/feed_fullsize\/plain\/did:plc:ybkylffhwhn2an2ic2lxh76k\/bafkreidh6mhffosfxhbgnxx6aybjycvgj3c2ygzto2xhzvsohdsv3g6evm@jpeg 1.5x, https:\/\/cdn.bsky.app\/img\/feed_fullsize\/plain\/did:plc:ybkylffhwhn2an2ic2lxh76k\/bafkreidh6mhffosfxhbgnxx6aybjycvgj3c2ygzto2xhzvsohdsv3g6evm@jpeg 2x"},"classes":[]},{"id":5316,"url":"https:\/\/afutureworththinkingabout.com\/?p=5316","url_meta":{"origin":6406,"position":1},"title":"My Appearance on The Machine Ethics Podcast&#8217;s A.I. Retreat Episode","author":"Damien P. Williams","date":"October 23, 2018","format":false,"excerpt":"As you already know, we went to the second Juvet A.I. Retreat, back in September. 
If you want to hear several of us talk about what we got up to at the then you're in luck because here are several conversations conducted by Ben Byford of the Machine Ethics Podcast.\u2026","rel":"","context":"In \"algorithmic bias\"","block_context":{"text":"algorithmic bias","link":"https:\/\/afutureworththinkingabout.com\/?tag=algorithmic-bias"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/img.youtube.com\/vi\/ownE2zxTN2U\/0.jpg?resize=350%2C200","width":350,"height":200},"classes":[]},{"id":5249,"url":"https:\/\/afutureworththinkingabout.com\/?p=5249","url_meta":{"origin":6406,"position":2},"title":"&#8220;We Built Them From Us&#8221;: My Appearance on the TEAM HUMAN Podcast","author":"Damien P. Williams","date":"February 22, 2018","format":false,"excerpt":"Earlier this month I was honoured to have the opportunity to sit and talk to Douglas Rushkoff on his TEAM HUMAN podcast. If you know me at all, you know this isn't by any means the only team for which I play, or even the only way I think about\u2026","rel":"","context":"In \"algorithmic bias\"","block_context":{"text":"algorithmic bias","link":"https:\/\/afutureworththinkingabout.com\/?tag=algorithmic-bias"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":5227,"url":"https:\/\/afutureworththinkingabout.com\/?p=5227","url_meta":{"origin":6406,"position":3},"title":"Appearance on the You Are Not So Smart Podcast","author":"Damien P. Williams","date":"December 4, 2017","format":false,"excerpt":"A few weeks ago I had a conversation with David McRaney of the You Are Not So Smart podcast, for his episode on Machine Bias. 
As he says on the blog: Now that algorithms are everywhere, helping us to both run and make sense of the world, a strange question\u2026","rel":"","context":"In \"A Future Worth Thinking About\"","block_context":{"text":"A Future Worth Thinking About","link":"https:\/\/afutureworththinkingabout.com\/?tag=a-future-worth-thinking-about"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":5269,"url":"https:\/\/afutureworththinkingabout.com\/?p=5269","url_meta":{"origin":6406,"position":4},"title":"My Review of Shannon Vallor&#8217;s TECHNOLOGY AND THE VIRTUES","author":"Damien P. Williams","date":"May 10, 2018","format":false,"excerpt":"My piece \"Cultivating Technomoral Interrelations,\" a review of\u00a0Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting has been up over at the Social Epistemology Research and Reply Collective for a few months, now, so I figured I should post something about it, here. As you'll read, I\u2026","rel":"","context":"In \"A Future Worth Thinking About\"","block_context":{"text":"A Future Worth Thinking About","link":"https:\/\/afutureworththinkingabout.com\/?tag=a-future-worth-thinking-about"},"img":{"alt_text":"","src":"https:\/\/socialepistemologydotcom.files.wordpress.com\/2018\/02\/shannon-vallor-technology-virtues-cover.jpg?w=350&h=200&crop=1","width":350,"height":200},"classes":[]},{"id":5082,"url":"https:\/\/afutureworththinkingabout.com\/?p=5082","url_meta":{"origin":6406,"position":5},"title":"From WIRED: &#8220;Tech Giants Team Up to Keep AI From Getting Out of Hand&#8221;","author":"Damien P. 
Williams","date":"September 28, 2016","format":false,"excerpt":"I spoke with Klint Finley over at WIRED about Amazon, Facebook, Google, IBM, and Microsoft's new joint ethics and oversight venture, which they've dubbed the \"Partnership on Artificial Intelligence to Benefit People and Society.\" They held a joint press briefing, today, in which Yann LeCun, Facebook's director of AI, and\u2026","rel":"","context":"In \"A Future Worth Thinking About\"","block_context":{"text":"A Future Worth Thinking About","link":"https:\/\/afutureworththinkingabout.com\/?tag=a-future-worth-thinking-about"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]}],"_links":{"self":[{"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/posts\/6406","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=6406"}],"version-history":[{"count":3,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/posts\/6406\/revisions"}],"predecessor-version":[{"id":6409,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=\/wp\/v2\/posts\/6406\/revisions\/6409"}],"wp:attachment":[{"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=6406"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=6406"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/afutureworththinkingabout.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=6406"}],"curies":[{"name":"wp"
,"href":"https:\/\/api.w.org\/{rel}","templated":true}]}}