I know I’ve said this before, but since we’re going to be hearing more and more about Elon Musk and his “Anti-Woke” “A.I.” “Truth GPT” in the coming days and weeks, let’s go ahead and get some things out on the table:
All technology is political. All created artifacts are rife with values. There is no neutral tech. And there never, ever has been.
I keep trying to tell you that the political right understands this when it suits them, when they can weaponize it (and they are very, very good at weaponizing it), but people keep missing the point. So let me say it again, in a somewhat different way:
There is no ground of pure objectivity. There is no god’s-eye view.
There is no purely objective thing. Pretending there is only serves to create the conditions in which the worst people can play “gotcha” anytime they can clearly point to their enemies doing what we are literally all doing ALL THE TIME: Creating meaning and knowledge out of what we value, together.
There is no God-Trick. There is enmeshed, entangled, messy, relational, intersubjective perspective, and what we can pool and make together from what we can perceive from where we are.
And there are the tools and systems that we can make from within those understandings.
So say you know your training data is prejudicially biased (and if your training data is the internet, then boy oh dang is it ever), and you not only do nothing to bracket and counterweight against those prejudices, but in fact intentionally build your system to amplify them. Well, that seems… bad. It seems like you want the prejudicial biases in your training data, and you want your system to operationalize and deploy them. But you don’t have to take logic’s word for it: Musk said it himself, out loud, that he wants “A.I.” that doesn’t fight prejudice.
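And to be clear, “bracket and counterweight” isn’t mystical; it names concrete, well-known techniques. Here’s one minimal, purely illustrative sketch in Python (the toy corpus, group tags, and weighting scheme are my own invention, not any lab’s actual pipeline): inverse-frequency reweighting, so over-represented groups in a skewed corpus don’t simply drown out everyone else during training.

```python
# A hypothetical sketch of one "counterweighting" technique:
# inverse-frequency reweighting of training examples, so that
# over-represented groups in a skewed corpus don't dominate.
# All data and group names here are invented for illustration.
from collections import Counter

corpus = [
    ("example text a", "group_a"),
    ("example text b", "group_a"),
    ("example text c", "group_a"),
    ("example text d", "group_b"),
]

counts = Counter(group for _, group in corpus)
total = len(corpus)

# Weight each example inversely to its group's share of the corpus,
# so each group contributes equally in aggregate.
weights = [total / (len(counts) * counts[group]) for _, group in corpus]

for (text, group), w in zip(corpus, weights):
    print(f"{group}: weight {w:.2f}")  # group_a: 0.67, group_b: 2.00
```

Real mitigation is far more involved than this, of course, but the point stands: the levers exist, and choosing not to pull them is itself a choice.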
Again: The right is fully capable of understanding that human values and beliefs influence the technologies we make, just so long as they can use that fact to attack the idea of building or even trying to build those technologies with progressive values.
And that’s before we get into the fact that what OpenAI is doing is nowhere near “progressive” or “woke.” Their interventions are, quite frankly, very basic, reactionary, left-libertarian post hoc “fixes,” implemented to stem the tide of bad press that flooded in at the outset of its MSFT partnership.
Everything we make is filled with our values. GPT-type tools especially so. The public versions are fed, trained, and tuned on the firehose of the internet, and they reproduce statistically likely continuations of whatever they’ve been fed. They’re jam-packed with prejudicial bias and given few if any internal course-correction processes and parameters by which to truly and meaningfully (that is, over time, and with relational scaffolding) learn from their mistakes. Not just their factual mistakes, but the mistakes in the framing of their responses within the world.
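To make the “statistically likely continuations” point concrete, here’s a deliberately tiny, hypothetical sketch: a bigram model, nothing like a real GPT in scale or architecture, but the same underlying principle. The training text is invented, and deliberately skewed; notice that the sampler can only re-emit the associations present in that text.

```python
# A toy bigram "language model" (illustrative only; real GPTs are vastly
# larger, but the principle is the same): it can only reproduce the
# statistical associations present in its training text, skew included.
import random
from collections import defaultdict

# Invented, deliberately skewed training text.
corpus = ("the nurse said she is tired . the nurse said she is busy . "
          "the doctor said he is busy .").split()

# Count every word-to-next-word transition in the corpus.
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

def sample(start, length=6, seed=0):
    """Generate text by repeatedly picking a likely next word."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        out.append(random.choice(transitions[out[-1]]))
    return " ".join(out)

# "nurse" follows "the" twice as often as "doctor" does, and the samples
# reflect exactly that imbalance; nothing in the model pushes back.
print(sample("the"))
```

Nothing in that loop knows or cares whether an association is fair; it only knows what’s frequent. That’s the whole problem in miniature.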
Literally, if we’d heeded and understood all of this at the outset, GPTs and all other “A.I.” would be significantly less horrible, in terms of both how they were created in the first place and the ends toward which we think they ought to be put.
But this? What we have now? This is nightmare shit. And we need to change it, as soon as possible, before it can get any worse.