
So with the job of White House Office of Science and Technology Policy director having gone to Dr. Arati Prabhakar back in October, rather than Dr. Alondra Nelson, and the release of the “Blueprint for an AI Bill of Rights” (henceforth “BfaAIBoR” or “blueprint”) a few weeks after that, I am both very interested in and pretty worried about what direction research into “artificial intelligence” is actually going to take from here.

To be clear, my fundamental problem with the “Blueprint for an AI Bill of Rights” is that while it pays pretty fine lip-service to the ideas of community-led oversight, transparency, and the abolition of, and abstention from developing, certain tools, it begins with, and repeats throughout, the idea that sometimes law enforcement, the military, and the intelligence community might need to just… ignore these principles. Additionally, Dr. Prabhakar was director of DARPA for roughly five years, between 2012 and 2017, and considering what I know for a fact got funded within that window? Yeah.

To put a finer point on it, 14 out of 16 uses of the phrase “law enforcement” and 10 out of 11 uses of “national security” in this blueprint are in direct reference to why those entities’ or concept structures’ needs might have to supersede the recommendations of the BfaAIBoR itself. The blueprint also doesn’t mention the depredations of extant military “AI” at all. Instead, it points to the idea that the Department of Defense (DoD) “has adopted [AI] Ethical Principles, and tenets for Responsible Artificial Intelligence specifically tailored to its [national security and defense] activities.” And so with all of that being the case, there are several current “AI” projects in the pipeline which a blueprint like this wouldn’t cover, even if it ever became policy, and frankly that just fundamentally undercuts much of the real good a project like this could do.

For instance, at present, the DoD’s ethical frames are entirely about transparency, explainability, and some lip-service around equitability and “deliberate steps to minimize unintended bias in AI …” To understand a bit more of what I mean by this, here’s the DoD’s “Responsible Artificial Intelligence Strategy…” PDF (which is not natively searchable and which I had to OCR myself, so heads-up); and here’s the Office of the Director of National Intelligence’s “ethical principles” for building AI. Note that not once do they consider the moral status of the biases and values they have intentionally baked into their systems.

An "Explainable AI" diagram from DARPA, showing two flowcharts, one on top of the other. The top one is labeled "today" and has the top level condition "task" branching to both a confused looking human user and state called "learned function" which is determined by a previous state labeled "machine learning process" which is determined by a state labeled "training data." "Learned Function" feeds "Decision or Recommendation" to the human user, who has several questions about the model's beaviour, such as "why did you do that?" and "when can i trust you?" The bottom one is labeled "XAI" and has the top level condition "task" branching to both a happy and confident looking human user and state called "explainable model/explanation interface" which is determined by a previous state labeled "new machine learning process" which is determined by a state labeled "training data." "explainable model/explanation interface" feeds choices to the human user, who can feed responses BACK to the system, and who has several confident statements about the model's beaviour, such as "I understand why" and "I know when to trust you."

An “Explainable AI” diagram from DARPA


I’m Not Afraid of AI Overlords— I’m Afraid of Whoever’s Training Them To Think That Way

by Damien P. Williams

I want to let you in on a secret: According to Silicon Valley’s AIs, I’m not human.

Well, maybe they think I’m human, but they don’t think I’m me. Or, if they think I’m me and that I’m human, they think I don’t deserve expensive medical care. Or that I pose a higher risk of criminal recidivism. Or that my fidgeting behaviours or culturally-perpetuated shame about my living situation or my race mean I’m more likely to be cheating on a test. Or that I want to see morally repugnant posts that my friends have commented on to call morally repugnant. Or that I shouldn’t be given a home loan or a job interview or the benefits I need to stay alive.

Now, to be clear, “AI” is a misnomer, for several reasons, but we don’t have time, here, to really dig into all the thorny discussion of values and beliefs about what it means to think, or to be a mind— especially because we need to take our time talking about why values and beliefs matter to conversations about “AI,” at all. So instead of “AI,” let’s talk specifically about algorithms, and machine learning.

Machine Learning (ML) is the name for a set of techniques for systematically reinforcing patterns, expectations, and desired outcomes in various computer systems. These techniques allow those systems to make sought-after predictions based on the datasets they’re trained on. ML systems learn the patterns in these datasets and then extrapolate them to model a range of statistical likelihoods of future outcomes.
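(It’s easier to see that mechanic in code than in prose, so here is a deliberately toy sketch. The dataset, feature names, and numbers below are entirely invented, not drawn from any real system; the point is only that a model fit to historical examples will extrapolate whatever regularities, including biased ones, those examples contain.)

```python
# Toy sketch: a model reproduces the patterns in its training data.
# All data and feature names here are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Hypothetical historical "decisions":
# each row is [years_employed, prior_denials]; 1 = approved, 0 = denied.
# Notice the pattern baked into this data: prior denials track denial.
X_train = [
    [10, 0], [8, 0], [2, 0],   # approved
    [9, 2],  [7, 3], [3, 1],   # denied
]
y_train = [1, 1, 1, 0, 0, 0]

model = LogisticRegression().fit(X_train, y_train)

# The model extrapolates that learned pattern to new cases: a long work
# history likely won't outweigh a record of past denials, because past
# denials are the regularity the training data encoded.
print(model.predict([[12, 2]]))  # most likely [0], i.e. denied
```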

Algorithms are sets of instructions which, when run, perform functions such as searching, matching, sorting, and feeding the outputs of any of those processes back in on themselves, so that a system can learn from and refine itself. This feedback loop is what allows algorithmic machine learning systems to provide carefully curated search responses or newsfeed arrangements or facial recognition results to consumers like me and you and your friends and family and the police and the military. And while there are many different types of algorithms which can be used for the above purposes, they all remain sets of encoded instructions to perform a function.
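(Again, a toy sketch rather than any real platform’s ranking code; the items and scores are hypothetical. What matters is the shape of the loop: whatever the system surfaces is what can get clicked, and whatever gets clicked gets surfaced more.)

```python
# Toy sketch of an algorithmic feedback loop: outputs fed back as inputs.
# Hypothetical items and scores; no real platform works exactly this way.
from collections import Counter

scores = Counter({"post_a": 1.0, "post_b": 1.0, "post_c": 1.0})

def rank(scores, k=2):
    """Sort items by current score and 'show' the user the top k."""
    return [item for item, _ in scores.most_common(k)]

def record_clicks(scores, clicked):
    """Feed the user's responses back into the next round's scores."""
    for item in clicked:
        scores[item] += 0.5

for _ in range(3):
    shown = rank(scores)
    record_clicks(scores, clicked=shown[:1])  # the user clicks the top item
    print(shown, dict(scores))
# Each round, the top item's lead grows, so the system increasingly
# curates its results around its own prior outputs.
```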

And so, in these systems’ defense, it’s no surprise that they think the way they do: That’s exactly how we’ve told them to think.

[Image of Michael Emerson as Harold Finch, in season 2, episode 1 of the show Person of Interest, “The Contingency.” His face is framed by a box of dashed yellow lines, the words “Admin” to the top right, and “Day 1” in the lower right corner.]


(Direct Link to the Mp3)

On Friday, I needed to do a thread of a thing, so if you hate threads and you were waiting until I collected it, here it is.

But this originally needed to be done in situ. It needed to be a serialized and systematized intervention and imposition into the machinery of that particular flow of time. That day…

There is a principle within many schools of magical thought known as “shielding.” In practice and theory, it’s closely related to the notion of “grounding” and the notion of “centering.” (If you need to think of magical praxis as merely a cipher for psychological manipulation toward particular behaviours or outcomes, these all still scan.)

When you ground, when you centre, when you shield, you are anchoring yourself in an awareness of a) your present moment, your temporality; b) your Self and all of your emotions and thoughts; and c) your environment. You are using your awareness to carve out a space for you to safely inhabit while in the fullness of that awareness. It’s a way to regroup, breathe, gather yourself, and know what and where you are, and to know what’s there with you.

You can shield your self, your home, your car, your group of friends, but moving parts do increase the complexity of what you’re trying to hold in mind, which may lead to anxiety or frustration, which kind of opposes the exercise’s point. (Another sympathetic notion, here, is that of “warding,” though that can be said to be more for objects, not people.)

So what is the point?

The point is that many of us are being drained, today, this week, this month, this year, this life, and we need to remember to take the time to regroup and recharge. We need to shield ourselves, our spaces, and those we love, to ward them against those things that would sap us of strength and the will to fight. We know we are strong. We know that we are fierce, and capable. But we must not lash out wildly, meaninglessly. We mustn’t be lured into exhausting ourselves. We must collect ourselves, protect ourselves, replenish ourselves, and by “ourselves” I also obviously mean “each other.”

Mutual support and endurance will be crucial.

…So imagine that you’ve built a web out of all the things you love, and all of the things you love are connected to each other and the strands between them vibrate when touched. And you touch them all, yes?

And so you touch them all and they all touch you and the energy you generate is cyclically replenished, like ocean currents and gravity. And you use what you build—that thrumming hum of energy—to blanket and to protect and to energize that which creates it.

And we’ll do this every day. We’ll do this like breathing. We’ll do this like the way our muscles and tendons and bones slide across and pull against and support each other. We’ll do this like heartbeats. Cyclical. Mutually supporting. The burden on all of us, together, so that it’s never on any one of us alone.

So please take some time today, tomorrow, very soon to build your shields. Because, soon, we’re going to need you to deploy them more and more.

Thank you, and good luck.


The audio and text above are modified versions of this Twitter thread. This isn’t the first time we’ve talked about the overlap of politics, psychology, philosophy, and magic, and if you think it’ll be the last, then you haven’t been paying attention.

Sometimes, there isn’t much it feels like we can do, but we can support and shield each other. We have to remember that, in the days, weeks, months, and years to come. We should probably be doing our best to remember it forever.

Anyway, I hope this helps.

Until Next Time

So I’m quoted in this article in The Atlantic on the use of technology in leveraging sociological dynamics to combat online harassment: “Why Online Allies Matter in Fighting Harassment.”

An experiment by Kevin Munger used bots to test which groups white men responded to when being called out on their racist harassment online. The findings were largely unsurprising (powerful white men; they responded favourably to powerful white men), save for the fact that anonymity INCREASED the effectiveness of the treatment, and visible identity decreased it. That one was weird. But it’s still nice to see all of this codified.

Good to see the use of Bertrand & Mullainathan’s “Are Emily and Greg More Employable Than Lakisha and Jamal?”, as the idea of using “Black-sounding names” to signal the purported ethnicity of a bot thus clearly models what he thought those he expected to be racist would think, rather than indicating his own beliefs. (However, it could be asked whether there’s a meaningful difference, here, as he still had to choose the names he thought would “sound Black.”)

The Reactance study Munger discusses—the one that shows that people double down on factually incorrect prejudices—is the same one I used in “On The Invisible Architecture of Bias.”

A few things Ed Yong and I talked about that didn’t get into the article, due to space:

-Would like to see this experimental model applied to other forms of prejudice (racist, sexist, homophobic, transphobic, ableist, etc., language), and was thus very glad to see the footnote about misogynist harassment.

-I take some exception to the use of Dovidio/Gaertner’s and Crandall et al.’s definitions of racism, as those leave out the sociological aspects of power dynamics (“Racism/Sexism/Homophobia/Transphobia/Ableism = Prejudice + Power”) which seem crucial to understanding the findings of Munger’s experiment. He skirts close to this when he discusses the greater impact of “high status” individuals, but misses the opportunity to lay out the fact that:
–Institutionalised power dynamics as related to the interplay of in-group and out-group behaviour are pretty clearly going to affect why white people are more likely to listen to those they perceive as powerful white men, because
–The interplay of power and status, interpersonally, is directly related to power and status institutionally.

-Deindividuation (loss of sense of self in favour of group identity) as a key factor and potential solution is very interesting.

Something we didn’t get to talk about but which I think is very important is the question of how we keep this from being used as a handbook. That is, what do we do in the face of people who understand these mechanisms and wish to use them to sow division and increase acceptance of racist, sexist, homophobic, transphobic, ableist, etc., ideals? Do we, then, become engaged in some kind of rolling arms race of sociological pressure?

…Which, I guess, has pretty much always been true, and we call it “civilization.”

Anyway, hope you enjoy it.