machine learning

All posts tagged machine learning

So with the job of White House Office of Science and Technology Policy director having gone to Dr. Arati Prabhakar back in October, rather than Dr. Alondra Nelson, and the release of the “Blueprint for an AI Bill of Rights” (henceforth “BfaAIBoR” or “blueprint”) a few weeks after that, I am both very interested in and pretty worried about what direction research into “artificial intelligence” is actually going to take from here.

To be clear, my fundamental problem with the “Blueprint for an AI Bill of Rights” is that while it pays pretty fine lip-service to the ideas of community-led oversight, transparency, and the abolition of, and abstention from developing, certain tools, it begins with, and repeats throughout, the idea that sometimes law enforcement, the military, and the intelligence community might need to just… ignore these principles. Additionally, Dr. Prabhakar was director of DARPA for roughly five years, between 2012 and 2017, and considering what I know for a fact got funded within that window? Yeah.

To put a finer point on it, 14 out of 16 uses of the phrase “law enforcement” and 10 out of 11 uses of “national security” in this blueprint are in direct reference to why those entities’ or concept structures’ needs might have to supersede the recommendations of the BfaAIBoR itself. The blueprint also doesn’t mention the depredations of extant military “AI” at all. Instead, it points to the idea that the Department of Defense (DoD) “has adopted [AI] Ethical Principles, and tenets for Responsible Artificial Intelligence specifically tailored to its [national security and defense] activities.” And so with all of that being the case, there are several current “AI” projects in the pipeline which a blueprint like this wouldn’t cover, even if it ever became policy, and frankly that just fundamentally undercuts much of the real good a project like this could do.

For instance, at present, the DoD’s ethical frames are entirely about transparency, explainability, and some lip-service around equitability and “deliberate steps to minimize unintended bias in AI…” To understand a bit more of what I mean by this, here’s the DoD’s “Responsible Artificial Intelligence Strategy…” pdf (which is not natively searchable and I had to OCR myself, so heads-up); and here’s the Office of the Director of National Intelligence’s “ethical principles” for building AI. Note that not once do they consider the moral status of the biases and values they have intentionally baked into their systems.

An "Explainable AI" diagram from DARPA, showing two flowcharts, one on top of the other. The top one is labeled "today" and has the top level condition "task" branching to both a confused looking human user and state called "learned function" which is determined by a previous state labeled "machine learning process" which is determined by a state labeled "training data." "Learned Function" feeds "Decision or Recommendation" to the human user, who has several questions about the model's beaviour, such as "why did you do that?" and "when can i trust you?" The bottom one is labeled "XAI" and has the top level condition "task" branching to both a happy and confident looking human user and state called "explainable model/explanation interface" which is determined by a previous state labeled "new machine learning process" which is determined by a state labeled "training data." "explainable model/explanation interface" feeds choices to the human user, who can feed responses BACK to the system, and who has several confident statements about the model's beaviour, such as "I understand why" and "I know when to trust you."

An “Explainable AI” diagram from DARPA

Continue Reading

I’m Not Afraid of AI Overlords— I’m Afraid of Whoever’s Training Them To Think That Way

by Damien P. Williams

I want to let you in on a secret: According to Silicon Valley’s AIs, I’m not human.

Well, maybe they think I’m human, but they don’t think I’m me. Or, if they think I’m me and that I’m human, they think I don’t deserve expensive medical care. Or that I pose a higher risk of criminal recidivism. Or that my fidgeting behaviours or culturally-perpetuated shame about my living situation or my race mean I’m more likely to be cheating on a test. Or that I want to see morally repugnant posts that my friends have commented on to call morally repugnant. Or that I shouldn’t be given a home loan or a job interview or the benefits I need to stay alive.

Now, to be clear, “AI” is a misnomer, for several reasons, but we don’t have time, here, to really dig into all the thorny discussion of values and beliefs about what it means to think, or to be a mind— especially because we need to take our time talking about why values and beliefs matter to conversations about “AI,” at all. So instead of “AI,” let’s talk specifically about algorithms, and machine learning.

Machine Learning (ML) is the name for a set of techniques for systematically reinforcing patterns, expectations, and desired outcomes in various computer systems. These techniques allow those systems to make sought-after predictions based on the datasets they’re trained on. ML systems learn the patterns in these datasets and then extrapolate them to model a range of statistical likelihoods of future outcomes.
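
To make that concrete, here is a deliberately tiny sketch of what “learning the patterns in a dataset and then extrapolating them” looks like in code. It assumes the scikit-learn library, and the loan-decision framing, the numbers, and every name in it are invented for illustration; it is a sketch of the technique, not any particular deployed system.

```python
# A toy illustration of machine learning as pattern-reinforcement:
# the model "learns" whatever pattern is in its training data (here, invented
# loan decisions made by past humans) and extrapolates that pattern forward
# as statistical likelihoods about new people.
# Assumes scikit-learn is installed; all data below is made up.
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: [income_in_thousands, years_employed]
past_applicants = [[25, 1], [30, 2], [85, 10], [90, 8], [40, 3], [120, 15]]
past_decisions = [0, 0, 1, 1, 0, 1]  # 0 = denied, 1 = approved, by past humans

model = LogisticRegression()
model.fit(past_applicants, past_decisions)  # reinforce the old pattern

# The model now projects those historical judgements onto a new applicant.
new_applicant = [[38, 4]]
print(model.predict(new_applicant))         # the predicted decision
print(model.predict_proba(new_applicant))   # the modelled likelihood of each outcome
```

The point isn’t the library calls; it’s that the “decision” is nothing but the old pattern, whatever its biases, projected onto the next person in line.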

Algorithms are sets of instructions which, when run, perform functions such as searching, matching, sorting, and feeding the outputs of any of those processes back in on themselves, so that a system can learn from and refine itself. This feedback loop is what allows algorithmic machine learning systems to provide carefully curated search responses or newsfeed arrangements or facial recognition results to consumers like me and you and your friends and family and the police and the military. And while there are many different types of algorithms which can be used for the above purposes, they all remain sets of encoded instructions to perform a function.
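
And here is an equally small, invented sketch of that feedback loop: a toy “newsfeed” that sorts items by a score, records which of its own outputs got clicked, and feeds that signal back in, so each pass curates a little harder toward whatever it already surfaced. This is not any real platform’s code, just the loop itself, made visible.

```python
# A toy feedback loop: the system's own outputs (what it showed, what got
# clicked) become the next round's training signal, so the "curation" drifts
# toward whatever it already favoured. All names and numbers are invented.
from collections import defaultdict

scores = defaultdict(lambda: 1.0)  # the system's current preference for each item

def recommend(items, top_n=3):
    """Sort by current score (a plain sorting algorithm) and return the top few."""
    return sorted(items, key=lambda item: scores[item], reverse=True)[:top_n]

def record_feedback(shown, clicked):
    """Feed the outputs back in: clicked items are reinforced, ignored ones decay."""
    for item in shown:
        scores[item] *= 1.2 if item in clicked else 0.9

catalogue = ["local news", "celebrity gossip", "city council minutes",
             "outrage bait", "recipes"]

for _ in range(5):  # each pass is one cycle of the system refining itself
    shown = recommend(catalogue)
    # Pretend the audience reliably clicks the most lurid things it is shown.
    clicked = {item for item in shown if "outrage" in item or "gossip" in item}
    record_feedback(shown, clicked)

print(recommend(catalogue))  # the feed has curated itself toward what got clicked
```

Run it and the top of the feed ends up being whatever got clicked, which is the whole point: the instructions look neutral, but the loop encodes and amplifies a preference.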

And so, in these systems’ defense, it’s no surprise that they think the way they do: That’s exactly how we’ve told them to think.

[Image of Michael Emerson as Harold Finch, in season 2, episode 1 of the show Person of Interest, “The Contingency.” His face is framed by a box of dashed yellow lines, the words “Admin” to the top right, and “Day 1” in the lower right corner.]

Continue Reading

As you already know, we went to the second Juvet A.I. Retreat, back in September. If you want to hear several of us talk about what we got up to at the retreat, then you’re in luck, because here are several conversations conducted by Ben Byford of the Machine Ethics Podcast.

I am deeply grateful to Ben Byford for asking me to sit down and talk about this with him. I talk a great deal, and am surprisingly able to (cogently?) get on almost all of my bullshit—technology and magic and the occult, nonhuman personhood, the sham of gender and race and other social constructions of expected lived categories, the invisible architecture of bias, neurodiversity, and philosophy of mind—in a rather short window of time.

So that’s definitely something…

Continue Reading

Kirsten and I spent the week between the 17th and the 21st of September with 18 other utterly amazing people having Chatham House Rule-governed conversations about the Future of Artificial Intelligence.

We were in Norway, in the Juvet Landscape Hotel, which is where they filmed a lot of the movie Ex Machina, and it is even more gorgeous in person. None of the rooms shown in the film share a single building space. It’s astounding as a place of both striking architectural sensibility and natural integration: they built every structure in the winter, so that the dormancy cycles of the plants and animals could dictate when and where they could build, rather than cutting anything down.

And on our first full day here, Two Ravens flew directly over my and Kirsten’s heads.

Yes.

[Image of a rainbow rising over a bend in a river across a patchy overcast sky, with the river going between two outcropping boulders, trees in the foreground and on either bank and stretching off into the distance, and absolutely enormous mountains in the background]

I am extraordinarily grateful to Andy Budd and the other members of the Clearleft team for organizing this, and to Cennydd Bowles for opening the space for me to be able to attend, and for being so forcefully enthused about the prospect of my attending that he came to me with a full set of strategies in hand to get me to this place. Having someone in your corner like that means the world, for a whole host of personal reasons, but also for more general psychological and social ones, as well.

I am a fortunate person. I am a person who has friends and resources and a bloody-minded stubbornness that means that when I determine to do something, it will more likely than not get fucking done, for good or ill.

I am a person who has been given opportunities to be in places many people will never get to see, and have conversations with people who are often considered legends in their fields, and start projects that could very well alter the shape of the world on a massive scale.

Yeah, that’s a bit of a grandiose statement, but you’re here reading this, and so you know where I’ve been and what I’ve done.

I am a person who tries to pay forward what I have been given and to create as many spaces for people to have the opportunities that I have been able to have.

I am not a monetarily wealthy person, measured against my society, but my wealth and fortune are things that strike me still and make me take stock of it all and what it can mean and do, all over again, at least once a week, if not once a day, as I sit in tension with who I am, how the world perceives me, and what amazing and ridiculous things I have had, been given, and created the space to do, because and in violent spite of it all.

So when I and others come together and say we’re going to have to talk about how intersectional oppression and the lived experiences of marginalized peoples affect, effect, and are affected and effected BY the wider technoscientific/sociotechnical/sociopolitical/socioeconomic world and what that means for how we design, build, train, rear, and regard machine minds, then we are going to have to talk about how intersectional oppression and the lived experiences of marginalized peoples affect, effect, and are affected and effected by the wider technoscientific/sociotechnical/sociopolitical/socioeconomic world and what that means for how we design, build, train, rear, and regard machine minds.

So let’s talk about what that means.

Continue Reading

Previously, I told you about The Human Futures and Intelligent Machines Summit at Virginia Tech, and now that it’s over, I wanted to go ahead and put the full rundown of the events all in one place.

The goals for this summit were to start looking at the ways in which issues of algorithms, intelligent machine systems, human biotech, religion, surveillance, and more will intersect and affect us in the social, academic, and political spheres. The big challenge in all of this was seen as getting better at dealing with these issues in the university and public policy sectors in America, rather than seemingly worse, as we have so far.

Here’s the schedule. Full notes, below the cut.

Friday, June 8, 2018

  • Josh Brown on “the distinction between passive and active AI.”
  • Daylan Dufelmeier on “the potential ramifications of using advanced computing in the criminal justice arena…”
  • Mario Khreiche on the effects of automation, Amazon’s Mechanical Turk, and the Microlabor market.
  • Aaron Nicholson on how technological systems are used to support human social outcomes, specifically through the lens of policing in the city of Atlanta.
  • Ralph Hall on “the challenges society will face if current employment and income trends persist into the future.”
  • Jacob Thebault-Spieker on “how pro-urban and pro-wealth biases manifest in online systems, and  how this likely influences the ‘education’ of AI systems.”
  • Hani Awni on the sociopolitical implications of excluding ‘relational’ knowledge from AI systems.

Saturday, June 9, 2018

  • Chelsea Frazier on rethinking our understandings of race, biocentrism, and intelligence in relation to planetary sustainability and in the face of increasingly rapid technological advancement.
  • Ras Michael Brown on using the religions technologies of West Africa and the West African Diaspora to reframe how we think about “hybrid humanity.”
  • Damien Williams on how best to use interdisciplinary frameworks in the creation of machine intelligence and human biotechnological interventions.
  • Sara Mattingly-Jordan on the implications of the current global landscape in AI ethics regulation.
  • Kent Myers on several ways in which the intelligence community is engaging with human aspects of AI, from surveillance to sentiment analysis.
  • Emma Stamm on the datafication of the self, and what about us might be uncomputable.
  • Joshua Earle on “Morphological Freedom.”

Continue Reading

This weekend, Virginia Tech’s Center for the Humanities is hosting The Human Futures and Intelligent Machines Summit, and there is a link for the video cast of the events. You’ll need to download and install Zoom, but it should be pretty straightforward, other than that.

You’ll find the full schedule, below the cut.

Continue Reading

My piece “Cultivating Technomoral Interrelations,” a review of Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting, has been up over at the Social Epistemology Review and Reply Collective for a few months, now, so I figured I should post something about it, here.

As you’ll read, I was extremely taken with Vallor’s book, and think it is a part of some very important work being done. From the piece:

Additionally, her crucial point seems to be that through intentional cultivation of the self and our society, or that through our personally grappling with these tasks, we can move the world, a stance which leaves out, for instance, notions of potential socioeconomic or political resistance to these moves. There are those with a vested interest in not having a more mindful and intentional technomoral ethos, because that would undercut how they make their money. However, it may be that this is Vallor’s intent.

The audience and goal for this book seems to be ethicists who will be persuaded to become philosophers of technology, who will then take up this book’s understandings and go speak to policy makers and entrepreneurs, who will then make changes in how they deal with the public. If this is the case, then there will already be a shared conceptual background between Vallor and many of the other scholars whom she intends to make help her to do the hard work of changing how people think about their values. But those philosophers will need a great deal more power, oversight authority, and influence to effectively advocate for and implement what Vallor suggests, here, and we’ll need sociopolitical mechanisms for making those valuative changes, as well.

[Image of the front cover of Shannon Vallor’s TECHNOLOGY AND THE VIRTUES. Circuit pathways in the shapes of trees.]

This is, as I said, one part of a larger, crucial project of bringing philosophy, the humanities, and social sciences into wide public conversation with technoscientific fields and developers. While there have always been others doing this work, it is increasingly the case that these folks are being both heeded and given institutional power and oversight authority.

As we continue the work of building these systems, and in the wake of all these recent events, more and more like this will be necessary.

Shannon Vallor’s Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting is out in paperback, June 1st, 2018. Read the rest of “Cultivating Technomoral Interrelations: A Review of Shannon Vallor’s Technology and the Virtues” at the Social Epistemology Review and Reply Collective.

Earlier this month I was honoured to have the opportunity to sit and talk to Douglas Rushkoff on his TEAM HUMAN podcast. If you know me at all, you know this isn’t by any means the only team for which I play, or even the only way I think about the construction of our “teams,” and that comes up in our conversation. We talk a great deal about algorithms, bias, machine consciousness, culture, values, language, and magick, and the ways in which the nature of our categories deeply affect how we treat each other, human and nonhuman alike. It was an absolutely fantastic time.

From the page:

In this episode, Williams and Rushkoff look at the embedded biases of technology and the values programed into our mediated lives. How has a conception of technology as “objective” blurred our vision to the biases normalized within these systems? What ethical interrogation might we apply to such technology? And finally, how might alternative modes of thinking, such as magick, the occult, and the spiritual help us to bracket off these systems for pause and critical reflection? This conversation serves as a call to vigilance against runaway systems and the prejudices they amplify.

As I put it in the conversation: “Our best interests are at best incidental to [capitalist systems] because they will keep us alive long enough for us to buy more things from them.” Following from that is the fact that we build algorithmic systems out of those capitalistic principles, and when you iterate out from there—considering all attendant inequalities of these systems on the merely human scale—we’re in deep trouble, fast.

Check out the rest of this conversation to get a fuller understanding of how it all ties in with language and the occult. It’s a pretty great ride, and I hope you enjoy it.

Until Next Time.

A few weeks ago I had a conversation with David McRaney of the You Are Not So Smart podcast, for his episode on Machine Bias. As he says on the blog:

Now that algorithms are everywhere, helping us to both run and make sense of the world, a strange question has emerged among artificial intelligence researchers: When is it ok to predict the future based on the past? When is it ok to be biased?

“I want a machine-learning algorithm to learn what tumors looked like in the past, and I want it to become biased toward selecting those kind of tumors in the future,” explains philosopher Shannon Vallor at Santa Clara University.  “But I don’t want a machine-learning algorithm to learn what successful engineers and doctors looked like in the past and then become biased toward selecting those kinds of people when sorting and ranking resumes.”

We talk about this, sentencing algorithms, the notion of how to raise and teach our digital offspring, and more. You can listen to it all here:

[Direct Link to the Mp3 Here]

If and when it gets a transcript, I will update this post with a link to that.

Until Next Time.

There’s increasing reportage about IBM using Watson to correlate medical data. We’ve talked before about the potential hazards of this:

Do you know someone actually had the temerity to ask [something like] “What Does Google Having Access to Medical Records Mean For Patient Privacy?” [Here] Like…what the fuck do you think it means? Nothing good, you idiot!

Disclosures and knowledges can still make certain populations intensely vulnerable to both predation and social pressures and judgements, and until that isn’t the case, anymore, we need to be very careful about the work we do to try to bring those patients’ records into a sphere where they’ll be accessed and scrutinized by people who don’t have to take an oath to hold that information in confidence.

We are more and more often at the intersection of our biological humanity and our technological augmentation, and the integration of our mediated outboard memories only further complicates the matter. As it stands, we don’t quite yet know how to deal with the question posed by Motherboard, some time ago (“Is Harm to a Prosthetic Limb Property Damage or Personal Injury?”), but as we build on implantable technologies, advanced prostheses, and offloaded memories and augmented capacities we’re going to have to start blurring the line between our bodies, our minds, and our concept of our selves. That is, we’ll have to start intentionally blurring it, because the vast majority of us already blur it, without consciously realising that we do. At least, those without prostheses don’t realise it.

Dr Ashley Shew, out of Virginia Tech, works at the intersection of philosophy, tech, and disability. I first encountered her work at the 2016 IEEE Ethics Conference in Vancouver, where she presented her paper “Up-Standing, Norms, Technology, and Disability,” a discussion of how ableism, expectations, and language use marginalise disabled bodies. Dr Shew is, herself, disabled, having had her left leg removed due to cancer, and she gave her talk not on the raised dais, but at floor-level, directly in front of the projector. Her reason? “I don’t walk up stairs without hand rails, or stand on raised platforms without guards.”

Dr Shew notes that wheelchair users consider their chairs to be fairly integral extensions and interventions, a part of them, and she points to the kinds of lawsuits that arise when, for instance, airlines damage those chairs, which happens a great deal. While we tend to think of the seamless integration of technology and body as something still on its way, the fact is that well-designed mechanical prostheses, today, are capable of becoming integrated into a person’s personal morphic sphere the longer they use them. And this extended sensing can be transferred from one device to another. Shew mentions a friend of hers:

She’s an amputee who no longer uses a prosthetic leg, but she uses forearm crutches and a wheelchair. (She has a hemipelvectomy, so prosthetics are a real pain for her to get a good fit and there aren’t a lot of options.) She talks about how people have these different perceptions of devices. When she uses her chair people treat her differently than when she uses her crutches, but the determination of which she uses has more to do with the activities she expects for the day, rather than her physical wellbeing.

But people tend to think she’s recovering from something when she moves from chair to sticks.

She has been an [amputee] for 18 years.

She has/is as recovered as she can get.

In her talk at IEEE, Shew discussed the fact that a large number of paraplegics and other wheelchair users do not want exoskeletons, and those fancy stair-climbing wheelchairs aren’t covered by health insurance. They’re classed as vehicles. She said that when she brought this up in the class she taught, one of the engineers left the room looking visibly distressed. He came back later and said that he’d gone home to talk to his brother with spina bifida, who was the whole reason he was working on exoskeletons. He asked his brother, “Do you even want this?” And the brother said, basically, “It’s cool that you’re into it but… No.” So, Shew asks, why are these technologies being developed? Transhumanists and the military. Framing this discussion as “helping our vets” makes it a noble cause, without drawing too much attention to the fact that they’ll be using them on the battlefield as well.

All of this comes back down and around to the idea of biases ingrained into social institutions. Our expectations of what a “normal functioning body” is get imposed by the collective society, as a whole, and placed as restrictions and demands on the bodies of those whom we deem to be “malfunctioning.” As Shew says, “There’s such a pressure to get the prosthesis as if that solves all the problems of maintenance and body and infrastructure. And the pressure is for very expensive tech at that.”

So we are going to have to accept—in a rare instance where Robert Nozick is proven right about how property and personhood relate—that the answer is “You are damaging both property and person, because this person’s property is their person.” But this is true for reasons Nozick probably would not think to consider, and those same reasons put us on weirdly tricky grounds. There’s a lot, in Nozick, of the notion of property as equivalent to life and liberty, in the pursuance of rights, but those ideas don’t play out, here, in the same way as they do in conservative and libertarian ideologies.  Where those views would say that the pursuit of property is intimately tied to our worth as persons, in the realm of prosthetics our property is literally simultaneously our bodies, and if we don’t make that distinction, then, as Kirsten notes, we can fall into “money is speech” territory, very quickly, and we do not want that.

Because our goal is to be looking at quality of life, here—talking about the thing that allows a person to feel however they define “comfortable,” in the world. That is, the thing(s) that lets a person intersect with the world in the ways that they desire. And so, in damaging the property, you damage the person. This is all the more true if that person is entirely made of what we are used to thinking of as property.

And all of this is before we think about the fact that implantable and bone-bonded tech will need maintenance. It will wear down and glitch out, and you will need to be able to access it, when it does. This means that the range of ability for those with implantables? Sometimes it’s less than that of folks with more “traditional” prostheses. But because they’re inside, or more easily made to look like the “original” limb, we observers are so much more likely to forget that there are crucial differences at play in the ownership and operation of these bodies.

There’s long been a fear that, the closer we get to being able to easily and cheaply modify humans, the more likely we’ll be to think of humanity as “perfectable.” That the myth of progress—some idealized endpoint—will be so seductive as to become completely irresistible. We’ve seen this before, in the eugenics movement, and it’s reared its head in the transhumanist and H+ communities of the 20th and 21st centuries, as well. But what if, instead of demanding that there be some kind of universally-applicable “baseline,” we intently focused on recognizing the fact that, just as different humans have different biochemical and metabolic needs, processes, capabilities, preferences, and desires, different beings and entities which might be considered persons can be drastically different from us, but no less persons?

Because human beings are different. Is there a general framework, a loosely-defined line around which we draw a conglomeration of traits, within which lives all that we mark out as “human”—a kind of species-wide butter zone? Of course. That’s what makes us a fucking species. But the kind of essentialist language and thinking towards which we tend, after that, is reductionist and dangerous. Our language choices matter, because connotative weight alters what people think and in what context, and, again, we have a habit of moving rapidly from talking about a generalized framework of humanness to talking about “The Right Kind Of Bodies,” and the “Right Kind Of Lifestyle.”

And so, again, again, again, we must address problems such as normalized expectations of “health” and “Ability.” Trying to give everyone access to what they might consider their “best” selves is a brilliant goal, sure, whatever, but by even forwarding the project, we run the risk of colouring an expectation of both what that “best” is and what we think it “Ought To” look like.

Some people need more protein, some people need less choline, some people need higher levels of phosphates, some people have echolocation, some can live to be 125, and every human population has different intestinal bacterial colonies from every other. When we combine all these variables, we will not necessarily find that each and every human being has the same molecular and atomic distribution in the same PPM/B ranges, nor will we necessarily find that our mixing and matching will ensure that everyone gets to be the best combination of everything. It would be fantastic if we could, but everything we’ve ever learned about our species says that “healthy human” is a constantly shifting target, and not a static one.

We are still at a place where the general public reacts with visceral aversion to technological advances and especially anything like an immediated technologically-augmented humanity, and this is at least in part because we still skirt the line of eugenics language, to this day. Because we talk about naturally occurring bio-physiological Facts as though they were in any way indicative of value, without our input. Because we’re still terrible at ethics, continually screwing up at 100mph, then looking back and going, “Oh. Should’ve factored that in. Oops.”

But let’s be clear, here: I am not a doctor. I’m not a physiologist or a molecular biologist. I could be wrong about how all of these things come together in the human body, and maybe there will be something more than a baseline, some set of all species-wide factors which, in the right configuration, say “Healthy Human.” But what I am is someone with a fairly detailed understanding of how language and perception affect people’s acceptance of possibilities, their reaction to new (or hauntingly-familiar-but-repackaged) ideas, and their long-term societal expectations and valuations of normalcy.

And so I’m not saying that we shouldn’t augment humanity, via either mediated or immediated means. I’m not saying that IBM’s Watson and Google’s DeepMind shouldn’t be tasked with searching patient records and correlating data. But I’m also not saying that either of these is an unequivocal good. I’m saying that it’s actually shocking how much correlative capability is indicated by the achievements of both IBM and Google. I’m saying that we need to change the way we talk about and think about what it is we’re doing. We need to ask ourselves questions about informed patient consent, and the notions of opting into the use of data; about the assumptions we’re making in regards to the nature of what makes us humans, and the dangers of rampant, unconscious scientistic speciesism. Then, we can start to ask new questions about how to use these new tools we’ve developed.

With this new perspective, we can begin to imagine what would happen if we took Watson and DeepMind’s ability to put data into context—to turn around, in seconds, millions upon millions (billions? trillions?) of permutations and combinations. And then we can ask them to work on tailoring genome-specific health solutions and individualized dietary plans. What if we asked these systems to catalogue literally everything we currently know about every kind of disease presentation, in every ethnic and regional population, and the differentials for various types of people with different histories, risk factors, and current statuses? We already have nanite delivery systems, so what if we used Google and IBM’s increasingly ridiculous complexity to figure out how to have those nanobots deliver a payload of perfectly-crafted medical remedies?

But this is fraught territory. If we step wrong, here, we are not simply going to miss an opportunity to develop new cures and devise interesting gadgets. No; to go astray, on this path, is to begin to see categories of people that “shouldn’t” be “allowed” to reproduce, or “to suffer.” A misapprehension of what we’re about, and why, is far fewer steps away from forced sterilization and medical murder than any of us would like to countenance. And so we need to move very carefully, indeed, always being aware of our biases, and remembering to ask those affected by our decisions what they need and what it’s like to be them. And remembering, when they provide us with their input, to believe them.