Previously, I told you about The Human Futures and Intelligent Machines Summit at Virginia Tech, and now that it’s over, I wanted to go ahead and put the full rundown of the events all in one place.
The goals for this summit were to start looking at the ways in which issues of algorithms, intelligent machine systems, human biotech, religion, surveillance, and more will intersect and affect us in the social, academic, and political spheres. The big challenge in all of this was seen as getting better at dealing with these issues in the university and public policy sectors in America, rather than seemingly getting worse at it, as we have so far.
Here’s the schedule. Full notes, below the cut.
Friday, June 8, 2018
- Joshua Brown on “the distinction between passive and active AI.”
- Daylan Dufelmeier on “the potential ramifications of using advanced computing in the criminal justice arena…”
- Mario Khreiche on the effects of automation, Amazon’s Mechanical Turk, and the Microlabor market.
- Aaron Nicholson on how technological systems are used to support human social outcomes, specifically through the lens of policing in the city of Atlanta
- Ralph Hall on “the challenges society will face if current employment and income trends persist into the future.”
- Jacob Thebault-Spieker on “how pro-urban and pro-wealth biases manifest in online systems, and how this likely influences the ‘education’ of AI systems.”
- Hani Awni on the sociopolitics of excluding ‘relational’ knowledge from AI systems.
Saturday, June 9, 2018
- Chelsea Frazier on rethinking our understandings of race, biocentrism, and intelligence in relation to planetary sustainability and in the face of increasingly rapid technological advancement.
- Ras Michael Brown on using the religious technologies of West Africa and the West African Diaspora to reframe how we think about “hybrid humanity.”
- Damien Williams on how best to use interdisciplinary frameworks in the creation of machine intelligence and human biotechnological interventions.
- Sara Mattingly-Jordan on the implications of the current global landscape in AI ethics regulation.
- Kent Myers on several ways in which the intelligence community is engaging with human aspects of AI, from surveillance to sentiment analysis.
- Emma Stamm on the idea that datafication of the self and what about us might be uncomputable.
- Joshua Earle on “Morphological Freedom.”
The first presentation was from Joshua Brown, with “Google Translate vs Skynet.” This was research on computational neural modeling from both the biomimicry and the intelligent-machines sides of the spectrum: the brain is still better at computation than most human-made computers, but there is much left to explore via algorithms.
He looked at the difference between Google Translate and a potential Skynet, where both would use deep learning networks, but with GT learning how to be increasingly helpful, and SN taking over and nuking everything. In each case we are, or would be, talking about recurrent networks, active AI, and goals. A passive AI has no drives, goals, or volition, but an active AI would have drives and desires, external and internal motivation. This brings us to the problem of “reward” mechanisms for something nonbiological.
For humans, the nature of reward changes based on internal homeostasis, and also environment. So can we build in internal motivations that change over time? We can try building models that solve general problems, using patterns of neural activity to match specific brain regions, using information gained from fMRI studies.
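To make that “reward changes with internal homeostasis” point a bit more concrete, here is a minimal sketch (my own illustration, not Brown's model) in which an agent's reward for the very same external event shifts as an internal “energy” variable drifts:

```python
import random

class HomeostaticAgent:
    """Toy agent whose reward for the same external event changes
    as its internal 'energy' state drifts -- an illustration of
    internally motivated (active) AI, not a model of any real system."""

    def __init__(self, energy=1.0):
        self.energy = energy  # internal homeostatic variable in [0, 1]

    def reward(self, food_found: bool) -> float:
        # The same external outcome (finding food) is worth more
        # when internal energy is low: reward depends on homeostasis.
        external = 1.0 if food_found else 0.0
        return external * (1.0 - self.energy)

    def step(self, food_found: bool) -> float:
        r = self.reward(food_found)
        # Internal state drifts: energy decays over time and is
        # replenished by food, so motivation changes across steps.
        self.energy = min(1.0, self.energy * 0.9 + (0.3 if food_found else 0.0))
        return r

agent = HomeostaticAgent()
for t in range(5):
    print(t, round(agent.step(food_found=random.random() < 0.5), 3))
```

The point of the toy is just that “reward” need not be a fixed external signal; it can be a function of the agent's own changing internal state.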
Dangers in this scenario aren’t from “self-awareness,” but potentially from 4 combined factors: 1) unchecked internal motivation, 2) goal direction (a desired new state of the world), 3) agency, 4) problem-solving ability. All of this, together, creates a self-willing agent.
He also discussed some frankly terrifying advances in the ability to change behaviour via noninvasive brain stimulation—that is, magnetic or auditory stimulation without the need for drilling into the cranium.
There were questions about the 4 factors, noting that they are not neutral, but need to be considered based on the values and beliefs out of which they are born. Humans have all 4 of these, and we are definitely dangerous, but some of us seek to enslave and subjugate and oppress, and some of us certainly do not. The values on which a system is trained matter. He noted that something like Maslow’s hierarchy could be mapped onto reinforcement learning, to help make such an agent a social creature.
Asked how he approaches the conflict between human needs and machine needs, he discussed model-based reinforcement learning and making use of predictive algorithms.
Next up was Daylan Dufelmeier, discussing the intersection of health outcomes, crime, and predictive policing, and how algorithmic systems reproduce and iterate on existing human biases. He used the PredPol open data set, and other systems, specifically within the city of Chicago, to look at community health and predictive measuring, as well as the systematic advantages and disadvantages which can accumulate over the course of someone’s life. Ultimately, there is a knock-on effect for who gets policed, and what kind of response is deployed where, with higher arrest rates in traditionally overpoliced areas. He pointed to the “Heat List”: the Chicago Strategic Subjects List.
He raised three main points about what he thinks needs to be done here:
1) all models and data points should be transparent
2) people should be able to challenge their score
3) before full adoption of a program, it should have to be real-world tested for not fewer than 5 years, with a consistent rate of improvement
Without these, machine learning and algorithmic systems will simply get us to the wrong place faster and more efficiently.
Next was Mario Khreiche, with a discussion of AI and the future of work, specifically looking at Amazon’s Mechanical Turk (AMT) network. Khreiche posits that the replaceability of humans by automated systems is not as large a problem as otherwise assumed, as a profession is not just a set of tasks, but is interwoven with the intuition and knowledge of the people in that profession. More pressing, he says, is the question of how human labour and wellbeing are affected by the implementation of machines.
By and large, the majority of AI work is still done via crowdsourcing of assignments, which then uses the completion of those tasks as datasets to train the AI;
AMT is more of a social technology than a truly automated system, and companies like Acxiom have their data organized, and their systems trained, by AMT workers; this is just one way in which Amazon leverages its workforce for a global clientèle. There are many potential corporate and government applications of human tracking and surveillance, where continuous inputs are always changing and being updated, meaning that this kind of microlabour is likely going to increase before it decreases. But there are also sousveillance operations carried out on requesters who don’t treat AMT “Turkers” fairly, a practice which gives more power to the workers, specifically, and is one of the reasons AMT cannot accurately be called a “sweatshop.”
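As a mechanical aside on how “completion of those tasks” becomes training data: here is a minimal, hypothetical sketch (the item IDs, labels, and helper function are mine, not Amazon's API) of majority-voting crowdsourced labels into a training set:

```python
from collections import Counter

# Hypothetical completed microtasks: (item_id, label assigned by one worker).
completed_tasks = [
    ("img_001", "cat"), ("img_001", "cat"), ("img_001", "dog"),
    ("img_002", "dog"), ("img_002", "dog"), ("img_002", "dog"),
]

def aggregate_labels(tasks):
    """Majority-vote each item's worker labels into a single training label."""
    by_item = {}
    for item_id, label in tasks:
        by_item.setdefault(item_id, []).append(label)
    return {item: Counter(labels).most_common(1)[0][0]
            for item, labels in by_item.items()}

training_set = aggregate_labels(completed_tasks)
print(training_set)  # {'img_001': 'cat', 'img_002': 'dog'}
```

Whatever biases or working conditions shape those worker judgements flow directly into the resulting training set.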
So what? Four things:
1) Neither automation nor AI happens in tech bubbles; both are quintessentially entangled in capital and labor;
2) Actually existing capital gears toward monopoly, so corporations will increasingly set the tone of automation/AI;
3) There is no fixed endpoint to either automation or AI, meaning that “taskification” is ongoing and human labour will be continuously required;
4) “Technology is therefore social before it is technical,” meaning Amazon’s distribution of labour must be theorized as a social, political, economic, and cultural formation.
When asked about the encoding of the AMT labour field along race and gender lines, Khreiche noted that it is pretty evenly split between men and women, and that it’s a pretty international population.
After that was Aaron Nicholson, from the Atlanta Police Foundation, which was a bit startling, honestly. He started with a stand-up circle exercise in which people were meant to try to catch the fingers of those next to them. With repeated tries, 1/3 of the room succeeded and 2/3 had failed, with greater numbers of “catches” by those who had previously caught. He tied this to data showing that 2/3 of the minority population of the city of Atlanta has been arrested, and that of the 824 juveniles arrested in ATL last year, 801 were minorities and only 23 were white (and that the numbers and ratio were 1200/~1100, three years ago). Any algorithmic system trained on this data will increase racial disparities in the justice system.
Nicholson has worked to create strategy for the mayor and chief of police in Atlanta on how to reduce crime. His foundation collects money through donations, rather than taxes, and can therefore spend it however they want. They started by looking at the 19 murders in ATL, 14 of which occurred on the west side in the first 4 weeks of 2016. From there, they’ve started investing in the area by buying property and putting 25 cops in houses in the area, to have them be interactive members of the community, as well as starting mentoring, food assistance, and other youth intervention initiatives. The goal of this is training and encouraging police on the streets to use discretion and not incarcerate, but to bring juvenile offenders to outreach centers; he used the phrase “Walmart of youth treatment.”
Using existing predictive policing methods and algorithms, kids without resources are more likely to be incarcerated, but the more resources they have, the more likely they are to be excluded from prosecution, so we have to keep human discretion in the picture and increase learning. We have to be looking at ways to fix the roots of the problem, and addressing issues of juvenile trauma and abuse and food insecurity and so on. From here, we can do Juvenile Detention Alternative Initiative assessments, gathering reoffense stats and addressing their needs as the very first question.
In addition to all of this, Nicholson noted that the city has begun to install cameras in the West Side area, and in fact Atlanta now has the second largest camera network in the world, with 10,000 cameras (including license plate readers) across the city, 500 of which are in the West Side. And why all this mention of the West Side, you ask, especially when six murders took place in the South Side of Atlanta just this year, and none of those six was picked up by these cameras? Well, because, in addition to those 14 murders, what Atlanta’s West Side has going for it is one brand new, ridiculously large, and shockingly expensive Mercedes-Benz stadium. So, without further intervention options, all of the police presence on this side of town would be focused on arrests, with predictably disproportionate effects on minority youth.
Atlanta also has a full ShotSpotter sound and camera system suite, but all of these technologies are being deployed in the wealthiest areas of the city. In fact, Nicholson says, for $22k you can pay to have one of the surveillance cameras placed directly in front of your home.
When asked about the level of integration with CPS and social services, Nicholson noted that data-sharing among these agencies is almost nonexistent, but that he wants to make it such that the APF cannot pass youth off to make them “someone else’s problem.” He wants the foundation to have to work with them internally, with its own specifically tailored metrics for success. He says he’s trying to get everybody on the same page to create infrastructure to help individuals, rather than thinking there’s a one-size-fits-all solution.
Asked how we can expect an AI system to distinguish between trauma and its effects, when we as humans don’t do it well, he said that the real problem is in thinking of the numbers as objective, and that’s not really an AI problem. This led to a conversation about how the creators of algorithmic systems often haven’t intended these tools to be used in the ways they get used, resulting in distillations and abstractions getting operationalized, rather than anyone using their discretion.
Next was Ralph Hall, discussing employment and labour in the age of AI and intelligent machines, asking questions like: what should the state do in the face of AI, globalized trade, the loss of labour unions, wage stagnation, etc.? Hall noted 4 troubling realities:
1) The hollowing out of the middle class, with gross domestic product and profits going up while job prospects and income go down;
2) Three decades of decline in labour’s share of total income, with roughly 94% of net employment going into the “alternative work” sector, signaling a state of precarity and contingent labour (shown in many graphs of climbing profits and falling wages);
3) Increased cost of living in certain areas leading to a great deal of job displacement, on the whole;
4) Growing concentrations of wealth, with 1% of people making 82% of profit, and 1% owning the mechanisms of capital production, as well.
Hall also noted that potential solutions to climate change that rely on automation, lower wages, and cost savings from more efficient systems could easily exacerbate inequalities in wealth. In the age of robots, as production becomes more capital intensive, the distribution of earnings will also become more capital intensive. He noted 10 options for reducing inequality and promoting employment and earning capacity:
1) Transfer wealth or income from capital owners and highly paid workers to those underemployed or unemployed—a redistribution of wealth or income;
2) Engage in Keynesian spending for labour-intensive projects improving infrastructure, with government and taxpayers footing the bill;
3) Spread existing work out over a larger population by shortening the workweek, without maintaining wage parity (less work, proportionally less pay);
4) Spread existing work over a larger population by shortening the workweek, while maintaining wage parity (less work, same earnings);
5) Limit the elimination of jobs during economic downturns, and supplement the shortfall in paid wages for workers on furlough or working shorter weeks from a government-administered, employer-financed fund;
6) Increase labour’s contributions, and therefore its claim on profits from production and services, by upskilling and redesigning work back into production and service;
7) Meet essential human needs in a less expensive and less resource-intensive way by redesigning products, production, service, and systems;
8) Change nature of consumer and human-centered demand, by focusing on using disposable income on services with significantly less capital and energy intensiveness and much more labour intensiveness;
9) Promote creation of cooperative economy to broaden ownership;
10) Better enable poor and middle class people to become owners by extending to them competitive market opportunities to acquire capital with the future earnings of capital, based on binary economics. Saying, “This is how economics works, so let’s figure out a way to get people a capital ownership stake in what gets created.”
After this was Jacob Thebault-Spieker, with a discussion of how pro-urban and pro-wealth biases manifest in online systems, and how this likely influences the ‘education’ of AI systems. To do this, Thebault-Spieker examined AI, maps, and social media. He says that he specifically explored people’s use of open training sets like social media, knowing that they get used because they’re free, but that they also teach AI about the world. Much of people’s perceptions and preconceptions therefore gets geocoded into map data. He showed us Google Maps data for two different maps, one from a larger city and one from a smaller city, where the former was far more detailed and filled out than the latter. This is because people contribute to map data in the same place over time, and tend to contribute to the same kinds of places (e.g., large cities) over time, and so systematic geographic patterns pervade social media data, and these biases likely manifest in the AI systems trained on this data.
Some ask, “Isn’t this ‘just’ a data problem?” Meaning, if we had better/more complete data, wouldn’t that fix the bias issue? But the answer here is “No, probably not.” Thebault-Spieker cites the 2017 paper, “The effect of population and ‘structural’ biases on social-media-based algorithms,” from Johnson et al, demonstrating that people are bad at these predictions and that their algorithms perform poorly at geolocating text or images from rural places, even when they are given 100% of the data necessary. The algorithms used for this work fundamentally do not perform as well in rural areas. In fact, algorithms seem to exacerbate the problem, and that is down to biases built into the systems.
I asked about the long history of map bias, and terms like “global south”==”third world,” and whether that would have any influence on people’s perceptions and behaviour, here, and thus the algorithms, and Thebault-Spieker responded that it was not as concretely related to the historical aspects of these things, but more to the specific mental model that the social media user in question has. The less full their mental map of an area, the more they rely on preexisting biases about that place, rather than learning for themselves.
Someone asked about the assumption of a universal experience of life in cities getting encoded into algorithms, with differences in outcomes by race depending on the algorithm used. Thebault-Spieker said that there are nuances in the assumptions that get made when algorithms get written, but that, above and beyond the geographical dynamics in the U.S., we have to take action to do this auditing and to build the systems accordingly. We need researchers external to the companies building the systems, in addition to employees and government oversight.
This led to a discussion about how it’d be nice to have thought-notes from coders about how and why they made the algorithms they’ve made, to help us build a mental model of their reasoning. There is some work being done on this, looking at Wikipedia and how handling vandalism there has, thus far, been an ad hoc approach; even with the highly transparent nature of that platform, it doesn’t yet have much input from the social sciences.
Q: When bias identified, what do you do?
A: Take it as an opportunity for future academic work, and put it into the world. All caveats should be known.
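On the auditing question raised in this Q&A, the most basic version of such an audit is just disaggregating a model's error by geography instead of reporting one aggregate number. A minimal sketch, with hypothetical column names (is_rural, correct) and made-up rows, assuming pandas:

```python
import pandas as pd

# Hypothetical audit log: one row per prediction, flagged rural/urban,
# plus whether the model's geolocation guess was correct.
log = pd.DataFrame({
    "is_rural": [True, True, True, False, False, False, False, False],
    "correct":  [False, True, False, True, True, True, False, True],
})

# Aggregate accuracy hides the disparity...
print("overall accuracy:", log["correct"].mean())

# ...while disaggregating by geography surfaces it.
print(log.groupby("is_rural")["correct"].mean())
```

The same pattern works for any grouping variable an auditor cares about; the hard part is getting access to the predictions in the first place, which is where external researchers and oversight come in.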
The last presentation on Friday was Hani Awni, exploring the politics of the technical. Awni positioned himself as a traditionally trained computer engineer who has recently come to thinking about issues of social justice, using his training in analogical reasoning and cognitive science to explore the foundations of deep learning artificial neural networks (ANNs), working with what are known as connectionist models of cognition. Basically, he is looking at what ANNs fundamentally are and are not capable of.
He started by exploring the differences between associative and relational information, via a few examples (there’s a small code sketch of this distinction just after the list):
Associative: “action film”; motifs, aesthetics; big, dark, clever; correlation; characteristics; free association; if you can shuffle and it doesn’t lose meaning
Relational: “Coming of age story”; tropes, themes; bigger, darker, cleverer; causation; roles; analogy; if you can’t shuffle it because it will lose meaning (Mayor Is Elected By Voters; Voters Are Not Elected By Mayor)
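Here's the promised sketch: a toy illustration (mine, not Awni's) of the “can you shuffle it?” test, in which a bag-of-words, associative view of two sentences comes out identical, while a role-preserving, relational view does not:

```python
# Associative view: an unordered bag of words -- shuffling loses nothing.
s1 = "voters elect the mayor"
s2 = "the mayor elects voters"
print(set(s1.replace("elects", "elect").split()) ==
      set(s2.replace("elects", "elect").split()))  # True: same "associations"

# Relational view: roles (who does what to whom) are preserved as structure.
r1 = ("voters", "elect", "mayor")
r2 = ("mayor", "elect", "voters")
print(r1 == r2)  # False: swapping the roles changes the meaning
```

That lost who-does-what-to-whom structure is exactly the relational information at issue.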
From here, he went into things that ANNs can do, such as “mapping one kind of [associative] vector space onto another,” and noted that ANNs functionally do not allow for relational information or communication. He referenced studies about bias in what gets treated as an “engineer” in word2vec’s “analogies” (word2vec takes a word and turns it into a vector), and how that displays massive gender bias.
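For anyone who wants to poke at the word2vec analogy probes he mentioned, here is a minimal sketch using gensim's pretrained Google News vectors; the example and word choices are mine, and the exact completions depend on the embedding, but published audits (e.g., Bolukbasi et al. 2016) report heavily gendered pairings from probes like this:

```python
import gensim.downloader as api

# Load pretrained word2vec vectors trained on Google News (large download, ~1.6 GB).
wv = api.load("word2vec-google-news-300")

# Classic analogy form: man : engineer :: woman : ?
# The nearest neighbors of (engineer - man + woman) surface whatever
# gendered associations the corpus baked into the vectors.
for word, score in wv.most_similar(positive=["engineer", "woman"],
                                   negative=["man"], topn=5):
    print(f"{word}\t{score:.3f}")
```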
This has implications for recognizing categories like “violent citizens”: a system programmed, after large public events, to analyze public social media feeds for images of “violence” will output an image of, e.g., a masked protester being beaten by a cop as a “violent image” with no differentiation, thus learning to categorize all pictures of masked protesters as “violent.” It doesn’t know the difference between the associative expression of “violent scenes” and the specific relational roles of the actors involved; associative information fundamentally does not represent the relation between oppressor and oppressed.
The tools being used matter. We must have tools that can deal with relational data. As such, there is no single one “right” field for this; it’s always across fields.
Saturday, June 9, 2018
First in the day was Chelsea Frazier, who put the theorists Ray Kurzweil and Sylvia Wynter into theoretical conversation, in order to explore how our visions of the future are altered or contained by specific notions of the human subject—of what it means to be human.
Where Kurzweil is looking at “improving” the organization of and capacity for intelligence in the human subject, Wynter holds that humanity cannot be reduced to our biological and physical components. The former sees our hybridity as biology and technology; the latter sees it as biology and mythology, where it’s the socio-narrative components that make up our senses of our selves.
In this same way, the Kurzweilian stance is predominantly considered from the position of those who are white, male, and ablebodied, rather than looking at the actual breadth of potential ways of being in the world. Wynter thus contends that the maintenance of the techno-utopian vision is upheld almost entirely by a series of interlocking exploitative systems, such as labour and resource extraction.
Kurzweil’s vision is still biocentric even though it seems to rely on nonbiological computation; it’s still about using the latter to “enhance” the former. Wynter’s work shows that this is a very limited view of the kind of knowledge we can generate and engage in.
Wynter and Frazier thus:
1) Resist classifying humans as discrete entities so that
2) We can resist reducing humans to their functions.
Frazier contends that this isn’t about trying to purely displace the biocentric model of the human, but to take seriously the idea of “Being Human” as lived, intentional practice in the world.
This talk spawned a large conversation about the nature of energy use for technological development and the negative externalities of the techno-post-humanist dream, as well as a question about the category of the human, and then another as to whether that category is too limiting, if we’re aiming toward nonhuman, nonbiological intelligence. This led into discussion of the narrow conceptions of technology, more generally, and the ways in which transhumanism is exciting at the outset, and then shows us its many problems.
Next up was Ras Michael Brown, exploring the intersections of technology, religion, and societies, in a talk titled “Nkisi Theory for A New Era in Hybrid Humanity.” Brown frames these ideas as a rematerialization, meaning attempting to repurpose and give new meanings to matter, and notes that much of this is very old work—centuries and ages old. Basically, the man talked about studies of the Congo and the Congo-Atlantic/African Diaspora religious context, magick, and technologies, so this was my damn jam.
“Nkisi theory” is about looking at what we tend to talk about as religion and culture, and intentionally recognizing it as technology, one in which a convergence of matter and idea creates technology, just as it is in any other technological tradition. This is about working to get matter to operate with intelligence, and the intention to enhance human abilities with and through intelligent matter.
There is a tendency to allow a modernity bias to affect our understanding of the concept of “Nkisi,” a fact which filters our judgements and understanding of this framework. Unlike with our modern views on tech, the objects aren’t the point of Nkisi, but rather it’s the thinking and process behind the objects. As Brown puts it, this “offers a shared way for the entire society to wrestle with the purposes and consequences of technology.”
Nkisi deals in the intentional conjunction of nature spirits, natural matter, manufactured matter, living humans, song, and dance; it is a Composition of works into a new, made person. The name of the new person will usually be the name of the nature spirit, because the Nature Spirit is the theory—the organizing principle, the conceptual framework—by which this composition was enacted. Matter and ideas sit as processes in engagement together.
Nkisi conjunctions fundamentally question the idea of a self, of humanity, of autonomy. These conjunctions are, and are made of, necessarily interconnected systems. The new person that’s composed by this process might be thought of as “intelligent matter,” and this thing cannot and should not be autonomous. Not only because autonomy is actually impossible, but also because it is dangerous, in Nkisi, and to be feared. Intelligent matter must be bound to people/society/animals/objects/the world so as to be morally and ethically bound to all beings fixed within its composition. There is no separation of the human from the responsibility to its creation, and the question at the forefront of the participants’ minds will necessarily be “what is the interface with the natural environment?” That is, what are the negative externalities that we should be thinking about before starting this process?
In the Q&A, someone asked for a story about something that would count as a violation in this system, and Brown noted that a historical example would be the distribution of matter and wealth via the slave trade. The material and wealth disparities created by these oppressions were a problem in themselves, but they also created intra- and intersocial conflicts, wherein many societies chose to get rid of Nkisi and focus on predatory material accumulation. However, these conflicts led to the creation of a particular restorative Nkisi: an attempt to undo all that material imbalance. A more contemporary example is the environmental crises in the Angola region, and we can see the death and destruction in the Congo, in fights over technological resources, as directly causally linked.
Then there was me, doing my “values and bias in machine minds and biotechnologies” thing. I took up considerations of how best to use interdisciplinary frameworks to think about the biases we bring into the development of machine intelligence and human biotechnological interventions. Tried to show a path toward working to become more intentional about which values we build into these systems. Started off with a new version of my framing questions:
- How do you travel home?
- When you travel home outside of a car, where are your keys?
- What do you do when a police officer pulls you over?
- What kinds of things about your body do you struggle with whether and when to tell a new romantic partner?
- If you are able to stand, for how long?
- How do you prepare your hair on any given morning?
- What strategies do you have for keeping yourself out of institutional mental care?
- Without looking, how many exits to the lobby are there, and how fast can you reach them, encountering the fewest people possible?
- What is the highest you can reach, unassisted?
- What is the best way to reject someone’s romantic advances such that it is less likely that they will physically assault you?
Because I’ve said it before and I’ll say it again, and I’ll teach it to every fucking boardroom full of techbros if you let me: if we can learn to understand that each one of these questions represents a different set of lived experiences and phenomenological knowledges in the world, and set our minds toward accepting and incorporating them as valid knowledge, we’ll be way less shitty, all around, and specifically less likely to reproduce oppressive and prejudiced systems in our technologies.
I pointed to the following people’s articles for guidance:
- Langdon Winner: “Do Artifacts Have Politics?”
- Mary Catherine MacDonald: “…A Case Study On The Phenomenology Of A Combat Veteran’s Social Reintegration”
- Don Ihde: Instrumental Realism
- Ashley Shew: “Upstanding Norms”
- Shannon Vallor: Technology and the Virtues
- Joanna Bryson, et al: “Semantics Derived Automatically From Language Corpora Contain Human-Like Biases.”
- Bruno Latour and Steve Woolgar: “Laboratory Life”
- Donna Haraway: “Situated Knowledges”
- Lorraine Code: What Can She Know? Feminist Theory and the Construction of Knowledge
- Dylan Wittkower: “Principles of Anti-Discriminatory Design” (2016)
- Dan Hon: “No one’s coming. It’s up to us.” (2018)
- Anna Lauren Hoffmann: “Data Violence and How Bad Engineering Choices Can Damage Society” (2018)
- Lea Tufford and Peter Newman: “[Bias] Bracketing in Qualitative Research” (2010)
My goal, here, is to get people to understand that there is no such thing as pure objectivity, and as long as you draw breath you’ll have biases, but we can intentionally put our perspectives and expectations in check—bracket them out and work to mitigate them—and learn to learn from the lives and perspectives of other people.
After me was Sara Mattingly-Jordan with “Robotics and Representation.” Mattingly-Jordan, who also sits on the IEEE Society on Social Implications of Technology Work Group, asked the question, “Whose ethics will guide our robots?” Ethics policy for AI is a special case, at present, when we consider the scope of AI use, the black-box nature of most algorithms, and the brute-luck limitations of being an AI user or an AI subject.
Dr Mattingly-Jordan’s questions were things like, “How do we know that AI will do what we want it to? What we expect it to?” She believes that we need to be concerned, but also that AI is still a similar-enough piece of tech to the rest of humanity’s works that it might only be a design problem. So Dr Mattingly-Jordan holds that something will need to be done, but that we may already have some frameworks for doing it, and thus, bureaucratically speaking, this shouldn’t be a new issue.
To scope this in, she asked, when is the singularity/convergence/enhancement? That is, when will we know that AI has “escaped”? And most importantly, how do we manage the differential impacts of these systems on different communities, as they exist? Can we talk about people in terms of three types or “bins?” 1) Designers, 2) Users, 3) Subjects. There are groups dealing in ethical governance and standards making, such as the International Standards Organization, but how do we talk about creating a standard system for ethical claims? How do we know the expected ethics are in the thing? What, if any, are the effects of what we’re doing?
Group 1) Designers include professionals and laypersons involved in designing AI.
Group 2) Users are any individuals or groups with sufficient capital to knowingly use—and accept the terms of use—of AI.
Group 3) Subjects are any individuals or groups without sufficient capital to… etc.
There will be many more people in Group 3 than there will be creators and knowledgeable users.
Dr Mattingly-Jordan then broke down the current AI ethics policy landscape like this:
—Global policy activity and “supergovernmental” groups: IEEE, GIEAIS, ICRAR/Future of Life Institute; the UN Group of Governmental Experts on Lethal Autonomous Weapons Systems; ISO and conformity assessment systems
—“National” policy activity: the European Union’s GDPR; the Japanese Ministry of Economy, Trade and Industry; DARPA’s internal review systems; Saudi Arabia
Now, the main groups and people doing the work of AI policy are from the IEEE, the UN LAWS group, etc., and they and their work touch and represent a very small portion of the world, mostly in the Atlantic Corridor, so we are all situated in a kind of post-post-colonialist moment where the West makes the AI laws for the rest of the world. Here she showed an image of AI firm distribution and market share around the world, with the US and China way out ahead of everyone else. Which is to say that there is not decent representation in any of these policies: not much diversity in gender, race, education, physical ability, or geographic location. Oddly, there is greater representation of women in two of the UN committees, but those were also women who were Deputy or Under Secretaries. In the IEEE you have repeat influence of the same women on many committees, rather than many different women across all of them.
The conversation during the Q&A covered things like how different lived experiences do and will matter for what we design; the three-bin framing is a great starting point, but we certainly can’t stop there. We’ll need to heed the testimony and experience of other groups and stakeholders, something which is a not-insignificant problem in itself. There were suggestions to reach out to Timnit Gebru and the Black in AI group. And this was also the portion of the event where I found out that one of my supervisors from last year’s SRI Technology and Consciousness Workshop Series had been watching on the video stream, because he piped up when I mentioned some of the ways that various private stakeholders work to do public outreach planning for those in the “Subjects” bin. So that was nice.
The next presentation was Kent Myers, speaking about defense intelligence systems and the implications of AI in the field of global intelligence. He started off with an example to show how “I wonder if there’s a bomb in the room” is not a thing that many people in the United States have to think about every day, but that around the world it is a pertinent and recurrent question. In the London Tube system, being told that the train has stopped because “there is a rucksack in Piccadilly” immediately makes sense to every local, as they all know that it means a potential bomb.
Next he discussed the fact that MI6 has psychological profiles of young people who might become terrorists, which they then use to go out and speak to those kids, intervening in their lives before they’re radicalized. As many of these young men are just looking for prospects for “adventure,” MI6 can then capitalize on it—a fact which many of us found deeply fucking worrisome. You’re telling me all it takes is this one, relatively insignificant tweak, and rather than “terrorist,” now they’re “MI6 agent?” I mean, many of us have obviously seen that correlation, well before now, but doesn’t that worry you folks in the law enforcement community?
Anyway, he went on to talk about how facial recognition in use in London is not supported by AI, but rather by humans, who have a higher likelihood of knowing and understanding people in their contexts [though this might be changing]. The US, on the other hand, is very different, in that it mostly just collects everything, “junk” data and all, and so we have to apply more and more computers to process it all. We have basically two cloud services to store and move through, with the military-intelligence side being the Utah NSA collection and storage apparatus.
The main areas in which the intelligence community uses AI are the augmentation of analysts and diversification of analysis teams, and sentiment analysis, or the influencing and distribution of public sentiment. This leads to Myers’ two big worries: 1) there is not enough worry about people becoming, or thinking like, computers, ceding their control; 2) what is the nature of human flourishing, and can we achieve it by solving the economic problem?
From here we went into questions about whether it’s possible to know if we’ve offloaded too much of our responsibility to the system, and if it is possible, how will we know? I asked my question about the concerning overlap in mentalities between “terrorists” and “law enforcement,” and also asked whether this could be applied to white mass shooters in the US, with similar results. What about the other kinds of interventions that might be engaged to move them away from the military intelligence policing apparatus as a whole, and not just to one or the other end of the spectrum? Also, are there thoughts or concerns within the military intelligence community about the people who come into it from Booz Allen and Acxiom and DARPA, in this kind of revolving-door way? Myers notes that there is some reflexivity about this, yes.
After this was Emma Stamm, with “The Unthinkable: Datafication, Mentality, and Politics.” She started by noting that she had some internal conflict over whether to use “consciousness” or “mentality,” but that she settled on the latter in order to provide a more concrete understanding of what we’re discussing. She framed this by noting that she works among people who think that those who believe consciousness or mentality is computable are in the minority, and so she often has to explain to her colleagues that there are a lot of things taken for granted in the tech industry about what the mind and consciousness are and are like. There is thus a corresponding need to theorize the things that are unquantifiable, such as how intelligence itself might not even be knowable, let alone computable.
To this end, Stamm says, we need input from people from multiple academic and vocational backgrounds, and we need a position of humility about whether we can even know our own mental positioning. We need to think about what it means for something to be “datafiable,” and be more intentional about what we mean by the term “data.” She clarified that she is using it as “information that is computable inside an electronic system,” a framing which requires the ability to break things into irreducible, discrete parts of equal size that can hold a finite set of potential values (i.e., machine learning systems that produce better models by discretization of continuous systems). This means we have to make decisions about where the cut points will be and how we will organize the cuts we make. This, Stamm says, describes the problem of the “lossiness” of digitization.
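As an illustration of that “cut points” problem (mine, not Stamm's), discretizing a continuous signal into a finite set of bins necessarily collapses distinct values together:

```python
import numpy as np

# A continuous signal...
signal = np.array([0.12, 0.49, 0.51, 0.88, 0.90])

# ...forced into a finite set of equal-sized bins (the "cut points" we choose).
bins = np.linspace(0.0, 1.0, num=5)      # 4 bins over [0, 1]
discretized = np.digitize(signal, bins)  # each value -> a bin index

print(discretized)  # [1 2 3 4 4]: 0.88 and 0.90 become indistinguishable
```

Whatever fell between the cut points is gone, which is the lossiness she was pointing at.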
Stamm points to the book Gramophone, Film, Typewriter, by Friedrich Kittler—an investigation into the position of technology and bodily technologies, and what will happen to human life as we technologize more and more of the body’s intersection with the world. We need an unmanifest, “dark side” of thought in order to have anything like a mind or mentality—things that remain only in our minds, whether conscious or unconscious. She says that we also have to recognize that we have no real idea what “intelligence” or “consciousness” are, and so we must accept that intentionally instilling either in another thing is a doomed project.
“Data as a medium of equivocation.” We are constantly removing information from the contexts in which it is the most meaningful—DNA in a 23andMe lab is different from DNA in a body. “Datafication,” then, ascribes a specific purposiveness to subjects; when something is data it is defined by the purpose to bear knowledge/information about itself. But, Stamm says, whatever makes us not machines is the same thing that doesn’t need a justification or purpose other than “being,” in itself. Capitalism stands in oppositional relation to this ideal, and so Datafication constructs individuals as political subjects. When we become data we become members/components of a political economy. As we dig further into data, we lose touch with the things we really are.
In the Q&A, a question was raised about thinking of data in terms of mathematical formulations other than the digital and binary, such as protein coding and DNA; for this, we need folding, topology, and complexity mathematics, but these are things we do have. Stamm’s reply was that a replication of anything changes it, so every methodology rests on presuppositions. Mathematical reasoning isn’t only computational reasoning, but all mathematical reasoning is constructed out of positivist presuppositions.
Someone asked whether there might be some slippage between the unthinkable and the uncomputable. “Mary in the black and white room” can’t think about colour, but colour can be computed. Might need to be more explicitly separated out, to get into specifics about which kinds of things are uncomputable, and maybe need to think about what models may be useful, even while being wrong? Stamm says she’s not saying that something has to be perfectly replicated in order to be quantified, but more that she’s looking to critique positivism from the position of a negative dialectic, and to get folx to recognize the impossibility of true knowledge. To do this, we need to say that these perspectives are and can be important to how we think about and work with consciousness, and we need to be humble about how we think about this, and what we think we’re capable of doing and knowing.
Someone raised a point about the ongoing tension in public health between the epidemiologists and everyone else, the “hard” vs “soft” sciences, and how hard it is to move past binary thinking and binary states of existence, and someone else suggested N. Katherine Hayles’ discussion of the “unthought” and how old conversations in other disciplines become newly pertinent in each new field. We’re thinking about the words we use and the value judgements we place on data and systems, as well as thinking about the usefulness of wrong models.
Closing out the day was Joshua Earle, with four major motivating questions:
1) What do we consider “enhancement”? How do we tell if something or someone is enhanced? How do we measure it?
2) What traits, if any, do we most want to enhance?
3) Who gets access to these enhancements?
4) If we do start to enhance, how do we avoid growing inequality gaps?
Earle explored the assumptions behind human augmentation, starting by going around the room asking what people would “augment” about themselves, and then exploring the assumptions built into each of those choices. He looked at the work on transhumanism and “morphological freedom,” to explore both the groups who tend to be involved in these movements (mostly cis, straight, white, ablebodied men) and how they project their presumptions as to the universality of their experiences onto others. From there, he undercut the assumption that “morphological freedom” will or even can have any real meaning disconnected from the power and politics of normalization.
Once more members of a society are biotechnologically altering themselves than those who choose not to, there will be social pressures acting against that latter group. They will face repercussions, unless we engage in active and meaningful choices to make “morphological freedom” more than an assumptive hope. During the Q&A we discussed Jo Freeman’s The Tyranny of Structurelessness as a starting point for these kind of power relations about bodily constructions.
And that was the end of that weekend’s talks. We’re currently in the process of building what’s next.