philosophy of engineering

Appendix A: An Imagined and Incomplete Conversation about “Consciousness” and “AI,” Across Time

Every so often, I think about one of the best things my advisor and committee members let me write and include in my actual doctoral dissertation, and I smile a bit, and since I keep wanting to share it out into the world, I figured I should put it somewhere more accessible.

So with all of that said, we now rejoin An Imagined and Incomplete Conversation about “Consciousness” and “AI,” Across Time, already (still, seemingly unendingly) in progress:

René Descartes (1637):
The physical and the mental have nothing to do with each other. Mind/soul is the only real part of a person.

Norbert Wiener (1948):
I don’t know about that “only real part” business, but the mind is absolutely the seat of the command and control architecture of information and the ability to reflexively reverse entropy based on context, and input/output feedback loops.

Alan Turing (1952):
Huh. I wonder if what computing machines do can reasonably be considered thinking?

Wiener:
I dunno about “thinking,” but if you mean “pockets of decreasing entropy in a framework in which the larger mass of entropy tends to increase,” then oh for sure, dude.

John von Neumann (1958):
Wow things sure are changing fast in science and technology; we should maybe slow down and think about this before that change hits a point beyond our ability to meaningfully direct and shape it—a singularity, if you will.

Clynes & Kline (1960):
You know, it’s funny you should mention how fast things are changing because one day we’re gonna be able to have automatic tech in our bodies that lets us pump ourselves full of chemicals to deal with the rigors of space; btw, have we told you about this new thing we’re working on called “antidepressants?”

Gordon Moore (1965):
Right now an integrated circuit has 64 transistors, and they keep getting smaller, so if things keep going the way they’re going, in ten years they’ll have 65 THOUSAND. :-O

Donna Haraway (1991):
We’re all already cyborgs bound up in assemblages of the social, biological, and technological, in relational reinforcing systems with each other. Also do you like dogs?

Ray Kurzweil (1999):
Holy Shit, did you hear that?! Because of the pace of technological change, we’re going to have a singularity where digital electronics will be indistinguishable from the very fabric of reality! They’ll be part of our bodies! Our minds will be digitally uploaded immortal cyborg AI Gods!

Tech Bros:
Wow, so true, dude; that makes a lot of sense when you think about it; I mean maybe not “Gods” so much as “artificial super intelligences,” but yeah.

90’s TechnoPagans:
I mean… Yeah? It’s all just a recapitulation of The Art in multiple technoscientific forms across time. I mean (*takes another hit of salvia*) if you think about the timeless nature of multidimensional spiritual architectures, we’re already—

DARPA:
Wait, did that guy just say something about “Uploading” and “Cyborg/AI Gods?” We got anybody working on that?? Well GET TO IT!

Disabled People, Trans Folx, BIPOC Populations, Women:
Wait, so our prosthetics, medications, and relational reciprocal entanglements with the technosocial systems of this world in order to survive make us cyborgs?! :-O

[Simultaneously:]

Kurzweil/90’s TechnoPagans/Tech Bros/DARPA:
Not like that.
Wiener/Clynes & Kline:
Yes, exactly.

Haraway:
I mean it’s really interesting to consider, right?

Tech Bros:
Actually, if you think about the bidirectional nature of time, and the likelihood of simulationism, it’s almost certain that there’s already an Artificial Super Intelligence, and it HATES YOU; you should probably try to build it/never think about it, just in case.

90’s TechnoPagans:
…That’s what we JUST SAID.

Philosophers of Religion (To Each Other):
…Did they just Pascal’s Wager Anselm’s Ontological Argument, but computers?

Timnit Gebru and other “AI” Ethicists:
Hey, y’all? There’s a LOT of really messed up stuff in these models you started building.

Disabled People, Trans Folx, BIPOC Populations, Women:
Right?

Anthony Levandowski:
I’m gonna make an AI god right now! And a CHURCH!

The General Public:
Wait, do you people actually believe this?

Microsoft/Google/IBM/Facebook:
…Which answer will make you give us more money?

Timnit Gebru and other “AI” Ethicists:
…We’re pretty sure there might be some problems with the design architectures, too…

Some STS Theorists:
Honestly this is all a little eugenics-y—like, both the technoscientific and the religious bits; have you all sought out any marginalized people who work on any of this stuff? Like, at all??

Disabled People, Trans Folx, BIPOC Populations, Women:
Hahahahah! …Oh you’re serious?

Anthony Levandowski:
Wait, no, nevermind about the church.

Some “AI” Engineers:
I think the things we’re working on might be conscious, or even have souls.

“AI” Ethicists/Some STS Theorists:
Anybody? These prejudices???

Wiener/Tech Bros/DARPA/Microsoft/Google/IBM/Facebook:
“Souls?” Pfffft. Look at these whackjobs, over here. “Souls.” We’re talking about the technological singularity, mind uploading into an eternal digital universal superstructure, and the inevitability of timeless artificial super intelligences; who said anything about “Souls?”

René Descartes/90’s TechnoPagans/Philosophers of Religion/Some STS Theorists/Some “AI” Engineers:

[Scene]


Read more of this kind of thing at:
Williams, Damien Patrick. Belief, Values, Bias, and Agency: Development of and Entanglement with “Artificial Intelligence.” PhD diss., Virginia Tech, 2022. https://vtechworks.lib.vt.edu/handle/10919/111528.

Previously, I told you about The Human Futures and Intelligent Machines Summit at Virginia Tech, and now that it’s over, I wanted to go ahead and put the full rundown of the events all in one place.

The goals for this summit were to start looking at the ways in which issues of algorithms, intelligent machine systems, human biotech, religion, surveillance, and more will intersect and affect us in the social, academic, and political spheres. The big challenge in all of this was seen as getting better at dealing with these issues in the university and public policy sectors in America, rather than seemingly getting worse at it, as we have so far.

Here’s the schedule. Full notes, below the cut.

Friday, June 8, 2018

  • Josh Brown on “the distinction between passive and active AI.”
  • Daylan Dufelmeier on “the potential ramifications of using advanced computing in the criminal justice arena…”
  • Mario Khreiche on the effects of automation, Amazon’s Mechanical Turk, and the Microlabor market.
  • Aaron Nicholson on how technological systems are used to support human social outcomes, specifically through the lens of policing in the city of Atlanta.
  • Ralph Hall on “the challenges society will face if current employment and income trends persist into the future.”
  • Jacob Thebault-Spieker on “how pro-urban and pro-wealth biases manifest in online systems, and how this likely influences the ‘education’ of AI systems.”
  • Hani Awni on the sociopolitical implications of excluding ‘relational’ knowledge from AI systems.

Saturday, June 9, 2018

  • Chelsea Frazier on rethinking our understandings of race, biocentrism, and intelligence in relation to planetary sustainability and in the face of increasingly rapid technological advancement.
  • Ras Michael Brown on using the religious technologies of West Africa and the West African Diaspora to reframe how we think about “hybrid humanity.”
  • Damien Williams on how best to use interdisciplinary frameworks in the creation of machine intelligence and human biotechnological interventions.
  • Sara Mattingly-Jordan on the implications of the current global landscape in AI ethics regulation.
  • Kent Myers on several ways in which the intelligence community is engaging with human aspects of AI, from surveillance to sentiment analysis.
  • Emma Stamm on the datafication of the self, and what about us might be uncomputable.
  • Joshua Earle on “Morphological Freedom.”

This weekend, Virginia Tech’s Center for the Humanities is hosting The Human Futures and Intelligent Machines Summit, and there is a link for the video cast of the events. You’ll need to download and install Zoom, but it should be pretty straightforward, other than that.

You’ll find the full Schedule, below the cut.

Late last month, I was at Theorizing the Web, in NYC, to moderate Panel B3, “Bot Phenomenology,” a panel of people I was very grateful and very lucky to be able to bring together. Johnathan Flowers, Emma Stamm, and Robin Zebrowski were my interlocutors in a discussion about the potential nature of nonbiological phenomenology. Machine consciousness. What robots might feel.

I led them through with questions like “What do you take phenomenology to mean?” and “What do you think of the possibility of a machine having a phenomenology of its own?” We discussed different definitions of “language” and “communication” and “body,” and unfortunately didn’t get to a conversation about how certain definitions of those terms mean that what would be considered language between cats is different from a cat communicating via signalling to humans.

It was a really great conversation and the live stream video for this is here, and linked below (for now, but it may go away at some point, to be replaced by a static YouTube link; when I know that that’s happened, I will update links and embeds, here).

My piece “Cultivating Technomoral Interrelations,” a review of Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting, has been up over at the Social Epistemology Review and Reply Collective for a few months, now, so I figured I should post something about it, here.

As you’ll read, I was extremely taken with Vallor’s book, and think it is a part of some very important work being done. From the piece:

Additionally, her crucial point seems to be that through intentional cultivation of the self and our society, or through our personally grappling with these tasks, we can move the world, a stance which leaves out, for instance, notions of potential socioeconomic or political resistance to these moves. There are those with a vested interest in not having a more mindful and intentional technomoral ethos, because that would undercut how they make their money. However, it may be that this is Vallor’s intent.

The audience and goal for this book seems to be ethicists who will be persuaded to become philosophers of technology, who will then take up this book’s understandings and go speak to policy makers and entrepreneurs, who will then make changes in how they deal with the public. If this is the case, then there will already be a shared conceptual background between Vallor and many of the other scholars whom she intends to help her do the hard work of changing how people think about their values. But those philosophers will need a great deal more power, oversight authority, and influence to effectively advocate for and implement what Vallor suggests, here, and we’ll need sociopolitical mechanisms for making those valuative changes, as well.

[Image of the front cover of Shannon Vallor’s TECHNOLOGY AND THE VIRTUES. Circuit pathways in the shapes of trees.]

This is, as I said, one part of a larger, crucial project of bringing philosophy, the humanities, and social sciences into wide public conversation with technoscientific fields and developers. While there have always been others doing this work, it is increasingly the case that these folks are being both heeded and given institutional power and oversight authority.

As we continue the work of building these systems, and in the wake of all these recent events, more and more like this will be necessary.

Shannon Vallor’s Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting is out in paperback, June 1st, 2018. Read the rest of “Cultivating Technomoral Interrelations: A Review of Shannon Vallor’s Technology and the Virtues” at the Social Epistemology Review and Reply Collective.

Earlier this month I was honoured to have the opportunity to sit and talk to Douglas Rushkoff on his TEAM HUMAN podcast. If you know me at all, you know this isn’t by any means the only team for which I play, or even the only way I think about the construction of our “teams,” and that comes up in our conversation. We talk a great deal about algorithms, bias, machine consciousness, culture, values, language, and magick, and the ways in which the nature of our categories deeply affect how we treat each other, human and nonhuman alike. It was an absolutely fantastic time.

From the page:

In this episode, Williams and Rushkoff look at the embedded biases of technology and the values programmed into our mediated lives. How has a conception of technology as “objective” blurred our vision to the biases normalized within these systems? What ethical interrogation might we apply to such technology? And finally, how might alternative modes of thinking, such as magick, the occult, and the spiritual, help us to bracket off these systems for pause and critical reflection? This conversation serves as a call to vigilance against runaway systems and the prejudices they amplify.

As I put it in the conversation: “Our best interests are at best incidental to [capitalist systems] because they will keep us alive long enough for us to buy more things from them.” Following from that is the fact that we build algorithmic systems out of those capitalistic principles, and when you iterate out from there—considering all attendant inequalities of these systems on the merely human scale—we’re in deep trouble, fast.

Check out the rest of this conversation to get a fuller understanding of how it all ties in with language and the occult. It’s a pretty great ride, and I hope you enjoy it.

Until Next Time.

I have a review of Ashley Shew’s Animal Constructions and Technological Knowledge, over at the Social Epistemology Review and Reply Collective: “Deleting the Human Clause.”

From the essay:

Animal Constructions and Technological Knowledge is Ashley Shew’s debut monograph and in it she argues that we need to reassess and possibly even drastically change the way in which we think about and classify the categories of technology, tool use, and construction behavior. Drawing from the fields of anthropology, animal studies, and philosophy of technology and engineering, Shew demonstrates that there are several assumptions made by researchers in all of these fields—assumptions about intelligence, intentionality, creativity and the capacity for novel behavior…

Shew says that we have consciously and unconsciously appended a “human clause” to all of our definitions of technology, tool use, and intelligence, and this clause’s presumption—that it doesn’t really “count” if humans aren’t the ones doing it—is precisely what has to change.

I am a huge fan of this book and of Shew’s work, in general. Click through to find out a little more about why.

Until Next Time.

So, many of you may remember that back in June of 2016, I was invited to the Brocher Institute in Hermance, Switzerland, on the shores of Lake Geneva, to take part in the Frankenstein’s Shadow Symposium sponsored by Arizona State University’s Center for Science and the Imagination as part of their Frankenstein Bicentennial project.

While there, I and a great many other thinkers in art, literature, history, biomedical ethics, philosophy, and STS got together to discuss the history and impact of Mary Shelley’s Frankenstein. Since that experience, the ASU team compiled and released a book project: a version of Mary Shelley’s seminal work that is filled with annotations and essays, and billed as being “For Scientists, Engineers, and Creators of All Kinds.”

[Image of the cover of the 2017 edited, annotated edition of Mary Shelley’s Frankenstein, “Annotated for Scientists, Engineers, and Creators of All Kinds.”]

Well, a few months ago, I was approached by the organizers and asked to contribute to a larger online interactive version of the book—to provide an annotation on some aspect of the book I deemed crucial and important to understand. As of now, there is a fully functional live beta version of the website, and you can see my contribution and the contributions of many others, there.

From the About Page:

Frankenbook is a collective reading and collaborative annotation experience of the original 1818 text of Frankenstein; or, The Modern Prometheus, by Mary Wollstonecraft Shelley. The project launched in January 2018, as part of Arizona State University’s celebration of the novel’s 200th anniversary. Even two centuries later, Shelley’s modern myth continues to shape the way people imagine science, technology, and their moral consequences. Frankenbook gives readers the opportunity to trace the scientific, technological, political, and ethical dimensions of the novel, and to learn more about its historical context and enduring legacy.

To learn more about Arizona State University’s celebration of Frankenstein’s bicentennial, visit frankenstein.asu.edu.

You’ll need to have JavaScript enabled and ad-blocks disabled to see the annotations, but it works quite well. Moving forward, there will be even more features added, including a series of videos. Frankenbook.org will be the place to watch for all updates and changes.

I am deeply honoured to have been asked to be a part of this amazing project, over the past two years, and I am so very happy that I get to share it with all of you, now. I really hope you enjoy it.

Until Next Time.

So by now you’re likely to have encountered something about the NYT op-ed piece calling for a field of study that focuses on the impact of AI and algorithmic systems, a stance that elides the existence of not only the communications and media studies people who focus on this work, but the whole entire disciplines of Philosophy of Technology and STS (rendered variously as “Science and Technology Studies” or “Science, Technology, and Society,” depending on a number of factors, but if you talk about STS, you’ll get responses from all of the above, about the same topics). While Dr. O’Neil has since tried to reframe this editorial as a call for businesses, governments, and the public to pay more attention to those people and groups, many have observed that no such argument exists anywhere in the article itself. Instead, what we have are lines claiming that academics (seemingly especially those in the humanities) are “asleep at the wheel.”

Instead of “asleep at the wheel” try “painfully awake on the side of the road at 5am in a part of town Lyft and Uber won’t come to, trying to flag down a taxi driver or hitchhike or any damn thing just please let me make this meeting so they can understand some part of what needs to be done.”* The former ultimately frames the humanities’ and liberal arts’ lack of currency and access as “well, why aren’t you all speaking up more?” The latter gets more to the heart of “I’m sorry we don’t fund your departments or engage with your research or damn near ever heed your recommendations; that must be so annoying for you, oh my gosh.”

But Dr. O’Neil is not the only one to write or say something along these lines—that there is somehow no one, and ought to be someone, out here doing the work of investigating algorithmic bias, or infrastructure/engineering ethics, or any number of other things that people in philosophy of technology and STS are definitely already out here talking about. So I figured this would be, at the least, a good opportunity to share with you something discussing the relationship between science and technology, STS practitioners’ engagement with the public, and the public’s engagement of technoscience. Part 1 of who knows how many.

[Cover of the journal Techné: Research in Philosophy and Technology]

The relationship between technology and science is one in which each intersects with, flows into, shapes, and affects the other. Not only this, but both science and technology shape and are shaped by the culture in which they arise and take part. Viewed through the lens of the readings we’ll discuss, it becomes clear that many scientists and investigators at one time desired a clear-cut relationship between science and technology, in which one flows from the other, the properties of the subcategory are fully determined by those of the framing category, and sociocultural concerns play no part.

Many investigators still want this clarity and certainty, but in the time since sociologists, philosophers, historians, and other investigators from the humanities and so-called soft sciences began looking at the history and contexts of the methods of science and technology, it has become clear that these latter activities do not work in an even and easily rendered way. When we look at the work of Sergio Sismondo, Trevor J. Pinch and Wiebe E. Bijker, Madeleine Akrich, and Langdon Winner, we can see that the social dimensions and intersections of science, culture, technology, and politics are and always have been crucially entwined.

In Winner’s seminal “Do Artifacts Have Politics?” (1980), we can see a major step forward along the path toward a model which takes seriously the social construction of science and technology, and the way in which we go about embedding our values, beliefs, and politics into the systems we make. On page 127, Winner states,

The things we call “technologies” are ways of building order in our world… Consciously or not, deliberately or inadvertently, societies choose structures for technologies that influence how people are going to work, communicate, travel, consume, [etc.]… In the processes by which structuring decisions are made, different people … possess unequal degrees of power [and] levels of awareness.

By this, Winner means to say that everything we do in the construction of the culture of scientific discovery and technological development is modulated by the sociocultural considerations that get built into them, and those constructed things go on to influence the nature of society, in turn. As a corollary to this, we can see a frame in which the elements within the frame—including science and technology—will influence and modulate each other, in the process of generating and being generated by the sociopolitical frame. Science will be affected by the tools it uses to make its discoveries, and the tools we use will be modulated and refined as our understandings change.

Pinch and Bijker write very clearly about the multidirectional interactions of science, technology, and society in their 1987 piece, “The Social Construction of Facts and Artifacts,” using the history of the bicycle as their object of study. Through their investigation of the messy history of bicycles, “safety bicycles,” inflated rubber tires, bicycle racing, and PR ad copy, Pinch and Bijker show that science and technology aren’t clearly distinguished anymore, if they ever were. They show how scientific studies of safety were less influential on bicycle construction and adoption than the social perception of the devices, meaning that politics and public perception play a larger role in what gets studied, created, and adopted than we used to admit.

They go on to highlight a kind of multidirectionality and interpretive flexibility, which they say we achieve by looking at the different social groups that intersect with the technology, and the ways in which they do so (pg. 34). When we do this, we will see that each component group is concerned with different problems and solutions, and that each innovation made to address these concerns alters the landscape of the problem space. How we define the problem dictates the methods we will use and the technology that we create to seek a solution to it.

[Black and white figures comparing the frames of a Whippet Spring Frame bicycle (left) and a Singer Xtraordinary bicycle (right), from “The Social Construction of Facts and Artifacts: Or How the Sociology of Science and the Sociology of Technology Might Benefit Each Other” by Trevor J. Pinch and Wiebe E. Bijker, 1987]


Akrich’s 1992 “The De-Scription of Technical Objects” (published, perhaps unsurprisingly, in a volume coedited by Bijker) engages the moral valences of technological intervention, and the distance between intent in design and “on the ground” usage. In her investigation of how people in Burkina Faso, French Polynesia, and elsewhere make use of technology such as generators and light boxes, we again see a complex interplay between the development of a scientific or technological process and the public adoption of it. On page 221 Akrich notes, “…the conversion of sociotechnical facts into facts pure and simple depends on the ability to turn technical objects into black boxes. In other words, as they become indispensable, objects also have to efface themselves.” That is, in order for the public to accept the scientific or technological interventions, those interventions had to become an invisible part of the framework of the public’s lives. Only when the public no longer had to think about these interventions did they become paradoxically “seen,” understood, as “good” science and technology.

In Sismondo’s “Science and Technology Studies and an Engaged Program” (2008), he spends some time discussing the social constructivist position that we’ve begun laying out, above—the perspective that everything we do and all the results we obtain from the modality of “the sciences” are constructed in part by that mode. Again, this would mean that “constructed” would describe both the data we organize out of what we observe, and what we initially observe at all. From page 15, “Not only data but phenomena themselves are constructed in laboratories—laboratories are places of work, and what is found in them is not nature but rather the product of much human effort.”

But Sismondo also says that this is only one half of the picture, then going on to discuss the ways in which funding models, public participation, and regulatory concerns can and do alter the development and deployment of science and technology. On page 19 he discusses a model developed in Denmark in the 1980s:

Experts and stakeholders have opportunities to present information to the panel, but the lay group has full control over its report. The consensus conference process has been deemed a success for its ability to democratize technical decision-making without obviously sacrificing clarity and rationality, and it has been extended to other parts of Europe, Japan, and the United States…

This all merely highlights the fact that, if the public is going to be engaged, then the public ought to be as clear and critical as possible in its understanding of the exchanges that give rise to the science and technology on which they are asked to comment.

The non-scientific general public’s understanding of the relationship between science and technology is often characterized much as I described at the beginning of this essay. That is, it is often said that the public sees the relationship as a clear and clean move from scientific discoveries or breakthroughs to a device or other application of those principles. However, this casting does not take into account the variety of things that the public will often call technology, such as the Internet, mobile phone applications, autonomous cars, and more.

While there are scientific principles at play within each of those technologies, it still seems a bit bizarre to cast them merely as “applied science.” They are not all devices or other single physical instantiations of that application, and even those that are singular are the applications of multiple sciences, and also concrete expressions of social functions. Those concretions have particular psychological impacts, and philosophical implications, which need to be understood by both their users and their designers. Every part affects every other part, and each of those parts is necessarily filtered through human perspectives.

The general public needs to understand that every technology humans create will necessarily carry within it the hallmarks of human bias. Regardless of whether there is an objective reality at which science points, the sociocultural and sociopolitical frameworks in which science gets done will influence what gets investigated. Those same sociocultural and sociopolitical frameworks will shape the tools and instruments and systems—the technology—used to do that science. What gets done will then become a part of the scientific and technological landscape to which society and politics will then have to react. In order for the public to understand this, we have to educate about the history of science, the nature of social scientific methods, and the impact of implicit bias.

My own understanding of the relationship between science and technology is as I have outlined: a messy, tangled, multivalent interaction in which each component influences and is influenced by every other component, in near simultaneity. This framework requires a willingness to engage multiple perspectives and disciplines, and perhaps to reframe the normative project of science and technology into one that appreciates and encourages a multiplicity of perspectives, with no single direction of influence between science, technology, and society. Once people understand this—that science and technology generate each other while influencing and being influenced by society—we can do the work of engaging them in a nuanced and mindful way, working together to prevent the most egregious depredations of technoscientific development, or at least to agilely respond to them, as they arise.

But to do this, researchers in the humanities need to be heeded. In order to be heeded, people need to know that we exist, and that we have been doing this work for a very, very long time. The named field of Philosophy of Technology has been around for 70 years, and it in large part foregrounded the concerns taken up and explored by STS. Here are just a few names of people to look at in this extensive history: Martin Heidegger, Bruno Latour, Don Ihde, Ian Hacking, Joe Pitt, and more recently, Ashley Shew, Shannon Vallor, Robin Zebrowski, John P. Sullins, John Flowers, Matt Brown, Shannon Conley, Lee Vinsel, Jacques Ellul, Andrew Feenberg, Batya Friedman, Geoffrey C. Bowker and Susan Leigh Star, Rob Kling, Phil Agre, Lucy Suchman, Joanna Bryson, David Gunkel, and so many others. Langdon Winner published “Do Artifacts Have Politics?” 37 years ago. This episode of the You Are Not So Smart podcast, featuring Shannon Vallor, Alistair Croll, and me, has all of us talking about the public impact of the aforementioned.

What I’m saying is that many of us are trying to do the work, out here. Instead of pretending we don’t exist, try using large platforms (like the NYT opinion page and well-read blogs) to highlight the very real work being attempted. I know for a fact the NYT has received article submissions about philosophy of tech and STS. Engage them. Discuss these topics in public, and know that there are many voices trying to grapple with and understand this world, and we have been, for a really damn long time.

So you see that we are still talking about learning and thinking in public. About how we go about getting people interested and engaged in the work of the technology that affects their lives. But there is a lot at the base of all this about what people think of as “science” or “expertise,” where they think it comes from, and what they think of those who engage in or have it. If we’re going to do this work, we have to be able to have conversations with people who not only don’t value what we do, but who think what we value is wrongheaded, or even evil. There is a lot going on in the world, right now, in regards to science and knowability. For instance, late last year there was a revelation about the widespread use of dowsing by UK water firms (though if you ask anybody in the US, you’ll find it’s still in use, here, too).

And then this guy was trying to use systems of fluid dynamics and aeronautics to launch himself in a rocket, to prove that the earth is flat and that science isn’t real. Yeah. And while there’s a much deeper conversation to be had here about whether the social construction of the category of “science” can be understood as distinct from a set of methodologies and formulae, I really don’t think this guy is interested in having that conversation.

So let’s also think about the nature of how laboratory science is constructed, and what it can do for us.

In his 1983 “Give Me a Laboratory and I Will Raise the World,” Bruno Latour makes the claim that labs have their own agency. What Latour is asserting, here, is that the forces which coalesce within the framework of a lab become active agents in their own right. They are not merely subject to the social and political forces that go into their creation, but they are now active participants in the framing and reframing of those forces. He believes that the nature of inscription—the combined processes of condensing, translating, and transmitting methods, findings, and pieces of various knowledges—is a large part of what gives the laboratory this power, and he highlights this when he says:

The strength gained in the laboratory is not mysterious. A few people much weaker than epidemics can become stronger if they change the scale of the two actors—making the microbes big, and the epizootic small—and others dominate the events through the inscription devices that make each of the steps readable. The change of scale entails an acceleration in the number of inscriptions you can get. …[In] a year Pasteur could multiply anthrax outbreaks. No wonder that he became stronger than veterinarians. For every statistic they had, he could mobilize ten of them. (pg. 163–164)

This process of inscription is crucial for Latour; not just for the sake of what the laboratory can do of its own volition, but also because it is the mechanism by which scientists may come to understand and translate the values and concerns of another, which is, for him, the utmost value of science. In rendering the smallest things, such as microbes and diseases, legible on a large scale, and in making large-scale patterns individually understandable and reproducible, the presupposed distinctions of “macro” and “micro” are shown to be illusory. Latour believes that it is only through laboratory engagement that we can come to fully understand the complexities of these relationships (pg. 149).

When Latour begins laying out his project, he says sociological methods can offer science the tools to more clearly translate human concerns into a format with which science can grapple: “He who is able to translate others’ interests into his own language carries the day” (pg. 144). However, in the process of detailing what it is that Pasteurian laboratory scientists do in engaging the various stakeholders in farms, agriculture, and veterinary medicine, it seems that he has only described half of the project. Rather than merely translating the interests of others into our own language, evidence suggests that we must also translate our interests back into the language of our interlocutor.

So perhaps we can recast Latour’s statement as, “whosoever is able to translate others’ interests into their own language and is equally able to translate their own interests into the language of another, carries the day.” Thus we see that the work done in the lab should allow scientists and technicians to increase the public’s understanding both of what it is that technoscience actually does and why it does it, by presenting material that can speak to many sets of values.

Karin Knorr-Cetina’s assertion in her 1995 article “Laboratory Studies: The Cultural Approach to the Study of Science” is that the laboratory is an “enhanced” environment. In many ways this follows directly from Latour’s conceptualization of labs. Knorr-Cetina says that the constructed nature of the lab “‘improves upon’ the natural order,” because said natural order is, in itself, malleable, and capable of being understood and rendered in a multiplicity of ways (pg. 9). If laboratories are never engaging the objects they study “as they occur in nature,” this means that labs are always in the process of shaping what they study, in order to better study it (ibid). This framing of the engagement of laboratory science is clarified when she says:

Detailed description [such as that done in laboratories] deconstructs—not out of an interest in critique but because it cannot but observe the intricate labor that goes into the creation of a solid entity, the countless nonsolid ingredients from which it derives, the confusion and negotiation that often lie at its origin, and the continued necessity of stabilizing and congealing. Constructionist studies have revealed the ordinary working of things that are black-boxed as “objective” facts and “given” entities, and they have uncovered the mundane processes behind systems that appear monolithic, awe inspiring, inevitable. (pg. 12)

Thus, the laboratory is one place in which the irregularities and messiness of the “natural world” are ordered in such a way as to be able to be studied at all. However, Knorr-Cetina clarifies that “nothing epistemically special” is happening, in a lab (pg. 16). That is, while a laboratory helps us to better recognize nonhuman agents (“actants”) and forces at play in the creation of science, this is merely a fact of construction; everything that a scientist does in a lab is open to scrutiny and capable of being understood. If this is the case, then the “enhancement” gained via the conditions of the laboratory environment is merely a change in degree, rather than a difference in kind, as Latour seems to assert.

[Stock photo image of hundreds of scallops and two scallop fishers on the deck of a boat in the St Brieuc Bay.]

In addition to the above explorations of what the field of laboratory studies has to offer, we can also look at the works of Michel Callon and Sharon Traweek. Though primarily concerned with describing the network of actors and their concerns in the St Brieuc Bay scallop-fishing and -farming industries, Callon’s investigation can be seen as an example of Latour’s principle of bringing the laboratory out into the world, both in terms of the subjects of Callon’s investigation and the methods of those subjects. While Callon himself might disagree with this characterization, we can trace the process of selection and enframing of subjects and the investigation of their translation procedures, which we can see on page 20, when he says,

We know that the ingredients of controversies are a mixture of considerations concerning both Society and Nature. For this reason we require the observer to use a single repertoire when they are described. The vocabulary chosen for these descriptions and explanations can be left to the discretion of the observer. He cannot simply repeat the analysis suggested by the actors he is studying. (Callon, 1984)

In this way, we can better understand how laboratory techniques have become a component even of the study and description of laboratories.

When we look at a work like Sharon Traweek’s Beamtimes and Lifetimes, we can see that she finds value in bringing ethnographic methodologies into laboratory studies, and perhaps even laboratory settings. She discusses the history of the laboratory’s influence, arcing back to WWI and WWII, when scientists were tasked with coming up with more and better weapons, with their successes being used to push an ever-escalating arms race. As this process continued, the characteristics of what made a “good lab scientist” were defined and then continually reinforced, as being “someone who did science like those people over there.” In the building of the laboratory community, certain traits and behaviours come to be seen as ideal, and those who do not match those traits and expectations are regarded as necessarily doing inferior work. She says,

The field worker’s goal, then, is to find out what the community takes to be knowledge, sensible action, and morality, as well as how its members account for unpredictable information, disturbing actions, and troubling motives. In my fieldwork I wanted to discover the physicists’ “common sense” world view, what everyone in the community knows, and what every newcomer needs to learn in order to act in a sensible way, in order to be taken seriously. (pg. 8)

And this is also the danger of focusing too closely on the laboratory: the potential for myopia, for thinking that the best or perhaps even only way to study the work of scientists is to render that work through the lens of the lab.

While the lab is a fantastic tool and studies of it provide great insight, we must remember that we can learn a great deal about science and technology via contexts other than that of the lab. While Latour argues that laboratory science actually destabilizes the inside-the-lab/outside-the-lab distinction by showing that the tools and methods of the lab can be brought anywhere out into the world, it can be said that the distinction is reinstantiated by our focusing on laboratories as the sole path to understanding scientists. Much the same can be said for the insistence that systems engineers are the sole best examples of how to engage technological development. Thinking that labs are the only resource we have means that we will miss the behavior of researchers at conferences, retreats, in journal articles, and in other places where the norms of the scientific community are inscribed and reinforced. It might not be the case that scientists understand themselves as creating community rules in these fora, but this does not necessarily mean that they are not doing so.

The kinds of understandings a group has about themselves will not always align with what observations and descriptions might be gleaned from another’s investigation of that group, but this doesn’t mean that one of those has to be “right” or “true” while the other is “wrong” and “false.” The interest in studying a discipline should come not from that group’s “power” to “correctly” describe the world, but from understanding more about what it is about whatever group is under investigation that makes it itself. Rather than seeking a single correct perspective, we should instead embrace the idea that a multiplicity of perspectives might all be useful and beneficial, and then ask “To What End?”

We’re talking about Values, here. We’re talking about the question of why whatever it is that matters to you, matters to you. And how you can understand that other people have different values from each other, and we can all learn to talk about what we care about in a way that helps us understand each other. That’s not neutral, though. Even that can be turned against us, when it’s done in bad faith. And we have to understand why someone would want to do that, too.