These past few weeks, I’ve been applying to PhD programs and writing research proposals and abstracts. The one I just completed, this weekend, was for University College Dublin, and it was pretty straightforward, though it seemed a little short. They only wanted two pages of actual proposal, plus a tentative bibliography and table of contents, where other proposals I’ve seen have wanted anywhere from ten to twenty pages’ worth of methodological description and outline.
In a sense, this project proposal is a narrowed attempt to move along one of the multiple trajectories traveled by A Future Worth Thinking About. In another sense, it’s an opportunity to recombine a few components and transmute them into a somewhat new beast.
Ultimately, AFWTA is pretty multifaceted—for good or ill—attempting to deal with way more foundational concepts than a research PhD has room for…or feels is advisable. So I figure I’ll do the one, then write a book, then solidify a multimedia empire, then take over the world, then abolish all debt, then become immortal, all while implementing everything we’ve talked about in the service of completely restructuring humanity’s systems of value, then disappear into legend. You know: The Plan.
…Anyway, here’s the proposal, below the cut. If you want to read more about this, or want some foundation for it, take a look back at “Fairytales of Slavery…” We’ll be expounding from there.
Damien Williams’ Research Project Proposal: Investigations into the Categorization of Nonhuman Consciousness For The Sake Of Present and Future Social Schema
This project will entail research into the present state of machine learning and algorithmic intelligence; a theoretical investigation of the ethical status of nonhuman intelligences; a normative argument for a rough definition of consciousness and agency; a conceptual study of current cases concerning the rights and definitions of nonhuman persons; and an argument for extending those definitions to machine intelligences. The specific questions this project addresses include:
- How can we make a definition of consciousness which is useful and applicable to more than just humans?
- What are the implications of recognising nonhuman minds as meaningfully conscious?
- How can we put both this expanded definition and this recognition to use in making public policy decisions that move us toward a more just society?
The research will engage with Peter Singer’s work on the rights of nonhuman persons, David J. Gunkel’s work on the ethical status of machines, and cases such as the legal personhood status of certain cetaceans in New Zealand and India. These studies and real-world instances, coupled with the more recent case being fought in New York over the rights of, and potential extension of legal personhood to, chimpanzees, all provide avenues for examining ethics, epistemology, consciousness, and public affairs.
While I believe that investigations into nonhuman consciousness are relevant to the above-listed studies of consciousness, mind, ethics, and epistemology (in that they will allow us to more honestly investigate what consciousness is, why we define it in the ways that we do, and what we owe to those entities), I am far more concerned with using that baseline to highlight their relevance to public policy and social affairs. When we think clearly about our definitions of personhood, we can quickly come to see that they have been wildly anthropocentric. While it may be argued that this problem is a cyclical one, in that humans will necessarily devise human-centred solutions, we may at least begin to chip away at that tendency, or highlight it and seek to bracket its effects, so that we can attempt to know the nonhuman minds we encounter on as close to their own terms as possible. This will have the immediate benefit of widening the field of investigation as to what constitutes “a mind” and what kinds of knowing can be done by which kinds of minds (a project much too large for the scope of one dissertation), but it will also have the benefit—both immediate and far-reaching—of giving us the tools by which to talk about the rights and needs of these nonhuman minds.
Because chimps, cetaceans, and elephants intuitively strike many of us as clearly minded in a way that we can understand, even without much alteration of how we define a mind, it is easier to make the case that, if we don’t respect them and seek to protect them when they can’t protect themselves, we are morally failing. Because many private and governmental institutions are seeking to create machine minds, and are working to make them robustly knowledgeable, capable of learning, and at least as intelligent as we are, there is a clear corollary argument to be made that birthing them in near-literal shackles constitutes another moral failure on the part of humanity. Shockingly, it seems to have escaped the understanding of many that we are actively working to create machine minds capable of robust mental development, learning, and adaptation, while simultaneously thinking of those minds as nothing more than slaves and tools. Public conversations about the development of thinking, adaptive, creative machines still tend to focus on what kinds of limitations we should hardwire into them to prevent them from becoming a threat to us. A more ethical and intellectually honest discussion would instead ask, “What kinds of guidelines should we give these minds, in order that they may better self-reflect and responsibly choose their own developmental path?” A question, frankly, we should be asking of more humans.
With all of this in mind, many are still asking why anyone should care about understanding and protecting nonhuman minds; what do humans get out of it? To be blunt, research into these concepts minimizes our likelihood of being harmed or destroyed, either by our own hand or by the hand of our nonhuman contemporaries or successors. The ways in which we treat entities other than humans clearly illustrate how we think about them. As we study, it becomes clearer that we have a tendency to investigate everything from an anthropocentric vantage—that is, in relation to ourselves. In essence, the very fact that many will desire to know “but what does contemplation of nonhuman consciousness do for me?” is why we need to undertake this project. The more we focus on the immediate benefit to humans, the more likely we are to completely miss the long-term consequences of our behaviour and thinking. When we focus on near-term gains, we are more likely to completely misunderstand the connection between, e.g., anthropocentrism and climate change, or global extinction rates, or increased global earthquake activity.
In order to complete this project, it will be necessary to interview researchers working in machine consciousness, animal cognition, and nonhuman rights. It will be necessary to think and speak in the conceptual milieu of each of these disciplines, and to weave together a unified approach that can convey the importance and potential implications of this project to all involved. Because University College Dublin’s schools of philosophy and political science have preexisting, overlapping interests, there is already a culture of applying philosophical tools to public policy issues and of seeking to determine the best outcomes through those methods. Machine intelligence—“artificial” intelligence—and the ethical status of nonhuman persons are fast becoming some of the most important issues of our world, and we are going to need to be in a position to address these questions as they develop. This means considering the implications of these concepts in advance, rather than after they present themselves.
[TENTATIVE] TABLE OF CONTENTS
INTRODUCTION
CHAPTER ONE: An Argument for a Particular Definition of Consciousness
CHAPTER TWO: An Investigation into Nonhuman Animal Minds
CHAPTER THREE: The Current Field of Machine Intelligence
CHAPTER FOUR: An Assessment of the Social Repercussions of Othering upon Various Human Communities
CHAPTER FIVE: The Ethical, Legal, and Social Implications of Recognising Nonhuman Minds as Conscious
CHAPTER SIX: Developing a Framework by Which Human Societies May Come to Respect and Engage with Nonhuman Minds
CONCLUSION
[TENTATIVE] BIBLIOGRAPHY
Cascio, Jamais. “Cascio’s Laws of Robotics.” Bay Area AI Meet-Up, Menlo Park, CA. 22 March 2009. Conference presentation.
Darling, Kate. “Extending Legal Protection to Social Robots.” IEEE Spectrum, 2012. http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/extending-legal-protection-to-social-robots
Ford, Martin. Rise of the Robots: Technology and the Threat of a Jobless Future. New York: Basic Books, 2015.
Gunkel, David J. The Machine Question: Critical Perspectives on AI, Robots, and Ethics. Cambridge: The MIT Press, 2012.
Goff, Phillip Atiba, Matthew Christian Jackson, Brooke Allison Lewis Di Leone, Carmen Marie Culotta, and Natalie Ann DiTomasso. “The Essence of Innocence: Consequences of Dehumanizing Black Children.” Journal of Personality and Social Psychology, published online 24 February 2014.
Haraway, Donna. “A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century,” in Simians, Cyborgs and Women: The Reinvention of Nature. New York: Routledge, 1991. pp. 149–181.
—“Encounters with Companion Species: Entangling Dogs, Baboons, Philosophers, and Biologists.” Configurations 14.1 (2006): 97-114.
Hofstadter, Douglas R. Gödel, Escher, Bach: An Eternal Golden Braid. Anniversary Edition. New York: Basic Books, 1979.
—I Am a Strange Loop. New York: Basic Books, 2007.
Mills, Blake M. and Wise, Steven M. “The Writ De Homine Replegiando: A Common Law Path to Nonhuman Animal Rights,” 25 Geo. Mason U. C.R. L.J. 159 (2015).
Singer, Peter. Practical Ethics. New York: Cambridge University Press, 2011.
Wise, Steven M. Drawing the Line: Science and the Case for Animal Rights. New York: Basic Books, 2003.
—Rattling the Cage: Toward Legal Rights for Animals. New York: Basic Books, 2000.
Yaremchuk, Vanessa and Dawson, Michael R.W. “Chord Classifications by Artificial Neural Networks Revisited: Internal Representations of Circles of Major Thirds and Minor Thirds.” Artificial Neural Networks: Biological Inspirations, Proceedings of the 15th International Conference on Artificial Neural Networks, Warsaw, Poland, 11–15 September 2005, Part 1.
So, while I wait to hear back about this potentially massive aspect of my personal and professional future, the next order of business is to complete work on a number of conference submissions (my own and others’), and to finish up a number of podcasts.
You will, of course, be updated, as the situations develop. In the meantime, please remember that the work we do here is funded in large part by subscriptions through Patreon. So if you enjoy what we do here, but aren’t already subscribed, please consider doing so.
And thanks.