Legal Personhood for “Autonomous Artificial Agents”
I wanted to alert our readers to an intriguing online symposium that began today at Concurring Opinions (although the announced schedule was February 14–16) on Samir Chopra and Laurence F. White’s book, A Legal Theory for Autonomous Artificial Agents (Ann Arbor, MI: University of Michigan Press, 2011). I’ve yet to read the book (it’s on order). In the comments to the symposium announcement I made a few preliminary remarks (as did A.J. Sutter, another longstanding CO commenter), to the last of which Professor Chopra responded as follows:
“Patrick: I think one fundamental point of disagreement between us might be my refusal to consider human intentionality and morality some sort of singularity in the natural order, the attainment of which lies entirely beyond non-carbon based entities.” [….]
This is indeed a fundamental point of disagreement. I’m considerably more than a wee bit concerned when concepts and categories intrinsic to morality, ontology (if not metaphysics), and psychology (consciousness, intentionality, autonomy, moral agency, and so on) are taken from the domain of the human (and, to a limited extent, nonhuman) animal world and applied to the realm of “artificial intelligence” (AI) technology. In other words, their natural domain of application (pun intended) does, in fact, rule out both literal and metaphorical extension into the realm of “non-carbon based entities.”
Without going into the details or possible arguments, we might find it tempting, or at least easier, to extend our moral and psychological concepts and categories beyond the human and nonhuman animal world into the domain of AI technology if our understanding of the mind (including consciousness and intentionality) is beholden to metaphors, models, or pictures currently fashionable in some quarters of philosophy, cognitive science, and psychology. One such model comes courtesy of “cognitive naturalism,” an “interdisciplinary amalgam of psychology, artificial intelligence, neuroscience, and linguistics,” the central hypothesis of which is “that thought can be understood in terms of computational procedures on mental representations,” a view the philosopher Paul Thagard dubs CRUM, for the Computational-Representational Understanding of Mind. On this model, mental representations are like (or virtually identical to) data structures, the mind’s putative “computational procedures” are algorithms, and thus “thinking” is tantamount to running programs. The current and fairly uncritical fascination with the neurosciences, evolutionary psychology, and reductionist theories in philosophy of mind together contribute to an intellectual climate, and to disciplinary inquiries, that directly or implicitly sanction or legitimate the legal endeavor to ascribe moral autonomy and ethical agency to technological programs and devices like robots.
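To make concrete just how deflationary the CRUM picture is, here is a toy sketch of what it amounts to in practice. This is purely illustrative (the names Belief and modus_ponens are my own invention, not Thagard’s): a “belief” is modeled as a data structure, and “inference” is nothing more than an algorithm iterating over such structures.

```python
# Toy illustration of the CRUM picture: a "belief" as a data structure,
# "inference" as an algorithm over such structures. Purely illustrative;
# the names Belief and modus_ponens are invented for this sketch.
from dataclasses import dataclass


@dataclass(frozen=True)
class Belief:
    proposition: str


def modus_ponens(beliefs, rules):
    """Derive new Beliefs by repeatedly applying if-then rules."""
    derived = set(beliefs)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if Belief(antecedent) in derived and Belief(consequent) not in derived:
                derived.add(Belief(consequent))
                changed = True
    return derived


beliefs = {Belief("it is raining")}
rules = [("it is raining", "the ground is wet")]
print(Belief("the ground is wet") in modus_ponens(beliefs, rules))  # True
```

On the CRUM hypothesis, nothing of philosophical importance separates this loop from thinking itself; that equation is precisely what the argument above calls into question.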
I’m not claiming that these new technologies don’t raise novel moral and legal problems for which we may need to craft a fairly new conceptual (including legal) vocabulary. But such an enterprise would necessarily eschew simply importing existing moral and psychological principles, predicates, and concepts (as presuppositions, assumptions, or axioms) into the world of technology and law. And such an enterprise will have to avoid, at the very least, the siren call of mind-brain reductionism. In other words, consciousness, intentionality, and normativity are decisive (i.e., basic or fundamental) properties of our mental life that rule out the plausibility of reductionist or eliminativist “hypotheses.” (In Daniel Robinson’s words: ‘It cannot even be said that they are working hypotheses, because a working hypothesis is one that will rise or fall on the basis of relevant evidence, and there is no “evidence” as such that could tell for or against “hypotheses” of this sort.’)
I hope to write more on this topic in the near future.
*There is now a post up by James Grimmelmann at Concurring Opinions.
Further Reading:
- Bennett, M.R. and P.M.S. Hacker. Philosophical Foundations of Neuroscience. Malden, MA: Blackwell, 2003.
- Bennett, Maxwell, Daniel Dennett, Peter Hacker, John Searle, and Daniel Robinson. Neuroscience and Philosophy: Brain, Mind and Language. New York: Columbia University Press, 2007.
- Borgmann, Albert. Technology and the Character of Contemporary Life. Chicago, IL: University of Chicago Press, 1984.
- Buller, David J. Adapting Minds: Evolutionary Psychology and the Persistent Quest for Human Nature. Cambridge, MA: MIT Press, 2005.
- Descombes, Vincent (Stephen Adam Schwartz, tr.). The Mind’s Provisions: A Critique of Cognitivism. Princeton, NJ: Princeton University Press, 2001.
- Dupré, John. Human Nature and the Limits of Science. Oxford, UK: Clarendon Press, 2001.
- Feenberg, Andrew. Questioning Technology. New York: Routledge, 1999.
- Finkelstein, David H. Expression and the Inner. Cambridge, MA: Harvard University Press, 2003.
- Heidegger, Martin. The Question Concerning Technology and Other Essays. New York: Harper and Row, 1977.
- Horst, Steven. Beyond Reduction: Philosophy of Mind and Post-Reductionist Philosophy of Science. Oxford, UK: Oxford University Press, 2007.
- Hutto, Daniel D. The Presence of Mind. Amsterdam: John Benjamins, 1999.
- Hutto, Daniel D. Beyond Physicalism. Amsterdam: John Benjamins, 2000.
- Hutto, Daniel D. Folk Psychological Narratives: The Sociocultural Basis of Understanding. Cambridge, MA: MIT Press, 2008.
- Ihde, Don. Technology and the Lifeworld: From Garden to Earth. Bloomington, IN: Indiana University Press, 1990.
- Midgley, Mary. Science and Poetry. London: Routledge, 2001.
- Pardo, Michael S. and Dennis Patterson, “Minds, Brains, and Norms” (July 10, 2009). Neuroethics. Forthcoming. University of Alabama Public Law Research Paper. Available: http://ssrn.com/abstract=1432476.
- Pardo, Michael S. and Dennis Patterson. “Philosophical Foundations of Law and Neuroscience” (February 6, 2009). University of Illinois Law Review, 2010. University of Alabama Public Law Research Paper No. 1338763. Available: http://ssrn.com/abstract=1338763.
- Postman, Neil. Technopoly: The Surrender of Culture to Technology. New York: Alfred A. Knopf, 1992.
- Putnam, Hilary. Realism with a Human Face. Cambridge, MA: Harvard University Press, 1990.
- Putnam, Hilary. The Threefold Cord: Mind, Body, and World. New York: Columbia University Press, 1999.
- Rescher, Nicholas. The Limits of Science. Berkeley, CA: University of California Press, 1984.
- Rescher, Nicholas. Nature and Understanding: The Metaphysics and Method of Science. Oxford, UK: Clarendon Press, 2000.
- Robinson, Daniel N. Consciousness and Mental Life. New York: Columbia University Press, 2008.
- Tallis, Raymond. The Explicit Animal: A Defence of Human Consciousness. New York: St. Martin’s Press, 1999 ed.
- Tallis, Raymond. The Hand: A Philosophical Inquiry into Human Being. Edinburgh: Edinburgh University Press, 2003.
- Tallis, Raymond. I Am: A Philosophical Inquiry into First-Person Being. Edinburgh: Edinburgh University Press, 2004.
- Tallis, Raymond. The Knowing Animal: A Philosophical Inquiry into Knowledge and Truth. Edinburgh: Edinburgh University Press, 2005.
- Tallis, Raymond. Aping Mankind: Neuromania, Darwinitis and the Misrepresentation of Humanity. Durham, England: Acumen, 2011.
- Thagard, Paul. Mind: Introduction to Cognitive Science. Cambridge, MA: MIT Press, 2000.
- Travis, Charles. Unshadowed Thought: Representation in Thought and Language. Cambridge, MA: Harvard University Press, 2000.
- Turkle, Sherry. The Second Self: Computers and the Human Spirit. Cambridge, MA: MIT Press, 2005 ed.
- Velmans, Max. Understanding Consciousness. London: Routledge, 2000.
- Wedgwood, Ralph. The Nature of Normativity. New York: Oxford University Press, 2007.
- Winner, Langdon. The Whale and the Reactor: A Search for Limits in an Age of High Technology. Chicago, IL: University of Chicago Press, 1986.
- Ziman, John. Real Science: What It Is, and What It Means. Cambridge, UK: Cambridge University Press, 2000.