A brief broadside on why AI systems or robots cannot—now or in the future—“read” our emotions
In their book Moral Machines: Teaching Robots Right from Wrong (Oxford University Press, 2009), Wendell Wallach and Colin Allen open the Introduction with the breathless statement that scientists at the Affective Computing Laboratory at the Massachusetts Institute of Technology (MIT) “are designing computers that can read human emotions,” as if this were a foregone conclusion awaiting only technical development or completion.
Wallach, a consultant and writer affiliated with Yale’s Interdisciplinary Center for Bioethics, and Allen, a Professor of History and Philosophy of Science and of Cognitive Science, also inform us that “today’s [computer] systems are approaching a level of complexity … that requires the systems to make moral decisions—to be programmed with ‘ethical subroutines’ to borrow a phrase from Star Trek.” This blurring of the boundary between contemporary science and science fiction, or the belief that much that was once science fiction on this score is no longer fiction but the very marrow of science itself, is now commonplace. Their argument depends, I will argue at a later date, on a rather implausible model of what it means for us to make “moral decisions,” as well as on an incoherent or question-begging application of the predicate “ethical.”
Before going any further, I should state that I do not believe “artificial intelligence” (hereafter, AI) replicates, or might in principle or soon replicate, human cognitive powers and abilities (or capacities). At most it may “replicate,” in a very attenuated or merely analogical sense, one aspect or feature of one particular cognitive ability. Even then, inasmuch as human cognitive abilities do not function in isolation but work more or less in tandem and within a larger cognitive, affective, and situational (and temporally ‘tensed’) human context, whatever replication takes place is not in any way emulative of human intelligence as such. AI is not about emulating or instantiating (peculiarly) human intelligence, but rather about the technological replication of mathematically amenable aspects of formal logic (as with algorithms). It is therefore a mistake to describe the process as “automated reasoning” (AI machines don’t ‘reason,’ they compute and/or process), if only because our best philosophical and psychological conceptions of rationality and reasoning cast a net, as the later Hilary Putnam often reminded us, far wider than anything that can, in fact or in principle, be scientized, logicized, or mathematized (i.e., formalized).
I want to briefly address the claim that AI systems can—or
soon will—“read human emotions.” By way of tilling the ground for our
discussion, it is not an insignificant fact that, in the words of P.M.S.
Hacker, “[t]he deepest students of the role of emotions in human life are the
novelists, dramatists, and poets of our culture” (Hacker confines his
examination of ‘the passions’ from the vantage point of philosophical anthropology
to Western civilization). A virtually identical point has been made by Jon Elster in his book Alchemies of the Mind: Rationality and the Emotions (1999):
“… [W]ith respect to an important subset of the emotions we
can learn more from moralists, novelists, and playwrights than from the
cumulative findings of scientific psychology. These emotions include regret,
relief, hope, disappointment, shame, guilt, pridefulness, pride, hybris, envy, jealousy, malice, pity,
indignation, wrath, hatred, contempt, joy, grief, and romantic love. By
contrast, the scientific study of the emotions can teach us a great deal about
anger, fear, disgust, parental love, and sexual desire (if we count the last two
as emotions). [….] I believe…that prescientific insights into the emotions are
not simply superseded by modern psychology [here Elster means largely what we
would call ‘scientific psychology’] in the way that natural philosophy has been
superseded by physics. Some men and women in the past have been superb students
of human nature, with more wide-ranging personal experience, better powers of
observation, and deeper intuitions than almost any psychologist I can think of.
This is only what we should expect: There is no reason why one century out of
twenty-five should have a privilege in wisdom and understanding. In the case of
physics, this argument does not apply.”
I would amend Elster’s account of the relevance of science
to the study of the emotions to narrow its range to those emotions we share
with nonhuman animals (for reasons I will not go into here) and further qualify
it with the following remark from Hacker:
“The constitutive complexity of human emotions, their
diverse relation to time, to knowledge and belief of a neurologically
uncircumscribable scope, to reasons and the evaluation of reasons, to somatic
and expressive perturbations, to motivation and decision, guarantee that there
can be no simple correlation [let alone causation!] between genetic,
physiological, or neural facts and an emotion [this comment is made with regard
to the efforts of developmental and evolutionary psychologists as well as
cognitive neuroscientists to identify a class of absolutely basic (‘natural kinds’ if you will) human emotions].”
In short, we can conclude that science does not and will not provide us with our best or most accurate knowledge and understanding of human emotions. One fundamental reason is that emotions often exhibit what Hacker calls “both compositional complexity and contextual or narrative complexity”:
“Compositional complexity is patent in the manner in which
emotions may involve cognitive and cogitative strands (perception, knowledge,
belief, judgment, imagination, evaluation, and thought); sensations and
perturbations; forms of facial, tonal, and behavioural manifestation; or
emotionally charged utterances that express one’s feelings; reasons and motives
for action; and intentionality and causality. The contextual complexity is
manifest in the manner in which emotions, in all their temporal diversity, are
woven into the tapestry of life. An emotional episode is rendered intelligible by reference to a past history—to previous relationships and commitments, to past deeds and encounters, and to antecedent emotional states. The loss of temper over a triviality may be made comprehensible by reference to long-standing, but suppressed, jealousy; one’s Schadenfreude (delight at the misfortune of another) by reference to one’s standing resentment at an insult. The intensity of one’s grief may be
explained by reference to the passion with which one loved. [….] For the most
part, understanding the emotions, as opposed to explaining their cortical and
physiological roots, is idiographic rather than nomothetic, and historical
rather than static.”
This suggests that the notion of AI systems or robots “reading emotions” is quite implausible if not impossible (I happen to think the latter), given the manner in which emotions are “woven into the tapestry of life.” Some might respond by asserting, more plausibly (and after the work of the psychologist Paul Ekman on the facial expressions of emotion, which informs today’s “emotion recognition” technology), that what is being “read” here are simply facial expressions and perhaps bodily comportment. But even granting that, the claim remains doubtful, if only because even episodic or “temporary” emotions “have characteristic multiple associations, manifestations, and forms of expression” both within and across cultures (and these are not static), and because we can conceal our emotions, say, by pretending to feel an emotion we do not feel, thereby mimicking emotions and emotional expressions. Moreover, and perhaps more
importantly, the “facial manifestations of emotions occur in a context that gives them meaning.” Hence facial recognition
software alone will not suffice to “read” our emotions, if only because, as
Hacker writes,
“[o]ur emotions are made evident not only by our countenance
and voice, but also by our posture and mien [all of which can be mimed and
mimicked by a decent actor, an adept criminal, a dishonest person or a ‘drama
queen’], the way we walk or sit, our gestures and gesticulations. So-called
body language, sermo corporis, as
Cicero dubbed it, is rich and variegated, with natural behavioural roots and
cultural modifications, constraints, refinements, and inventions. [….] Throughout
recorded history, posture and deportment were refined and constrained in order
to differentiate the aristocracy from the demos or plebs, imperial rulers from
the ruled, and men from women. Natural gestures and gesticulations of anger,
defiance, triumph, submission, grief, awe, and wonder were, from one period to
another, subject to various forms of social modification and restraint to mark
out the superior from the inferior, the cultivated from the uncouth.”
Thus,
“[f]acial expression, inarticulate vocal expression,
gesture, and mien constitute collectively an orchestra of possible
behavioural manifestations and expressions of agitations, of the
perturbations of temporary emotions, of enduring emotions, of moods, and of
emotional attitudes. In addition there are wholly conventional behavioural
signals by means of which we express our feelings. These include nodding or shaking
one’s head, thumbs up or down, pointing with index finger or—rudely—with thumb,
winking, beckoning, waving, and rude and obscene gestures of rejection,
mockery, and insult. Couple them with the articulate verbal expressions of
agitation, emotion, mood and attitude; the tone and speed of utterance; and the
volume of voice in which one speaks … and we have a veritable symphony for the
manifestation and expression of affections in general and of emotions in
particular. The orchestra is normally conducted in honest concord. The various
forms of discord are often marks of insincerity, which, for the unaccustomed,
is difficult to make. [….] One can wear
a veil but, when one doesn’t, one’s features are revealed. That one can
sometimes conceal one’s feelings does not imply that, when one does not, it is not
the very feelings themselves that are manifest—even though anger is not shaking
one’s fist and crying is not sadness.”
Hacker explains how the “behaviour of others, in all its
diversity and complexity, in a context that renders it intelligible,
constitutes the logical criteria for ascribing emotions to them.” These multifarious
logical criteria are not available to an AI machine or robot. Furthermore, our
emotions are not simply inferred from
the behavioural criteria we observe, as behaviour provides the (non-formal)
logical and non-inductive ground for
ascription of an emotion, and such criteria are defeasible in part because “there is a degree of opacity and sometimes even a form of
constitutional indeterminacy about
the emotions and their manifestation.” This “interpersonal opacity” is more
frequent and pronounced in cross-cultural encounters. In any case, the opacity
and indeterminacy of the emotions (or, put differently, their idiographic
character), whatever their depth and authenticity or the motives they give rise
to, can make for mutual misunderstanding between two people who love each other,
or between two close friends who know each other well:
“There need be no disagreement between them over the facts
of their relationship—but one interprets the manifold nuances of behavior and
attitude one way, and the other another way. There may be no additional data to
resolve the misunderstanding—all the facts are given. One person makes a
pattern of their emotional life one way, the other person another way. There
need be no further ‘fact of the matter.’”
This vividly illustrates, I think, the wild implausibility, if not nonsense, ensconced in the belief that AI machines or robots can now, or in the near future will, “read emotions.” The uniqueness of human nature and the role of emotions as part and parcel of the human condition, singled out here in terms of “the penumbra of opacity and indeterminacy surrounding the application of concepts of the emotions,” are an urgent reminder that
“there is such a thing as better and worse judgment about
the emotions of others. Greater sensitivity to fine shades of behaviour is
conducive to more refined insight into their hearts. Wide knowledge of mankind
and openness to what people tell of themselves make for better judgment. If one
knows a person well, one is more likely to be able to render his responses and
reactions intelligible than if one were a mere acquaintance [or an AI
machine!]. One may learn to look, and come to see what others pass over. One
may become sensitive to imponderable
evidence, to subtleties of glance, facial expression, gesture, and tone of
voice. One will then not have a better ‘theory of the emotions’ than others:
one will have become a connoisseur of the emotions.”
The capacity for and power of judgment is distinctively human and thus forever beyond the reach
of AI. And only a (human) person, not a robot, has the potential to one day
become “a connoisseur of the emotions.”