Can non-human agents using computer language exemplify "knowing about the world"?
[....] “But the general question of whether non-human agents using the Wolfram Language,
or any other computer language, ‘know about the world’ in some suitable sense,
that one still stands.”
My response:
I understand knowledge, with Raymond Tallis, as
fundamentally a mode of explicitness, of explicit-making consciousness. To
elaborate a bit: after Grice, and in the words of Tallis, “linguistic
meaning in the real world does not reside in the behavior of the symbols or
expressions of which languages are composed—they are not located in ‘the system
of symbols’ or its component terms—but in people who use languages to mean
things, and the worlds they live in. This is because the specification of
linguistic meanings requires that they are meant (by someone). What is more, in order that I
should be able to determine what you mean, I have to intuit what you mean to
mean.” This involves, as Searle shows, getting a listener to recognize my
intention to communicate just those things I intended to say in the act of
communication. One cannot ignore the speaking subject: “Our utterances are invested
with, and exploit, an ‘implicature’ in virtue of which we can always imply more
than we say. Verbal meaning, in short, resides in acts performed by human beings
who draw upon their knowledge of the world and make presuppositions about the
knowledge possessed by their interlocutors.”
If one believes, as I do with Tallis (among others), and again after Grice (or Searle, for that matter), that “[m]eaning
cannot be separated from the psyche of the one who emits meaning, or from the
psyche of the one who receives it,” and that our concept of knowledge is
intimately tied to the various forms of memory (e.g., factual, experiential, and
objectual), to emotions, thoughts, beliefs, and imagination, “the general
question of whether non-human agents using the Wolfram Language, or any other
computer language, ‘know about the world’ in some suitable sense” lacks any
standing whatsoever. The question makes sense only if one thinks of meaning
(which is, as Tallis says, ‘a quintessential feature of human consciousness’)
“in purely linguistic terms and language being primarily a system of symbols.”
One, it seems, has to hold a (or something like a) “computational theory of mind” to imagine that a computer language might exemplify knowledge about the world (the relevant ‘knowledge’ here can only be metaphorical or secondary and derivative, its meaning parasitic on the knowledge possessed by those who program the computers, and so on). In short, knowledge requires “an enworlded self.” More explicitly:
“Knowledge begins with the sense of there being something
beyond how things appear to us: it
begins with the concept of an object that is other than the self who entertains
the notion of an object. Implicit in the idea of the object is the intuition of
the subject contrasted with the object; more precisely, the Existential
Intuition ‘That I am this….’ [the nature and origin of which are discussed in
Tallis’s 2004 volume, I Am: A Philosophical Inquiry into First-Person
Being] Object knowledge [even Kleinian ‘internal objects’!] is also
permeated [as ‘Wittgensteinians’ remind us] by a sense of publicness—of a shared
world—that is not available to asocial sentience or asocial neural activities
[or an electronic device that performs high-speed arithmetical and logical
operations].”
Intentionality is a feature of perceptions, of
propositional attitudes such as beliefs and desires, and of utterances such as
assertions. This necessarily implicates consciousness, consciousness of something…. Computers are without
minds, the most conspicuous feature of which is consciousness. And consciousness
cannot be reduced to material, biological, or neurological properties: in other words, materialism cannot account for the “indexicality of human consciousness” in the sense of being “here” and “now,” as Tallis says, akin to the Da-sein Heidegger identifies as the
essence of the human being (Tallis provides compelling arguments against
attempts to neurologize ‘here’ and indexicality in general). Computers by
definition can’t have first-person experience: a “narrative center of gravity”
requires the higher-order activity of a self….
See “The Chinese Room all over again?” by Catarina Dutilh Novaes at the New APPS blog.
Update:
Professor Dutilh Novaes has replied to my comment as
follows:
“But to stipulate that intentionality must be
exclusively to humans from the start is to beg the question on precisely what is
at stake, i.e. can non-human agents instantiate phenomena that are relevantly
similar to human cognition? That's one of the points eloquently made by M. Boden
in the paper I linked to above.”
My response:
Perhaps I’m obtuse, but I fail to see where Boden
“eloquently makes that point.” A computer can only instantiate phenomena that
are relevantly similar to human cognition to the extent that it is human beings
who program computers, and “similar” is then used rather loosely, if not figuratively. For instance, we sometimes hear it said that computers “follow
rules,” but computers
“cannot correctly be described as following rules any more
than planets can correctly be described as complying with laws. The orbital
motion of the planets is described by
the Keplerian laws, but the planets do not comply with the laws. Computers were not
built to ‘engage in rule-governed manipulation of symbols,’ they were built to
produce results that will coincide
with rule-governed, correct
manipulation of symbols. For computers can no more follow a rule than a mechanical
calculator can. A machine can execute operations that accord with a rule,
provided all the causal links built into it function as designed and assuming
that the design ensures the regularity in accordance with the chosen rule or
rules. But for something to constitute following a rule, the mere production of
a regularity in accordance with a rule is not sufficient. A being can be said
to be following a rule only in the context of a complex practice involving
actual and potential activities of justifying, noticing mistakes and correcting
them by reference to the rule, criticizing deviations from the rule, and if
called upon, explaining an action as being in accordance with the rule and
teaching others what counts as following a rule. The determination of an act as
being correct, in accordance with the rule, is
not a causal determination but a logical one. Otherwise we should have to
surrender to what results our computers produce.” (Bennett and Hacker)
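To make the distinction concrete, here is a minimal sketch of my own (in Python; it is my illustration, not anything from Bennett and Hacker): a program whose outputs accord with the rule of addition without thereby following that rule, the correctness of its output being a judgment we make by reference to the rule, from outside the causal process.

```python
# A minimal sketch (my own illustration, not Bennett and Hacker's) of the
# distinction between acting IN ACCORDANCE WITH a rule and FOLLOWING a rule.

def adder(a: int, b: int) -> int:
    # A purely causal process: the machine's operations yield results that
    # coincide with correct, rule-governed manipulation of symbols. The
    # machine cannot justify its result, notice or correct a mistake, or
    # explain its behavior by reference to the rule of addition.
    return a + b

# Whether the output is *correct* is a logical (normative) determination,
# made by us against the rule of addition, not a causal one. If faulty
# hardware made adder(2, 3) return 6, the rule would not have changed; we
# would say the machine malfunctioned, not that it broke a rule it was
# following.
print(adder(2, 3))  # 5
```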
The use of language that suggests, for instance, that
computers instantiate phenomena “relevantly similar to human cognition” is
fairly harmless until it is taken literally, leading us to suppose that it is a fact, or even a genuine possibility, that “computers really think, better and faster than
we do, that they truly remember, and, unlike us, never forget, that they
interpret [or understand] what we type in, and sometimes misinterpret [or
misunderstand] it, taking what we wrote to mean something other than we meant.
Then the [computer] engineers’ [or
scientists’] otherwise harmless style of speech ceases to be an amusing
shorthand and becomes a potentially pernicious conceptual confusion,” as is, I
think, the case here.
Dennett would have us speak of Deep Blue as “playing” chess, just like Kasparov, but the computer only “‘plays’ chess in the sense
that the microwave ‘cooks’ soup, though the programming is vastly more
complicated.” (Daniel Robinson) What’s “stipulative” is the “intentional
stance,” fashioned, in part, so as to make it appear plausible that machines
(among other things) are, like us, “intelligent systems.” In Robinson’s words,
“[c]onsider the broad, various, cultural, and dispositional factors that need to be
recruited in order to qualify an activity as ‘play,’ and then array these
against whatever ‘process’ gets Deep Blue to have the Bishop move to QP3.” And
then, relatedly and further, we might ask, “If Spassky and Kasparov are doubtful
as to whether computers are ‘playing’ chess, is it not Dennett who must rethink
the matter?”
It’s on the order of a category mistake to think intentionality applies to non-human agents such as computers (although it does apply, in some degree, to at least some non-human animals), Dennett’s “intentional stance” and nonsense
about the fictional character of folk psychology notwithstanding: the ascription
of psychological attributes is not a matter of an interpretative stance, heuristic overlays, or theoretical posits (it’s not surprising that Boden uncritically
cites Dennett on this score). One does not merely adopt an “intentional stance”
in the use of psychological predicates.* But my point concerns, in the first instance, consciousness (intentionality being one feature or property of consciousness) rather than intentionality as such, at least insofar as some mental phenomena are not obviously intentional in any conventional sense (e.g., moods or sensations).
In any case, it would be more precise to say, after Bennett and Hacker, that
what is intentional is “the psychological attribute that has an intentional
object.” Therefore,
“[o]ne cannot intelligibly ascribe ‘intentionality’ to
molecules, cells, parts of the brain, thermostats or computers. Not only is it a
subclass of psychological attributes that are the appropriate bearers of
intentionality and not animals or things, but, further, only animals, and fairly
sophisticated animals at that, and not parts of animals, let alone molecules,
thermostats or computers, are the subjects of such attributes. …[I]t makes no
sense to ascribe belief, fear, hope, suspicion, etc. to molecules, [contra
Searle] the brain or its parts, thermostats or computers.”
* For the full critique of Dennett on this score, see the
first appendix to M. R. Bennett and P.M.S. Hacker’s Philosophical Foundations of
Neuroscience (2003). I agree with Tallis, who writes, “It is difficult to know
why this argument has been taken seriously.” See too the debate in Maxwell
Bennett, Daniel Dennett, Peter Hacker, and John Searle (with Daniel Robinson),
Neuroscience and Philosophy: Brain, Mind,
and Language (2007).
Further Reading:
- Bennett, M.R. and P.M.S. Hacker. Philosophical Foundations of Neuroscience. Malden, MA: Blackwell, 2003.
- Bennett, Maxwell, Daniel Dennett, Peter Hacker, John Searle, and Daniel Robinson. Neuroscience and Philosophy: Brain, Mind, and Language. New York: Columbia University Press, 2007.
- Descombes, Vincent (Stephen Adam Schwartz, tr.). The Mind’s Provisions: A Critique of Cognitivism. Princeton, NJ: Princeton University Press, 2001.
- Gillett, Grant. Subjectivity and Being Somebody: Human Identity and Neuroethics. Exeter, UK: Imprint Academic, 2008.
- Gillett, Grant. The Mind and Its Discontents. New York: Oxford University Press, 2009.
- Grice, Paul. Studies in the Way of Words. Cambridge, MA: Harvard University Press, 1989.
- Hacker, P.M.S. Human Nature: The Categorial Framework. Malden, MA: Blackwell, 2007.
- Horst, Steven. Beyond Reduction: Philosophy of Mind and Post-Reductionist Philosophy of Science. Oxford, UK: Oxford University Press, 2007.
- Hutto, Daniel D. The Presence of Mind. Amsterdam: John Benjamins, 1999.
- Hutto, Daniel D. Beyond Physicalism. Amsterdam: John Benjamins, 2000.
- Hutto, Daniel D. Folk Psychological Narratives: The Sociocultural Basis of Understanding. Cambridge, MA: MIT Press, 2008.
- Hutto, Daniel D., ed. Narrative and Folk Psychology. Exeter, UK: Imprint Academic, 2009.
- Pardo, Michael S. and Dennis Patterson. Minds, Brains, and Law: The Conceptual Foundations of Law and Neuroscience. New York: Oxford University Press, 2013.
- Robinson, Daniel N. Consciousness and Mental Life. New York: Columbia University Press, 2008.
- Searle, John R. Intentionality: An Essay in the Philosophy of Mind. Cambridge, UK: Cambridge University Press, 1983.
- Searle, John R. The Rediscovery of the Mind. Cambridge, MA: MIT Press, 1992.
- Tallis, Raymond. The Explicit Animal: A Defence of Human Consciousness. 2nd ed. New York: St. Martin’s Press, 1999.
- Tallis, Raymond. I Am: A Philosophical Inquiry into First-Person Being. Edinburgh: Edinburgh University Press, 2004.
- Tallis, Raymond. The Knowing Animal: A Philosophical Inquiry into Knowledge and Truth. Edinburgh: Edinburgh University Press, 2005.
- Tallis, Raymond. Aping Mankind: Neuromania, Darwinitis and the Misrepresentation of Humanity. Durham, England: Acumen, 2011.