- Xref: sparky comp.ai:4692 comp.ai.philosophy:7147
- Path: sparky!uunet!pipex!bnr.co.uk!uknet!bhamcs!axs
- From: axs@cs.bham.ac.uk (Aaron Sloman)
- Newsgroups: comp.ai,comp.ai.philosophy
- Subject: Re: discussion with Penrose
- Message-ID: <BzqHws.KKF@cs.bham.ac.uk>
- Date: 23 Dec 92 22:46:02 GMT
- References: <1992Dec22.021109.4401@oracorp.com> <1992Dec22.221613.3198@udel.edu>
- Sender: news@cs.bham.ac.uk
- Organization: School of Computer Science, University of Birmingham, UK
- Lines: 133
- Nntp-Posting-Host: emotsun
-
- hughes@mercury.cis.udel.edu (John Hughes) writes:
-
- > Date: 22 Dec 92 22:16:13 GMT
- > Organization: University of Delaware, Newark
- > In article <1992Dec22.021109.4401@oracorp.com> daryl@oracorp.com
- > (Daryl McCullough) writes:
- > ....
- > > ..... There are only finitely many sequences of questions
- > >that could possibly be asked within the time limit, so all it would
- > >take in principle to pass it would be a finite lookup table that gave
- > >an appropriate answer for each question. If you want to include such
- > >questions as "What time is it?", or "Was it raining yesterday?", then
- > >you would have to have inputs corresponding to the current time, the
- > >weather report, etc. As long as the total information needed to answer
- > >each question is finite, then a lookup table could in principle answer
- > >it.
- >
- > This is not so. Though in a finite amount of time there are indeed only a
- > finite number of questions I can ask, those questions can be chosen from
- > an infinite set.
- > ..... etc
-
- No: Daryl is right.
-
- It is a simple mathematical fact that if you have a discrete and
- finite alphabet of characters c1, c2, c3, ..., ck (e.g.
- printed characters, phonemes, or whatever) and a minimum time dt
- required to print or utter one of them, then in any finite time T
- there will be an upper bound N (= T/dt) to the length of the
- character string that can be produced in time T.
-
- The total number of strings of length N of characters from an
- alphabet of size k is itself finite, i.e. k to the power N (and
- most of these strings will be nonsensical). If you take all possible
- strings of length less than or equal to N, the set is larger but
- still finite. (Adding N finite numbers gives you a finite number).
-
- So the questions cannot be "chosen from an infinite set", and the
- lookup table will not need to have more than some finite number of
- entries.
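-
- To make the counting concrete, here is a small Python sketch. The
- particular values of T, dt and k are illustrative assumptions of
- mine, not anything drawn from the argument above:
-
-     T_ms = 300_000     # total time: five minutes, in milliseconds (assumed)
-     dt_ms = 100        # minimum time to produce one character (assumed)
-     k = 26             # size of the finite alphabet (assumed)
-
-     N = T_ms // dt_ms  # upper bound on the length of any string: 3000
-
-     # Number of strings of length exactly N, and of length <= N.
-     # Both are astronomically large, but finite.
-     exactly_N = k ** N
-     up_to_N = sum(k ** i for i in range(N + 1))
-
-     print(N)                  # 3000
-     print(len(str(up_to_N)))  # a few thousand digits: huge, but finite
-
- Every question that could be put within the time limit is one of
- those finitely many strings.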
-
- Of course, for even quite small N and k (e.g. N = 500, k = 2) the
- table could be too large to fit into our universe (assuming that
- there are no more than about 10 to the power 73 electrons in the
- universe) but that doesn't undermine the argument about what is
- theoretically possible.
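-
- A quick check of that arithmetic, using Python's arbitrary-precision
- integers (the 10 to the power 73 figure is just the estimate quoted
- above):
-
-     print(2 ** 500 > 10 ** 73)  # True: more table entries than electrons
-     print(len(str(2 ** 500)))   # 151 digits, i.e. roughly 10 to the power 150
-
- So even the binary case with N = 500 dwarfs the electron count.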
-
- Some people wish to include the CONTEXT in the conditions
- determining the answer to the question. It can't be the *actual*
- context that matters, but the *perceived* context, or perhaps the
- history of perceived contexts since the "agent" was "born". If the
- agent uses quantised sensory devices, such as an array of photon
- detectors (or a digital memory of finite size), then exactly the
- same sort of argument can be used to show that for any finite
- period there is only a finite (though admittedly huge) number of
- possible perceived histories.
-
- Thus the set of possible combined histories followed by a question
- (or if you wish to include perceived questions as part of the
- history, then just the set of possible histories) that can occur in
- any bounded time period (e.g. 1000 years) will also be finite, and
- therefore appropriate responses can, in principle, be associated
- with them, in a *finite* lookup table.
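-
- Purely to illustrate the *form* of such a table (the entries below
- are invented placeholders of mine; a real table would need one entry
- for every possible history, and so could never actually be built),
- a Python sketch:
-
-     # A lookup table pairing each complete perceived history (encoded
-     # here as a string over some finite alphabet) with a canned
-     # response.  The two entries are invented placeholders.
-     lookup_table = {
-         "history-1 | What time is it?": "Half past three.",
-         "history-2 | Was it raining yesterday?": "Yes, all day.",
-     }
-
-     def respond(perceived_history):
-         # Pure retrieval: no reasoning, nothing but indexing the table.
-         return lookup_table.get(perceived_history, "<no entry>")
-
- However the histories are encoded, finitely many entries suffice.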
-
- It would be possible to reduce the size of the table and the lookup
- time by transforming the table into a discrimination tree by merging
- histories with common initial strings, though it would still, for
- any reasonable time period, be too large to fit in our universe
- (which doesn't matter for the argument).
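-
- For concreteness, here is one way such a discrimination tree could
- be built (a sketch of mine in Python, with toy entries standing in
- for full perceived histories): histories sharing an initial segment
- share a path in the tree, and the response sits at the end of the
- path.
-
-     # Each node branches on the next character of the history, so
-     # histories with a common prefix share nodes.
-     def build_tree(table):
-         root = {}
-         for history, response in table.items():
-             node = root
-             for ch in history:
-                 node = node.setdefault(ch, {})
-             node["RESPONSE"] = response   # marker key at the leaf
-         return root
-
-     def lookup(tree, history):
-         node = tree
-         for ch in history:
-             node = node[ch]               # follow one branch per character
-         return node["RESPONSE"]
-
-     # Toy table: the first two entries share the prefix "aa".
-     table = {"aab": "answer 1", "aac": "answer 2", "abc": "answer 3"}
-     tree = build_tree(table)
-     print(lookup(tree, "aac"))            # answer 2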
-
- It is because, on the assumption of a quantised universe, a finite
- table-driven machine can in principle pass *any* behavioural test
- you like (not just the Turing test) that I argued in my review of
- Penrose (in the journal Artificial Intelligence, August 1992) that
- all such behavioural tests are inappropriate as definitions of
- intelligence, or definitions of any particular mental state.
-
- It's not *what* behaviour is produced but *how* it is produced that
- makes it reasonable to ascribe mental predicates to an agent.
-
- (Of course, what it does may be compelling evidence concerning how
- it does it, if you make enough assumptions, and that's why we don't
- poke inside people's brains before regarding them as intelligent,
- but that's irrelevant to the conceptual argument.)
-
- -------------------------------------------------------------------
-
- I believe that this simple fact is all the truth that lies behind
- Searle's claim that the causal powers of brains, i.e. *how* they do
- things, at some appropriate level of description, would need to be
- replicated in order to replicate human intelligence.
-
- Corresponding to different levels of description of how brains do
- what they do, there are different "Strong AI" theses about what can
- be replicated on a Turing machine.
-
- E.g. if all the Strong AI thesis requires is replicating
- input-output behaviour via fixed-size digital interfaces, then the
- finite lookup table argument trivially suffices to prove the truth
- of Strong AI.
-
- If there are things in the brain at the relevant level of
- description that involve asynchronous concurrent processes with
- continuously varying speeds, then it may or may not be possible for
- a single Turing machine to replicate them, depending on the precise
- kind of asynchrony. (I think).
-
- If there are non-digital, e.g. continuously varying, processes that
- play an essential role in the relevant level of description of how
- the brain does things, and Strong AI requires these processes to be
- replicated, then it is trivially true that a Turing machine cannot
- replicate them. (This seems to be what some people claim about
- neural systems.)
-
- Additional cases are discussed in my review of Penrose.
-
- Thus some versions of the Strong AI thesis are trivially true,
- others trivially false, and some interestingly open: namely, whether
- there is some interesting description of how brains do things that
- cannot be applied to any Turing machine.
-
- However, most people who discuss the Strong AI thesis fail to
- distinguish the different versions, and consequently their
- disagreements are likely to be at cross purposes, making argument
- futile.
-
- Aaron
- ---
- --
- Aaron Sloman, School of Computer Science,
- The University of Birmingham, B15 2TT, England
- EMAIL A.Sloman@cs.bham.ac.uk OR A.Sloman@bham.ac.uk
- Phone: +44-(0)21-414-3711 Fax: +44-(0)21-414-4281
-