- Newsgroups: comp.ai.philosophy
- Path: sparky!uunet!tcsi.com!iat.holonet.net!ken
- From: ken@iat.holonet.net (Ken Easlon)
- Subject: Re: Drawing the entity/environment boundary
- Message-ID: <C06KHq.2qw@iat.holonet.net>
- Organization: HoloNet National Internet Access BBS: 510-704-1058/modem
- References: <1992Dec31.223448.28663@news.media.mit.edu>
- Date: Fri, 1 Jan 1993 15:03:25 GMT
- Lines: 72
-
-
- In article <1992Dec31.223448.28663@news.media.mit.edu>,
- minsky@media.mit.edu (Marvin Minsky) writes:
-
- >1). The notion of "understanding" is itself defective if we try to
- >define "A understands X". Read Chapter 30 of The Society of Mind.
- >A major point is that this is relative to someone's judgment, so one
- >really should be discussing "O agrees (thinks, believes, assumes) that
- >A understands (in some sense proposed by O) X. So Joe may think Jack
- >understands Physics, whereas Gell-Man thinks Jack understands nothing
- >about it.
-
- I think if we apply enough tests we can determine for all practical
- purposes whether or not A understands X, per a standardized definition of
- "understand". I agree that without the standardized tests and definitions
- there might be some confusion.
-
- >2) the notion of "boundary" should also be relative to some observer...
-
- I think what Mark is searching for is a standardized definition of the
- entity/environment boundary that is useful for attaining a functional
- degree of understanding of the process of "grounding". I guess my point in
- bringing up all the other possible uses of the boundary concept is that
- there are a number of ways of thinking about grounding.
-
- I think if we are to determine if an AI configuration (notice my clever
- avoidance of the flamed words) is sufficiently grounded for practical
- purposes, we need to know the job(s) the AI is designed for.
-
- >Read Dawkins' The Extended Phenotype for many examples of where it is bad
- >to draw a boundary around a particular cell, organ, or individual
- > organism's body. See his discussion of Beaver-caused lakes, for example.
-
- I haven't read Dawkins, but I can easily visualize the human body as a
- structure of proteins and glycoproteins, with cellular membranes being
- little more than a sort of access control system.
-
- In this view (which considers only a cell's interphase function), cellular
- boundaries are only important for determining the local chemical
- environment in microscopic regions. A membrane-enclosed area would be an
- entity for a biochemist interested in studying its chemical content.
-
- Of course, when we look at the cell's whole life cycle, the structure
- enclosed by the cellular membrane takes on many more entity-defining
- functions than simple access control.
-
- Likewise, if we look at a functioning AI configuration we might draw the
- boundary around a rather large area based on job description. If we look
- at other phases of the design-manufacturing-deployment-junk life cycle we
- might draw the line some place else.
-
- In other words, a functioning AI might have to interface with a large
- number of other "entities", but when it comes time to scrap that pile of
- junk we only have to replace one robot.
-
- I think Mark is interested in the training phase. How much equipment,
- software, and developmental interaction is necessary to provide the "A" "I"
- with the "I"?
-
- In my opinion, once you go through the training process for one system, you
- can build an exact copy without going through all the interactive
- development, which means you can eliminate some of the equipment that was
- used exclusively for the training phase.
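- The copy-without-retraining claim is easy to demonstrate with a toy
- model (a minimal sketch; the perceptron, the AND task, and all names
- here are my own illustration, not anything from this thread):
-
- ```python
- import copy
-
- # Toy "AI configuration": a perceptron learning the AND function.
- class Perceptron:
-     def __init__(self):
-         self.w = [0.0, 0.0]
-         self.b = 0.0
-
-     def predict(self, x):
-         return 1 if self.w[0]*x[0] + self.w[1]*x[1] + self.b > 0 else 0
-
-     def train(self, data, epochs=10, lr=0.1):
-         # The "interactive development" phase: repeated feedback
-         # from the environment adjusts the learned state.
-         for _ in range(epochs):
-             for x, target in data:
-                 err = target - self.predict(x)
-                 self.w[0] += lr * err * x[0]
-                 self.w[1] += lr * err * x[1]
-                 self.b += lr * err
-
- data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
-
- original = Perceptron()
- original.train(data)  # the expensive training phase, done once
-
- # An exact copy of the learned state: no trainer, no feedback loop.
- clone = copy.deepcopy(original)
-
- assert [clone.predict(x) for x, _ in data] == [0, 0, 0, 1]
- assert all(clone.predict(x) == original.predict(x) for x, _ in data)
- ```
-
- The clone behaves identically because only the learned parameters
- matter at deployment time; the training equipment (here, the `train`
- loop and its data) is no longer needed.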
-
- I wonder what can of worms I've opened with THAT statement?
-
-
- --
- Ken Easlon | "...somebody spoke and I went into a dream..."
- ken@holonet.net | -Paul McCartney
- Pleasantly Unaffiliated |
-
-