- Newsgroups: comp.ai.philosophy
- Path: sparky!uunet!think.com!spool.mu.edu!umn.edu!umeecs!quip.eecs.umich.edu!marky
- From: marky@quip.eecs.umich.edu (Mark Anthony Young)
- Subject: Re: grounding and the entity/environment boundary
- Message-ID: <1993Jan3.181243.1231@zip.eecs.umich.edu>
- Sender: news@zip.eecs.umich.edu (Mr. News)
- Organization: University of Michigan EECS Dept., Ann Arbor
- References: <1992Dec28.144030.23113@cs.wm.edu> <C00GMG.ML7@iat.holonet.net> <C051Kq.8tD@spss.com>
- Date: Sun, 3 Jan 1993 18:12:43 GMT
- Lines: 151
-
- markrose@spss.com (Mark Rosenfelder) writes:
- > I'm interested in what it takes to make an AI grounded-- to make its use of
- > language meaningful rather than "merely syntactic"-- to have it know what
- > it's talking about. Placing Harnad, Lakoff, and my own musings into a
- > blender and pressing "Liquefy", I come up with this: For an entity to be
- > (statically) grounded it must have high-bandwidth sensorimotor experience
- > with the real world; it's less grounded to the extent that the experience
- > is of low quality or quantity, or not really its own, or is weakly
- > integrated with its internal architecture. Dynamic grounding would depend
- > on a capacity to maintain this experience and function in the world.
- > I trust you can see how this formulation leads to a concern with what is
- > or isn't part of the system.
- >
- > [...]
- >
- > Just in case I haven't laid out enough flame-bait already... Some alternative
- > conceptions of grounding avoid the boundary problem by concentrating on
- > correlation of the internal world model with the real world. The problem I
- > have with these conceptions is that it's even less clear where you draw the
- > line between an ungrounded system that's just playing with symbols and one
- > that really knows what it's talking about; that it's not clear how many
- > errors, gaps, and ambiguities are permitted in the correlation; and that the
- > origin of the correlation is not taken into account.
-
- What Mark Rosenfelder is talking about above seems to me to be more about
- how something gets or stays grounded. Still, just being "right" about the
- world doesn't seem like enough to be grounded; the notion also involves
- some sense of "having experience" of the world. I'd like to offer the
- following as a definition of "grounded" that uses both correlation and
- experience of the world.
-
- To give my definition of "grounded", I first have to define a couple of
- auxiliary notions. A system is said to be "correct" to the extent that its
- internal world model correlates with the real world. Note that this is a
- graded feature, not binary. Thus, one system can be said to be "more
- correct" than another (though I do not claim that degree of correctness
- forms a total order).
-
- Example: Consider two (twelve-hour) clocks: one is stopped; the other
- runs slow by one minute every twelve hours. In the first twelve hours
- after they are set, the first clock is on average three hours out, while
- the second is on average thirty seconds out. Over that limited time span,
- the second clock is 360x as correct as the first (on average). Of course,
- as time goes on, the second clock becomes more and more incorrect. In the
- limiting case, both clocks average three hours of error, and so are equally
- correct.
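-
- Here is a rough numerical check of that arithmetic (my own sketch, not
- part of the argument); it takes "error" to be the shortest distance
- around a twelve-hour face and averages it over the first twelve hours:
-
-     # Average error of a clock over its first 12 hours (in minutes).
-     FACE = 12 * 60  # minutes on a twelve-hour dial
-
-     def face_error(true_min, shown_min):
-         # shortest way around the dial between true time and shown time
-         d = abs(true_min - shown_min) % FACE
-         return min(d, FACE - d)
-
-     def average_error(loss_per_12h, hours=12):
-         # a clock that loses loss_per_12h minutes every twelve hours,
-         # sampled once a minute from the moment it is set
-         steps = hours * 60
-         errs = [face_error(t, t - loss_per_12h * t / FACE)
-                 for t in range(steps + 1)]
-         return sum(errs) / len(errs)
-
-     print(average_error(FACE))  # stopped clock: ~180 minutes (3 hours)
-     print(average_error(1))     # slow clock:    ~0.5 minutes (30 seconds)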
-
- Note: it is often said that the stopped clock is right twice a day, while
- the slow clock is wrong almost always. However, unless one is prepared to
- argue that the human mind actually contains a tiny scale model of the
- universe, the appropriate definition of "correctness" relies not on
- "precise match" but on "amount of error".
-
- The example of the clocks shows that "correctness" may decay. Sensory
- input can check that decay, however. Consider having someone reset each
- clock to the proper time at noon each day. Then the stopped clock will
- stay three hours out on average, while the slow clock will be 180x as
- correct, with an average error of one minute.
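-
- The same kind of sketch covers the daily reset (again my own check of the
- numbers, assuming the error grows linearly between resets):
-
-     # Average error when each clock is reset to the true time every noon.
-     FACE = 12 * 60   # minutes on a twelve-hour dial
-     DAY = 24 * 60    # minutes between resets
-
-     def avg_error_with_daily_reset(loss_per_12h):
-         total = 0.0
-         for t in range(DAY):                      # minutes since last reset
-             shown = t - loss_per_12h * t / FACE   # what the clock reads
-             d = abs(t - shown) % FACE
-             total += min(d, FACE - d)
-         return total / DAY
-
-     print(avg_error_with_daily_reset(FACE))  # stopped clock: ~180 min (3 hrs)
-     print(avg_error_with_daily_reset(1))     # slow clock:    ~1 minute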
-
- The measure of correctness does not depend on how the correctness is
- achieved. Thus it is irrelevant whether the clock is reset by a human or
- is actually capable of detecting when an atomic clock "strikes" noon and
- then resetting itself.
-
- A system is said to "adjust" when its internal world model is compared to
- the world, and is changed to a more correct model (if necessary). Thus our
- clocks above adjust at noon every day. A system that adjusts itself is
- said to be self-adjusting, while one that is adjusted by some other agent
- is externally adjusted. The clock that is reset by a human is externally
- adjusted, while the clock that resets itself is self-adjusting (in our
- example it is adjusting _to_ the atomic clock--an interesting question that
- I will not get into here is "what if the atomic clock is incorrect?").
-
- Note that the stopped clock in the example above is just as adjusted as the
- slow clock. When our custodian comes around, or when the atomic clock
- strikes noon, the stopped clock is correct, so its model does not need to
- change. If the custodian happens by a few minutes before or after noon,
- then the stopped clock will be adjusted accordingly. The important part
- is that the model was checked and any detected inaccuracies were
- eliminated.
-
- Systems can adjust to greater or lesser extents, and more or less often. A
- self-adjusting clock that is slow by five minutes a day will adjust by five
- times as much as one that is slow by only one minute a day. Since the
- amount of adjustment depends only on how fast error builds up in the
- system, we are more interested in the frequency of adjustment. If they
- both adjust only at noon (standard time), then they adjust equally often
- (once per day).
-
- A system is "adjusted" to the extent that it has recently adjusted its
- world model. For our example, the clock adjusts at noon, and so is fully
- adjusted at noon. As the day wears on, the clock becomes less adjusted
- until, just before noon the next day, it is 24 hours unadjusted --
- half as adjusted as it was 12 hours earlier. On average that clock is
- twelve hours unadjusted. The clock that adjusts itself every hour is 24x
- as adjusted as the clock that adjusts daily.
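-
- Reading "how adjusted" as inversely proportional to the average time since
- the last adjustment (which is how I take the paragraph above), the 24x
- figure can be checked with a short sketch:
-
-     # Average time (in minutes) since the last adjustment, sampled each
-     # minute over one day.
-     def average_staleness(minutes_between_adjustments, horizon=24 * 60):
-         samples = [t % minutes_between_adjustments for t in range(horizon)]
-         return sum(samples) / len(samples)
-
-     daily  = average_staleness(24 * 60)  # ~720 minutes (12 hours)
-     hourly = average_staleness(60)       # ~30 minutes
-     print(daily / hourly)                # ~24: the hourly clock is 24x as adjusted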
-
- A system is "accurate" to the extent that it can maintain correctness
- without adjusting. One system can be more accurate than another, and yet
- less correct: if it is adjusted less often, the error accumulated over a
- longer time may outweigh the larger incremental errors built up over a
- shorter time.
-
- One last definition before I get to grounding proper. A system is said to
- be "complete" to the extent that its internal model of the world can
- describe the world. As an example, a clock with two faces -- say one set
- to London time and the other to Tokyo -- is twice as complete as the clock
- with one face. If, however, the two faces are forced to read the same
- time (by a mechanical connection, for example), it is no more complete
- than its cousin.
-
- Now to the definition of grounding. A system is "grounded" (in a
- particular world) to the extent that it is correct and complete and
- adjusted (for that world). If no world is mentioned, then, based on
- context, either reality is assumed or we are talking about grounding in any
- world. Grounding is a graded concept, but does not admit a total order.
- The extent of grounding can change over time (almost certainly will). When
- we speak of how grounded a particular system is, we mean its average
- grounding over some (unstated) period of time.
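-
- A minimal sketch of that structure (the [0,1] scales and the dominance
- test below are my own framing; the definition itself only says grounding
- is graded and not totally ordered):
-
-     from typing import NamedTuple
-
-     class Grounding(NamedTuple):
-         # each component is a score in [0, 1]; higher is better
-         correctness: float
-         completeness: float
-         adjustedness: float
-
-     def at_least_as_grounded(a: Grounding, b: Grounding) -> bool:
-         # only a partial order: a is at least as grounded as b when it is
-         # at least as correct, complete, and adjusted; otherwise the two
-         # systems are simply incomparable
-         return all(x >= y for x, y in zip(a, b))
-
-     a = Grounding(0.9, 0.5, 0.8)
-     b = Grounding(0.7, 0.5, 0.6)
-     c = Grounding(0.95, 0.5, 0.4)
-     print(at_least_as_grounded(a, b))                              # True
-     print(at_least_as_grounded(a, c), at_least_as_grounded(c, a))  # False False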
-
- A system is "self-grounded" to the extent that it is correct and complete
- and self-adjusted. This is the most interesting kind of grounding, as our
- notion of intelligence relies on _self_ adjustment.
-
- While (self-)grounding is a graded concept, it does not have a total order.
- A clock that loses one minute a day and is reset every week is more correct
- than one that loses ten minutes a day and is reset daily, but it is also
- less adjusted.
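-
- Spelling that comparison out (a simple sketch that assumes error grows
- linearly between resets and never wraps around the dial):
-
-     # (average error in minutes, average days since last reset) per cycle
-     def profile(loss_per_day_min, days_between_resets):
-         avg_error = loss_per_day_min * days_between_resets / 2.0
-         avg_staleness = days_between_resets / 2.0
-         return avg_error, avg_staleness
-
-     weekly = profile(1, 7)   # (3.5 min error, 3.5 days stale)
-     daily  = profile(10, 1)  # (5.0 min error, 0.5 days stale)
-     # the weekly-reset clock is more correct (3.5 < 5.0) but less adjusted
-     # (3.5 > 0.5), so neither dominates the other: no total order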
-
- Note that the bandwidth of the sensory experience has no direct effect on
- how grounded a system is. It will, however, have an indirect effect: the
- wider the bandwidth, the faster the entire model can be checked for errors
- (until the bandwidth reaches the size of the model, anyway). Thus, ceteris
- paribus, the system with the wider sensory bandwidth will be more correct
- and thus more grounded.
-
- Also, motor abilities have no direct effect on grounding. It seems likely
- that a system that can actually experiment with the world will be able to
- build better models. Still, this is not necessarily the case.
-
- As for the boundary problem, you can draw the boundary wherever you please.
- In general, the further out you draw the boundary, the more grounded the
- system is.
-
- So, how can you build an AI as grounded as a human? There are two
- extremes: you can either give it high-bandwidth sensors and effectors and
- let it build its models the way people do; or you can build it more
- accurate than a human and get by with more limited world experience -- the
- narrower the bandwidth, the more accurate the system must be.
-
- ...mark young
-