- Xref: sparky sci.lang:8045 sci.philosophy.tech:4113 sci.cognitive:639
- Newsgroups: sci.lang,sci.philosophy.tech,sci.cognitive
- Path: sparky!uunet!mcsun!Germany.EU.net!news.netmbx.de!mailgzrz.TU-Berlin.DE!math.fu-berlin.de!Sirius.dfn.de!chx400!univ-lyon1.fr!ghost.dsi.unimi.it!batcomputer!rpi!usc!zaphod.mps.ohio-state.edu!pacific.mps.ohio-state.edu!linac!mp.cs.niu.edu!uxa.ecn.bgu.edu!news.ils.nwu.edu!pautler
- From: pautler@ils.nwu.edu (David Pautler)
- Subject: Re: Theories of meaning
- Message-ID: <1992Nov16.162453.29937@ils.nwu.edu>
- Sender: usenet@ils.nwu.edu (Mr. usenet)
- Nntp-Posting-Host: aristotle.ils.nwu.edu
- Organization: The Institute for the Learning Sciences
- Date: Mon, 16 Nov 1992 16:24:53 GMT
- Lines: 39
-
- In article <THO.92Nov16001115@hemuli.tik.vtt.fi>, tho@tik.vtt.fi (Timo Honkela) writes:
-
- > Connectionistic models with unsupervised learning capabilities
- > seem to be formalisms or even grounding blocks for "theories of
- > how the signifier is linked to the signified." Professor Teuvo
- > Kohonen has written: "In attempting to devise Neural Network
- > models for linguistic representations, the first difficulty
- > is encountered when trying to find metric distance relations
- > between symbolic items. [...] it cannot be assumed that encodings
- > of symbols in general have any relationship with the observable
- > characteristics of the corresponding items. How could it then
- > be possible to represent the 'logical similarity' of pairs
- > of items [...]? The answer lies in the fact that *the symbol,
- > during the learning process, is presented in context*."
- > (The Self-Organizing Map, Proceedings of the IEEE, vol. 78, no 9,
- > September 1990)
-
- "Metric distance relations between symbolic items"? Is that an associationist
- take on dereferencing? And if symbol-encodings have nothing necessarily
- to do with the corresponding item, what led to the encoding in the first
- place? Also, can you imagine any instance of learning that does not occur
- within *some* context?
-
- > The context may consist of words, symbolic features, continuous
- > features or even pictorial images. The virtue of unsupervised
- > learning (e.g. the self-organizing map) lies in the fact
- > that there is no need for any "correct answers" which is the
- > case with most of the connectionist models.
-
- It seems to me that "unsupervised learning" here means: the desired output
- was achieved, but we deliberately avoided checking whether the learning
- sequence itself was psychologically plausible. So, if symbolist models
- require an infinite sequence of dereferencing operations before we can
- reach the signified from the initial signifier, is the associationist solution
- to turn one's back during "dereferencing" and then insist that the output
- is the real signified, rather than just another signifier?
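-
- For readers unfamiliar with the Kohonen map being discussed, here is a
- minimal one-dimensional toy sketch of the idea. It is only an illustration
- under simplifying assumptions (a fixed neighborhood radius, a crude decay
- schedule), not Kohonen's actual formulation; the point is just that each
- input pulls its best-matching unit and that unit's neighbors toward it,
- with no "correct answers" supplied anywhere:
-
- ```python
- import random
-
- # Toy 1-D self-organizing map (hypothetical sketch, not Kohonen's code).
- # Units adapt without any target outputs: each input moves its
- # best-matching unit and that unit's topological neighbors toward it.
-
- def train_som(data, n_units=10, epochs=50, lr=0.5, radius=2):
-     # Initialize unit weights randomly within the range of the data.
-     lo, hi = min(data), max(data)
-     weights = [random.uniform(lo, hi) for _ in range(n_units)]
-     for _ in range(epochs):
-         for x in data:
-             # Best-matching unit: the one at minimal metric distance.
-             bmu = min(range(n_units), key=lambda i: abs(weights[i] - x))
-             # Update the winner and its neighbors -- no supervision.
-             for i in range(n_units):
-                 if abs(i - bmu) <= radius:
-                     weights[i] += lr * (x - weights[i])
-         lr *= 0.95  # decay the learning rate each epoch
-     return sorted(weights)
-
- random.seed(0)
- data = [random.gauss(0.0, 1.0) for _ in range(200)]
- units = train_som(data)
- ```
-
- After training, the units spread out over the region where the inputs are
- dense, which is the sense in which the map organizes itself from context
- alone.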
-
- -dp-
-
-