Path: sparky!uunet!dtix!darwin.sura.net!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!news.columbia.edu!cunixb.cc.columbia.edu!rs69
From: rs69@cunixb.cc.columbia.edu (Rong Shen)
Newsgroups: comp.ai.neural-nets
Subject: How to train a lifeless network (of "silicon atoms")?
Message-ID: <1992Nov21.002654.13198@news.columbia.edu>
Date: 21 Nov 92 00:26:54 GMT
Sender: usenet@news.columbia.edu (The Network News)
Organization: Columbia University
Lines: 23
Nntp-Posting-Host: cunixb.cc.columbia.edu

Your Highness:

Please allow me to ask you this childish question:
Suppose you have a neural network and you want to train it to
perform a task; for the moment, let's say the task is to recognize
handwriting. Now suppose the network has learned to recognize the
word "hello," and the weight of the synapse between neurodes
(neurons) X and Y is k. If you then train the network to recognize
the word "goodbye" (by back-propagation, or whatever algorithm you
like), then, since all the neurodes are connected in some way
(perhaps through some interneurons), the synaptic weight between X
and Y is likely to change from k to some other number; similarly,
the weights of the other synapses will change. It therefore seems
extremely likely that each training session will erase the effects
of the previous ones.
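To make the worry concrete, here is a tiny sketch of the effect on
the smallest possible "network": one linear neurode trained by plain
gradient descent. (The code is Python with NumPy, purely for
brevity; the patterns standing in for "hello" and "goodbye" are made
up, and this only illustrates the interference, not any actual
handwriting recognizer.)

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=3)        # one linear neurode: output = w . x

    def train(w, x, target, lr=0.1, steps=200):
        # plain gradient descent on the squared error 0.5*(w.x - target)^2
        for _ in range(steps):
            err = w @ x - target
            w = w - lr * err * x
        return w

    x_hello   = np.array([1.0, 1.0, 0.0])   # stands in for "hello"
    x_goodbye = np.array([1.0, 0.0, 1.0])   # shares input 0 with "hello"

    w = train(w, x_hello, 1.0)     # session 1: learn "hello"
    k = w.copy()                   # the weight vector "k" after session 1
    print("'hello' error after session 1:", abs(w @ x_hello - 1.0))

    w = train(w, x_goodbye, -1.0)  # session 2: "goodbye" alone, no rehearsal
    print("weights moved from k by:", np.linalg.norm(w - k))
    print("'hello' error after session 2:", abs(w @ x_hello - 1.0))

Because the two patterns share input 0, the second session drags the
shared weight away from k, and the error on "hello" comes back:
exactly the erasure described above.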

My question is: what engineering tricks can we use to overcome
this apparent difficulty?

Thanks.

--
rs69@cunixb.cc.columbia.edu