- Newsgroups: comp.ai.neural-nets
- Path: sparky!uunet!usc!news.aero.org!speedy.aero.org!news
- From: jrg@punch (James R. Goble)
- Subject: Re: Need help on Neural-network learning
- Message-ID: <1993Jan22.171308.8391@speedy.aero.org>
- Sender: news@speedy.aero.org
- Nntp-Posting-Host: punch.aero.org
- Organization: The Aerospace Corporation; El Segundo, CA
- References: <qq6qXB7w164w@cellar.org>
- Date: Fri, 22 Jan 1993 17:13:08 GMT
- Lines: 73
-
- In article <qq6qXB7w164w@cellar.org> tsa@cellar.org (The Silent Assassin)
- writes:
- > I only recently got interested in neural networks. I believe I
- > understand how they work, but I have some questions:
- >
- > How many hidden layers do most neural networks have? Does the typical
- > NN have an input layer, a hidden layer, and an output layer, or
- > several hidden layers?
- >
- > How do NN's learn? Other than randomly changing weights, how do they
- > "back-propagate" or whatever?
- >
- > I finally discovered, at the age of 21, what all of my friends had
- > found out at 12 or 13 - that your parents are just as fucked-up and
- > weird as everyone else. Politically Correct: Socially, Politically,
- > and Environmentally conscious to the point of nausea.
- > tsa%cellar@tredysvr.tredydev.unisys.com
-
- Let me try to answer some of your questions.
-
- 1. There is no standard number of hidden layers for a neural net because
- there is no standard neural net.
-
- Backpropagation nets, in their most basic form, have one input layer,
- one hidden layer, and one output layer. Some people feel that additional
- hidden layers will speed up learning and allow the modeling of more
- complex systems. I haven't found any noticeable speed enhancement from
- using more hidden layers; the extra update time seems to negate any
- decrease in training iterations. Backprop nets are trained with known
- data: you know which output should be chosen for a given input. For
- example, I want output node 3 to be a 1 and all other nodes to be 0 when
- I input a vector from a given set of data. During training, every time I
- input one of those vectors, an error signal is generated which attempts
- to force the desired output by updating the weights. The same
- conditioning occurs for all the other sets of data from the other
- classes. If my net trains properly (i.e., it hasn't gotten stuck in a
- local minimum), I will have a good representation of the types of data I
- want to classify stored in my weights.
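-
- The training procedure above can be sketched in a few lines of code. This
- is a minimal illustration only: the layer sizes, learning rate, epoch
- count, and toy one-hot data are all invented for the example, not taken
- from any particular net.
-
```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: 4 input vectors, each with a one-hot target, i.e.
# "output node k should be 1 and all other nodes 0" for class k.
X = np.eye(4)                      # 4 samples, 4 input features
T = np.eye(4)                      # one-hot desired outputs

W1 = rng.normal(0.0, 0.5, (4, 6))  # input -> hidden weights
W2 = rng.normal(0.0, 0.5, (6, 4))  # hidden -> output weights

def forward(x):
    h = sigmoid(x @ W1)            # hidden layer activations
    y = sigmoid(h @ W2)            # output layer activations
    return h, y

lr = 1.0
for epoch in range(5000):
    h, y = forward(X)
    err = T - y                            # error signal at the output
    # Back-propagate: output delta, then hidden delta via the chain rule.
    d_out = err * y * (1 - y)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    # Update the weights so as to force the desired outputs.
    W2 += lr * h.T @ d_out
    W1 += lr * X.T @ d_hid

_, y = forward(X)
mse = float(np.mean((T - y) ** 2))         # small if training succeeded
```
-
- The error signal at each layer is just the chain rule applied to the
- squared error; the weight updates push each output toward its target.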
-
- Self-organizing nets (blessed be Saint Teuvo) may not have any hidden
- layers, though I have seen some that do (hybrids like counterprop nets).
- These nets essentially learn by pushing the outputs in the direction of
- the data. If you have data from 4 different events, you should see four
- distinct clumps of nodes at the output. This is an extremely powerful
- tool: it lets you use a binary machine (a computer) to derive a
- human-like closest-guess answer.
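-
- That "push the nodes toward the data" rule can be sketched with a
- winner-take-all update. Everything here is invented for illustration -
- the four synthetic clusters, the number of output nodes, and the
- learning-rate schedule - but the update is the basic competitive rule:
- find the nearest node and move it toward the datum.
-
```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data from 4 different "events": well-separated 2-D clusters.
centers = np.array([[0.0, 0.0], [0.0, 10.0], [10.0, 0.0], [10.0, 10.0]])
data = np.concatenate([c + rng.normal(0.0, 0.3, (50, 2)) for c in centers])

nodes = rng.uniform(0.0, 10.0, (8, 2))   # 8 output nodes, random start

lr = 0.5
for step in range(4000):
    x = data[rng.integers(len(data))]            # pick a random datum
    win = np.argmin(np.sum((nodes - x) ** 2, axis=1))  # winning node
    nodes[win] += lr * (x - nodes[win])          # push winner toward it
    lr *= 0.999                                  # slowly freeze the map

# Quantization error: average distance from each datum to its nearest
# node. It shrinks as the nodes clump onto the four clusters.
dists = np.linalg.norm(data[:, None, :] - nodes[None, :, :], axis=2)
qerr = float(np.mean(dists.min(axis=1)))
```
-
- A full Kohonen map also updates the winner's neighbors so the nodes form
- an ordered map, but the clumping behavior is already visible with the
- winner-only rule above.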
-
- I know this is a rudimentary explanation and doesn't completely answer
- your questions. Neural nets are like that. The exciting part is that
- there are so many unanswered questions. So many fields, like
- electromagnetics, are entrenched: most of the good questions have been
- answered. NNs are new. You can make a difference. Look at what Grossberg
- has done in the last twenty years. Besides offending most of his peers
- with his incredible ego, he has forged a new and exciting field from the
- ashes of perceptrons. Teuvo Kohonen, well, he's just a god of NNs.
-
- A suggestion for a good, interesting book on NNs is "An Introduction to
- Biological and Artificial Neural Networks" by Kabrisky and Rogers. This
- is possibly the best-written book in the field, in the sense that a
- normal human can understand what the hell they're talking about. It also
- comes with some great software called Neural Graphics. This program was
- written by Dr. Greg Tarr and is available at edu.afit.blackbird.ftp.
-
- Hope this is of some value to you!
-
- Later,
- Jim Goble
- Los Angeles
-