Xref: sparky comp.ai.neural-nets:4293 uw.neural-nets:2
Newsgroups: comp.ai.neural-nets,uw.neural-nets
Path: sparky!uunet!utcsri!torn!watserv2.uwaterloo.ca!watdragon.uwaterloo.ca!watserv1!joordens
From: joordens@watserv1.uwaterloo.ca (Steve Joordens)
Subject: How deep are spurious attractor basins?
Message-ID: <Bxv79K.G7F@watserv1.uwaterloo.ca>
Keywords: Hopfield, Boltzmann, Attractors, Energy Functions
Organization: University of Waterloo
Date: Tue, 17 Nov 1992 14:36:55 GMT
Lines: 21

In recent simulations that I have been doing with a Hopfield net
model of human memory representation, the network has problems staying
out of spurious attractors. I have heard about two possible ways of
solving this problem but, before applying these solutions blindly, I
want to know more about the problem itself. My original thought was
that since spurious attractors are attractors that were not directly
'taught' to the network, they would be shallower (speaking in energy-
space terms) than learned attractors. If this is the case, then adding
a simulated annealing process may improve performance. However, I
have heard that this may not be the case (i.e., that spurious attractors
may be as deep as learned attractors). So, the question of the day:

Has anybody done any research comparing the depth of spurious and
learned attractors using either a Hopfield net or Boltzmann machine
type architecture (references would be welcomed)?
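For concreteness, here is a minimal sketch of the kind of comparison I have
in mind (this is an illustrative toy, not my actual simulation; the network
size, pattern count, and the choice of a three-pattern mixture state as the
spurious attractor are all assumptions). It stores random patterns with the
standard Hebbian rule and compares the Hopfield energy of a learned pattern
against that of a spurious mixture state:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                        # number of binary (+/-1) units
P = 3                          # number of stored patterns

# Random +/-1 patterns to be stored.
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian outer-product learning; no self-connections.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def energy(s):
    """Standard Hopfield energy E = -1/2 s^T W s (zero thresholds)."""
    return -0.5 * s @ W @ s

# A classic spurious state: the sign of the sum of an odd number of
# stored patterns (a three-pattern "mixture" state).
spurious = np.sign(patterns.sum(axis=0))

for i, p in enumerate(patterns):
    print(f"energy of learned pattern {i}: {energy(p):.2f}")
print(f"energy of 3-pattern mixture state: {energy(spurious):.2f}")
```

In this toy setup the mixture state comes out with a higher (shallower)
energy than the stored patterns, which is what would make a simulated
annealing schedule attractive; whether that ordering holds more generally
is exactly what I am asking about.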

uu  uu  ww     ww  Steve Joordens               ||     |\^/|     ||
UU  UU  WW     WW  Department of Psychology     ||  ._|\|   |/|_.  ||
UU  UU  WW  W  WW  University of Waterloo       ||  \   Canada  /  ||
UU  UU  WW WWW WW  Office:  PAS 4259            ||  <____   ____>  ||
 UUUUU   WW   WW   (519) 885-1211 (ext 6819)    ||       |         ||
