Newsgroups: comp.ai.neural-nets
Path: sparky!uunet!charon.amdahl.com!amdahl!rtech!pacbell.com!sgiblab!munnari.oz.au!newsroom.utas.edu.au!probitas!vamplew
From: vamplew@probitas.cs.utas.edu.au (Peter Vamplew)
Subject: Re: Recurrent networks
Message-ID: <vamplew.728187821@probitas>
Sender: news@newsroom.utas.edu.au
Organization: University of Tasmania, Australia.
References: <728134656.26714@minster.york.ac.uk>
Date: Thu, 28 Jan 1993 02:23:41 GMT
Lines: 33

In <728134656.26714@minster.york.ac.uk> george@minster.york.ac.uk writes:

>I've been trying to get neural networks to learn sequences
>of type DIFFERENT->SAME->DIFFERENT states, e.g.

>A->B->C and X->B->Y

>is a simple example. However, with a recurrent network (feeding
>back the hidden layer), the output of a network for both pattern
>sequences can be given as: (fn O() is output layer, H() hidden)

>O(H(A)) and O(H(X)) both must equal B

>This seems to force the hidden representation for A and X to be
>identical. On the next step when B is given as input, the context
>fed back, i.e. the hidden representation H(A) or H(X), will
>not distinguish the network's state to output C or Y as appropriate.

>i.e. O(H(B, H(A))) = O(H(B, H(X)))

>How can this problem be overcome?

Why is it necessary for H(A) to equal H(X)? That would follow only if your
output function were strictly one-to-one. The network should be able to
produce B as an output for more than one possible set of hidden layer
activations, so H(A) and H(X) can differ while both are read out as B, and
the context fed back at the next step can still distinguish C from Y.
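
For what it's worth, here is a small sketch in Python/NumPy of an Elman-style
net (hidden layer fed back as context) trained on the two sequences A->B->C
and X->B->Y. The one-hot coding, hidden layer size, learning rate and so on
are just illustrative choices, not anything from George's actual setup. The
point it demonstrates is that H(A) and H(X) come out different even though
both are read out as B, so the fed-back context is enough to produce C after
A but Y after X.

import numpy as np

rng = np.random.default_rng(0)

symbols = ["A", "B", "C", "X", "Y"]
idx = {s: i for i, s in enumerate(symbols)}
V, H = len(symbols), 4            # vocabulary size, hidden layer size (assumed)

def onehot(s):
    v = np.zeros((V, 1))
    v[idx[s]] = 1.0
    return v

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Weights: input->hidden, hidden->hidden (the fed-back context), hidden->output
Wxh = rng.normal(0.0, 0.5, (H, V))
Whh = rng.normal(0.0, 0.5, (H, H))
Who = rng.normal(0.0, 0.5, (V, H))
bh = np.zeros((H, 1))
bo = np.zeros((V, 1))

# Training data: inputs (A, B) -> targets (B, C) and inputs (X, B) -> (B, Y)
data = [(("A", "B"), ("B", "C")), (("X", "B"), ("B", "Y"))]

lr = 0.3
for _ in range(5000):
    grads = [np.zeros_like(W) for W in (Wxh, Whh, Who, bh, bo)]
    for (i1, i2), (o1, o2) in data:
        x1, x2 = onehot(i1), onehot(i2)
        t1, t2 = onehot(o1), onehot(o2)
        # forward pass over the two time steps, initial context h0 = 0
        h1 = np.tanh(Wxh @ x1 + bh)
        h2 = np.tanh(Wxh @ x2 + Whh @ h1 + bh)
        y1 = softmax(Who @ h1 + bo)
        y2 = softmax(Who @ h2 + bo)
        # backpropagation through time with a cross-entropy loss
        dz1, dz2 = y1 - t1, y2 - t2
        da2 = (Who.T @ dz2) * (1.0 - h2 ** 2)
        da1 = (Who.T @ dz1 + Whh.T @ da2) * (1.0 - h1 ** 2)
        grads[0] += da2 @ x2.T + da1 @ x1.T        # dWxh
        grads[1] += da2 @ h1.T                     # dWhh (h0 = 0 term drops out)
        grads[2] += dz2 @ h2.T + dz1 @ h1.T        # dWho
        grads[3] += da2 + da1                      # dbh
        grads[4] += dz2 + dz1                      # dbo
    for W, dW in zip((Wxh, Whh, Who, bh, bo), grads):
        W -= lr * dW

# After training: H(A) and H(X) differ, yet both are read out as B, and the
# fed-back context makes the second output C after A but Y after X.
for first in ("A", "X"):
    h1 = np.tanh(Wxh @ onehot(first) + bh)
    h2 = np.tanh(Wxh @ onehot("B") + Whh @ h1 + bh)
    out1 = symbols[int(np.argmax(Who @ h1 + bo))]
    out2 = symbols[int(np.argmax(Who @ h2 + bo))]
    print(first, "-> hidden", np.round(h1.ravel(), 2), "-> outputs", out1, out2)

In other words, H(B, H(A)) and H(B, H(X)) end up distinct, so O() is free to
map one to C and the other to Y.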

Peter

--
Peter Vamplew Dept of Computer Science, Uni of Tasmania
"The second hand moves very quickly, the minute hand moves
a bit more slowly, and the hour hand moves not much at all"
- results of a time and motion study by Reg the Slayer
