- Newsgroups: talk.abortion
- Path: sparky!uunet!news.encore.com!jbates
- From: jbates@encore.com (John W. Bates)
- Subject: Re: Myelin (Was Re: Spoken Like a True ProLifer)
- Organization: Encore Computer Corporation
- Date: Fri, 22 Jan 1993 19:05:58 GMT
- Message-ID: <JBATES.93Jan22140558@pinocchio.encore.com>
- In-Reply-To: mcochran@nyx.cs.du.edu's message of Fri, 22 Jan 93 03:58:02 GMT
- References: <1993Jan20.062913.13725@mnemosyne.cs.du.edu>
- <1993Jan21.034431.24481@organpipe.uug.arizona.edu>
- <JBATES.93Jan21035438@pinocchio.encore.com>
- <1993Jan22.035802.8755@mnemosyne.cs.du.edu>
- Sender: news@encore.com (Usenet readnews user id)
- Nntp-Posting-Host: pinocchio.encore.com
- Lines: 106
-
- In article <1993Jan22.035802.8755@mnemosyne.cs.du.edu> mcochran@nyx.cs.du.edu (Mark A. Cochran) writes:
-
- >>Sorry, Mark, but you've got things a little mixed up. In networking
- >>lingo, a hypercube is a topology designed for supercomputing, with
- >>a node at each vertex connected to each neighboring vertex. It's not
- >>related to neural networks at all. The closest neural network design
- >>I can think of is James Anderson's "brain state in a box", in which
- >>each output pattern is a vertex of an n-dimensional box. Nice model
- >>for associative memory, but n tends to have to be very large (in
- >>computational terms) for it to be useful.
- >>
- > It's a good thing I made sure to deny any real knowledge of
- > hyper-cubes then, isn't it? :)
- > I still bet you can't build one out of C=64's though.... :)
-
- Oh, you could build one, all right. It might even be more useful than
- a single C-64. That's not saying much. (Actually, on a side note, my
- father still uses the C-64 he bought in 1982. He really likes it, but
- soon might be upgrading to a 286. Whoooeee! (And besides, look at what
- Chaney accomplishes with his C-64...))
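-
- (For the curious, the hypercube wiring rule is simple: node i connects to
- every node whose binary address differs from i's in exactly one bit. A
- quick sketch in Python, with the dimension picked purely for illustration:)
-
-     # Toy hypercube topology: each of the 2^d nodes links to the d
-     # nodes whose address differs from its own in exactly one bit.
-     def neighbors(node, d):
-         return [node ^ (1 << k) for k in range(d)]
-
-     d = 4                              # a 4-cube: 16 nodes, 4 links each
-     for node in range(2 ** d):
-         print(node, "->", neighbors(node, d))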
-
- >>I've been leery of bringing models into this discussion, since it is
- >>often hard to relate models of neural networks to actual neural
- >>networks. But now, let me refer to a model by Stanislas Dehaene and
- >>Jean-Pierre Changeux, which simulated Piaget's A-not-B task. The
- >>network's results approximated the performance of human infants.
- >>
- >>The interesting part of the experiment, though, was that they varied
- >>the amount of "noise" that the network received. At high levels of
- >>noise, the network performed at the level of a 7-month-old infant,
- >>but at low levels, it performed at the level of a 12-month-old
- >>infant. The noise level seemed to correspond (inversely) to the degree
- >>of myelination in the frontal lobes. (From the _Journal of Cognitive
- >>Neuroscience_ 1:3, S. Dehaene and J.P. Changeux, A simple model of
- >>prefrontal cortex function: delayed response tasks)
- >>
- > Interesting. If they were able to jump the noise level around to
- > approximate various developmental stages, I wonder if they could/did
- > jump it up to a level that would approximate the development of, say,
- > a 22-week fetus?
- > Be interesting to see the results if they did. It could shed some
- > light on this subject, at least.
-
- Well, the problem with models is that it is entirely possible to read
- too much into them. The strongest conclusion you can make from the data
- gathered is that the performance of the network matches the performance
- of a child at a specific age. Since the mechanism by which the network
- operates does not map one-to-one onto the mechanism by which the child
- performs, it's still only an approximation. Extrapolating the performance
- of the network beyond the data we have from actual experimentation is
- really only guesswork.
-
- Of course, we can determine the amount of noise in an unmyelinated vs.
- a myelinated axon by using the cable equations. We know that the
- performance of a network will degrade rapidly when the noise increases
- and/or when the size decreases. But, without experimental data, we can't
- be sure that our simulated network matches the actual performance of the
- human mind.
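-
- (For anyone who wants to play with those numbers: in the steady-state
- cable equation a signal falls off as exp(-x/lambda), with length constant
- lambda = sqrt(r_m / r_i). Myelin raises the effective membrane resistance
- r_m, so lambda grows and less of the signal is lost. A rough sketch; the
- resistance values are placeholders for illustration, not measured data:)
-
-     import math
-
-     def length_constant(r_m, r_i):
-         # lambda in cm, from membrane resistance (ohm*cm) and axial
-         # resistance (ohm/cm) per unit length of cable
-         return math.sqrt(r_m / r_i)
-
-     def remaining(x, lam):
-         # fraction of the input signal left x cm down the cable
-         return math.exp(-x / lam)
-
-     r_i = 1.0e6                            # axial resistance (made up)
-     bare = length_constant(2.0e5, r_i)     # unmyelinated: low r_m
-     wrapped = length_constant(2.0e7, r_i)  # myelinated: ~100x r_m
-     for name, lam in (("bare", bare), ("myelinated", wrapped)):
-         print(name, "lambda =", round(lam, 3), "cm;",
-               "signal at 0.1 cm =", round(remaining(0.1, lam), 2))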
-
- > [Crabs and Squid deleted, since I've already had lunch]
-
- >>>>>> The reasonable work in question, though, is thought. Just as you
- >>>>>> can't use a 4-bit 16K RAM computer as an effective file server, I
- >>>>>> don't see how you can use the similarly limited abilities of the
- >>>>>> pre-myelinated neural system as a 'thought server'.
- >>
- >>>>>I'm reserving my judgment on the matter for a time when we know
- >>>>>more about all the issues involved. The hypomyelinated mice
- >>>>>discussed later in the post may be our best window into this issue.
- >>>>>In the meantime, it must be obvious to everyone reading this thread
- >>>>>(all 3 of us :-) that neither of us has any clue about whether
- >>>>>myelin is necessary or not. I think that there is at least a fair
- >>>>>amount of information on the biological side that suggests that
- >>>>>myelin is not as central as some claim. On the other hand, your
- >>>>>arguments about the presumed complexity of the network are
- >>>>>certainly thought-provoking (there's that smell again...myelin
- >>>>>burning or something).
- >>
- >>Yes. The major problem that we have in modelling brain processes is
- >>the complexity of the whole thing. I mean, our supercomputers have
- >>problems with models of 2,000-3,000 neurons. Massively parallel systems
- >>reach the 16,000-32,000-neuron level. How much of the brain is actually
- >>dedicated to thought? Maybe what, 10^10 neurons?
- >>
- > I recall reading we'd need a computer the size of (something like)
- > Manhattan to approximate the brain. Does that sound like a reasonable
- > size?
-
- Well, I just did some quick calculations, and came up with over three
- hundred million of the new Thinking Machines CM-5s (I think that's the
- model). Stacked three high, that's a solid block about three miles long
- on each side. But then you need power supplies, cooling systems, and lots
- of cabling.
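-
- (Here's the figuring, for anyone checking me; the 32-SPARC-node CM-5
- configuration is my assumption, as is the 10^10 neuron count above:)
-
-     neurons = 10 ** 10        # rough guess at neurons devoted to thought
-     nodes_per_cm5 = 32        # assumed CM-5 configuration, 1 neuron/node
-     machines = neurons // nodes_per_cm5
-     print(machines)           # 312,500,000 -- over three hundred million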
-
- I suspect (in fact I'm sure) that this model is a bit of overkill. It
- sounds impressive, but it's far more machine than the job calls for. A
- whole SPARC processor for each neuron? Phooey. In practice, I think
- that Intel(?) has a prototype
- neural network chip that simulates about 400 connected neurons in parallel.
- Assuming liberal space requirements for each chip, that's only a block
- 7/10 of a mile on each side, and twenty feet high. Of course, there are
- a few minor implementation problems to be worked out...
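-
- (Same figuring for the chip version; the 400-neuron chip is from above
- and unverified, but the per-chip volume falls right out of the quoted
- block dimensions:)
-
-     neurons = 10 ** 10
-     chips = neurons // 400                    # 25,000,000 chips
-     side_ft = 0.7 * 5280                      # 7/10 of a mile, in feet
-     volume_per_chip = side_ft ** 2 * 20 / chips
-     print(chips, round(volume_per_chip, 1))   # ~10.9 cubic feet per chip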
-
- Gotta run.
-
- John
-