SIMULATING ARTIFICIAL NEURAL SYSTEMS
USING PARSIMONIOUS PARALLELISM

Professor William W. Armstrong
Department of Computing Science
University of Alberta
Edmonton, Alberta
Canada
T6G 2H1
Tel. (403) 492 2374
FAX: (403) 492 1071
email: arms@cs.ualberta.ca

Many problems cannot be completely solved by mathematical techniques,
even when a wealth of empirical data is available. Consider, for
example, a medical application in which measurements relating to
symptoms, treatments and history are given for each person in a
sample, together with a value indicating whether a certain disease
was found to be present after costly tests. Suppose we want to find a
simple relationship between the given data and the presence of the
disease, which could lead to an inexpensive screening method or to an
understanding of which factors are important for prevention. If there
are many factors, with complex interactions among them, the usual
statistical techniques may be inappropriate. In that case, one way of
analysing the data is to use adaptive logic networks, which can, in
principle, discover simple relationships by means of an adaptive
learning procedure. Since the method relies only on empirical data,
very little human intervention may be required to obtain an answer,
making the approach very easy to try out.

Adaptive logic networks belong to the class of artificial neural
systems (ANS). Beyond applications in data analysis, such as the one
above, they are being used in a growing number of applications where
high-speed computation of functions is important. For example,
correcting an industrial robot's positioning to take into account the
mass of the object it is holding would normally require time-consuming
numerical calculations that cannot be carried out in real time while
the robot is in use. An ANS can learn the necessary corrections from
trial motions and can then apply them in real time. It is not
necessary to present all possible masses and motions during training,
since an ANS can extrapolate smoothly to cases not seen in training.

Speed and extrapolation ability would also be useful where agile
motions of autonomous robots are required. Extremely fast response is
needed in electronic systems whose parameters must be adjusted on the
fly, as in automatic equalizers for communication links. Other
applications include pattern recognition, sensor fusion, sonar signal
interpretation, and many areas of data analysis.

The usual type of ANS depends on fast multiplications and additions.
Special chips have been built to do ANS computations at high speed,
even resorting to analog operations for greater speed. A type of ANS
has been developed at the University of Alberta, following earlier
work at Bell Telephone Laboratories and the Universite de Montreal,
which uses only the simple logical functions AND, OR, and NOT. In
hardware, the computations would be carried out in parallel by a tree
of combinational logic gates. Comparisons with a recent chip using
standard techniques suggest that hardware based on adaptive logic
could evaluate functions at least one thousand times faster, putting
it in the range of a trillion connections per second.
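
To make the structure concrete, here is a minimal sketch of how such
a tree of logic nodes might be represented and evaluated in C. It is
not code from the distributed package; the type and function names
(Node, eval, and so on) are invented for this note.

    /* Illustrative sketch only: a tiny fixed binary tree of logic
       nodes; leaves read input bits, optionally negated. */
    #include <stdio.h>

    typedef enum { LEAF, AND_NODE, OR_NODE } NodeKind;

    typedef struct Node {
        NodeKind kind;
        int leaf_index;           /* which input bit, if kind == LEAF     */
        int negate;               /* apply NOT to a leaf input if nonzero */
        struct Node *left, *right;
    } Node;

    /* Recursively evaluate the tree on a vector of 0/1 inputs. */
    int eval(const Node *n, const int *x)
    {
        if (n->kind == LEAF) {
            int v = x[n->leaf_index];
            return n->negate ? !v : v;
        }
        int a = eval(n->left, x);   /* both subtrees are evaluated here */
        int b = eval(n->right, x);
        return (n->kind == AND_NODE) ? (a & b) : (a | b);
    }

    int main(void)
    {
        /* (x0 AND NOT x1) OR x2 */
        Node x0 = { LEAF, 0, 0, NULL, NULL };
        Node x1 = { LEAF, 1, 1, NULL, NULL };
        Node x2 = { LEAF, 2, 0, NULL, NULL };
        Node and01 = { AND_NODE, 0, 0, &x0, &x1 };
        Node root  = { OR_NODE,  0, 0, &and01, &x2 };

        int input[3] = { 1, 0, 0 };
        printf("tree output = %d\n", eval(&root, input));
        return 0;
    }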

Another advantage of the logic networks is that most of a computation
can often be skipped. For example, if a logical zero arrives at one
input of an AND node in the tree, the output of that node is
determined without computing the other input, or even knowing the
inputs that give rise to it. This produces no speedup in a completely
parallel system; however, in systems running on ordinary processors,
or in systems that re-use special ANS hardware sequentially (the usual
case), it is of critical importance for speed. Systems that combine
special hardware parallelism with the possibility of leaving out
unnecessary computations are using "parsimonious parallelism". A small
amount of such parsimony applies to ordinary ANS, but in logic
networks the resulting speedups can amount to many orders of
magnitude, giving this approach a great advantage over the usual one.
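
The saving from skipping work can be shown with a small change to the
evaluator sketched above: evaluate one child first and return as soon
as the result is decided, so the other subtree is never visited. This
is again an invented illustration, reusing the Node type assumed
earlier rather than anything from the package.

    /* Illustrative lazy (short-circuit) evaluation of a logic node.
       Assumes the invented Node type from the previous sketch. */
    int eval_lazy(const Node *n, const int *x)
    {
        if (n->kind == LEAF) {
            int v = x[n->leaf_index];
            return n->negate ? !v : v;
        }
        int a = eval_lazy(n->left, x);
        /* A 0 decides an AND node and a 1 decides an OR node, so the
           whole right subtree is skipped in those cases. */
        if (n->kind == AND_NODE && a == 0) return 0;
        if (n->kind == OR_NODE  && a == 1) return 1;
        return eval_lazy(n->right, x);
    }

In a deep tree, this pruning is the parsimony described above: on a
sequential processor, on average only a fraction of the nodes need to
be visited before the output is known.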

The backpropagation technique for training standard ANS is quite
slow. There is, however, a technique for training adaptive logic
networks that runs at combinational speeds, so on-line learning would
be quite feasible.
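
This announcement does not spell out that training rule, so the
following is only a guess at the flavour of a node-level adaptation
step, not the algorithm used in the package: each node keeps a small
counter that is nudged when the node is judged responsible for the
output on a training pattern, and the node's function flips between
AND and OR when the counter crosses a threshold.

    /* Purely illustrative guess at a node-level adaptation rule; the
       package's actual training procedure may differ entirely. */
    #define THRESHOLD 4

    typedef struct {
        int counter;  /* pushed up toward OR, down toward AND (assumption) */
        int is_or;    /* current function of the node: 1 = OR, 0 = AND     */
    } AdaptiveNode;

    /* If the node is responsible for the tree's output on this training
       pattern, move its counter toward the desired output and switch the
       node's function when the counter saturates. */
    void adapt(AdaptiveNode *n, int desired_output, int responsible)
    {
        if (!responsible)
            return;
        n->counter += desired_output ? 1 : -1;
        if (n->counter >  THRESHOLD) { n->is_or = 1; n->counter =  THRESHOLD; }
        if (n->counter < -THRESHOLD) { n->is_or = 0; n->counter = -THRESHOLD; }
    }

An update of this kind is only a counter adjustment per node, which is
at least consistent with the combinational-speed claim above.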

Finally, although the networks are built from logical operations, they
can also be applied to functions of real or integer values by using
appropriate encodings of reals or integers into logical vectors. The
results of the logical computations are then decoded to obtain the
real or integer results.
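
One simple encoding of this kind, chosen here purely for illustration
(the encoding used in the package may well differ), is a thermometer
code: an integer in a known range is mapped to a vector of bits that
are 1 up to the value and 0 beyond it, and decoding simply counts the
1 bits.

    /* Illustrative thermometer encoding of an integer into a logical
       vector; not necessarily the encoding used by the package. */
    #include <stdio.h>

    #define WIDTH 8   /* length of the logical vector; encodes 0..WIDTH */

    /* Encode v (0..WIDTH) as bits[0..WIDTH-1]: bits[i] = 1 iff i < v. */
    void encode(int v, int bits[WIDTH])
    {
        int i;
        for (i = 0; i < WIDTH; i++)
            bits[i] = (i < v) ? 1 : 0;
    }

    /* Decode by counting 1 bits, recovering the original integer. */
    int decode(const int bits[WIDTH])
    {
        int i, v = 0;
        for (i = 0; i < WIDTH; i++)
            v += bits[i];
        return v;
    }

    int main(void)
    {
        int bits[WIDTH];
        encode(5, bits);
        printf("decoded value = %d\n", decode(bits));   /* prints 5 */
        return 0;
    }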

Demonstration software in C source form is available to researchers
for non-commercial purposes only. The software is intended to be very
clear rather than highly optimized for performance. Researchers are
invited to copy it and modify it to suit their needs, without a
license. Anyone requiring better performance should inquire at the
above address about other versions of the software, which will be
available later. They will offer great improvements in the adaptation
and evaluation algorithms, as well as numerous optimizations at the
coding level. Such a version will be ready by 1991.

Best wishes for success in using the research version of the adaptive
logic network package! Please let us know about your successes and
failures, so that we can better serve the research community.