- This source contains the following files:
-
- Neuron.c - source for content-addressable memory program
- NOut.ex1 - an example output report generated by the program
- NOut.ex2 - another example output report generated by the program
- Neuron.doc - this file.
-
- The program Neuron.c simulates a SIMPLE stable-state neural network,
- reporting both the input and output states and the energy level after each
- iteration (it is set up for 8 iterations, though the network usually
- stabilizes after about 4). The program demonstrates a very straightforward
- method of programming a content-addressable memory and reading output from
- that memory. It is based on a BASIC program developed in Ed Rietman's book
- "Experiments in Artificial Neural Networks", published by TAB Books, copyright
- 1988.
- The program allows you to create an input vector (vector u - either created
- randomly or entered by hand), a connection strength matrix (matrix T - either
- created randomly or entered by hand), a threshold value, an information value
- (see pg. 27 of "Exp. in ANN"), and a seed for generating the random vector and
- matrix (if necessary - this also allows reproducible tests). It produces an
- output vector (vector v) and feeds this vector back into the connection
- strength matrix until the network stabilizes (the network can have multiple
- stable attractors).
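-
- Concretely, on each iteration the program computes, for every neuron i,
-
- v(i) = 1 if ( SUM over j of T(i,j)*u(j) ) + information > threshold, else 0
-
- and reports the energy E = -0.5 * SUM over i of u(i)*v(i) for that iteration
- (this simply restates the update rule implemented in the source below). For
- example, in NOut.ex1 the input and output vectors of iteration 0 are both 1
- in seven positions, so E = -0.5 * 7 = -3.5, as shown in that report.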
-
- The executable was created on an AMIGA 2000 computer with 1 MB of memory
- using the Manx C Compiler Ver 3.4a and can be run either from the CLI or from
- Workbench (please use the ICON supplied, or incorporate the tool window line
- into your own ICON, as this is what creates the window for the program to run
- in). This program should execute on an AMIGA 500/1000/2000 with no difficulty.
- The source should be equally compilable on most C compilers, as I tried to
- stay as system-independent as possible.
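-
- For instance, on a modern system the source should build directly with a
- generic Unix-style compiler driver (a hypothetical invocation, not the
- original Manx command line):
-
- cc -o Neuron Neuron.c
-
- and the resulting Neuron executable can then be run from the command line,
- answering the prompts as they appear.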
-
- The program writes an output report to a file called "Neuron.output". Two
- example reports are included in the archive (NOut.ex1 and NOut.ex2) for
- comparison purposes.
-
- Feel free to repost this archive to other BBSs so long as you keep the
- copyright and this doc file intact (at least I would appreciate it). Also,
- feel free to modify or use this program in any way you like, provided it is
- not for commercial purposes (i.e. this was written in the spirit of learning
- and promoting Neural Networks and Parallel Distributed Processing, NOT so that
- anyone can make money!!!).
-
- For a more detailed analysis of this program, I suggest consulting Ed
- Rietman's book.
-
- Remember, this is just a simple program demonstrating the programming of
- matrix mathematics and content-addressable memories. Other programs will be
- on the way demonstrating other aspects of PDP and NNets (such as Hebb's
- learning rule, Grossberg's revision, interactive activation, etc.) just as
- soon as I research them myself (I'm still on the learning curve).
-
- If you have any questions regarding this program, PDP, NNets, or anything
- dealing with computers and their use (I've been an IBM mainframe (4300 VM/VSE)
- systems programmer for 5 years, and an IBM applications programmer for 5 years
- before that), send me some EMAIL. Check the BBS you got this from - I might be
- there; if not, I've included my USENET address, as that's where I check for
- mail most often. If you can't find me, check with somebody else on your BBS -
- I can usually be tracked down.
-
- Hope this program helps some newcomers to the field of PDP and NNets,
-
- Shawn P. Legrand, CCP
-
- USENET: spl@cup.portal.com or ...sun!cup.portal.com!spl
-
- ------------------------------ cut here for source -----------------------------
- /*
-
- Original Author: Shawn P. Legrand 15-Sep-88
-
- This program simulates a simple neural network by calculating the
- product of a state vector and a connection matrix T, where T is symmetric
- {T(i,j) = T(j,i)}, dilute (more 0's than 1's), and has T(i,i) = 0 (to prevent
- oscillations and chaotic behavior). Reports are sent to a report file showing
- the input and output vectors and the energy level of the network after each
- iteration.
-
- This program is based on information contained in a book by Ed Rietman,
- 'Experiments in Artificial Neural Networks', published by TAB.
-
- Copyright 1988 Shawn P. Legrand, CCP All Rights Reserved
-
- */
-
- #include <stdio.h>
- #include <stdlib.h>   /* for exit() */
-
- /*
- * Random number generator -
- * adapted from the FORTRAN version
- * in "Software Manual for the Elementary Functions"
- * by W.J. Cody, Jr and William Waite.
- */
-
- static long int iy = 100001;
-
- void sran(long seed)
- {
- iy = seed;
- }
-
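- /* ran returns the next value from a multiplicative (Lehmer-style) congruential
- generator, iy = (125 * iy) mod 2796203, scaled into the range [0,1). */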
- double ran()
- {
- iy *= 125;
- iy -= (iy/2796203) * 2796203;
- return (double) iy/ 2796203.0;
- }
-
- /*
- * Main program logic.
- */
-
- int main(void)
- {
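- /* t is the connection strength matrix T, u is the input vector, v is the
- output vector; io holds the threshold and info the information value. */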
- int neurons, io, info, vector, matrix, i, j, iterate, sigma=0;
- long seed;
- float energy=0;
- double r;
- int t[100][100], u[100], v[100];
- FILE *fp;
-
- /* Open the output report file */
-
- if ((fp = fopen("Neuron.output","w")) == NULL) {
- printf("Error opening Neuron.output!!!\n");
- exit(10);
- }
-
- /* Retrieve initial values */
-
- printf("Input the random seed: ");
- scanf("%ld",&seed);
- sran(seed);
-
- printf("Enter the number of neurons (100 maximum): ");
- scanf("%d",&neurons);
- printf("Input the threshold value (0 to 2 are reasonable values): ");
- scanf("%d",&io);
- printf("Enter the value of the information (0 to 1 is a good value): ");
- scanf("%d",&info);
- printf("Do you want to enter the input vector yourself (1/Yes, 0/No)? ");
- scanf("%d",&vector);
- printf("Do you want to enter the T matrix (1/Yes, 0/No)? ");
- scanf("%d",&matrix);
-
- /* Initialize the weight matrix (matrix T) */
-
- if (!matrix) {
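- /* Random fill: each off-diagonal entry is 1 with probability of about 0.2 and
- 0 otherwise, keeping the matrix dilute; the diagonal is forced to 0. */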
- for (i=0; i<neurons; i++) {
- for (j=0; j<neurons; j++) {
- if (i==j) {
- t[i][j] = 0;
- } else {
- r=ran();
- if (r<0.8) {
- t[i][j] = 0;
- } else {
- t[i][j] = 1;
- }
- }
- }
- }
- } else {
- for (i=0; i<neurons; i++) {
- for (j=0; j<neurons; j++) {
- printf("T(%d,%d): ",i,j);
- scanf("%d",&t[i][j]);
- }
- }
- }
-
- /* Output initial values */
-
- fprintf(fp,"Seed = %ld\n",seed);
- fprintf(fp,"Threshold = %d\n",io);
- fprintf(fp,"Information = %d\n",info);
- fprintf(fp,"T matrix:\n");
- for (i=0; i<neurons; i++) {
- for (j=0; j<neurons; j++) {
- fprintf(fp,"%d",t[i][j]);
- }
- fprintf(fp,"\n");
- }
- fprintf(fp,"\n\n");
-
- /* Retrieve input vector (vector u) */
-
- if (!vector) {
- for (i=0; i<neurons; i++) {
- r=ran();
- if (r<0.5) {
- u[i]=0;
- } else {
- u[i]=1;
- }
- }
- } else {
- for (i=0; i<neurons; i++) {
- fprintf(fp,"Input u(%d): ",i);
- fscanf(fp,"%d",&u[i]);
- }
- }
-
- /* Perform the calculations, report the input and output vectors and the
- energy level, then iterate using feedback from the prior calculation */
-
- for (iterate=0; iterate<8; iterate++) {
- for (i=0; i<neurons; i++) {
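- /* v[i] = 1 if the weighted sum of the inputs plus the information value
- exceeds the threshold io, otherwise v[i] = 0. */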
- for (j=0; j<neurons; j++) {
- sigma+=t[i][j]*u[j];
- }
- sigma+=info;
- if (sigma>io) {
- sigma=1;
- } else {
- sigma=0;
- }
- v[i]=sigma;
- sigma=0;
- }
- fprintf(fp,"Iteration %d:\n",iterate);
- fprintf(fp,"Input vector U:\n");
- for (i=0; i<neurons; i++) {
- fprintf(fp,"%d",u[i]);
- }
- fprintf(fp,"\nOutput vector V:\n");
- for (i=0; i<neurons; i++) {
- fprintf(fp,"%d",v[i]);
- }
- fprintf(fp,"\n");
- fprintf(fp," Energy: ");
- for (i=0; i<neurons; i++) {
- energy+=(float) (u[i]*v[i]);
- }
- energy*=-0.5;
- fprintf(fp,"%f\n\n",energy);
- for (i=0; i<neurons; i++) {
- u[i]=v[i];
- }
- energy=0;
- }
-
- }
- ------------------------------- cut here for example one -----------------------
- Seed = 16000
- Threshold = 1
- Information = 1
- T matrix:
- 0001010000100000
- 0000100010000000
- 0000000000000000
- 0110000000010000
- 0110010010000000
- 1000000011001000
- 0000000000000000
- 0010100001000001
- 0000000000000000
- 0000101000000001
- 1001000000000000
- 0010000000001010
- 0010000100000100
- 0010000101000010
- 0000000000000100
- 0000001000110000
-
-
- Iteration 0:
- Input vector U:
- 1101010011110001
- Output vector V:
- 1101110101100101
- Energy: -3.500000
-
- Iteration 1:
- Input vector U:
- 1101110101100101
- Output vector V:
- 1101110101101111
- Energy: -5.000000
-
- Iteration 2:
- Input vector U:
- 1101110101101111
- Output vector V:
- 1101110101111111
- Energy: -6.000000
-
- Iteration 3:
- Input vector U:
- 1101110101111111
- Output vector V:
- 1101110101111111
- Energy: -6.500000
-
- Iteration 4:
- Input vector U:
- 1101110101111111
- Output vector V:
- 1101110101111111
- Energy: -6.500000
-
- Iteration 5:
- Input vector U:
- 1101110101111111
- Output vector V:
- 1101110101111111
- Energy: -6.500000
-
- Iteration 6:
- Input vector U:
- 1101110101111111
- Output vector V:
- 1101110101111111
- Energy: -6.500000
-
- Iteration 7:
- Input vector U:
- 1101110101111111
- Output vector V:
- 1101110101111111
- Energy: -6.500000
-
- ---------------------------------- cut here for example two --------------------
- Seed = 10
- Threshold = 1
- Information = 1
- T matrix:
- 0001001000110001
- 0010000010000000
- 0000100000000100
- 0000010010000010
- 0101000000000000
- 0000001000001001
- 0010000110000001
- 0000001000010000
- 1000101000000010
- 0001000100010000
- 1110001001000000
- 1001100000000000
- 0010011001000000
- 0000000000001010
- 1011010010100000
- 0000000010000000
-
-
- Iteration 0:
- Input vector U:
- 0000010101101000
- Output vector V:
- 1001011001101110
- Energy: -2.000000
-
- Iteration 1:
- Input vector U:
- 1001011001101110
- Output vector V:
- 1011110111111110
- Energy: -4.000000
-
- Iteration 2:
- Input vector U:
- 1011110111111110
- Output vector V:
- 1111111111111111
- Energy: -6.500000
-
- Iteration 3:
- Input vector U:
- 1111111111111111
- Output vector V:
- 1111111111111111
- Energy: -8.000000
-
- Iteration 4:
- Input vector U:
- 1111111111111111
- Output vector V:
- 1111111111111111
- Energy: -8.000000
-
- Iteration 5:
- Input vector U:
- 1111111111111111
- Output vector V:
- 1111111111111111
- Energy: -8.000000
-
- Iteration 6:
- Input vector U:
- 1111111111111111
- Output vector V:
- 1111111111111111
- Energy: -8.000000
-
- Iteration 7:
- Input vector U:
- 1111111111111111
- Output vector V:
- 1111111111111111
- Energy: -8.000000
-
- ---------------------------------- End of File ---------------------------------
-