Self-organization of very large data collections in brain-like models

Teuvo Kohonen
Neural Networks Research Centre, Helsinki University of Technology, 02015 HUT, Espoo, Finland
The Self-Organizing Map (SOM) is the most widespread unsupervised artificial-neural-network algorithm; it is a compromise between biological modeling and statistical data processing. On the one hand, it sheds some light on the mystery of the development of ordered and organized brain maps, especially in the cerebral cortices. On the other hand, its practical applications range from speech recognition, image analysis, and industrial process control to the automatic ordering of extensive document libraries, the visualization of financial records, and the categorization of the brain's electric and magnetic signals.
The self-organizing process may be realized in any set of elements in which only a few basic operational conditions are assumed. For simplicity, let the elements (e.g. neurons, cortical columns, or modules consisting of closely cooperating neurons) form a regular planar array, and let each element represent a set of numerical values Mi which we call a model. These values may correspond to some parameters of a module in the neuronal system, and in computational neuroscience it is customary to identify them with synaptic efficacies. The models, however, can also be much more general and take into account other biological details. We further assume that each model Mi is modified by the messages X which the unit receives. Let there exist some mechanism by which an incoming message X, e.g., a set of parallel numerical signal values, can be transferred simultaneously to all units and compared with all models Mi. The model that best matches the present message (e.g., whose parameter values comply best with the signal values at a given time) is called the winner and denoted Mc. Another requirement is that the models shall change only in the local vicinity of the winner(s), and that all the modified models shall then resemble the prevailing message better than before. When the models in the neighborhood of the winner simultaneously tend to resemble the prevailing message X better, they also tend to become more similar mutually, i.e., the differences between all models in the neighborhood of Mc are smoothed. Different messages affect different parts of the set of models at different times, and thus the models Mi, after many learning steps, tend to acquire values that relate to each other smoothly over the whole array in the same way as the original messages X do in the 'signal space'; in other words, topographical maps of the sensory occurrences start to emerge.
These three subprocesses - broadcasting of the input, selection of the winner, and adaptation of the models in the spatial neighborhood of the winner - seem to be sufficient, in the general case, to define a self-organization process that results in the emergence of topographically organized 'maps'.
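The three subprocesses above can be sketched in code. The following is a minimal, illustrative implementation, not the author's own formulation: the grid size, the squared-Euclidean matching criterion, the Gaussian neighborhood function, and the shrinking learning rate and radius are all assumed choices commonly used in SOM practice.

```python
import math
import random

random.seed(0)

GRID = 10  # regular 10x10 planar array of units
DIM = 2    # dimensionality of the input messages X

# Each unit i holds a model M_i, here a DIM-dimensional vector
# initialized at random (e.g., random synaptic efficacies).
models = [[random.random() for _ in range(DIM)]
          for _ in range(GRID * GRID)]

def grid_pos(i):
    """(row, col) position of unit i on the planar array."""
    return divmod(i, GRID)

def sq_dist(a, b):
    """Squared Euclidean distance used as the matching criterion."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train_step(x, lr, radius):
    # 1. Broadcast: the message x is compared with every model M_i,
    #    and the best-matching model (the winner M_c) is selected.
    c = min(range(len(models)), key=lambda i: sq_dist(models[i], x))
    rc, cc = grid_pos(c)
    # 2. Adapt: models in the spatial neighborhood of the winner are
    #    moved toward x; a Gaussian weight (an assumed neighborhood
    #    function) makes the change local to the winner's vicinity.
    for i, m in enumerate(models):
        r, col = grid_pos(i)
        d2 = (r - rc) ** 2 + (col - cc) ** 2
        h = lr * math.exp(-d2 / (2 * radius ** 2))
        for k in range(DIM):
            m[k] += h * (x[k] - m[k])

# Many learning steps with different messages X; shrinking the
# learning rate and neighborhood radius over time lets the map
# first order globally and then fine-tune locally.
STEPS = 2000
for t in range(STEPS):
    frac = t / STEPS
    x = [random.random(), random.random()]
    train_step(x, lr=0.5 * (1 - frac), radius=1 + 4 * (1 - frac))
```

After training, neighboring units on the array hold mutually similar models, i.e., the model values relate to each other smoothly over the grid, which is the topographic ordering the text describes.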
For a more complete discussion, see: Kohonen, T. and Hari, R., Trends in Neurosciences (in press).