Making Sense of the Data (in progress..)

Topology learning and associative memory algorithms.
Topology learning can be used for clustering. Some of the methods allow continuous clustering with very few parameters, which makes them suitable for streaming data, where the number of clusters is not known a priori.


  
Categories: Neural Networks, Competitive Learning, Incremental Topology Learning, Associative Memory

Each entry below lists the method, its features, and (where applicable) its variations.
  
  
SOM (Kohonen's Feature Maps). Projects high-dimensional data onto a lower-dimensional (typically 2D) space. The number of nodes must be specified in advance, together with a pre-specified, fixed topology chosen to match the data. Learning rate and neighbourhood radius decay over time. Has two layers: input and map layer. Forms spatial clusters and performs vector quantization. May be initialised as a single line of neurons, a 2D grid, or any other structure; a minimal training sketch is given below. Variations: Hierarchical SOM, Parallel SOM, eSOM, Self-Growing SOM.
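
A minimal sketch of one possible SOM training loop in Python/NumPy, assuming a rectangular map, a Gaussian neighbourhood, and exponentially decaying learning rate and radius; the function and parameter names are illustrative, not taken from any particular library.

```python
import numpy as np

def train_som(data, map_rows=10, map_cols=10, epochs=20, lr0=0.5, radius0=5.0):
    rng = np.random.default_rng(0)
    dim = data.shape[1]
    weights = rng.random((map_rows, map_cols, dim))            # map layer
    grid = np.stack(np.meshgrid(np.arange(map_rows),
                                np.arange(map_cols), indexing="ij"), axis=-1)
    for epoch in range(epochs):
        lr = lr0 * np.exp(-epoch / epochs)                     # decaying parameters
        radius = radius0 * np.exp(-epoch / epochs)
        for x in data:
            # best-matching unit: nearest node in input space
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighbourhood in map space pulls nearby nodes towards x
            grid_dist = np.linalg.norm(grid - np.array(bmu), axis=-1)
            h = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))
            weights += lr * h[..., None] * (x - weights)
    return weights

# Usage: quantize 3-dimensional points onto a 10x10 map.
codebook = train_som(np.random.default_rng(1).random((500, 3)))
```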
  
  
BAM. Forgets previously learned data.
  
  
Hopfield. Forgets previously learned data, even when trained in batch mode. A minimal storage/recall sketch is given below.
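
A minimal sketch of Hebbian storage and recall in a Hopfield network, assuming bipolar (-1/+1) patterns and synchronous updates; the helper names are illustrative.

```python
import numpy as np

def store(patterns):
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)              # Hebbian outer-product rule
    np.fill_diagonal(W, 0)
    return W / patterns.shape[0]

def recall(W, probe, steps=10):
    s = probe.copy()
    for _ in range(steps):               # synchronous updates
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = store(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1])   # corrupted copy of the first pattern
print(recall(W, noisy))
```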
  
SOIAM. An associative memory based on SOINN. Does not cope with temporal sequences.
  
  
  
SOINN. Insertion of nodes is stopped and restarted as new input patterns arrive, which avoids an indefinite increase in the number of nodes. Copes with temporal sequences. It is difficult to choose when to stop training the first layer and start training the second layer. Variations: M-SOINN, E-SOINN.
  
  
GAM. A 3-layer architecture with input, symbol-memorisation, and symbol-association (grounding) layers.
  
  
GNG. The number of nodes is not predefined, but a maximum number of nodes must be set; training terminates when the network reaches this user-defined size. Competitive Hebbian Learning (CHL) is not optional. Has issues adapting to rapidly changing distributions. Nodes are added after a fixed number of iterations (see the growth-step sketch below). The structure of the network is not constrained, so it can map inputs onto different dimensionalities within the same network. Variations: GNGU.
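
A sketch of the GNG growth step only (a new node is inserted between the highest-error node and its highest-error neighbour), assuming node positions, accumulated errors, and edges are kept in a list, a list, and a set of index pairs respectively; this is illustrative, not a complete GNG implementation.

```python
import numpy as np

def grow(w, err, edges, alpha=0.5):
    q = int(np.argmax(err))                       # node with largest accumulated error
    neighbours = [j for (i, j) in edges if i == q] + [i for (i, j) in edges if j == q]
    f = max(neighbours, key=lambda j: err[j])     # its worst neighbour
    r = len(w)                                    # index of the new node
    w.append((w[q] + w[f]) / 2.0)                 # insert halfway between q and f
    edges.discard((q, f)); edges.discard((f, q))
    edges.add((q, r)); edges.add((f, r))          # rewire through the new node
    err[q] *= alpha; err[f] *= alpha              # discount the local errors
    err.append(err[q])                            # new node inherits the discounted error
    return r

w = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
err = [0.2, 0.9, 0.4]
edges = {(0, 1), (1, 2)}
grow(w, err, edges)   # inserts a node between node 1 and its worst neighbour
```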
  
eSOM. Faster than SOM. Has two layers: input and map layer. Forms spatial clusters but does not perform vector quantization. No topological constraints: nodes are not organised into a 1D or 2D structure.
  
  
E-SOINN. Performs better than SOINN on many datasets and uses fewer parameters. Single-layered network. The result depends on the order of the input data. Uses Euclidean distance to find the nearest node, which may not scale well to higher dimensions.
  
GNGU. Can adapt to rapidly changing distributions by relocating less useful nodes: nodes with low utility, i.e. those that contribute little to reducing the error, are removed, and new nodes are inserted where they would contribute most to reducing the error. A sketch of the utility test is given below.
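
A sketch of a GNG-U-style utility test, assuming each node keeps an accumulated error and an accumulated utility; the threshold k and the helper name are illustrative.

```python
import numpy as np

def node_to_relocate(err, util, k=3.0):
    worst = int(np.argmax(err))                  # node carrying the most error
    least = int(np.argmin(util))                 # node contributing the least
    if err[worst] / max(util[least], 1e-12) > k:
        return least                             # remove this node and reinsert it elsewhere
    return None
```
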
M-SOINN. Allows setting similarity thresholds for all nodes. Moves the winning node and its neighbours closer to the input. Prunes clusters containing only a few nodes.
  
LVQ. The number of neurons is pre-defined. Not suitable for incremental learning.
GCS. Based on SOM. The number of neurons is not predefined, but a maximum number of nodes must be set. Nodes are inserted after a fixed number of iterations. Topology-preserving; the GCS network structure is constrained.
  
MAM. The number of associations must be pre-defined. With 3 layers it can handle 3-to-3 associations, but not 4-to-4 associations.
K-means. Must specify the number of clusters.
  
KFMAM
  
KFMAM-FW. Fixed weights. May enter an infinite loop when creating edges between nodes if the maximum number of nodes is not given.
  
  
ANG. Enables incremental learning. Treats two overlapping clusters as one. Can be used for clustering.
Self-Growing SOM. Does not need a pre-specified number of nodes. After every specified number of iterations, neurons are added into the map space: instead of a single neuron with learned connections, a whole row or column of neurons is added, so that the grid structure of the SOM is maintained (see the sketch below).
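
A sketch of how a whole column can be inserted while keeping the SOM grid rectangular, assuming the weights are stored as a (rows, cols, dim) array and new nodes are initialised by interpolating their neighbours; the function name is illustrative.

```python
import numpy as np

def insert_column(weights, col):
    # Insert a new column between columns col-1 and col (col must be >= 1).
    left, right = weights[:, col - 1, :], weights[:, col, :]
    new_col = (left + right) / 2.0               # interpolate between the neighbours
    return np.insert(weights, col, new_col, axis=1)

grid = np.random.default_rng(0).random((4, 4, 3))
grid = insert_column(grid, 2)                    # grid now has shape (4, 5, 3)
```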
  
SOINN-AM. Given a pattern, reconstructs it.

SOM-AM. Associative memory for temporal sequences. Initial weights are crucial.
  
  
  
NG (Neural Gas). Parameters decay over time. CHL is optional. Can map inputs onto different dimensionalities within the same network. A sketch of the rank-based update is given below.
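
A sketch of the rank-based Neural Gas adaptation step, assuming a soft neighbourhood over distance ranks with a learning rate eps and neighbourhood range lam that decay over time (their schedules are omitted here).

```python
import numpy as np

def ng_step(weights, x, eps, lam):
    order = np.argsort(np.linalg.norm(weights - x, axis=1))  # rank nodes by distance to x
    ranks = np.empty(len(weights))
    ranks[order] = np.arange(len(weights))
    h = np.exp(-ranks / lam)                                  # soft, rank-based neighbourhood
    return weights + eps * h[:, None] * (x - weights)
```
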
PSOM. An interpolation-based approach to self-organisation.
LB-SOINN
GTM. An alternative to SOM. Does not require decaying parameters. Maps high-dimensional data onto a lower-dimensional space, with added noise. Uses RBFs for the nonlinear mapping from the latent space to the data space. Good for representation of the data.
TRN
GGG
GM?
CCLA. Growing network. Supervised. Nodes are added to the hidden layer; new nodes act as feature detectors.
Incremental Growing Grid Nodes are added to the perimeter of the grid that grows to cover the input space.
RCE. Uses prototype vectors to describe particular classes. If none of the prototypes is close to the input, a new class is generated. Prototypes cannot move once they have been placed (see the sketch below).
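
A sketch of RCE-style prototype placement, assuming a fixed radius decides whether an existing prototype of the correct class covers the input; the function and parameter names are illustrative.

```python
import numpy as np

def rce_update(prototypes, labels, x, y, radius=0.5):
    for p, lab in zip(prototypes, labels):
        if np.linalg.norm(p - x) <= radius and lab == y:
            return                                # already covered, nothing to add
    prototypes.append(x.copy())                   # place a new, immovable prototype
    labels.append(y)

prototypes, labels = [], []
rce_update(prototypes, labels, np.array([0.1, 0.2]), "A")
rce_update(prototypes, labels, np.array([0.15, 0.25]), "A")   # covered, not added
rce_update(prototypes, labels, np.array([2.0, 2.0]), "B")     # new prototype placed
```
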
ART. A more complex example of the RCE idea: adds new categories when a mismatch is found between the input and the existing categories.
CLAM. A few nodes participate in classification, rather than a winner-takes-all approach.
  
  
GWR. Does not connect the winning and the second-winning node. New nodes can be added at any time, not only after a fixed number of iterations. Can be used as a novelty detector: if the node that fires has not fired before, or has fired only infrequently, then the input is novel. A sketch of the insertion test is given below.
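
A sketch of a GWR-style "grow when required" test, assuming the activity is computed from the distance to the best-matching node and each node keeps a habituation-style firing counter that decreases as it fires; thresholds and names are illustrative.

```python
import numpy as np

def should_insert(weights, firing, x, activity_thr=0.8, firing_thr=0.3):
    best = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
    activity = np.exp(-np.linalg.norm(weights[best] - x))   # close match -> high activity
    # Insert a node (the input is treated as novel) when the best match is poor
    # even though the winner has already fired often (low habituation counter).
    return activity < activity_thr and firing[best] < firing_thr, best
```
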
XOM ?