Artificial Neural Networks/Competitive Learning

Competitive Learning
Competitive learning is a learning rule based on the idea that only one neuron in a given layer fires for a given input. Weights are adjusted so that a single neuron in a layer, for instance the output layer, wins each iteration. Competitive learning is useful for classifying input patterns into a discrete set of output classes. The "winner" of each iteration, element i*, is the element whose total weighted input is the largest. Using this notation, one example of a competitive learning rule can be defined mathematically as:


 * $$w_{ij}[n+1] = w_{ij}[n] + \Delta w_{ij}[n]$$
 * $$\Delta w_{ij}[n] = \left\{ \begin{matrix} \eta (x_i - w_{ij}[n]) & \mbox{ if } j = i^* \\ 0 & \mbox{ otherwise}\end{matrix}\right.$$
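The winner-take-all update above can be sketched in a few lines of NumPy. This is a minimal illustration, not a reference implementation: the weight layout (one row of `W` per neuron), the learning rate `eta`, and the function name are all assumptions made for the example.

```python
import numpy as np

def competitive_step(W, x, eta=0.1):
    """One winner-take-all update.

    W   : (n_neurons, n_inputs) matrix; row i holds neuron i's incoming weights
    x   : (n_inputs,) input vector
    eta : learning rate (illustrative value, not from the text)
    """
    winner = int(np.argmax(W @ x))       # i* = neuron with the largest total weighted input
    W = W.copy()
    W[winner] += eta * (x - W[winner])   # only the winner's weights move toward x
    return W, winner
```

Repeated calls with different inputs pull each winning neuron's weight vector toward the cluster of inputs it wins, which is what makes the rule useful for classification into discrete output classes.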

Learning Vector Quantization
In a learning vector quantization (LVQ) machine, the input values are compared to the weight vector of each neuron. The neuron whose weight vector most closely matches the input is known as the best matching unit (BMU) of the system. The weight vector of the BMU and those of nearby neurons are adjusted toward the input vector by a certain step size. Neurons become trained as individual feature detectors, and a combination of feature detectors can be used to identify large classes of features from the input space. The LVQ algorithm is a simplified precursor to more advanced learning algorithms, such as the self-organizing map. LVQ training is a type of competitive learning rule.
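The BMU search and update described above can be sketched as follows. This is a simplified illustration under stated assumptions: the BMU is chosen by Euclidean distance, only the BMU itself is moved (the neighborhood updates of a full self-organizing map are omitted), and the step size `eta` is an arbitrary example value.

```python
import numpy as np

def bmu_update(W, x, eta=0.5):
    """Find the best matching unit and move it toward the input.

    W   : (n_neurons, n_inputs) matrix; row i is neuron i's weight vector
    x   : (n_inputs,) input vector
    eta : step size (illustrative value)
    """
    dists = np.linalg.norm(W - x, axis=1)  # distance from x to each weight vector
    bmu = int(np.argmin(dists))            # best matching unit: closest weight vector
    W = W.copy()
    W[bmu] += eta * (x - W[bmu])           # adjust the BMU toward the input
    return W, bmu
```

Extending this to also move the BMU's neighbors (with a step that decays with distance on the neuron grid) gives the self-organizing map update the text mentions as LVQ's more advanced successor.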