Talk:Artificial Intelligence/Neural Networks/Introduction

Old content
Some content from one of the previous AI book stubs (Computer Programming/AI) on various types of NNs.


 * perceptron
 * The perceptron is a class of feedforward neural network. Its nodes, or artificial neurons, sum their inputs, which may be weighted, and produce a response (output) if a threshold value is reached.  There may be a single layer of nodes, or several layers that each direct their output to the next.
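The thresholded weighted sum described above can be sketched in a few lines of Python. This is a minimal illustration, not a complete perceptron with a learning rule; the weights and threshold here are chosen by hand so the node computes logical AND.

```python
def perceptron(inputs, weights, threshold):
    # Sum the weighted inputs; fire (output 1) if the threshold is reached.
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Hand-picked weights/threshold so the node behaves as logical AND.
assert perceptron([1, 1], [0.6, 0.6], 1.0) == 1
assert perceptron([1, 0], [0.6, 0.6], 1.0) == 0
assert perceptron([0, 0], [0.6, 0.6], 1.0) == 0
```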


 * feedforward neural network
 * A node in a neural network is similar to the neuron of a biological system: it receives input over its connections and produces output that depends on that input.  A group of these nodes and their connections constitutes a feedforward neural network; the outputs of the nodes, the fundamental units, form the input connections of the nodes in the subsequent layer.  The nodes at the same hierarchical level form a layer.  Note that information proceeds only forward; there are generally no loops in the network.


 * Nodes are usually divided into three layers in feedforward neural networks: the input layer, the hidden layer, and the output layer. The input layer is analogous to sensory neurons in that it simply gathers data.  The hidden layer is similar to interneurons and is responsible for processing the information, along with the output layer, which could be compared to motor neurons.
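The input/hidden/output structure just described can be sketched as a forward pass in Python. This is only an illustrative sketch with made-up weights; each layer is a list of nodes, and each node applies a sigmoid to the weighted sum of the previous layer's outputs.

```python
import math

def sigmoid(x):
    # Smooth "activation" response in (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weight_matrix):
    # Each row of weight_matrix holds the incoming weights of one node.
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)))
            for row in weight_matrix]

def feedforward(inputs, hidden_weights, output_weights):
    # Information flows strictly forward: input -> hidden -> output.
    hidden = layer(inputs, hidden_weights)
    return layer(hidden, output_weights)

# Example with arbitrary weights: 2 inputs, 2 hidden nodes, 1 output node.
out = feedforward([1.0, 0.0],
                  [[0.5, -0.5], [0.3, 0.8]],
                  [[1.0, -1.0]])
assert len(out) == 1 and 0.0 < out[0] < 1.0
```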


 * multi-layer neural networks
 * Multi-layer neural networks are usually feed-forward. Each node of the preceding layer has a connection with one or more nodes in the subsequent layer.  Learning techniques such as back-propagation, which passes error signals backwards through the network during training (the network topology itself remains loop-free), are often utilized in multi-layer networks.  Training adjusts the weights so that the network settles on the desired output when a certain input is present.
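The error-driven weight adjustment mentioned above can be illustrated in its simplest form, the delta rule for a single linear node, which is the single-layer special case of back-propagation. This is a sketch with made-up data, not a full multi-layer implementation; full back-propagation applies the same gradient idea layer by layer.

```python
def train_step(weights, inputs, target, lr=0.1):
    # Compute the node's output, measure the error against the target,
    # and nudge each weight in the direction that reduces that error.
    output = sum(w * x for w, x in zip(weights, inputs))
    error = target - output
    return [w + lr * error * x for w, x in zip(weights, inputs)]

# Repeated updates drive the node's output toward the target value.
w = [0.0, 0.0]
for _ in range(100):
    w = train_step(w, [1.0, 2.0], 1.0)
output = sum(wi * xi for wi, xi in zip(w, [1.0, 2.0]))
assert abs(output - 1.0) < 0.01
```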


 * recurrent neural networks
 * Recurrent neural networks differ from feedforward networks in that data may flow both backwards and forwards. Subsequent layers may provide output to preceding layers.  Nodes may even supply input to themselves.
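A single recurrent node, one that supplies input to itself, can be sketched as follows. The weights here are arbitrary; the point is that the node's output depends on its own previous output, so it retains a trace of past inputs even after the input drops to zero.

```python
import math

def recurrent_step(x, prev_output, w_in=0.8, w_rec=0.5):
    # Output depends on the current input AND the node's previous output.
    return math.tanh(w_in * x + w_rec * prev_output)

# Feed a pulse, then silence: the node's state decays but stays nonzero,
# acting as a short-term memory of the earlier input.
state = 0.0
for x in [1.0, 0.0, 0.0]:
    state = recurrent_step(x, state)
assert state > 0.0
```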

I posted it here, on the talk page, because it wasn't much of a start anyway. --Mrwojo (talk) 04:31, 1 January 2009 (UTC)

Good idea. Essentially this is covered by the Network Model description I added to the introductory page, although we might want to edit in recurrent networks just so the term is introduced. One thing we want to consider is how much information we want on neural networks; after all, they are a major topic, and journals have been published on them since probably the 50's or so. Wikipedia tries to separate the field into "Artificial NN" and "Natural NN". I think the main difference is the tendency for A.I. implementations to move away from attempting to model neural circuits from natural systems. My own work in Artificial Consciousness tends to favor what Wikipedia would call "Natural NN". The primary difference is the Distributed Processing school, which divorced neural networks from the natural-modeling role in order to take advantage of the opportunity to implement a distributed processing system. Ideally, A.I. techniques oriented towards distributed processing should not be mistaken for neural networks, but according to Wikipedia, that confusion is central to how the articles are being written. Perhaps what I should do is split this article into two sections: one about natural neural networks, and one about distributed processing applications using artificial analogs of neural networks. What do you think?--Graeme E. Smith (talk) 04:55, 16 May 2009 (UTC)