The History of Neural Networks
The origins of neural networks date back to the 1940s, when Warren McCulloch and Walter Pitts introduced the first neural network computational model in their 1943 paper "A Logical Calculus of the Ideas Immanent in Nervous Activity".
In 1958, Frank Rosenblatt developed the perceptron, the first network that could learn by trial and error and the first to use adjustable weights between its nodes. Below is an example of how the perceptron algorithm works. The goal is to find the optimal angle for a line that separates the blue dots from the red dots.
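The trial-and-error learning described above can be sketched in a few lines of Python. This is a minimal illustrative implementation, not Rosenblatt's original algorithm; the point data and learning rate are made up for the example. Whenever a point lands on the wrong side of the line, the weights are nudged toward classifying it correctly.

```python
# Minimal perceptron sketch (illustrative; data and learning rate are invented).
def train_perceptron(points, labels, epochs=20, lr=0.1):
    """points: list of (x, y) pairs; labels: +1 or -1."""
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x, y), label in zip(points, labels):
            pred = 1 if w[0] * x + w[1] * y + b > 0 else -1
            if pred != label:            # misclassified: adjust by trial and error
                w[0] += lr * label * x
                w[1] += lr * label * y
                b += lr * label
    return w, b

def predict(w, b, point):
    x, y = point
    return 1 if w[0] * x + w[1] * y + b > 0 else -1

# Two linearly separable clusters ("blue" = +1, "red" = -1)
pts = [(2, 3), (3, 3), (2, 2), (-2, -1), (-3, -2), (-1, -2)]
lbls = [1, 1, 1, -1, -1, -1]
w, b = train_perceptron(pts, lbls)
print(all(predict(w, b, p) == l for p, l in zip(pts, lbls)))  # True once converged
```

Because the clusters here are linearly separable, the weight updates settle on a line (an "angle") that classifies every point correctly.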
The perceptron had its faults, however. For example, it was unable to solve the "exclusive OR" (XOR) problem. Exclusive OR is a logical operation that outputs true only when its inputs differ. Below is an example of a perceptron attempting to solve it. The goal is to separate the true statements from the false statements.
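The failure can be demonstrated directly. The sketch below (an illustrative example, using the same toy perceptron as above) trains on the four XOR cases; because no single line can separate XOR's true outputs from its false ones, at least one input is always misclassified, no matter how long training runs.

```python
# Illustrative demo: a single-layer perceptron cannot learn XOR.
def train_perceptron(points, labels, epochs=100, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x, y), label in zip(points, labels):
            pred = 1 if w[0] * x + w[1] * y + b > 0 else -1
            if pred != label:
                w[0] += lr * label * x
                w[1] += lr * label * y
                b += lr * label
    return w, b

xor_points = [(0, 0), (0, 1), (1, 0), (1, 1)]
xor_labels = [-1, 1, 1, -1]   # true (+1) only when the inputs differ

w, b = train_perceptron(xor_points, xor_labels)
errors = sum(
    (1 if w[0] * x + w[1] * y + b > 0 else -1) != l
    for (x, y), l in zip(xor_points, xor_labels)
)
print(errors)  # at least 1: no single line separates XOR
```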
The perceptron would never find an angle that separates the two, because it is limited to a single layer, represented in this example by the single line. Much later, in 1975, Paul Werbos developed backpropagation, which made it possible to train networks with multiple layers. The example below shows how the exclusive OR problem would be solved using backpropagation.
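A multi-layer network trained with backpropagation can solve XOR. The sketch below is a minimal illustrative implementation, not Werbos's original formulation: a tiny network with one hidden layer of four sigmoid units, trained by gradient descent on squared error. The layer sizes, learning rate, and epoch count are arbitrary choices for the example.

```python
# Illustrative two-layer network solving XOR via backpropagation.
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Network: 2 inputs -> 4 hidden sigmoid units -> 1 sigmoid output
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(4)]
b1 = [0.0] * 4
W2 = [random.uniform(-1, 1) for _ in range(4)]
b2 = 0.0

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
lr = 0.5

for _ in range(20000):
    for (x1, x2), target in data:
        # Forward pass
        h = [sigmoid(W1[j][0] * x1 + W1[j][1] * x2 + b1[j]) for j in range(4)]
        out = sigmoid(sum(W2[j] * h[j] for j in range(4)) + b2)
        # Backward pass: propagate the error gradient through both layers
        d_out = (out - target) * out * (1 - out)
        for j in range(4):
            d_h = d_out * W2[j] * h[j] * (1 - h[j])
            W2[j] -= lr * d_out * h[j]
            W1[j][0] -= lr * d_h * x1
            W1[j][1] -= lr * d_h * x2
            b1[j] -= lr * d_h
        b2 -= lr * d_out

for (x1, x2), target in data:
    h = [sigmoid(W1[j][0] * x1 + W1[j][1] * x2 + b1[j]) for j in range(4)]
    out = sigmoid(sum(W2[j] * h[j] for j in range(4)) + b2)
    print((x1, x2), round(out))
```

The hidden layer gives the network the extra "lines" the single perceptron lacked: each hidden unit learns its own decision boundary, and the output layer combines them, which is what makes the XOR regions separable.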