Day 4: What are Perceptrons?
Perceptrons are the building blocks of Neural Networks. How do Perceptrons work?
Jun 03, 2021

Perceptron
The perceptron was first proposed by Frank Rosenblatt in his 1958 paper "The Perceptron", and he described it as "the first machine which is capable of having an original idea". It was inspired by an earlier neuron model by McCulloch and Pitts, and was later carefully analyzed and refined by Minsky and Papert in 1969.
A perceptron can have $n$ inputs $x_1, x_2, \ldots, x_n$, each with an associated weight $w_1, w_2, \ldots, w_n$.

The neuron's output, whether 0 or 1, is determined by whether the weighted sum of the inputs exceeds a threshold:

$$
\text{output} =
\begin{cases}
0 & \text{if } \sum_{i=1}^{n} w_i x_i \le \text{threshold} \\
1 & \text{if } \sum_{i=1}^{n} w_i x_i > \text{threshold}
\end{cases}
$$

This can be rewritten by moving the threshold to the other side of the inequality and replacing it with a bias $b = -\text{threshold}$:

$$
\text{output} =
\begin{cases}
0 & \text{if } \sum_{i=1}^{n} w_i x_i + b \le 0 \\
1 & \text{if } \sum_{i=1}^{n} w_i x_i + b > 0
\end{cases}
$$

Now instead of handling the bias separately, the commonly used convention is to absorb it into the sum by defining $x_0 = 1$ and $w_0 = b$:

$$
\text{output} =
\begin{cases}
0 & \text{if } \sum_{i=0}^{n} w_i x_i \le 0 \\
1 & \text{if } \sum_{i=0}^{n} w_i x_i > 0
\end{cases}
$$

Also, notice that the summation index $i$ now starts from 0 instead of 1, since the bias term is included as $w_0 x_0$.
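To make the notation concrete, here is a minimal sketch of the decision rule in both forms, written in Python with NumPy; the input, weight, and bias values are made up purely for illustration:

```python
import numpy as np

def perceptron_output(x, w, b):
    """Perceptron decision rule: 1 if w . x + b > 0, else 0."""
    return 1 if np.dot(w, x) + b > 0 else 0

def perceptron_output_absorbed(x, w):
    """Same rule with the bias absorbed: w[0] holds b, and x_0 = 1."""
    x = np.insert(x, 0, 1.0)  # prepend x_0 = 1
    return 1 if np.dot(w, x) > 0 else 0

x = np.array([1.0, 0.0])   # made-up inputs
w = np.array([0.5, 0.5])   # made-up weights
b = -0.2                   # made-up bias

print(perceptron_output(x, w, b))                         # 1, since 0.5 - 0.2 > 0
print(perceptron_output_absorbed(x, np.insert(w, 0, b)))  # 1, the same rule
```

Both functions implement the same threshold check; the second just folds the bias into the weight vector, which is the convention used in the last equation above.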
The advantages over the McCulloch and Pitts model:
- We can introduce non-boolean inputs.
- The weights are not restricted to unity, so by changing the weights we can assign different importance to each input.
- Unlike the McCulloch and Pitts model, there are no special inhibitory or excitatory inputs.
The XOR affair and the AI winter!
A single perceptron is incapable of implementing the XOR function. In fact, there is no perceptron solution for any data that is not linearly separable, and no single line can separate the four XOR points in the input plane. But multi-layer perceptrons can easily implement such functions, which is why they are used so heavily in industry.
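To make this concrete, here is a minimal sketch showing how stacking perceptrons yields XOR, using the classic decomposition XOR(a, b) = AND(OR(a, b), NAND(a, b)); the gate weights below are illustrative values, not the only possible choice:

```python
import numpy as np

def step(x, w, b):
    """Single perceptron: 1 if w . x + b > 0, else 0."""
    return 1 if np.dot(w, x) + b > 0 else 0

def xor(a, b):
    """Two-layer perceptron network: XOR = AND(OR, NAND)."""
    x = np.array([a, b])
    h1 = step(x, np.array([1.0, 1.0]), -0.5)    # OR gate
    h2 = step(x, np.array([-1.0, -1.0]), 1.5)   # NAND gate
    return step(np.array([h1, h2]), np.array([1.0, 1.0]), -1.5)  # AND gate

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, xor(a, b))   # prints 0, 1, 1, 0
```

Each of OR, NAND, and AND is linearly separable, so a single perceptron handles each one; the hidden layer transforms the inputs into a space where the final perceptron can draw a single separating line.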
In his book Perceptrons, Minsky (together with Papert) showed that certain functions, such as XOR, cannot be represented by a single-layer perceptron. This result all but killed the field for decades. Later research on multi-layer perceptrons showed how such functions can be implemented, thus saving the field.
- Wikipedia