Day 3: McCulloch-Pitts Neurons. World's First Neural Networks.

I uncover the maths behind McCulloch-Pitts Neuron and how it works with Logical functions.

By Nandeshwar

Jun 03, 2021

Day 3.png


In the previous article, we went through the theoretical background of neurons. In this blog post, we will understand the McCulloch-Pitts neuron and the maths behind it. The related paper was published in 1943 and has been an important building block of neural networks ever since.

McCulloch-Pitts Neuron (M-P Neuron)

One of the earliest formal models of a neuron is the McCulloch-Pitts neural network, proposed in 1943. Although this work was developed in the early forties, many of its principles can still be seen in neural networks today.

mccollouch_and_pitts.png

Assumptions of the McCulloch-Pitts neuron:

  1. The neuron is divided into two parts: an aggregation part g and a decision part f. The first part, g, takes the inputs and aggregates them; based on that value, the second part, f, decides whether the neuron fires.
  2. Inputs can be either excitatory or inhibitory.
    1. An inhibitory input has an overriding effect, irrespective of the other inputs: if any inhibitory input is active, the neuron will not fire.
    2. Excitatory inputs do NOT make the neuron fire on their own, but their aggregate can cross the threshold together.
  3. Inputs are binary: $ x_{i} $ such that $ x_{i} $ ∈ {0, 1}.
  4. The output is binary: y ∈ {0, 1}, where y = 1 indicates that the neuron fires and y = 0 that it is at rest.
  5. $ \theta $ is called the thresholding parameter. The aggregated value computed by the first part g must reach this threshold for the neuron to fire.

The output y of a McCulloch-Pitts network can then be defined as follows.

Case 1

y = 0 if any inhibitory input $ x_{i} $ is 1

Case 2

$$
\begin{aligned}
g\left(x_{1}, x_{2}, x_{3}, \ldots, x_{n}\right) &=g(\mathbf{x})=\sum_{i=1}^{n} x_{i} \\
y=f(g(\mathbf{x})) &=1 \quad \text { if } \quad g(\mathbf{x}) \geq \theta \\
&=0 \quad \text { if } \quad g(\mathbf{x})<\theta \\
\end{aligned}
$$
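The two-stage definition above can be sketched directly in code. This is a minimal illustration (the function name and argument layout are my own, not from the 1943 paper):

```python
def mp_neuron(inputs, inhibitory, theta):
    """McCulloch-Pitts neuron: binary inputs, binary output.

    inputs:     list of 0/1 values
    inhibitory: list of booleans marking which inputs are inhibitory
    theta:      thresholding parameter
    """
    # Case 1: any active inhibitory input suppresses the neuron outright
    if any(x == 1 and inh for x, inh in zip(inputs, inhibitory)):
        return 0
    # Case 2: aggregate the inputs (g), then threshold (f)
    g = sum(inputs)
    return 1 if g >= theta else 0
```

For example, `mp_neuron([1, 1], [False, False], 2)` fires (g = 2 ≥ θ), while the same inputs with one marked inhibitory never fire.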


Boolean functions using the McCulloch-Pitts Neuron

The McCulloch-Pitts neuron can in turn be used to implement a variety of Boolean functions; keep in mind that it can only produce binary output.

1. Logical AND Function

100-Days-of-Deep-Learning-drawio-diagrams-net.png

An AND function neuron fires only when all the inputs are ON, that is, $g(\mathbf{x}) \geq 2$. In this case, both inputs x1 and x2 are excitatory.

Possible cases -

  1. x1 = 0 & x2 = 0 ; x1 + x2 = 0
  2. x1 = 0 & x2 = 1 ; x1 + x2 = 1
  3. x1 = 1 & x2 = 0 ; x1 + x2 = 1
  4. x1 = 1 & x2 = 1 ; x1 + x2 = 2

As $ \theta = 2 $ in this case, only case 4 fires the neuron; in cases 1, 2, and 3, g(x) is 0, 1, and 1 respectively, all of which are less than $ \theta $.


Hence we can create a Logical AND function with M-P Neuron.
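The four cases above can be verified with a small sketch (the function name and default θ = 2 are my own illustration):

```python
def and_neuron(x1, x2, theta=2):
    # Both inputs are excitatory; fire only when the sum reaches theta
    return 1 if x1 + x2 >= theta else 0

# Walk through the four possible cases
for x1 in (0, 1):
    for x2 in (0, 1):
        print(f"x1={x1}, x2={x2} -> y={and_neuron(x1, x2)}")
```

Only the last case, x1 = x2 = 1, produces y = 1, matching the AND truth table.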

2. Logical OR Function

100-Days-of-Deep-Learning-drawio-diagrams-net (1).png

An OR function neuron fires if any of the inputs is ON, that is, $g(\mathbf{x}) \geq 1$. In this case, both inputs x1 and x2 are excitatory.

Possible cases -

  1. x1 = 0 & x2 = 0 ; x1 + x2 = 0
  2. x1 = 0 & x2 = 1 ; x1 + x2 = 1
  3. x1 = 1 & x2 = 0 ; x1 + x2 = 1
  4. x1 = 1 & x2 = 1 ; x1 + x2 = 2

As $ \theta = 1 $ in this case, case 1 gives g(x) = 0, which is below the threshold, so the neuron does not fire. In the remaining cases, $ g(\mathbf{x}) \geq \theta $, so the neuron fires.

Hence we can create a Logical OR function with M-P Neuron.
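The same check works for OR; only the threshold changes (again, the function name and default θ = 1 are my own illustration):

```python
def or_neuron(x1, x2, theta=1):
    # Both inputs are excitatory; a single active input is enough to fire
    return 1 if x1 + x2 >= theta else 0

# Walk through the four possible cases
for x1 in (0, 1):
    for x2 in (0, 1):
        print(f"x1={x1}, x2={x2} -> y={or_neuron(x1, x2)}")
```

Every case except x1 = x2 = 0 produces y = 1, matching the OR truth table. Note that AND and OR differ only in θ, not in structure.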

3. Logical NOR Function

100-Days-of-Deep-Learning-drawio-diagrams-net2.png

A NOR function neuron fires only when both inputs are OFF; here $\theta = 0$, and both inputs x1 and x2 are inhibitory.

Possible cases -

  1. x1 = 0 & x2 = 0 ; x1 + x2 = 0
  2. x1 = 0 & x2 = 1 ; x1 + x2 = 1 (an inhibitory input is 1, so the neuron won't fire)
  3. x1 = 1 & x2 = 0 ; x1 + x2 = 1 (an inhibitory input is 1, so the neuron won't fire)
  4. x1 = 1 & x2 = 1 ; x1 + x2 = 2 (an inhibitory input is 1, so the neuron won't fire)

As $ \theta = 0 $ in this case, case 1 gives g(x) = 0, which satisfies $ g(\mathbf{x}) \geq \theta $, so the neuron fires. In the remaining cases, an active inhibitory input suppresses the neuron regardless of the sum.

Hence we can create a Logical NOR function with M-P Neuron.
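NOR shows the inhibitory rule in action; a sketch under the same naming assumptions as before:

```python
def nor_neuron(x1, x2, theta=0):
    # Both inputs are inhibitory: any active input suppresses the neuron
    if x1 == 1 or x2 == 1:
        return 0
    # Only case 1 reaches here: g(x) = 0 >= theta = 0, so the neuron fires
    return 1 if x1 + x2 >= theta else 0

# Walk through the four possible cases
for x1 in (0, 1):
    for x2 in (0, 1):
        print(f"x1={x1}, x2={x2} -> y={nor_neuron(x1, x2)}")
```

Only x1 = x2 = 0 produces y = 1, matching the NOR truth table.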

Limitations of the M-P Neuron

  1. M-P neurons cannot handle non-Boolean inputs, which is a major limitation.
  2. Similarly, the output can only be binary.
  3. They cannot compute linearly inseparable functions such as the logical XOR gate.
  4. Weights cannot be assigned to the inputs; every excitatory input contributes equally.
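The XOR limitation can be demonstrated exhaustively: for a two-input neuron with excitatory inputs, no choice of θ reproduces the XOR truth table (this small check is my own illustration, not from the original paper):

```python
def fires(x1, x2, theta):
    # Two excitatory inputs, single threshold
    return 1 if x1 + x2 >= theta else 0

xor_table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# Try every meaningful threshold; none matches XOR on all four cases
for theta in range(0, 4):
    matches = all(fires(x1, x2, theta) == y
                  for (x1, x2), y in xor_table.items())
    print(f"theta={theta}: matches XOR? {matches}")
```

Intuitively, XOR needs the neuron to fire for a sum of 1 but stay silent for sums of 0 and 2, which a single threshold on the sum cannot express.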

Tags

Deep Learning
Beginner
Maths