Learn AI

AI Concepts Workshop

© 2026 Cloudy Software Ltd

The Perceptron

The atomic building block of modern Neural Networks.

[Interactive perceptron diagram]
Inputs: X1 = 0.8, X2 = -0.4
Weights: W1 = 0.5, W2 = 1.2
Bias: -0.5
Summation: Σ wi·xi + b = (0.8 × 0.5) + (-0.4 × 1.2) + (-0.5) ≈ -0.6
Activation: f(Σ wi·xi + b)
Binary output: 0

Neuron Controls (interactive sliders)

Input Signals (X): Signal 1 = 0.8, Signal 2 = -0.4
Weights (W): Weight 1 = 0.5, Weight 2 = 1.2
Threshold (Bias): Adjust Bias = -0.5
The Bias acts as a threshold. A high negative bias makes it harder for the neuron to fire.
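This threshold effect is easy to check with the demo's own numbers. A minimal sketch (not from the workshop's code): the same inputs and weights, with only the bias changed.

```python
# Sketch: how the bias shifts the firing threshold of a single neuron.
def fires(x1, x2, w1, w2, bias):
    """Return 1 if the weighted sum plus bias is positive, else 0."""
    return 1 if (x1 * w1 + x2 * w2 + bias) > 0 else 0

# Demo values: X1=0.8, X2=-0.4, W1=0.5, W2=1.2.
print(fires(0.8, -0.4, 0.5, 1.2, -0.5))  # bias -0.5 -> sum -0.58, does not fire: 0
print(fires(0.8, -0.4, 0.5, 1.2, 0.5))   # bias +0.5 -> sum  0.42, fires: 1
```

With the bias at -0.5 the weighted sum stays negative and the neuron is silent; raising the bias to +0.5 pushes the same inputs over the threshold.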
How it works

A perceptron mimics a biological neuron. It takes inputs, multiplies them by Weights (importance), adds a Bias (threshold adjustment), and fires if the result is positive.
Neural networks are just layers of these nodes working in parallel.
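The whole pipeline above (weight, sum, add bias, fire) fits in a few lines. A minimal sketch using the demo's values; the function name and step activation are illustrative choices, not the workshop's code:

```python
# Minimal perceptron: weighted sum of inputs plus bias, then a step activation.
def perceptron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0  # "fires" only when the result is positive

# The demo's values: inputs [0.8, -0.4], weights [0.5, 1.2], bias -0.5.
print(perceptron([0.8, -0.4], [0.5, 1.2], -0.5))  # -> 0 (sum ≈ -0.58, below threshold)
```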

Founder's Note

LLM training is essentially the process of adjusting 175+ billion of these weights and biases until the network "fires" in a way that generates coherent human language.

A Single Neuron

A neural network is just millions of these simple math units connected together. Each one weights its inputs and decides whether to "fire" its signal forward.
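"Connected together" can be sketched in code, too. In this toy example (the weights and biases are made up for illustration), two neurons read the same inputs in parallel, and their outputs become the inputs of the next layer:

```python
# Sketch of layered neurons: a layer is several neurons reading the same inputs,
# and one layer's outputs feed the next layer forward.
def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

def layer(inputs, weight_rows, biases):
    """One neuron per (weights, bias) pair, all reading the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

hidden = layer([0.8, -0.4], [[0.5, 1.2], [-1.0, 0.3]], [-0.5, 1.0])  # -> [0, 1]
output = layer(hidden, [[1.0, 1.0]], [-0.5])                          # -> [1]
```

Stacking more of these layers, with far more neurons per layer, is all a deep network is.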

Founder Context

When someone says an LLM has billions of parameters, they mean it has billions of these little "weights" (Wi) that decide how thoughts flow through the digital brain.