How Perceptrons Work
An artificial neuron takes input values, multiplies each by a predetermined weight, then sums the results and adds a bias. This calculated value is then transformed into the final output through an activation function.
Expressed as a formula, it looks like this:
y = f(w₁x₁ + w₂x₂ + ... + wₙxₙ + b)
- y: Final output of the neuron
- f: Activation function
- w: Weights for each input value
- x: Input values
- b: Bias
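The formula above can be sketched in a few lines of plain Python. This is a minimal illustration, not a library implementation; the names `perceptron` and `activation` are chosen here for clarity.

```python
def perceptron(inputs, weights, bias, activation):
    # Weighted sum: w1*x1 + w2*x2 + ... + wn*xn + b
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    # The activation function f maps the sum to the final output y
    return activation(total)
```

For example, with two inputs, equal weights of 0.5, a bias of 1, and the identity function as the activation, `perceptron([1, 2], [0.5, 0.5], 1, lambda s: s)` returns 2.5.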
Understanding with a Simple Example
Let's consider a scenario where a perceptron makes the decision to "turn on the air conditioner if it's hot (Input1) and humid (Input2)".
Input Values
- Temperature: 86°F (Input1)
- Humidity: 90% (Input2)
Weights
- Weight for temperature: 0.7
- Weight for humidity: 0.3
Bias
- Bias value: -10
In this case, the perceptron calculates as follows:
(Temperature × Temperature Weight) + (Humidity × Humidity Weight) + Bias
= (86 × 0.7) + (90 × 0.3) + (-10)
= 60.2 + 27 - 10
= 77.2
Since the result does not exceed a set threshold (e.g., 98), the activation function decides to "keep the air conditioner off (Output=0)". If the result had exceeded the threshold, the decision would be "turn on the air conditioner (Output=1)".
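The air-conditioner example can be sketched as follows, assuming a simple step function as the activation with the threshold of 98 mentioned above (the variable names are illustrative).

```python
def step(value, threshold=98):
    # Step activation: output 1 (turn AC on) if the weighted sum
    # exceeds the threshold, otherwise 0 (keep AC off)
    return 1 if value > threshold else 0

temperature, humidity = 86, 90        # input values (Input1, Input2)
w_temp, w_hum, bias = 0.7, 0.3, -10   # weights and bias from the example

weighted_sum = temperature * w_temp + humidity * w_hum + bias
print(round(weighted_sum, 1))  # 77.2
print(step(weighted_sum))      # 0 -> keep the air conditioner off
```

Because 77.2 is below the threshold of 98, the step function outputs 0, matching the decision described above.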
Thus, the perceptron can make simple decisions by combining input values, weights, and biases.
When multiple perceptrons are connected, they form a neural network, and when many such layers are stacked into a deep structure, the result is known as Deep Learning.
In the next lesson, we will delve deeper into Deep Learning.
Want to learn more?
Join CodeFriends Plus membership or enroll in a course to start your journey.