
Weight

In deep learning, weights are numbers that represent the importance of each input to a neuron.
They control how much influence an input has on the output of a neuron.


Example:

Imagine you are judging a cake based on:

- Taste
- Look
- Smell

You might feel that taste is the most important, followed by look, with smell mattering least.

So you could assign:

- Taste → Weight = 0.6
- Look → Weight = 0.3
- Smell → Weight = 0.1

Then, your overall score of the cake would be calculated as:

Score = (Taste Rating × 0.6) + (Look Rating × 0.3) + (Smell Rating × 0.1)

Thus, weights tell the model which input is more important in decision-making.
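
A minimal sketch of this calculation in Python (the ratings below are made-up numbers, purely for illustration):

```python
# Hypothetical ratings on a 0-10 scale (made-up numbers for illustration)
ratings = {"taste": 8.0, "look": 6.0, "smell": 7.0}
weights = {"taste": 0.6, "look": 0.3, "smell": 0.1}

# Weighted sum: each rating is multiplied by its importance (weight)
score = sum(ratings[k] * weights[k] for k in ratings)
print(score)  # 8.0*0.6 + 6.0*0.3 + 7.0*0.1 = 7.3
```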


In Deep Learning Neurons:

  1. Each neuron receives multiple inputs (features).
  2. Each input is multiplied by a weight.
  3. The weighted inputs are added up.
  4. The result is passed through an activation function to decide the output.

Mathematical View:

Output = Activation(w1 × x1 + w2 × x2 + … + wn × xn + b)

Where:

- x1, x2, …, xn are the input values (features)
- w1, w2, …, wn are the weights attached to each input
- b is the bias term
- Activation is the activation function (for example, ReLU or sigmoid)


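As a rough sketch (not any particular framework's API), a single neuron with three inputs might look like this in Python, using a sigmoid activation and made-up weight values:

```python
import math

def sigmoid(z):
    # Squashes the weighted sum into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    # Steps 2-3: multiply each input by its weight and add everything up
    weighted_sum = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Step 4: pass the result through an activation function
    return sigmoid(weighted_sum)

# Made-up example values
inputs = [0.5, 0.8, 0.2]   # x1, x2, x3
weights = [0.6, 0.3, 0.1]  # w1, w2, w3
bias = 0.05                # b

print(neuron(inputs, weights, bias))
```
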
Why are weights important?

| Role | Explanation |
| --- | --- |
| Control importance | Higher weight → more important input |
| Learn from data | The model adjusts weights during training |
| Prediction accuracy | Good weights help the model predict better |

Key points:

- Weights are not set by hand; the model learns them from data during training.
- A higher weight means that input has more influence on the neuron's output.
- Well-learned weights are what allow the model to make accurate predictions.


How Does a Deep Learning Model Decide Weights?


Simple Answer:

A deep learning model does not start with perfect weights.
It starts with random guesses for weights and learns better weights by looking at the errors it makes during predictions.

This learning process is done using algorithms like Gradient Descent.


Step-by-Step Layman Explanation

Step 1: Start with random weights

Step 2: Make predictions using these weights

Step 3: Check how wrong the model is (Calculate error)

Step 4: Adjust the weights to reduce the error

Step 5: Repeat the process many times (Epochs)
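
To make these five steps concrete, here is a minimal sketch in Python that learns a single weight for the made-up relationship y = 2 × x using gradient descent (the data, learning rate, and number of epochs are illustrative assumptions, not taken from any specific model):

```python
import random

# Made-up training data following y = 2 * x
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

# Step 1: start with a random weight
w = random.uniform(-1.0, 1.0)
learning_rate = 0.05

for epoch in range(100):                  # Step 5: repeat many times (epochs)
    for x, y_true in data:
        y_pred = w * x                    # Step 2: make a prediction
        error = y_pred - y_true           # Step 3: check how wrong it is
        gradient = 2 * error * x          # derivative of the squared error w.r.t. w
        w = w - learning_rate * gradient  # Step 4: adjust the weight

print(w)  # Should end up close to 2.0
```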


Mathematical View (Simplified)

\[\large \text{New Weight} = \text{Old Weight} - \alpha \times \frac{\partial \text{Error}}{\partial \text{Weight}}\]

Where:

- New Weight is the updated weight after one learning step
- Old Weight is the current weight
- α (alpha) is the learning rate, which controls how big each adjustment is
- ∂Error / ∂Weight is the gradient: how much the error changes when the weight changes

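For example, with made-up numbers: if the old weight is 0.50, the learning rate α is 0.1, and the gradient ∂Error / ∂Weight is 2.0, then:

\[\large \text{New Weight} = 0.50 - 0.1 \times 2.0 = 0.30\]

Because the gradient was positive, the weight moved downward, in the direction that reduces the error.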

Real-World Analogy

Learning to throw a basketball into a hoop:

  1. First try (Random weight): You throw blindly.
  2. Check the error: You see the ball missed the hoop.
  3. Adjust your angle and force (Weight): Based on how far you missed.
  4. Try again: Repeat until you start scoring baskets.

Deep learning models do the same. They adjust their ‘throw’ (weight) until the ‘ball’ (prediction) lands close to the ‘hoop’ (actual label).


Summary Table

| Step | What happens? |
| --- | --- |
| Initialize | Random weights are assigned |
| Predict | The model makes predictions using the current weights |
| Calculate Error | Predictions are compared with the actual output (loss) |
| Update Weights | Gradient Descent adjusts the weights to reduce the error |
| Repeat | The whole process runs over many passes (epochs) |