Gradient Descent Simulator
Visualize how models find optimal parameters through iterative optimization
What is Gradient Descent?
Gradient descent is the workhorse of machine learning - the optimization algorithm that helps models learn by iteratively adjusting parameters to minimize loss. Think of it as hiking down a mountain in the fog, taking steps in the steepest downward direction.
🏔️ The Mountain Analogy
📍
Current Position
Your current parameter values
Weight = 5.0
🧭
Gradient
Direction of steepest ascent
Slope = +2.0 (upward)
👣
Step
Move opposite to gradient
New weight = 5.0 - 0.1 × 2.0 = 4.8
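The single update step in the cards above can be checked with a quick calculation (a minimal sketch; the weight, gradient, and learning rate are the example values from the cards):

```python
# One gradient-descent step for a single scalar weight.
weight = 5.0         # current position on the "mountain"
gradient = 2.0       # slope at the current position (points uphill)
learning_rate = 0.1  # step size

# Move opposite the gradient to go downhill.
new_weight = weight - learning_rate * gradient
print(new_weight)  # 4.8
```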
🔁 The Algorithm
1
Initialize Parameters
Start with random weights
θ = random values
2
Compute Loss
Calculate prediction error
L = loss(y, ŷ)
3
Calculate Gradient
Find direction to reduce loss
∇L = ∂L/∂θ
4
Update Parameters
Take step in negative gradient direction
θ = θ - α·∇L
5
Repeat
Continue until convergence
Loop until |∇L| ≈ 0
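The five steps above can be written as a short loop. This is a minimal sketch under assumed example values: the quadratic loss L(θ) = (θ - 3)², its gradient 2(θ - 3), and the learning rate α = 0.1 are illustrative choices, not part of the module.

```python
import random

def grad(theta):
    # Gradient of the example loss L(theta) = (theta - 3)**2.
    return 2.0 * (theta - 3.0)

alpha = 0.1                       # learning rate (step size)
theta = random.uniform(-10, 10)   # step 1: initialize with a random value

for _ in range(1000):             # step 5: repeat until convergence
    g = grad(theta)               # steps 2-3: gradient of the loss
    if abs(g) < 1e-8:             # convergence check: |grad L| ~ 0
        break
    theta = theta - alpha * g     # step 4: move against the gradient

print(round(theta, 4))  # 3.0, the minimizer of the example loss
```

With α = 0.1 the distance to the minimum shrinks by a factor of 0.8 every iteration, so convergence is fast here; a larger α could overshoot and diverge.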
💡 Key Concepts
Gradient (∇)
Vector of partial derivatives pointing in the direction of steepest ascent
Learning Rate (α)
Step size: how far to move against the gradient on each iteration
Convergence
When the gradient approaches zero, we've reached a minimum (or at least a stationary point)
Local vs Global Minimum
Gradient descent finds a local minimum, which is not always the globally best solution
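A small experiment illustrates the last point. The function f(x) = (x² - 1)² + 0.3x is an assumed example with two minima: starting points in different basins converge to different answers, and only one of them is the global minimum.

```python
def f(x):
    # Example loss with two minima, near x = -1 (global) and x = +1 (local).
    return (x**2 - 1)**2 + 0.3 * x

def grad(x):
    # Derivative of f.
    return 4 * x * (x**2 - 1) + 0.3

def descend(x, lr=0.01, steps=5000):
    # Plain gradient descent from a given starting point.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

left = descend(-2.0)   # lands near x = -1: the global minimum of f
right = descend(+2.0)  # lands near x = +1: a higher, local minimum
print(f(left) < f(right))  # True: same algorithm, different minima
```

Which minimum gradient descent finds depends entirely on where it starts; that sensitivity to initialization is why local minima matter in practice.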