📉 Gradient Descent Visualizer
Class 11–12 · Optimization
[Interactive visualizer: a loss surface showing the current position, the global minimum, and the path taken. Side panels let you choose the algorithm, set hyperparameters, and watch live stats (iteration, loss, x, y, converged yes/no). Click on the surface to set the start position.]

[Charts: loss curve and gradient magnitude over iterations · Step log (populated once Run is pressed).]
How it works:
Gradient descent iteratively updates the parameters θ by stepping opposite to the gradient of the loss, scaled by the learning rate α:

θ = θ − α · ∇L(θ)
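This update rule can be sketched in a few lines of Python. The bowl-shaped loss L(x, y) = x² + y² below is an illustrative assumption (the visualizer's actual surface may differ); the loop mirrors what the visualizer animates: compute the gradient, check the gradient magnitude for convergence, then step opposite to the gradient.

```python
def loss(x, y):
    # Illustrative bowl-shaped loss surface (assumed, not the app's exact surface)
    return x**2 + y**2

def gradient(x, y):
    # Analytic gradient of the bowl loss: dL/dx = 2x, dL/dy = 2y
    return 2 * x, 2 * y

def gradient_descent(x, y, alpha=0.1, tol=1e-6, max_iters=1000):
    path = [(x, y)]                      # the "path taken" shown on the surface
    for _ in range(max_iters):
        gx, gy = gradient(x, y)
        # Converged when the gradient magnitude is below the tolerance
        if (gx**2 + gy**2) ** 0.5 < tol:
            break
        # The core update: theta = theta - alpha * grad L(theta)
        x -= alpha * gx
        y -= alpha * gy
        path.append((x, y))
    return x, y, path

x_min, y_min, path = gradient_descent(2.0, -1.5)
print(f"minimum near ({x_min:.4f}, {y_min:.4f}) after {len(path) - 1} steps")
```

Tuning α reproduces the behaviour the hyperparameter panel demonstrates: a small α converges slowly, while a large α (here, above 1.0 for this loss) overshoots and diverges.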

CBSE Connection: Class 11–12 AI — understanding optimization in neural network training, loss functions, and hyperparameter tuning.