An interactive visualization tool that demonstrates backpropagation in a simple 2-layer neural network. It shows the forward and backward passes of computation, along with real-time parameter updates, making it easier to understand how neural networks learn.
- Interactive visualization of a 2-layer neural network
- Real-time parameter adjustment through sliders
- Live computation of forward and backward passes
- Visualization of gradients and activations
- Adjustable learning rate
- Reset parameters functionality
- Step-by-step training capability
The neural network consists of:
- Input layer (1 neuron)
- Hidden layer (2 neurons)
- Output layer (1 neuron)
- Sigmoid activation function
- Mean Squared Error (MSE) loss function
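Concretely, the 1-2-1 architecture has 7 trainable parameters. A minimal sketch of the parameter layout (variable names here are illustrative, not necessarily the tool's actual code):

```python
import numpy as np

# The 1-2-1 architecture: 2 weights + 2 biases in the hidden layer,
# 2 weights + 1 bias in the output layer = 7 trainable parameters.
params = {
    "w1": np.zeros(2),  # input -> hidden weights
    "b1": np.zeros(2),  # hidden-layer biases
    "w2": np.zeros(2),  # hidden -> output weights
    "b2": 0.0,          # output bias
}

n_params = sum(np.size(v) for v in params.values())
print(n_params)  # 7
```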
- Python 3.x
- NumPy
- Tkinter (bundled with most Python installations)
- Clone this repository:
  ```shell
  git clone https://github.com/yourusername/backpropagation.git
  cd backpropagation
  ```
- Install the required dependencies:
  ```shell
  pip install numpy
  ```
Run the visualization tool:
```shell
python backprop.py
```
- Input x: Set the input value (default: 0.5)
- Target y: Set the target output value (default: 0.8)
- LR (α): Set the learning rate for training
- Reset Params: Randomize all network parameters
- Train (1 Step): Perform one step of gradient descent
- Adjust network parameters (weights and biases) using sliders
- View real-time updates of:
- Network activations
- Parameter values
- Gradients
- Loss value
- Predicted output
Forward Pass:
- The input is multiplied by the layer weights and the biases are added
- The sigmoid activation is applied
- The final output is computed
- The loss (MSE against the target) is calculated
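The forward-pass steps above can be sketched as follows (the parameter values and variable names are illustrative; in the tool they come from the sliders):

```python
import numpy as np

def sigmoid(z):
    """Sigmoid activation, applied element-wise."""
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative parameter values for the 1-2-1 network.
w1, b1 = np.array([0.4, -0.3]), np.array([0.1, 0.2])  # input -> hidden
w2, b2 = np.array([0.7, -0.5]), 0.05                  # hidden -> output
x, y = 0.5, 0.8                                       # UI defaults

z1 = w1 * x + b1              # input times weights, plus biases
h = sigmoid(z1)               # hidden activations
y_hat = sigmoid(w2 @ h + b2)  # final output
loss = (y_hat - y) ** 2       # squared error against the target
```

Because the output neuron is also a sigmoid, `y_hat` always lies in (0, 1), which is why the default target of 0.8 is reachable.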
Backward Pass:
- Gradients are computed for all parameters
- Parameters are updated using gradient descent
- Updates are reflected in the visualization
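The backward pass applies the chain rule layer by layer, using the identity sigmoid'(z) = s(z)(1 - s(z)). A self-contained sketch of one gradient-descent step for this 1-2-1 network (names and initialization are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative random parameters, as after "Reset Params".
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=2), rng.normal(size=2)  # input -> hidden
w2, b2 = rng.normal(size=2), rng.normal()        # hidden -> output
x, y, lr = 0.5, 0.8, 0.1

# Forward pass (intermediate values are reused by the backward pass).
h = sigmoid(w1 * x + b1)
y_hat = sigmoid(w2 @ h + b2)
loss = (y_hat - y) ** 2

# Backward pass: chain rule from the loss back to each parameter.
d_yhat = 2 * (y_hat - y)             # dL/dy_hat
d_z2 = d_yhat * y_hat * (1 - y_hat)  # through the output sigmoid
grad_w2 = d_z2 * h
grad_b2 = d_z2
d_h = d_z2 * w2                      # propagate to hidden activations
d_z1 = d_h * h * (1 - h)             # through the hidden sigmoids
grad_w1 = d_z1 * x
grad_b1 = d_z1

# Gradient-descent update ("Train (1 Step)").
w1 -= lr * grad_w1; b1 -= lr * grad_b1
w2 -= lr * grad_w2; b2 -= lr * grad_b2
```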
Feel free to submit issues and enhancement requests!
This project is open source and available under the MIT License.