Linear Regression with Gradient Descent

A Python implementation of Linear Regression using gradient descent optimization from scratch with NumPy.

Overview

This project implements a linear regression model that learns the optimal parameters (weights and bias) via gradient descent. The implementation includes:

  • Custom LinearRegression class with gradient descent optimization
  • Synthetic data generation for testing
  • Loss tracking during training
  • Visualization of training progress

Requirements

  • Python 3.7+
  • NumPy >= 1.21.0
  • Matplotlib >= 3.4.0

Installation

  1. Clone or download this repository
  2. Install the required dependencies:
pip install -r requirements.txt

Usage

Running the Script

Simply run the Python script from the command line:

python Bahareh_Moradi_regression.py

This will:

  1. Generate synthetic data with known parameters (a sketch of the idea follows this list)
  2. Train a linear regression model
  3. Display the true and predicted parameters
  4. Plot the loss history during training
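
The exact generation code lives in the script, but a minimal sketch of the idea, assuming standard-normal features and unit-variance Gaussian noise (the parameter values below are taken from the example output further down; the script's actual noise scale may differ):

import numpy as np

# True parameters (matching the example output in this README)
true_bias = 4.7
true_weights = np.array([13.0, 6.7, -3.5])

# Standard-normal features; targets are a noisy linear combination
X = np.random.randn(10000, 3)
noise = np.random.randn(10000)  # assumed unit-variance noise
y = X @ true_weights + true_bias + noise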

Using the Jupyter Notebook

Alternatively, you can use the interactive Jupyter notebook:

jupyter notebook Bahareh_Moradi_regression.ipynb

Using the LinearRegression Class

from Bahareh_Moradi_regression import LinearRegression
import numpy as np

# Create training data
X = np.random.randn(1000, 3)
y = 2 * X[:, 0] + 3 * X[:, 1] - X[:, 2] + 5

# Initialize and train the model
model = LinearRegression(learning_rate=5e-3)
model.fit(X, y, epochs=500)

# Make predictions
predictions = model.predict(X)

# Access learned parameters
print(f"Weights: {model.w}")
print(f"Bias: {model.bias}")
print(f"Final loss: {model.loss_history[-1]}")

Features

  • Gradient Descent Optimization: Implements batch gradient descent for parameter updates (see the sketch after this list)
  • Mean Squared Error Loss: Uses MSE as the loss function
  • Training History: Tracks loss values throughout training
  • Type Hints: Fully typed code for better IDE support
  • Documentation: Comprehensive docstrings for all methods
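
The class's internals are documented in its docstrings; as a rough sketch (the names here are illustrative, not necessarily those used in the class), one batch gradient descent update for MSE looks like this:

import numpy as np

def gradient_descent_step(X, y, w, b, learning_rate):
    """One illustrative batch gradient descent update for MSE."""
    n = len(y)
    y_pred = X @ w + b                 # forward pass over the full batch
    error = y_pred - y                 # residuals
    grad_w = (2 / n) * (X.T @ error)   # dLoss/dw
    grad_b = (2 / n) * error.sum()     # dLoss/db
    w = w - learning_rate * grad_w     # step against the gradient
    b = b - learning_rate * grad_b
    loss = np.mean(error ** 2)         # MSE value tracked per epoch
    return w, b, loss

Repeating this step once per epoch and recording the returned loss is what produces the loss history that the script plots.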

Implementation Details

The linear regression model follows the equation:

y = w₁x₁ + w₂x₂ + ... + wₙxₙ + b

Where:

  • w = weight coefficients
  • b = bias term
  • x = input features
  • y = predicted output

The model minimizes the Mean Squared Error (MSE) loss function using gradient descent:

Loss = (1/n) Σ(y_pred - y_true)²
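
Differentiating the loss with respect to each parameter gives the directions used for the updates (n is the number of samples):

∂Loss/∂wⱼ = (2/n) Σ(y_pred - y_true)·xⱼ
∂Loss/∂b = (2/n) Σ(y_pred - y_true)

Each epoch, gradient descent moves the parameters a small step (scaled by the learning rate) opposite these gradients.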

Project Structure

HW2/
├── Bahareh_Moradi_regression.py      # Main Python script
├── Bahareh_Moradi_regression.ipynb   # Jupyter notebook
├── requirements.txt                   # Package dependencies
└── README.md                          # This file

Example Output

============================================================
Linear Regression with Gradient Descent
============================================================

True parameters:
  Bias: 4.7
  Weights: [13.   6.7 -3.5]

Generating synthetic data...
  Sample size: 10000
  Number of features: 3

Training model...

Predicted parameters:
  Bias: 4.7021
  Weights: [12.99876543  6.69923456 -3.50012345]

Final loss: 0.998765

Plotting loss history...
Done!

License

This project was created for educational purposes.

Contact

For questions or feedback, please contact me.
