Building a Neural Network Template in Python — Project Setup and Core Abstractions

(Neural Networks From Scratch · Article 1)

Who this series is for

This series is written for beginners who want to understand neural networks by building one step by step in Python, instead of treating them as black boxes.

You do not need:

  • prior machine learning experience
  • PyTorch, TensorFlow, or Keras
  • advanced linear algebra

You do need:

  • basic Python knowledge (functions, classes, lists)
  • curiosity about how neural networks actually work

By the end of this series, you will have:

  • a reusable Neural Network template
  • a clean Python class-based design
  • real examples using the template in production-style scenarios

What we are building (big picture)

Our end goal is a small but real neural network framework written in Python.

Not a toy script.
Not a single notebook.
But a template you can reuse, extend, and reason about.

The final usage will look like this:

Python
net = NeuralNet(
    layers=[
        Dense(2, 16),
        ReLU(),
        Dense(16, 1)
    ],
    loss=MSE(),
    optimizer=SGD(lr=0.01)
)
net.fit(X, y, epochs=100)
predictions = net.predict(X)

But we are not starting there.

We start with the foundation.

Why we use classes (and not one big script)

Many “from scratch” neural network tutorials put everything into one file:

  • forward pass
  • backward pass
  • loss
  • optimizer
  • training loop

That works for learning once, but it breaks down fast.

We want:

  • readable code
  • clear responsibilities
  • the ability to add features later (Dropout, Adam, saving models)

That means classes with clear roles.

The core idea: everything is a layer

At the heart of almost every neural network framework is one simple rule:

Every component behaves like a layer.

A layer:

  • receives input
  • produces output
  • (later) knows how to compute gradients

For now, we only care about one method:

Python
forward(x)

Backward passes will come later.

Project structure (important!)

Before writing logic, we define a clean structure.

nn_template/
├── nn/
│   ├── __init__.py
│   ├── core.py
│   └── layers.py
├── examples/
│   └── 01_forward_only.py
└── README.md

Why this matters:

  • beginners learn where code belongs
  • logic stays separated
  • the project can grow without chaos
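One note on nn/__init__.py: it can stay empty, since its only job is to mark nn/ as a package. If you like, it can also re-export the public classes so callers can write from nn import NeuralNet. A possible (entirely optional) version:

Python
# nn/__init__.py: optional convenience re-exports; an empty file works too
from .core import NeuralNet
from .layers import Layer, Dense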

Step 1: Define a base Layer

Every layer in our system will follow the same interface.

Create nn/layers.py:

Python
class Layer:
    """
    Base class for all layers.
    """

    def forward(self, x):
        """
        Computes the forward pass.
        Must be implemented by subclasses.
        """
        raise NotImplementedError("forward() not implemented")

That’s it.

This is not doing anything yet — and that is intentional.

We are defining a contract:

“If you want to be a layer, you must implement forward().”
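To see the contract in action, here is a throwaway sketch (the Identity class is purely illustrative, not part of the template): a subclass that implements forward() works, while the bare base class refuses to run.

Python
class Identity(Layer):
    def forward(self, x):
        # Fulfills the contract: takes input, returns output.
        return x

print(Identity().forward([1.0, 2.0]))  # [1.0, 2.0]

try:
    Layer().forward([1.0, 2.0])
except NotImplementedError as e:
    print("Base Layer refuses to run:", e)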

Step 2: A first real layer — Dense

A Dense (fully connected) layer:

  • takes a vector as input
  • multiplies it by weights
  • adds a bias

Create this in nn/layers.py:

Python
import random


class Dense(Layer):
    def __init__(self, input_size, output_size):
        # One row of weights per output neuron, initialized to small random values.
        self.weights = [
            [random.uniform(-0.1, 0.1) for _ in range(input_size)]
            for _ in range(output_size)
        ]
        self.bias = [0.0 for _ in range(output_size)]

    def forward(self, x):
        output = []
        for neuron_weights, neuron_bias in zip(self.weights, self.bias):
            # Weighted sum of the inputs, plus this neuron's bias.
            value = 0.0
            for w, xi in zip(neuron_weights, x):
                value += w * xi
            value += neuron_bias
            output.append(value)
        return output

No NumPy.
No magic.
Just Python and loops.

This is deliberate so beginners can follow every line.
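As a quick sanity check, you can hand-set the weights and verify the arithmetic yourself (the values below are made up for illustration, assuming the Dense class above):

Python
layer = Dense(2, 1)
layer.weights = [[0.5, -1.0]]  # one neuron with two input weights
layer.bias = [0.25]

# 0.5 * 1.0 + (-1.0) * 2.0 + 0.25 = -1.25
print(layer.forward([1.0, 2.0]))  # [-1.25]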

Step 3: The NeuralNet container

Now we need something that:

  • holds layers
  • sends data through them in order

Create nn/core.py:

Python
class NeuralNet:
    def __init__(self, layers):
        self.layers = layers

    def forward(self, x):
        for layer in self.layers:
            x = layer.forward(x)
        return x

That is the entire forward pipeline.

A neural network is just:

input → layer → layer → layer → output
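In other words, for a three-layer network the container does nothing more than this nesting (l1, l2, l3 are placeholder names):

Python
# net.forward(x) with layers [l1, l2, l3] is just:
output = l3.forward(l2.forward(l1.forward(x)))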

Step 4: First runnable example (no training yet)

Create examples/01_forward_only.py:

Python
from nn.layers import Dense
from nn.core import NeuralNet

# Simple network: 2 inputs → 3 neurons → 1 output
net = NeuralNet([
    Dense(2, 3),
    Dense(3, 1)
])

x = [1.0, 2.0]
output = net.forward(x)
print("Output:", output)

Run it from the project root, making sure Python can find the nn package:

PYTHONPATH=. python examples/01_forward_only.py

(Running python examples/01_forward_only.py without setting PYTHONPATH fails with ModuleNotFoundError, because Python then searches for nn inside examples/ instead of the project root.)

You should see something like:

Output: [0.0342]

The number itself does not matter yet.

What matters is:

  • the pipeline works
  • layers are chained
  • nothing feels mysterious

What we achieved (important checkpoint)

At the end of Article 1, you now have:

  • a clean project structure
  • a base Layer abstraction
  • a real Dense layer
  • a NeuralNet container
  • a working forward pass

No training.
No gradients.
No math overload.

Just structure.

Common beginner mistakes (and why we avoided them)

“Why not put everything in one class?”
Because neural networks grow. Separation keeps sanity.

“Why not NumPy already?”
Because understanding beats speed at this stage.

“Why random weights?”
Because identical weights break learning later.
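Here is a quick way to see the problem, using the Dense layer from Step 2: force every neuron to start with identical weights and they all compute exactly the same thing. Once training arrives, identical neurons receive identical updates and never become different from each other.

Python
layer = Dense(2, 3)
# Force identical weights for every neuron (exactly what we want to avoid):
layer.weights = [[0.5, 0.25] for _ in range(3)]

print(layer.forward([1.0, 2.0]))  # [1.0, 1.0, 1.0]: three neurons, one behavior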

What comes next (Article 2)

In the next article we will answer:

How does the network know what to change?

That means:

  • introducing loss functions
  • understanding “error”
  • computing a simple loss (MSE)
  • still no backpropagation yet

You will see what the network is doing wrong before we teach it how to fix it.
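If you want a head start, mean squared error is small enough to preview right now. This is only a sketch; Article 2 builds it properly as a class that fits our layer-style design:

Python
# Preview: mean squared error over lists of predictions and targets.
def mse(y_pred, y_true):
    return sum((p - t) ** 2 for p, t in zip(y_pred, y_true)) / len(y_true)

print(mse([0.5, 1.0], [1.0, 1.0]))  # 0.125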

Series status

The Python source code for this series is available on GitHub: https://github.com/Benard-Kemp/Building-a-Neural-Network-Template-in-Python