SolveWithPython

Forward Propagation in Neural Networks — How Data Flows Through the Network

Up to now, we have built all the individual parts of a neural network:

  • A neuron that computes a weighted sum
  • Activation functions that introduce non-linearity
  • Dense layers composed of multiple neurons

What we have not done yet is connect these pieces into a complete system.

That connection process is called forward propagation.

Forward propagation answers one simple but fundamental question:

Given an input, how does a neural network produce an output?

This article walks through that process step by step, in pure Python, with no abstractions hidden.

What Is Forward Propagation?

Forward propagation is the act of:

  1. Taking an input vector
  2. Passing it through the first layer
  3. Taking that layer’s output as input to the next layer
  4. Repeating until the final output is produced

There is no learning here yet.
No optimization.
No gradients.

Forward propagation is just computation.
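
In loop form, the whole idea fits in a few lines. This is only a sketch, where each layer is assumed to be a plain function; the real implementation, built on our dense layer, comes in Step 2:

Python
# Sketch only: each "layer" here is assumed to be a plain Python function
def forward(x, layers):
    for layer in layers:
        x = layer(x)  # this layer's output becomes the next layer's input
    return x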

Conceptual View

A neural network is a chain of functions:

f(x) = f_L(f_{L-1}(\dots f_1(x)))

Each layer:

  • Receives a vector
  • Transforms it
  • Passes it forward

Learning later will simply adjust the parameters inside these functions.
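
In pure Python, that chain is nothing more than nested function calls. Here is a toy sketch, with two made-up transformations standing in for real layers:

Python
def f1(x):
    return x * 2      # first toy "layer"

def f2(x):
    return x + 1      # second toy "layer"

def f(x):
    return f2(f1(x))  # the network is just the composition

print(f(3))  # -> 7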

Step 1: Recap — Our Dense Layer

From Article #3, we have our neuron and dense layer functions:

Python
def neuron_output(inputs, weights, bias):
    # Weighted sum of the inputs plus the bias (the "z" value of one neuron)
    total = 0.0
    for x, w in zip(inputs, weights):
        total += x * w
    total += bias
    return total


def dense_layer(inputs, weights_list, bias_list, activation):
    # One output per neuron: z = weighted sum, a = activation(z)
    outputs = []
    for weights, bias in zip(weights_list, bias_list):
        z = neuron_output(inputs, weights, bias)
        a = activation(z)
        outputs.append(a)
    return outputs

This function maps \mathbb{R}^n \rightarrow \mathbb{R}^k: it takes n inputs and produces k outputs, one per neuron.
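
As a quick check of that mapping, here is a call with made-up weights (not the network we build below): two inputs go in, three outputs come out, one per neuron. The identity activation is used so the raw weighted sums are visible:

Python
example_weights = [
    [0.1, 0.2],
    [0.3, 0.4],
    [0.5, 0.6],
]
example_biases = [0.0, 0.0, 0.0]

# n = 2 inputs -> k = 3 outputs
print(dense_layer([1.0, 2.0], example_weights, example_biases, lambda z: z))
# -> [0.5, 1.1, 1.7]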

Step 2: Chaining Layers Together

Forward propagation is simply repeated application of dense_layer.

Let’s define a function that passes data through multiple layers.

Python
def forward_propagation(inputs, layers):
    # Each layer's output becomes the next layer's input
    current_output = inputs
    for layer in layers:
        weights_list, bias_list, activation = layer
        current_output = dense_layer(
            current_output,
            weights_list,
            bias_list,
            activation
        )
    return current_output

Each layer is represented as a tuple:

Python
(weights_list, bias_list, activation)

Step 3: Building a Small Network

Let’s define a simple network:

  • Input size: 2
  • Hidden layer: 3 neurons (ReLU)
  • Output layer: 1 neuron (Sigmoid)

Activation Functions

Python
import math
def relu(z):
    return max(0.0, z)


def sigmoid(z):
    return 1 / (1 + math.exp(-z))
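
A quick sanity check of both:

Python
print(relu(-2.0))    # -> 0.0 (negative values are clipped to zero)
print(relu(3.0))     # -> 3.0
print(sigmoid(0.0))  # -> 0.5 (sigmoid of zero is exactly one half)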

Network Definition

Python
layers = [
    # Hidden layer (3 neurons)
    (
        [
            [0.5, -1.0],
            [1.0, 1.0],
            [-0.5, 2.0]
        ],
        [0.0, 1.0, -1.0],
        relu
    ),
    # Output layer (1 neuron)
    (
        [
            [1.0, -1.0, 0.5]
        ],
        [0.0],
        sigmoid
    )
]

Step 4: Running Forward Propagation

Python
inputs = [1.0, 2.0]
output = forward_propagation(inputs, layers)
print(output)

The result is the network’s prediction for this input.

At this point:

  • All computations are deterministic
  • No learning has occurred
  • We are simply evaluating the network

What Just Happened?

Let’s break it down:

  1. The input vector goes into the hidden layer
  2. Each hidden neuron computes its activation
  3. The hidden layer produces a vector of 3 values
  4. That vector becomes input to the output layer
  5. The output layer produces the final result

This is exactly how large neural networks operate—just at a much larger scale.
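
Here are the actual numbers for inputs = [1.0, 2.0], traced by hand with the weights from Step 3:

  • Hidden neuron 1: 0.5*1.0 + (-1.0)*2.0 + 0.0 = -1.5 → ReLU → 0.0
  • Hidden neuron 2: 1.0*1.0 + 1.0*2.0 + 1.0 = 4.0 → ReLU → 4.0
  • Hidden neuron 3: -0.5*1.0 + 2.0*2.0 + (-1.0) = 2.5 → ReLU → 2.5
  • Output neuron: 1.0*0.0 + (-1.0)*4.0 + 0.5*2.5 + 0.0 = -2.75 → sigmoid → ≈ 0.060

So the print in Step 4 shows roughly [0.060].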

Why Forward Propagation Matters

Forward propagation is used:

  • During prediction (inference)
  • During training (to compute loss)
  • During debugging (to inspect activations)

If you do not understand forward propagation, backpropagation will feel impossible.

This is why we are building everything manually.
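
That debugging use case deserves a concrete sketch. The helper below is not part of the series code; it is just an assumed variant of forward_propagation that records every layer's output so you can inspect the activations:

Python
def forward_propagation_with_trace(inputs, layers):
    # Same loop as forward_propagation, but every intermediate output is kept
    activations = [inputs]
    current_output = inputs
    for layer in layers:
        weights_list, bias_list, activation = layer
        current_output = dense_layer(
            current_output, weights_list, bias_list, activation
        )
        activations.append(current_output)
    return current_output, activations

output, activations = forward_propagation_with_trace([1.0, 2.0], layers)
for i, layer_output in enumerate(activations):
    print(f"layer {i} output: {layer_output}")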

Common Beginner Errors

Error 1: Mixing inputs and outputs between layers
→ Each layer’s output is the next layer’s input.

Error 2: Applying the activation at the wrong time (or forgetting it)
→ Activation happens inside the layer, per neuron.

Error 3: Thinking learning happens here
→ Forward propagation does not learn anything.

What We Have Built So Far

At this point, we have a complete neural network pipeline:

  • Input
  • Multiple layers
  • Non-linear activations
  • Final output

This network works—but it does not improve.

That improvement requires one missing ingredient:

A way to measure error.

What’s Next in the Series

In Article #5, we will:

  • Introduce loss functions
  • Measure how wrong the network is
  • Implement Mean Squared Error and Cross-Entropy
  • Prepare the ground for learning

Loss functions are the bridge between prediction and optimization.

GitHub Code

All code for this article is available here:

👉 [link to your GitHub repository]

Each article adds exactly one new concept and one new building block.

Series Progress

You are reading:

Neural Networks From Scratch (Pure Python)
✔ Article #1 — What a Neuron Really Computes
✔ Article #2 — Activation Functions
✔ Article #3 — Building a Layer
✔ Article #4 — Forward Propagation
➡ Article #5 — Loss Functions and Error Measurement