Laboratory Task 2 – Forward Pass#
Name: Joanna Reyda Santos
Section: DS4A
Instruction: Perform a single forward pass and compute the error.
Given Parameters#
\[\begin{split}
x =
\begin{bmatrix}
1 \\
0 \\
1
\end{bmatrix},
\quad
y = [1],
\quad
f(z) = \max(0, z)
\end{split}\]
Hidden Weights#
\[\begin{split}
W_h =
\begin{bmatrix}
0.2 & 0.4 & -0.5 \\
-0.3 & 0.1 & 0.2
\end{bmatrix}
\end{split}\]
Output Weights#
\[\begin{split}
W_o =
\begin{bmatrix}
w_{21} = -0.3 \\
w_{22} = -0.2
\end{bmatrix}
\end{split}\]
Biases#
\[
\theta_1 = -0.4, \quad
\theta_2 = 0.2, \quad
\theta_3 = 0.1
\]
Solution#
Hidden Layer#
\[
z_1 = (1)(0.2) + (0)(0.4) + (1)(-0.5) + (-0.4) = -0.7
\]
\[
z_2 = (1)(-0.3) + (0)(0.1) + (1)(0.2) + (0.2) = 0.1
\]
\[
a_1 = f(z_1) = f(-0.7) = 0, \quad
a_2 = f(z_2) = f(0.1) = 0.1
\]
Output Layer#
\[
z_3 = (a_1)(-0.3) + (a_2)(-0.2) + (0.1) = (0)(-0.3) + (0.1)(-0.2) + (0.1) = 0.08
\]
\[
\hat{y} = f(z_3) = f(0.08) = 0.08
\]
Error Calculation#
\[
E = \frac{1}{2}(y - \hat{y})^2
\]
\[
E = \frac{1}{2}(1 - 0.08)^2 = 0.4232
\]
Final Results#
\[
\hat{y} = 0.08, \quad E = 0.4232
\]
import numpy as np
# Input and target
x = np.array([1, 0, 1])
y = np.array([1])
# ReLU activation
def relu(z):
    return np.maximum(0, z)
# Hidden layer
z1 = (1*0.2) + (0*0.4) + (1*-0.5) + (-0.4)
z2 = (1*-0.3) + (0*0.1) + (1*0.2) + (0.2)
h1, h2 = relu(z1), relu(z2)
# Output layer
z3 = (h1*-0.3) + (h2*-0.2) + (0.1)
y_hat = relu(z3)
# Error
E = 0.5 * (y - y_hat)**2
print(f"z1 = {z1:.2f}, z2 = {z2:.2f}")
print(f"h1 = {h1:.2f}, h2 = {h2:.2f}")
print(f"z3 = {z3:.2f}")
print(f"Predicted Output (ŷ) = {y_hat:.2f}")
print(f"Error (E) = {E[0]:.4f}")
z1 = -0.70, z2 = 0.10
h1 = 0.00, h2 = 0.10
z3 = 0.08
Predicted Output (ŷ) = 0.08
Error (E) = 0.4232
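The same forward pass can also be written with matrix operations instead of scalar arithmetic (a sketch using the parameter values above; `W_h` and `b_h` are simply the hidden-layer weights and biases collected into arrays):

```python
import numpy as np

x = np.array([1, 0, 1])
y = np.array([1.0])

W_h = np.array([[0.2, 0.4, -0.5],
                [-0.3, 0.1, 0.2]])
b_h = np.array([-0.4, 0.2])
W_o = np.array([-0.3, -0.2])
b_o = 0.1

def relu(z):
    return np.maximum(0, z)

# Hidden layer: a = f(W_h x + b_h) -> [0.0, 0.1]
a = relu(W_h @ x + b_h)

# Output layer: y_hat = f(W_o . a + b_o) -> 0.08
y_hat = relu(W_o @ a + b_o)

# Half squared error -> 0.4232
E = 0.5 * (y - y_hat) ** 2
print(f"y_hat = {y_hat:.2f}, E = {E[0]:.4f}")
```

The `@` (matrix-product) form gives the same numbers as the scalar version above and scales to larger layers without rewriting each term by hand.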
Reflection#
In this activity, I performed a single forward pass through a simple neural network using the ReLU activation function. By manually computing each step and verifying with Python, I saw how the weights, biases, and activation function determine the output prediction. The error value (E = 0.4232) shows that the network’s output (0.08) is far from the target (1), which will later be corrected through backpropagation.