Laboratory Task 2 – Forward Pass#

Name: Joanna Reyda Santos
Section: DS4A

Instruction: Perform a single forward pass and compute the error.

Given Parameters#

\[\begin{split} x = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \quad y = [1], \quad f(z) = \max(0, z) \end{split}\]

Hidden Unit Weights#

\[\begin{split} W_h = \begin{bmatrix} w_{11} = 0.2 & w_{12} = -0.3 \\ w_{13} = 0.4 & w_{14} = 0.1 \\ w_{15} = -0.5 & w_{16} = 0.2 \end{bmatrix} \end{split}\]

Output Weights#

\[\begin{split} W_o = \begin{bmatrix} w_{21} = -0.3 \\ w_{22} = -0.2 \end{bmatrix} \end{split}\]

Biases#

\[ \theta_1 = -0.4, \quad \theta_2 = 0.2, \quad \theta_3 = 0.1 \]

Solution#

Hidden Layer#

Compute each hidden unit pre-activation:

\[ z_1 = (1)(0.2) + (0)(0.4) + (1)(-0.5) + (-0.4) = -0.7 \]
\[ z_2 = (1)(-0.3) + (0)(0.1) + (1)(0.2) + (0.2) = 0.1 \]

Apply the activation \(a_i = f(z_i) = \max(0, z_i)\):

\[ a_1 = f(-0.7) = 0, \quad a_2 = f(0.1) = 0.1 \]

Output Layer#

With \(a_1 = 0\) and \(a_2 = 0.1\):

\[ z_3 = (0)(-0.3) + (0.1)(-0.2) + (0.1) = 0.08 \]
\[ \hat{y} = f(z_3) = f(0.08) = 0.08 \]

Error Calculation#

\[ E = \frac{1}{2}(y - \hat{y})^2 \]
\[ E = \frac{1}{2}(1 - 0.08)^2 = 0.4232 \]

Final Results#

\[ \hat{y} = 0.08, \quad E = 0.4232 \]
import numpy as np

# Input and target
x = np.array([1, 0, 1])
y = np.array([1])

# ReLU activation
def relu(z):
    return np.maximum(0, z)

# Hidden layer
z1 = (1*0.2) + (0*0.4) + (1*-0.5) + (-0.4)
z2 = (1*-0.3) + (0*0.1) + (1*0.2) + (0.2)

h1, h2 = relu(z1), relu(z2)

# Output layer
z3 = (h1*-0.3) + (h2*-0.2) + (0.1)
y_hat = relu(z3)

# Error
E = 0.5 * (y - y_hat)**2

print(f"z1 = {z1:.2f}, z2 = {z2:.2f}")
print(f"h1 = {h1:.2f}, h2 = {h2:.2f}")
print(f"z3 = {z3:.2f}")
print(f"Predicted Output (ŷ) = {y_hat:.2f}")
print(f"Error (E) = {E[0]:.4f}")
z1 = -0.70, z2 = 0.10
h1 = 0.00, h2 = 0.10
z3 = 0.08
Predicted Output (ŷ) = 0.08
Error (E) = 0.4232
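
The same forward pass can be written in matrix form, which is how it generalizes beyond two hidden units. Below is a minimal vectorized sketch of the computation above; the names `W_h`, `W_o`, `theta_h`, and `theta_o` are my own labels for the given weight matrices and biases.

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

x = np.array([1.0, 0.0, 1.0])      # input vector
y = 1.0                            # target

W_h = np.array([[0.2, -0.3],       # w11, w12
                [0.4,  0.1],       # w13, w14
                [-0.5, 0.2]])      # w15, w16
theta_h = np.array([-0.4, 0.2])    # theta1, theta2
W_o = np.array([-0.3, -0.2])       # w21, w22
theta_o = 0.1                      # theta3

a = relu(x @ W_h + theta_h)        # hidden activations: [0.0, 0.1]
y_hat = relu(a @ W_o + theta_o)    # predicted output, ~0.08
E = 0.5 * (y - y_hat) ** 2         # squared error, ~0.4232

print(f"a = {a}, y_hat = {y_hat:.2f}, E = {E:.4f}")
```

Each row of `W_h` holds the two weights leaving one input, so `x @ W_h + theta_h` reproduces \(z_1\) and \(z_2\) in a single expression, and the results match the scalar computation above.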

Reflection#

In this activity, I performed a single forward pass through a simple neural network using the ReLU activation function. By computing each step by hand and verifying the results in Python, I saw how the weights, biases, and activation function together determine the output prediction. The error \(E = 0.4232\) shows that the network's output (0.08) is still far from the target (1), a gap that backpropagation would later reduce by adjusting the weights.