3rd Ass
Question # 01: Perform Forward and Backward Propagation…
Forward Propagation:
Forward propagation is the process in a neural network where input data is passed through the network
to obtain predictions. Each layer in the network performs a weighted sum of its inputs, followed by an
activation function that introduces non-linearity. This process continues until the final output is obtained.
In simple terms, forward propagation is like the "thinking" phase of the neural network, where it makes
predictions based on the given inputs.
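As a minimal sketch of this idea (the weights and inputs below are illustrative values, not taken from the assignment), one layer's forward pass is just a weighted sum followed by a sigmoid activation:

```python
import numpy as np

def sigmoid(z):
    # Logistic activation: squashes any real number into (0, 1)
    return 1 / (1 + np.exp(-z))

# One input sample with two features (illustrative values)
x = np.array([1.0, 2.0])

# Weights of a layer with three neurons: shape (2 features, 3 neurons)
W = np.array([[0.1, 0.2, 0.3],
              [0.4, 0.5, 0.6]])
b = np.array([0.1, 0.1, 0.1])   # one bias per neuron

# Forward propagation through one layer: weighted sum, then activation
a = sigmoid(x @ W + b)
print(a)   # three activations, each between 0 and 1
```

Stacking layers simply repeats this step, feeding each layer's activations into the next until the output layer produces the prediction.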
Backward Propagation:
Backward propagation is the learning phase in a neural network. It involves calculating the error between
the predicted output and the actual target, then propagating this error backward through the network.
The goal is to adjust the weights and biases of the network to minimize this error. Backward propagation
utilizes the chain rule from calculus to calculate the gradients of the error with respect to the weights and
biases. In essence, backward propagation is the network's way of "learning from its mistakes" to improve
future predictions.
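The chain rule described above can be sketched for a single sigmoid neuron with a squared-error loss (all numbers below are illustrative assumptions, not the assignment's values):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Single sigmoid neuron, squared-error loss (illustrative values)
x = np.array([0.5, -1.0])   # input features
w = np.array([0.2, 0.4])    # weights
b = 0.1                     # bias
target = 0.0

# Forward pass
z = x @ w + b
y = sigmoid(z)

# Backward pass via the chain rule:
# dE/dw = dE/dy * dy/dz * dz/dw
error = y - target          # dE/dy for E = 0.5 * (y - target)^2
dy_dz = y * (1 - y)         # derivative of the sigmoid
grad_w = error * dy_dz * x  # gradient w.r.t. each weight (dz/dw = x)
grad_b = error * dy_dz      # gradient w.r.t. the bias (dz/db = 1)

# One gradient-descent step: move weights against the gradient
learning_rate = 0.1
w = w - learning_rate * grad_w
b = b - learning_rate * grad_b
```

After the update, recomputing the forward pass gives a prediction slightly closer to the target, which is exactly the "learning from its mistakes" behavior described above.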
CODE
The model definition and the weight matrices were cut off between pages in the original; the values marked as assumed below are illustrative, chosen only to match the shapes the surviving code uses (2 inputs, 4 hidden neurons, 1 output).

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Input

# Input values
input_data = np.array([[5, 3]])
# Target value
target = np.array([[0]])

# Assumed weights and biases (the original values are not shown)
hidden_layer_weights = np.array([[0.1, 0.2],
                                 [0.3, 0.4],
                                 [0.5, 0.6],
                                 [0.7, 0.8]])   # shape (4, 2): 4 hidden neurons, 2 inputs
hidden_layer_biases = np.ones(4)
output_layer_weights = np.array([0.1, 0.2, 0.3, 0.4])
output_layer_biases = np.ones(1)
learning_rate = 0.1

# Build the network: 2 inputs -> 4 sigmoid hidden neurons -> 1 sigmoid output
model = Sequential()
model.add(Input(shape=(2,)))
model.add(Dense(4, activation='sigmoid', use_bias=True,
                bias_initializer='ones'))
# Add the output layer with 1 neuron
model.add(Dense(1, activation='sigmoid', use_bias=True,
                bias_initializer='ones'))

# Transpose the hidden weights to Keras's (inputs, units) layout
model.layers[0].set_weights([hidden_layer_weights.T, hidden_layer_biases])
# Reshape the output layer weights to match the correct shape (4, 1)
output_layer_weights_corrected = output_layer_weights.reshape((4, 1))
model.layers[1].set_weights([output_layer_weights_corrected,
                             output_layer_biases])

# Forward propagation
output = model.predict(input_data)
print(output)

# Backward propagation (manual, one gradient-descent step)
# Recompute the hidden activations by hand
hidden_layer_output = 1 / (1 + np.exp(-(input_data @ hidden_layer_weights.T
                                        + hidden_layer_biases)))
# Error signal at the output: (prediction - target) * sigmoid derivative
delta_output = (output - target) * output * (1 - output)
# Propagate the error back through the output weights (chain rule)
delta_hidden = np.dot(delta_output, model.layers[1].get_weights()[0].T) * \
    hidden_layer_output * (1 - hidden_layer_output)

# Update the output layer weights and biases
model.layers[1].set_weights([output_layer_weights_corrected - learning_rate *
                             np.dot(hidden_layer_output.T, delta_output),
                             output_layer_biases - learning_rate *
                             np.sum(delta_output, axis=0)])
# Update the hidden layer weights and biases
model.layers[0].set_weights([hidden_layer_weights.T - learning_rate *
                             np.dot(input_data.T, delta_hidden),
                             hidden_layer_biases - learning_rate *
                             np.sum(delta_hidden, axis=0)])