In the previous tutorial, we learned how to create a single-layer neural network model without coding. In this tutorial, we will learn how to create a single-layer perceptron model with Python. In this section, I won't use any library or framework. Let's create an artificial neural network model step by step.

Firstly, we need to define the input, weight, and output values. The inputs form the first layer of the neural network model, although by convention the input layer is not counted as a layer.

Inputs

x1 = 0.3
x2 = 0.5

Weights

w1 = 0.2
w2 = 0.1

Output

y = 1

Forward Propagation

Step 1: Let's multiply each input value by its weight and then add the products.

z = w1 * x1 + w2 * x2

z = w1*x1 + w2*x2
print("z : ", z)

z : 0.11

Step 2: We need to apply the sigmoid function to the z value, because the output value must lie in the range 0-1. If we do not apply an activation function to z, the output can be an arbitrarily large number.

a = sigmoid(z)

sigmoid(z) = 1/(1 + e^(-z))

import math

def sigmoid(x):
    # squash any real number into the (0, 1) range
    return 1/(1 + math.exp(-x))
a = sigmoid(z)
print("a : ", a)

a : 0.5274723043445937
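To see why we apply the sigmoid, here is a small check (my addition, not part of the original example) showing that it squashes even extreme z values into the 0-1 range:

```python
import math

def sigmoid(x):
    return 1/(1 + math.exp(-x))

# even very large or very small inputs stay inside the 0-1 range
for z_value in [-100, -1, 0, 1, 100]:
    print(z_value, "->", sigmoid(z_value))
```

Without this squashing step, z itself could take any value, which would make the output hard to interpret as a prediction between 0 and 1.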

Step 3: Finally, we will find the error value. Here y_head is the predicted output (the value a we just computed) and y is the target output.

error = 1/2 * (y - y_head)^2

y_head = a
error = 1/2 * (y - y_head)**2
print("error : ", error)

error : 0.11164121...

Backward Propagation

Backward propagation can be more difficult to understand than forward propagation, because it relies on derivatives. If you have no background in derivatives, you can learn about them by using this link.

Step 1: Find the derivative of the error function. By the chain rule, the derivative of the error with respect to a weight is the product d(error)/d(y_head) * d(y_head)/dz * dz/dw. You can learn how to calculate this equation using this link.

# partial derivatives needed for the chain rule
d_error = (y_head - y)  # derivative of error with respect to y_head
d_a = a*(1-a)           # derivative of sigmoid output a with respect to z
d_z_w1 = x1             # derivative of z with respect to w1
d_z_w2 = x2             # derivative of z with respect to w2
print("d_error :",  d_error)
print("d_a :",  d_a)
print("d_z_w1 :",  d_z_w1)
print("d_z_w2 :",  d_z_w2)

d_error : -0.4725276956554063

d_a : 0.24924527249399803

d_z_w1 : 0.3

d_z_w2 : 0.5

Step 2: Find the derivative of the error with respect to w1 by multiplying the three values above (the chain rule).

d_error_w1 = d_error*d_a*d_z_w1
print("d_error_w1 : ", d_error_w1)

d_error_w1 : -0.03533258827937781

Step 3: Find the derivative of the error with respect to w2 by multiplying the three values above (the chain rule).

d_error_w2 = d_error*d_a*d_z_w2
print("d_error_w2 : ", d_error_w2)

d_error_w2 : -0.058887647132296356
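As a sanity check (my addition, not part of the original tutorial), we can approximate the same gradients numerically with central finite differences and compare them to the chain-rule results above:

```python
import math

def sigmoid(x):
    return 1/(1 + math.exp(-x))

def error_for(w1, w2, x1=0.3, x2=0.5, y=1):
    # forward pass: weighted sum, sigmoid, half squared error
    y_head = sigmoid(w1*x1 + w2*x2)
    return 1/2 * (y - y_head)**2

w1, w2, eps = 0.2, 0.1, 1e-6

# central finite differences approximate d(error)/dw
num_w1 = (error_for(w1 + eps, w2) - error_for(w1 - eps, w2)) / (2*eps)
num_w2 = (error_for(w1, w2 + eps) - error_for(w1, w2 - eps)) / (2*eps)
print("numeric d_error_w1 :", num_w1)  # close to the chain-rule value
print("numeric d_error_w2 :", num_w2)  # close to the chain-rule value
```

If the numeric and analytic gradients agree to several decimal places, the chain-rule derivation is almost certainly correct.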

Step 4: Update w1 and w2 by subtracting their gradients. (Here we implicitly use a learning rate of 1; in practice, the gradient is usually scaled by a small learning rate before the subtraction.)

w1 = w1 - d_error_w1
w2 = w2 - d_error_w2
print("new w1 : ", w1)
print("new w2 : ", w2)

new w1 : 0.2353325882793778

new w2 : 0.15888764713229636
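Putting all the steps together, here is a sketch (my consolidation of the code above, with an explicit learning_rate that the tutorial implicitly sets to 1) of repeating forward and backward propagation for several iterations:

```python
import math

def sigmoid(x):
    return 1/(1 + math.exp(-x))

x1, x2, y = 0.3, 0.5, 1
w1, w2 = 0.2, 0.1
learning_rate = 1  # assumed; the tutorial updates weights by the raw gradient

for step in range(10):
    # forward propagation
    z = w1*x1 + w2*x2
    a = sigmoid(z)
    y_head = a
    error = 1/2 * (y - y_head)**2

    # backward propagation (chain rule)
    d_error = y_head - y
    d_a = a*(1-a)
    w1 = w1 - learning_rate * d_error * d_a * x1
    w2 = w2 - learning_rate * d_error * d_a * x2

print("final w1 :", w1)
print("final w2 :", w2)
print("final error :", error)
```

Each pass nudges the weights in the direction that reduces the error, so the printed error shrinks as the loop runs.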

In the next tutorial, we will see how to create a multi-layer neural network.