To develop a Python program that creates a simple feed-forward neural network (a single perceptron) with the sigmoid activation function. The neuron is trained so that it can predict the correct output value when presented with a new set of input data.
- Hardware: PCs
- Software: Anaconda (Python 3.7) / Moodle-Code Runner / Google Colab
The sigmoid activation function is commonly used in models that must predict a probability as the output. Since a probability exists only in the range 0 to 1, the sigmoid is a natural choice because its output lies in exactly that range. The function is differentiable and provides a smooth gradient, preventing abrupt jumps in output values; this smoothness is reflected in the S-shape of the sigmoid curve.
The derivative of the sigmoid is f'(x) = sigmoid(x) * (1 - sigmoid(x)). A limitation of the sigmoid is that its output is not symmetric around zero, so the outputs of all neurons share the same sign; this can make training of a neural network slower and less stable.
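The derivative identity above can be checked directly; the sketch below (function names are illustrative, not from the program in this report) compares the analytic derivative against a numerical central-difference estimate:

```python
import numpy as np

# Sigmoid and its derivative, illustrating f'(x) = f(x) * (1 - f(x))
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(x):
    s = sigmoid(x)
    return s * (1.0 - s)

x = np.linspace(-5.0, 5.0, 11)
# Central-difference approximation of the derivative
h = 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2.0 * h)
print(np.allclose(sigmoid_derivative(x), numeric, atol=1e-6))
```

The two agree to high precision across the whole range, confirming the closed-form derivative used later in training.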
- Import packages
- Define neural network class
- Model single neuron with 3 inputs and 1 output and assign random weights to a 3 x 1 matrix with values between -1 and 1
- Define sigmoid function and calculate the error
- Multiply the error by the input and again by the gradient of the sigmoid curve
- Initialize a single neuron neural network
- Train neural network using training data
- Test neural network with new data
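The training steps above (compute the output, take the error, multiply by the input and by the sigmoid gradient) can be sketched outside a class as a minimal illustration; it uses the same toy data as the program below, where the target simply equals the first input column:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy data: the target output equals the first input column
X = np.array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]], dtype=float)
y = np.array([[0, 1, 1, 0]], dtype=float).T

np.random.seed(1)
w = 2 * np.random.random((3, 1)) - 1   # random 3 x 1 weights in [-1, 1)

for _ in range(10000):
    out = sigmoid(X @ w)               # forward pass
    error = y - out                    # prediction error
    # multiply the error by the input and by the gradient of the sigmoid
    w += X.T @ (error * out * (1 - out))

# A new input whose first component is 1 should score close to 1
print(float(sigmoid(np.array([1.0, 0.0, 0.0]) @ w)))
```

After enough iterations the neuron has learned the pattern, so the prediction for [1, 0, 0] approaches 1.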
'''
Program to implement the sigmoid activation function in a feed forward ANN.
Developed by: Lokesh Krishnaa M
RegisterNumber: 212220230030
'''
import numpy as np

class NeuralNetwork():
    def __init__(self):
        # Seed the generator so every run starts from the same weights
        np.random.seed(1)
        # 3 x 1 weight matrix with values in [-1, 1)
        self.synaptic_weights = 2 * np.random.random((3, 1)) - 1

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def sigmoid_derivative(self, x):
        # x is already sigmoid(z), so the derivative is x * (1 - x)
        return x * (1 - x)

    def train(self, training_inputs, training_outputs, training_iterations):
        for iteration in range(training_iterations):
            output = self.think(training_inputs)
            error = training_outputs - output
            # Multiply the error by the input and by the gradient of the sigmoid
            adjustments = np.dot(training_inputs.T, error * self.sigmoid_derivative(output))
            self.synaptic_weights += adjustments

    def think(self, inputs):
        inputs = inputs.astype(float)
        output = self.sigmoid(np.dot(inputs, self.synaptic_weights))
        return output

if __name__ == "__main__":
    neuron = NeuralNetwork()
    print("Beginning Randomly Generated Weights: ")
    print(neuron.synaptic_weights)
    # Training data: four examples with three inputs and one output each
    training_inputs = np.array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
    training_outputs = np.array([[0, 1, 1, 0]]).T
    neuron.train(training_inputs, training_outputs, 15000)
    print("Ending Weights After Training: ")
    print(neuron.synaptic_weights)
    user_input_one = input("User Input One : ")
    user_input_two = input("User Input Two : ")
    user_input_three = input("User Input Three : ")
    print("Considering New Situation: ", user_input_one, user_input_two, user_input_three)
    print("New Output Data: ")
    print(neuron.think(np.array([user_input_one, user_input_two, user_input_three])))
    print("Wow, we did it!")
Thus a perceptron employing the sigmoid activation function was created. The neuron was successfully trained to predict the correct output value when provided with a new set of input data.