To develop a neural network regression model for the given dataset.
This dataset presents a captivating challenge due to the intricate relationship between the input and output columns. The complex nature of this connection suggests that there may be underlying patterns or hidden factors that are not readily apparent.
- Load the dataset containing features and target variables into memory.
- Check for data consistency and handle any missing values or anomalies.
- Divide the dataset into training and testing subsets, ensuring a representative distribution of data in each subset.
- Shuffle the data before splitting to avoid any inherent ordering bias.
- Normalize the features using MinMaxScaler to scale them within a predefined range, typically [0, 1].
- Fit the scaler to the training data and transform both training and testing data accordingly.
- Design the architecture of the neural network model, specifying the number of layers and neurons per layer.
- Compile the model by defining the loss function, optimizer, and any additional metrics to monitor during training.
- Train the neural network model using the training data, specifying the number of epochs and batch size.
- Monitor the training process for convergence and potential overfitting by observing the loss on both training and validation data.
- Visualize the training process by plotting the training and validation loss over epochs.
- Plot additional regression metrics such as mean absolute error to assess the model's performance.
- Evaluate the trained model's performance using the testing data.
- Compute relevant regression metrics such as MAE, MSE, RMSE, and R² to assess the model's effectiveness.
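The scaling step in the procedure above is the one most often gotten wrong: the scaler must be fitted on the training split only, then reused unchanged to transform the test split, so that no information from the test set leaks into preprocessing. A minimal self-contained sketch, using synthetic values in place of the experiment's dataset:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for the single input column and target
X = np.arange(1, 11, dtype=float).reshape(-1, 1)
y = 2 * X.ravel() + 1

# train_test_split shuffles by default, avoiding ordering bias
x_train, x_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

scaler = MinMaxScaler()                    # defaults to the [0, 1] range
x_train_s = scaler.fit_transform(x_train)  # learn min/max from training data only
x_test_s = scaler.transform(x_test)        # reuse the same min/max on test data

print(x_train_s.min(), x_train_s.max())    # training data spans exactly [0, 1]
```

Note that the transformed test values may fall slightly outside [0, 1] if they lie beyond the training minimum or maximum; that is expected and correct.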
import pandas as pd
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

# Authenticate and read the dataset from Google Sheets (Colab)
from google.colab import auth
import gspread
from google.auth import default

auth.authenticate_user()
creds, _ = default()
gc = gspread.authorize(creds)
worksheet = gc.open('exp1').sheet1
rows = worksheet.get_all_values()
df = pd.DataFrame(rows[1:], columns=rows[0])
# Sheet values arrive as strings; convert both columns to numeric types
df['Input 1 (Number)'] = pd.to_numeric(df['Input 1 (Number)'])
df['Output'] = pd.to_numeric(df['Output'])
sns.pairplot(df)
X = df['Input 1 (Number)']
y = df['Output']

x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
x_train = x_train.values.reshape(-1, 1)
x_test = x_test.values.reshape(-1, 1)

# Fit the scaler on the training data only, then apply it to both subsets
M = MinMaxScaler()
x_train = M.fit_transform(x_train)
x_test = M.transform(x_test)
model = Sequential()
model.add(Dense(15, activation='relu', input_shape=(1,)))  # one input feature
model.add(Dense(10, activation='relu'))
model.add(Dense(1))
model.summary()
model.compile(optimizer='rmsprop', loss='mse')
model.fit(x_train, y_train, validation_split=0.2, epochs=80)
loss_df = pd.DataFrame(model.history.history)
loss_df.plot()

y_pred = model.predict(x_test)
y_pred
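The procedure calls for quantitative evaluation on the test set, but the listing stops at `predict`. A sketch of the missing step using `sklearn.metrics`; the arrays here are illustrative placeholders for `y_test` and `y_pred`:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# Placeholder arrays standing in for y_test and the model's y_pred
y_true = np.array([3.0, 5.0, 7.0, 9.0])
y_hat = np.array([2.8, 5.1, 7.3, 8.9])

mae = mean_absolute_error(y_true, y_hat)   # average absolute error
mse = mean_squared_error(y_true, y_hat)    # matches the training loss ('mse')
rmse = np.sqrt(mse)                        # error in the target's own units
r2 = r2_score(y_true, y_hat)               # fraction of variance explained

print(f"MAE={mae:.3f}  MSE={mse:.4f}  RMSE={rmse:.3f}  R2={r2:.4f}")
```

Because the model is a regressor, these metrics replace the accuracy/precision family used for classifiers.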
Thus, a neural network regression model was developed and trained on the dataset, and its overall performance was summarized using predictions and regression metrics computed on the testing data.