DIY a Simple Neural Network Framework (2)
Artificial Intelligence
2023-09-22 17:21:50
In the previous article, we built a simple neural network framework. It contains a neuron class that can create neurons with any number of inputs and outputs, and a network class that connects neurons into a neural network.
In this article, we continue building the framework. We will add a training function that updates the weights with gradient descent, and a test function that evaluates the network on a new dataset. Finally, we will show how to use the framework to build a simple classifier.
Training Function
The training function updates the weights with gradient descent. Gradient descent is an iterative optimization algorithm that minimizes the loss function by moving the weights along the negative gradient of the loss.
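As a standalone illustration of what one update looks like, the following minimal sketch applies a single gradient-descent step to a weight vector. It is not part of the framework; the weights and the gradient are made-up example values, and in the real training loop the gradient would come from backpropagation.

import numpy as np

# Standalone illustration of the update rule; not part of the framework code below.
w = np.array([0.5, -1.2, 0.3])       # current weights (example values)
grad = np.array([0.1, -0.4, 0.2])    # dL/dw for the current batch (example values)
learning_rate = 0.01

# Gradient-descent step: w <- w - learning_rate * dL/dw
w = w - learning_rate * grad
print(w)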
Our training function looks like this:
def train(self, X, y, epochs=100, batch_size=32, learning_rate=0.01):
    """
    Train the neural network.

    Args:
        X: The training data.
        y: The training labels.
        epochs: The number of epochs to train for.
        batch_size: The size of the batches to use during training.
        learning_rate: The learning rate to use.
    """
    # Collect the loss value of every batch so training progress can be inspected later.
    loss_values = []
    # Iterate over the epochs.
    for epoch in range(epochs):
        # Shuffle the training data (helper that returns X and y in the same random order).
        X, y = shuffle(X, y)
        # Iterate over the mini-batches (helper that yields (batch_X, batch_y) pairs).
        for batch_X, batch_y in batch(X, y, batch_size):
            # Forward pass.
            y_pred = self.forward(batch_X)
            # Calculate the loss.
            loss = self.loss_function(y_pred, batch_y)
            # Backward pass.
            self.backward(loss)
            # Update the weights.
            self.update_weights(learning_rate)
            # Append the loss value to the list.
            loss_values.append(loss)
    # Return the loss values.
    return loss_values
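The training function relies on two helpers, shuffle and batch, that the snippet above does not define. A minimal sketch of what they might look like follows; these are hypothetical implementations assuming NumPy arrays, and the framework's own versions may differ.

import numpy as np

def shuffle(X, y):
    # Hypothetical helper: return X and y reordered with the same random permutation.
    idx = np.random.permutation(len(X))
    return X[idx], y[idx]

def batch(X, y, batch_size):
    # Hypothetical helper: yield consecutive (batch_X, batch_y) slices of size batch_size.
    for start in range(0, len(X), batch_size):
        yield X[start:start + batch_size], y[start:start + batch_size]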
Test Function
The test function evaluates how the network performs on a new dataset.
Our test function looks like this:
def test(self, X, y):
    """
    Test the neural network.

    Args:
        X: The test data.
        y: The test labels.
    """
    # Forward pass.
    y_pred = self.forward(X)
    # Round the continuous outputs to class labels before scoring.
    y_pred = np.round(y_pred)
    # Calculate the accuracy (accuracy_score from sklearn.metrics).
    accuracy = accuracy_score(y, y_pred)
    # Print the accuracy.
    print("Accuracy:", accuracy)
Classifier
Now that we have a neural network framework, we can use it to build a simple classifier.
Our classifier looks like this:
import numpy as np
# Network, Layer, binary_crossentropy and adam come from the framework built in this series.

class Classifier:
    """
    A simple neural network classifier.
    """
    def __init__(self, input_size, hidden_size, output_size):
        """
        Initialize the classifier.

        Args:
            input_size: The size of the input layer.
            hidden_size: The size of the hidden layer.
            output_size: The size of the output layer.
        """
        # Create the neural network.
        self.network = Network()
        # Add the hidden layer (maps the inputs to the hidden units).
        self.network.add_layer(Layer(input_size, hidden_size))
        # Add the output layer (maps the hidden units to the outputs).
        self.network.add_layer(Layer(hidden_size, output_size))
        # Compile the neural network with a loss function and an optimizer.
        self.network.compile(loss_function=binary_crossentropy, optimizer=adam)

    def train(self, X, y, epochs=100, batch_size=32, learning_rate=0.01):
        """
        Train the classifier.

        Args:
            X: The training data.
            y: The training labels.
            epochs: The number of epochs to train for.
            batch_size: The size of the batches to use during training.
            learning_rate: The learning rate to use.
        """
        # Train the neural network.
        self.network.train(X, y, epochs, batch_size, learning_rate)

    def test(self, X, y):
        """
        Test the classifier.

        Args:
            X: The test data.
            y: The test labels.
        """
        # Test the neural network.
        self.network.test(X, y)

    def predict(self, X):
        """
        Predict the class labels for a given set of data.

        Args:
            X: The data to predict the class labels for.
        """
        # Forward pass.
        y_pred = self.network.forward(X)
        # Round the predictions to the nearest integer.
        y_pred = np.round(y_pred)
        # Return the predictions.
        return y_pred
We can use the classifier with the following code:
# Import the necessary libraries.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from neural_network import Classifier
# Load the data.
X = np.load("data.npy")
y = np.load("labels.npy")
# Split the data into training and test sets.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Create a classifier.
classifier = Classifier(input_size=X_train.shape[1], hidden_size=100, output_size=1)
# Train the classifier.
classifier.train(X_train, y_train, epochs=100, batch_size=32, learning_rate=0.01)
# Test the classifier.
classifier.test(X_test, y_test)
# Predict the class labels for a given set of data.
y_pred = classifier.predict(X_test)
# Print the accuracy.
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)
The output is as follows:
Accuracy: 0.97
Our classifier reaches 97% accuracy on the test set, which shows that it classifies the data well.
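The example above assumes that data.npy and labels.npy already exist. If you want to try the classifier without your own dataset, a toy binary-classification dataset can be generated and saved under the same file names; this is a minimal sketch using scikit-learn's make_classification, not the data used in this article.

import numpy as np
from sklearn.datasets import make_classification

# Toy dataset for trying the classifier; not the data used in the original article.
X, y = make_classification(n_samples=1000, n_features=20, n_classes=2, random_state=42)

# Save it under the file names used by the example above.
np.save("data.npy", X)
np.save("labels.npy", y)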