Both a hands-on project and an introduction~
Introduction to MNIST
MNIST is a classic dataset in deep learning. It contains 60,000 training images and 10,000 test images; each is a 28×28-pixel grayscale image belonging to one of 10 classes (the digits 0-9).
MNIST handwritten digit recognition is the "Hello World" of deep learning.
Loading the Dataset
The MNIST data comes preloaded in the Keras library as four NumPy arrays:
from keras.datasets import mnist

(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
print(train_images.shape, test_images.shape)
print(train_labels[:20])
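To get a feel for the data, it can help to display one sample. This quick visualization is an optional extra, not part of the original walkthrough, and assumes matplotlib is installed:

import matplotlib.pyplot as plt

# Display the first training image together with its label
plt.imshow(train_images[0], cmap='gray')
plt.title(f'label: {train_labels[0]}')
plt.show()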
Building the Network
The network consists of two Dense (i.e. fully connected) layers:
The first layer has 512 hidden units and uses the relu activation.
The second layer has 10 units and uses the softmax activation.
For how to specify the shape of the input data, see http://ducknew.cf/posts/e9e6cec8/
from keras import models, layers

network = models.Sequential()
network.add(layers.Dense(512, activation='relu', input_shape=(28 * 28,)))
network.add(layers.Dense(10, activation='softmax'))
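As a quick check (my addition, not in the original post), network.summary() prints the layer shapes and parameter counts, which can be verified by hand:

# Dense(512): 784*512 + 512 = 401,920 params
# Dense(10):  512*10 + 10   = 5,130 params
network.summary()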
optimizer: the optimizer to use. It can be the name of a predefined optimizer, such as rmsprop or adagrad, or an instance of the Optimizer class.
loss: the objective function the model tries to minimize. It can be the name of a predefined loss function, such as categorical_crossentropy or mse.
metrics: the list of metrics. For classification problems this is usually set to metrics=['accuracy']. A metric can be the name of a predefined metric or a user-defined function.
network.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
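As noted above, the string name can be replaced by an Optimizer object. A minimal sketch, assuming TensorFlow 2.x Keras (the learning_rate shown is RMSprop's default, included only for illustration):

from tensorflow.keras.optimizers import RMSprop

# Equivalent to optimizer='rmsprop', but allows tuning the hyperparameters
network.compile(optimizer=RMSprop(learning_rate=0.001),
                loss='categorical_crossentropy',
                metrics=['accuracy'])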
Data Preprocessing
The raw data has shape (60000, 28, 28), dtype uint8, with values in [0, 255].
After the transformation it has shape (60000, 28*28), dtype float32, with values in [0, 1].
train_images = train_images.reshape((60000, 28 * 28))
train_images = train_images.astype('float32') / 255
test_images = test_images.reshape((10000, 28 * 28))
test_images = test_images.astype('float32') / 255
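A quick sanity check (my addition) confirms the transformation described above:

# Verify shape, dtype, and value range after preprocessing
print(train_images.shape, train_images.dtype)  # (60000, 784) float32
print(train_images.min(), train_images.max())  # 0.0 1.0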
Preparing the Labels
Categorically encode the labels (one-hot encoding):
from tensorflow.keras.utils import to_categorical

train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
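To see what the encoding does, a small illustrative example: each label becomes a length-10 one-hot vector.

# Label 3 becomes a one-hot vector with a 1 in position 3
print(to_categorical([3], num_classes=10))
# [[0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]]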
Training the Model
An epoch is one complete pass over all batches of the training data, with a forward and a backward pass for each batch.
For example:
Suppose the training set has 1,000 samples and batch_size = 10. Then training on the entire set takes:
100 iterations = 1 epoch.
In general: 1 epoch = number of iterations = N = (number of training samples) / batch_size.
For more on batch_size, see: http://ducknew.cf/posts/e9e6cec8/
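Applying that arithmetic to this run (a worked check, not in the original post): with 60,000 samples and batch_size=128, each epoch takes ceil(60000/128) = 469 steps, which matches the "469/469" in the training log below.

import math

# The last, partial batch still counts as a step, hence the ceiling
print(math.ceil(60000 / 128))  # 469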
network.fit(train_images, train_labels, epochs=10, batch_size=128)
The training run prints:
Epoch 1/10
469/469 [==============================] - 15s 6ms/step - loss: 0.4178 - accuracy: 0.8784
Epoch 2/10
469/469 [==============================] - 3s 5ms/step - loss: 0.1119 - accuracy: 0.9669
Epoch 3/10
469/469 [==============================] - 2s 5ms/step - loss: 0.0711 - accuracy: 0.9784
Epoch 4/10
469/469 [==============================] - 2s 5ms/step - loss: 0.0504 - accuracy: 0.9852
Epoch 5/10
469/469 [==============================] - 3s 5ms/step - loss: 0.0376 - accuracy: 0.9888
Epoch 6/10
469/469 [==============================] - 2s 5ms/step - loss: 0.0263 - accuracy: 0.9923
Epoch 7/10
469/469 [==============================] - 2s 5ms/step - loss: 0.0201 - accuracy: 0.9942
Epoch 8/10
469/469 [==============================] - 2s 5ms/step - loss: 0.0142 - accuracy: 0.9959
Epoch 9/10
469/469 [==============================] - 3s 5ms/step - loss: 0.0116 - accuracy: 0.9968
Epoch 10/10
469/469 [==============================] - 2s 5ms/step - loss: 0.0094 - accuracy: 0.9977
Evaluating the Model
loss is the network's loss on the test data; acc is its accuracy on the test data.
test_loss, test_acc = network.evaluate(test_images, test_labels)
print(f'test_loss: {test_loss}, test_acc: {test_acc}')
Result:
test_loss: 0.0742919072508812, test_acc: 0.9818999767303467
Note that test_acc is lower than the accuracy reached during training. This pattern of high training accuracy but lower test accuracy is usually caused by **overfitting**.
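For a concrete look at individual predictions, one can compare the network's output with the true labels. A minimal sketch (my addition):

import numpy as np

# Predict the first five test digits and compare with the ground truth
probs = network.predict(test_images[:5])
print(np.argmax(probs, axis=1))            # predicted digits
print(np.argmax(test_labels[:5], axis=1))  # true digits (labels are one-hot encoded)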
MNIST with a CNN
Network Architecture
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d (Conv2D)              (None, 26, 26, 32)        320
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 13, 13, 32)        0
_________________________________________________________________
dropout (Dropout)            (None, 13, 13, 32)        0
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 11, 11, 64)        18496
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 5, 5, 64)          0
_________________________________________________________________
flatten (Flatten)            (None, 1600)              0
_________________________________________________________________
dense (Dense)                (None, 64)                102464
_________________________________________________________________
dropout_1 (Dropout)          (None, 64)                0
_________________________________________________________________
dense_1 (Dense)              (None, 10)                650
=================================================================
Total params: 121,930
Trainable params: 121,930
Non-trainable params: 0
Network Code
def build_model():
    model = models.Sequential()
    model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Dropout(0.25))
    model.add(layers.Conv2D(64, (3, 3), activation='relu'))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Flatten())
    model.add(layers.Dense(64, activation='relu'))
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(10, activation='softmax'))
    # print(model.summary())
    model.compile(optimizer='rmsprop',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model
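Note that Conv2D expects 4-D input of shape (samples, height, width, channels), whereas the earlier preprocessing flattened the images to (samples, 784). A minimal sketch of the data preparation this section assumes:

# Reload and reshape the data so each image is 28x28x1 rather than a flat vector
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = train_images.reshape((60000, 28, 28, 1)).astype('float32') / 255
test_images = test_images.reshape((10000, 28, 28, 1)).astype('float32') / 255
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)

model = build_model()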
Loss and Accuracy Curves
import matplotlib.pyplot as plt

history = model.fit(train_images, train_labels, epochs=15, batch_size=128)
print(history.history.keys())

acc = history.history['accuracy']
loss = history.history['loss']
epochs = range(1, len(acc) + 1)

# Only training metrics are recorded here, since fit() was given no validation data
plt.plot(epochs, acc, 'bo', label='Training accuracy')
plt.title('Training accuracy')
plt.legend()

plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.title('Training loss')
plt.legend()

plt.show()
In practice, a convolutional neural network usually achieves higher test accuracy than a fully connected network on this kind of image task.