Pooling

Max Pooling has proven the most effective pooling method in recent years. Its rationale is that the maximum pixel value in a region represents that region's most important feature. The images we want to classify often contain other objects as well; for example, a cat appearing somewhere in a car image could mislead the classifier. Pooling helps mitigate this effect.
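To make the idea concrete, here is a minimal NumPy sketch (not the Keras layer itself) of what a 2x2 max pooling window with stride 2 computes: each 2x2 block of the feature map is replaced by its largest value.

```
import numpy as np

def max_pool_2x2(feature_map):
    # Split the map into non-overlapping 2x2 blocks and keep each block's maximum,
    # mimicking what MaxPooling2D(pool_size=(2, 2)) does on a single channel.
    h, w = feature_map.shape
    return feature_map.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.array([[1, 3, 2, 0],
                 [4, 2, 1, 1],
                 [0, 1, 5, 6],
                 [2, 2, 7, 8]])

pooled = max_pool_2x2(fmap)
print(pooled)  # → [[4 2]
                #    [2 8]]
```

Note how the 4x4 input shrinks to 2x2 while the strongest activation in each region survives.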

Dropout

Batch Normalization

```
def Unit(x, filters):
    out = BatchNormalization()(x)
    out = Activation("relu")(out)
    out = Conv2D(filters=filters, kernel_size=[3, 3], strides=[1, 1], padding="same")(out)
    return out
```

```
def MiniModel(input_shape):
    images = Input(input_shape)

    net = Unit(images, 64)
    net = Unit(net, 64)
    net = Unit(net, 64)
    net = MaxPooling2D(pool_size=(2, 2))(net)

    net = Unit(net, 128)
    net = Unit(net, 128)
    net = Unit(net, 128)
    net = MaxPooling2D(pool_size=(2, 2))(net)

    net = Unit(net, 256)
    net = Unit(net, 256)
    net = Unit(net, 256)

    net = Dropout(0.5)(net)
    net = AveragePooling2D(pool_size=(8, 8))(net)
    net = Flatten()(net)
    net = Dense(units=10, activation="softmax")(net)

    model = Model(inputs=images, outputs=net)

    return model
```

```
# load the cifar10 dataset
(train_x, train_y), (test_x, test_y) = cifar10.load_data()

# normalize the data
train_x = train_x.astype('float32') / 255
test_x = test_x.astype('float32') / 255

# Subtract the mean image from both train and test set
train_x = train_x - train_x.mean()
test_x = test_x - test_x.mean()

# Divide by the standard deviation
train_x = train_x / train_x.std(axis=0)
test_x = test_x / test_x.std(axis=0)
```

```
datagen = ImageDataGenerator(rotation_range=10,
                             width_shift_range=5. / 32,
                             height_shift_range=5. / 32,
                             horizontal_flip=True)

# Compute quantities required for featurewise normalization
# (std, mean, and principal components if ZCA whitening is applied).
datagen.fit(train_x)
```

```
# Encode the labels to vectors
train_y = keras.utils.to_categorical(train_y, 10)
test_y = keras.utils.to_categorical(test_y, 10)
```

```
# import needed classes
import keras
from keras.datasets import cifar10
from keras.layers import Dense, Conv2D, MaxPooling2D, Flatten, AveragePooling2D, Dropout, BatchNormalization, Activation
from keras.models import Model, Input
from keras.optimizers import Adam
from keras.callbacks import LearningRateScheduler
from keras.callbacks import ModelCheckpoint
from math import ceil
import os
from keras.preprocessing.image import ImageDataGenerator


def Unit(x, filters):
    out = BatchNormalization()(x)
    out = Activation("relu")(out)
    out = Conv2D(filters=filters, kernel_size=[3, 3], strides=[1, 1], padding="same")(out)
    return out


# Define the model
def MiniModel(input_shape):
    images = Input(input_shape)

    net = Unit(images, 64)
    net = Unit(net, 64)
    net = Unit(net, 64)
    net = MaxPooling2D(pool_size=(2, 2))(net)

    net = Unit(net, 128)
    net = Unit(net, 128)
    net = Unit(net, 128)
    net = MaxPooling2D(pool_size=(2, 2))(net)

    net = Unit(net, 256)
    net = Unit(net, 256)
    net = Unit(net, 256)

    net = Dropout(0.25)(net)
    net = AveragePooling2D(pool_size=(8, 8))(net)
    net = Flatten()(net)
    net = Dense(units=10, activation="softmax")(net)

    model = Model(inputs=images, outputs=net)

    return model


# load the cifar10 dataset
(train_x, train_y), (test_x, test_y) = cifar10.load_data()

# normalize the data
train_x = train_x.astype('float32') / 255
test_x = test_x.astype('float32') / 255

# Subtract the mean image from both train and test set
train_x = train_x - train_x.mean()
test_x = test_x - test_x.mean()

# Divide by the standard deviation
train_x = train_x / train_x.std(axis=0)
test_x = test_x / test_x.std(axis=0)

datagen = ImageDataGenerator(rotation_range=10,
                             width_shift_range=5. / 32,
                             height_shift_range=5. / 32,
                             horizontal_flip=True)

# Compute quantities required for featurewise normalization
# (std, mean, and principal components if ZCA whitening is applied).
datagen.fit(train_x)

# Encode the labels to vectors
train_y = keras.utils.to_categorical(train_y, 10)
test_y = keras.utils.to_categorical(test_y, 10)

# define a common unit
input_shape = (32, 32, 3)
model = MiniModel(input_shape)

# Print a Summary of the model
model.summary()

# Specify the training components
model.compile(optimizer=Adam(0.001), loss="categorical_crossentropy", metrics=["accuracy"])

epochs = 20
steps_per_epoch = ceil(50000 / 128)

# Fit the model on the batches generated by datagen.flow().
model.fit_generator(datagen.flow(train_x, train_y, batch_size=128),
                    validation_data=[test_x, test_y],
                    epochs=epochs, steps_per_epoch=steps_per_epoch, verbose=1, workers=4)

# Evaluate the accuracy of the test dataset
accuracy = model.evaluate(x=test_x, y=test_y, batch_size=128)
model.save("cifar10model.h5")
```

```
input_shape = (32, 32, 3)
model = MiniModel(input_shape)

# Print a Summary of the model
model.summary()
```

```
epochs = 20
steps_per_epoch = ceil(50000 / 128)

# Fit the model on the batches generated by datagen.flow().
model.fit_generator(datagen.flow(train_x, train_y, batch_size=128),
                    validation_data=[test_x, test_y],
                    epochs=epochs, steps_per_epoch=steps_per_epoch, verbose=1, workers=4)

# Evaluate the accuracy of the test dataset
accuracy = model.evaluate(x=test_x, y=test_y, batch_size=128)
model.save("cifar10model.h5")
```

`steps_per_epoch = ceil(50000/128)`

Here, 50000 is the total number of training images, and 128 is the batch size we use.
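As a quick sanity check, the value can be computed directly; each epoch must cover every sample, so the quotient is rounded up:

```
from math import ceil

train_samples = 50000  # total CIFAR-10 training images
batch_size = 128

# 50000 / 128 = 390.625, so 391 steps are needed to see every image once.
steps_per_epoch = ceil(train_samples / batch_size)
print(steps_per_epoch)  # → 391
```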

```
# Fit the model on the batches generated by datagen.flow().
model.fit_generator(datagen.flow(train_x, train_y, batch_size=128),
                    validation_data=[test_x, test_y],
                    epochs=epochs, steps_per_epoch=steps_per_epoch, verbose=1, workers=4)
```