## Tutorial Overview

1. Weight Constraints in Keras

2. Weight Constraints on Layers

3. Weight Constraint Case Study

### Weight Constraints in Keras

The Keras API supports weight constraints. Constraints are specified per-layer, but applied and enforced per-node within the layer. Using a constraint generally involves setting the `kernel_constraint` argument on the layer for the input weights, and the `bias_constraint` argument for the bias weights; in practice, weight constraints are usually not applied to the bias weights. A suite of different vector norms can be used as constraints, provided in the `keras.constraints` module:

- `max_norm`: force weights to have a magnitude at or below a given limit.
- `non_neg`: force weights to be non-negative.
- `unit_norm`: force weights to have a magnitude of exactly 1.0.
- `min_max_norm`: force weights to have a magnitude within a given range.

```python
# import norm
from keras.constraints import max_norm
# instantiate norm with a maximum of 3.0
norm = max_norm(3.0)
```
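For intuition, the effect of `max_norm` can be sketched in plain Python as a simplified, hypothetical re-implementation (Keras applies the same rescaling to each column of the layer's weight matrix after every weight update):

```python
import math

def max_norm_constrain(weights, max_value=3.0):
    """Rescale a weight vector so its L2 norm does not exceed max_value."""
    norm = math.sqrt(sum(w * w for w in weights))
    if norm <= max_value:
        return list(weights)  # already within the limit: unchanged
    scale = max_value / norm  # shrink every weight by the same factor
    return [w * scale for w in weights]

# a vector with norm 5.0 is scaled down to norm 3.0
print(max_norm_constrain([3.0, 4.0], max_value=3.0))  # approximately [1.8, 2.4]
```

Weights under the limit pass through unchanged, so the constraint only activates when a weight vector grows too large.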

### Weight Constraints on Layers

```python
# example of max norm on a dense layer
from keras.layers import Dense
from keras.constraints import max_norm
...
model.add(Dense(32, kernel_constraint=max_norm(3), bias_constraint=max_norm(3)))
...
```

```python
# example of max norm on a cnn layer
from keras.layers import Conv2D
from keras.constraints import max_norm
...
model.add(Conv2D(32, (3, 3), kernel_constraint=max_norm(3), bias_constraint=max_norm(3)))
...
```

```python
# example of max norm on an lstm layer
from keras.layers import LSTM
from keras.constraints import max_norm
...
model.add(LSTM(32, kernel_constraint=max_norm(3), recurrent_constraint=max_norm(3), bias_constraint=max_norm(3)))
...
```

### Weight Constraint Case Study

```python
# generate 2d classification dataset
X, y = make_moons(n_samples=100, noise=0.2, random_state=1)
```

```python
# generate two moons dataset
from sklearn.datasets import make_moons
from matplotlib import pyplot
from pandas import DataFrame
# generate 2d classification dataset
X, y = make_moons(n_samples=100, noise=0.2, random_state=1)
# scatter plot, dots colored by class value
df = DataFrame(dict(x=X[:,0], y=X[:,1], label=y))
colors = {0:'red', 1:'blue'}
fig, ax = pyplot.subplots()
grouped = df.groupby('label')
for key, group in grouped:
    group.plot(ax=ax, kind='scatter', x='x', y='y', label=key, color=colors[key])
pyplot.show()
```

```python
# generate 2d classification dataset
X, y = make_moons(n_samples=100, noise=0.2, random_state=1)
# split into train and test
n_train = 30
trainX, testX = X[:n_train, :], X[n_train:, :]
trainy, testy = y[:n_train], y[n_train:]
```

```python
# define model
model = Sequential()
model.add(Dense(500, input_dim=2, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
```

```python
# fit model
history = model.fit(trainX, trainy, validation_data=(testX, testy), epochs=4000, verbose=0)
```

```python
# evaluate the model
_, train_acc = model.evaluate(trainX, trainy, verbose=0)
_, test_acc = model.evaluate(testX, testy, verbose=0)
print('Train: %.3f, Test: %.3f' % (train_acc, test_acc))
```

```python
# plot history
pyplot.plot(history.history['acc'], label='train')
pyplot.plot(history.history['val_acc'], label='test')
pyplot.legend()
pyplot.show()
```

```python
# mlp overfit on the moons dataset
from sklearn.datasets import make_moons
from keras.layers import Dense
from keras.models import Sequential
from matplotlib import pyplot
# generate 2d classification dataset
X, y = make_moons(n_samples=100, noise=0.2, random_state=1)
# split into train and test
n_train = 30
trainX, testX = X[:n_train, :], X[n_train:, :]
trainy, testy = y[:n_train], y[n_train:]
# define model
model = Sequential()
model.add(Dense(500, input_dim=2, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit model
history = model.fit(trainX, trainy, validation_data=(testX, testy), epochs=4000, verbose=0)
# evaluate the model
_, train_acc = model.evaluate(trainX, trainy, verbose=0)
_, test_acc = model.evaluate(testX, testy, verbose=0)
print('Train: %.3f, Test: %.3f' % (train_acc, test_acc))
# plot history
pyplot.plot(history.history['acc'], label='train')
pyplot.plot(history.history['val_acc'], label='test')
pyplot.legend()
pyplot.show()
```

`Train: 1.000, Test: 0.914`

`model.add(Dense(500, input_dim=2, activation='relu', kernel_constraint=unit_norm()))`

`model.add(Dense(500, input_dim=2, activation='relu', kernel_constraint=min_max_norm(min_value=1.0, max_value=1.0)))`

`model.add(Dense(500, input_dim=2, activation='relu', kernel_constraint=max_norm(1.0)))`
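At a limit of 1.0, the three constraints above coincide for any weight vector whose norm is at least 1.0: `unit_norm` and `min_max_norm(min_value=1.0, max_value=1.0)` force the norm to exactly 1.0, while `max_norm(1.0)` caps it at 1.0. A plain-Python sketch of the three rescaling rules (a simplified, hypothetical rendering; the real Keras classes also take an `axis` argument and, for `MinMaxNorm`, a `rate` argument):

```python
import math

def l2(ws):
    return math.sqrt(sum(w * w for w in ws))

def unit_norm(ws):
    # always rescale so the norm is exactly 1.0
    n = l2(ws)
    return [w / n for w in ws]

def min_max_norm(ws, min_value, max_value):
    # rescale so the norm lands inside [min_value, max_value]
    n = l2(ws)
    target = min(max(n, min_value), max_value)
    return [w * target / n for w in ws]

def max_norm(ws, max_value):
    # rescale only when the norm exceeds the cap
    n = l2(ws)
    return ws if n <= max_value else [w * max_value / n for w in ws]

w = [3.0, 4.0]  # norm 5.0
for constrained in (unit_norm(w), min_max_norm(w, 1.0, 1.0), max_norm(w, 1.0)):
    print(round(l2(constrained), 6))  # all three give norm 1.0
```

The difference only appears for small weights: `max_norm(1.0)` leaves a vector of norm 0.5 untouched, while `unit_norm` would scale it up to 1.0.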

```python
# mlp overfit on the moons dataset with a unit norm constraint
from sklearn.datasets import make_moons
from keras.layers import Dense
from keras.models import Sequential
from keras.constraints import unit_norm
from matplotlib import pyplot
# generate 2d classification dataset
X, y = make_moons(n_samples=100, noise=0.2, random_state=1)
# split into train and test
n_train = 30
trainX, testX = X[:n_train, :], X[n_train:, :]
trainy, testy = y[:n_train], y[n_train:]
# define model
model = Sequential()
model.add(Dense(500, input_dim=2, activation='relu', kernel_constraint=unit_norm()))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
# fit model
history = model.fit(trainX, trainy, validation_data=(testX, testy), epochs=4000, verbose=0)
# evaluate the model
_, train_acc = model.evaluate(trainX, trainy, verbose=0)
_, test_acc = model.evaluate(testX, testy, verbose=0)
print('Train: %.3f, Test: %.3f' % (train_acc, test_acc))
# plot history
pyplot.plot(history.history['acc'], label='train')
pyplot.plot(history.history['val_acc'], label='test')
pyplot.legend()
pyplot.show()
```

`Train: 1.000, Test: 0.943`

## Extensions

### API

Keras Constraints API：https://keras.io/constraints/

Keras constraints.py：https://github.com/keras-team/keras/blob/master/keras/constraints.py

Keras Core Layers API：https://keras.io/layers/core/

Keras Convolutional Layers API：https://keras.io/layers/convolutional/

Keras Recurrent Layers API：https://keras.io/layers/recurrent/

sklearn.datasets.make_moons API：http://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_moons.html