CS231n/Assignment 2/Dropout


This time I take on Stanford University's CS231n/Assignment2/Dropout. That said, the heat these days is truly brutal, on par with a record heat wave; lately I can't help feeling it must be a once-in-a-millennium scorcher.


Dropout

Dropout [1] is a technique for regularizing neural networks by randomly setting some features to zero during the forward pass. In this exercise we implement a dropout layer and then modify the fully-connected network so that dropout can be used optionally.

# As usual, a bit of setup
from __future__ import print_function
import time
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.fc_net import *
from cs231n.data_utils import get_CIFAR10_data
from cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array
from cs231n.solver import Solver

plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

def rel_error(x, y):
    """ returns relative error """
    return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))

# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
%matplotlib inline
# Load the (preprocessed) CIFAR10 data.

data = get_CIFAR10_data()
for k, v in data.items():
    print('%s: ' % k, v.shape)
X_train:  (49000, 3, 32, 32)
y_train:  (49000,)
X_val:  (1000, 3, 32, 32)
y_val:  (1000,)
X_test:  (1000, 3, 32, 32)
y_test:  (1000,)

Dropout forward pass

In cs231n/layers.py, implement the forward pass for dropout. Because dropout behaves differently during training and during testing, implement the operation for both modes. When that is done, run the code below to test the implementation.
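Before running the test, here is a minimal sketch of what dropout_forward could look like. It assumes, consistent with the outputs below, that p is the probability of *keeping* a unit and that the inverted-dropout convention is used, so the test-time pass is the identity; the actual implementation belongs in cs231n/layers.py.

import numpy as np

def dropout_forward(x, dropout_param):
    """Inverted dropout forward pass (sketch). p = keep probability."""
    p, mode = dropout_param['p'], dropout_param['mode']
    if 'seed' in dropout_param:
        np.random.seed(dropout_param['seed'])

    mask, out = None, None
    if mode == 'train':
        # Keep each unit with probability p, then scale by 1/p so the
        # expected output matches the input (no scaling at test time).
        mask = (np.random.rand(*x.shape) < p) / p
        out = x * mask
    elif mode == 'test':
        # Inverted dropout: the test-time pass is the identity.
        out = x

    cache = (dropout_param, mask)
    return out.astype(x.dtype, copy=False), cache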

np.random.seed(231)
x = np.random.randn(500, 500) + 10

for p in [0.3, 0.6, 0.75]:
    out, _ = dropout_forward(x, {'mode': 'train', 'p': p})
    out_test, _ = dropout_forward(x, {'mode': 'test', 'p': p})

    print('Running tests with p = ', p)
    print('Mean of input: ', x.mean())
    print('Mean of train-time output: ', out.mean())
    print('Mean of test-time output: ', out_test.mean())
    print('Fraction of train-time output set to zero: ', (out == 0).mean())
    print('Fraction of test-time output set to zero: ', (out_test == 0).mean())
    print()
Running tests with p =  0.3
Mean of input:  10.000207878477502
Mean of train-time output:  10.035072797050494
Mean of test-time output:  10.000207878477502
Fraction of train-time output set to zero:  0.699124
Fraction of test-time output set to zero:  0.0

Running tests with p =  0.6
Mean of input:  10.000207878477502
Mean of train-time output:  9.976910758765856
Mean of test-time output:  10.000207878477502
Fraction of train-time output set to zero:  0.401368
Fraction of test-time output set to zero:  0.0

Running tests with p =  0.75
Mean of input:  10.000207878477502
Mean of train-time output:  9.993068588261146
Mean of test-time output:  10.000207878477502
Fraction of train-time output set to zero:  0.250496
Fraction of test-time output set to zero:  0.0


Dropout backward pass

In cs231n/layers.py, implement the backward pass for dropout. When you are done, run the cell below to numerically gradient-check your implementation.
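As a rough sketch, the backward pass just reuses the cached mask; since the mask from the forward sketch above already includes the 1/p scaling, no extra scaling is needed here.

def dropout_backward(dout, cache):
    """Inverted dropout backward pass (sketch)."""
    dropout_param, mask = cache
    if dropout_param['mode'] == 'train':
        # Gradients flow only through the units that were kept,
        # scaled by the same 1/p factor baked into the mask.
        return dout * mask
    return dout  # test mode: dropout was the identity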

np.random.seed(231)
x = np.random.randn(10, 10) + 10
dout = np.random.randn(*x.shape)

dropout_param = {'mode': 'train', 'p': 0.8, 'seed': 123}
out, cache = dropout_forward(x, dropout_param)
dx = dropout_backward(dout, cache)
dx_num = eval_numerical_gradient_array(lambda xx: dropout_forward(xx, dropout_param)[0], x, dout)

print('dx relative error: ', rel_error(dx, dx_num))
dx relative error:  5.445612718272284e-11

Fully-connected nets with Dropout

In cs231n/classifiers/fc_net.py, modify the implementation to use dropout. Specifically, if the constructor receives a nonzero value for the network's dropout parameter, the net should add dropout immediately after every ReLU nonlinearity. When you are done, run the code below to numerically gradient-check your implementation.
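The exact scaffold lives in cs231n/classifiers/fc_net.py, but the per-layer pattern is roughly the following self-contained sketch. It reuses the dropout_forward/dropout_backward sketches above; the affine_relu_dropout_* names are illustrative, not the assignment's own helpers.

import numpy as np

def affine_relu_dropout_forward(x, w, b, dropout_param):
    # affine -> ReLU -> dropout, inserted after every nonlinearity
    a = x.dot(w) + b
    r = np.maximum(0, a)
    out, do_cache = dropout_forward(r, dropout_param)
    return out, (x, w, a, do_cache)

def affine_relu_dropout_backward(dout, cache):
    x, w, a, do_cache = cache
    dr = dropout_backward(dout, do_cache)  # undo dropout first
    da = dr * (a > 0)                      # ReLU gradient
    return da.dot(w.T), x.T.dot(da), da.sum(axis=0)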

np.random.seed(231)
N, D, H1, H2, C = 2, 15, 20, 30, 10
X = np.random.randn(N, D)
y = np.random.randint(C, size=(N,))

for dropout in [0, 0.25, 0.5]:
    print('Running check with dropout = ', dropout)
    model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,
                            weight_scale=5e-2, dtype=np.float64,
                            dropout=dropout, seed=123)

    loss, grads = model.loss(X, y)
    print('Initial loss: ', loss)

    # Note: the print sits outside the inner loop, so only the last
    # parameter's relative error in sorted order (b3) is reported,
    # matching the output below.
    for name in sorted(grads):
        f = lambda _: model.loss(X, y)[0]
        grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)
    print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))
    print()
Running check with dropout =  0
Initial loss:  2.3004790897684924
b3 relative error: 5.80e-11

Running check with dropout =  0.25
Initial loss:  2.2924325088330475
b3 relative error: 1.65e-10

Running check with dropout =  0.5
Initial loss:  2.3042759220785896
b3 relative error: 1.13e-10


Regularization experiment

As an experiment, we train a pair of two-layer networks on 500 training examples: one without dropout and one with a dropout parameter of 0.75. Afterwards, we visualize the training and validation accuracy of the two networks over time.

# Train two identical nets, one with dropout and one without
np.random.seed(231)
num_train = 500
small_data = {
  'X_train': data['X_train'][:num_train],
  'y_train': data['y_train'][:num_train],
  'X_val': data['X_val'],
  'y_val': data['y_val'],
}

solvers = {}
dropout_choices = [0, 0.75]
for dropout in dropout_choices:
    model = FullyConnectedNet([500], dropout=dropout)
    print(dropout)

    solver = Solver(model, small_data,
                  num_epochs=25, batch_size=100,
                  update_rule='adam',
                  optim_config={
                    'learning_rate': 5e-4,
                  },
                  verbose=True, print_every=100)
    solver.train()
    solvers[dropout] = solver
0
(Iteration 1 / 125) loss: 7.856643
(Epoch 0 / 25) train acc: 0.260000; val_acc: 0.184000
(Epoch 1 / 25) train acc: 0.416000; val_acc: 0.258000
(Epoch 2 / 25) train acc: 0.482000; val_acc: 0.276000
(Epoch 3 / 25) train acc: 0.532000; val_acc: 0.277000
(Epoch 4 / 25) train acc: 0.600000; val_acc: 0.271000
(Epoch 5 / 25) train acc: 0.708000; val_acc: 0.299000
(Epoch 6 / 25) train acc: 0.722000; val_acc: 0.282000
(Epoch 7 / 25) train acc: 0.832000; val_acc: 0.255000
(Epoch 8 / 25) train acc: 0.878000; val_acc: 0.269000
(Epoch 9 / 25) train acc: 0.902000; val_acc: 0.275000
(Epoch 10 / 25) train acc: 0.888000; val_acc: 0.261000
(Epoch 11 / 25) train acc: 0.928000; val_acc: 0.276000
(Epoch 12 / 25) train acc: 0.960000; val_acc: 0.304000
(Epoch 13 / 25) train acc: 0.962000; val_acc: 0.306000
(Epoch 14 / 25) train acc: 0.968000; val_acc: 0.306000
(Epoch 15 / 25) train acc: 0.970000; val_acc: 0.279000
(Epoch 16 / 25) train acc: 0.988000; val_acc: 0.297000
(Epoch 17 / 25) train acc: 0.980000; val_acc: 0.304000
(Epoch 18 / 25) train acc: 0.986000; val_acc: 0.303000
(Epoch 19 / 25) train acc: 0.990000; val_acc: 0.295000
(Epoch 20 / 25) train acc: 0.984000; val_acc: 0.307000
(Iteration 101 / 125) loss: 0.009816
(Epoch 21 / 25) train acc: 0.974000; val_acc: 0.314000
(Epoch 22 / 25) train acc: 0.986000; val_acc: 0.311000
(Epoch 23 / 25) train acc: 0.974000; val_acc: 0.288000
(Epoch 24 / 25) train acc: 0.968000; val_acc: 0.286000
(Epoch 25 / 25) train acc: 0.982000; val_acc: 0.304000
0.75
(Iteration 1 / 125) loss: 11.299055
(Epoch 0 / 25) train acc: 0.234000; val_acc: 0.187000
(Epoch 1 / 25) train acc: 0.388000; val_acc: 0.241000
(Epoch 2 / 25) train acc: 0.552000; val_acc: 0.263000
(Epoch 3 / 25) train acc: 0.608000; val_acc: 0.265000
(Epoch 4 / 25) train acc: 0.676000; val_acc: 0.282000
(Epoch 5 / 25) train acc: 0.760000; val_acc: 0.285000
(Epoch 6 / 25) train acc: 0.766000; val_acc: 0.291000
(Epoch 7 / 25) train acc: 0.836000; val_acc: 0.271000
(Epoch 8 / 25) train acc: 0.866000; val_acc: 0.288000
(Epoch 9 / 25) train acc: 0.856000; val_acc: 0.283000
(Epoch 10 / 25) train acc: 0.840000; val_acc: 0.273000
(Epoch 11 / 25) train acc: 0.906000; val_acc: 0.293000
(Epoch 12 / 25) train acc: 0.934000; val_acc: 0.291000
(Epoch 13 / 25) train acc: 0.918000; val_acc: 0.291000
(Epoch 14 / 25) train acc: 0.946000; val_acc: 0.295000
(Epoch 15 / 25) train acc: 0.952000; val_acc: 0.310000
(Epoch 16 / 25) train acc: 0.962000; val_acc: 0.294000
(Epoch 17 / 25) train acc: 0.972000; val_acc: 0.317000
(Epoch 18 / 25) train acc: 0.968000; val_acc: 0.306000
(Epoch 19 / 25) train acc: 0.978000; val_acc: 0.292000
(Epoch 20 / 25) train acc: 0.976000; val_acc: 0.299000
(Iteration 101 / 125) loss: 0.377725
(Epoch 21 / 25) train acc: 0.974000; val_acc: 0.293000
(Epoch 22 / 25) train acc: 0.970000; val_acc: 0.294000
(Epoch 23 / 25) train acc: 0.992000; val_acc: 0.323000
(Epoch 24 / 25) train acc: 0.984000; val_acc: 0.312000
(Epoch 25 / 25) train acc: 0.990000; val_acc: 0.297000
# Plot train and validation accuracies of the two models
plt.rcParams["font.size"] = "16"

train_accs = []
val_accs = []
for dropout in dropout_choices:
    solver = solvers[dropout]
    train_accs.append(solver.train_acc_history[-1])
    val_accs.append(solver.val_acc_history[-1])

plt.subplot(3, 1, 1)
for dropout in dropout_choices:
    plt.plot(solvers[dropout].train_acc_history, 'o', label='%.2f dropout' % dropout)
plt.title('Train accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')
  
plt.subplot(3, 1, 2)
for dropout in dropout_choices:
    plt.plot(solvers[dropout].val_acc_history, 'o', label='%.2f dropout' % dropout)
plt.title('Val accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend(ncol=2, loc='lower right')

plt.gcf().set_size_inches(18, 25)
plt.show()

Anyway, it's hot. And tomorrow is supposed to be even hotter; it's unbearable.

Reference site: https://github.com/

  [1] Geoffrey E. Hinton et al., "Improving neural networks by preventing co-adaptation of feature detectors", arXiv:1207.0580, 2012.