PyTorch/TensorRT/PyCUDA MNISTチュートリアル


I worked through this tutorial, which uses TensorRT, PyCUDA, and PyTorch.


Building the TensorRT Engine Manually

The Python API provides a path for Python-based frameworks that might not be supported by the UFF converter, using NumPy-compatible layer weights. This tutorial uses PyTorch. First, import the required modules.

import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit
import numpy as np
from matplotlib.pyplot import imshow # to show test case
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.autograd import Variable

Training the Model with PyTorch

For details on training models with PyTorch, see this site. First we set the hyperparameters, then create the data loaders, define the network, and define the training and test steps.

BATCH_SIZE = 64
TEST_BATCH_SIZE = 1000
EPOCHS = 3
LEARNING_RATE = 0.001
SGD_MOMENTUM = 0.5
SEED = 1
LOG_INTERVAL = 10
# Seed the CUDA RNG for reproducible runs
torch.cuda.manual_seed(SEED)
# Dataloader
kwargs = {'num_workers': 1, 'pin_memory': True}
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('/tmp/mnist/data', train=True, download=True,
                    transform=transforms.Compose([
                    transforms.ToTensor(),
                    transforms.Normalize((0.1307,), (0.3081,))
        ])),
    batch_size=BATCH_SIZE,
    shuffle=True,
    **kwargs)

test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('/tmp/mnist/data', train=False,
                   transform=transforms.Compose([
                   transforms.ToTensor(),
                    transforms.Normalize((0.1307,), (0.3081,))
        ])),
    batch_size=TEST_BATCH_SIZE,
    shuffle=True,
    **kwargs)
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz
Processing...
Done!
# Network
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 20, kernel_size=5)
        self.conv2 = nn.Conv2d(20, 50, kernel_size=5)
        self.conv2_drop = nn.Dropout2d()
        self.fc1 = nn.Linear(800, 500)
        self.fc2 = nn.Linear(500, 10)

    def forward(self, x):
        x = F.max_pool2d(self.conv1(x), kernel_size=2, stride=2)
        x = F.max_pool2d(self.conv2(x), kernel_size=2, stride=2)
        x = x.view(-1, 800)
        x = F.relu(self.fc1(x))
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)

model = Net()
model.cuda()
Net(
  (conv1): Conv2d(1, 20, kernel_size=(5, 5), stride=(1, 1))
  (conv2): Conv2d(20, 50, kernel_size=(5, 5), stride=(1, 1))
  (conv2_drop): Dropout2d(p=0.5)
  (fc1): Linear(in_features=800, out_features=500, bias=True)
  (fc2): Linear(in_features=500, out_features=10, bias=True)
)
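
Note that the 800 input features of fc1 follow directly from the convolution/pooling arithmetic: a 28x28 input -> conv1 (5x5, no padding) -> 24x24 -> 2x2 max-pool -> 12x12 -> conv2 -> 8x8 -> 2x2 max-pool -> 4x4, with 50 channels, so 50 * 4 * 4 = 800. A quick shape check (my addition, reusing the model defined above):

# Trace the feature-map shape that feeds fc1; expect (1, 50, 4, 4).
with torch.no_grad():
    probe = torch.randn(1, 1, 28, 28).cuda()
    probe = F.max_pool2d(model.conv1(probe), kernel_size=2, stride=2)
    probe = F.max_pool2d(model.conv2(probe), kernel_size=2, stride=2)
print(probe.shape)  # torch.Size([1, 50, 4, 4]) -> 50 * 4 * 4 = 800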
optimizer = optim.SGD(model.parameters(), lr=LEARNING_RATE, momentum=SGD_MOMENTUM)
def train(epoch):
    model.train()
    for batch, (data, target) in enumerate(train_loader):
        data, target = data.cuda(), target.cuda()
        data, target = Variable(data), Variable(target)
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        if batch % LOG_INTERVAL == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'
                  .format(epoch,
                          batch * len(data),
                          len(train_loader.dataset),
                          100. * batch / len(train_loader),
                          loss.data.item()))

def test(epoch):
    model.eval()
    test_loss = 0
    correct = 0
    for data, target in test_loader:
        data, target = data.cuda(), target.cuda()
        # Run the forward pass under no_grad as well, so evaluation
        # does not build an autograd graph.
        with torch.no_grad():
            data, target = Variable(data), Variable(target)
            output = model(data)
        test_loss += F.nll_loss(output, target).data.item()
        pred = output.data.max(1)[1]
        correct += pred.eq(target.data).cpu().sum()
    test_loss /= len(test_loader)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'
          .format(test_loss,
                  correct,
                  len(test_loader.dataset),
                  100. * correct / len(test_loader.dataset)))

Next, train the model.

for e in range(EPOCHS):
    train(e + 1)
    test(e + 1)
Train Epoch: 1 [0/60000 (0%)]	Loss: 2.339442
Train Epoch: 1 [640/60000 (1%)]	Loss: 2.302078
Train Epoch: 1 [1280/60000 (2%)]	Loss: 2.293532
Train Epoch: 1 [1920/60000 (3%)]	Loss: 2.271640
Train Epoch: 1 [2560/60000 (4%)]	Loss: 2.270803
Train Epoch: 1 [3200/60000 (5%)]	Loss: 2.238370
Train Epoch: 1 [3840/60000 (6%)]	Loss: 2.215274
Train Epoch: 1 [4480/60000 (7%)]	Loss: 2.209767
Train Epoch: 1 [5120/60000 (9%)]	Loss: 2.208596
Train Epoch: 1 [5760/60000 (10%)]	Loss: 2.191300
Train Epoch: 1 [6400/60000 (11%)]	Loss: 2.185210
Train Epoch: 1 [7040/60000 (12%)]	Loss: 2.169501
Train Epoch: 1 [7680/60000 (13%)]	Loss: 2.127026
Train Epoch: 1 [8320/60000 (14%)]	Loss: 2.145267
Train Epoch: 1 [8960/60000 (15%)]	Loss: 2.129811
Train Epoch: 1 [9600/60000 (16%)]	Loss: 2.063754
Train Epoch: 1 [10240/60000 (17%)]	Loss: 2.037061
Train Epoch: 1 [10880/60000 (18%)]	Loss: 1.995715
Train Epoch: 1 [11520/60000 (19%)]	Loss: 2.050589
Train Epoch: 1 [12160/60000 (20%)]	Loss: 2.022595
Train Epoch: 1 [12800/60000 (21%)]	Loss: 1.963765
Train Epoch: 1 [13440/60000 (22%)]	Loss: 1.978015
Train Epoch: 1 [14080/60000 (23%)]	Loss: 1.959741
Train Epoch: 1 [14720/60000 (25%)]	Loss: 1.958775
Train Epoch: 1 [15360/60000 (26%)]	Loss: 1.818887
Train Epoch: 1 [16000/60000 (27%)]	Loss: 1.854086
Train Epoch: 1 [16640/60000 (28%)]	Loss: 1.728107
Train Epoch: 1 [17280/60000 (29%)]	Loss: 1.741828
Train Epoch: 1 [17920/60000 (30%)]	Loss: 1.623457
Train Epoch: 1 [18560/60000 (31%)]	Loss: 1.593732
Train Epoch: 1 [19200/60000 (32%)]	Loss: 1.672744
Train Epoch: 1 [19840/60000 (33%)]	Loss: 1.629516
Train Epoch: 1 [20480/60000 (34%)]	Loss: 1.573262
Train Epoch: 1 [21120/60000 (35%)]	Loss: 1.393303
Train Epoch: 1 [21760/60000 (36%)]	Loss: 1.442336
Train Epoch: 1 [22400/60000 (37%)]	Loss: 1.346715
Train Epoch: 1 [23040/60000 (38%)]	Loss: 1.341215
Train Epoch: 1 [23680/60000 (39%)]	Loss: 1.329195
Train Epoch: 1 [24320/60000 (41%)]	Loss: 1.331725
Train Epoch: 1 [24960/60000 (42%)]	Loss: 1.226278
Train Epoch: 1 [25600/60000 (43%)]	Loss: 1.045015
Train Epoch: 1 [26240/60000 (44%)]	Loss: 1.052716
Train Epoch: 1 [26880/60000 (45%)]	Loss: 0.936824
Train Epoch: 1 [27520/60000 (46%)]	Loss: 1.002749
Train Epoch: 1 [28160/60000 (47%)]	Loss: 0.746723
Train Epoch: 1 [28800/60000 (48%)]	Loss: 0.922779
Train Epoch: 1 [29440/60000 (49%)]	Loss: 0.863753
Train Epoch: 1 [30080/60000 (50%)]	Loss: 0.892013
Train Epoch: 1 [30720/60000 (51%)]	Loss: 0.761069
Train Epoch: 1 [31360/60000 (52%)]	Loss: 0.823998
Train Epoch: 1 [32000/60000 (53%)]	Loss: 0.761182
Train Epoch: 1 [32640/60000 (54%)]	Loss: 0.952870
Train Epoch: 1 [33280/60000 (55%)]	Loss: 0.773328
Train Epoch: 1 [33920/60000 (57%)]	Loss: 0.686047
Train Epoch: 1 [34560/60000 (58%)]	Loss: 0.658088
Train Epoch: 1 [35200/60000 (59%)]	Loss: 0.614582
Train Epoch: 1 [35840/60000 (60%)]	Loss: 0.611019
Train Epoch: 1 [36480/60000 (61%)]	Loss: 0.643625
Train Epoch: 1 [37120/60000 (62%)]	Loss: 0.553809
Train Epoch: 1 [37760/60000 (63%)]	Loss: 0.621607
Train Epoch: 1 [38400/60000 (64%)]	Loss: 0.543676
Train Epoch: 1 [39040/60000 (65%)]	Loss: 0.536915
Train Epoch: 1 [39680/60000 (66%)]	Loss: 0.565784
Train Epoch: 1 [40320/60000 (67%)]	Loss: 0.466900
Train Epoch: 1 [40960/60000 (68%)]	Loss: 0.647791
Train Epoch: 1 [41600/60000 (69%)]	Loss: 0.632955
Train Epoch: 1 [42240/60000 (70%)]	Loss: 0.502301
Train Epoch: 1 [42880/60000 (71%)]	Loss: 0.432976
Train Epoch: 1 [43520/60000 (72%)]	Loss: 0.531026
Train Epoch: 1 [44160/60000 (74%)]	Loss: 0.340906
Train Epoch: 1 [44800/60000 (75%)]	Loss: 0.488034
Train Epoch: 1 [45440/60000 (76%)]	Loss: 0.418744
Train Epoch: 1 [46080/60000 (77%)]	Loss: 0.509820
Train Epoch: 1 [46720/60000 (78%)]	Loss: 0.443774
Train Epoch: 1 [47360/60000 (79%)]	Loss: 0.378030
Train Epoch: 1 [48000/60000 (80%)]	Loss: 0.594622
Train Epoch: 1 [48640/60000 (81%)]	Loss: 0.385377
Train Epoch: 1 [49280/60000 (82%)]	Loss: 0.596030
Train Epoch: 1 [49920/60000 (83%)]	Loss: 0.435705
Train Epoch: 1 [50560/60000 (84%)]	Loss: 0.467549
Train Epoch: 1 [51200/60000 (85%)]	Loss: 0.352721
Train Epoch: 1 [51840/60000 (86%)]	Loss: 0.570520
Train Epoch: 1 [52480/60000 (87%)]	Loss: 0.299948
Train Epoch: 1 [53120/60000 (88%)]	Loss: 0.461945
Train Epoch: 1 [53760/60000 (90%)]	Loss: 0.473863
Train Epoch: 1 [54400/60000 (91%)]	Loss: 0.487546
Train Epoch: 1 [55040/60000 (92%)]	Loss: 0.585849
Train Epoch: 1 [55680/60000 (93%)]	Loss: 0.412827
Train Epoch: 1 [56320/60000 (94%)]	Loss: 0.424696
Train Epoch: 1 [56960/60000 (95%)]	Loss: 0.455943
Train Epoch: 1 [57600/60000 (96%)]	Loss: 0.286414
Train Epoch: 1 [58240/60000 (97%)]	Loss: 0.497361
Train Epoch: 1 [58880/60000 (98%)]	Loss: 0.374967
Train Epoch: 1 [59520/60000 (99%)]	Loss: 0.635978

Test set: Average loss: 0.3894, Accuracy: 8897/10000 (88%)

Train Epoch: 2 [0/60000 (0%)]	Loss: 0.289303
Train Epoch: 2 [640/60000 (1%)]	Loss: 0.336094
Train Epoch: 2 [1280/60000 (2%)]	Loss: 0.419128
Train Epoch: 2 [1920/60000 (3%)]	Loss: 0.401840
Train Epoch: 2 [2560/60000 (4%)]	Loss: 0.332636
Train Epoch: 2 [3200/60000 (5%)]	Loss: 0.330166
Train Epoch: 2 [3840/60000 (6%)]	Loss: 0.435750
Train Epoch: 2 [4480/60000 (7%)]	Loss: 0.292992
Train Epoch: 2 [5120/60000 (9%)]	Loss: 0.343735
Train Epoch: 2 [5760/60000 (10%)]	Loss: 0.437166
Train Epoch: 2 [6400/60000 (11%)]	Loss: 0.360609
Train Epoch: 2 [7040/60000 (12%)]	Loss: 0.563690
Train Epoch: 2 [7680/60000 (13%)]	Loss: 0.313894
Train Epoch: 2 [8320/60000 (14%)]	Loss: 0.263234
Train Epoch: 2 [8960/60000 (15%)]	Loss: 0.365996
Train Epoch: 2 [9600/60000 (16%)]	Loss: 0.350357
Train Epoch: 2 [10240/60000 (17%)]	Loss: 0.285039
Train Epoch: 2 [10880/60000 (18%)]	Loss: 0.463300
Train Epoch: 2 [11520/60000 (19%)]	Loss: 0.382179
Train Epoch: 2 [12160/60000 (20%)]	Loss: 0.286137
Train Epoch: 2 [12800/60000 (21%)]	Loss: 0.367496
Train Epoch: 2 [13440/60000 (22%)]	Loss: 0.585348
Train Epoch: 2 [14080/60000 (23%)]	Loss: 0.351601
Train Epoch: 2 [14720/60000 (25%)]	Loss: 0.277279
Train Epoch: 2 [15360/60000 (26%)]	Loss: 0.313926
Train Epoch: 2 [16000/60000 (27%)]	Loss: 0.321938
Train Epoch: 2 [16640/60000 (28%)]	Loss: 0.286514
Train Epoch: 2 [17280/60000 (29%)]	Loss: 0.211912
Train Epoch: 2 [17920/60000 (30%)]	Loss: 0.572703
Train Epoch: 2 [18560/60000 (31%)]	Loss: 0.496922
Train Epoch: 2 [19200/60000 (32%)]	Loss: 0.216564
Train Epoch: 2 [19840/60000 (33%)]	Loss: 0.360298
Train Epoch: 2 [20480/60000 (34%)]	Loss: 0.436383
Train Epoch: 2 [21120/60000 (35%)]	Loss: 0.375520
Train Epoch: 2 [21760/60000 (36%)]	Loss: 0.472223
Train Epoch: 2 [22400/60000 (37%)]	Loss: 0.273098
Train Epoch: 2 [23040/60000 (38%)]	Loss: 0.308542
Train Epoch: 2 [23680/60000 (39%)]	Loss: 0.274205
Train Epoch: 2 [24320/60000 (41%)]	Loss: 0.189756
Train Epoch: 2 [24960/60000 (42%)]	Loss: 0.288041
Train Epoch: 2 [25600/60000 (43%)]	Loss: 0.342737
Train Epoch: 2 [26240/60000 (44%)]	Loss: 0.469760
Train Epoch: 2 [26880/60000 (45%)]	Loss: 0.263181
Train Epoch: 2 [27520/60000 (46%)]	Loss: 0.331070
Train Epoch: 2 [28160/60000 (47%)]	Loss: 0.268305
Train Epoch: 2 [28800/60000 (48%)]	Loss: 0.308854
Train Epoch: 2 [29440/60000 (49%)]	Loss: 0.233590
Train Epoch: 2 [30080/60000 (50%)]	Loss: 0.186167
Train Epoch: 2 [30720/60000 (51%)]	Loss: 0.322805
Train Epoch: 2 [31360/60000 (52%)]	Loss: 0.338492
Train Epoch: 2 [32000/60000 (53%)]	Loss: 0.428483
Train Epoch: 2 [32640/60000 (54%)]	Loss: 0.245604
Train Epoch: 2 [33280/60000 (55%)]	Loss: 0.358258
Train Epoch: 2 [33920/60000 (57%)]	Loss: 0.590652
Train Epoch: 2 [34560/60000 (58%)]	Loss: 0.255480
Train Epoch: 2 [35200/60000 (59%)]	Loss: 0.300286
Train Epoch: 2 [35840/60000 (60%)]	Loss: 0.338784
Train Epoch: 2 [36480/60000 (61%)]	Loss: 0.471962
Train Epoch: 2 [37120/60000 (62%)]	Loss: 0.285339
Train Epoch: 2 [37760/60000 (63%)]	Loss: 0.421666
Train Epoch: 2 [38400/60000 (64%)]	Loss: 0.190298
Train Epoch: 2 [39040/60000 (65%)]	Loss: 0.335659
Train Epoch: 2 [39680/60000 (66%)]	Loss: 0.222705
Train Epoch: 2 [40320/60000 (67%)]	Loss: 0.386390
Train Epoch: 2 [40960/60000 (68%)]	Loss: 0.359086
Train Epoch: 2 [41600/60000 (69%)]	Loss: 0.393072
Train Epoch: 2 [42240/60000 (70%)]	Loss: 0.287289
Train Epoch: 2 [42880/60000 (71%)]	Loss: 0.311965
Train Epoch: 2 [43520/60000 (72%)]	Loss: 0.429747
Train Epoch: 2 [44160/60000 (74%)]	Loss: 0.357612
Train Epoch: 2 [44800/60000 (75%)]	Loss: 0.162335
Train Epoch: 2 [45440/60000 (76%)]	Loss: 0.255895
Train Epoch: 2 [46080/60000 (77%)]	Loss: 0.277777
Train Epoch: 2 [46720/60000 (78%)]	Loss: 0.331651
Train Epoch: 2 [47360/60000 (79%)]	Loss: 0.399300
Train Epoch: 2 [48000/60000 (80%)]	Loss: 0.316395
Train Epoch: 2 [48640/60000 (81%)]	Loss: 0.412863
Train Epoch: 2 [49280/60000 (82%)]	Loss: 0.280073
Train Epoch: 2 [49920/60000 (83%)]	Loss: 0.213473
Train Epoch: 2 [50560/60000 (84%)]	Loss: 0.295295
Train Epoch: 2 [51200/60000 (85%)]	Loss: 0.342541
Train Epoch: 2 [51840/60000 (86%)]	Loss: 0.285704
Train Epoch: 2 [52480/60000 (87%)]	Loss: 0.318621
Train Epoch: 2 [53120/60000 (88%)]	Loss: 0.257397
Train Epoch: 2 [53760/60000 (90%)]	Loss: 0.312963
Train Epoch: 2 [54400/60000 (91%)]	Loss: 0.395818
Train Epoch: 2 [55040/60000 (92%)]	Loss: 0.383620
Train Epoch: 2 [55680/60000 (93%)]	Loss: 0.251550
Train Epoch: 2 [56320/60000 (94%)]	Loss: 0.295705
Train Epoch: 2 [56960/60000 (95%)]	Loss: 0.228809
Train Epoch: 2 [57600/60000 (96%)]	Loss: 0.318937
Train Epoch: 2 [58240/60000 (97%)]	Loss: 0.373726
Train Epoch: 2 [58880/60000 (98%)]	Loss: 0.305496
Train Epoch: 2 [59520/60000 (99%)]	Loss: 0.290378

Test set: Average loss: 0.2691, Accuracy: 9201/10000 (92%)

Train Epoch: 3 [0/60000 (0%)]	Loss: 0.253604
Train Epoch: 3 [640/60000 (1%)]	Loss: 0.366233
Train Epoch: 3 [1280/60000 (2%)]	Loss: 0.226188
Train Epoch: 3 [1920/60000 (3%)]	Loss: 0.310036
Train Epoch: 3 [2560/60000 (4%)]	Loss: 0.241364
Train Epoch: 3 [3200/60000 (5%)]	Loss: 0.318453
Train Epoch: 3 [3840/60000 (6%)]	Loss: 0.170781
Train Epoch: 3 [4480/60000 (7%)]	Loss: 0.340357
Train Epoch: 3 [5120/60000 (9%)]	Loss: 0.219478
Train Epoch: 3 [5760/60000 (10%)]	Loss: 0.124181
Train Epoch: 3 [6400/60000 (11%)]	Loss: 0.399135
Train Epoch: 3 [7040/60000 (12%)]	Loss: 0.344675
Train Epoch: 3 [7680/60000 (13%)]	Loss: 0.320108
Train Epoch: 3 [8320/60000 (14%)]	Loss: 0.361676
Train Epoch: 3 [8960/60000 (15%)]	Loss: 0.224187
Train Epoch: 3 [9600/60000 (16%)]	Loss: 0.203880
Train Epoch: 3 [10240/60000 (17%)]	Loss: 0.282545
Train Epoch: 3 [10880/60000 (18%)]	Loss: 0.299579
Train Epoch: 3 [11520/60000 (19%)]	Loss: 0.239704
Train Epoch: 3 [12160/60000 (20%)]	Loss: 0.255080
Train Epoch: 3 [12800/60000 (21%)]	Loss: 0.277137
Train Epoch: 3 [13440/60000 (22%)]	Loss: 0.180433
Train Epoch: 3 [14080/60000 (23%)]	Loss: 0.492446
Train Epoch: 3 [14720/60000 (25%)]	Loss: 0.120972
Train Epoch: 3 [15360/60000 (26%)]	Loss: 0.290161
Train Epoch: 3 [16000/60000 (27%)]	Loss: 0.242381
Train Epoch: 3 [16640/60000 (28%)]	Loss: 0.201860
Train Epoch: 3 [17280/60000 (29%)]	Loss: 0.285005
Train Epoch: 3 [17920/60000 (30%)]	Loss: 0.287966
Train Epoch: 3 [18560/60000 (31%)]	Loss: 0.336895
Train Epoch: 3 [19200/60000 (32%)]	Loss: 0.403253
Train Epoch: 3 [19840/60000 (33%)]	Loss: 0.176360
Train Epoch: 3 [20480/60000 (34%)]	Loss: 0.364085
Train Epoch: 3 [21120/60000 (35%)]	Loss: 0.263244
Train Epoch: 3 [21760/60000 (36%)]	Loss: 0.206899
Train Epoch: 3 [22400/60000 (37%)]	Loss: 0.271085
Train Epoch: 3 [23040/60000 (38%)]	Loss: 0.216636
Train Epoch: 3 [23680/60000 (39%)]	Loss: 0.258667
Train Epoch: 3 [24320/60000 (41%)]	Loss: 0.359856
Train Epoch: 3 [24960/60000 (42%)]	Loss: 0.246253
Train Epoch: 3 [25600/60000 (43%)]	Loss: 0.251472
Train Epoch: 3 [26240/60000 (44%)]	Loss: 0.162061
Train Epoch: 3 [26880/60000 (45%)]	Loss: 0.311855
Train Epoch: 3 [27520/60000 (46%)]	Loss: 0.379714
Train Epoch: 3 [28160/60000 (47%)]	Loss: 0.175857
Train Epoch: 3 [28800/60000 (48%)]	Loss: 0.322427
Train Epoch: 3 [29440/60000 (49%)]	Loss: 0.179820
Train Epoch: 3 [30080/60000 (50%)]	Loss: 0.115677
Train Epoch: 3 [30720/60000 (51%)]	Loss: 0.116972
Train Epoch: 3 [31360/60000 (52%)]	Loss: 0.269514
Train Epoch: 3 [32000/60000 (53%)]	Loss: 0.375716
Train Epoch: 3 [32640/60000 (54%)]	Loss: 0.239872
Train Epoch: 3 [33280/60000 (55%)]	Loss: 0.292330
Train Epoch: 3 [33920/60000 (57%)]	Loss: 0.247001
Train Epoch: 3 [34560/60000 (58%)]	Loss: 0.298523
Train Epoch: 3 [35200/60000 (59%)]	Loss: 0.300907
Train Epoch: 3 [35840/60000 (60%)]	Loss: 0.352134
Train Epoch: 3 [36480/60000 (61%)]	Loss: 0.198379
Train Epoch: 3 [37120/60000 (62%)]	Loss: 0.422052
Train Epoch: 3 [37760/60000 (63%)]	Loss: 0.218586
Train Epoch: 3 [38400/60000 (64%)]	Loss: 0.245664
Train Epoch: 3 [39040/60000 (65%)]	Loss: 0.419740
Train Epoch: 3 [39680/60000 (66%)]	Loss: 0.154121
Train Epoch: 3 [40320/60000 (67%)]	Loss: 0.249203
Train Epoch: 3 [40960/60000 (68%)]	Loss: 0.195843
Train Epoch: 3 [41600/60000 (69%)]	Loss: 0.149473
Train Epoch: 3 [42240/60000 (70%)]	Loss: 0.300395
Train Epoch: 3 [42880/60000 (71%)]	Loss: 0.409359
Train Epoch: 3 [43520/60000 (72%)]	Loss: 0.290570
Train Epoch: 3 [44160/60000 (74%)]	Loss: 0.332417
Train Epoch: 3 [44800/60000 (75%)]	Loss: 0.213544
Train Epoch: 3 [45440/60000 (76%)]	Loss: 0.310301
Train Epoch: 3 [46080/60000 (77%)]	Loss: 0.195416
Train Epoch: 3 [46720/60000 (78%)]	Loss: 0.341038
Train Epoch: 3 [47360/60000 (79%)]	Loss: 0.241986
Train Epoch: 3 [48000/60000 (80%)]	Loss: 0.210743
Train Epoch: 3 [48640/60000 (81%)]	Loss: 0.253939
Train Epoch: 3 [49280/60000 (82%)]	Loss: 0.221072
Train Epoch: 3 [49920/60000 (83%)]	Loss: 0.342825
Train Epoch: 3 [50560/60000 (84%)]	Loss: 0.180369
Train Epoch: 3 [51200/60000 (85%)]	Loss: 0.325047
Train Epoch: 3 [51840/60000 (86%)]	Loss: 0.161454
Train Epoch: 3 [52480/60000 (87%)]	Loss: 0.340352
Train Epoch: 3 [53120/60000 (88%)]	Loss: 0.201719
Train Epoch: 3 [53760/60000 (90%)]	Loss: 0.179605
Train Epoch: 3 [54400/60000 (91%)]	Loss: 0.161826
Train Epoch: 3 [55040/60000 (92%)]	Loss: 0.172597
Train Epoch: 3 [55680/60000 (93%)]	Loss: 0.171595
Train Epoch: 3 [56320/60000 (94%)]	Loss: 0.254696
Train Epoch: 3 [56960/60000 (95%)]	Loss: 0.147562
Train Epoch: 3 [57600/60000 (96%)]	Loss: 0.226753
Train Epoch: 3 [58240/60000 (97%)]	Loss: 0.344109
Train Epoch: 3 [58880/60000 (98%)]	Loss: 0.208220
Train Epoch: 3 [59520/60000 (99%)]	Loss: 0.291278

Test set: Average loss: 0.2113, Accuracy: 9359/10000 (93%)


Converting the Model to a TensorRT Engine

Now that we have a trained model, we extract the layer weights by getting the state_dict.

weights = model.state_dict()
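
It is worth inspecting the state_dict first to confirm the key names and shapes that the engine-building code below relies on (a quick check I added; the shapes follow from the Net definition):

for name, tensor in weights.items():
    print(name, tuple(tensor.shape))
# conv1.weight (20, 1, 5, 5)    conv1.bias (20,)
# conv2.weight (50, 20, 5, 5)   conv2.bias (50,)
# fc1.weight   (500, 800)       fc1.bias   (500,)
# fc2.weight   (10, 500)        fc2.bias   (10,)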

Create the builder and logger for the build process.

G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.ERROR)
builder = trt.infer.create_infer_builder(G_LOGGER)

Next, replicate the network structure above in TensorRT and extract the weights from PyTorch in the form of NumPy arrays. The NumPy arrays coming from PyTorch reflect the dimensionality of the layers, so we flatten the arrays.

network = builder.create_network()

# Name for the input layer, data type, tuple for dimension
data = network.add_input("data", trt.infer.DataType.FLOAT, (1, 28, 28))
assert(data)

#-------------
conv1_w = weights['conv1.weight'].cpu().numpy().reshape(-1)
conv1_b = weights['conv1.bias'].cpu().numpy().reshape(-1)
conv1 = network.add_convolution(data, 20, (5,5),  conv1_w, conv1_b)
assert(conv1)
conv1.set_stride((1,1))

#-------------
pool1 = network.add_pooling(conv1.get_output(0), trt.infer.PoolingType.MAX, (2,2))
assert(pool1)
pool1.set_stride((2,2))

#-------------
conv2_w = weights['conv2.weight'].cpu().numpy().reshape(-1)
conv2_b = weights['conv2.bias'].cpu().numpy().reshape(-1)
conv2 = network.add_convolution(pool1.get_output(0), 50, (5,5), conv2_w, conv2_b)
assert(conv2)
conv2.set_stride((1,1))

#-------------
pool2 = network.add_pooling(conv2.get_output(0), trt.infer.PoolingType.MAX, (2,2))
assert(pool2)
pool2.set_stride((2,2))

#-------------
fc1_w = weights['fc1.weight'].cpu().numpy().reshape(-1)
fc1_b = weights['fc1.bias'].cpu().numpy().reshape(-1)
fc1 = network.add_fully_connected(pool2.get_output(0), 500, fc1_w, fc1_b)
assert(fc1)

#-------------
relu1 = network.add_activation(fc1.get_output(0), trt.infer.ActivationType.RELU)
assert(relu1)

#-------------
fc2_w = weights['fc2.weight'].cpu().numpy().reshape(-1)
fc2_b = weights['fc2.bias'].cpu().numpy().reshape(-1)
fc2 = network.add_fully_connected(relu1.get_output(0), 10, fc2_w, fc2_b)
assert(fc2)

Now we need to mark the output layer.

fc2.get_output(0).set_name("prob")
network.mark_output(fc2.get_output(0))

Set the remaining parameters for the network (the maximum batch size and the maximum workspace size; 1 << 20 bytes is 1 MiB) and build the engine.

builder.set_max_batch_size(1)
builder.set_max_workspace_size(1 << 20)

engine = builder.build_cuda_engine(network)
network.destroy()
builder.destroy()
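
One check I would add here (not in the original tutorial): as far as I know, build_cuda_engine() signals failure by returning None rather than raising, so it is safer to verify the engine before going on.

assert engine is not None  # build failed if this fires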

Next, create the engine runtime and generate a test case from the torch data loader.

runtime = trt.infer.create_infer_runtime(G_LOGGER)
img, target = next(iter(test_loader))
img = img.numpy()[0]
target = target.numpy()[0]
%matplotlib inline
img.shape
imshow(img[0])
print("Test Case: " + str(target))
img = img.ravel()
Test Case: 9

Then, create an execution context for the engine.

context = engine.create_execution_context()

Next, allocate memory on the GPU and on the host to hold the input and the inference results. The size of each allocation is the size of the input/expected output multiplied by the batch size.

output = np.empty(10, dtype = np.float32)

# Allocate device memory
d_input = cuda.mem_alloc(1 * img.nbytes)
d_output = cuda.mem_alloc(1 * output.nbytes)
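
A caveat worth noting (my note, not from the tutorial): the *_async copies used below only overlap with computation when the host buffers are page-locked; with ordinary NumPy arrays they quietly fall back to synchronous copies. PyCUDA can allocate pinned host buffers like this, which could then be passed to the memcpy calls in place of img and output:

# Optional: pinned (page-locked) host buffers for truly async transfers.
h_input = cuda.pagelocked_empty(img.size, dtype=np.float32)
h_output = cuda.pagelocked_empty(10, dtype=np.float32)
np.copyto(h_input, img)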

The engine needs bindings, i.e. pointers to GPU memory. PyCUDA lets us provide those by casting the results of its memory allocations to ints.

bindings = [int(d_input), int(d_output)]

Create a CUDA stream to run inference in.

stream = cuda.Stream()

Next, transfer the data to the GPU, run inference, and then copy the results back to the host.

# Transfer input data to device
cuda.memcpy_htod_async(d_input, img, stream)
# Execute the model
context.enqueue(1, bindings, stream.handle, None)
# Transfer predictions back
cuda.memcpy_dtoh_async(output, d_output, stream)
# Synchronize threads
stream.synchronize()
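
Since low latency is the whole point of TensorRT, it is easy to time the enqueue with CUDA events. This is a rough measurement sketch I added; the numbers will vary by GPU:

# Rough latency measurement with CUDA events on the same stream.
start, end = cuda.Event(), cuda.Event()
start.record(stream)
context.enqueue(1, bindings, stream.handle, None)
end.record(stream)
end.synchronize()
print("TensorRT inference time: %.3f ms" % start.time_till(end))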

We can use np.argmax to read off the prediction.

print("Test Case: " + str(target))
print("Prediction: " + str(np.argmax(output)))
Test Case: 9
Prediction: 9
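
As a sanity check (my addition), the same image can be run through the original PyTorch model. The TensorRT network stops at fc2 (raw logits) while the PyTorch model appends log_softmax, but the argmax is unaffected since log_softmax is monotonic:

# Cross-check the TensorRT result against the PyTorch model.
with torch.no_grad():
    torch_out = model(torch.from_numpy(img.reshape(1, 1, 28, 28)).cuda())
print("PyTorch prediction: " + str(torch_out.argmax(dim=1).item()))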

The engine can also be saved for later use.

trt.utils.write_engine_to_file("./pyt_mnist.engine", engine.serialize())
True

The engine can be reloaded later with tensorrt.utils.load_engine.

new_engine = trt.utils.load_engine(G_LOGGER, "./pyt_mnist.engine")
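
The reloaded engine behaves just like the one built above; for example, it can replay the same inference using the buffers that are still allocated (a round-trip check I added):

# Confirm the engine round-trips through serialization.
new_context = new_engine.create_execution_context()
new_context.enqueue(1, bindings, stream.handle, None)
cuda.memcpy_dtoh_async(output, d_output, stream)
stream.synchronize()
print("Prediction (reloaded engine): " + str(np.argmax(output)))
new_context.destroy()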

Finally, destroy the context, both engines, and the runtime.

context.destroy()
engine.destroy()
new_engine.destroy()
runtime.destroy()