One of the most popular ways to learn the basics of deep learning is with the MNIST dataset. It is the "Hello World" of deep learning. The dataset contains handwritten digits from 0 to 9, with a total of 60,000 training samples and 10,000 test samples, each a labeled 28x28-pixel grayscale image.
Step 1) Preprocess the Data
Before you start the training process, you need to understand the data. In the first step, you will load the dataset using the torchvision module. Torchvision will load the dataset and apply the transformations the network requires, such as reshaping and normalizing the images.
```python
import torch
import torchvision
import numpy as np
from torchvision import datasets, models, transforms

# Transform the images to tensors and normalize them.
# MNIST images have a single channel, so Normalize takes one mean and one std.
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5,), (0.5,))])

training = torchvision.datasets.MNIST(root='./data', train=True,
                                      download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(training, batch_size=4,
                                           shuffle=True, num_workers=2)

testing = torchvision.datasets.MNIST(root='./data', train=False,
                                     download=True, transform=transform)
test_loader = torch.utils.data.DataLoader(testing, batch_size=4,
                                          shuffle=False, num_workers=2)

classes = ('0', '1', '2', '3', '4', '5', '6', '7', '8', '9')

import matplotlib.pyplot as plt

# Create an iterator for train_loader and get a random batch of training images
data_iterator = iter(train_loader)
images, labels = next(data_iterator)

# Plot 4 images to visualize the data
rows = 2
columns = 2
fig = plt.figure()
for i in range(4):
    fig.add_subplot(rows, columns, i + 1)
    plt.title(classes[labels[i]])
    img = images[i] / 2 + 0.5  # unnormalize the image
    img = torchvision.transforms.ToPILImage()(img)
    plt.imshow(img)
plt.show()
```
The transform pipeline converts the images into tensors and normalizes their values. The function torchvision.datasets.MNIST downloads the dataset into the given directory (if it is not already available), selects the training or test split, and applies the transformation.
To visualize the dataset, you use the data_iterator to get the next batch of images and labels. You use matplotlib to plot these images with their corresponding labels, as shown below.
Step 2) Network Model Configuration
Now you will make a simple neural network for image classification. Here, we introduce another way to create a network model in PyTorch: we will use nn.Sequential to build a sequential model instead of subclassing nn.Module.
```python
import torch.nn as nn

# Flatten the tensor into shape (batch_size, features)
class Flatten(nn.Module):
    def forward(self, input):
        return input.view(input.size(0), -1)

# Sequential-based model
seq_model = nn.Sequential(
    nn.Conv2d(1, 10, kernel_size=5),
    nn.MaxPool2d(2),
    nn.ReLU(),
    nn.Dropout2d(),
    nn.Conv2d(10, 20, kernel_size=5),
    nn.MaxPool2d(2),
    nn.ReLU(),
    Flatten(),
    nn.Linear(320, 50),
    nn.ReLU(),
    nn.Linear(50, 10),
    nn.Softmax(dim=1),
)
net = seq_model
print(net)
```
The sequence is as follows: the first layer is a Conv2d layer with 1 input channel, 10 output channels, and a kernel size of 5.
Next, you have a MaxPool2d layer.
A ReLU activation function.
A Dropout2d layer, which randomly zeroes entire feature maps during training to help prevent overfitting.
Then a second Conv2d layer with 10 input channels (matching the previous layer's output), 20 output channels, and a kernel size of 5.
Next, a MaxPool2d layer.
A ReLU activation function.
After that, you flatten the tensor before feeding it into the Linear layers.
The first Linear layer maps the 320 flattened features to 50, and the second Linear layer maps those 50 features to the 10 classes, followed by a softmax activation function.
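If you are wondering where the input size of 320 for the first Linear layer comes from, you can trace the shapes by hand. The following sketch passes a dummy 28x28 grayscale image through the same convolution and pooling stages (the layer objects here are fresh instances created just for the shape check):

```python
import torch
import torch.nn as nn

# Trace the tensor shape through the convolution/pooling stages
x = torch.randn(1, 1, 28, 28)            # one MNIST-sized grayscale image
x = nn.Conv2d(1, 10, kernel_size=5)(x)   # -> (1, 10, 24, 24): 28 - 5 + 1 = 24
x = nn.MaxPool2d(2)(x)                   # -> (1, 10, 12, 12)
x = nn.Conv2d(10, 20, kernel_size=5)(x)  # -> (1, 20, 8, 8): 12 - 5 + 1 = 8
x = nn.MaxPool2d(2)(x)                   # -> (1, 20, 4, 4)
print(x.shape)                           # 20 * 4 * 4 = 320 features per sample
```

This is why Flatten produces 320 values per sample for the first Linear layer.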
Step 3) Train the Model
Before you start the training process, you need to set up the criterion and the optimizer. For the criterion, you will use CrossEntropyLoss. For the optimizer, you will use SGD with a learning rate of 0.001 and a momentum of 0.9.
```python
import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
```
The forward process takes the input and passes it to the first Conv2d layer. From there, it is fed into the MaxPool2d layer and then the ReLU activation function. The same process occurs in the second Conv2d layer. After that, the tensor is reshaped into (-1, 320) and fed into the fully connected layers to predict the output.
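The (-1, 320) reshape can be seen in isolation. A minimal sketch, using a random tensor with the same shape as the second pooling stage's output:

```python
import torch

# The feature maps leaving the second pooling stage have shape
# (batch, 20, 4, 4); view(-1, 320) flattens each sample into a
# 320-element vector, with -1 letting PyTorch infer the batch size.
features = torch.randn(4, 20, 4, 4)  # hypothetical batch of 4 samples
flat = features.view(-1, 320)
print(flat.shape)                    # torch.Size([4, 320])
```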
Now, you will start the training process. You will iterate through the dataset twice (2 epochs) and print the average loss every 1,000 batches.
```python
for epoch in range(2):
    # Set the running loss at each epoch to zero
    running_loss = 0.0
    # Enumerate the train loader, starting at index 0; each iteration
    # yields the index (i) and the data (a tuple of inputs and labels)
    for i, data in enumerate(train_loader, 0):
        inputs, labels = data
        # Clear the gradient
        optimizer.zero_grad()
        # Feed the input and acquire the output from the network
        outputs = net(inputs)
        # Calculate the loss between the predicted and the expected output
        loss = criterion(outputs, labels)
        # Compute the gradient
        loss.backward()
        # Update the parameters
        optimizer.step()
        # Print statistics: average loss over the last 1,000 batches
        running_loss += loss.item()
        if i % 1000 == 999:
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 1000))
            running_loss = 0.0
```
At each epoch, the enumerator gets the next tuple of inputs and corresponding labels. Before we feed the input to the network model, we need to clear the previous gradient. This is required because after the backward pass (backpropagation), gradients are accumulated instead of replaced. Then we calculate the loss between the predicted output and the expected output. After that, we run backpropagation to compute the gradients and finally update the parameters.
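The accumulation behavior is easy to see on a single parameter. A minimal sketch (the variable w here is just an illustration, not part of the model above):

```python
import torch

# backward() adds new gradients onto .grad instead of overwriting it,
# which is why the training loop calls optimizer.zero_grad() each step.
w = torch.ones(1, requires_grad=True)

(w * 3).sum().backward()
g_first = w.grad.item()    # d(3w)/dw = 3.0

(w * 3).sum().backward()   # without zeroing, the gradient accumulates
g_second = w.grad.item()   # 3.0 + 3.0 = 6.0

w.grad.zero_()             # what optimizer.zero_grad() does per parameter
print(g_first, g_second, w.grad.item())
```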
After you train the model, you need to test or evaluate it with other sets of images. We will use an iterator for the test_loader, which generates a batch of images and labels that will be passed to the trained model. The predicted output will be displayed and compared with the expected output.
```python
# Make an iterator from test_loader and get a batch of test images
test_iterator = iter(test_loader)
images, labels = next(test_iterator)

results = net(images)
_, predicted = torch.max(results, 1)

print('Predicted: ', ' '.join('%5s' % classes[predicted[j]] for j in range(4)))

fig2 = plt.figure()
for i in range(4):
    fig2.add_subplot(rows, columns, i + 1)
    plt.title('truth ' + classes[labels[i]] + ': predict ' + classes[predicted[i]])
    img = images[i] / 2 + 0.5  # unnormalize the image
    img = torchvision.transforms.ToPILImage()(img)
    plt.imshow(img)
plt.show()
```
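A common follow-up is measuring accuracy over the whole test set rather than eyeballing four images. The counting pattern is sketched below on hand-made logits so it runs on its own; in the tutorial's context you would pass net(images) and the labels from each test_loader batch instead (the helper name count_correct is our own, not a PyTorch API):

```python
import torch

def count_correct(outputs, labels):
    # torch.max over dim 1 gives the predicted class for each sample
    _, predicted = torch.max(outputs, 1)
    return (predicted == labels).sum().item()

# Synthetic batch: 4 samples, 10 classes; the 1.0 marks each predicted class
outputs = torch.zeros(4, 10)
outputs[0, 3] = 1.0
outputs[1, 1] = 1.0
outputs[2, 7] = 1.0
outputs[3, 2] = 1.0
labels = torch.tensor([3, 1, 7, 7])  # the last prediction is wrong

correct = count_correct(outputs, labels)
print('accuracy: %.2f' % (correct / len(labels)))  # 3 of 4 correct -> 0.75
```

Over the full test set this would accumulate per batch, e.g. summing count_correct(net(images), labels) across test_loader and dividing by the total number of samples.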