Building the ResNet Convolutional Neural Network
In 2015, Kaiming He's team at Microsoft Research Asia released a special kind of convolutional neural network, the residual neural network (ResNet). Before ResNet appeared, the deepest deep neural networks in common use had only around twenty to thirty layers, whereas this network can easily reach hundreds or even a thousand layers in experiments without requiring excessive training time; as a result, image recognition accuracy improved markedly. In the same year, the model won the ImageNet competition in all three tracks: image classification, localization, and detection. Such outstanding results in an international competition demonstrate that the residual network is a practical and excellent model. The cat-dog binary classification experiments in this work are therefore also built on the residual network.
In this work we apply the Kaggle cats-vs-dogs dataset to the ResNet-18 and ResNet-50 network models and use ResNet to examine the accuracy achievable with current convolutional neural networks. Figure 4-1 shows the classic ResNet network structure, ResNet-18.
(Figure 4-1: classic ResNet-18 network structure)
ResNet-18 is built entirely from BasicBlocks, while Figure 4-2 shows that ResNet models with 50 or more layers are built from Bottleneck blocks. We now feed the preprocessed dataset into the existing ResNet-18 and ResNet-50 models for training: using the image preprocessing described earlier, each training image is cropped to a 96×96 square and then passed to the model (a minimal loading sketch follows below). Here we describe the structure of the ResNet-18 network, since ResNet-50 shares the same overall stage layout as the ResNet-34 model discussed in Chapter 5.
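The sketch below illustrates this preprocessing and loading step. It assumes the training images are arranged in class-named folders and uses torchvision's ImageFolder together with standard transforms; the directory path, the flip augmentation, and the normalization statistics are illustrative assumptions rather than the exact settings of the experiments.

import torch
from torchvision import datasets, transforms

# Crop every training image to a 96x96 square before it enters the network.
# The path "data/train", the flip augmentation, and the normalization values
# are assumptions made for illustration only.
train_transform = transforms.Compose([
    transforms.Resize(112),             # shrink the shorter side first (assumed value)
    transforms.RandomCrop(96),          # the 96x96 square crop described above
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
])

train_set = datasets.ImageFolder("data/train", transform=train_transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)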
The ResNet-18 model structure is as follows (the feature-map shapes below assume the 224×224 input of the original paper). The first layer is a 7×7 convolution with 64 kernels and stride 2, which maps the [224,224,3] input to a [112,112,64] feature map. The second layer starts with a 3×3 max-pooling layer with stride 2, reducing the feature map to [56,56,64], followed by 2 residual blocks whose input and output shapes both remain [56,56,64]. The third layer has 2 residual blocks; the input feature map is [56,56,64] and the required output shape is [28,28,128]. Since the main branch and the shortcut must produce output feature maps of the same shape, the height and width are halved from 56 to 28 by a stride of 2 in the first block's main branch, and 128 kernels change the depth of the feature map; accordingly, the shortcut adds a 1×1 convolution, also with stride 2, so that the input's height and width are likewise halved while its 128 kernels bring the input's depth to 128 as well. The fourth layer has 2 residual blocks, and the same process turns the [28,28,128] input into a [14,14,256] output, with 256 kernels on both the main branch and the 1×1 shortcut. The fifth layer has 2 residual blocks, and the same process yields an output feature map of [7,7,512]. Average pooling then reduces this to [1,1,512], and a fully connected layer produces the final classification.
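These shapes are easy to verify in code. The short sketch below pushes a dummy 224×224 input through torchvision's reference resnet18 (used here only for illustration; the model defined later in this chapter uses a 3×3 stem because of the smaller 96×96 input) and prints the intermediate feature-map sizes, which match the description above.

import torch
from torchvision.models import resnet18

model = resnet18(num_classes=2).eval()           # reference implementation with 2 output classes
x = torch.randn(1, 3, 224, 224)                  # standard input size assumed by the shapes above
x = model.conv1(x);   print(x.shape)             # [1, 64, 112, 112]  layer 1: 7x7 conv, stride 2
x = model.maxpool(model.relu(model.bn1(x)))
print(x.shape)                                   # [1, 64, 56, 56]    3x3 max pooling, stride 2
x = model.layer1(x);  print(x.shape)             # [1, 64, 56, 56]    layer 2: 2 residual blocks
x = model.layer2(x);  print(x.shape)             # [1, 128, 28, 28]   layer 3: 2 residual blocks
x = model.layer3(x);  print(x.shape)             # [1, 256, 14, 14]   layer 4: 2 residual blocks
x = model.layer4(x);  print(x.shape)             # [1, 512, 7, 7]     layer 5: 2 residual blocks
x = model.avgpool(x); print(x.shape)             # [1, 512, 1, 1]     average pooling before the fc layer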
import torch.nn as nn

class BasicBlock(nn.Module):
    """Basic block for ResNet-18 and ResNet-34
    """
    # BasicBlock and BottleNeck blocks
    # have different output sizes
    expansion = 1

    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        # residual function
        self.residual_function = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels * BasicBlock.expansion, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_channels * BasicBlock.expansion)
        )
        # shortcut
        self.shortcut = nn.Sequential()
        # if the shortcut's output dimension is not the same as the residual function's,
        # use a 1x1 convolution to match the dimensions
        if stride != 1 or in_channels != BasicBlock.expansion * out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels * BasicBlock.expansion, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(out_channels * BasicBlock.expansion)
            )

    def forward(self, x):
        return nn.ReLU(inplace=True)(self.residual_function(x) + self.shortcut(x))
class BottleNeck(nn.Module):
    """Residual block for ResNet with over 50 layers
    """
    expansion = 4

    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        # residual function: 1x1 -> 3x3 -> 1x1 convolutions
        self.residual_function = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, stride=stride, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels * BottleNeck.expansion, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_channels * BottleNeck.expansion),
        )
        # shortcut
        self.shortcut = nn.Sequential()
        if stride != 1 or in_channels != out_channels * BottleNeck.expansion:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels * BottleNeck.expansion, stride=stride, kernel_size=1, bias=False),
                nn.BatchNorm2d(out_channels * BottleNeck.expansion)
            )

    def forward(self, x):
        return nn.ReLU(inplace=True)(self.residual_function(x) + self.shortcut(x))  # activation
class ResNet(nn.Module):

    def __init__(self, block, num_block, num_class=2):
        super().__init__()
        self.in_channels = 64
        self.conv1 = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1, bias=False),  # first convolution: 3 input channels, 64 output channels, 3x3 kernel, padding 1
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True))
        # we use a different input size than the original paper,
        # so conv2_x's stride is 1
        # build the residual stages below; see the ResNet parameter table for the settings
        self.conv2_x = self._make_layer(block, 64, num_block[0], 1)
        self.conv3_x = self._make_layer(block, 128, num_block[1], 2)
        self.conv4_x = self._make_layer(block, 256, num_block[2], 2)
        self.conv5_x = self._make_layer(block, 512, num_block[3], 2)
        self.avg_pool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Linear(512 * block.expansion, num_class)  # fully connected layer
    def _make_layer(self, block, out_channels, num_blocks, stride):
        """make resnet layers ("layer" here does not mean a single
        neural-network layer such as a conv layer; one layer may
        contain more than one residual block)

        Args:
            block: block type, basic block or bottleneck block
            out_channels: output depth (channel number) of this layer
            num_blocks: how many blocks per layer
            stride: the stride of the first block of this layer

        Return:
            a resnet layer
        """
        # we have num_blocks blocks per layer; the first block's stride
        # could be 1 or 2, the other blocks always use stride 1
        strides = [stride] + [1] * (num_blocks - 1)  # only the first block may reduce the feature-map size
        layers = []
        # the first residual block handles downsampling and the channel change;
        # the remaining blocks keep the feature-map size unchanged
        for stride in strides:
            layers.append(block(self.in_channels, out_channels, stride))
            self.in_channels = out_channels * block.expansion
        return nn.Sequential(*layers)
    def forward(self, x):  # forward pass: computes the network output for a given input
        output = self.conv1(x)        # first convolution stage
        output = self.conv2_x(output)
        output = self.conv3_x(output)
        output = self.conv4_x(output)
        output = self.conv5_x(output)
        output = self.avg_pool(output)
        output = output.view(output.size(0), -1)  # flatten to [batch_size, channels]
        output = self.fc(output)
        return output
def resnet18():
    """ return a ResNet-18 object
    """
    return ResNet(BasicBlock, [2, 2, 2, 2])
def resnet34():