ResNet Paper Translation: English-Chinese Parallel Text with Annotated Summary
Deep Residual Learning for Image Recognition
Abstract
Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers, 8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.
The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
Abstract
Deeper neural networks are more difficult to train. We propose a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easy to optimize and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual networks with a depth of up to 152 layers, 8× deeper than VGG [40] yet still lower in complexity. An ensemble of these residual networks achieves a 3.57% error rate on the ImageNet test set. This result won 1st place in the ILSVRC 2015 classification task. We also analyze 100-layer and 1000-layer residual networks on CIFAR-10.
The depth of representations is of central importance for many visual recognition tasks. Solely owing to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual networks are the foundation of our submissions to the ILSVRC and COCO 2015 competitions, where we also won 1st place in the ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation tasks.
Note 1: Deeper neural networks are harder to train. A residual learning framework is proposed to ease network training; these residual networks are easy to optimize and gain accuracy from considerably increased depth, being 8× deeper than VGG while still having lower complexity.
1. Introduction
Deep convolutional neural networks [22, 21] have led to a series of breakthroughs for image classification [21, 49, 39]. Deep networks naturally integrate low/mid/high-level features [49] and classifiers in an end-to-end multi-layer fashion, and the “levels” of features can be enriched by the number of stacked layers (depth). Recent evidence [40, 43] reveals that network depth is of crucial importance, and the leading results [40, 43, 12, 16] on the challenging ImageNet dataset [35] all exploit “very deep” [40] models, with a depth of sixteen [40] to thirty [16]. Many other non-trivial visual recognition tasks [7, 11, 6, 32, 27] have also greatly benefited from very deep models.
Driven by the significance of depth, a question arises: Is learning better networks as easy as stacking more layers? An obstacle to answering this question was the notorious problem of vanishing/exploding gradients [14, 1, 8], which hamper convergence from the beginning. This problem, however, has been largely addressed by normalized initialization [23, 8, 36, 12] and intermediate normalization layers [16], which enable networks with tens of layers to start converging for stochastic gradient descent (SGD) with backpropagation [22].
1. Introduction
Deep convolutional neural networks [22, 21] have led to a series of breakthroughs in image classification [21, 49, 39]. Deep networks naturally integrate low/mid/high-level features [49] and classifiers in an end-to-end, multi-layer fashion, and the “level” of the features can be enriched by the number of stacked layers (depth). Recent evidence [40, 43] shows that network depth is of crucial importance, and the leading results on the challenging ImageNet dataset all adopt “very deep” [40] models, with depths ranging from 16 [40] to 30 [16]. Many other important visual recognition tasks [7, 11, 6, 32, 27] have also benefited greatly from very deep models.
Driven by the importance of depth, a question arises: is learning better networks as easy as stacking more layers? One obstacle to answering this question is the notorious problem of vanishing/exploding gradients [14, 1, 8], which hampers convergence from the beginning. This problem, however, has largely been addressed by normalized initialization [23, 8, 36, 12] and intermediate normalization layers [16], which enable networks with tens of layers to start converging under stochastic gradient descent (SGD) with backpropagation.
Note 2: Recent work shows that network depth is of crucial importance. Driven by the importance of depth, a question arises: is learning better networks as easy as stacking more layers? One obstacle to answering this question is the notorious problem of vanishing/exploding gradients [14, 1, 8], which hampers convergence from the beginning. This problem, however, has largely been addressed by normalized initialization [23, 8, 36, 12] and intermediate normalization layers [16], which enable networks with tens of layers to start converging under SGD with backpropagation. (Other posts have noted that shortcut connections also pass gradients directly back to earlier layers during backpropagation, which reduces gradient attenuation.)
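The normalized initialization mentioned above keeps the scale of activations (and of backpropagated gradients) roughly constant across layers. Below is a minimal NumPy sketch assuming fully connected ReLU layers and a He-style variance-2/fan_in scheme; this is a representative choice for illustration, not necessarily the exact scheme of the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

def he_init(fan_in, fan_out):
    # Variance 2/fan_in keeps the scale of ReLU activations (and of the
    # backpropagated gradients) roughly constant from layer to layer.
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

# A 20-layer stack of 256-unit ReLU layers: with this initialization the
# activations neither vanish nor explode as depth grows.
x = rng.normal(size=(32, 256))
for _ in range(20):
    x = np.maximum(x @ he_init(256, 256), 0.0)
print(float(x.std()))  # stays on the order of 1 rather than collapsing or blowing up
```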
When deeper networks are able to start converging, a degradation problem has been exposed: with the network depth increasing, accuracy gets saturated (which might be unsurprising) and then degrades rapidly. Unexpectedly, such degradation is not caused by overfitting, and adding more layers to a suitably deep model leads to higher training error, as reported in [10, 41] and thoroughly verified by our experiments. Fig. 1 shows a typical example.
Figure 1. Training error (left) and test error (right) on CIFAR-10 with 20-layer and 56-layer “plain” networks. The deeper network has higher training error, and thus test error. Similar phenomena on ImageNet is presented in Fig. 4.
The degradation (of training accuracy) indicates that not all systems are similarly easy to optimize. Let us consider a shallower architecture and its deeper counterpart that adds more layers onto it. There exists a solution by construction to the deeper model: the added layers are identity mapping, and the other layers are copied from the learned shallower model. The existence of this constructed solution indicates that a deeper model should produce no higher training error than its shallower counterpart. But experiments show that our current solvers on hand are unable to find solutions that are comparably good or better than the constructed solution (or unable to do so in feasible time).
When deeper networks are able to start converging, a degradation problem is exposed: as the network depth increases, accuracy becomes saturated (which is perhaps unsurprising) and then degrades rapidly. Unexpectedly, such degradation is not caused by overfitting, and adding more layers to a suitably deep model leads to higher training error, as reported in [10, 41] and thoroughly verified by our experiments. Figure 1 shows a typical example.
Figure 1. Training error (left) and test error (right) on CIFAR-10 with 20-layer and 56-layer “plain” networks. The deeper network has higher training error, and thus higher test error. Similar phenomena on ImageNet are shown in Fig. 4.
The degradation (of training accuracy) indicates that not all systems are equally easy to optimize. Consider a shallower architecture and its deeper counterpart that adds more layers onto it. A solution by construction exists for the deeper model: the added layers are identity mappings, and the other layers are copied from the learned shallower model. The existence of this constructed solution indicates that a deeper model should produce no higher training error than its shallower counterpart. But experiments show that our currently available solvers are unable to find solutions that are comparably good or better than this constructed solution (or unable to do so in feasible time).
Note 3: When deeper networks are able to start converging, a degradation problem is exposed: as the network gets deeper, accuracy saturates (perhaps unsurprisingly) and then degrades rapidly. Unexpectedly, this degradation is not caused by overfitting, and adding more layers to a suitably deep model leads to higher training error. The degradation (of training accuracy) indicates that not all systems are equally easy to optimize. A deeper model can be constructed by hand: the added layers are identity mappings, and the other layers are copied from the learned shallower model.
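To make the constructed solution concrete, the NumPy sketch below (the layer widths, the ReLU nonlinearity, and the random stand-in for “learned” weights are illustrative assumptions) appends identity layers to a shallow network and checks that the output, and hence the training error, cannot get worse:

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

def forward(x, weights):
    # A plain stack of fully connected ReLU layers.
    for w in weights:
        x = relu(x @ w)
    return x

# A "learned" shallow model (random weights stand in for trained ones).
shallow = [rng.normal(scale=0.1, size=(64, 64)) for _ in range(5)]

# Deeper counterpart built by construction: copy the shallow layers and
# append identity layers. ReLU outputs are non-negative, so
# relu(x @ I) == x and the extra layers change nothing.
deeper = shallow + [np.eye(64) for _ in range(5)]

x = rng.normal(size=(16, 64))
print(np.allclose(forward(x, shallow), forward(x, deeper)))  # True
```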
In this paper, we address the degradation problem by introducing a deep residual learning framework. Instead of hoping each few stacked layers directly fit a desired underlying mapping, we explicitly let these layers fit a residual mapping. Formally, denoting the desired underlying mapping as H(x), we let the stacked nonlinear layers fit another mapping of F(x) := H(x) − x. The original mapping is recast into F(x) + x. We hypothesize that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.
In this paper, we address the degradation problem by introducing a deep residual learning framework. Instead of hoping that every few stacked layers directly fit a desired underlying mapping, we explicitly let these layers fit a residual mapping. Formally, denoting the desired underlying mapping as H(x), we let the stacked nonlinear layers fit another mapping F(x) := H(x) − x. The original mapping is then recast as F(x) + x. We hypothesize that the residual mapping is easier to optimize than the original, unreferenced mapping. In the extreme case, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping with a stack of nonlinear layers.
Note 4: In this paper, the degradation problem is addressed by introducing a deep residual learning framework.
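A minimal NumPy sketch of this reparameterization (two fully connected layers stand in for the paper's convolutional layers; the sizes are arbitrary): the block computes F(x) + x, so driving the residual weights to zero recovers the identity mapping exactly.

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

def residual_block(x, w1, w2):
    # The stacked nonlinear layers fit F(x); the block outputs F(x) + x.
    return relu(x @ w1) @ w2 + x

x = np.random.default_rng(0).normal(size=(8, 64))

# Extreme case from the text: if the identity mapping is optimal, the
# solver only has to push the residual weights toward zero.
w1 = np.zeros((64, 64))
w2 = np.zeros((64, 64))
print(np.allclose(residual_block(x, w1, w2), x))  # True: zero residual == identity
```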
The formulation of F(x) + x can be realized by feedforward neural networks with “shortcut connections” (Fig. 2). Shortcut connections [2, 33, 48] are those skipping one or more layers. In our case, the shortcut connections simply perform identity mapping, and their outputs are added to the outputs of the stacked layers (Fig. 2). Identity shortcut connections add neither extra parameters nor computational complexity. The entire network can still be trained end-to-end by SGD with backpropagation, and can be easily implemented using common libraries (e.g., Caffe [19]) without modifying the solvers.
Figure 2. Residual learning: a building block.
The formulation F(x) + x can be realized by feedforward neural networks with “shortcut connections” (Fig. 2). Shortcut connections [2, 33, 48] are connections that skip one or more layers. In our case, the shortcut connections simply perform identity mapping, and their outputs are added to the outputs of the stacked layers (Fig. 2). Identity shortcut connections add neither extra parameters nor computational complexity. The whole network can still be trained end to end by SGD with backpropagation, and can easily be implemented with common libraries (e.g., Caffe [19]) without modifying the solvers.
Figure 2. Residual learning: a building block.
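A minimal sketch of such a building block, written here in PyTorch rather than the Caffe setup mentioned in the paper (the channel count, 3×3 kernels, and batch normalization placement are assumptions for illustration):

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Two stacked 3x3 conv layers computing F(x), plus an identity shortcut."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))  # F(x)
        return self.relu(residual + x)  # F(x) + x, followed by the nonlinearity

# The identity shortcut adds no parameters; the block trains end to end with plain SGD.
block = BasicBlock(64)
out = block(torch.randn(1, 64, 32, 32))
print(out.shape)  # torch.Size([1, 64, 32, 32])
```

Stacking blocks of this form gives the deep residual networks that are compared against plain networks in the experiments.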
We present comprehensive experiments on ImageNet [35] to show the degradation problem and evaluate our method. We show that: 1) Our extremely deep residual nets are easy to optimize, but the counterpart “plain” nets (that simply stack layers) exhibit higher training error when the depth increases; 2) Our deep residual nets can easily enjoy accuracy gains from greatly increased depth, producing results substantially better than previous networks.
Similar phenomena are also shown on the CIFAR-10 set [20], suggesting that the optimization difficulties and the effects of our method are not just akin to a particular dataset. We present successfully trained models on this dataset with over 100 layers, and explore models with over 1000 layers.
We conduct comprehensive experiments on ImageNet [35] to demonstrate the degradation problem and to evaluate our method.
We show that:
1) Our extremely deep residual nets are easy to optimize, but the counterpart “plain” nets (which simply stack layers) exhibit higher training error as the depth increases;
2) Our deep residual nets can easily enjoy accuracy gains from greatly increased depth, producing results substantially better than previous networks.
Similar phenomena also appear on the CIFAR-10 dataset [20], showing that the optimization difficulty and the effect of our method are not specific to a particular dataset. We present models successfully trained on this dataset with more than 100 layers, and explore models with more than 1000 layers.
Note 5: Comprehensive experiments on ImageNet show that extremely deep residual nets are easy to optimize, while the counterpart “plain” nets (simply stacked layers) exhibit higher training error as depth increases; CIFAR-10 [20] shows the same phenomenon, indicating that this is not a coincidence.
On the ImageNet classification dataset [35], we obtain excellent results by extremely deep residual nets. Our 152-layer residual net is the deepest network ever presented on ImageNet, while still having lower complexity than VGG nets [40]. Our ensemble has 3.57% top-5 error on the ImageNet test set, and won the 1st place in the ILSVRC 2015 classification competition. The extremely deep representations also have excellent generalization performance on other recognition tasks, and lead us to further win the 1st places on: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation in ILSVRC & COCO 2015 competitions. This strong evidence shows that the residual learning principle is generic, and we expect that it is applicable in other vision and non-vision problems.
On the ImageNet classification dataset [35], we obtain excellent results with extremely deep residual networks. Our 152-layer residual network is the deepest network ever presented on ImageNet, while still having lower complexity than the VGG networks [40]. Our ensemble achieves a 3.57% top-5 error rate on the ImageNet test set and won 1st place in the ILSVRC 2015 classification competition. The extremely deep representations also generalize very well to other recognition tasks, and led us to further win 1st place in the ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation tasks of the ILSVRC & COCO 2015 competitions. This solid evidence shows that the residual learning principle is generic, and we expect it to be applicable to other vision and non-vision problems.
Note 6: Results across datasets and a range of vision tasks show that residual learning is generic.
2. Related Work
Residual Representations. In image recognition, VLAD [18] is a representation that encodes by the residual vectors with respect to a dictionary, and Fisher Vector [30] can be formulated as a probabilistic version [18] of VLAD. Both of them are powerful shallow representations for image retrieval and classification [4, 47]. For vector quantization, encoding residual vectors [17] is shown to be more effective than encoding original vectors.
In low-level vision and computer graphics, for solving Partial Differential Equations (PDEs), the widely used Multigrid method [3] reformulates the system as subproblems at multiple scales, where each subproblem is responsible for the residual solution between a coarser and a finer scale. An alternative to Multigrid is hierarchical basis preconditioning [44, 45], which relies on variables that represent residual vectors between two scales. It has been shown [3, 44, 45] that these solvers converge much faster than standard solvers that are unaware of the residual nature of the solutions. These methods suggest that a good reformulation or preconditioning can simplify the optimization.
2. Related Work
Residual Representations. In image recognition, VLAD [18] is a representation that encodes by residual vectors with respect to a dictionary, and the Fisher Vector [30] can be formulated as a probabilistic version [18] of VLAD. Both are powerful shallow representations for image retrieval and classification [4, 47]. For vector quantization, encoding residual vectors [17] has been shown to be more effective than encoding the original vectors.
In low-level vision and computer graphics, for solving partial differential equations (PDEs), the widely used Multigrid method [3] reformulates the system as subproblems at multiple scales, where each subproblem is responsible for the residual solution between a coarser and a finer scale. An alternative to Multigrid is hierarchical basis preconditioning [44, 45], which relies on variables that represent residual vectors between two scales. It has been shown [3, 44, 45] that these solvers converge much faster than standard solvers that are unaware of the residual nature of the solutions. These methods suggest that a good reformulation or preconditioning can simplify optimization.
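The point about encoding residual vectors can be illustrated with a toy two-stage quantizer (the dictionary sizes, scales, and data below are made up for illustration and are not the schemes of [17, 18]): quantizing the residual against a coarse codeword never leaves a larger error than coding the original vector in one shot.

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest(codebook, v):
    # Index of the codeword closest to v in Euclidean distance.
    return int(np.argmin(np.linalg.norm(codebook - v, axis=1)))

dim, k = 16, 32
coarse = rng.normal(size=(k, dim))                   # first-stage dictionary
fine = np.vstack([np.zeros(dim),                     # include a zero codeword so
                  0.1 * rng.normal(size=(k, dim))])  # refinement can never hurt

x = rng.normal(size=dim)

# One-stage coding: approximate x directly with a single coarse codeword.
q1 = coarse[nearest(coarse, x)]

# Residual coding: additionally quantize the residual x - q1 with the fine dictionary.
q2 = fine[nearest(fine, x - q1)]

print(np.linalg.norm(x - q1) >= np.linalg.norm(x - (q1 + q2)))  # True
```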
Shortcut Connections. Practices and theories that lead to shortcut connections [2, 33, 48] have been studied for a long time. An early practice of training multi-layer perceptrons (MLPs) is to add a linear layer connected from the network input to the output [33, 48]. In [43, 24], a few intermediate layers are directly connected to auxiliary classifiers for addressing vanishing/exploding gradients. The papers of [38, 37, 31, 46] propose methods for centering layer responses, gradients, and propagated errors, implemented by shortcut connections. In [43], an “inception” layer is composed of a shortcut branch and a few deeper branches.
Concurrent with our work, “highway networks” [41, 42] present shortcut connections with gating functions [15]. These gates are data-dependent and have parameters, in contrast to our identity shortcuts that are parameter-free. When a gated shortcut is “closed” (approaching zero), the layers in highway networks represent non-residual functions. On the contrary, our formulation always learns residual functions; our identity shortcuts are never closed, and all information is always passed through, with additional residual functions to be learned. In addition, highway networks have not demonstrated accuracy gains with extremely increased depth (e.g., over 100 layers).
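For contrast, here is a sketch of the two shortcut styles in PyTorch. The highway form follows the commonly cited formulation y = T(x)·H(x) + (1 − T(x))·x; the layer sizes and the choice of fully connected layers are illustrative assumptions.

```python
import torch
import torch.nn as nn

class HighwayLayer(nn.Module):
    """Gated shortcut: y = T(x) * H(x) + (1 - T(x)) * x, with a learned gate T."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.transform = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, dim)  # data-dependent and parameterized

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        t = torch.sigmoid(self.gate(x))  # the gate can approach 0 ("closed")
        return t * torch.relu(self.transform(x)) + (1.0 - t) * x

class ResidualLayer(nn.Module):
    """Identity shortcut: y = F(x) + x, parameter-free and never closed."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.transform = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.transform(x)) + x

x = torch.randn(4, 64)
print(HighwayLayer()(x).shape, ResidualLayer()(x).shape)
```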