Residual Learning + Attention Mechanism + Soft Thresholding: the Deep Residual Shrinkage Network (with Code)
As its name suggests, the deep residual shrinkage network is an improved network built on residual learning (ResNet), composed of two parts: "residual learning" and "shrinkage". ResNet won the ImageNet image-recognition competition in 2015 and has since become one of the fundamental networks in deep learning; "shrinkage" refers to "soft thresholding", a key step in many signal-denoising algorithms. In the deep residual shrinkage network, the thresholds required for soft thresholding are, in essence, set with the help of an attention mechanism.
In this article, we first briefly review the fundamentals of residual networks, soft thresholding, and the attention mechanism, and then explain the motivation, algorithm, and applications of the deep residual shrinkage network.
1. Fundamentals
1.1 Residual Networks
In essence, a residual network (also called a deep residual network or deep residual learning) is a convolutional neural network. Compared with an ordinary convolutional neural network, a residual network uses cross-layer identity shortcut connections to reduce the difficulty of training. A basic building block of a residual network is shown in the figure below.
Figure 1 A basic building block of a residual network
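To make the shortcut concrete, here is a minimal sketch of such a block in the Keras 2.x functional API (the same API used by the program in Section 3). The pre-activation ordering and the name basic_residual_block are illustrative choices for this sketch, not taken from the original paper:

from keras.layers import Conv2D, BatchNormalization, Activation, add

def basic_residual_block(x, channels):
    # Residual path: two 3x3 convolutions with pre-activation (BN + ReLU)
    shortcut = x
    y = BatchNormalization()(x)
    y = Activation('relu')(y)
    y = Conv2D(channels, 3, padding='same')(y)
    y = BatchNormalization()(y)
    y = Activation('relu')(y)
    y = Conv2D(channels, 3, padding='same')(y)
    # Cross-layer identity connection: add the input back onto the residual path
    return add([y, shortcut])

Because the shortcut passes the input through unchanged, each block only needs to learn a correction to its input (the residual), which is what makes very deep networks easier to train.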
1.2 Soft Thresholding
Soft thresholding is the core step of many signal-denoising methods. It sets features whose absolute values are below a certain threshold to zero and shrinks the remaining features toward zero as well; this is the "shrinkage". Here, the threshold is a parameter that must be set in advance, and its value has a direct effect on the denoising result. The relationship between the input and output of soft thresholding is shown in the figure below.
Figure 2 Soft thresholding
As Figure 2 shows, soft thresholding is a nonlinear transformation with a property very similar to that of the ReLU activation function: its gradient is either 0 or 1. Soft thresholding can therefore also serve as an activation function in a neural network, and some neural networks have in fact already used it as one.
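As a minimal numerical illustration (not part of the original article; the helper name soft_threshold is made up for this sketch), soft thresholding with threshold tau can be written in NumPy as:

import numpy as np

def soft_threshold(x, tau):
    # Set values with |x| <= tau to zero and shrink the rest toward zero by tau
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

For example, soft_threshold(np.array([-2.0, -0.3, 0.5, 1.5]), 1.0) returns [-1.0, 0.0, 0.0, 0.5]. The same sign/maximum decomposition is what the Keras code in Section 3 implements with Lambda layers.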
1.3 Attention Mechanism
An attention mechanism focuses attention on locally important information. It can be divided into two steps: first, scan the global information to find the locally useful information; second, enhance the useful information and suppress the redundant information.
The Squeeze-and-Excitation Network is a classic deep learning method built on the attention mechanism. Through a small sub-network, it automatically learns a set of weights that re-weight the individual channels of a feature map. The idea is that some feature channels are important while others are redundant, so we can strengthen the useful channels and weaken the redundant ones. A basic module of the Squeeze-and-Excitation Network is shown in the figure below.
Figure 3 A basic module of the Squeeze-and-Excitation Network
It is worth noting that, in this way, every sample gets its own set of weights, so the channel weighting adapts to the characteristics of each individual sample. For example, the first feature channel of sample A may be important and its second channel unimportant, while for sample B the first channel is unimportant and the second channel is important. With this mechanism, sample A obtains a set of weights that strengthen its first channel and weaken its second, and sample B obtains a set of weights that weaken its first channel and strengthen its second.
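As a hedged sketch of this idea (using the same Keras 2.x API as Section 3; the layer widths and the reduction ratio are illustrative assumptions rather than values from the SENet paper), a Squeeze-and-Excitation block over a feature map with a given number of channels can be written as:

from keras.layers import GlobalAveragePooling2D, Dense, Reshape, multiply

def se_block(feature_map, channels, reduction=4):
    # Squeeze: one global-average value per channel
    w = GlobalAveragePooling2D()(feature_map)
    # Excitation: a small sub-network outputs one weight in (0, 1) per channel
    w = Dense(channels // reduction, activation='relu')(w)
    w = Dense(channels, activation='sigmoid')(w)
    w = Reshape((1, 1, channels))(w)
    # Re-weight the channels of the original feature map
    return multiply([feature_map, w])

Because the weights are computed from the feature map of the current sample, each sample gets its own set of channel weights; this is exactly the property the deep residual shrinkage network reuses to set its thresholds.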
2. Theory of the Deep Residual Shrinkage Network
2.1 Motivation
First, real-world data always contain more or less redundant information, so we can try to embed soft thresholding into a residual network in order to remove that redundancy.
Second, the amount of redundant information often differs from sample to sample, so we can use an attention mechanism to set a different threshold for each sample adaptively, according to its own characteristics.
2.2 Algorithm
Like the residual network and the Squeeze-and-Excitation Network, the deep residual shrinkage network is built by stacking many basic modules. Each basic module contains a sub-network that automatically learns a set of thresholds used to soft-threshold the feature map. Notably, each sample thereby gets its own set of thresholds. A basic module of the deep residual shrinkage network is shown in the figure below.
Figure 4 A basic module of the deep residual shrinkage network
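Stated a bit more explicitly (the notation below is added here for clarity and does not appear in the original text): for the c-th channel of a given sample, the threshold produced by the sub-network in Figure 4 can be written as

    τ_c = α_c · average over i, j of |x_(i,j,c)|

where x_(i,j,c) denotes the feature map of that channel after the two convolutional layers and α_c ∈ (0, 1) is the corresponding output of the sigmoid layer. Because α_c is bounded in (0, 1) and the average absolute value is non-negative, each threshold stays positive and cannot exceed the mean absolute value of its channel, which keeps soft thresholding from zeroing out all of the features in a channel.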
The overall architecture of the deep residual shrinkage network, shown in the figure below, consists of an input layer, a number of basic modules, and a final fully connected output layer.
Figure 5 Overall architecture of the deep residual shrinkage network
2.3 Applications
In the original paper, the deep residual shrinkage network was applied to fault diagnosis of rotating machinery based on vibration signals. In principle, however, it targets any dataset that contains redundant information, and redundant information is everywhere. For example, in image recognition an image always contains regions irrelevant to its label, and in speech recognition the audio often contains various kinds of noise. Therefore the deep residual shrinkage network, or more generally this "deep learning + soft thresholding + attention mechanism" idea, has fairly broad research prospects.
3. Keras and TFLearn Programs
The program below takes image classification as an example and builds a small deep residual shrinkage network; the hyperparameters have not been tuned. To pursue higher accuracy, one could increase the depth, train for more epochs, and adjust the hyperparameters. The Keras program is as follows:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sat Dec 28 23:24:05 2019
Implemented using TensorFlow 1.0.1 and Keras 2.2.1
M. Zhao, S. Zhong, X. Fu, et al., Deep Residual Shrinkage Networks for Fault Diagnosis,
IEEE Transactions on Industrial Informatics, 2019, DOI: 10.1109/TII.2019.2943898
@author: super_9527
"""
from __future__ import print_function
import keras
import numpy as np
from keras.datasets import mnist
from keras.layers import Dense, Conv2D, BatchNormalization, Activation
from keras.layers import AveragePooling2D, Input, GlobalAveragePooling2D
from keras.optimizers import Adam
from keras.regularizers import l2
from keras import backend as K
from keras.models import Model
from keras.layers.core import Lambda
K.set_learning_phase(1)
# Input image dimensions
img_rows, img_cols =28,28
# The data, split between train and test sets
(x_train, y_train),(x_test, y_test)= mnist.load_data()
if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)
# Noised data
x_train = x_train.astype('float32')/255.+0.5*np.random.random([x_train.shape[0], img_rows, img_cols,1])
x_test = x_test.astype('float32')/255.+0.5*np.random.random([x_test.shape[0], img_rows, img_cols,1])
print('x_train shape:', x_train.shape)
print(x_train.shape[0],'train samples')
print(x_test.shape[0],'test samples')
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)
def abs_backend(inputs):
    return K.abs(inputs)

def expand_dim_backend(inputs):
    return K.expand_dims(K.expand_dims(inputs, 1), 1)

def sign_backend(inputs):
    return K.sign(inputs)

def pad_backend(inputs, in_channels, out_channels):
    pad_dim = (out_channels - in_channels) // 2
    inputs = K.expand_dims(inputs, -1)
    inputs = K.spatial_3d_padding(inputs, ((0, 0), (0, 0), (pad_dim, pad_dim)), 'channels_last')
    return K.squeeze(inputs, -1)
# Residual Shrinkage Block
def residual_shrinkage_block(incoming, nb_blocks, out_channels, downsample=False,
                             downsample_strides=2):

    residual = incoming
    in_channels = incoming.get_shape().as_list()[-1]

    for i in range(nb_blocks):

        identity = residual

        if not downsample:
            downsample_strides = 1

        residual = BatchNormalization()(residual)
        residual = Activation('relu')(residual)
        residual = Conv2D(out_channels, 3, strides=(downsample_strides, downsample_strides),
                          padding='same', kernel_initializer='he_normal',
                          kernel_regularizer=l2(1e-4))(residual)

        residual = BatchNormalization()(residual)
        residual = Activation('relu')(residual)
        residual = Conv2D(out_channels, 3, padding='same', kernel_initializer='he_normal',
                          kernel_regularizer=l2(1e-4))(residual)

        # Calculate the global mean of the absolute feature map
        residual_abs = Lambda(abs_backend)(residual)
        abs_mean = GlobalAveragePooling2D()(residual_abs)

        # Calculate scaling coefficients in (0, 1) with a small sub-network
        scales = Dense(out_channels, activation=None, kernel_initializer='he_normal',
                       kernel_regularizer=l2(1e-4))(abs_mean)
        scales = BatchNormalization()(scales)
        scales = Activation('relu')(scales)
        scales = Dense(out_channels, activation='sigmoid', kernel_regularizer=l2(1e-4))(scales)
        scales = Lambda(expand_dim_backend)(scales)

        # Calculate thresholds (one per channel, per sample)
        thres = keras.layers.multiply([abs_mean, scales])

        # Soft thresholding: sign(x) * max(|x| - threshold, 0)
        sub = keras.layers.subtract([residual_abs, thres])
        zeros = keras.layers.subtract([sub, sub])  # a zero tensor with the same shape as sub
        n_sub = keras.layers.maximum([sub, zeros])
        residual = keras.layers.multiply([Lambda(sign_backend)(residual), n_sub])

        # Downsampling (it is important to use a pool size of (1, 1))
        if downsample_strides > 1:
            identity = AveragePooling2D(pool_size=(1, 1), strides=(2, 2))(identity)

        # Zero-padding to match channels (it is important to use zero padding rather than a 1x1 convolution)
        if in_channels != out_channels:
            identity = Lambda(pad_backend, arguments={'in_channels': in_channels, 'out_channels': out_channels})(identity)

        residual = keras.layers.add([residual, identity])

    return residual
# define and train a model
inputs = Input(shape=input_shape)
net = Conv2D(8, 3, padding='same', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(inputs)
net = residual_shrinkage_block(net,1,8, downsample=True)
net = BatchNormalization()(net)
net = Activation('relu')(net)
net = GlobalAveragePooling2D()(net)
outputs = Dense(10, activation='softmax', kernel_initializer='he_normal', kernel_regularizer=l2(1e-4))(net)
model = Model(inputs=inputs, outputs=outputs)
model.compile(loss='categorical_crossentropy', optimizer=Adam(), metrics=['accuracy'])  # compile before fitting
model.fit(x_train, y_train, batch_size=100, epochs=5, verbose=1, validation_data=(x_test, y_test))
# get results
K.set_learning_phase(0)
DRSN_train_score = model.evaluate(x_train, y_train, batch_size=100, verbose=0)
print('Train loss:', DRSN_train_score[0])
print('Train accuracy:', DRSN_train_score[1])
DRSN_test_score = model.evaluate(x_test, y_test, batch_size=100, verbose=0)
print('Test loss:', DRSN_test_score[0])
print('Test accuracy:', DRSN_test_score[1])
The TFLearn program is as follows:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Mon Dec 23 21:23:09 2019
Implemented using TensorFlow 1.0 and TFLearn 0.3.2
M. Zhao, S. Zhong, X. Fu, B. Tang, M. Pecht, Deep Residual Shrinkage Networks for Fault Diagnosis,
IEEE Transactions on Industrial Informatics, 2019, DOI: 10.1109/TII.2019.2943898
@author: super_9527
"""
from __future__ import division, print_function, absolute_import
import tflearn
import numpy as np
import tensorflow as tf
from tflearn.layers.conv import conv_2d
# Data loading
from tflearn.datasets import cifar10
(X, Y),(testX, testY)= cifar10.load_data()
# Add noise
X = X + np.random.random((50000,32,32,3))*0.1
testX = testX + np.random.random((10000,32,32,3))*0.1
# Transform labels to one-hot format
Y = tflearn.data_utils.to_categorical(Y, 10)
testY = tflearn.data_utils.to_categorical(testY, 10)
def residual_shrinkage_block(incoming, nb_blocks, out_channels, downsample=False,
                             downsample_strides=2, activation='relu', batch_norm=True,
                             bias=True, weights_init='variance_scaling',
                             bias_init='zeros', regularizer='L2', weight_decay=0.0001,
                             trainable=True, restore=True, reuse=False, scope=None,