Attention Mechanisms in CV: cSE, sSE, and scSE

The scSE module was proposed in the paper "Concurrent Spatial and Channel 'Squeeze & Excitation' in Fully Convolutional Networks". The paper improves on the SE module by introducing three variants — cSE, sSE, and scSE — and shows experimentally that these modules enhance meaningful features while suppressing uninformative ones. The experiments are carried out on two medical datasets, the MALC dataset and the Visceral dataset.
Most semantic segmentation models follow a U-Net-style encoder-decoder layout: the input is first downsampled and then upsampled back to the original resolution. An SE-style module can be added after each convolutional block to refine the resulting feature map, as shown in the figure below:
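As a rough sketch of this placement (only an illustration, not the paper's exact architecture: the wrapper name ConvStageWithSE, the double-conv structure, and the se_block argument are assumptions), a U-Net-style stage could run its convolutions and then hand the feature map to one of the cSE / sSE / scSE blocks defined below:

import torch.nn as nn

class ConvStageWithSE(nn.Module):
    """Hypothetical U-Net-style stage: two 3x3 convs, then a recalibration
    module (e.g. one of the cSE / sSE / scSE blocks defined below)."""
    def __init__(self, in_channels, out_channels, se_block: nn.Module):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )
        self.se = se_block  # recalibrates the feature map produced by the convs

    def forward(self, x):
        return self.se(self.convs(x))

# usage (once scSE is defined below): stage = ConvStageWithSE(64, 128, scSE(128))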
The three modules derived from SE are introduced one by one below; first, the legend used in the figures:
1. The cSE module:
This module is similar to the channel attention module in BAM; its implementation is easy to understand from the figure. The concrete steps are as follows:
1. The feature map is reduced from [C, H, W] to [C, 1, 1] by global average pooling.
2. Two 1×1 convolutions then process the result (first squeezing to C/2 channels, then restoring C channels), giving a C-dimensional vector.
3. A sigmoid normalizes this vector into the channel-wise mask.
4. Channel-wise multiplication of the mask with the input yields the recalibrated feature map.
import torch
import torch.nn as nn

class cSE(nn.Module):
    def __init__(self, in_channels):
        super().__init__()
        self.avgpool = nn.AdaptiveAvgPool2d(1)
        self.Conv_Squeeze = nn.Conv2d(in_channels,
                                      in_channels // 2,
                                      kernel_size=1,
                                      bias=False)
        self.Conv_Excitation = nn.Conv2d(in_channels // 2,
                                         in_channels,
                                         kernel_size=1,
                                         bias=False)
        self.norm = nn.Sigmoid()

    def forward(self, U):
        z = self.avgpool(U)          # shape: [bs, c, h, w] -> [bs, c, 1, 1]
        z = self.Conv_Squeeze(z)     # shape: [bs, c/2, 1, 1]
        z = self.Conv_Excitation(z)  # shape: [bs, c, 1, 1]
        z = self.norm(z)             # channel-wise attention mask
        return U * z.expand_as(U)    # recalibrate each channel of U

if __name__ == "__main__":
    bs, c, h, w = 10, 3, 64, 64
    in_tensor = torch.ones(bs, c, h, w)
    c_ = cSE(c)
    print("in shape:", in_tensor.shape)
    out_tensor = c_(in_tensor)
    print("out shape:", out_tensor.shape)
2. The sSE module:
The figure above shows the spatial attention mechanism. It differs considerably from the implementation in BAM and is much simpler; the steps are as follows:
1. A 1×1 convolution is applied directly to the feature map, turning the [C, H, W] input into a [1, H, W] feature.
2. A sigmoid activation then produces the spatial attention map.
3. The map is applied to the original feature map, completing the spatial recalibration.
NOTE: the 1×1 convolution is applied first and the sigmoid afterwards; this ordering cannot be read directly from the figure and has to be taken from the paper.
import torch
import torch.nn as nn

class sSE(nn.Module):
    def __init__(self, in_channels):
        super().__init__()
        self.Conv1x1 = nn.Conv2d(in_channels, 1, kernel_size=1, bias=False)
        self.norm = nn.Sigmoid()

    def forward(self, U):
        q = self.Conv1x1(U)  # U: [bs, c, h, w] -> q: [bs, 1, h, w]
        q = self.norm(q)     # spatial attention map
        return U * q         # broadcast over the channel dimension

if __name__ == "__main__":
    bs, c, h, w = 10, 3, 64, 64
    in_tensor = torch.ones(bs, c, h, w)
    s_ = sSE(c)
    print("in shape:", in_tensor.shape)
    out_tensor = s_(in_tensor)
    print("out shape:", out_tensor.shape)
3. The scSE module
As the name suggests, scSE simply combines sSE and cSE by adding their outputs together.
Code:
import torch
import torch.nn as nn

class sSE(nn.Module):
    def __init__(self, in_channels):
        super().__init__()
        self.Conv1x1 = nn.Conv2d(in_channels, 1, kernel_size=1, bias=False)
        self.norm = nn.Sigmoid()

    def forward(self, U):
        q = self.Conv1x1(U)  # U: [bs, c, h, w] -> q: [bs, 1, h, w]
        q = self.norm(q)     # spatial attention map
        return U * q         # broadcast over the channel dimension

class cSE(nn.Module):
    def __init__(self, in_channels):
        super().__init__()
        self.avgpool = nn.AdaptiveAvgPool2d(1)
        self.Conv_Squeeze = nn.Conv2d(in_channels, in_channels // 2, kernel_size=1, bias=False)
        self.Conv_Excitation = nn.Conv2d(in_channels // 2, in_channels, kernel_size=1, bias=False)
        self.norm = nn.Sigmoid()

    def forward(self, U):
        z = self.avgpool(U)          # shape: [bs, c, h, w] -> [bs, c, 1, 1]
        z = self.Conv_Squeeze(z)     # shape: [bs, c/2, 1, 1]
        z = self.Conv_Excitation(z)  # shape: [bs, c, 1, 1]
        z = self.norm(z)             # channel-wise attention mask
        return U * z.expand_as(U)    # recalibrate each channel of U

class scSE(nn.Module):
    def __init__(self, in_channels):
        super().__init__()
        self.cSE = cSE(in_channels)
        self.sSE = sSE(in_channels)

    def forward(self, U):
        U_s = self.sSE(U)   # spatially recalibrated features
        U_c = self.cSE(U)   # channel-recalibrated features
        return U_c + U_s    # element-wise sum of the two branches

if __name__ == "__main__":
    bs, c, h, w = 10, 3, 64, 64
    in_tensor = torch.ones(bs, c, h, w)
    sc_ = scSE(c)
    print("in shape:", in_tensor.shape)
    out_tensor = sc_(in_tensor)
    print("out shape:", out_tensor.shape)
Below are the comparison results reported by the authors:
Adapted from GiantPandaCV.
