PyTorch Implementations of Various Attention Mechanisms
This article collects PyTorch implementations of a whole series of attention mechanisms, together with usage examples for each.
Attention mechanisms covered
Pytorch implementation of “Beyond Self-attention: External Attention using Two Linear Layers for Visual Tasks—arXiv 2021.05.05”
Pytorch implementation of “Attention Is All You Need—NIPS2017”
Pytorch implementation of “Squeeze-and-Excitation Networks—CVPR2018”
Pytorch implementation of “Selective Kernel Networks—CVPR2019”
Pytorch implementation of “CBAM: Convolutional Block Attention Module—ECCV2018”
Pytorch implementation of “BAM: Bottleneck Attention Module—BMVC2018”
Pytorch implementation of “ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks—CVPR2020”
Pytorch implementation of “Dual Attention Network for Scene Segmentation—CVPR2019”
Pytorch implementation of “EPSANet: An Efficient Pyramid Split Attention Block on Convolutional Neural Network—arXiv 2021.05.30”
Pytorch implementation of “ResT: An Efficient Transformer for Visual Recognition—arXiv 2021.05.28”
1. External Attention
1.1. Paper
“Beyond Self-attention: External Attention using Two Linear Layers for Visual Tasks”
1.2. Overview
1.3. Code
from attention.ExternalAttention import ExternalAttention
import torch
input=torch.randn(50,49,512)
ea = ExternalAttention(d_model=512,S=8)
output=ea(input)
print(output.shape)
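For reference, the core of external attention is just two small linear layers acting as shared external memories, plus a double normalization. The sketch below illustrates that idea; the class name and details are illustrative, not the library's exact ExternalAttention code.

import torch
import torch.nn as nn

class ExternalAttentionSketch(nn.Module):
    # External attention (sketch): the input attends to two learnable external
    # memories M_k and M_v instead of to itself, so the cost is linear in N.
    def __init__(self, d_model=512, S=8):
        super().__init__()
        self.mk = nn.Linear(d_model, S, bias=False)   # memory unit M_k
        self.mv = nn.Linear(S, d_model, bias=False)   # memory unit M_v

    def forward(self, x):                              # x: (batch, N, d_model)
        attn = self.mk(x)                              # (batch, N, S)
        attn = torch.softmax(attn, dim=1)              # normalize over the token dimension
        attn = attn / attn.sum(dim=2, keepdim=True)    # double normalization (l1 over S)
        return self.mv(attn)                           # (batch, N, d_model)

x = torch.randn(50, 49, 512)
print(ExternalAttentionSketch(d_model=512, S=8)(x).shape)  # torch.Size([50, 49, 512])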
2. Self-Attention
2.1. Paper
“Attention Is All You Need”
2.2. Overview
2.3. Code
from attention.SelfAttention import ScaledDotProductAttention
import torch
input=torch.randn(50,49,512)
sa = ScaledDotProductAttention(d_model=512, d_k=512, d_v=512, h=8)
output=sa(input,input,input)
print(output.shape)
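As a reference for what ScaledDotProductAttention computes, here is a minimal multi-head scaled dot-product attention sketch. The class name and internal details are illustrative, not the library's exact code; the parameters match the usage example above.

import torch
import torch.nn as nn

class ScaledDotProductAttentionSketch(nn.Module):
    # Multi-head scaled dot-product attention (sketch): project Q/K/V, attend, project back.
    def __init__(self, d_model, d_k, d_v, h):
        super().__init__()
        self.h, self.d_k, self.d_v = h, d_k, d_v
        self.fc_q = nn.Linear(d_model, h * d_k)
        self.fc_k = nn.Linear(d_model, h * d_k)
        self.fc_v = nn.Linear(d_model, h * d_v)
        self.fc_o = nn.Linear(h * d_v, d_model)

    def forward(self, queries, keys, values):
        b, nq, _ = queries.shape
        nk = keys.shape[1]
        q = self.fc_q(queries).view(b, nq, self.h, self.d_k).transpose(1, 2)  # (b, h, nq, d_k)
        k = self.fc_k(keys).view(b, nk, self.h, self.d_k).transpose(1, 2)     # (b, h, nk, d_k)
        v = self.fc_v(values).view(b, nk, self.h, self.d_v).transpose(1, 2)   # (b, h, nk, d_v)
        att = torch.softmax(q @ k.transpose(-2, -1) / (self.d_k ** 0.5), dim=-1)  # (b, h, nq, nk)
        out = (att @ v).transpose(1, 2).reshape(b, nq, self.h * self.d_v)     # (b, nq, h*d_v)
        return self.fc_o(out)                                                 # (b, nq, d_model)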
3. Simplified Self-Attention
3.1. Paper
None
3.2. Overview
3.3. Code
from attention.SimplifiedSelfAttention import SimplifiedScaledDotProductAttention
import torch
input=torch.randn(50,49,512)
ssa = SimplifiedScaledDotProductAttention(d_model=512, h=8)
output=ssa(input,input,input)
print(output.shape)
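Since no paper is attached, a reasonable reading of the "simplified" variant is scaled dot-product attention with the learned Q/K/V projections removed: the inputs are split into heads and attended to directly. The sketch below follows that assumption (the final output projection is also an assumption).

import torch
import torch.nn as nn

class SimplifiedScaledDotProductAttentionSketch(nn.Module):
    # Scaled dot-product attention without learned Q/K/V projections (sketch).
    def __init__(self, d_model, h):
        super().__init__()
        self.h = h
        self.d_k = d_model // h
        self.fc_o = nn.Linear(d_model, d_model)  # output projection (assumption)

    def forward(self, queries, keys, values):
        b, nq, _ = queries.shape
        nk = keys.shape[1]
        q = queries.view(b, nq, self.h, self.d_k).transpose(1, 2)   # (b, h, nq, d_k)
        k = keys.view(b, nk, self.h, self.d_k).transpose(1, 2)      # (b, h, nk, d_k)
        v = values.view(b, nk, self.h, self.d_k).transpose(1, 2)    # (b, h, nk, d_k)
        att = torch.softmax(q @ k.transpose(-2, -1) / (self.d_k ** 0.5), dim=-1)
        out = (att @ v).transpose(1, 2).reshape(b, nq, self.h * self.d_k)
        return self.fc_o(out)                                       # (b, nq, d_model)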
4. Squeeze-and-Excitation (SE) Attention
4.1. Paper
“Squeeze-and-Excitation Networks”
4.2. Overview
4.3. Code
from attention.SEAttention import SEAttention
import torch
input=torch.randn(50,512,7,7)
se = SEAttention(channel=512,reduction=8)
output=se(input)
print(output.shape)
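The SE block itself is small enough to sketch in full: squeeze with global average pooling, excite with a bottleneck MLP, and rescale the channels. A minimal sketch (illustrative, not the library's exact SEAttention code):

import torch
import torch.nn as nn

class SEAttentionSketch(nn.Module):
    # Squeeze-and-Excitation (sketch): global average pooling -> bottleneck MLP -> channel rescaling.
    def __init__(self, channel=512, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channel, channel // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channel // reduction, channel, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x):                    # x: (b, c, h, w)
        b, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))               # squeeze: (b, c)
        w = self.fc(w).view(b, c, 1, 1)      # excitation: per-channel weights
        return x * w                          # rescale channels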
5. SK Attention
5.1. Paper
“Selective Kernel Networks”
5.2. Overview
5.3. Code
from attention.SKAttention import SKAttention
import torch
input=torch.randn(50,512,7,7)
sk = SKAttention(channel=512,reduction=8)
output=sk(input)
print(output.shape)
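Selective Kernel attention follows a split-fuse-select pattern: several conv branches with different kernel sizes are fused by summation and global pooling, and a softmax over the branches decides how to mix them per channel. A simplified sketch under those assumptions (branch kernels and names are illustrative, not the library's exact SKAttention code):

import torch
import torch.nn as nn

class SKAttentionSketch(nn.Module):
    # Selective Kernel attention (sketch): split -> fuse -> select.
    def __init__(self, channel=512, kernels=(3, 5), reduction=8):
        super().__init__()
        d = channel // reduction
        # split: one conv branch per kernel size
        self.convs = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channel, channel, kernel_size=k, padding=k // 2),
                nn.BatchNorm2d(channel),
                nn.ReLU(inplace=True),
            ) for k in kernels
        ])
        self.fc = nn.Linear(channel, d)                                       # fuse: compress pooled descriptor
        self.fcs = nn.ModuleList([nn.Linear(d, channel) for _ in kernels])    # select: per-branch channel weights
        self.softmax = nn.Softmax(dim=0)

    def forward(self, x):                                             # x: (b, c, h, w)
        feats = torch.stack([conv(x) for conv in self.convs], dim=0)  # (branches, b, c, h, w)
        s = feats.sum(dim=0).mean(dim=(2, 3))                         # fuse: sum + global pool -> (b, c)
        z = self.fc(s)                                                # (b, d)
        w = torch.stack([fc(z) for fc in self.fcs], dim=0)            # (branches, b, c)
        w = self.softmax(w).unsqueeze(-1).unsqueeze(-1)               # attention over branches
        return (w * feats).sum(dim=0)                                 # weighted mix of branches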
6. CBAM Attention
6.1. Paper
“CBAM: Convolutional Block Attention Module”
6.2. Overview
6.3. Code
from attention.CBAM import CBAMBlock
import torch
input=torch.randn(50,512,7,7)
kernel_size=input.shape[2]
cbam = CBAMBlock(channel=512,reduction=16,kernel_size=kernel_size)
output=cbam(input)
print(output.shape)
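CBAM applies channel attention and then spatial attention in sequence. The sketch below shows both sub-modules in minimal form; it follows the paper's design but is not the library's exact CBAMBlock code.

import torch
import torch.nn as nn

class ChannelAttentionSketch(nn.Module):
    # CBAM channel attention: shared MLP over avg- and max-pooled channel descriptors.
    def __init__(self, channel, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channel, channel // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channel // reduction, channel),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        return torch.sigmoid(avg + mx).view(b, c, 1, 1)

class SpatialAttentionSketch(nn.Module):
    # CBAM spatial attention: conv over concatenated channel-wise avg and max maps.
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAMSketch(nn.Module):
    # Channel attention first, then spatial attention, both applied multiplicatively.
    def __init__(self, channel=512, reduction=16, kernel_size=7):
        super().__init__()
        self.ca = ChannelAttentionSketch(channel, reduction)
        self.sa = SpatialAttentionSketch(kernel_size)

    def forward(self, x):
        x = x * self.ca(x)
        return x * self.sa(x)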
7. BAM Attention
7.1. Paper
“BAM: Bottleneck Attention Module”
7.2. Overview
7.3. Code
from attention.BAM import BAMBlock
import torch
input=torch.randn(50,512,7,7)
bam = BAMBlock(channel=512,reduction=16,dia_val=2)
output=bam(input)
print(output.shape)
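BAM runs a channel branch and a spatial branch in parallel, adds them by broadcasting, and modulates the input as x * (1 + sigmoid(attention)). The sketch below is a simplified version of that design (the paper uses two dilated convolutions plus batch norm in the spatial branch; names here are illustrative, not the library's exact BAMBlock code).

import torch
import torch.nn as nn

class BAMSketch(nn.Module):
    # Bottleneck Attention Module (sketch): parallel channel and spatial branches.
    def __init__(self, channel=512, reduction=16, dia_val=2):
        super().__init__()
        self.channel_branch = nn.Sequential(
            nn.Linear(channel, channel // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channel // reduction, channel),
        )
        self.spatial_branch = nn.Sequential(
            nn.Conv2d(channel, channel // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channel // reduction, channel // reduction, kernel_size=3,
                      padding=dia_val, dilation=dia_val),   # dilated conv enlarges the receptive field
            nn.ReLU(inplace=True),
            nn.Conv2d(channel // reduction, 1, kernel_size=1),
        )

    def forward(self, x):                                              # x: (b, c, h, w)
        b, c, _, _ = x.shape
        ca = self.channel_branch(x.mean(dim=(2, 3))).view(b, c, 1, 1)  # channel attention (b, c, 1, 1)
        sa = self.spatial_branch(x)                                    # spatial attention (b, 1, h, w)
        att = torch.sigmoid(ca + sa)                                   # broadcast to (b, c, h, w)
        return x * (1 + att)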
8. ECA Attention
8.1. Paper
“ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks”
8.2. Overview
8.3. Code
from attention.ECAAttention import ECAAttention
import torch
input=torch.randn(50,512,7,7)
eca = ECAAttention(kernel_size=3)
output=eca(input)
print(output.shape)
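ECA replaces the SE bottleneck MLP with a single 1-D convolution over the pooled channel descriptor, capturing local cross-channel interaction with very few parameters. A minimal sketch (illustrative, not the library's exact ECAAttention code):

import torch
import torch.nn as nn

class ECAAttentionSketch(nn.Module):
    # Efficient Channel Attention (sketch): 1-D conv over the pooled channel descriptor.
    def __init__(self, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x):                          # x: (b, c, h, w)
        b, c, _, _ = x.shape
        y = x.mean(dim=(2, 3)).unsqueeze(1)        # (b, 1, c): pooled descriptor as a 1-D signal
        y = torch.sigmoid(self.conv(y))            # local cross-channel interaction
        return x * y.view(b, c, 1, 1)              # rescale channels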
9. DANet Attention
9.1. Paper
“Dual Attention Network for Scene Segmentation”
9.2. Overview