Lightweight Network Paper: Searching for MobileNetV3 and Its PyTorch Implementation


1 Overview
MobileNetV3 = MobileNetV2 + SE blocks + hard-swish activation + fine-tuning of the network head and tail. Apart from the activation function, there are few standout novelties.
2 Network Architecture Search
MobileNetV3 combines two forms of network architecture search (NAS):
2-1 Block-wise Search
Platform-aware NAS searches for each block of the network under resource constraints.
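For context, the platform-aware search reuses MnasNet's multi-objective reward, which trades accuracy against measured on-device latency. A minimal sketch (the helper below is illustrative, not from the paper's code); w = -0.07 is the exponent from MnasNet, and the MobileNetV3 paper reports switching to w = -0.15 when searching the small models:

def nas_reward(accuracy, latency_ms, target_ms, w=-0.07):
    # Multi-objective reward: ACC(m) * (LAT(m) / TAR) ** w.
    # A negative w penalizes models slower than the target latency.
    return accuracy * (latency_ms / target_ms) ** w

print(nas_reward(0.75, 80.0, 75.0))  # a model 5 ms over target scores slightly below 0.75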
2-2 Layer-wise Search
NetAdapt then fine-tunes the individual layers once each block has been determined.
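Roughly, NetAdapt proceeds as follows (a pseudocode sketch; the callbacks are stand-ins for real training and profiling code, not a library API). MobileNetV3 modifies the original selection rule to pick the proposal that maximizes the ratio of accuracy change to latency change:

def netadapt(model, target_latency, propose, latency, accuracy, finetune):
    # propose(model, delta): candidate networks, each cutting latency by >= delta
    # latency / accuracy: on-device profiling and short-evaluation callbacks
    while latency(model) > target_latency:
        candidates = [finetune(m) for m in propose(model, delta=0.01)]
        model = max(candidates,
                    key=lambda m: (accuracy(m) - accuracy(model))
                                  / (latency(model) - latency(m)))
    return model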
3 Network Design
3-1 Network Structure Optimization
a. Speeding up the network tail
Some of the final layers are cut to speed up inference: global average pooling is moved ahead of the last 1x1 convolutions, so they operate on a 1x1 feature map instead of a 7x7 one (see the sketch after this list).
b. Speeding up the network head
The channel count of the initial convolution is halved (from 32 to 16);
h-swish is used in place of ReLU or swish.
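A minimal sketch of the reordered tail (channel sizes follow MobileNetV3-Large: 960 feature channels, a 1280-channel 1x1 conv, 1000 classes); after pooling, the expensive 1x1 convolutions see a 1x1 map rather than a 7x7 one:

import torch
import torch.nn as nn

efficient_tail = nn.Sequential(
    nn.AdaptiveAvgPool2d(1),               # pool first: 960 x 7 x 7 -> 960 x 1 x 1
    nn.Conv2d(960, 1280, kernel_size=1),   # now runs at 1x1 resolution (49x cheaper)
    nn.Hardswish(),
    nn.Conv2d(1280, 1000, kernel_size=1),  # classifier as a 1x1 conv
)
print(efficient_tail(torch.randn(1, 960, 7, 7)).shape)  # torch.Size([1, 1000, 1, 1])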
3-2 Activation Function (h-swish)
The swish activation, swish(x) = x · σ(x), effectively improves network accuracy but is relatively expensive to compute. The paper therefore proposes h-swish, a cheap approximation of swish:

h-swish(x) = x · ReLU6(x + 3) / 6

h-swish is faster to compute than swish, but still noticeably slower than ReLU.
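As a quick numerical check (a minimal standalone snippet, not from the post), the two activations can be compared directly:

import torch
import torch.nn.functional as F

x = torch.linspace(-6, 6, steps=7)
swish = x * torch.sigmoid(x)             # swish(x) = x * sigmoid(x)
h_swish = x * F.relu6(x + 3) / 6         # h-swish(x) = x * ReLU6(x + 3) / 6
print(torch.stack([x, swish, h_swish]))  # the approximation tracks swish closely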
3-3 MobileNetV3 Network Structure
[Table: MobileNetV3 network specifications from the paper]
4 Experimental Results
4-1 Image Classification
4-2 Object Detection
4-3 Semantic Segmentation
The Lite R-ASPP segmentation head:
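A minimal sketch of this head, modeled on torchvision's LRASPPHead (channel sizes and class names here are illustrative assumptions): a 1x1-conv branch over the high-level features is gated by a pooled 1x1-conv-plus-sigmoid attention branch, upsampled, and fused with a skip from lower-level features:

import torch
import torch.nn as nn
import torch.nn.functional as F

class LiteRASPP(nn.Module):
    def __init__(self, high_ch=960, low_ch=40, inter_ch=128, num_classes=21):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(high_ch, inter_ch, 1, bias=False),
            nn.BatchNorm2d(inter_ch),
            nn.ReLU(inplace=True),
        )
        self.scale = nn.Sequential(               # attention: pool -> 1x1 conv -> sigmoid
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(high_ch, inter_ch, 1, bias=False),
            nn.Sigmoid(),
        )
        self.low_classifier = nn.Conv2d(low_ch, num_classes, 1)
        self.high_classifier = nn.Conv2d(inter_ch, num_classes, 1)

    def forward(self, low, high):
        x = self.branch(high) * self.scale(high)  # gate the conv branch
        x = F.interpolate(x, size=low.shape[-2:], mode='bilinear', align_corners=False)
        return self.low_classifier(low) + self.high_classifier(x)

out = LiteRASPP()(torch.randn(1, 40, 28, 28), torch.randn(1, 960, 14, 14))
print(out.shape)  # torch.Size([1, 21, 28, 28])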
PyTorch code:
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
class HardSwish(nn.Module):
    def __init__(self, inplace=True):
        super(HardSwish, self).__init__()
        self.inplace = inplace

    def forward(self, x):
        # h-swish(x) = x * ReLU6(x + 3) / 6
        return x * F.relu6(x + 3, inplace=self.inplace) / 6
def ConvBNActivation(in_channels, out_channels, kernel_size, stride, activate):
    # Depthwise convolution: groups=in_channels convolves each channel separately.
    return nn.Sequential(
        nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size,
                  stride=stride, padding=(kernel_size - 1) // 2, groups=in_channels),
        nn.BatchNorm2d(out_channels),
        nn.ReLU6(inplace=True) if activate == 'relu' else HardSwish()
    )
def Conv1x1BNActivation(in_channels, out_channels, activate):
    return nn.Sequential(
        nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=1, stride=1),
        nn.BatchNorm2d(out_channels),
        nn.ReLU6(inplace=True) if activate == 'relu' else HardSwish()
    )
def Conv1x1BN(in_channels, out_channels):
    return nn.Sequential(
        nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=1, stride=1),
        nn.BatchNorm2d(out_channels)
    )
class SqueezeAndExcite(nn.Module):
    def __init__(self, in_channels, out_channels, se_kernel_size, divide=4):
        super(SqueezeAndExcite, self).__init__()
        mid_channels = in_channels // divide
        # se_kernel_size must equal the spatial size of the incoming feature map,
        # so this pooling squeezes it down to 1x1.
        self.pool = nn.AvgPool2d(kernel_size=se_kernel_size, stride=1)
        self.SEblock = nn.Sequential(
            nn.Linear(in_features=in_channels, out_features=mid_channels),
            nn.ReLU6(inplace=True),
            nn.Linear(in_features=mid_channels, out_features=out_channels),
            HardSwish(inplace=True),
        )

    def forward(self, x):
        b, c, h, w = x.size()
        out = self.pool(x)        # squeeze: (b, c, 1, 1)
        out = out.view(b, -1)
        out = self.SEblock(out)   # excite: per-channel weights
        out = out.view(b, c, 1, 1)
        return out * x            # rescale the input channels
class SEInvertedBottleneck(nn.Module):
    def __init__(self, in_channels, mid_channels, out_channels, kernel_size, stride,
                 activate, use_se, se_kernel_size=1):
        super(SEInvertedBottleneck, self).__init__()
        self.stride = stride
        self.use_se = use_se
        # mid_channels = (in_channels * expansion_factor)
        self.conv = Conv1x1BNActivation(in_channels, mid_channels, activate)        # expansion
        self.depth_conv = ConvBNActivation(mid_channels, mid_channels, kernel_size, stride, activate)
        if self.use_se:
            self.SEblock = SqueezeAndExcite(mid_channels, mid_channels, se_kernel_size)
        self.point_conv = Conv1x1BNActivation(mid_channels, out_channels, activate)  # projection
        if self.stride == 1:
            self.shortcut = Conv1x1BN(in_channels, out_channels)

    def forward(self, x):
        out = self.depth_conv(self.conv(x))
        if self.use_se:
            out = self.SEblock(out)
        out = self.point_conv(out)
        out = (out + self.shortcut(x)) if self.stride == 1 else out
        return out
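# Illustrative smoke test (not from the original post; parameters loosely follow
# one row of the Large config: expansion 16 -> 72, output 24, 3x3 depthwise, stride 2).
# Note that se_kernel_size must equal the post-stride feature size (28 for a 56x56
# input), because SqueezeAndExcite pools with a fixed-size AvgPool2d.
# block = SEInvertedBottleneck(in_channels=16, mid_channels=72, out_channels=24,
#                              kernel_size=3, stride=2, activate='relu',
#                              use_se=True, se_kernel_size=28)
# print(block(torch.randn(1, 16, 56, 56)).shape)  # torch.Size([1, 24, 28, 28])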
class MobileNetV3(nn.Module):
    def __init__(self, num_class=1000, type='large'):
        super(MobileNetV3, self).__init__()
        self.first_conv = nn.Sequential(
            nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(16),
            HardSwish(inplace=True),
        )
        if type == 'large':
            ...  # the remaining stage definitions are truncated in the source
