Keypoint Matching: Selected Reading Notes

Updated: 2023-06-25 18:52:35

Papers to read: SpyNet [31], PWC-Net [38], LiteFlowNet [14], SelFlow (self-supervised)
Optical flow methods that can detect occlusion:
UnFlow: Unsupervised learning of optical flow with a bidirectional census loss
Occlusion aware unsupervised learning of optical flow
Unsupervised learning of multi-frame optical flow with occlusions
DDFlow: Learning optical flow with unlabeled data distillation
Back to basics: Unsupervised learning of optical flow via brightness constancy and motion smoothness
Unsupervised deep learning for optical flow estimation
Unsupervised monocular depth estimation with left-right consistency
Optical flow estimation with channel constancy. ECCV 2014
Image quality assessment: from error visibility to structural similarity. 2004
Learning dense correspondence via 3D-guided cycle consistency. CVPR 2016
Convolutional neural network architecture for geometric matching. CVPR 2017
Evolved versions of SIFT+RANSAC:
Better feature description: SuperPoint: Self-supervised interest point detection and description. 2018
Geometric image correspondence verification by dense pixel matching. 2019
GeoDesc: Learning local descriptors by integrating geometry constraints. 2018
Working hard to know your neighbor's margins: Local descriptor learning loss. NIPS 2017
R2D2: Reliable and repeatable detector and descriptor. NIPS 2019
L2-Net: Deep learning of discriminative patch descriptor in Euclidean space. CVPR 2017
RANSAC alternatives: Learning two-view correspondences and geometry using order-aware network. ICCV 2019
Deep fundamental matrix estimation. ECCV 2018
PointNet: Deep learning on point sets for 3D classification and segmentation. CVPR 2017
Neural nearest neighbors networks. NIPS 2018
Optical flow:
Flow Fields: Dense Correspondence Fields for Highly Accurate Large Displacement Optical Flow Estimation (large-displacement flow field estimation). ICCV 2015
Optical flow matching with local features: Large displacement optical flow. CVPR 2009
EpicFlow: Edge-preserving interpolation of correspondences for optical flow
Efficient coarse-to-fine PatchMatch for large displacement optical flow
Optical flow beyond visual similarity: SIFT Flow; FlowWeb: Joint Image Set Alignment by Weaving Consistent, Pixel-wise Correspondences
Do Convnets Learn Correspondence?
Universal correspondence network
Neighbourhood consensus networks
Deep optical flow: GeoNet: Unsupervised learning of dense depth, optical flow and camera pose
Learning correspondence from the cycle-consistency of time. CVPR 2019
PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. CVPR 2018
DGC-Net: Dense geometric correspondence network
Geometric image correspondence verification by dense pixel matching
Making convolutional networks shift-invariant again
1. Stereo matching algorithms fall into two broad categories: those based on local constraints and those based on global constraints.
(1) Global-constraint stereo matching is essentially an optimization approach: it recasts stereo matching as minimizing a global energy function. Representative algorithms include graph cuts, belief propagation, and cooperative optimization. Global methods achieve a low overall mismatch rate, but their high complexity makes real-time operation difficult, which limits their use in practical engineering.
(2) Local-constraint stereo matching uses only the local information around each candidate match. Because it involves little data and matches quickly, it has attracted wide attention. Representative matching costs include SAD, SSD, and NCC.
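As a concrete illustration of the local approach, here is a minimal SAD block-matching sketch in NumPy. The window radius, disparity range, and brute-force search are illustrative choices, not taken from any specific paper:

```python
import numpy as np

def sad_disparity(left, right, max_disp, win=1):
    """Local stereo matching: for each pixel, test every disparity d and keep
    the one whose (2*win+1)^2 window minimizes the Sum of Absolute
    Differences (SAD) between the left patch and the shifted right patch."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for y in range(win, h - win):
        for x in range(win, w - win):
            patch = left[y - win:y + win + 1, x - win:x + win + 1]
            best_cost, best_d = np.inf, 0
            # in a rectified pair, a point at column x in the left image
            # appears at column x - d in the right image
            for d in range(min(max_disp, x - win) + 1):
                cand = right[y - win:y + win + 1, x - d - win:x - d + win + 1]
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Swapping the cost for squared differences gives SSD; normalizing each patch before taking a dot product gives NCC.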
1. SuperGlue: Learning Feature Matching with Graph Neural Networks (2020)
Network architecture:
Code:
(1) Keypoint encoder
Corresponds to this part of the paper:
self.kenc = KeypointEncoder(self.config['descriptor_dim'], self.config['keypoint_encoder'])
def __init__(self, feature_dim, layers):
Here MLP is a stack of 1-D convolutions with kernel_size=1 (equivalent to per-point FC layers): the channels go from 3 (x, y, confidence) through [32, 64, 128, 256], and finally to descriptor_dim = 256.
layers.append(nn.Conv1d(channels[i - 1], channels[i], kernel_size=1, bias=True))
desc0 = desc0 + self.kenc(kpts0, data['scores0'])
def forward(self, kpts, scores):
    inputs = [kpts.transpose(1, 2), scores.unsqueeze(1)]
    return self.encoder(torch.cat(inputs, dim=1))
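The fragments above can be assembled into a runnable, stripped-down keypoint encoder. BatchNorm layers and the bias initialization from the official repository are omitted for brevity; the shapes in the comments assume the default configuration quoted above:

```python
import torch
from torch import nn

def MLP(channels):
    """Stack of 1-D convolutions with kernel_size=1, i.e. per-point FC layers."""
    layers = []
    for i in range(1, len(channels)):
        layers.append(nn.Conv1d(channels[i - 1], channels[i], kernel_size=1, bias=True))
        if i < len(channels) - 1:
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

class KeypointEncoder(nn.Module):
    """Encode keypoint (x, y, confidence) into a descriptor-sized vector."""
    def __init__(self, feature_dim, layers):
        super().__init__()
        self.encoder = MLP([3] + layers + [feature_dim])

    def forward(self, kpts, scores):
        # kpts: (B, N, 2), scores: (B, N) -> concatenated to (B, 3, N)
        inputs = [kpts.transpose(1, 2), scores.unsqueeze(1)]
        return self.encoder(torch.cat(inputs, dim=1))  # (B, feature_dim, N)

kenc = KeypointEncoder(256, [32, 64, 128, 256])
out = kenc(torch.randn(1, 100, 2), torch.rand(1, 100))  # out.shape: (1, 256, 100)
```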
2. Graph neural network with multiple self- and cross-attention layers
Defaults: self.config['descriptor_dim'] = 256, self.config['GNN_layers'] = ['self', 'cross'] * 9
self.gnn = AttentionalGNN(self.config['descriptor_dim'], self.config['GNN_layers'])
self.layers = nn.ModuleList([AttentionalPropagation(feature_dim, 4) for _ in range(len(layer_names))])  # len(layer_names) = 18
Inside AttentionalPropagation(feature_dim, 4):
self.attn = MultiHeadedAttention(num_heads, feature_dim)  # MultiHeadedAttention(4, 256)
self.dim = d_model // num_heads  # 256 // 4 = 64
self.num_heads = num_heads  # 4
self.merge = nn.Conv1d(d_model, d_model, kernel_size=1)  # nn.Conv1d(256, 256, kernel_size=1)
self.proj = nn.ModuleList([deepcopy(self.merge) for _ in range(3)])
In forward():
def forward(self, query, key, value):
    query, key, value = [l(x).view(batch_dim, self.dim, self.num_heads, -1) for l, x in zip(self.proj, (query, key, value))]  # then merged back via self.merge(...)
self.mlp = MLP([feature_dim * 2, feature_dim * 2, feature_dim])  # MLP([256*2, 256*2, 256])
desc0, desc1 = self.gnn(desc0, desc1)
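Assembled into runnable form, the multi-head attention used by each AttentionalPropagation layer looks roughly like this; the einsum-based helper is a standard scaled dot-product attention matching the shapes noted above:

```python
from copy import deepcopy

import torch
from torch import nn
from torch.nn.functional import softmax

def attention(query, key, value):
    # query/key/value: (B, dim, heads, N); attend over the last axis
    dim = query.shape[1]
    scores = torch.einsum('bdhn,bdhm->bhnm', query, key) / dim ** 0.5
    prob = softmax(scores, dim=-1)
    return torch.einsum('bhnm,bdhm->bdhn', prob, value)

class MultiHeadedAttention(nn.Module):
    def __init__(self, num_heads, d_model):
        super().__init__()
        assert d_model % num_heads == 0
        self.dim = d_model // num_heads        # 256 // 4 = 64 per head
        self.num_heads = num_heads
        self.merge = nn.Conv1d(d_model, d_model, kernel_size=1)
        self.proj = nn.ModuleList([deepcopy(self.merge) for _ in range(3)])

    def forward(self, query, key, value):
        b = query.size(0)
        # project, then split channels into (dim, heads)
        query, key, value = [l(x).view(b, self.dim, self.num_heads, -1)
                             for l, x in zip(self.proj, (query, key, value))]
        x = attention(query, key, value)
        return self.merge(x.contiguous().view(b, self.dim * self.num_heads, -1))

attn = MultiHeadedAttention(4, 256)
desc = torch.randn(1, 256, 50)
out = attn(desc, desc, desc)  # self-attention over 50 descriptors
```

In a 'cross' layer the key/value come from the other image's descriptors instead of `desc` itself.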
2. Flow2Stereo: Effective Self-Supervised Learning of Optical Flow and Stereo Matching
Using self-supervision, the method takes the four images from two time steps (t, t+1) and two viewpoints and estimates both the optical flow and the stereo disparity between them.
(1) First, derive the quadrilateral and triangle constraints that hold among the four images across the two time steps (t, t+1) and two viewpoints.
(2) Use a two-stage strategy with a teacher model and a student model. The teacher stage is trained with a photometric loss (based on the brightness-constancy assumption for non-occluded pixels) plus the triangle and quadrilateral constraints; the student stage is trained with a self-supervised loss built from the teacher's predicted flow and confidence map.
Computing the quadrilateral and triangle constraints:
The two-stage network:
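The photometric (brightness-constancy) loss the teacher stage relies on can be sketched as follows; this is a minimal illustration assuming grayscale images, a nearest-neighbour backward warp (papers typically use bilinear sampling), and illustrative values for the robust-penalty parameters eps and q:

```python
import numpy as np

def photometric_loss(img1, img2, flow, eps=1e-3, q=0.4):
    """Warp img2 back to img1 using the flow, then penalize brightness
    differences with a robust Charbonnier-style penalty."""
    h, w = img1.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # where each pixel of img1 lands in img2 under the flow
    x2 = np.clip(np.rint(xs + flow[..., 0]).astype(int), 0, w - 1)
    y2 = np.clip(np.rint(ys + flow[..., 1]).astype(int), 0, h - 1)
    warped = img2[y2, x2]  # nearest-neighbour backward warp
    return float((((img1 - warped) ** 2 + eps ** 2) ** q).mean())
```

A correct flow makes the warped image match img1, driving the loss toward its minimum; occluded pixels violate this and must be masked out, which is exactly what the papers below address.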
3. DDFlow: Learning optical flow with unlabeled data distillation
(1) In unsupervised learning it is reasonable to build the loss on brightness constancy, but this rule does not hold for occluded pixels.
(2) Therefore, first train a teacher model. During teacher training, occluded pixels can be identified from the inconsistency between the forward and the backward flow (an assumption that does not always hold). The teacher is trained with the brightness-constancy loss only.
(3) After the teacher is trained, train a student model. For occluded regions, the teacher's predictions serve as labels for the loss; for non-occluded regions, the brightness-constancy loss is used.
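The forward-backward inconsistency check in step (2) can be sketched like this. The threshold has the form commonly used in UnFlow-style methods, but the values of alpha1/alpha2 here are illustrative, and the nearest-neighbour lookup stands in for the bilinear warp real implementations use:

```python
import numpy as np

def occlusion_mask(flow_fw, flow_bw, alpha1=0.01, alpha2=0.5):
    """Mark pixels where forward and backward flow disagree.
    flow_fw, flow_bw: (H, W, 2) arrays of (dx, dy). Returns True = occluded."""
    h, w = flow_fw.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # where each pixel lands under the forward flow
    x2 = np.clip(np.rint(xs + flow_fw[..., 0]).astype(int), 0, w - 1)
    y2 = np.clip(np.rint(ys + flow_fw[..., 1]).astype(int), 0, h - 1)
    bw_warped = flow_bw[y2, x2]      # backward flow at the landing point
    diff = flow_fw + bw_warped       # should cancel to zero if not occluded
    sq_diff = (diff ** 2).sum(-1)
    thresh = alpha1 * ((flow_fw ** 2).sum(-1) + (bw_warped ** 2).sum(-1)) + alpha2
    return sq_diff > thresh
```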
5. UnFlow: Unsupervised Learning of Optical Flow with a Bidirectional Census Loss
(1) Defining occluded regions: occluded regions violate the brightness-constancy assumption, so they are handled separately. For non-occluded regions, we assume the forward flow (I1→I2) is the inverse of the backward flow (I2→I1); occluded regions are exactly where this assumption breaks down.
(2) For non-occluded regions, the self-supervised loss has three parts: a brightness-constancy (data) loss E_D, a flow smoothness loss E_S, and a forward-backward flow consistency loss E_C. For occluded regions, only the smoothness loss is computed.
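Given per-pixel error maps and an occlusion mask, the combination of the three terms can be sketched schematically; the weights lam_s/lam_c are placeholders, not values from the paper:

```python
import numpy as np

def unflow_style_loss(e_data, e_smooth, e_consist, occ, lam_s=1.0, lam_c=1.0):
    """Combine per-pixel error maps: the data term E_D and the
    forward-backward consistency term E_C are averaged over non-occluded
    pixels only; the smoothness term E_S applies everywhere.
    occ: boolean mask, True = occluded."""
    vis = ~occ
    n_vis = max(int(vis.sum()), 1)
    E_D = float((e_data * vis).sum()) / n_vis
    E_C = float((e_consist * vis).sum()) / n_vis
    E_S = float(e_smooth.mean())
    return E_D + lam_s * E_S + lam_c * E_C
```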
