[SLAM] VINS-Mono Explained: Loop Closure Detection and Relocalization
9. Loop Closure Detection and Relocalization
Most of the code covered in this part lives in the pose_graph folder, with a small amount in vins_estimator.
Original content; please contact me and credit the source before reposting. Thank you!
For the other posts in this series, see:
Key points:
How loop detection is performed with descriptors and bag-of-words, and how outliers are rejected;
Why the concept of a sequence is introduced;
How to understand the difference between the world-frame correction matrix and the drift correction matrix;
How graph optimization updates all poses to keep the trajectory smooth.
9.1 Main Entry Point and Callbacks
Start from main() in pose_graph_node.cpp:
It begins with some ROS node initialization and reading parameters from the config file, which we will skip; then come the important subscriptions and their callbacks:
// subscribe to topics and register their callbacks
ros::Subscriber sub_imu_forward = n.subscribe("/vins_estimator/imu_propagate", 2000, imu_forward_callback);
ros::Subscriber sub_vio = n.subscribe("/vins_estimator/odometry", 2000, vio_callback);
ros::Subscriber sub_image = n.subscribe(IMAGE_TOPIC, 2000, image_callback);
ros::Subscriber sub_pose = n.subscribe("/vins_estimator/keyframe_pose", 2000, pose_callback);
ros::Subscriber sub_extrinsic = n.subscribe("/vins_estimator/extrinsic", 2000, extrinsic_callback);
ros::Subscriber sub_point = n.subscribe("/vins_estimator/keyframe_point", 2000, point_callback);
ros::Subscriber sub_relo_relative_pose = n.subscribe("/vins_estimator/relo_relative_pose", 2000, relo_relative_pose_callback);
Where each subscribed message comes from, what it means, and what happens after it is received are all important. The meaning of extrinsic_callback is fairly obvious, so we skip it. Next come the published topics; for now we will not worry about what they publish and will come back to them when the main flow reaches them:
// published topics
pub_match_img = n.advertise<sensor_msgs::Image>("match_image", 1000);
pub_camera_pose_visual = n.advertise<visualization_msgs::MarkerArray>("camera_pose_visual", 1000);
pub_key_odometrys = n.advertise<visualization_msgs::Marker>("key_odometrys", 1000);
pub_vio_path = n.advertise<nav_msgs::Path>("no_loop_path", 1000);
pub_match_points = n.advertise<sensor_msgs::PointCloud>("match_points", 100);
Then the main loop-closure and relocalization routine process() is launched in a dedicated thread:
measurement_process = std::thread(process);
9.1.1 Topic "imu_propagate": data source and the callback imu_forward_callback()
The data comes from pubLatestOdometry() in visualization.cpp under vins_estimator, which is called inside imu_callback() in estimator_node.cpp. Recall that the back end optimizes poses at the image rate, while the IMU runs at a much higher rate; so imu_callback() takes the most recently optimized pose and integrates the IMU measurements received after it on top of that pose, yielding a high-rate odometry output. The "imu_propagate" topic therefore carries high-rate, real-time odometry. Stepping into imu_forward_callback(), you will find that it first corrects this pose according to the relocalization result:
vio_t = posegraph.w_r_vio * vio_t + posegraph.w_t_vio;
vio_q = posegraph.w_r_vio * vio_q;
vio_t = posegraph.r_drift * vio_t + posegraph.t_drift;
vio_q = posegraph.r_drift * vio_q;
There is a very interesting pose transformation here: the pose is corrected once with w_r_vio and w_t_vio, and then again with r_drift and t_drift. My current understanding is that the former re-aligns the current pose from a wrong world frame onto the correct one, while the latter corrects short-term pose drift.
Then the pose is converted from body->world to cam->world:
Vector3d vio_t_cam;
Quaterniond vio_q_cam;
vio_t_cam = vio_t + vio_q * tic;
vio_q_cam = vio_q * qic;
Finally it is used for visualization:
cameraposevisual.reset();
cameraposevisual.add_pose(vio_t_cam, vio_q_cam);
cameraposevisual.publish_by(pub_camera_pose_visual, forward_msg->header);
9.1.2 Topic "odometry": data source and the callback vio_callback()
The data comes from pubOdometry() in visualization.cpp under vins_estimator, which is called in the main thread process() of estimator_node.cpp. Let's see which pose it publishes:
nav_msgs::Odometry odometry;
odometry.header = header;
odometry.header.frame_id = "world";
odometry.child_frame_id = "world";
Quaterniond tmp_Q;
tmp_Q = Quaterniond(estimator.Rs[WINDOW_SIZE]);
odometry.pose.pose.position.x = estimator.Ps[WINDOW_SIZE].x();
odometry.pose.pose.position.y = estimator.Ps[WINDOW_SIZE].y();
odometry.pose.pose.position.z = estimator.Ps[WINDOW_SIZE].z();
odometry.pose.pose.orientation.x = tmp_Q.x();
odometry.pose.pose.orientation.y = tmp_Q.y();
odometry.pose.pose.orientation.z = tmp_Q.z();
odometry.pose.pose.orientation.w = tmp_Q.w();
odometry.twist.twist.linear.x = estimator.Vs[WINDOW_SIZE].x();
odometry.twist.twist.linear.y = estimator.Vs[WINDOW_SIZE].y();
odometry.twist.twist.linear.z = estimator.Vs[WINDOW_SIZE].z();
pub_odometry.publish(odometry);
The published pose is the newest frame in the sliding window: if the window size is 10, it is the 11th frame's pose. Its callback is almost the same as the previous one: it first applies the relocalization correction, then converts to the cam->world representation:
vio_t = posegraph.w_r_vio * vio_t + posegraph.w_t_vio;
vio_q = posegraph.w_r_vio * vio_q;
vio_t = posegraph.r_drift * vio_t + posegraph.t_drift;
vio_q = posegraph.r_drift * vio_q;
vio_t_cam = vio_t + vio_q * tic;
vio_q_cam = vio_q * qic;
Then it pushes the translation component of the camera pose into a buffer:
odometry_buf.push(vio_t_cam);
This buffer supplies key_odometrys with the sequence of 3D positions along the camera's trajectory:
for (unsigned int i = 0; i < odometry_buf.size(); i++)
{
    geometry_msgs::Point pose_marker;
    Vector3d vio_t;
    vio_t = odometry_buf.front();
    odometry_buf.pop();
    pose_marker.x = vio_t.x();
    pose_marker.y = vio_t.y();
    pose_marker.z = vio_t.z();
    key_odometrys.points.push_back(pose_marker);
    odometry_buf.push(vio_t);
}
In VINS, key_odometrys is likewise used only for visualization:
pub_key_odometrys.publish(key_odometrys);
9.1.3 Topic "keyframe_pose": data source and the callback pose_callback()
The data comes from pubKeyframe() in visualization.cpp under vins_estimator, which is called in the main thread process() of estimator_node.cpp. Let's see which pose it publishes:
int i = WINDOW_SIZE - 2;
Vector3d P = estimator.Ps[i];
Quaterniond R = Quaterniond(estimator.Rs[i]);
nav_msgs::Odometry odometry;
odometry.header = estimator.Headers[WINDOW_SIZE - 2];
odometry.header.frame_id = "world";
odometry.pose.pose.position.x = P.x();
odometry.pose.pose.position.y = P.y();
odometry.pose.pose.position.z = P.z();
odometry.pose.pose.orientation.x = R.x();
odometry.pose.pose.orientation.y = R.y();
odometry.pose.pose.orientation.z = R.z();
odometry.pose.pose.orientation.w = R.w();
pub_keyframe_pose.publish(odometry);
Did you notice? It is the pose of the second-to-latest frame in the sliding window! In other words, all the data pose_graph receives and processes comes from the second-to-latest frame of the estimator's sliding window at each moment. Back to the callback pose_callback():
void pose_callback(const nav_msgs::Odometry::ConstPtr &pose_msg)
{
    if (!LOOP_CLOSURE)
        return;
    m_buf.lock();
    pose_buf.push(pose_msg);
    m_buf.unlock();
}
The pose is pushed into pose_buf, which is consumed later in process().
9.1.4 Topic "keyframe_point": data source and the callback point_callback()
The data comes from pubKeyframe() in visualization.cpp under vins_estimator, which is called in the main thread process() of estimator_node.cpp. Let's see which points it publishes:
sensor_msgs::PointCloud point_cloud;
point_cloud.header = estimator.Headers[WINDOW_SIZE - 2];
// iterate over all features in the sliding window
for (auto &it_per_id : estimator.f_manager.feature)
{
    // check whether it was observed in frame WINDOW_SIZE - 2
    int frame_size = it_per_id.feature_per_frame.size();
    if (it_per_id.start_frame < WINDOW_SIZE - 2 && it_per_id.start_frame + frame_size - 1 >= WINDOW_SIZE - 2 && it_per_id.solve_flag == 1)
    {
        // compute the feature's world coordinates
        int imu_i = it_per_id.start_frame;
        Vector3d pts_i = it_per_id.feature_per_frame[0].point * it_per_id.estimated_depth;
        Vector3d w_pts_i = estimator.Rs[imu_i] * (estimator.ric[0] * pts_i + estimator.tic[0])
                           + estimator.Ps[imu_i];
        geometry_msgs::Point32 p;
        p.x = w_pts_i(0);
        p.y = w_pts_i(1);
        p.z = w_pts_i(2);
        point_cloud.points.push_back(p);
        // record its normalized coordinates, pixel coordinates and id in frame WINDOW_SIZE - 2
        int imu_j = WINDOW_SIZE - 2 - it_per_id.start_frame;
        sensor_msgs::ChannelFloat32 p_2d;
        p_2d.values.push_back(it_per_id.feature_per_frame[imu_j].point.x());
        p_2d.values.push_back(it_per_id.feature_per_frame[imu_j].point.y());
        p_2d.values.push_back(it_per_id.feature_per_frame[imu_j].uv.x());
        p_2d.values.push_back(it_per_id.feature_per_frame[imu_j].uv.y());
        p_2d.values.push_back(it_per_id.feature_id);
        point_cloud.channels.push_back(p_2d);
    }
}
pub_keyframe_point.publish(point_cloud);
Again, these are points of the second-to-latest frame in the window. It scans all features in the sliding window; for each feature first seen before the second-to-latest frame and still visible at or beyond it, it records the world-frame coordinates together with the normalized coordinates and pixel coordinates in the second-to-latest frame, and publishes them.
Back in the callback, you will find it is similar to the previous one:
void point_callback(const sensor_msgs::PointCloudConstPtr &point_msg)
{
    if (!LOOP_CLOSURE)
        return;
    m_buf.lock();
    point_buf.push(point_msg);
    m_buf.unlock();
}
9.1.5 Topic IMAGE_TOPIC: data source and the callback image_callback()
void image_callback(const sensor_msgs::ImageConstPtr &image_msg)
{
    //ROS_INFO("image_callback!");
    if (!LOOP_CLOSURE)
        return;
    m_buf.lock();
    image_buf.push(image_msg);
    m_buf.unlock();
    //printf(" image time %f \n", image_msg->header.stamp.toSec());

    // detect unstable camera stream
    if (last_image_time == -1)
        last_image_time = image_msg->header.stamp.toSec();
    // a new sequence starts if the incoming image is more than 1 s after, or earlier than, the previous one
    else if (image_msg->header.stamp.toSec() - last_image_time > 1.0 || image_msg->header.stamp.toSec() < last_image_time)
    {
        ROS_WARN("image discontinue! detect a new sequence!");
        new_sequence();
    }
    last_image_time = image_msg->header.stamp.toSec();
}
Notice that when the image timestamps jump or the stream is lost, it calls new_sequence(). The notion of a sequence shows up many times in pose_graph and can be quite confusing:
void new_sequence()
{
    printf("new sequence\n");
    sequence++;
    printf("sequence cnt %d \n", sequence);
    if (sequence > 5)
    {
        ROS_WARN("only support 5 sequences since it's boring to copy code for more sequences.");
        ROS_BREAK();
    }
    // re-initialize and rebuild the map
    posegraph.posegraph_visualization->reset();
    posegraph.publish();
    m_buf.lock();
    while (!image_buf.empty())
        image_buf.pop();
    while (!point_buf.empty())
        point_buf.pop();
    while (!pose_buf.empty())
        pose_buf.pop();
    while (!odometry_buf.empty())
        odometry_buf.pop();
    m_buf.unlock();
}