RGB-D SLAM Dataset and Benchmark


Dataset description:
We provide a large dataset containing RGB-D data and ground-truth data with the goal to establish a novel benchmark for the evaluation of visual odometry and visual SLAM systems. Our dataset contains the color and depth images of a Microsoft Kinect sensor along the ground-truth trajectory of the sensor. The data was recorded at full frame rate (30 Hz) and sensor resolution (640×480). The ground-truth trajectory was obtained from a high-accuracy motion-capture system with eight high-speed tracking cameras (100 Hz). Further, we provide the accelerometer data from the Kinect. Finally, we propose an evaluation criterion for measuring the quality of the estimated camera trajectory of visual SLAM systems.
Keywords:
RGB-D, ground truth, benchmark, odometry, trajectory
Data format:
IMAGE
Detailed description:
RGB-D SLAM Dataset and Benchmark
Contact: Jürgen Sturm
How can I use the RGB-D Benchmark to evaluate my SLAM system?
1. Download one or more of the RGB-D benchmark sequences (file formats, useful tools).
2. Run your favorite visual odometry / visual SLAM algorithm (for example, RGB-D SLAM).
3. Save the estimated camera trajectory to a file (file formats, example trajectory).
4. Evaluate your algorithm by comparing the estimated trajectory with the ground-truth trajectory. We provide an automated evaluation tool to help you with the evaluation. There is also an online version of the tool.
Further remarks
Jose Luis Blanco has added our dataset to the Mobile Robot Programming Toolkit (MRPT) repository. The dataset (including example code and tools) can be downloaded here.
∙If you have any questions about the dataset/benchmark/evaluation/file formats, please don't hesitate to contact Jürgen Sturm.
∙We are happy to share our data with other researchers. Please refer to the respective publication when using this data.
Related publications
2011
Conference and Workshop Papers
Real-Time Visual Odometry from Dense RGB-D Images (F. Steinbruecker, J. Sturm, D. Cremers), In Workshop on Live Dense Reconstruction with Moving Cameras at the Intl. Conf. on Computer Vision (ICCV), 2011. [bib] [pdf]
Towards a benchmark for RGB-D SLAM evaluation (J. Sturm, S. Magnenat, N. Engelhard, F. Pomerleau, F. Colas, W. Burgard, D. Cremers, R. Siegwart), In Proc. of the RGB-D Workshop on Advanced Reasoning with Depth Cameras at Robotics: Science and Systems Conf. (RSS), 2011. [bib] [pdf]
Real-time 3D visual SLAM with a hand-held camera (N. Engelhard, F. Endres, J. Hess, J. Sturm, W. Burgard), In Proc. of the RGB-D Workshop on 3D Perception in Robotics at the European Robotics Forum, 2011. [bib] [pdf] [video] [video] [video]
File Formats
We provide the RGB-D datasets from the Kinect in the following format:
Color images and depth maps
We provide the time-stamped color and depth images as a gzipped tar file (TGZ).
∙The color images are stored as 640×480 8-bit RGB images in PNG format.
∙The depth maps are stored as 640×480 16-bit monochrome images in PNG format.
∙The color and depth images are already pre-registered using the OpenNI driver from PrimeSense, i.e., the pixels in the color and depth images already correspond 1:1.
∙The depth images are scaled by a factor of 5000, i.e., a pixel value of 5000 in the depth image corresponds to a distance of 1 meter from the camera, 10000 to a distance of 2 meters, etc. A pixel value of 0 means missing value / no data.
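The scaling rule above amounts to a one-line conversion. As a minimal sketch (the helper name is ours, not part of the dataset tools), a raw 16-bit pixel value can be turned into meters, treating 0 as missing:

```python
def depth_to_meters(pixel_value, factor=5000.0):
    """Convert a raw 16-bit depth pixel to meters; 0 means no data."""
    if pixel_value == 0:
        return None  # missing measurement
    return pixel_value / factor

print(depth_to_meters(5000))   # 1.0 (1 meter)
print(depth_to_meters(10000))  # 2.0 (2 meters)
print(depth_to_meters(0))      # None (no data)
```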
Ground-truth trajectories
We provide the ground-truth trajectory as a text file containing the translation and orientation of the camera in a fixed coordinate frame. Note that our automatic evaluation tool also expects both the ground-truth and the estimated trajectory to be in this format.
∙Each line in the text file contains a single pose.
∙The format of each line is 'timestamp tx ty tz qx qy qz qw'
∙timestamp (float) gives the number of seconds since the Unix epoch.
∙tx ty tz (3 floats) give the position of the optical center of the color camera with respect to the world origin as defined by the motion capture system.
∙qx qy qz qw (4 floats) give the orientation of the optical center of the color camera in form of a unit quaternion with respect to the world origin as defined by the motion capture system.
∙The file may contain comments, which have to start with "#".
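To illustrate both this file format and the trajectory comparison used in the evaluation, here is a minimal sketch (our own helper names and illustrative pose values, not the official evaluation tool): it parses pose lines, then computes a simple RMSE over timestamp-matched positions, omitting the rigid-body alignment step that the real tool performs:

```python
import math

def parse_trajectory(lines):
    """Parse 'timestamp tx ty tz qx qy qz qw' lines; skip '#' comments."""
    poses = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        t, tx, ty, tz, qx, qy, qz, qw = (float(x) for x in line.split())
        poses.append((t, (tx, ty, tz), (qx, qy, qz, qw)))
    return poses

def position_rmse(gt, est):
    """RMSE of translational differences between timestamp-matched pose lists."""
    sq = [sum((g - e) ** 2 for g, e in zip(p[1], q[1])) for p, q in zip(gt, est)]
    return math.sqrt(sum(sq) / len(sq))

gt = parse_trajectory([
    "# ground-truth trajectory (illustrative values)",
    "1305031102.1758 1.3405 0.6266 1.6575 0.6574 0.6126 -0.2949 -0.3248",
])
est = parse_trajectory([
    "1305031102.1758 1.4405 0.6266 1.6575 0.6574 0.6126 -0.2949 -0.3248",
])
print(round(position_rmse(gt, est), 3))  # 0.1 (estimate off by 0.1 m in x)
```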
Intrinsic Camera Calibration of the Kinect
The Kinect has a factory calibration stored onboard, based on a high-level polynomial warping function. The OpenNI driver uses this calibration for undistorting the images, and for registering the depth images (taken by the IR camera) to the RGB images. Therefore, the depth images in our datasets are reprojected into the frame of the color camera, which means that there is a 1:1 correspondence between pixels in the depth map and the color image.
The conversion from the 2D images to 3D point clouds works as follows. Note that the focal lengths (fx/fy), the optical center (cx/cy), the distortion parameters (d0-d4) and the depth correction factor are different for each camera. The Python code below illustrates how the 3D point can be computed from the pixel coordinates and the depth value:

fx = 525.0  # focal length x
fy = 525.0  # focal length y
cx = 319.5  # optical center x
cy = 239.5  # optical center y
ds = 1.0    # depth scaling (see the depth calibration table below)
factor = 5000  # for the 16-bit PNG files
# OR: factor = 1  # for the 32-bit float images in the ROS bag files

for v in range(depth_image.height):
    for u in range(depth_image.width):
        Z = (depth_image[v, u] / factor) * ds
        X = (u - cx) * Z / fx
        Y = (v - cy) * Z / fy
Note that the above script uses the default (uncalibrated) intrinsic parameters. The intrinsic parameters for the Kinects used in the fr1 and fr2 datasets are as follows:
Calibration of the color camera
We computed the intrinsic parameters of the RGB camera from the rgbd_dataset_freiburg1/2_rgb_calibration.bag files.
Camera          fx     fy     cx     cy     d0      d1       d2       d3       d4
(ROS default)   525.0  525.0  319.5  239.5  0.0     0.0      0.0      0.0      0.0
Freiburg 1 RGB  517.3  516.5  318.6  255.3  0.2624  -0.9531  -0.0054  0.0026   1.1633
Freiburg 2 RGB  520.9  521.0  325.1  249.7  0.2312  -0.7849  -0.0033  -0.0001  0.9172
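The five coefficients d0-d4 follow the usual ROS/OpenCV radial-tangential ("plumb bob") distortion model, in the order (k1, k2, p1, p2, k3). As an illustrative sketch (our own helper, not part of the dataset tools), applying them to a normalized image point looks like this:

```python
def distort(x, y, d):
    """Apply plumb-bob distortion (k1, k2, p1, p2, k3) to a normalized point."""
    k1, k2, p1, p2, k3 = d
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd

# With the all-zero ROS default coefficients the point is unchanged:
print(distort(0.1, -0.2, (0.0, 0.0, 0.0, 0.0, 0.0)))  # (0.1, -0.2)
```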
Calibration of the depth images
We verified the depth values by comparing the reported depth values to the depth estimated from the RGB checkerboard. In this experiment, we found that the reported depth values from the Kinect were off by a constant scaling factor, as given in the following table:
Camera            ds
Freiburg 1 Depth  1.035
Freiburg 2 Depth  1.031
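Combining the PNG scaling factor of 5000 with the per-camera correction factor ds from the table above, a corrected metric depth can be sketched as follows (the helper name is ours, not from the dataset tools):

```python
def corrected_depth_m(pixel_value, ds, factor=5000.0):
    """Metric depth with the per-camera scale correction ds applied."""
    return (pixel_value / factor) * ds

# Freiburg 1 Depth: a raw pixel value of 5000 corresponds to ~1.035 m
print(corrected_depth_m(5000, ds=1.035))  # 1.035
```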
Calibration of the infrared camera
We also provide the intrinsic parameters for the infrared camera. Note that the depth images provided in our dataset are already pre-registered to the RGB images. Therefore, rectifying the depth images based on the intrinsic parameters is not straightforward.
Camera         fx     fy     cx     cy     d0       d1      d2      d3       d4
Freiburg 1 IR  591.1  590.1  331.0  234.0  -0.0410  0.3286  0.0087  0.0051   -0.5643
Freiburg 2 IR  580.8  581.8  308.8  253.0  -0.2297  1.4766  0.0005  -0.0075  -3.4194
Movies for visual inspection
For visual inspection of the individual datasets, we also provide movies of the Kinect (RGB and depth) and of an external camcorder. The movie format is MPEG-4 stored in an AVI container.
Alternate file formats
ROS bag
For people using ROS, we also provide ROS bag files that contain the color images, monochrome images, depth images, camera infos, point clouds and transforms – including the ground-truth transformation from the /world frame – all in a single file. The bag files (ROS Diamondback) contain the following message topics:
∙/camera/depth/camera_info (sensor_msgs/CameraInfo) contains the intrinsic camera parameters for the depth/infrared camera, as reported by the OpenNI driver
∙/camera/depth/image (sensor_msgs/Image) contains the depth map
∙/camera/rgb/camera_info (sensor_msgs/CameraInfo) contains the intrinsic camera parameters for the RGB camera, as reported by the OpenNI driver
∙/camera/rgb/image_color (sensor_msgs/Image) contains the color image from the RGB camera
∙/imu (sensor_msgs/Imu) contains the accelerometer data from the Kinect
∙/tf (tf/tfMessage) contains:
o the ground-truth data from the mocap (/world to /Kinect),
o the calibration between mocap and the optical center of the Kinect's color camera (/Kinect to /openni_camera),
o and the ROS-specific, internal transformations (/openni_camera to /openni_rgb_frame to /openni_rgb_optical_frame).
If you need the point clouds and monochrome images, you can use the adding_point_clouds_to_ros_bag_files script to add them:
∙/camera/rgb/image_mono (sensor_msgs/Image) contains the monochrome image from the RGB camera
∙/camera/rgb/points (sensor_msgs/PointCloud2) contains the colored point clouds
