• stereo-inertial-gnss-lidar device


    Fig. 6: Our handheld device for data collection. (a) shows our
    minimum system, with a total weight of 2.09 kg; (b) an additional
    D-GPS RTK system and an ArUco marker board [25] are used to
    evaluate the system's accuracy.
    R3LIVE: A Robust, Real-time, RGB-colored, LiDAR-Inertial-Visual Tightly-coupled State
    Estimation and Mapping Package

    Fig. 1. Falcon 4 UAV platform. Our Falcon 4 platform used for this work is the
    successor of our Falcon 450 platform [1] and is equipped with a 3D LiDAR, an
    Open Vision Computer (OVC) 3 [2] which has a hardware synchronized IMU
    and stereo cameras, an Intel NUC onboard computer, and a Pixhawk 4 flight
    controller. The platform has a total weight of 4.2 kg and a 30-minute flight time.

    Large-Scale Autonomous Flight With Real-Time
    Semantic SLAM Under Dense Forest Canopy


    Fig. 3. Multispectral stereo setup with thermal (left) and visible (right)
    cameras.

    Sky-GVINS: a sky-segmentation aided GNSS-Visual-Inertial system for robust navigation in urban canyons

    IC-GVINS: A Robust, Real-time, INS-Centric
    GNSS-Visual-Inertial Navigation System for
    Wheeled Robot
    The proposed IC-GVINS is implemented in C++ under the framework of
    the Robot Operating System (ROS), which is suitable for real-time
    applications. A dataset collected by a wheeled robot is adopted for
    the evaluation. The equipment setup of the wheeled robot is shown in
    Fig. 3. The sensors include a global-shutter camera with a resolution
    of 1280x1024 (Allied Vision Mako-G131), an industrial-grade MEMS IMU
    (ADI ADIS16465), and a dual-antenna GNSS-RTK receiver (NovAtel
    OEM-718D). All the sensors are synchronized to GNSS time through a
    hardware trigger. The intrinsic and extrinsic parameters of the camera
    were well calibrated in advance using Kalibr [26]. An on-board
    computer (NVIDIA Xavier) is employed to record the multi-sensor
    dataset. A navigation-grade [4] GNSS/INS integrated navigation system
    is adopted as the ground-truth system. The average speed of the
    wheeled robot is about 1.5 m/s.

    GVINS: Tightly Coupled GNSS–Visual–Inertial
    Fusion for Smooth and Consistent State Estimation

    As illustrated in Fig. 10, the device used in our real-world
    experiments is a helmet with a VI-Sensor [35] and a u-blox
    ZED-F9P GNSS receiver attached. The detailed specifications
    of each sensor are shown in Table IV. Although the VI-Sensor
    provides two cameras as a stereo pair, we only use the left one for
    all experiments. The u-blox ZED-F9P is a low-cost multi-band
    receiver with multi-constellation support. In addition, the ZED-F9P
    has an internal RTK engine, which is capable of providing
    the receiver's location with an accuracy of 1 cm in an open area.
    The real-time RTCM stream from a nearby base station is fed
    to the ZED-F9P receiver for the ground-truth RTK solution.
    In terms of time synchronization, the camera and the IMU are
    synchronized by the VI-Sensor, and the local time is aligned with
    global GNSS time via the pulse-per-second (PPS) signal of
    the ZED-F9P and the hardware trigger of the VI-Sensor.
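    The PPS alignment described above can be sketched in a few lines. This is a minimal illustration with invented timestamps and function names, not GVINS code: each PPS pulse marks an integer GNSS second, and the local timestamp captured at the pulse anchors an interpolation table that absorbs clock drift between pulses.

    ```python
    def pps_offset_table(pps_local_stamps, first_gnss_second):
        """Map each PPS event's local timestamp to its integer GNSS second."""
        return [(t_local, first_gnss_second + i)
                for i, t_local in enumerate(pps_local_stamps)]

    def local_to_gnss(t_local, table):
        """Convert a local timestamp to GNSS time by linear interpolation
        between the two bracketing PPS events."""
        for (t0, g0), (t1, g1) in zip(table, table[1:]):
            if t0 <= t_local <= t1:
                alpha = (t_local - t0) / (t1 - t0)
                return g0 + alpha * (g1 - g0)
        # outside the table: extrapolate with the nearest PPS offset
        t_ref, g_ref = table[0] if t_local < table[0][0] else table[-1]
        return g_ref + (t_local - t_ref)

    # Local clock runs ~50 ms ahead of GNSS and drifts slightly between pulses
    table = pps_offset_table([100.050, 101.051, 102.052], first_gnss_second=500)
    print(round(local_to_gnss(101.5515, table), 3))  # a mid-interval stamp
    ```

    A real receiver-side implementation would also reject outlier pulses; the interpolation step above is the core idea.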

    A LiDAR-Inertial-Visual SLAM System with Loop Detection


    LiDAR-Visual-Inertial Odometry Based on Optimized Visual Point-Line Features

    VILO SLAM: Tightly Coupled Binocular Vision–Inertia SLAM Combined with LiDAR

    To verify the accuracy and effectiveness of the tightly coupled pose-estimation algorithm based on binocular VILO proposed in this paper, experimental environments with extreme conditions need to be selected, such as insufficient light or darkness, a lack of texture (indoor white walls), and frequently moving dynamic obstacles (people). Therefore, this section focuses on comparison and verification experiments in the corridor environment of the indoor experimental building, which exhibits these extreme conditions. We compare the performance of VILO (ours), VINS-Fusion, and ORB-SLAM2.

    In this pose-estimation experiment, the cumulative error of each algorithm is used to measure its performance. Therefore, the visual loop-detection function is turned off during the operation of all three algorithms, because loop closure would eliminate the cumulative pose error. All the methods are executed on a computing device equipped with an Intel i7-8700 CPU, using the Robot Operating System (ROS) under Ubuntu Linux. The sensor mounting platforms are shown in Figure 4.
    Experimental configuration: the verification environment is a corridor on the first floor of the experimental building, about 3 m wide and quite long, with corners and a relatively empty hall; the first floor covers an area of about 250 × 100 m². The scene of this experiment is shown in Figure 5, where (a) is the satellite map of the experimental building, in which the outline of the corridor is clearly visible, and (b-d) show scenes inside the corridor. The robot is controlled to traverse every scene in the corridor as completely as possible, with the linear velocity maintained at 0.5 m/s and the angular velocity at 0.5 rad/s. A large number of fixed marker points are arranged inside the corridor to obtain the true position of the robot, so that its positioning error can be analyzed.
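    The marker-based evaluation above amounts to comparing the estimated pose at each surveyed marker against the marker's true position. A minimal sketch, with invented coordinates (not the paper's data), computing per-marker error and its RMSE:

    ```python
    import math

    def position_errors(estimates, markers):
        """Euclidean error between estimated poses and surveyed marker positions."""
        return [math.dist(e, m) for e, m in zip(estimates, markers)]

    def rmse(errors):
        """Root-mean-square of the per-marker errors."""
        return math.sqrt(sum(e * e for e in errors) / len(errors))

    markers   = [(0.0, 0.0), (3.0, 0.0), (3.0, 10.0)]    # surveyed marker positions (m)
    estimates = [(0.05, 0.0), (3.0, -0.1), (2.9, 10.0)]  # SLAM output at each marker
    errs = position_errors(estimates, markers)
    print(round(rmse(errs), 4))  # → 0.0866
    ```

    With loop closure disabled, this error grows with distance traveled, which is exactly the cumulative drift the experiment is designed to expose.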

    A Tightly Coupled LiDAR-Inertial SLAM for Perceptually Degraded Scenes

    We conducted a series of quantitative and qualitative experiments on the performance of the proposed tightly coupled LiDAR-IMU fusion SLAM algorithm and compared the results with those of other state-of-the-art LiDAR-SLAM methods. All methods were tested under the same conditions. The hardware platform was an inspection robot with sensors and an on-board computer, as shown in Figure 2. The sensor suite consisted of a LiDAR (Velodyne VLP-16) with a sampling frequency of 10 Hz and an IMU (HiPNUC CH110) with a sampling frequency of 200 Hz. The on-board computer was an Intel Core i7 with a 2.7 GHz clock, eight cores, and 16 GB of RAM. All algorithms were implemented in C++ and executed on Ubuntu 18.04 using the Melodic version of ROS.
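    A tightly coupled LiDAR-IMU front end first has to group the 200 Hz IMU samples by the 10 Hz scan intervals before pre-integrating them between scans. A minimal sketch of that bucketing with invented timestamps (not the paper's code):

    ```python
    def imu_between_scans(imu_stamps, scan_stamps):
        """Return, per scan interval [t_k, t_{k+1}), the IMU stamps inside it."""
        buckets = []
        for t0, t1 in zip(scan_stamps, scan_stamps[1:]):
            buckets.append([t for t in imu_stamps if t0 <= t < t1])
        return buckets

    imu  = [k * 0.005 for k in range(41)]   # 200 Hz samples over 0.2 s
    scan = [0.0, 0.1, 0.2]                  # 10 Hz LiDAR scan boundaries
    buckets = imu_between_scans(imu, scan)
    print([len(b) for b in buckets])        # → [20, 20]
    ```

    Each 0.1 s scan interval thus receives 20 IMU measurements, which are then pre-integrated into a single relative-motion constraint between consecutive scans.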

    An Integrated SLAM Framework for Industrial AMRs with Multiple 2D
    LiDAR-Visual-Inertial based on Binary Pose Fusion

    4.1. Experimental and Evaluation Setup
    In this part, we describe the evaluation of the proposed SLAM framework. The experimental setup
    involves two platforms: the first is a sensor rig in our laboratory, used to analyze performance and
    robustness; the second is the double-holonomic robot in a manufacturing setting, used to confirm the
    overall performance.
    4.1.1. Experimental Setup
    A double-holonomic mobile robot platform with eight mecanum wheels is used to evaluate the performance
    in manufacturing, as shown in Fig. 2. The robot is equipped with two SICK 2D LiDAR sensors,
    two ZED 2 stereo cameras, and an IMU. We used a Linux-based operating system and ROS Melodic with C++ on
    the embedded computer, an NVIDIA Jetson AGX Xavier, to run the SLAM system.
    To analyze performance and robustness, we also built a sensor setup similar to the actual robot,
    with two 2D LiDAR sensors, two ZED 2 stereo cameras, and an Xsens MTi-10 IMU, as shown in Fig. 9. Table 1
    lists the characteristics of the sensors used in the proposed method, which are not the high-performance
    sensors used on the actual holonomic robot.
    We implement Algorithm 2 following the pipeline described in Fig. 3. The SLAM system is handled by
    multiple parallel threads for synchronization, laser merging and filtering, state estimation, and the SLAM solver.
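    The multi-threaded layout described above, with stages passing measurements through queues, can be sketched as follows. The structure is invented for illustration (real implementations would use ROS callbacks and C++), but the producer/consumer pattern is the same:

    ```python
    import queue, threading

    raw_q, merged_q = queue.Queue(), queue.Queue()

    def laser_merger():
        """Merge scans from the two 2D LiDARs into one measurement (placeholder)."""
        while True:
            item = raw_q.get()
            if item is None:            # shutdown sentinel: forward and exit
                merged_q.put(None)
                return
            merged_q.put(("merged", item))

    def state_estimator(results):
        """Consume merged scans and emit (placeholder) state updates."""
        while True:
            item = merged_q.get()
            if item is None:
                return
            results.append(item)

    results = []
    threads = [threading.Thread(target=laser_merger),
               threading.Thread(target=state_estimator, args=(results,))]
    for t in threads:
        t.start()
    for scan in range(3):               # three incoming raw scans
        raw_q.put(scan)
    raw_q.put(None)                     # signal shutdown down the pipeline
    for t in threads:
        t.join()
    print(results)
    ```

    The queues decouple the stages so a slow SLAM solver never blocks sensor ingestion, which is the point of running the stages in parallel threads.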

    Stereo Visual Inertial LiDAR Simultaneous Localization and Mapping

    A. Platform and software
    We built a platform (Fig. 1(a)) with two megapixel
    cameras, a 16-scan-line LiDAR, an IMU (400 Hz), and
    a 4 GHz computer (with 4 physical cores). We built a
    custom microcontroller-based time-synchronization circuit
    that synchronizes the cameras, LiDAR, IMU, and computer
    by simulating GPS time signals. The software pipeline is
    implemented in C++ with a ROS communication interface. We
    use the GTSAM library [40] to build the fixed-lag smoother
    in the VIO. For loop closure, we use the ICP module from
    LibPointMatcher [41] to align point clouds, DBoW3 [42] to
    build the visual dictionary, and the iSAM2 [36] implementation
    in GTSAM [40] to conduct global optimization.
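    The ICP step used here for loop closure alternates between matching nearest points and solving the best-fit rigid transform in closed form via SVD. A toy 2D numpy sketch of that idea (not the LibPointMatcher implementation; point sets and the transform are invented):

    ```python
    import numpy as np

    def best_fit_transform(src, tgt):
        """Least-squares rigid transform (R, t) with tgt_i ≈ R @ src_i + t (Kabsch)."""
        cs, ct = src.mean(axis=0), tgt.mean(axis=0)
        H = (src - cs).T @ (tgt - ct)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:        # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, ct - R @ cs

    def icp(src, tgt, iters=20):
        """Iterate nearest-neighbour matching and best-fit alignment."""
        cur = src.copy()
        for _ in range(iters):
            d = ((cur[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)
            matched = tgt[d.argmin(axis=1)]     # brute-force correspondences
            R, t = best_fit_transform(cur, matched)
            cur = cur @ R.T + t
        return best_fit_transform(src, cur)     # accumulated transform

    rng = np.random.default_rng(0)
    src = rng.uniform(-5, 5, size=(60, 2))
    theta, t_true = 0.05, np.array([0.2, -0.1])  # small loop-closure offset
    R_true = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
    tgt = src @ R_true.T + t_true
    R, t = icp(src, tgt)
    print(np.rad2deg(np.arctan2(R[1, 0], R[0, 0])))  # recovered yaw, degrees
    ```

    Real loop closure feeds the resulting relative pose, not the raw clouds, into the iSAM2 pose graph as a constraint.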

    On Visual-Aided LiDAR-Inertial
    Odometry System in Challenging
    Subterranean Environments
    Hengrui Zhang
    CMU-RI-TR-21-35
    August 11, 2021
    https://www.ri.cmu.edu/app/uploads/2021/08/Henry_Thesis_Final.pdf
    In the DS Drone setup, we choose IMU internal clock time as our time server,
    and the other sensors synchronize their timestamps to the IMU time server.
    To synchronize the Velodyne to the IMU clock, the PPS signal from the IMU
    (based on its internal clock) is directed to the Velodyne. From the NUC computer
    side, when it receives a PPS IMU data packet, it creates a fake NMEA GPS message
    with a timestamp rounded to the closest integral second and sends the NMEA message
    to the Velodyne through an Ethernet connection.
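    The "fake" NMEA sentence can be sketched as below. The helper names and the fixed zero lat/lon fields are invented for illustration; the thesis only states that the timestamp is rounded to an integral second. The XOR checksum over the characters between `$` and `*` is the standard NMEA 0183 convention the Velodyne expects:

    ```python
    import time

    def nmea_checksum(body):
        """XOR of all characters between '$' and '*', as two uppercase hex digits."""
        cs = 0
        for ch in body:
            cs ^= ord(ch)
        return f"{cs:02X}"

    def fake_gprmc(unix_seconds):
        """Build a GPRMC sentence whose time field is rounded to a whole second."""
        t = time.gmtime(round(unix_seconds))
        hhmmss = time.strftime("%H%M%S", t)
        ddmmyy = time.strftime("%d%m%y", t)
        # position/speed fields are dummies; only the time matters for sync
        body = f"GPRMC,{hhmmss}.00,A,0000.00,N,00000.00,E,0.0,0.0,{ddmmyy},,,A"
        return f"${body}*{nmea_checksum(body)}"

    print(fake_gprmc(1609459199.6))   # rounds up to 2021-01-01 00:00:00 UTC
    ```

    Paired with the PPS edge, this sentence is all the Velodyne needs to stamp its points in the IMU time frame.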
    For the uEye camera, the system uses a Teensy 3.2 microcontroller to manage the
    camera external triggering signal. The IMU PPS signal is wired to a GPIO pin on
    the Teensy so it has IMU internal clock information available. Upon image request,
    the uEye driver sends the start-camera-trigger command to the Teensy, and the
    Teensy then generates a 15 Hz camera trigger signal starting from the next IMU PPS.
    This ensures that the camera grabs frames at 15 Hz and that the first image aligns with
    an integer second in the IMU time frame. The camera internal clock timestamp of
    the first image is also stored, and used as the base stamp to calculate incremental
    time differences for the following images (as shown in figure 3.3). Note that this
    scheme assumes that the total flight time of the drone is short enough that the camera
    internal clock and IMU internal clock (server) do not drift too much. We do confirm
    this is the case for the DS drones, given their 15-minute flight time.
    The above synchronization scheme avoids modifying the system clock, which takes
    time when synchronizing through tools like Chrony and can disrupt other processes
    that rely on the computer's system time. This is especially important when launching
    drones autonomously in the field on short notice.
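    The camera-timestamp scheme in figure 3.3 can be sketched as follows, with invented numbers: the first frame is pinned to an integer second of IMU time, and later frames get that base plus the increment measured on the camera's own clock.

    ```python
    def imu_time_for_frames(cam_stamps, first_frame_imu_second):
        """Map camera-clock stamps to IMU time via the stored base stamp."""
        base = cam_stamps[0]
        return [first_frame_imu_second + (t - base) for t in cam_stamps]

    # Camera clock started at an arbitrary value; the trigger runs at 15 Hz,
    # so consecutive stamps are ~1/15 s apart on the camera clock.
    cam = [7.3210, 7.3877, 7.4543]
    print([round(t, 4) for t in imu_time_for_frames(cam, 1200)])
    # → [1200.0, 1200.0667, 1200.1333]
    ```

    This only stays accurate while camera and IMU clocks drift little relative to each other, which is the short-flight assumption stated above.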
    3.3 Image Quality
    Image quality is crucial for visual odometry systems. To get the best out of our
    camera sensor, we did some custom tuning to the image quality. In this section,
    we will present the camera model, exposure compensation, and image brightness
    adjustment to achieve good image quality.
    Camera Sensor and Lens Model
    The main camera on DS drones is the uEye UI-3271LE-C-HQ with a Sony IMX265
    imaging sensor. It has a 1/1.8” global-shutter sensor, as shown in figure 3.4. Compared
    with its Intel RealSense [11] counterparts on the ground vehicles, the global-shutter
    sensor reduces the skew of objects in images under fast motion. Since rolling-shutter
    effects would otherwise require online compensation, global-shutter cameras are
    considered the better choice for state-estimation purposes.
    We choose the Lensagon BF5M2023S23C fisheye lens for the camera. The wide-angle
    fisheye lens gives us a 195◦ horizontal and vertical field of view. As shown in
    figure 3.5, due to the limited sensor size, some cutoff of the FOV appears in the vertical
    direction, but the overall FOV satisfies the requirements and gives us a better view
    of the scene.
    To model the lens for VIO purposes, we adopt a pinhole camera model
    with the equidistant lens distortion model [10]. The equidistant model fits our
    lens distortion well. An undistorted image is shown in figure 3.5.
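    A minimal sketch of the pinhole-plus-equidistant (Kannala-Brandt) projection adopted above; the intrinsics and distortion coefficients here are invented placeholders, not the DS drone's calibration:

    ```python
    import math

    def project_equidistant(p, fx, fy, cx, cy, k=(0.0, 0.0, 0.0, 0.0)):
        """Project a 3D camera-frame point with equidistant fisheye distortion."""
        x, y, z = p
        r = math.hypot(x, y)
        theta = math.atan2(r, z)           # angle from the optical axis
        theta_d = theta * (1 + k[0] * theta**2 + k[1] * theta**4
                             + k[2] * theta**6 + k[3] * theta**8)
        scale = theta_d / r if r > 1e-12 else 1.0 / z   # on-axis limit
        return (fx * scale * x + cx, fy * scale * y + cy)

    # A point 30 degrees off-axis, with zero distortion coefficients
    u, v = project_equidistant((math.tan(math.radians(30)), 0.0, 1.0),
                               fx=300, fy=300, cx=320, cy=240)
    print(round(u, 2), round(v, 2))   # → 477.08 240.0
    ```

    With zero coefficients the image radius is simply f·θ, which is why the equidistant model keeps resolution usable out to the edges of a 195° fisheye, where a plain pinhole projection would diverge.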

    Super Odometry: IMU-centric LiDAR-Visual-Inertial Estimator for
    Challenging Environments

    We collected our test dataset with Team Explorer’s DS
    drones (Fig.4(e)), deployed in the DARPA Subterranean
    Challenge. It has a multi-sensor setup including a Velodyne
    VLP-16 LiDAR, an Xsens IMU, a uEye camera with a wide-angle fisheye lens, and an Intel NUC onboard computer.
    The data sequences were designed to include both visually
    and geometrically degraded scenarios, which are particularly
    troublesome for camera- and LiDAR-based state estimation.

    LVI-SAM
    This repository contains code for a lidar-visual-inertial odometry and mapping system, which combines the advantages of LIO-SAM and VINS-Mono at a system level.
    The datasets used in the paper can be downloaded from Google Drive. The data-gathering sensor suite includes: Velodyne VLP-16 lidar, FLIR BFS-U3-04S2M-CS camera, MicroStrain 3DM-GX5-25 IMU, and Reach RS+ GPS.

    https://drive.google.com/drive/folders/1q2NZnsgNmezFemoxhHnrDnp1JV_bqrgV?usp=sharing
    Note that the images in the provided bag files are in compressed format. So a decompression command is added at the last line of launch/module_sam.launch. If your own bag records the raw image data, please comment this line out.

    LiDAR-Visual-Inertial Odometry Based on Optimized Visual
    Point-Line Features

    To evaluate the performance of the algorithm in an outdoor environment, the Hong Kong
    dataset was used, and the algorithm was compared with other similar state-of-the-art
    algorithms. The experimental equipment and environment are shown in Figure 14. The
    sensor models are as follows: the camera is a FLIR BFLY-U3-23S6C-C, the LiDAR is a
    Velodyne HDL-32E, the IMU is an Xsens MTi-10, and the GNSS receiver is a u-blox M8T.
    In addition, we utilized a high-grade RTK GNSS/INS integrated navigation system, the
    NovAtel SPAN-CPT, as the ground truth.


  • Original article: https://blog.csdn.net/qq_40247880/article/details/133822313