Affiliation:
1. Western Michigan University
Abstract
<div class="section abstract"><div class="htmlview paragraph">Sensor calibration plays an important role in determining overall navigation
accuracy of an autonomous vehicle (AV). Calibrating the AV’s perception sensors,
typically, involves placing a prominent object in a region visible to the
sensors and then taking measurements to further analyses. The analysis involves
developing a mathematical model that relates the AV’s perception sensors using
the measurements taken of the prominent object. The calibration process has
multiple high-precision steps that tend to be tedious and
time-consuming. Worse, the calibration has to be repeated to determine new extrinsic
parameters whenever either sensor moves. Extrinsic calibration
approaches for LiDAR and camera depend on objects or landmarks with distinct
features, like hard edges or large planar faces that are easy to identify in
measurements. The current work proposes a method for extrinsically calibrating a
LiDAR and a forward-facing monocular camera using 3D and 2D bounding boxes. The
proposed algorithm was tested using the KITTI dataset and experimental data. The
rotation matrix is evaluated by calculating its Euler angles and comparing them
to the Euler angles that describe the ideal angular orientation of the
LiDAR with respect to the camera. The comparison shows that the calibration
algorithm's rotation matrix closely approximates both the ideal and the
KITTI dataset rotation matrices. The corresponding translation vector is shown
to be close to expected values as well. The results from the experimental data
were evaluated and verified by projecting cluster measurements of the prominent
objects onto the corresponding images.
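As a rough illustration of the evaluation step described in the abstract, the sketch below recovers ZYX (yaw-pitch-roll) Euler angles from a rotation matrix so they can be compared against the angles describing the ideal orientation of the LiDAR with respect to the camera. The ZYX convention, the function names, and the placeholder angle values are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def rot_zyx(yaw, pitch, roll):
    """Build a rotation matrix from ZYX Euler angles (radians): R = Rz @ Ry @ Rx."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def euler_zyx(R):
    """Recover (yaw, pitch, roll) in radians from a 3x3 rotation matrix (away from gimbal lock)."""
    yaw = np.arctan2(R[1, 0], R[0, 0])
    pitch = np.arcsin(-np.clip(R[2, 0], -1.0, 1.0))
    roll = np.arctan2(R[2, 1], R[2, 2])
    return yaw, pitch, roll

# Hypothetical comparison: Euler angles of an estimated matrix vs. ideal angles (degrees).
R_est = rot_zyx(*np.radians([-89.5, 0.4, -88.8]))  # placeholder estimated rotation
ideal = np.array([-90.0, 0.0, -89.0])              # placeholder ideal orientation
print(np.degrees(euler_zyx(R_est)) - ideal)        # per-angle error in degrees
```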
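Similarly, the verification step of projecting LiDAR cluster measurements onto the corresponding images amounts to applying the estimated extrinsics followed by a pinhole projection. Below is a minimal sketch assuming Nx3 points in the LiDAR frame, an estimated rotation R and translation t, and a 3x3 camera intrinsic matrix K; all names are illustrative rather than taken from the paper.

```python
import numpy as np

def project_lidar_to_image(pts_lidar, R, t, K):
    """Project Nx3 LiDAR-frame points to pixel coordinates via extrinsics (R, t) and intrinsics K."""
    pts_cam = pts_lidar @ R.T + t     # transform points into the camera frame
    front = pts_cam[:, 2] > 0.0       # keep only points in front of the image plane
    uvw = pts_cam[front] @ K.T        # pinhole projection to homogeneous pixel coords
    return uvw[:, :2] / uvw[:, 2:3]   # perspective divide -> (u, v) pixels

# Hypothetical usage: overlay a prominent-object cluster on the image to inspect calibration quality.
# K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]])  # from intrinsic calibration
# uv = project_lidar_to_image(cluster_points, R_est, t_est, K)
```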