The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Volume XLII-1/W1
https://doi.org/10.5194/isprs-archives-XLII-1-W1-543-2017
31 May 2017

ASSESSING TEMPORAL BEHAVIOR IN LIDAR POINT CLOUDS OF URBAN ENVIRONMENTS

J. Schachtschneider, A. Schlichting, and C. Brenner

Keywords: LiDAR, Mobile Mapping, Alignment, Strip Adjustment, Change Detection

Abstract. Self-driving cars and robots that operate autonomously over long periods of time need high-precision, up-to-date models of their changing environment. The main challenge in creating long-term maps of dynamic environments is to identify changes and adapt the map continuously. Changes can occur abruptly, gradually, or even periodically.

In this work, we investigate how dense mapping data from several epochs can be used to identify the temporal behavior of the environment. This approach anticipates a possible future scenario in which a large fleet of vehicles is equipped with sensors that continuously capture the environment. The data are sent to a cloud-based infrastructure, which aligns all datasets geometrically and subsequently runs scene analyses on them, among them the analysis of temporal changes in the environment.

Our experiments are based on a LiDAR mobile mapping dataset consisting of 150 scan strips (a total of about one billion points), obtained in multiple epochs. Parts of the scene are covered by up to 28 scan strips, and the time difference between the first and last epoch is about one year. To process the data, the scan strips are aligned using an overall bundle adjustment, which estimates the surface (about one billion surface element unknowns) as well as 270,000 unknowns for the adjustment of the exterior orientation parameters. After this adjustment, the surface misalignment is usually below one centimeter. In the next step, we segment the point clouds using a region growing algorithm. The segmented objects and the aligned data are then used to compute an occupancy grid, which is filled by tracing each individual LiDAR ray from the scan head to every point of a segment. As a result, we can assess the behavior of each segment in the scene and remove voxels belonging to temporal objects from the global occupancy grid.
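
The abstract does not give implementation details for the ray-traced occupancy grid, so the following is only a minimal sketch under stated assumptions: a uniform voxel grid, an Amanatides-Woo style grid traversal from the scan head to each end point, per-epoch counting of free and occupied observations, and a simple per-voxel rule for flagging temporal content. The grid resolution, function names, and classification threshold are illustrative and not taken from the paper.

```python
# Minimal sketch (not the authors' implementation): fill a voxel occupancy
# grid by tracing each LiDAR ray from the scan head to its end point, count
# free/occupied observations per epoch, and flag voxels that are occupied in
# some epochs but seen through in others as belonging to temporal objects.
import numpy as np
from collections import defaultdict

VOXEL_SIZE = 0.2  # assumed grid resolution in metres


def voxel_of(p):
    """Integer voxel index containing a 3D point."""
    return tuple(int(c) for c in np.floor(np.asarray(p, float) / VOXEL_SIZE))


def traverse(origin, endpoint):
    """Voxels crossed by the ray origin -> endpoint (grid line traversal)."""
    origin = np.asarray(origin, float)
    endpoint = np.asarray(endpoint, float)
    direction = endpoint - origin
    length = np.linalg.norm(direction)
    if length == 0.0:
        return [voxel_of(origin)]
    direction /= length
    voxel = np.array(voxel_of(origin))
    last = np.array(voxel_of(endpoint))
    step = np.sign(direction).astype(int)
    # distance along the ray to the next voxel boundary in each axis
    next_boundary = (voxel + (step > 0)) * VOXEL_SIZE
    with np.errstate(divide="ignore", invalid="ignore"):
        t_max = np.where(direction != 0, (next_boundary - origin) / direction, np.inf)
        t_delta = np.where(direction != 0, VOXEL_SIZE / np.abs(direction), np.inf)
    visited = [tuple(int(c) for c in voxel)]
    while not np.array_equal(voxel, last) and np.min(t_max) <= length:
        axis = int(np.argmin(t_max))          # axis whose boundary is hit next
        voxel[axis] += step[axis]
        t_max[axis] += t_delta[axis]
        visited.append(tuple(int(c) for c in voxel))
    if visited[-1] != voxel_of(endpoint):     # guard against numerical early exit
        visited.append(voxel_of(endpoint))
    return visited


def accumulate(grid, origin, endpoint, epoch):
    """Count free (traversed) and occupied (end point) observations per epoch."""
    end_voxel = voxel_of(endpoint)
    for v in traverse(origin, endpoint):
        key = "occupied" if v == end_voxel else "free"
        grid[v][epoch][key] += 1


def temporal_voxels(grid, min_epochs=2):
    """Voxels occupied in some epochs but only seen through in others."""
    flagged = []
    for v, epochs in grid.items():
        occupied_in = {e for e, c in epochs.items() if c["occupied"] > 0}
        free_only_in = {e for e, c in epochs.items()
                        if c["occupied"] == 0 and c["free"] > 0}
        if occupied_in and free_only_in and len(epochs) >= min_epochs:
            flagged.append(v)
    return flagged


# Toy usage: in epoch 0 the ray hits an object at x ~ 5 m, in epoch 1 the
# object is gone and the ray passes through the same voxel to x ~ 10 m.
grid = defaultdict(lambda: defaultdict(lambda: {"free": 0, "occupied": 0}))
accumulate(grid, origin=(0.05, 0.05, 1.05), endpoint=(5.05, 0.05, 1.05), epoch=0)
accumulate(grid, origin=(0.05, 0.05, 1.05), endpoint=(10.05, 0.05, 1.05), epoch=1)
print(temporal_voxels(grid))  # -> [(25, 0, 5)], the voxel of the removed object
```

Note that the abstract assesses temporal behavior per segment and then removes the corresponding voxels from the global occupancy grid; the per-voxel rule above only illustrates the ray-tracing and counting principle, not the segment-level decision.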