The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Volume XLII-2/W17
https://doi.org/10.5194/isprs-archives-XLII-2-W17-355-2019
29 Nov 2019

SEMANTIC ENRICHMENT OF POINT CLOUD BY AUTOMATIC EXTRACTION AND ENHANCEMENT OF 360° PANORAMAS

A. Tabkha, R. Hajji, R. Billen, and F. Poux

Keywords: 3D point cloud, Semantic information, Feature extraction, Point cloud representation, Deep learning, Image recognition

Abstract. The raw nature of point clouds is an important challenge for their direct exploitation in architecture, engineering and construction applications. In particular, their lack of semantics hinders their use in automatic workflows (Poux, 2019). In addition, the volume and the irregular structure of point clouds make it difficult to classify datasets directly, automatically and efficiently, especially when compared to state-of-the-art 2D raster classification. Recently, with advances in deep learning models such as convolutional neural networks (CNNs), the performance of image-based classification of remote sensing scenes has improved considerably (Chen et al., 2018; Cheng et al., 2017). In this research, we examine a simple and innovative approach that represents large 3D point clouds through multiple 2D projections in order to leverage learning approaches based on 2D images. Specifically, we propose an automatic process for extracting 360° panoramas and enhancing them so that raster-based methods can be used for domain-specific semantic enrichment. A rigorous characterization is essential for point cloud classification, especially given the wide variety of application domains for 3D point clouds. To test the adequacy of the method and its potential for generalization, several tests were performed on different datasets. The developed semantic augmentation algorithm uses only the X, Y, Z attributes and the camera positions as inputs.
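To make the core projection step concrete, the following is a minimal sketch (not the authors' implementation) of how points described only by X, Y, Z coordinates and a camera position can be mapped onto an equirectangular 360° panorama, the standard parameterization for such images. The function name, parameters and image resolution are hypothetical assumptions chosen for illustration.

```python
import numpy as np

def project_to_panorama(points, camera_pos, width=4096, height=2048):
    """Map 3D points onto a 360-degree equirectangular panorama.

    points     : (N, 3) array of X, Y, Z coordinates.
    camera_pos : (3,) array, the scanner/camera position.
    Returns (N, 2) integer pixel coordinates (col, row) and per-point range.
    NOTE: hypothetical sketch; resolution and names are assumptions.
    """
    # Translate points into the camera-centred frame.
    rel = points - camera_pos
    x, y, z = rel[:, 0], rel[:, 1], rel[:, 2]

    # Spherical angles: azimuth in [-pi, pi], elevation in [-pi/2, pi/2].
    r = np.linalg.norm(rel, axis=1)
    azimuth = np.arctan2(y, x)
    elevation = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))

    # Equirectangular mapping: azimuth -> column, elevation -> row.
    col = ((azimuth + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    row = ((np.pi / 2 - elevation) / np.pi * (height - 1)).astype(int)
    return np.stack([col, row], axis=1), r

# Example: rasterize ranges into a depth panorama (nearest point wins).
pts = np.random.rand(100_000, 3) * 10          # placeholder point cloud
cam = np.array([5.0, 5.0, 1.5])                # assumed camera position
pix, rng = project_to_panorama(pts, cam, width=1024, height=512)
depth = np.full((512, 1024), np.inf)
np.minimum.at(depth, (pix[:, 1], pix[:, 0]), rng)
```

Keeping the nearest range per pixel acts as a simple z-buffer, resolving the many-points-to-one-pixel conflicts that arise when a dense cloud is flattened into a raster; the resulting depth (or intensity) panorama can then be fed to a 2D image classifier.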