Point Cloud Semantic Segmentation Network Based on Multi-Scale Feature Fusion

Abstract
The semantic segmentation of small objects in point clouds is currently one of the most demanding tasks in photogrammetry and remote sensing applications. Multi-resolution feature extraction and fusion can significantly enhance object classification and segmentation, which is why it is widely used in the image domain. Motivated by this, we propose a point cloud semantic segmentation network based on multi-scale feature fusion (MSSCN) that aggregates features from point clouds of different densities to improve semantic segmentation performance. In our method, random downsampling is first applied to obtain point clouds of different densities. A Spatial Aggregation Net (SAN) is then employed as the backbone network to extract local features from these point clouds, and the feature descriptors extracted at the different scales are concatenated. Finally, a loss function combines the semantic information from the point clouds of different densities to optimize the network. In experiments on the S3DIS and ScanNet datasets, MSSCN achieved accuracies of 89.80% and 86.3%, respectively, outperforming the recent methods PointNet, PointNet++, PointCNN, PointSIFT, and SAN.
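The pipeline described above (random downsampling to several densities, per-scale local feature extraction, and per-point concatenation of the resulting descriptors) can be sketched as follows. This is a minimal illustration only: the paper's SAN backbone is replaced here by a toy local descriptor (mean offset to the k nearest neighbours), and the function names, ratios, and nearest-neighbour feature propagation are assumptions for the sketch, not the authors' implementation.

```python
import numpy as np

def random_downsample(points, ratio, rng):
    """Randomly keep a fraction `ratio` of the points."""
    n = max(1, int(len(points) * ratio))
    idx = rng.choice(len(points), size=n, replace=False)
    return points[idx]

def toy_local_features(points, k=8):
    """Stand-in for the SAN backbone: per-point mean offset to its
    k nearest neighbours (a crude local-geometry descriptor)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    knn = np.argsort(d, axis=1)[:, 1:k + 1]  # skip the point itself
    return (points[knn] - points[:, None, :]).mean(axis=1)

def propagate(sub_points, sub_feats, full_points):
    """Copy each subsampled point's feature to its nearest
    full-resolution point, so features can be concatenated per point."""
    d = np.linalg.norm(full_points[:, None, :] - sub_points[None, :, :], axis=-1)
    return sub_feats[d.argmin(axis=1)]

def multi_scale_features(points, ratios=(1.0, 0.5, 0.25), seed=0):
    """Extract features at several densities and concatenate per point."""
    rng = np.random.default_rng(seed)
    feats = []
    for r in ratios:
        sub = points if r == 1.0 else random_downsample(points, r, rng)
        k = min(8, len(sub) - 1)
        feats.append(propagate(sub, toy_local_features(sub, k), points))
    return np.concatenate(feats, axis=1)  # shape (N, 3 * len(ratios))

pts = np.random.default_rng(1).normal(size=(64, 3))
fused = multi_scale_features(pts)
print(fused.shape)  # → (64, 9)
```

The fused per-point descriptor would then feed a segmentation head; in the paper, a joint loss over the different densities drives network optimization.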
Funding Information
  • National Natural Science Foundation of China (41971424, 61701191)
  • Key Technical Project of Xiamen Ocean Bureau (18CZB033HJ11, 3502Z20191018, 3502Z20201007, 3502Z20191022, 3502Z20203057, JAT190321, JAT190318, JAT190315)
