Please use this identifier to cite or link to this item: http://hdl.handle.net/10397/88263
Title: Relation-based modelling of point cloud for 3D city reconstruction
Authors: Li, Yuan
Degree: Ph.D.
Issue Date: 2020
Abstract: Point cloud data obtained from different platforms, such as aerial laser scanning (ALS), mobile laser scanning (MLS) and photogrammetric point clouds generated from dense image matching (DIM), can intuitively present the three-dimensional (3D) geometric features of objects and are therefore important data sources for 3D city reconstruction. In recent years, with the rising demand for digital and smart cities, a great deal of research has investigated the semantic interpretation of point clouds and 3D city reconstruction from such data sources. However, most of these studies are applicable only to relatively simple urban scenes with low-rise buildings. Considering the rapid development of cities and the importance of city planning and management, modern cities, especially metropolises such as Hong Kong, urgently need effective 3D city reconstruction methods. However, the complexity of urban scenes in modern cities (e.g., dense environments, diverse object types and high-rise buildings with manifold structures) and the inevitable defects of point clouds (e.g., noise, data loss and density anisotropy) make the automatic modelling of point clouds a challenging task.
To overcome these difficulties, this research investigated the multiple relations contained in point clouds and exploited them for point cloud interpretation and 3D city reconstruction. These multiple relations comprise geometric, contextual and topological relations. Geometric relations refer to local homogeneities of geometric properties, such as density isotropy, normal consistency, planarity, linearity and scattering. Contextual relations describe neighbouring or adjacency relationships that can be associated with the specific labels assigned to entities. Topological relations guarantee topological correctness during the generation of watertight and manifold 3D models that conform to CityGML. These multiple relations are comprehensively incorporated into the point cloud modelling process in three stages: segmentation, classification and 3D reconstruction.
Based on the assumption that ground objects can be regarded as combinations of simple shapes, the segmentation of point clouds partitions objects into groups of linear, planar and scatter shapes. The segmentation method first eliminates outliers with a filter that is robust to varying point density and then generates supervoxels with adaptive sizes based on point-level homogeneities. Relations between the supervoxels are then derived and used to cluster adjacent supervoxels with similar geometric properties into structural components. This segmentation interprets the point clouds at the geometric level and provides essential clues for subsequent semantic interpretation, which is of particular importance when the point clouds (such as MLS data) capture abundant details of the objects.
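As a concrete illustration of the segmentation ideas above, the following is a minimal Python sketch (not taken from the thesis) of two of the described steps: a density-adaptive outlier filter that compares each point's mean k-nearest-neighbour distance with the local spacing of its neighbourhood, and a union-find merge of adjacent supervoxels whose normals are consistent. All function and parameter names (remove_outliers, merge_supervoxels, k, ratio, angle_thresh_deg) are illustrative assumptions; the thesis' actual method relies on richer supervoxel relations than a single normal-consistency test.

import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(points, k=16, ratio=2.0):
    # Drop points whose mean k-NN distance greatly exceeds the local spacing.
    # Judging each point against its own neighbourhood (rather than a global
    # threshold) keeps the filter robust to varying point density.
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k + 1)   # column 0 is the point itself
    mean_knn = dists[:, 1:].mean(axis=1)       # mean distance to the k neighbours
    local_median = np.median(mean_knn[idx[:, 1:]], axis=1)
    return points[mean_knn < ratio * local_median]

def merge_supervoxels(normals, adjacency, angle_thresh_deg=10.0):
    # Union-find merge of adjacent supervoxels with consistent normals;
    # each resulting label corresponds to one structural component.
    parent = list(range(len(normals)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]      # path halving
            i = parent[i]
        return i

    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    for i, j in adjacency:
        if abs(np.dot(normals[i], normals[j])) > cos_thresh:
            parent[find(i)] = find(j)
    return np.array([find(i) for i in range(len(normals))])

In the actual pipeline, the merge decision would also consider planarity, linearity and scattering of the supervoxels, not only normal consistency as in this sketch.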
In the classification stage, structural information describing the relations between structural components is derived at various scales. Such information is of great help in distinguishing between objects with global or local similarities. In this research, structural, geometric and contextual information is comprehensively incorporated and encoded into a conditional random field (CRF) to make unary and pairwise inferences. High-order potentials defined over regions that are independent of connection relationships are also introduced into the CRF to eliminate regional label noise. The classification finally outputs a semantically labelled point cloud that is spatially smooth.
In the 3D reconstruction stage, points labelled as building are clustered into individual buildings and used as input to produce polygonal 3D models. To avoid complex topological computation, a space-partition-and-approximation strategy is adopted. The building surface is first approximated by a set of planar primitives that are refined with several geometric relation-based rules. With these planar primitives, the space occupied by the bounding box of the building is partitioned into non-overlapping convex cells based on a half binary space partition tree. The 3D space occupied by the building is approximated by the cells lying inside the building, and the interfaces between the inside and outside cells constitute the surface of the final building model. To ensure optimal selection of the inside cells, topological relations are extracted as interface facets and intersection edges and introduced into a global energy function, whose minimisation can be cast as a linear programming problem with binary integer variables. The surface components of the building are generated from the selected cells, and each is assigned a specific surface type defined in CityGML. The relationships between the surface components, e.g., adjacency, parallelism and perpendicularity, are determined from the relationships between the cell, facet and edge complexes.
Experiments with point clouds from three representative data sources, including two MLS point clouds (in Paris and Hong Kong) and a photogrammetric point cloud (in Hong Kong), were carried out to evaluate the performance of the proposed methods in various scenarios. Nine and eleven classes were recognised from the two laser scanning point clouds with overall accuracies of 97.13% and 95.79%, respectively, indicating the effectiveness of the proposed classification method. For the photogrammetric point cloud, the classification result for the building class was evaluated and achieved a good F1-score of 82.40%. The buildings extracted from the photogrammetric data were further used to generate 3D building models in CityGML format via the proposed reconstruction method. The reconstruction results were qualitatively and quantitatively compared with those of previous studies; the comparisons suggested that the proposed method performed best in terms of robustness and produced regular, geometrically accurate building models, with an average root-mean-square error of less than 0.9 m.
This research investigates the use of multiple relations in the pipeline of segmentation, classification and modelling of unordered point clouds for 3D city reconstruction. The developed pipeline shows promising ability to interpret point clouds and reconstruct 3D building models in complex urban scenes.
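To make the reconstruction stage above more tangible, the following is a minimal, hedged Python sketch of how the selection of inside cells could be posed as a binary integer program over cell and interface-facet variables, in the spirit of the global energy described in the abstract. The data term data_cost, the smoothness weight lam and the use of scipy.optimize.milp are assumptions made for illustration; the thesis' actual formulation additionally involves intersection edges and further topology-driven terms.

import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def select_inside_cells(data_cost, facets, facet_area, lam=1.0):
    # data_cost  : (C,) cost of labelling each convex cell "inside"
    #              (negative values favour inside); a stand-in for a
    #              point-to-primitive coverage score.
    # facets     : list of (i, j) index pairs of cells sharing a facet
    # facet_area : (F,) area of each interface facet
    # Returns a boolean array, True where a cell is selected as inside.
    C, F = len(data_cost), len(facets)
    n = C + F                                   # cell variables x, then facet variables y
    c = np.concatenate([np.asarray(data_cost, float),
                        lam * np.asarray(facet_area, float)])

    # Smoothness: y_f >= |x_i - x_j|, linearised as two inequalities per facet,
    # so the objective pays lam * area whenever a facet separates inside from outside.
    A = np.zeros((2 * F, n))
    for f, (i, j) in enumerate(facets):
        A[2 * f, i], A[2 * f, j], A[2 * f, C + f] = 1.0, -1.0, -1.0
        A[2 * f + 1, i], A[2 * f + 1, j], A[2 * f + 1, C + f] = -1.0, 1.0, -1.0

    res = milp(c=c,
               constraints=LinearConstraint(A, -np.inf, 0.0),
               integrality=np.ones(n),          # all variables are integers...
               bounds=Bounds(0.0, 1.0))         # ...restricted to {0, 1}
    return res.x[:C].round().astype(bool)

The interfaces between selected and unselected cells then yield a watertight surface whose facets can subsequently be assigned CityGML surface types (e.g., WallSurface, RoofSurface, GroundSurface), as described above.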
The developed pipeline also offers high levels of automation and efficiency, advancing current 3D city modelling from point cloud data in terms of both automation and performance. The final 3D city models, delivered in CityGML format, can readily be used in various applications. The presented research and developments are significant for 3D city reconstruction and modelling; they will facilitate the construction of spatial data infrastructure for smart cities and have great potential to support applications in domains such as urban planning and design, urban management and urban environmental studies.
Subjects: Three-dimensional display systems
City planning
Urban renewal
Geographic information systems
Hong Kong Polytechnic University -- Dissertations
Pages: xvi, 157 pages : color illustrations
Appears in Collections: Thesis
