Title: Automatic fusion of photogrammetric imagery and laser scanner point clouds
Authors: Forkuo, Eric Kwabena
Keywords: Hong Kong Polytechnic University -- Dissertations; Image processing -- Digital techniques
Issue Date: 2005
Publisher: The Hong Kong Polytechnic University
Abstract: Close-range photogrammetry and the relatively new technology of terrestrial laser scanning can be considered complementary rather than competing technologies. For instance, terrestrial laser scanners (TLS) can rapidly collect high-resolution 3D surface information about an object. The same type of data can be generated using close-range photogrammetric (CRP) techniques, but image disparities common to close-range scenes make this an operator-intensive task. The imaging systems of some TLSs do not have very high radiometric resolution, whereas the high-resolution digital cameras used in modern CRP do. Finally, TLSs are essentially earth-bound, whereas cameras can be moved at will around the object being imaged. This thesis therefore explores and attempts to solve the problem of developing a methodology to fuse terrestrial laser scanner generated 3D data with high-resolution digital images. Four phases of the methodology were investigated: data pre-processing (fusion of data from the two sensors), automatic measurement (feature detection and correspondence matching), mapping (creation of a point cloud visual index), and orientation (calculation of exterior orientation parameters). Individual phases were initially investigated in a manually controlled environment, typically using commercial photogrammetric software, and then combined in a completely automated system.
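The fusion of the two sensors' data rests on projecting the scanner's 3D points into a camera's image geometry (the "synthetic camera image" idea developed later in the abstract). A minimal sketch of that projection, assuming a simple pinhole model; the camera parameters, pose, and function name below are illustrative assumptions, not the thesis's implementation:

```python
import numpy as np

def project_points(points_xyz, K, R, t):
    """Project Nx3 world points (e.g. a laser-scanner point cloud) into
    image coordinates with the pinhole model: x ~ K (R X + t)."""
    cam = R @ points_xyz.T + t.reshape(3, 1)   # world frame -> camera frame
    uv = K @ cam                               # apply intrinsic calibration
    uv = uv[:2] / uv[2]                        # perspective division
    return uv.T                                # Nx2 pixel coordinates

# Toy example: identity rotation, camera looking down +Z from 5 units away
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])
pts = np.array([[0.0, 0.0, 0.0],   # projects to the principal point
                [1.0, 0.0, 0.0]])
print(project_points(pts, K, R, t))  # [[320. 240.] [480. 240.]]
```

Rendering every cloud point this way yields a synthetic image in the scanner's frame that can be matched against a real photograph.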
Focusing on the number of geometric primitives, three different scenes (data sets A, B, and C) representing three levels of complexity (low, medium, and high) were scanned with the laser scanner, and for each scan a 2D photographic image was taken with a digital camera. To overcome the differences between the data sets, a hybrid matching algorithm (both feature- and area-based) was successfully developed and implemented. The fidelity of the synthetic camera image concept was tested by determining the exterior orientation of the synthetic camera images and the real camera images relative to the point cloud. This orientation was first carried out manually with existing photogrammetric application software; the results verified that there were no conceptual errors in the developed methods. To meet the objective of this thesis, however, an automatic technique with a photogrammetric bundle adjustment was then developed. Three different data sets were used to check the validity and reliability of the methodology. The results of interest-point measurement and correspondence matching are presented, along with the results of manual and automatic exterior orientation. They indicate that the synthetic camera image is a feasible approach to multisensor fusion, with the point cloud visual index offering the greatest promise.
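A hybrid feature- and area-based matcher of the kind described typically detects interest points first, then verifies candidate correspondences with an area-based similarity score such as normalized cross-correlation (NCC). A minimal sketch of the area-based component, assuming grayscale patches held as NumPy arrays; the function names and threshold are illustrative, not the thesis's specific algorithm:

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two equal-sized grayscale patches.
    Returns a score in [-1, 1]; 1.0 indicates a perfect match."""
    a = patch_a.astype(float) - patch_a.mean()
    b = patch_b.astype(float) - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_match(template, candidates, threshold=0.8):
    """Area-based verification: return (index, score) of the candidate
    patch most similar to the template, or (None, None) if none clears
    the acceptance threshold."""
    scores = [ncc(template, c) for c in candidates]
    i = int(np.argmax(scores))
    return (i, scores[i]) if scores[i] >= threshold else (None, None)
```

In such a pipeline, the feature detector supplies the candidate patch locations and the NCC step rejects radiometrically dissimilar pairs before the bundle adjustment.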
Description: xviii, 213 leaves : ill. (some col.) ; 30 cm. + 1 computer optical disc
PolyU Library Call No.: [THS] LG51 .H577P LSGI 2005 Forkuo
URI: http://hdl.handle.net/10397/3586
Rights: All rights reserved.
Appears in Collections: Thesis
Files in This Item:
b18181211_link.htm (For PolyU Users; 161 B; HTML)
b18181211_ir.pdf (For All Users, non-printable; 8.99 MB; Adobe PDF)
Citations as of Jul 10, 2018