University of Twente Student Theses


Deep learning-based DTM extraction from LIDAR point cloud

Rizaldy, Aldino (2018) Deep learning-based DTM extraction from LIDAR point cloud.

Full text: PDF (6MB)
Abstract: In a recent study, a Convolutional Neural Network (CNN) was used for DTM extraction, following the popularity of deep learning for various classification tasks. Since CNNs are designed to work with images, point-to-image conversion is mandatory in order to process point clouds with a CNN. Although the error rates of the CNN result are lower than those of any other method, the approach has a drawback: the point-to-image conversion is slow, because each point is converted into a separate image, which leads to highly redundant computation. The objective of this study is to design a more efficient deep learning-based DTM extraction method. This goal is achieved by converting the whole point cloud into a single image. The classification itself is performed with a Fully Convolutional Network (FCN), a modified version of the CNN specially designed for pixel-wise semantic classification. In the experiments, the proposed method was significantly faster than the state-of-the-art CNN method: 78 times faster for point-to-image conversion and 16 times faster at testing time. An alternative method was also proposed, extracting features manually and training a Multi-Layer Perceptron (MLP) classifier; Random Forest (RF) was also used as a comparison classifier. The experiment on the ISPRS Filter Test dataset shows that the FCN yields 5.22% total error, 4.10% type I error, and 15.07% type II error; its total error and type I error are lower than those of MLP, CNN, RF, and the LAStools software. Meanwhile, the alternative method using MLP led to worse accuracies than FCN or CNN. The FCN approach was also tested on the AHN dataset, a very high point density LIDAR point cloud, resulting in 3.63% total error, 0.93% type I error, and 6.03% type II error. Those error rates are close to the result from the LAStools software, which has 3.33% total error, 1.50% type I error, and 5.16% type II error. Furthermore, the FCN method was extended to separate non-ground points into vegetation and buildings on the AHN dataset, so that three classes were obtained in the end. The FCN achieves 92.83% correctness and 92.67% completeness; as a comparison, the same dataset classified by MLP produces 90.90% correctness and 89.44% completeness.
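The whole-cloud-to-single-image conversion described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the thesis implementation: the `rasterize_min_z` function name, the minimum-elevation cell statistic, and the 1 m cell size are assumptions for the example. The point is that all points are binned into one elevation grid in a single pass, so a pixel-wise classifier such as an FCN can label every cell at once, instead of generating one image per point.

```python
import numpy as np

def rasterize_min_z(points, cell_size=1.0):
    """Grid an (N, 3) array of (x, y, z) points; keep the lowest z per cell.

    The minimum elevation per cell is a common cue for ground filtering;
    the real pipeline may use additional per-cell features.
    """
    xy_min = points[:, :2].min(axis=0)
    cols = ((points[:, 0] - xy_min[0]) // cell_size).astype(int)
    rows = ((points[:, 1] - xy_min[1]) // cell_size).astype(int)
    grid = np.full((rows.max() + 1, cols.max() + 1), np.nan)
    # One pass over the points: keep the minimum elevation seen in each cell.
    for r, c, z in zip(rows, cols, points[:, 2]):
        if np.isnan(grid[r, c]) or z < grid[r, c]:
            grid[r, c] = z
    return grid

# Toy cloud: the first two points fall into the same 1 m cell,
# so only the lower elevation (10.0) is kept for that cell.
pts = np.array([[0.2, 0.3, 10.0],
                [0.8, 0.4, 12.5],
                [1.6, 0.2, 11.0],
                [0.1, 1.7, 9.5]])
img = rasterize_min_z(pts)   # a 2x2 elevation image; empty cells stay NaN
```

A cell left as NaN simply received no returns; how such gaps are filled (interpolation, nearest neighbour, etc.) is a separate design choice before feeding the image to the network.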
Item Type: Essay (Master)
Faculty: ITC: Faculty of Geo-information Science and Earth Observation
Programme: Geoinformation Science and Earth Observation MSc (75014)
Link to this item: https://purl.utwente.nl/essays/85872
