
Machine learning, classification of 3D UAV-SFM point clouds in the University of KwaZulu-Natal (Howard College)

dc.contributor.advisor: Forbes, Angus Mcfarlane.
dc.contributor.author: Ntuli, Simiso Siphenini.
dc.date.accessioned: 2022-10-19T08:24:08Z
dc.date.available: 2022-10-19T08:24:08Z
dc.date.created: 2020
dc.date.issued: 2020
dc.description: Masters Degrees. University of KwaZulu-Natal, Durban.
dc.description.abstract: Three-dimensional (3D) point clouds derived using cost-effective and time-efficient photogrammetric technologies can provide information that can be utilized for decision-making in engineering, the built environment and other related fields. This study focuses on the use of machine learning to automate the classification of points in a heterogeneous 3D scene situated at the University of KwaZulu-Natal, Howard College Campus sports field. The state of the camera mounted on the unmanned aerial vehicle (UAV) was evaluated through the process of camera calibration. Nadir aerial images captured using a UAV were used to generate a 3D point cloud employing the structure-from-motion (SfM) photogrammetric technique. The generated point cloud was georeferenced using natural ground control points (GCPs). Supervised and unsupervised classification approaches were used to classify points into three classes: ground, high vegetation and building. The supervised classification algorithm used a multi-scale dimensionality analysis to classify points. A georeferenced orthomosaic was used to generate random points for cross-validation. The accuracy of the classification was evaluated using both qualitative and quantitative analysis. The camera calibration results showed negligible discrepancies between the results obtained and the manufacturer's specifications for the camera lens parameters; hence the camera was in an excellent state for use as a measuring device. Site visits and ground truth surveys were conducted to validate the classified point cloud. An overall root-mean-square (RMS) error of 0.053 m was achieved from georeferencing the 3D point cloud, and an RMS error of 0.032 m from georeferencing the orthomosaic. The multi-scale dimensionality analysis classified the point cloud with an overall accuracy of 81.3% and a Kappa coefficient of 0.70. Good results were also achieved from the qualitative analysis. The classification results obtained indicate that a 3D heterogeneous scene can be classified into different land cover categories, and that the classification of 3D UAV-SfM point clouds provides a helpful tool for mapping and monitoring complex 3D environments.
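The accuracy figures quoted in the abstract follow standard definitions. As a minimal illustrative sketch (not taken from the thesis), overall accuracy and Cohen's Kappa coefficient can be computed from a confusion matrix of cross-validation points as shown below; the class counts used here are hypothetical placeholders, not the study's data.

import numpy as np

def accuracy_and_kappa(confusion):
    # Overall accuracy: correctly classified points divided by all points.
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    observed = np.trace(confusion) / total
    # Chance agreement expected from the row and column marginals.
    expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / total ** 2
    # Cohen's Kappa: agreement beyond chance, normalised by its maximum.
    kappa = (observed - expected) / (1.0 - expected)
    return observed, kappa

# Hypothetical 3x3 matrix for the classes ground, high vegetation, building
# (rows = reference points, columns = predicted class).
example = [[50, 4, 3],
           [5, 40, 6],
           [2, 5, 35]]
acc, kappa = accuracy_and_kappa(example)
print(f"Overall accuracy: {acc:.3f}, Kappa: {kappa:.3f}")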
dc.identifier.uri: https://researchspace.ukzn.ac.za/handle/10413/20959
dc.language.iso: en
dc.subject.other: Photogrammetric technologies.
dc.subject.other: Heterogeneous 3D scene.
dc.subject.other: Unmanned aerial vehicle.
dc.subject.other: Georeferenced orthomosaic.
dc.subject.other: Root-Mean-Square.
dc.title: Machine learning, classification of 3D UAV-SFM point clouds in the University of KwaZulu-Natal (Howard College)
dc.type: Thesis

Files

Original bundle

Name: Ntuli_Siphenini_Simiso_2020.pdf
Size: 2.61 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 1.64 KB
Description: Item-specific license agreed upon to submission