Masters Degrees (Computer Engineering)
Permanent URI for this collection: https://hdl.handle.net/10413/6913
Browsing Masters Degrees (Computer Engineering) by Issue Date
Now showing 1 - 19 of 19
Item Investigation of virtual learning behaviour in an Eastern Cape high school biology course. (2003) Kavuma, Henry.; Yates, Steven.
Transformation in education over the decades has failed to keep abreast of the rapidly advancing technological environment of modern society. This implies that the curricula, learning paradigms and tools employed by educational institutions are not in sync with the technologically oriented lifestyle of modern society. Learners are therefore unable to apply and assimilate their daily life experiences into the learning process. This disparity warrants radical transformation in education, so as to furnish an education system in which learners are able to construct their knowledge on the basis of pre-existing ideas and experiences. However, any transformation in the education approach should be complemented by the adoption of appropriate learning environments and paradigms that can capitalise on learners' life experiences and elicit the learning behaviour and attitudes needed for effective, life-long learning. Much of the literature reviewed affirms the efficacy of virtual learning environments as media that can facilitate effective learner-centred electronic learning suitable for modern society. They are presented as liberators of learning with respect to instructivist ideals, information access and the confines of the physical classroom. The findings of this research confirm this view: they generally favour the virtual learning environment's ability to enhance learners' learning experiences, but remain inconclusive on learning outcomes.

Item Volumetric reconstruction of rigid objects from image sequences. (2012) Ramchunder, Naren.; Naidoo, Bashan.
Live video communication over bandwidth-constrained ad-hoc radio networks necessitates high compression rates.
To this end, a model-based video communication system that incorporates flexible and accurate 3D modelling and reconstruction is proposed in part. Model-based video coding (MBVC) is known to provide the highest compression rates, but usually compromises photorealism and object detail. High compression ratios are achieved at the encoder by extracting and transmitting only the parameters that describe changes to object orientation and motion within the scene. The decoder uses the received parameters to animate reconstructed objects within the synthesised scene. This is scene understanding rather than video compression. 3D reconstruction of the objects and scenes present at the encoder is the focus of this research. Reconstruction is accomplished using the Patch-based Multi-view Stereo (PMVS) framework of Yasutaka Furukawa and Jean Ponce. Surface geometry is initially represented as a sparse set of oriented rectangular patches obtained from matching feature correspondences in the input images. To increase reconstruction density, these patches are iteratively expanded and filtered using visibility constraints to remove outliers. Depending on the availability of segmentation information, there are two methods for initialising a mesh model from the reconstructed patches: the first initialises the mesh from the object's visual hull; the second initialises it directly from the reconstructed patches. The resulting mesh is then refined by enforcing patch reconstruction consistency and regularisation constraints for each vertex on the mesh. To improve robustness to outliers, two enhancements to the above framework are proposed. The first uses photometric consistency during feature matching to increase the probability of selecting the correct matching point first. The second estimates the orientation of each patch such that its photometric discrepancy score across its visible images is minimised prior to optimisation.
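The photometric discrepancy minimised above is typically built from normalised cross-correlation (NCC) between the intensities a patch projects to in different images. A minimal sketch, using hypothetical sampled intensity lists rather than real image projections:

```python
from math import sqrt

def ncc(a, b):
    """Normalised cross-correlation of two equal-length intensity samples:
    1.0 for perfectly correlated samples, -1.0 for inverted ones."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    da = [x - ma for x in a]
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = sqrt(sum(x * x for x in da)) * sqrt(sum(y * y for y in db))
    return num / den if den else 0.0

def photometric_discrepancy(samples_ref, samples_other):
    """PMVS-style score: 1 - NCC, so 0 means a perfect photometric match."""
    return 1.0 - ncc(samples_ref, samples_other)
```

A patch whose projections agree across its visible images scores near zero; orienting the patch to minimise this score is the idea behind the second enhancement.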
The overall reconstruction algorithm is shown to be flexible and robust in that it can reconstruct 3D models of both objects and scenes. It is able to automatically detect and discard outliers, and may be initialised by simple visual hulls. The demonstrated ability to account for the surface orientation of patches during photometric consistency computations is a key performance criterion. Final results show that the algorithm is capable of accurately reconstructing objects containing fine surface details, deep concavities and regions without salient textures.

Item Granting privacy and authentication in mobile ad hoc networks. (2012) Balmahoon, Reevana.; Peplow, Roger Charles Samuel.
The topic of this research is granting privacy and authentication in Mobile Ad Hoc Networks (MANETs) that are under the authority of a certificate authority (CA) that is often not available. Privacy is implemented in the form of an anonymous identity or pseudonym that ideally has no link to the real identity. Authentication and privacy are conflicting tenets of security: the former ensures a user's identity is always known and certified, while the latter hides it. The goal was to determine whether a node can produce pseudonyms for itself that carry the authority of the CA and are traceable by the CA, yet are completely anonymous. The first part of the dissertation places Vehicular Ad Hoc Networks (VANETs) into context, as this is the application of MANETs considered. This is followed by a detailed survey and analysis of the privacy aspects of VANETs. Thereafter, the solution is proposed, documented and analysed. Lastly, the dissertation is concluded and the contributions made are listed.
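The CA-traceable pseudonym idea described in the privacy abstract above can be illustrated with a toy keyed-hash construction. This is not the dissertation's protocol; the key name, epoch scheme and truncation length are all hypothetical:

```python
import hashlib
import hmac

CA_KEY = b"ca-secret"  # hypothetical CA master key

def pseudonym(real_id: str, epoch: int) -> str:
    """Derive a per-epoch pseudonym. Pseudonyms for different epochs are
    unlinkable to outsiders, but the CA, holding CA_KEY, can recompute
    them and trace each one back to real_id."""
    msg = f"{real_id}:{epoch}".encode()
    return hmac.new(CA_KEY, msg, hashlib.sha256).hexdigest()[:16]

def trace(known_ids, pseud: str, epoch: int):
    """CA-side tracing: find which known identity produced pseud."""
    for rid in known_ids:
        if pseudonym(rid, epoch) == pseud:
            return rid
    return None
```

The sketch shows only the traceability property; real schemes add certification so that the pseudonym also carries the CA's authority.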
The solution implements a novel approach for making proxies readily available to vehicles, and does indeed incorporate privacy and authentication in VANETs such that the pseudonyms produced are always authentic and traceable.

Item Fingerprint identification using distributed computing. (2012) Khanyile, Nontokozo Portia.; Dube, Erick.; Tapamo, Jules-Raymond.
Biometric systems such as face, palm and fingerprint recognition are computationally expensive. Ever-growing biometric database sizes have created a need for faster search algorithms, and high-resolution images are expensive to process and slow down less powerful extraction algorithms. There is an apparent need to improve both the signal processing and the searching algorithms. Researchers have continually searched for new ways of improving recognition algorithms in order to keep pace with the scientific and information security world. Most such developments, however, are architecture- or hardware-specific and do not port well to other platforms. This research proposes a cheaper and portable alternative. Using the Single Program Multiple Data programming architecture, a distributed fingerprint recognition algorithm is developed and executed on a powerful cluster. The first part of the parallelisation distributes the image enhancement algorithm, which comprises a series of computationally intensive image processing operations: different processing elements work concurrently on different parts of the same image in order to speed up the processing. The second part speeds up searching/matching through a parallel search: a database is partitioned as evenly as possible amongst the available processing nodes, which work independently to search their respective partitions.
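The partitioned search just described can be sketched as follows. The similarity scores and threshold are hypothetical, and the workers are simulated sequentially for clarity:

```python
def partition(db, n_workers):
    """Split the template database as evenly as possible among workers."""
    k, r = divmod(len(db), n_workers)
    parts, start = [], 0
    for i in range(n_workers):
        end = start + k + (1 if i < r else 0)
        parts.append(db[start:end])
        start = end
    return parts

def parallel_search(scored_db, threshold=0.8, n_workers=4):
    """scored_db: (template_id, similarity) pairs, similarities already
    computed against the query. Each worker finds its local best; the
    global best is returned only if it clears the threshold."""
    local = [max(p, key=lambda t: t[1])
             for p in partition(scored_db, n_workers) if p]
    best = max(local, key=lambda t: t[1])
    return best[0] if best[1] >= threshold else None
```

In the real system each partition would be searched by a separate cluster node, with only the local best match communicated back.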
Each processor returns the match with the highest similarity score in its partition, and the template with the highest score among those returned is declared the match, provided the score is above a certain threshold. System performance with respect to response time is then formalised as a performance model that can be used to predict the performance of a distributed system given the network parameters and the number of processing nodes. The proposed algorithm introduces a novel approach to memory distribution of block-wise image processing operations and discusses three different ways to process pixels along the partitioning axes of the distributed images. The distribution and parallelisation of the recognition algorithm yields a speedup of up to 12.5 times in matching and 10.2 times in enhancement.

Item Fusion of time of flight (ToF) camera's ego-motion and inertial navigation. (2013) Ratshidaho, Thikhathali Terence.; Tapamo, Jules-Raymond.
For mobile robots to navigate autonomously, one of the most important and challenging tasks is localisation. Localisation refers to the process whereby a robot locates itself within a map of a known environment, or with respect to a known starting point within an unknown environment. Localisation of a robot in an unknown environment is done by tracking the trajectory of the robot whilst knowing the initial pose. Trajectory estimation becomes challenging if the robot is operating in an unknown environment that has a scarcity of landmarks, is GPS-denied, and is slippery and dark, such as in underground mines. This dissertation addresses the problem of estimating a robot's trajectory in underground mining environments. In the past, this problem has been addressed using 3D laser scanners. 3D laser scanners are expensive and consume a lot of power, even though they have high measurement accuracy and a wide field of view.
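The response-time performance model mentioned in the fingerprint identification abstract above can be illustrated with an Amdahl-style cost function. The functional form and all parameter values here are hypothetical, not the dissertation's fitted model:

```python
def response_time(serial_s, parallel_s, n_nodes, comm_s_per_node=0.01):
    """Hypothetical response-time model for a distributed matcher: a
    fixed serial part, a parallel part that divides across nodes, and a
    communication cost that grows with the node count (all in seconds)."""
    return serial_s + parallel_s / n_nodes + comm_s_per_node * n_nodes

def best_node_count(serial_s, parallel_s, max_nodes=64, comm_s_per_node=0.01):
    """Pick the node count minimising the predicted response time."""
    return min(range(1, max_nodes + 1),
               key=lambda n: response_time(serial_s, parallel_s, n,
                                           comm_s_per_node))
```

Such a model captures the trade-off the abstract alludes to: adding nodes shrinks the parallel term but inflates communication cost, so the predicted optimum is finite.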
For this research work, trajectory estimation is accomplished by fusing the ego-motion provided by a Time of Flight (ToF) camera with measurement data provided by a low-cost Inertial Measurement Unit (IMU). The fusion is performed using a Kalman filter algorithm on a mobile robot moving on a 2D planar surface. The results show a significant improvement in trajectory estimation. Trajectory estimation using the ToF camera alone is erroneous, especially when the robot is rotating; the fused trajectory estimation algorithm is able to estimate accurate ego-motion even during rotation.

Item Parallel patch-based volumetric reconstruction from images. (2014) Jermy, Robert Sydney.; Naidoo, Bashan.; Tapamo, Jules-Raymond.
Three Dimensional (3D) reconstruction relates to the creation of 3D computer models from sets of Two Dimensional (2D) images. 3D reconstruction algorithms tend to have long execution times, meaning they are ill-suited to real-time 3D reconstruction tasks. This is a significant limitation which this dissertation attempts to address. Modern Graphics Processing Units (GPUs) have become fully programmable and have spawned the field known as General Purpose GPU (GPGPU) processing. Using this technology it is possible to offload certain types of tasks from the Central Processing Unit (CPU) to the GPU. GPGPU processing is designed for problems that have data parallelism: a task can be split into many smaller tasks that run in parallel, with results that do not depend upon the order in which the tasks are completed. Therefore, to make proper use of both CPU parallelism and GPGPU processing, a 3D reconstruction algorithm with data parallelism was required. The selected algorithm was the Patch-Based Multi-View Stereopsis (PMVS) method, proposed and implemented by Yasutaka Furukawa and Jean Ponce.
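The ToF/IMU fusion described in the preceding abstract relies on a Kalman filter. A minimal one-dimensional sketch, with a constant-position model and hypothetical noise variances, shows the predict/update cycle:

```python
def kalman_1d(z_measurements, q=1e-3, r=0.25, x0=0.0, p0=1.0):
    """Minimal 1-D Kalman filter: process noise variance q, measurement
    noise variance r. Fuses a stream of noisy position readings into a
    sequence of filtered estimates."""
    x, p = x0, p0
    estimates = []
    for z in z_measurements:
        p = p + q                 # predict: uncertainty grows
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with the measurement residual
        p = (1 - k) * p           # posterior uncertainty shrinks
        estimates.append(x)
    return estimates
```

The real system fuses multi-dimensional ToF ego-motion with IMU measurements, but the same gain-weighted blend of prediction and measurement is at its core.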
This algorithm uses small oriented rectangular patches to model a surface and is broken into four major steps: feature detection, feature matching, expansion and filtering. The reconstructed patches are independent, and as such the algorithm is data parallel. Some segments of the PMVS algorithm were programmed for GPGPU and others for CPU parallelism. Results show that the feature detection stage runs 10 times faster on the GPU than the equivalent CPU implementation. The patch creation and expansion stages also benefited from GPU implementation, which improved execution time by a factor of two for large images, with equivalent execution times for small images, compared to the CPU implementation. These results show that the use of GPGPU and CPU parallelism can indeed improve the performance of this 3D reconstruction algorithm.

Item Flat fingerprint classification using a rule-based technique, based on directional patterns and similar points. (2016) Dorasamy, Kribashnee.; Webb-Ray, Leandra.; Tapamo, Jules-Raymond.
Abstract available in PDF file.

Item Unsupervised feature selection for anomaly-based network intrusion detection using cluster validity indices. (2016) Naidoo, Tyrone.; Tapamo, Jules-Raymond.; McDonald, Andre Martin.
In recent years, there has been a rapid increase in Internet usage, which has in turn led to a rise in malicious network activity. Network Intrusion Detection Systems (NIDS) are tools that monitor network traffic with the purpose of rapidly and accurately detecting malicious activity. These systems provide a time window for responding to emerging threats and attacks aimed at exploiting vulnerabilities that arise from issues such as misconfigured firewalls and outdated software.
Anomaly-based network intrusion detection systems construct a profile of legitimate or normal traffic patterns using machine learning techniques, and monitor network traffic for deviations from the profile, which are subsequently classified as threats or intrusions. Due to the richness of information contained in network traffic, it is possible to define large feature vectors from network packets. This often leads to redundant or irrelevant features being used in network intrusion detection systems, which typically reduces the detection performance of the system. The purpose of feature selection is to remove unnecessary or redundant features from a feature space, thereby improving the performance of learning algorithms and, as a result, the classification accuracy. Previous approaches have performed feature selection via optimisation techniques, using the classification accuracy of the NIDS on a subset of the data as an objective function. While this approach has been shown to improve the performance of the system, it is unrealistic to assume that labelled training data is available in operational networks, which precludes the use of classification accuracy as an objective function in a practical system. This research proposes a method for feature selection in network intrusion detection that does not require any access to labelled data. The algorithm uses normalised cluster validity indices as an objective function that is optimised over the search space of candidate feature subsets via a genetic algorithm. Feature subsets produced by the algorithm are shown to improve the classification performance of an anomaly-based network intrusion detection system over the NSL-KDD dataset.
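The genetic search over candidate feature subsets described above can be sketched as follows. The fitness function stands in for the normalised cluster validity index (it needs no labels, only a score where higher is better); the population size, mutation scheme and test objective are all illustrative:

```python
import random

def ga_select(n_features, fitness, pop_size=20, generations=30, seed=1):
    """Toy genetic algorithm over feature-subset bitmasks. `fitness`
    plays the role of the label-free cluster validity objective."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]       # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_features)
            child = a[:cut] + b[cut:]          # one-point crossover
            child[rng.randrange(n_features)] ^= 1  # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

In the actual system, evaluating `fitness` means clustering the unlabelled traffic using only the features selected by the bitmask and computing a validity index on the resulting partition.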
Despite not requiring access to labelled data, the classification performance of the proposed system approaches that of effective feature subsets derived using labelled training data.

Item Gaussian mixture model classifiers for detection and tracking in UAV video streams. (2017) Pillay, Treshan.; Naidoo, Bashan.
Manual visual surveillance systems are subject to a high degree of human error and operator fatigue. The automation of such systems often employs detectors, trackers and classifiers as fundamental building blocks. Detection, tracking and classification are especially useful and challenging in Unmanned Aerial Vehicle (UAV) based surveillance systems. Previous solutions have addressed these challenges via complex classification methods. This dissertation proposes less complex Gaussian Mixture Model (GMM) based classifiers that simplify the process: data is represented as a reduced set of model parameters, and classification is performed in the low-dimensional parameter space. The specification and adoption of GMM-based classifiers on the UAV visual tracking feature space forms the principal contribution of the work; the methodology can be generalised to other feature spaces. This dissertation presents two main contributions in the form of submissions to ISI-accredited journals. The first paper demonstrates the objectives with a vehicle detector incorporating a two-stage GMM classifier applied to a single feature space, namely the Histogram of Oriented Gradients (HoG). The second paper demonstrates the objectives with a vehicle tracker using colour histograms (in RGB and HSV), GMM classifiers and a Kalman filter. The proposed works are comparable to related works, with testing performed on benchmark datasets. In the tracking domain for such platforms, tracking alone is insufficient.
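The GMM classification idea above amounts to scoring a sample under each class's mixture and picking the most likely class. A minimal one-dimensional sketch with illustrative (not learned) mixture parameters:

```python
from math import exp, log, pi, sqrt

def gauss_pdf(x, mu, var):
    """Density of a 1-D Gaussian at x."""
    return exp(-(x - mu) ** 2 / (2 * var)) / sqrt(2 * pi * var)

def gmm_loglik(x, components):
    """Log-likelihood of x under a mixture given as (weight, mu, var)."""
    return log(sum(w * gauss_pdf(x, mu, var) for w, mu, var in components))

def classify(x, models):
    """Pick the class whose GMM assigns x the highest log-likelihood.
    `models` maps class name -> component list."""
    return max(models, key=lambda c: gmm_loglik(x, models[c]))
```

In the dissertation's setting the samples would be HoG or colour-histogram features rather than scalars, but the decision rule is the same likelihood comparison.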
Adaptive detection and classification can assist in search-space reduction, the building of knowledge priors, and improved target representations. Results show that the proposed approach improves performance and robustness. Findings also indicate potential further enhancements, such as a multi-mode tracker with global and local tracking based on a combination of both papers.

Item Facial expression recognition using covariance matrix descriptors and local texture patterns. (2017) Naidoo, Ashaylin.; Tapamo, Jules-Raymond.; Khutlang, Rethabile.
Facial expression recognition (FER) is a powerful tool that is emerging rapidly due to increased computational power in current technologies. It has many applications in the fields of human-computer interaction, psychological behaviour analysis, and image understanding. However, FER is presently not fully realised due to the lack of an effective facial feature descriptor. The covariance matrix as a feature descriptor is popular in object detection and texture recognition; its innate ability to fuse multiple local features within a domain is proving useful in applications such as biometrics. Also prevalent in pattern recognition are local texture patterns such as the Local Binary Pattern (LBP) and Local Directional Pattern (LDP), because of their fast computation and robustness against illumination variations. This study examines the performance of covariance feature descriptors that incorporate local texture patterns, with application to facial expression recognition. The proposed method focuses on generating feature descriptors that extract robust and discriminative features able to counter extrinsic factors affecting facial expression recognition, such as illumination, pose, scale, rotation and occlusion. The study also explores the influence of holistic versus component-based approaches to FER. A novel feature descriptor referred to as Local Directional Covariance Matrices (LDCM) is proposed.
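The Local Binary Pattern mentioned above encodes each pixel by thresholding its neighbours against it. A minimal sketch for the basic 8-neighbour LBP of the centre pixel of a 3×3 patch (bit ordering here is one common convention, chosen for illustration):

```python
def lbp_code(patch):
    """8-neighbour Local Binary Pattern of the centre pixel of a 3x3
    patch: each neighbour >= centre contributes one bit, read clockwise
    from the top-left neighbour."""
    c = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2],
                  patch[1][2], patch[2][2], patch[2][1],
                  patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= c:
            code |= 1 << bit
    return code
```

Codes like this (and the directional responses behind LDP) are the per-pixel features that the LDCM descriptor fuses into a covariance matrix.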
The covariance descriptors fuse features such as location, intensity and filter responses, and incorporate LBP and LDP into the covariance structure. Tests examine the accuracy of different variations of covariance features and the impact of segmenting the face into equal-sized blocks or special landmark regions, i.e. eyes, nose and mouth, for classification. The results on the JAFFE, CK+ and ISED facial expression databases establish that the proposed descriptor achieves a high level of performance for FER at a reduced feature size. The effectiveness of the component-based approach with special landmarks was stable across different datasets and environments.

Item Using facial expression recognition for crowd monitoring. (2017) Holder, Ross Philip.; Tapamo, Jules-Raymond.
In recent years, crowd monitoring techniques have attracted emerging interest in the field of computer vision due to their ability to monitor groups of people in crowded areas, where conventional image processing methods would not suffice. Existing crowd monitoring techniques focus heavily on analysing a crowd as a single entity, usually in terms of its density and movement pattern. While these techniques are well suited to identifying dangerous and emergency situations, such as a large group of people exiting a building at once, they are very limited when it comes to identifying emotion within a crowd. By isolating different types of emotion within a crowd, we aim to predict the mood of a crowd even in scenes of non-panic. In this work, we propose a novel crowd monitoring system based on estimating crowd emotion using Facial Expression Recognition (FER). In the past decade, both FER and activity recognition have been proposed for human emotion detection; however, facial expression is arguably more descriptive when identifying emotion, and is less likely to be obscured in crowded environments than body posture.
Given a crowd image, the popular Viola-Jones face detection algorithm is used to detect and extract unobscured faces from individuals in the crowd. A robust and efficient appearance-based method of FER, Gradient Local Ternary Pattern (GLTP), is used together with a machine learning algorithm, the Support Vector Machine (SVM), to extract and classify each facial expression as one of seven universally accepted emotions (joy, surprise, anger, fear, disgust, sadness or neutral). Crowd emotion is estimated by isolating groups of similar emotion based on their relative size and weighting. To validate the effectiveness of the proposed system, a series of cross-validation tests is performed using a novel crowd emotion dataset with known ground-truth emotions. The results show that the system is able to accurately and efficiently predict multiple classes of crowd emotion, even in non-panic situations where movement and density information may be incomplete. In the future, this type of system could be used for many security applications, such as helping to alert authorities to potentially aggressive crowds of people in real time.

Item Error performance analysis of n-ary Alamouti scheme with signal space diversity. (2018) Sibanda, Nathael.; Xu, Hongjun.
In this dissertation, a high-rate Alamouti scheme with Signal Space Diversity is developed to improve both the spectral efficiency and the overall error performance of wireless communication links. This scheme uses high-order modulation techniques (M-ary quadrature amplitude modulation (M-QAM) and N-ary phase shift keying modulation (N-PSK)). The dissertation presents the mathematical models, design methodology and theoretical analysis of this high-rate Alamouti scheme with Signal Space Diversity. To improve spectral efficiency in multiple-input multiple-output (MIMO) wireless communications, an N-ary Alamouti M-ary quadrature amplitude modulation (M-QAM) scheme is proposed in this thesis.
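The crowd-emotion step in the crowd monitoring abstract above, which groups per-face FER labels by relative size and weighting, can be sketched as follows. The weight values and the dominance threshold are hypothetical:

```python
from collections import Counter

def crowd_emotion(face_labels, weights=None, min_share=0.4):
    """Estimate crowd emotion from per-face FER labels: group identical
    labels, apply optional per-emotion weights, and report the dominant
    group if it holds at least `min_share` of the total weight."""
    weights = weights or {}
    counts = Counter(face_labels)
    scored = {e: n * weights.get(e, 1.0) for e, n in counts.items()}
    total = sum(scored.values())
    best = max(scored, key=scored.get)
    return best if scored[best] / total >= min_share else "mixed"
```

Weighting lets safety-critical emotions (e.g. anger or fear) dominate even when they are not the most numerous group.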
The proposed N-ary Alamouti M-QAM scheme uses N-ary phase shift keying modulation (N-PSK) and M-QAM. The scheme is investigated in Rayleigh fading channels with additive white Gaussian noise (AWGN). Based on the union bound, a theoretical average bit error probability (ABEP) of the system is formulated, and the simulation results validate the theoretical ABEP. Both theoretical and simulation results show that the proposed scheme improves spectral efficiency by 0.5 bit/s/Hz in a 2 × 4 16-PSK Alamouti 16-QAM system compared to the conventional Alamouti scheme (16-QAM). To further improve the error performance of the proposed N-ary Alamouti M-QAM scheme, an N_T × N_R N-ary Alamouti coded M-QAM scheme with signal space diversity (SSD) is also proposed. Based on the nearest neighbour (NN) approach, a theoretical closed-form expression for the ABEP is further derived in Rayleigh fading channels, and simulation results again validate the theoretical ABEP. Both theoretical and simulation results further show that the 2 × 4 4-PSK Alamouti 256-QAM scheme with SSD can achieve a 0.8 dB gain compared to the same scheme without SSD.

Item Hybrid generalized non-orthogonal multiple access for the 5G wireless networks. (2018) Zitha, Samson Manyani.; Walingo, Tom Mmbasu.
The deployment of 5G networks will lead to an increase in capacity, spectral efficiency, low latency and massive connectivity for wireless networks. They will still face the challenges of resource and power optimisation, increasing spectrum efficiency and energy optimisation, among others. Furthermore, standardised technologies to mitigate these challenges still need to be developed and are a challenge in themselves. In the predecessor LTE-A networks, the orthogonal frequency division multiple access (OFDMA) scheme is used as the baseline multiple access scheme.
It allows users to be served orthogonally in either time or frequency to alleviate narrowband interference and impulse noise. The spectrum limitations of orthogonal multiple access (OMA) schemes have resulted in the development of non-orthogonal multiple access (NOMA) schemes, which enable 5G networks to achieve high spectral efficiency and high data rates. NOMA schemes non-orthogonally co-multiplex different users on the same resource element (RE) (i.e. time-frequency resource, OFDMA subcarrier, or spreading code) via the power domain (PD) or code domain (CD) at the transmitter, and separate them at the receiver by applying multi-user detection (MUD) algorithms. The currently developed NOMA schemes, referred to as generalised NOMA (G-NOMA) technologies, include Interleave Division Multiple Access (IDMA), Sparse Code Multiple Access (SCMA), Low-Density Spreading Multiple Access (LDSMA), Multi-User Shared Access (MUSA) and Pattern Division Multiple Access (PDMA). These protocols are still under refinement, and their performance and applicability have not been thoroughly investigated. The first part of this work undertakes a thorough investigation and analysis of the performance and applicability of the existing G-NOMA schemes. Generally, G-NOMA schemes achieve overloading through non-orthogonal spectrum resource allocation, which enables massive connectivity of users and devices and offers improved system spectral efficiency. Like any other technology, G-NOMA schemes need further improvement to harvest their full benefits in 5G networks, leading to the requirement for Hybrid G-NOMA (HG-NOMA) schemes. The second part of this work develops an HG-NOMA scheme to alleviate the 5G challenges of resource allocation, inter- and cross-tier interference management, and energy efficiency.
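The power-domain co-multiplexing and receiver-side separation described above can be sketched for the noiseless two-user BPSK case. The power split values are illustrative:

```python
def noma_superpose(s_near, s_far, p_near=0.2, p_far=0.8):
    """Power-domain NOMA: superpose two users' symbols on one resource
    element, giving the far (weak-channel) user the larger power share."""
    return (p_near ** 0.5) * s_near + (p_far ** 0.5) * s_far

def sic_decode(y, p_near=0.2, p_far=0.8):
    """Successive interference cancellation at the near user: decode the
    stronger far-user BPSK symbol first, subtract its contribution, then
    decode the near user's own symbol from the residual."""
    s_far_hat = 1.0 if y >= 0 else -1.0
    residual = y - (p_far ** 0.5) * s_far_hat
    s_near_hat = 1.0 if residual >= 0 else -1.0
    return s_near_hat, s_far_hat
```

This is the simplest MUD algorithm; the G-NOMA schemes surveyed in the dissertation generalise the idea to code-domain spreading and interleaving.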
This work develops and investigates the performance of an energy-efficient HG-NOMA resource allocation scheme for a two-tier heterogeneous network that alleviates cross-tier interference and improves system throughput via spectrum resource optimisation. By considering the combinatorial problem of resource pattern assignment and power allocation, the HG-NOMA scheme enables a new transmission policy that allows more than two macro-user equipments (MUEs) and femto-user equipments (FUEs) to be co-multiplexed on the same time-frequency RE, increasing the spectral efficiency. The performance of the developed model is shown to be superior to the PD-NOMA and OFDMA schemes.

Item Smart attendance monitoring system using computer vision. (2019) Mothwa, Louis.; Tapamo, Jules-Raymond.; Mapayi, Temitope.
Monitoring of students' attendance remains a fundamental and vital part of any educational institution. The attendance of students in classes can have an impact on their academic performance, and with the gradual increase in the number of students, it becomes a challenge for institutions to manage attendance. The traditional attendance monitoring system requires a considerable amount of time, due to the manual recording of names and the circulation of a paper-based attendance sheet for students to sign. The paper-based recording method and some existing automated systems, such as mobile applications, Radio Frequency Identification (RFID), Bluetooth and fingerprint attendance models, are prone to fake results and time wasting. The limitations of the traditional attendance monitoring system stimulated the adoption of computer vision to stand in the gap. Students' attendance can be monitored with biometric systems such as iris recognition and face recognition. Among these, face recognition has greater potential because of its non-intrusive nature. Although some automated attendance monitoring systems have been proposed, poor system modelling negatively affects them. In order to improve the success of automated systems, this research proposes a smart attendance monitoring system that uses facial recognition to monitor students' attendance in a classroom. A time-integrated model is provided to monitor attendance throughout the lecture period by registering attendance information at regular time intervals. A multi-camera system is also proposed to guarantee accurate capturing of students.
The proposed multi-camera-based system is tested using a real-time database in an experimental class at the University of KwaZulu-Natal (UKZN). The results show that the proposed smart attendance monitoring system is reliable, with an average accuracy of 98%.Item Enhanced spectral efficiency schemes for space-time block coded spatial modulation.(2019) Motsa, Sibusiso Thabiso.; Xu, Hongjun.The ever-growing demand for high-data-rate, low-latency and energy-efficient transmission schemes has seen an increasing popularity of multiple-input multiple-output (MIMO) schemes. One such scheme is the orthogonal space-time block code (STBC) scheme introduced by Alamouti, which provides full diversity without sacrificing data rate. The introduction of spatial multiplexing to STBC through spatial modulation (SM) improves the performance and spectral efficiency whilst eliminating transmit antenna synchronization and inter-channel interference (ICI) at the receiver. In this dissertation, we investigate and evaluate the error performance of both the STBC and SM MIMO schemes. As such, we exploit the advantages of both schemes in the space-time block coded spatial modulation (STBC-SM) scheme, resulting in a highly spectrally efficient scheme. Motivated by the need for higher-data-rate transmission schemes, we expand the orthogonal STBC transmission matrix to further improve the spectral efficiency of space-time block coded spatial modulation. The fundamental idea is to keep the size of the amplitude/phase modulator (APM) symbol set of STBC the same. Therefore, a unitary matrix transformation technique is introduced to the conventional STBC matrix. This technique prevents an increase in the peak-to-average power ratio of the transmitted symbols. A decrease in the phase angle of the unitary matrix yields an increase in the number of information bits transmitted, subsequently increasing the spectral efficiency of the system. 
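The peak-to-average power claim above follows from unitarity: multiplying the codeword by a unitary matrix rotates phases without changing symbol magnitudes. A minimal numeric check, assuming a diagonal phase-rotation form for the unitary transform (the thesis's exact transform is not reproduced here):

```python
import numpy as np

def alamouti_matrix(s1, s2):
    """Alamouti STBC codeword: rows are time slots, columns transmit antennas."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def rotate(X, theta):
    """Apply a diagonal unitary phase rotation to the codeword
    (an illustrative choice of unitary matrix)."""
    U = np.diag([1.0, np.exp(1j * theta)])
    return X @ U

# unit-energy PSK symbols
s1, s2 = np.exp(1j * np.pi / 8), np.exp(1j * 5 * np.pi / 8)
X = alamouti_matrix(s1, s2)
Xr = rotate(X, np.pi / 4)
# element-wise magnitudes are unchanged, so peak (and average) transmit
# power per antenna is unchanged by the transformation
```

Because the rotation angle is itself a design parameter, distinct angles can index extra information bits without enlarging the APM symbol set, which is the mechanism the abstract describes.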
A new system, referred to as enhanced spectral efficiency space-time block coded spatial modulation (E-STBC-SM), is proposed. Moreover, a tight closed-form lower bound is derived to estimate the average BER of the E-STBC-SM system over Rayleigh frequency-flat fading channels and validated with Monte Carlo simulations. Comparisons of the proposed E-STBC-SM scheme and the conventional STBC-SM scheme are carried out with four receive antennas in all cases. The E-STBC-SM scheme virtually retains the BER performance of the STBC-SM scheme, with a maximum degradation of 0.6 dB across modulation orders 16, 32 and 64 of a PSK modulator. An increase of between 2 and 5 information bits is obtained across the aforementioned modulation orders by altering the phase angle of the unitary matrix transform incorporated into the conventional STBC-SM scheme, thus improving the spectral efficiency. In one exceptional case, the E-STBC-SM scheme configured with M = 32 and θ = π/2 even showed an improvement of 0.2 dB in error performance.Item A new automatic repeat request protocol based on Alamouti space-time block code over Rayleigh fading channels.(2020) Lubisi, Muzi.; Xu, Hongjun.The spatial and multiplexing diversity of multiple-input multiple-output (MIMO) schemes improves the link reliability and data rates of wireless networks. MIMO-based space-time block codes (STBCs) improve wireless network reliability by providing the receiver with different copies of the original data. Recently, the automatic repeat request (ARQ) technique was introduced for MIMO schemes to enhance the system's link reliability. ARQ improves link reliability by using acknowledgments and timeouts to ensure efficient transmission of data over an unreliable link. In this dissertation, we propose a new ARQ protocol based on the Alamouti space-time block code (STBC) over Rayleigh fading channels. 
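The Alamouti scheme on which the proposed ARQ protocol builds can be sketched end-to-end for a single receive antenna. This is a minimal, noiseless sketch (the thesis itself uses four receive antennas and fading plus noise); it shows how the two-slot codeword and linear combining recover both symbols with gain |h1|² + |h2|²:

```python
import numpy as np

def alamouti_receive(s1, s2, h1, h2):
    """Send the Alamouti codeword over channels h1, h2 (one receive antenna,
    no noise) and recover both symbols by linear combining."""
    r1 = h1 * s1 + h2 * s2                      # received in time slot 1
    r2 = -h1 * np.conj(s2) + h2 * np.conj(s1)   # received in time slot 2
    g = abs(h1) ** 2 + abs(h2) ** 2             # combined channel gain
    s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
    s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
    return s1_hat, s2_hat

h1, h2 = 0.8 + 0.3j, -0.5 + 0.9j                # example fading coefficients
s1_hat, s2_hat = alamouti_receive(1 + 1j, 1 - 1j, h1, h2)
```

The orthogonality of the codeword is what makes the combining cancel the cross-terms exactly, so each symbol can be detected independently — the property the ARQ protocol retains across retransmissions.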
The proposed system transmits data employing two transmit antennas and four receive antennas, and it is developed by applying the recent technique called uncoded space-time labeling diversity (USTLD). The main idea behind the proposed technique is to use two distinct mappers to improve the error performance of the system. The theoretical expression of the proposed technique is derived using the union bound approach, and the theoretical analysis is validated with the simulation results. Furthermore, the results reveal a symbol error probability (SEP) performance improvement of 4 dB for 16-QAM and 4.90 dB for 64-QAM when one mapper is employed, compared to the Alamouti system at a SEP of . The results also reveal that when the proposed system uses two mappers, there is a SEP performance improvement of 7.98 dB for 16-QAM and 9.8 dB for 64-QAM compared to the Alamouti system at a SEP of .Item Investigating machine and deep-learning model combinations for a two-stage IDS for IoT networks.(2021) Van der Walt, André.; Quazi, Tahmid Al-Mumit.; Van Niekerk, Brett.By 2025, there will be upwards of 75 billion IoT devices connected to the internet. Notable security incidents have shown that many IoT devices are insecure or misconfigured, leaving them vulnerable, often with devastating results. AI's learning, adaptable and flexible nature can be leveraged to provide network monitoring for IoT networks. This work proposes a novel two-stage IDS using layered machine- and deep-learning models. The applicability of seven algorithms is investigated using the BoT-IoT dataset. After replicating four algorithms from the literature, modifications to these algorithms' application are explored, along with their ability to classify in three scenarios: 1) binary attack/benign, 2) multi-class attack with benign and 3) multi-class attack only. Three additional algorithms are also considered. 
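The layered two-stage idea — a binary attack/benign gate followed by an attack-family classifier — can be illustrated with toy stand-ins for the trained models. Nearest-centroid classifiers and the 2-D feature values below are assumptions for the sketch, standing in for the CNN and KNN models evaluated in the dissertation:

```python
import numpy as np

def fit_centroids(X, y):
    """Per-class mean feature vectors: a tiny stand-in for a trained model."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, x):
    """Assign x to the class with the nearest centroid."""
    return min(model, key=lambda c: np.linalg.norm(x - model[c]))

def two_stage_predict(stage1, stage2, x):
    """Stage 1 decides attack vs benign; stage 2 names the attack family
    only for samples flagged as attacks, so benign traffic never reaches
    the multi-class model."""
    if predict(stage1, x) == "benign":
        return "benign"
    return predict(stage2, x)

# toy feature vectors: benign traffic near the origin, attacks further out
X1 = np.array([[0.1, 0.2], [0.2, 0.1], [5.0, 5.0], [6.0, 4.0]])
y1 = np.array(["benign", "benign", "attack", "attack"])
X2 = np.array([[5.0, 5.0], [6.0, 4.0], [5.0, 9.0], [4.0, 10.0]])
y2 = np.array(["ddos", "ddos", "scan", "scan"])
stage1, stage2 = fit_centroids(X1, y1), fit_centroids(X2, y2)
```

The design choice being tested in the dissertation is exactly this split: the first stage only needs to separate two classes, which can make it faster and better at rejecting benign traffic than a single flat multi-class model.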
The modifications are shown to achieve F1-scores 22.75% higher and training times 35.68 seconds shorter, on average, than the four replicated algorithms. Potential benefits of the proposed two-stage system are examined, showing a reduction of threat detection/identification time by 0.51 s on average and an increase of threat classification F1-score by 0.05 on average. In the second half of the dissertation, algorithm combinations, layered in the two-stage system, are investigated. To facilitate comparison of time metrics, the classification scenarios from the first half of the dissertation are re-evaluated on the test PC's CPU. All two-stage combinations are then tested. The results show that a CNN binary classifier at stage one combined with a KNN 4-class model at stage two performs best, outperforming the 5-class (attack and benign) system of either algorithm. This system's first stage improves upon the 5-class system's classification time by 0.25 seconds, and the benign-class F1-score is improved by 0.23, indicating a significant improvement in false-positive rate. The system achieves an overall F1-score of 0.94, showing that the two-stage system would perform well as an IDS. Additionally, investigations arising from findings made during the evaluation of the two-stage system are presented, namely GPU data-transfer overhead, the effect of data scaling and the effect of benign samples on stage two, giving a better understanding of how the dataset interacts with AI models and how they may be improved in future work.Item Machine learning approach to thermite weld defects detection and classification.(2021) Molefe, Mohale Emmanuel.; Tapamo, Jules-Raymond.The defects formed during the thermite welding process between two sections of rail require the welded joints to be inspected for quality purposes. The commonly used non-destructive method for inspection is Radiography testing. 
However, the detection and classification of various defects from the generated radiography images remains a costly, lengthy and subjective process, as it is conducted purely manually by trained experts. It has been shown that most rail breaks occur due to a crack that initiated from a weld joint defect that was not detected. To meet the requirements of modern technologies, the development of an automated detection and classification model is in significant demand in the railway industry. This work presents a method based on image processing and machine learning techniques to automatically detect and classify welding defects. Radiography images are first enhanced using the Contrast Limited Adaptive Histogram Equalisation method; thereafter, the Chan-Vese Active Contour Model is applied to the enhanced images to segment and extract the weld joint, as the region of interest, from the image background. A comparative investigation between the Local Binary Patterns descriptor and the Bag of Visual Words approach with the Speeded Up Robust Features descriptor was carried out for extracting features from the weld joint images. The effectiveness of the aforementioned feature extractors was evaluated using the Support Vector Machines, K-Nearest Neighbours and Naive Bayes classifiers. The experimental results of this study showed that the Bag of Visual Words approach, when used with the Support Vector Machines classifier, achieves the best overall classification accuracy of 94.66%. The proposed method can be extended to other industries where Radiography testing is used as the inspection tool.
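The enhancement step in this pipeline can be illustrated with plain global histogram equalisation. This is a simplified stand-in for Contrast Limited Adaptive Histogram Equalisation, which additionally operates on local tiles and clips the histogram to limit noise amplification; the numeric example is an assumption for illustration:

```python
import numpy as np

def equalize_hist(img):
    """Global histogram equalisation of an 8-bit grayscale image:
    build the cumulative histogram and remap intensities so they
    spread across the full 0-255 range."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                   # first non-zero CDF value
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)      # intensity look-up table
    return lut[img]

# a low-contrast image: intensities squeezed into the 100-109 band,
# as in an under-exposed radiograph
img = np.tile(np.arange(100, 110, dtype=np.uint8), (10, 1))
out = equalize_hist(img)
```

After equalisation the narrow intensity band is stretched across the full dynamic range, which is what makes faint weld-defect boundaries easier for the subsequent segmentation stage to find.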