Computer Engineering
Permanent URI for this community: https://hdl.handle.net/10413/6531
Browsing Computer Engineering by Issue Date
Item Investigation of virtual learning behaviour in an Eastern Cape high school biology course. (2003) Kavuma, Henry.; Yates, Steven.
Transformation in education over the decades has failed to keep abreast of the rapidly advancing technological environment of modern society. This implies that curricula, learning paradigms and tools employed by educational institutions are not in sync with the technologically oriented lifestyle of modern society. Learners are therefore unable to apply and assimilate their daily life experiences into the learning process. This disparity warrants radical transformation in education, so as to furnish the appropriate education system where learners are able to construct their knowledge on the basis of pre-existing ideas and experiences. However, any transformation in the education approach should essentially be complemented by the adoption of appropriate learning environments and paradigms that can capitalize on learners' life experiences as well as elicit the appropriate learning behaviour and attitudes for effective and life-long learning. Much of the literature reviewed affirms the efficacy of virtual learning environments as mediums that can facilitate effective learner-centred electronic learning suitable for modern society. They are asserted as liberators of learning in respect of instructivist ideals, information access and the confines of the physical classroom. This is confirmed by the findings of this research, which are generally in favour of the virtual learning environment's ability to enhance the learning experiences of learners but remained inconclusive on their learning outcomes.
Item Volumetric reconstruction of rigid objects from image sequences. (2012) Ramchunder, Naren.; Naidoo, Bashan.
Live video communications over bandwidth constrained ad-hoc radio networks necessitate high compression rates. To this end, a model based video communication system that incorporates flexible and accurate 3D modelling and reconstruction is proposed in part. Model-based video coding (MBVC) is known to provide the highest compression rates, but usually compromises photorealism and object detail. High compression ratios are achieved at the encoder by extracting and transmitting only the parameters which describe changes to object orientation and motion within the scene. The decoder uses the received parameters to animate reconstructed objects within the synthesised scene. This is scene understanding rather than video compression. 3D reconstruction of objects and scenes present at the encoder is the focus of this research. 3D reconstruction is accomplished by utilizing the Patch-based Multi-view Stereo (PMVS) framework of Yasutaka Furukawa and Jean Ponce. Surface geometry is initially represented as a sparse set of orientated rectangular patches obtained from matching feature correspondences in the input images. To increase reconstruction density these patches are iteratively expanded, and filtered using visibility constraints to remove outliers. Depending on the availability of segmentation information, there are two methods for initialising a mesh model from the reconstructed patches. The first method initialises the mesh from the object's visual hull. The second technique initialises the mesh directly from the reconstructed patches. The resulting mesh is then refined by enforcing patch reconstruction consistency and regularization constraints for each vertex on the mesh. To improve robustness to outliers, two enhancements to the above framework are proposed. The first uses photometric consistency during feature matching to increase the probability of selecting the correct matching point first. The second approach estimates the orientation of the patch such that its photometric discrepancy score for each of its visible images is minimised prior to optimisation. The overall reconstruction algorithm is shown to be flexible and robust in that it can reconstruct 3D models for objects and scenes. It is able to automatically detect and discard outliers and may be initialised by simple visual hulls. The demonstrated ability to account for surface orientation of the patches during photometric consistency computations is a key performance criterion. Final results show that the algorithm is capable of accurately reconstructing objects containing fine surface details, deep concavities and regions without salient textures.
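As a rough, hedged illustration of the photometric consistency idea used in patch-based multi-view stereo (not the authors' implementation), the sketch below scores a patch by the mean pairwise normalized cross-correlation of intensity windows sampled from its visible images; how the windows are sampled, and the acceptance threshold, are assumptions.

```python
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation of two equally sized intensity windows."""
    a = a.astype(float).ravel()
    b = b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def photometric_discrepancy(windows) -> float:
    """1 - mean pairwise NCC over windows sampled from a patch's visible images.
    `windows` is a list of equally sized 2D arrays (one per visible image);
    how they are sampled from the oriented patch is left to the surrounding system."""
    scores = [ncc(windows[i], windows[j])
              for i in range(len(windows)) for j in range(i + 1, len(windows))]
    return 1.0 - float(np.mean(scores)) if scores else 1.0

# A patch is kept (or its orientation refined) when the discrepancy is low, e.g.:
# if photometric_discrepancy(sampled_windows) < 0.3: accept_patch()
```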
Item Energy efficient medium access protocol for DS-CDMA based wireless sensor networks. (2012) Thippeswamy, Muddenahalli Nagendrappa.; Takawira, Fambirai.
Wireless Sensor Networks (WSNs), a new class of devices, have the potential to revolutionize the capturing, processing, and communication of critical data at low cost. Sensor networks consist of small, low-power, and low-cost devices with limited computational and wireless communication capabilities. These sensor nodes can only transmit a finite number of messages before they run out of energy. Thus, reducing the energy consumption per node for end-to-end data transmission is an important design consideration for WSNs. Medium Access Control (MAC) protocols aim at providing collision-free access to the wireless medium. MAC protocols also provide the most direct control over the utilization of the transceiver, which consumes most of the energy of the sensor nodes. The major part of this thesis is based on a proposed MAC protocol called the Distributed Receiver-oriented MAC (DRMACSN) protocol for code division multiple access (CDMA) based WSNs. The proposed MAC protocol employs the channel load blocking scheme to reduce energy consumption in the network. The performance of the proposed MAC protocol is verified through simulations for average packet throughput, average delay and energy consumption. The performance of the proposed MAC protocol is also compared to the IEEE 802.15.4 MAC and to the MAC without the channel load sensing scheme via simulations. An analytical model is derived to analyse the average packet throughput and average energy consumption of the DRMACSN MAC protocol. The packet success probability and the message success and blocking probabilities are derived for the DRMACSN MAC protocol. Discrete-time multiple vacation queuing models are used to model the delay behaviour of the DRMACSN MAC protocol. The Probability Generating Functions (PGFs) of the arrivals of new messages in the sleep, back-off and transmit states are derived. The PGF of arrivals of retransmitted packets of a new message is also derived. The queue length and delay expressions for both the Bernoulli and Poisson message arrival models are derived. Comparison between the analytical and simulation results shows that the analytical model is accurate. The proposed MAC protocol is aimed at achieving an improved average packet throughput, a reduced packet delay and reduced energy consumption for WSNs.

Item Granting privacy and authentication in mobile ad hoc networks. (2012) Balmahoon, Reevana.; Peplow, Roger Charles Samuel.
The topic of the research is granting privacy and authentication in Mobile Ad Hoc Networks (MANETs) that are under the authority of a certificate authority (CA) that is often not available. Privacy is implemented in the form of an anonymous identity or pseudonym, and ideally has no link to the real identity. Authentication and privacy are conflicting tenets of security, as the former ensures a user's identity is always known and certified and the latter hides a user's identity. The goal was to determine if it is possible for a node to produce pseudonyms for itself that would carry the authority of the CA while being traceable by the CA, and would be completely anonymous. The first part of the dissertation places Vehicular Ad Hoc Networks (VANETs) into context, as this is the application of MANETs considered. This is followed by a detailed survey and analysis of the privacy aspects of VANETs. Thereafter, the solution is proposed, documented and analysed. Lastly, the dissertation is concluded and the contributions made are listed. The solution implements a novel approach for making proxies readily available to vehicles, and does indeed incorporate privacy and authentication in VANETs such that the pseudonyms produced are always authentic and traceable.
Item Fingerprint identification using distributed computing. (2012) Khanyile, Nontokozo Portia.; Dube, Erick.; Tapamo, Jules-Raymond.
Biometric systems such as face, palm and fingerprint recognition are very computationally expensive. The ever growing biometric database sizes have posed a need for faster search algorithms. High resolution images are expensive to process and slow down less powerful extraction algorithms. There is an apparent need to improve both the signal processing and the searching algorithms. Researchers have continually searched for new ways of improving recognition algorithms in order to keep up with the high pace of the scientific and information security world. Most such developments, however, are architecture- or hardware-specific and do not port well to other platforms. This research proposes a cheaper and portable alternative. With the use of the Single Program Multiple Data programming architecture, a distributed fingerprint recognition algorithm is developed and executed on a powerful cluster. The first part of the parallelization distributes the image enhancement algorithm, which comprises a series of computationally intensive image processing operations. Different processing elements work concurrently on different parts of the same image in order to speed up the processing. The second part of the parallelization speeds up searching/matching through a parallel search. The database is partitioned as evenly as possible amongst the available processing nodes, which work independently to search their respective partitions. Each processor returns the match with the highest similarity score, and the template with the highest score among those returned is reported as the match, provided that the score is above a certain threshold. The system performance with respect to response time is then formalized in the form of a performance model which can be used to predict the performance of a distributed system given the network parameters and the number of processing nodes. The proposed algorithm introduces a novel approach to memory distribution of block-wise image processing operations and discusses three different ways to process pixels along the partitioning axes of the distributed images. The distribution and parallelization of the recognition algorithm achieves speed-ups of up to 12.5 times in matching and 10.2 times in enhancement.
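A minimal sketch of the partitioned search step described above, assuming a hypothetical match_score similarity function: each worker scans its own slice of the template database, and the best score above a threshold wins.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def match_score(probe, template) -> float:
    """Placeholder similarity in [0, 1]; a real matcher would compare minutiae sets."""
    return 1.0 / (1.0 + float(np.linalg.norm(np.asarray(probe) - np.asarray(template))))

def best_in_partition(args):
    """Best (score, template index) within one partition of the database."""
    probe, partition = args
    scores = [(match_score(probe, t), idx) for idx, t in partition]
    return max(scores) if scores else (float("-inf"), None)

def identify(probe, database, n_workers=4, threshold=0.6):
    """Split the template database evenly across workers and keep the global best match."""
    items = list(enumerate(database))
    partitions = [items[i::n_workers] for i in range(n_workers)]
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        results = pool.map(best_in_partition, [(probe, p) for p in partitions])
    score, idx = max(results)
    return idx if score >= threshold else None   # None: no template passes the threshold

# Example (hypothetical data): identify(probe_vector, list_of_template_vectors)
```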
Item Fusion of time of flight (ToF) camera's ego-motion and inertial navigation. (2013) Ratshidaho, Thikhathali Terence.; Tapamo, Jules-Raymond.
For mobile robots to navigate autonomously, one of the most important and challenging tasks is localisation. Localisation refers to the process whereby a robot locates itself within a map of a known environment or with respect to a known starting point within an unknown environment. Localisation of a robot in an unknown environment is done by tracking the trajectory of the robot whilst knowing the initial pose. Trajectory estimation becomes challenging if the robot is operating in an unknown environment that has a scarcity of landmarks, is GPS denied, and is slippery and dark, such as in underground mines. This dissertation addresses the problem of estimating a robot's trajectory in underground mining environments. In the past, this problem has been addressed by using a 3D laser scanner. 3D laser scanners are expensive and consume a lot of power, even though they have high measurement accuracy and a wide field of view. In this research work, trajectory estimation is accomplished by fusing the ego-motion provided by a Time of Flight (ToF) camera with measurement data provided by a low cost Inertial Measurement Unit (IMU). The fusion is performed using a Kalman filter algorithm on a mobile robot moving on a 2D planar surface. The results show a significant improvement in the trajectory estimation. Trajectory estimation using the ToF camera alone is erroneous, especially when the robot is rotating. The fused trajectory estimation algorithm is able to estimate accurate ego-motion even when the robot is rotating.
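A minimal constant-velocity Kalman filter sketch of the fusion idea: IMU acceleration drives the prediction step and the ToF camera's planar ego-motion estimate is used in the update. The state layout, noise levels and measurement model are illustrative assumptions, not the dissertation's tuned filter.

```python
import numpy as np

dt = 0.1                                   # sample period [s] (assumed)
F = np.array([[1, 0, dt, 0],               # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], float)
B = np.array([[0.5 * dt**2, 0],            # control input: IMU acceleration [ax, ay]
              [0, 0.5 * dt**2],
              [dt, 0],
              [0, dt]], float)
H = np.array([[1, 0, 0, 0],                # measurement: ToF ego-motion position [x, y]
              [0, 1, 0, 0]], float)
Q = 0.05 * np.eye(4)                       # process noise covariance (assumed)
R = 0.10 * np.eye(2)                       # ToF measurement noise covariance (assumed)

x = np.zeros(4)                            # initial pose at the known starting point
P = np.eye(4)

def kf_step(x, P, imu_accel, tof_position):
    """One predict/update cycle: predict with the IMU, correct with the ToF ego-motion."""
    x = F @ x + B @ imu_accel
    P = F @ P @ F.T + Q
    y = tof_position - H @ x               # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = kf_step(x, P, np.array([0.2, 0.0]), np.array([0.01, 0.0]))
```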
Item Parallel patch-based volumetric reconstruction from images. (2014) Jermy, Robert Sydney.; Naidoo, Bashan.; Tapamo, Jules-Raymond.
Three Dimensional (3D) reconstruction relates to the creation of 3D computer models from sets of Two Dimensional (2D) images. 3D reconstruction algorithms tend to have long execution times, meaning they are ill suited to real time 3D reconstruction tasks. This is a significant limitation which this dissertation attempts to address. Modern Graphics Processing Units (GPUs) have become fully programmable and have spawned the field known as General Purpose GPU (GPGPU) processing. Using this technology it is possible to offload certain types of tasks from the Central Processing Unit (CPU) to the GPU. GPGPU processing is designed for problems that have data parallelism: a particular task can be split into many smaller tasks that run in parallel, and the results are not dependent upon the order in which the tasks are completed. Therefore, to properly make use of both CPU parallelism and GPGPU processing, a 3D reconstruction algorithm with data parallelism was required. The selected algorithm was the Patch-Based Multi-View Stereopsis (PMVS) method, proposed and implemented by Yasutaka Furukawa and Jean Ponce. This algorithm uses small oriented rectangular patches to model a surface and is broken into four major steps: feature detection, feature matching, expansion and filtering. The reconstructed patches are independent and as such the algorithm is data parallel. Some segments of the PMVS algorithm were programmed for GPGPU and others for CPU parallelism. Results show that the feature detection stage runs 10 times faster on the GPU than the equivalent CPU implementation. The patch creation and expansion stages also benefited from GPU implementation, which brought an improvement in execution time of two times for large images, and equivalent execution times for small images, when compared to the CPU implementation. These results show that the use of GPGPU and CPU parallelism can indeed improve the performance of this 3D reconstruction algorithm.

Item Flat fingerprint classification using a rule-based technique, based on directional patterns and similar points. (2016) Dorasamy, Kribashnee.; Webb-Ray, Leandra.; Tapamo, Jules-Raymond.
Abstract available in PDF file.

Item Unsupervised feature selection for anomaly-based network intrusion detection using cluster validity indices. (2016) Naidoo, Tyrone.; Tapamo, Jules-Raymond.; McDonald, Andre Martin.
In recent years, there has been a rapid increase in Internet usage, which has in turn led to a rise in malicious network activity. Network Intrusion Detection Systems (NIDS) are tools that monitor network traffic with the purpose of rapidly and accurately detecting malicious activity. These systems provide a time window for responding to emerging threats and attacks aimed at exploiting vulnerabilities that arise from issues such as misconfigured firewalls and outdated software. Anomaly-based network intrusion detection systems construct a profile of legitimate or normal traffic patterns using machine learning techniques, and monitor network traffic for deviations from the profile, which are subsequently classified as threats or intrusions. Due to the richness of information contained in network traffic, it is possible to define large feature vectors from network packets. This often leads to redundant or irrelevant features being used in network intrusion detection systems, which typically reduces the detection performance of the system. The purpose of feature selection is to remove unnecessary or redundant features from a feature space, thereby improving the performance of learning algorithms and, as a result, the classification accuracy. Previous approaches have performed feature selection via optimization techniques, using the classification accuracy of the NIDS on a subset of the data as an objective function. While this approach has been shown to improve the performance of the system, it is unrealistic to assume that labelled training data is available in operational networks, which precludes the use of classification accuracy as an objective function in a practical system. This research proposes a method for feature selection in network intrusion detection that does not require any access to labelled data. The algorithm uses normalized cluster validity indices as an objective function that is optimized over the search space of candidate feature subsets via a genetic algorithm. Feature subsets produced by the algorithm are shown to improve the classification performance of an anomaly-based network intrusion detection system over the NSL-KDD dataset. Despite not requiring access to labelled data, the classification performance of the proposed system approaches that of effective feature subsets derived using labelled training data.
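A small sketch, under assumptions, of the unsupervised selection loop: candidate feature subsets are encoded as bit masks, scored on unlabelled data by a cluster validity index (a silhouette score over k-means clusters stands in for the thesis's normalized indices), and evolved with a simple genetic algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)

def fitness(X, mask, k=2):
    """Cluster validity of the data restricted to the selected features (no labels used)."""
    if mask.sum() < 1:
        return -1.0
    Xs = X[:, mask.astype(bool)]
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Xs)
    return silhouette_score(Xs, labels)

def ga_select(X, pop=20, gens=30, p_mut=0.1):
    """Evolve bit masks over the feature space and return the best-scoring subset."""
    n = X.shape[1]
    population = rng.integers(0, 2, size=(pop, n))
    for _ in range(gens):
        scores = np.array([fitness(X, ind) for ind in population])
        parents = population[np.argsort(scores)[::-1][: pop // 2]]   # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)
            child = np.concatenate([a[:cut], b[cut:]])                # one-point crossover
            flip = rng.random(n) < p_mut                              # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        population = np.vstack([parents, children])
    scores = np.array([fitness(X, ind) for ind in population])
    return population[int(np.argmax(scores))]

# Example with a hypothetical unlabelled feature matrix:
# X = np.loadtxt("nslkdd_features.csv", delimiter=",")
# best_mask = ga_select(X)
```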
Item Investigation of feature extraction algorithms and techniques for hyperspectral images. (2017) Adebanjo, Hannah Morenike.; Tapamo, Jules-Raymond.
Hyperspectral images (HSIs) are remote-sensed images that are characterized by very high spatial and spectral dimensions and find applications, for example, in land cover classification, urban planning and management, security and food processing. Unlike conventional three-band RGB images, their high dimensional data space creates a challenge for traditional image processing techniques, which are usually based on the assumption that sufficient training samples exist to increase the likelihood of high classification accuracy. However, the high cost and difficulty of obtaining ground truth for hyperspectral data sets makes this assumption unrealistic and necessitates the introduction of alternative methods for their processing. Several techniques have been developed to explore the rich spectral and spatial information in HSIs. Specifically, feature extraction (FE) techniques are introduced in the processing of HSIs as a necessary step before classification. They are aimed at transforming the high dimensional data of the HSI into data of a lower dimension while retaining as much spatial and/or spectral information as possible. In this research, we develop semi-supervised FE techniques which combine features of supervised and unsupervised techniques into a single framework for the processing of HSIs. Firstly, we developed a feature extraction algorithm known as Semi-Supervised Linear Embedding (SSLE) for the extraction of features in HSIs. The algorithm combines supervised Linear Discriminant Analysis (LDA) and unsupervised Local Linear Embedding (LLE) to enhance class discrimination while also preserving the properties of classes of interest. The technique was developed based on the fact that LDA extracts features from HSIs by discriminating between classes of interest, and it can only extract C − 1 features provided there are C classes in the image. Experiments show that the SSLE algorithm overcomes this limitation of LDA by extracting features that are equivalent to the number of classes in HSIs. Secondly, a graphical manifold dimension reduction (DR) algorithm known as Graph Clustered Discriminant Analysis (GCDA) is developed. The algorithm dynamically selects labeled samples from the pool of available unlabeled samples in order to complement the few available labeled samples in HSIs. The selection is achieved by entwining K-means clustering with a semi-supervised manifold discriminant analysis. Using two HSI data sets, experimental results show that GCDA extracts features that are equivalent to the number of classes, with high classification accuracy when compared with other state-of-the-art techniques. Furthermore, we develop a window-based partitioning approach to preserve the spatial properties of HSIs when their features are being extracted. In this approach, the HSI is partitioned along its spatial dimension into n windows and the covariance matrix of each window is computed. The covariance matrices of the windows are then merged into a single matrix using a Kalman filtering approach, so that the resulting covariance matrix may be used for dimension reduction. Experiments show that the windowing approach achieves high classification accuracy and preserves the spatial properties of HSIs. For the proposed feature extraction techniques, Support Vector Machine (SVM) and Neural Network (NN) classification techniques are employed and the performances of these two classifiers are compared. The performances of all the proposed FE techniques are also shown to outperform other state-of-the-art approaches.
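The window-based idea can be sketched as follows: the spatial dimension of the cube is split into n windows, a band covariance matrix is computed per window, and the window covariances are combined into one matrix for dimension reduction. The thesis merges them with a Kalman filtering step; a pixel-count weighted average is used here purely as a stand-in.

```python
import numpy as np

def window_covariances(hsi: np.ndarray, n_windows: int):
    """hsi has shape (rows, cols, bands); split along the row axis into n windows
    and return one band-covariance matrix per window, with its pixel count."""
    covs = []
    for block in np.array_split(hsi, n_windows, axis=0):
        pixels = block.reshape(-1, hsi.shape[2])          # (n_pixels, bands)
        covs.append((np.cov(pixels, rowvar=False), pixels.shape[0]))
    return covs

def merge_covariances(covs):
    """Pixel-count weighted average of the window covariances (stand-in for the
    Kalman-filter-based merge described in the abstract)."""
    total = sum(n for _, n in covs)
    return sum(n * c for c, n in covs) / total

# cube = np.random.rand(64, 64, 100)          # toy HSI cube: 64x64 pixels, 100 bands
# merged = merge_covariances(window_covariances(cube, n_windows=4))
# eigvals, eigvecs = np.linalg.eigh(merged)   # projection basis for dimension reduction
```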
Item Gaussian mixture model classifiers for detection and tracking in UAV video streams. (2017) Pillay, Treshan.; Naidoo, Bashan.
Manual visual surveillance systems are subject to a high degree of human error and operator fatigue. The automation of such systems often employs detectors, trackers and classifiers as fundamental building blocks. Detection, tracking and classification are especially useful and challenging in Unmanned Aerial Vehicle (UAV) based surveillance systems. Previous solutions have addressed the challenges via complex classification methods. This dissertation proposes less complex Gaussian Mixture Model (GMM) based classifiers that can simplify the process: data is represented as a reduced set of model parameters, and classification is performed in the low dimensionality parameter space. The specification and adoption of GMM based classifiers on the UAV visual tracking feature space formed the principal contribution of the work. This methodology can be generalised to other feature spaces. This dissertation presents two main contributions in the form of submissions to ISI accredited journals. In the first paper, the objectives are demonstrated with a vehicle detector incorporating a two-stage GMM classifier applied to a single feature space, namely the Histogram of Oriented Gradients (HoG), while the second paper demonstrates the objectives with a vehicle tracker using colour histograms (in RGB and HSV), GMM classifiers and a Kalman filter. The proposed works are comparable to related works, with testing performed on benchmark datasets. In the tracking domain for such platforms, tracking alone is insufficient. Adaptive detection and classification can assist in search space reduction, the building of knowledge priors and improved target representations. Results show that the proposed approach improves performance and robustness. Findings also indicate potential further enhancements, such as a multi-mode tracker with global and local tracking based on a combination of both papers.
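A compact sketch of a GMM-based classifier of the kind described above: one Gaussian mixture is fitted per class on HoG feature vectors, and a sample is assigned to the class whose mixture gives the highest log-likelihood. The descriptor settings, mixture sizes and the two-stage cascade are simplifications and assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from skimage.feature import hog

def hog_features(images):
    """HoG descriptor per grayscale image patch (e.g. candidate vehicle windows of equal size)."""
    return np.array([hog(img, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                     for img in images])

class GMMClassifier:
    def __init__(self, n_components=3):
        self.n_components = n_components
        self.models = {}

    def fit(self, X, y):
        # One mixture per class, fitted only on that class's feature vectors.
        for label in np.unique(y):
            gmm = GaussianMixture(n_components=self.n_components, covariance_type="diag")
            self.models[label] = gmm.fit(X[y == label])
        return self

    def predict(self, X):
        # Assign each sample to the class whose mixture gives the highest log-likelihood.
        labels = list(self.models)
        loglik = np.column_stack([self.models[l].score_samples(X) for l in labels])
        return np.array(labels)[np.argmax(loglik, axis=1)]

# patches, labels = load_candidate_windows()   # hypothetical loader: vehicle / background
# clf = GMMClassifier().fit(hog_features(patches), labels)
```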
Item Facial expression recognition using covariance matrix descriptors and local texture patterns. (2017) Naidoo, Ashaylin.; Tapamo, Jules-Raymond.; Khutlang, Rethabile.
Facial expression recognition (FER) is a powerful tool that is emerging rapidly due to increased computational power in current technologies. It has many applications in the fields of human-computer interaction, psychological behaviour analysis, and image understanding. However, FER is presently not fully realised due to the lack of an effective facial feature descriptor. The covariance matrix as a feature descriptor is popular in object detection and texture recognition. Its innate ability to fuse multiple local features within a domain is proving to be useful in applications such as biometrics. Local texture patterns such as the Local Binary Pattern (LBP) and the Local Directional Pattern (LDP) are also prevalent in pattern recognition because of their fast computation and robustness against illumination variations. This study examines the performance of covariance feature descriptors that incorporate local texture patterns in applications of facial expression recognition. The proposed method focuses on generating feature descriptors that extract robust and discriminative features which can aid against extrinsic factors affecting facial expression recognition, such as illumination, pose, scale, rotation and occlusion. The study also explores the influence of using holistic versus component-based approaches to FER. A novel feature descriptor referred to as Local Directional Covariance Matrices (LDCM) is proposed. The covariance descriptors fuse features such as location, intensity and filter responses, and include LBP and LDP in the covariance structure. The tests conducted examine the accuracy of different variations of the covariance features and the impact of segmenting the face into equal-sized blocks or special landmark regions, i.e. eyes, nose and mouth, for classification. The results on the JAFFE, CK+ and ISED facial expression databases establish that the proposed descriptor achieves a high level of performance for FER at a reduced feature size. The effectiveness of using a component-based approach with special landmarks displayed stable results across different datasets and environments.

Item Improvements of local directional pattern for texture classification. (2017) Shabat, Abuobayda Mohammed Mosa.; Tapamo, Jules-Raymond.
The Local Directional Pattern (LDP) method has established its effectiveness and performance compared to the popular Local Binary Pattern (LBP) method in different applications. In this thesis, several extensions and modifications of LDP are proposed with the objective of increasing its robustness and discriminative power. LDP depends on the empirical choice of three for the number of significant bits used to code the responses of the Kirsch mask operation. In a first study, we applied LDP to informal settlements using various values for the number of significant bits k. It was observed that changing the number of significant bits led to a change in performance, depending on the application. LDP is based on the computation of Kirsch mask response values in eight directions, but it ignores the gray value of the center pixel, which may lead to a loss of significant information. The Centered Local Directional Pattern (CLDP) is introduced to solve this issue, using the value of the center pixel based on its relations with neighboring pixels. LDP also generates a code based on the absolute value of the edge response; however, the sign of the original value indicates two different trends (positive or negative) of the gradient. To capture the gradient trend, the Signed Local Directional Pattern (SLDP) and the Centered SLDP (C-SLDP) are proposed, which compute the eight edge responses based on the two different directions (positive or negative) of the gradients. The Directional Local Binary Pattern (DLBP) is introduced, which adopts directional information to represent texture images. This method is more stable than both LDP and LBP because it utilizes the center pixel as a threshold for the edge responses of a pixel in eight directions, instead of employing the center pixel as the threshold for the pixel intensities of the neighbors, as in the LBP method. The Angled Local Directional Pattern (ALDP) is also presented, with the objective of resolving two problems in the LDP method: the choice of the number of significant bits k, and taking the center pixel value into account. It computes the angle values for the edge response of a pixel in eight directions for each angle (0°, 45°, 90°, 135°). Each angle vector contains three values; the central value in each vector is chosen as a threshold for the other two neighboring pixels. The Circular Local Directional Pattern (CILDP) is also presented, with the objective of better analysis, especially for textures at different scales. The method is built around the circular shape to compute the directional edge vector using different radii. The performances of LDP, LBP, CLDP, SLDP, C-SLDP, DLBP, ALDP and CILDP are evaluated using five classifiers (K-nearest neighbour (k-NN), Support Vector Machine (SVM), Perceptron, Naive Bayes (NB), and Decision Tree (DT)) applied to two different texture datasets: the Kylberg dataset and the KTH-TIPS2-b dataset. The experimental results demonstrate that the proposed methods outperform both LDP and LBP.
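A minimal sketch of the basic LDP coding step that these variants build on: each pixel's neighbourhood is convolved with the eight Kirsch masks, and the bits corresponding to the k = 3 strongest absolute responses are set to form an 8-bit code; border handling and histogram pooling are simplified.

```python
import numpy as np
from scipy.ndimage import convolve

def kirsch_masks():
    """The eight Kirsch compass masks, generated by rotating the three +5 weights
    around the 3x3 border ring (the remaining border weights are -3, center is 0)."""
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    masks = []
    for r in range(8):
        m = np.full((3, 3), -3.0)
        m[1, 1] = 0.0
        for k in range(3):
            m[ring[(r + k) % 8]] = 5.0
        masks.append(m)
    return masks

def ldp_code(image: np.ndarray, k: int = 3) -> np.ndarray:
    """8-bit Local Directional Pattern code per pixel: set the bits of the k
    strongest absolute Kirsch responses (k = 3 is the usual empirical choice)."""
    responses = np.stack([np.abs(convolve(image.astype(float), m)) for m in kirsch_masks()])
    order = np.argsort(responses, axis=0)          # direction indices, weakest to strongest
    topk = order[-k:, :, :]                        # k strongest directions per pixel
    codes = np.zeros(image.shape, np.uint8)
    for bit in range(8):
        codes |= ((topk == bit).any(axis=0).astype(np.uint8) << bit)
    return codes

# hist = np.bincount(ldp_code(gray_image).ravel(), minlength=256)  # texture descriptor
```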
Item Using facial expression recognition for crowd monitoring. (2017) Holder, Ross Philip.; Tapamo, Jules-Raymond.
In recent years, crowd monitoring techniques have attracted emerging interest in the field of computer vision due to their ability to monitor groups of people in crowded areas, where conventional image processing methods would not suffice. Existing crowd monitoring techniques focus heavily on analyzing a crowd as a single entity, usually in terms of its density and movement pattern. While these techniques are well suited to the task of identifying dangerous and emergency situations, such as a large group of people exiting a building at once, they are very limited when it comes to identifying emotion within a crowd. By isolating different types of emotion within a crowd, we aim to predict the mood of a crowd even in scenes of non-panic. In this work, we propose a novel crowd monitoring system based on estimating crowd emotion using Facial Expression Recognition (FER). In the past decade, both FER and activity recognition have been proposed for human emotion detection. However, facial expression is arguably more descriptive when identifying emotion and is less likely to be obscured in crowded environments than body posture. Given a crowd image, the popular Viola and Jones face detection algorithm is used to detect and extract unobscured faces from individuals in the crowd. A robust and efficient appearance based method of FER, such as the Gradient Local Ternary Pattern (GLTP), is used together with a machine learning algorithm, the Support Vector Machine (SVM), to extract and classify each facial expression as one of seven universally accepted emotions (joy, surprise, anger, fear, disgust, sadness or neutral emotion). Crowd emotion is estimated by isolating groups of similar emotion based on their relative size and weighting. To validate the effectiveness of the proposed system, a series of cross-validation tests are performed using a novel Crowd Emotion dataset with known ground-truth emotions. The results show that the system presented is able to accurately and efficiently predict multiple classes of crowd emotion, even in non-panic situations where movement and density information may be incomplete. In the future, this type of system can be used for many security applications, such as helping to alert authorities to potentially aggressive crowds of people in real time.
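A simplified pipeline sketch of the approach described above: faces are found with OpenCV's Haar cascade (a Viola-Jones implementation), each face is classified by a pre-trained SVM over an appearance descriptor (a plain HoG stands in for GLTP here), and the crowd emotion is taken from the per-emotion groups weighted by face size. The trained classifier and the descriptor choice are assumptions.

```python
import cv2
import numpy as np
from collections import Counter

EMOTIONS = ["joy", "surprise", "anger", "fear", "disgust", "sadness", "neutral"]

detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
descriptor = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)   # stand-in for GLTP

def crowd_emotion(image_bgr, svm):
    """Detect faces, classify each expression, and return the dominant crowd emotion.
    `svm` is assumed to be a pre-trained cv2.ml SVM over the same descriptor."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    votes = Counter()
    for (x, y, w, h) in faces:
        face = cv2.resize(gray[y:y + h, x:x + w], (64, 64))
        feat = descriptor.compute(face).reshape(1, -1).astype(np.float32)
        label = int(svm.predict(feat)[1][0][0])
        votes[EMOTIONS[label]] += w * h               # weight a face by its apparent size
    return votes.most_common(1)[0][0] if votes else "no faces detected"
```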
Item Power-line insulator defect detection and classification. (2018) Iruansi, Usiholo.; Tapamo, Jules-Raymond.; Davidson, Innocent Ewean.
Faulty insulators may compromise the electrical and mechanical integrity of a power delivery system, leading to leakage currents flowing through line supports. This poses a risk to human safety and increases electrical losses and voltage drop in the power grid. Therefore, it is necessary to monitor and inspect insulators for damage that could be caused by degradation or any accident on the power system infrastructure. However, the traditional method of inspection is inadequate to meet the growth and development of the present power grid, hence automated systems based on computer vision methods are presently being explored as a means to solve this problem speedily, economically and accurately. This thesis proposes a method to distinguish between defective and non-defective insulators using two approaches: a structural inspection to detect broken parts, and a study of the hydrophobicity of insulators under wet conditions. For the structural inspection of insulators, an active contour model is used to segment the insulator from the image context, and thereafter the insulator region of interest is extracted. Then, different feature extraction methods such as the local binary pattern, the scale invariant feature transform and the grey-level co-occurrence matrix are used to extract features from the extracted insulator region of interest image, which are then fed into classifiers, such as a support vector machine and K-nearest neighbour, for insulator condition classification. For the hydrophobicity study of the insulator, an active contour model is used to segment water droplets on the insulator, and thereafter the geometrical characteristics of the water droplets are extracted. The extracted geometrical features are then fed into a classifier to assess the insulator condition based on the hydrophobicity levels. Experiments performed in this research work show that the proposed methods outperform some existing state-of-the-art methods.

Item Feature regularization and learning for human activity recognition. (2018) Osayamwen, Festus Osazuwa.; Tapamo, Jules-Raymond.
Feature extraction is an essential component in the design of a human activity recognition model. However, relying on extracted features alone for learning often makes the model suboptimal. Therefore, this research work seeks to address this potential problem by investigating feature regularization. Feature regularization is used for encapsulating the discriminative patterns that are needed for better and more efficient model learning. Firstly, a within-class subspace regularization approach is proposed for eigenfeature extraction and regularization in human activity recognition. In this approach, the within-class subspace is modelled using more eigenvalues from the reliable subspace to obtain a four-parameter modelling scheme. This model enables a better and truer estimation of the eigenvalues that are distorted by the small sample size effect. This regularization is done in one piece, thereby avoiding the undue complexity of modelling the eigenspectrum differently. The whole eigenspace is used for performance evaluation because feature extraction and dimensionality reduction are done at a later stage of the evaluation process. Results show that the proposed approach has better discriminative capacity than several other subspace approaches for human activity recognition. Secondly, with the use of a likelihood prior probability, a new regularization scheme that improves the loss function of a deep convolutional neural network is proposed. The results obtained from this work demonstrate that a well regularized feature yields better class discrimination in human activity recognition. The major contribution of the thesis is the development of feature extraction strategies for determining the discriminative patterns needed for efficient model learning.
Item Error performance analysis of n-ary Alamouti scheme with signal space diversity. (2018) Sibanda, Nathael.; Xu, Hongjun.
In this dissertation, a high-rate Alamouti scheme with Signal Space Diversity is developed to improve both the spectral efficiency and the overall error performance of wireless communication links. This scheme uses high-order modulation techniques (M-ary quadrature amplitude modulation (M-QAM) and N-ary phase shift keying (N-PSK)). Hence, this dissertation presents the mathematical models, design methodology and theoretical analysis of this high-rate Alamouti scheme with Signal Space Diversity. To improve spectral efficiency in multiple-input multiple-output (MIMO) wireless communications, an N-ary Alamouti M-ary quadrature amplitude modulation (M-QAM) scheme is proposed in this thesis. The proposed N-ary Alamouti M-QAM scheme uses N-ary phase shift keying (N-PSK) and M-QAM. The proposed scheme is investigated in Rayleigh fading channels with additive white Gaussian noise (AWGN). Based on the union bound, a theoretical average bit error probability (ABEP) of the system is formulated. The simulation results validate the theoretical ABEP. Both the theoretical and simulation results show that the proposed scheme improves spectral efficiency by 0.5 bit/sec/Hz in a 2 × 4 16-PSK Alamouti 16-QAM system compared to the conventional Alamouti scheme (16-QAM). To further improve the error performance of the proposed N-ary Alamouti M-QAM scheme, an N_T × N_R N-ary Alamouti coded M-QAM scheme with signal space diversity (SSD) is also proposed in this thesis. Based on the nearest neighbour (NN) approach, a theoretical closed-form expression for the ABEP is further derived in Rayleigh fading channels. Simulation results also validate the theoretical ABEP for the N-ary Alamouti M-QAM scheme with SSD. Both the theoretical and simulation results further show that the 2 × 4 4-PSK Alamouti 256-QAM scheme with SSD can achieve a 0.8 dB gain compared to the 2 × 4 4-PSK Alamouti 256-QAM scheme without SSD.
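As a worked illustration of the classical 2×1 Alamouti transmit-diversity scheme that the proposed N-ary variants build on (not the dissertation's N-PSK/M-QAM construction or its SSD rotation), the sketch below estimates the BER of Alamouti-coded QPSK over flat Rayleigh fading by Monte Carlo simulation.

```python
import numpy as np

rng = np.random.default_rng(1)
QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)   # Gray-mapped symbols

def alamouti_ber(snr_db: float, n_pairs: int = 200_000) -> float:
    """Monte Carlo BER of 2x1 Alamouti-coded QPSK over quasi-static Rayleigh fading."""
    noise_std = np.sqrt(0.5 / 10 ** (snr_db / 10))   # complex noise variance 1/SNR per sample
    idx = rng.integers(0, 4, size=(n_pairs, 2))
    s = QPSK[idx]                                     # transmitted symbol pairs (s0, s1)
    h = (rng.standard_normal((n_pairs, 2)) + 1j * rng.standard_normal((n_pairs, 2))) / np.sqrt(2)
    n = noise_std * (rng.standard_normal((n_pairs, 2)) + 1j * rng.standard_normal((n_pairs, 2)))
    # Two received samples per codeword: [s0 s1] then [-s1* s0*] from the two antennas.
    r0 = h[:, 0] * s[:, 0] + h[:, 1] * s[:, 1] + n[:, 0]
    r1 = -h[:, 0] * np.conj(s[:, 1]) + h[:, 1] * np.conj(s[:, 0]) + n[:, 1]
    # Linear Alamouti combining restores orthogonal per-symbol decisions.
    y0 = np.conj(h[:, 0]) * r0 + h[:, 1] * np.conj(r1)
    y1 = np.conj(h[:, 1]) * r0 - h[:, 0] * np.conj(r1)
    det = np.stack([np.argmin(np.abs(y[:, None] - QPSK[None, :]), axis=1) for y in (y0, y1)], axis=1)
    # Count differing bits between the transmitted and detected 2-bit symbol indices.
    bit_errors = np.count_nonzero((idx ^ det) & 1) + np.count_nonzero((idx ^ det) & 2)
    return bit_errors / (4 * n_pairs)

for snr in (0, 5, 10, 15):
    print(snr, "dB ->", alamouti_ber(snr))
```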
Item Hybrid generalized non-orthogonal multiple access for the 5G wireless networks. (2018) Zitha, Samson Manyani.; Walingo, Tom Mmbasu.
The deployment of 5G networks will lead to an increase in capacity, spectral efficiency, low latency and massive connectivity for wireless networks. These networks will still face the challenges of resource and power optimization, increasing spectrum efficiency and energy optimization, among others. Furthermore, the standardized technologies to mitigate the challenges still need to be developed and are a challenge themselves. In the predecessor LTE-A networks, the orthogonal frequency division multiple access (OFDMA) scheme is used as the baseline multiple access scheme. It allows users to be served orthogonally in either time or frequency to alleviate narrowband interference and impulse noise. The spectrum limitations of orthogonal multiple access (OMA) schemes have resulted in the development of non-orthogonal multiple access (NOMA) schemes to enable 5G networks to achieve high spectral efficiency and high data rates. NOMA schemes non-orthogonally co-multiplex different users on the same resource elements (REs) (i.e. time-frequency domain, OFDMA subcarrier, or spreading code) via the power domain (PD) or the code domain (CD) at the transmitter, and successfully separate them at the receiver by applying multi-user detection (MUD) algorithms. The currently developed NOMA schemes, referred to as generalized NOMA (G-NOMA) technologies, include Interleave Division Multiple Access (IDMA), Sparse Code Multiple Access (SCMA), Low-Density Spreading Multiple Access (LDSMA), Multi-User Shared Access (MUSA) and Pattern Division Multiple Access (PDMA). These protocols are still under refinement, and their performance and applicability have not been thoroughly investigated. The first part of this work undertakes a thorough investigation and analysis of the performance of the existing G-NOMA schemes and their applicability. Generally, G-NOMA schemes achieve overloading through non-orthogonal spectrum resource allocation, which enables massive connectivity of users and devices and offers improved system spectral efficiency. Like any other technologies, the G-NOMA schemes need to be improved to further harvest their benefits in 5G networks, leading to the requirement for Hybrid G-NOMA (HG-NOMA) schemes. The second part of this work develops an HG-NOMA scheme to alleviate the 5G challenges of resource allocation, inter- and cross-tier interference management and energy efficiency. This work develops and investigates the performance of an Energy Efficient HG-NOMA resource allocation scheme for a two-tier heterogeneous network that alleviates the cross-tier interference and improves the system throughput via spectrum resource optimization. By considering the combinatorial problem of resource pattern assignment and power allocation, the HG-NOMA scheme enables a new transmission policy that allows more than two macro-user equipment (MUEs) and femto-user equipment (FUEs) to be co-multiplexed on the same time-frequency RE, increasing the spectral efficiency. The performance of the developed model is shown to be superior to that of the PD-NOMA and OFDMA schemes.
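To make the power-domain principle concrete, the sketch below is a generic two-user downlink PD-NOMA illustration (not the proposed HG-NOMA scheme): two BPSK users are superposed with unequal power, the far user decodes directly, and the near user applies successive interference cancellation (SIC). Channels are plain AWGN for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

def pd_noma_ber(snr_db=10.0, a_far=0.8, n_bits=200_000):
    """BER of a two-user downlink power-domain NOMA link with BPSK and SIC at the near user."""
    a_near = 1.0 - a_far                     # far (weak) user gets the larger power share
    b_far = rng.integers(0, 2, n_bits)
    b_near = rng.integers(0, 2, n_bits)
    x = np.sqrt(a_far) * (1 - 2 * b_far) + np.sqrt(a_near) * (1 - 2 * b_near)  # superposition
    noise_std = np.sqrt(0.5 / 10 ** (snr_db / 10))
    y_far = x + noise_std * rng.standard_normal(n_bits)
    y_near = x + noise_std * rng.standard_normal(n_bits)
    # Far user: decode its own (high-power) symbol directly, treating the rest as noise.
    far_hat = (y_far < 0).astype(int)
    # Near user: first decode the far user's symbol, cancel it, then decode its own (SIC).
    far_at_near = (y_near < 0).astype(int)
    residual = y_near - np.sqrt(a_far) * (1 - 2 * far_at_near)
    near_hat = (residual < 0).astype(int)
    return np.mean(far_hat != b_far), np.mean(near_hat != b_near)

print(pd_noma_ber(snr_db=12.0))
```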
Item Smart attendance monitoring system using computer vision. (2019) Mothwa, Louis.; Tapamo, Jules-Raymond.; Mapayi, Temitope.
The monitoring of student attendance remains a fundamental and vital part of any educational institution. The attendance of students in classes can have an impact on their academic performance. With the gradual increase in the number of students, it becomes a challenge for institutions to manage their attendance. The traditional attendance monitoring system requires a considerable amount of time due to the manual recording of names and the circulation of a paper-based attendance sheet for students to sign. The paper-based attendance recording method and some existing automated systems, such as mobile applications, Radio Frequency Identification (RFID), Bluetooth, and fingerprint attendance models, are prone to fake results and time wasting. The limitations of the traditional attendance monitoring system stimulated the adoption of computer vision to stand in the gap. Student attendance can be monitored with candidate biometric systems such as iris recognition and face recognition. Among these, face recognition has the greater potential because of its non-intrusive nature. Although some automated attendance monitoring systems have been proposed, poor system modelling negatively affects them. In order to improve the success of automated systems, this research proposes a smart attendance monitoring system that uses facial recognition to monitor student attendance in a classroom. A time-integrated model is provided to monitor student attendance throughout the lecture period by registering the attendance information at regular time intervals. A multi-camera system is also proposed to guarantee accurate capturing of the students. The proposed multi-camera based system is tested using a real-time database in an experimental class at the University of KwaZulu-Natal (UKZN). The results show that the proposed smart attendance monitoring system is reliable, with an average accuracy rate of 98%.
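A skeletal sketch, under assumptions, of the time-integrated registration loop described above: at fixed intervals during the lecture, frames from each camera are passed to a face recognition routine (the recognise_students helper and the camera interface are hypothetical), and the recognised identities are accumulated; a student is marked present if seen in enough intervals.

```python
import time
from collections import defaultdict

def monitor_attendance(cameras, recognise_students, lecture_minutes=45,
                       interval_seconds=300, presence_ratio=0.5):
    """Register recognised students at regular intervals and mark those seen in
    at least `presence_ratio` of the intervals as present."""
    n_intervals = (lecture_minutes * 60) // interval_seconds
    seen = defaultdict(int)
    for _ in range(int(n_intervals)):
        present_now = set()
        for cam in cameras:                            # multiple cameras cover the classroom
            frame = cam.read()                         # assumed camera interface
            present_now.update(recognise_students(frame))   # assumed face recogniser
        for student_id in present_now:
            seen[student_id] += 1
        time.sleep(interval_seconds)
    return {sid for sid, count in seen.items() if count >= presence_ratio * n_intervals}
```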