Computer Engineering
Permanent URI for this community: https://hdl.handle.net/10413/6531
Browsing Computer Engineering by Title
Now showing 1 - 20 of 28
Item Candidate generation and validation techniques for pedestrian detection in thermal (infrared) surveillance videos.(2022) Oluyide, Oluwakorede Monica.; Walingo, Tom Mmbasu.; Tapamo, Jules-Raymond.
Video surveillance systems have become prevalent. Factors responsible for this prevalence include, but are not limited to, rapid advancements in technology, reduction in the cost of surveillance systems and changes in user demand. Research in video surveillance is driven mainly by rising global security needs, which in turn increase the demand for proactive systems that monitor persistently. Persistent monitoring is a challenge for most video surveillance systems because they depend on visible light cameras. Visible light cameras depend on the presence of external light and can easily be undermined by over-, under- or non-uniform illumination. Thermal infrared cameras have been considered as alternatives to visible light cameras because they measure the intensity of infrared energy emitted from objects and so can function persistently. Many proposed approaches reuse methods developed for visible footage, but these tend to underperform on infrared images because thermal footage has different characteristics. This thesis aims to increase the accuracy of pedestrian detection in thermal infrared surveillance footage by incorporating strategies into existing frameworks used in visible image processing for IR pedestrian detection, without needing to assume a model for the image distribution in advance. Two novel techniques for candidate generation were therefore formulated. The first is an entropy-based histogram modification algorithm that incorporates an energy-loss strategy to iteratively modify the histogram of an image for background elimination and pedestrian retention. The second is a background subtraction method featuring a strategy for building a reliable background image without needing to use the whole video frame.
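The general background-subtraction idea used for candidate generation can be sketched with a running-average background model. This is only an illustrative sketch of the standard technique, not the thesis's algorithm; the blend factor, threshold and toy frame values below are assumptions.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average background model: blend the new frame in slowly."""
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=25):
    """Pixels that differ strongly from the background are pedestrian candidates."""
    return np.abs(frame.astype(float) - bg) > thresh

# Toy 8x8 "thermal" frame: static background with one warm, pedestrian-like blob.
bg = np.full((8, 8), 100.0)      # current background intensity estimate
frame = bg.copy()
frame[2:4, 2:4] = 180.0          # warm region emits more infrared energy
mask = foreground_mask(bg, frame)
print(int(mask.sum()))           # 4 candidate pixels (the 2x2 blob)
bg = update_background(bg, frame)
```

A real system would then validate such candidate regions before declaring a detection.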
Furthermore, pedestrian detection involves simultaneously solving several sub-tasks, each of which must be adapted to the characteristics of IR imagery. Therefore, a novel semi-supervised single model for pedestrian detection was formulated that eliminates the need for separate candidate generation and validation modules by integrating region and boundary properties of the image with motion patterns, such that all the fine-tuning and adjustment happens during energy minimization. Performance evaluations have been performed on four publicly available benchmark surveillance datasets consisting of footage taken under a wide variety of weather conditions and from different perspectives.

Item Contributions into holistic human action recognition.(2020) Toudjeu, Tchangou Ignance.; Tapamo, Jules-Raymond.
In this thesis we holistically investigate the interpretation of human actions in both still images and videos. Human action recognition is currently a research problem of great interest in both academia and industry due to its potential applications, which include security surveillance, sports annotation, human-computer interaction, and robotics. Action recognition, the process of labelling actions from sensory observations, can be defined over a sequence of movements performed by a human during an executed task. Such a process, when considering visual observations, is quite challenging and faces issues such as background clutter, shadows, illumination variations, occlusions, changes in scale, changes in the person performing the action, and viewpoint variations. Although many approaches to the development of human action recognition systems have been proposed in the literature, they have focused more on recognition accuracy while ignoring the computational complexity accompanying the recognition process. However, a human action recognition system that is both effective and efficient, and can operate in real time, is needed.
Firstly, we review, evaluate and compare the most prominent state-of-the-art feature extraction representations, categorized into handcrafted feature based techniques and deep learning feature based techniques. Secondly, we propose holistic approaches in each of the categories. The first holistic approach takes advantage of existing slope patterns in motion history images, which are a simple two-dimensional representation of video, and reduces the running time of action recognition. The second, based on circular derivative local binary patterns, outperforms the LBP based state-of-the-art techniques and addresses the issue of dimensionality by producing a feature descriptor of minimal dimension with little compromise on recognition accuracy. The third introduces a preprocessing step in a proposed 2D convolutional neural network to deal with the same dimensionality issue differently in the deep learning setting. Here the temporal dimension is embedded into motion history images before being learned by a two-dimensional convolutional neural network. Thirdly, three datasets (JAFFE, KTH and the Pedestrian Action dataset) were used to validate the proposed human action recognition models. Finally, we show that better performance in comparison to the state-of-the-art methods can be achieved using holistic feature based techniques.

Item Correcting inter-sectional accuracy differences in drowsiness detection systems using generative adversarial networks (GANs)(2020) Ngxande, Mkhuseli.; Tapamo, Jules-Raymond.; Burke, Michael.
Road accidents contribute to many injuries and deaths among the human population. There is substantial evidence that drowsiness is one of the most prominent causes of road accidents all over the world. This results in fatalities and severe injuries for drivers, passengers, and pedestrians.
These alarming facts are raising interest in equipping vehicles with robust driver drowsiness detection systems to minimise accident rates. One of the primary concerns of the motor industry is the safety of passengers, and as a consequence it has invested significantly in research and development to equip vehicles with systems that can help minimise road accidents. A number of research endeavours have attempted to use artificial intelligence, and particularly Deep Neural Networks (DNNs), to build intelligent systems that can detect drowsiness automatically. However, datasets are crucial when training a DNN. When datasets are unrepresentative, trained models are prone to bias because they are unable to generalise. This is particularly problematic for models trained in specific cultural contexts, which may not represent a wide range of races, and thus fail to generalise. This is a specific challenge for the driver drowsiness detection task, where most publicly available datasets are unrepresentative as they cover only certain ethnicity groups. This thesis investigates the problem of an unrepresentative dataset in the training phase of Convolutional Neural Network (CNN) models. Firstly, CNNs are compared with several machine learning techniques to establish their superior suitability for the driver drowsiness detection task. An investigation into the implementation of CNNs was performed and highlighted that publicly available datasets such as NTHU, DROZY and CEW do not represent a wide spectrum of ethnicity groups and lead to biased systems. A population bias visualisation technique was proposed to help identify, on a picture grid, the regions or individuals on which a model fails to generalise. Furthermore, the use of Generative Adversarial Networks (GANs) with lightweight convolutions called Depthwise Separable Convolutions (DSCs) for image translation to multi-domain outputs was investigated in an attempt to generate synthetic datasets.
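The appeal of depthwise separable convolutions in the GAN mentioned above is their reduced parameter count relative to standard convolutions. The short sketch below shows the standard counting argument; the kernel size and channel counts are illustrative assumptions, not values from the thesis.

```python
def standard_conv_params(k, c_in, c_out):
    # A standard k x k convolution learns one k x k x c_in kernel per output channel.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise: one k x k filter per input channel;
    # pointwise: a 1x1 convolution mixing c_in channels into c_out.
    return k * k * c_in + c_in * c_out

std = standard_conv_params(3, 64, 128)        # 73728 weights
dsc = depthwise_separable_params(3, 64, 128)  # 8768 weights
print(std / dsc)                              # roughly 8.4x fewer parameters
```

This factorisation is what makes DSC-based generators lighter to train and run.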
This thesis further showed that GANs can be used to generate more realistic images with varied facial attributes for predicting drowsiness across multiple ethnicity groups. Lastly, a novel framework was developed to detect bias and correct it using synthetic images produced by GANs. Training models using this framework results in a substantial performance boost.

Item Energy efficient medium access protocol for DS-CDMA based wireless sensor networks.(2012) Thippeswamy, Muddenahalli Nagendrappa.; Takawira, Fambirai.
Wireless Sensor Networks (WSNs), a new class of devices, have the potential to revolutionize the capturing, processing, and communication of critical data at low cost. Sensor networks consist of small, low-power, and low-cost devices with limited computational and wireless communication capabilities. These sensor nodes can only transmit a finite number of messages before they run out of energy. Thus, reducing the energy consumption per node for end-to-end data transmission is an important design consideration for WSNs. Medium Access Control (MAC) protocols aim to provide collision-free access to the wireless medium. MAC protocols also provide the most direct control over the utilization of the transceiver, which consumes most of the energy of a sensor node. The major part of this thesis is based on a proposed MAC protocol, called the Distributed Receiver-oriented MAC (DRMACSN) protocol, for code division multiple access (CDMA) based WSNs. The proposed MAC protocol employs a channel-load blocking scheme to reduce energy consumption in the network. The performance of the proposed MAC protocol is verified through simulations of average packet throughput, average delay and energy consumption. The performance of the proposed MAC protocol is also compared to the IEEE 802.15.4 MAC and the MAC without the channel-load sensing scheme via simulations.
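The channel-load blocking idea in the DRMACSN abstract can be reduced to a simple admission rule: defer a transmission when the measured channel load makes success unlikely, saving the energy the doomed transmission would have wasted. The DRMACSN specifics are not given in the abstract; the threshold rule below is a generic, assumed illustration.

```python
def admit_transmission(active_users, max_load):
    """Channel-load blocking: admit a new message only if the measured load
    (here, the number of concurrent CDMA transmissions, i.e. multiple-access
    interference) is below a tolerated threshold."""
    return active_users < max_load

# Suppose at most 5 simultaneous spreading-code users can be tolerated:
print(admit_transmission(3, 5))  # True  -> transmit
print(admit_transmission(5, 5))  # False -> block now, retry later, save energy
```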
An analytical model is derived to analyse the average packet throughput and average energy consumption of the DRMACSN MAC protocol. The packet success probability and the message success and blocking probabilities are derived for the DRMACSN MAC protocol. Discrete-time multiple-vacation queueing models are used to model the delay behaviour of the DRMACSN MAC protocol. The Probability Generating Functions (PGFs) of the arrivals of new messages in the sleep, back-off and transmit states are derived. The PGF of arrivals of retransmitted packets of a new message is also derived. The queue length and delay expressions for both the Bernoulli and Poisson message arrival models are derived. Comparison between the analytical and simulation results shows that the analytical model is accurate. The proposed MAC protocol is aimed at improved average packet throughput, reduced packet delay and reduced energy consumption for WSNs.

Item Enhanced spectral efficiency schemes for space-time block coded spatial modulation.(2019) Motsa, Sibusiso Thabiso.; Xu, Hongjun.
The ever-growing demand for high data rate, low latency and energy efficient transmission schemes has seen an increasing popularity of multiple-input multiple-output (MIMO) schemes. One such scheme is the orthogonal space-time block code (STBC) scheme introduced by Alamouti, which provides full diversity without sacrificing data rate. The introduction of spatial multiplexing to STBC through spatial modulation (SM) improves the performance and spectral efficiency whilst eliminating transmit antenna synchronization and inter-channel interference (ICI) at the receiver. In this dissertation, we investigate and evaluate the error performance of both the STBC and SM MIMO schemes. We then exploit the advantages of both schemes in the space-time block coded spatial modulation (STBC-SM) scheme, resulting in a highly spectrally efficient scheme.
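The orthogonality that gives the Alamouti STBC its full diversity can be checked numerically. This is a sketch of the standard 2x2 Alamouti codeword (not material specific to the thesis); the two unit-energy QPSK symbols are illustrative.

```python
import numpy as np

def alamouti_codeword(s1, s2):
    """Standard Alamouti space-time block: rows are the two time slots,
    columns are the two transmit antennas."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

s1 = (1 + 1j) / np.sqrt(2)   # unit-energy QPSK symbols (illustrative)
s2 = (1 - 1j) / np.sqrt(2)
X = alamouti_codeword(s1, s2)
G = X.conj().T @ X           # orthogonality: X^H X = (|s1|^2 + |s2|^2) I
print(np.round(G, 6))
```

The identity-times-scalar Gram matrix is what lets the receiver decouple the two symbols with simple linear processing.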
Motivated by the requisite for higher data rate transmission schemes, we expand the orthogonal STBC transmission matrix to further improve the spectral efficiency of space-time block coded spatial modulation. The fundamental idea is to keep the size of the amplitude/phase modulation (APM) symbol set of STBC the same. Therefore, a unitary matrix transformation technique is applied to the conventional STBC matrix. This technique prevents an increase in the peak-to-average power ratio of the transmitted symbols. A decrease in the phase angle of the unitary matrix yields an increase in the number of information bits transmitted, subsequently increasing the spectral efficiency of the system. A new system referred to as enhanced spectral efficiency space-time block coded spatial modulation (E-STBC-SM) is proposed. Moreover, a tight closed-form lower bound is derived to estimate the average BER of the E-STBC-SM system over a Rayleigh frequency-flat fading channel and validated with Monte Carlo simulations. Comparisons of the proposed E-STBC-SM scheme and the conventional STBC-SM scheme are carried out with four receive antennas in all cases. The E-STBC-SM scheme virtually retains the BER performance of the STBC-SM scheme, with a maximum degradation of 0.6 dB across modulation orders 16, 32 and 64 of a PSK modulator. An increase of between 2 and 5 information bits is obtained across the mentioned modulation orders by altering the phase angle of the unitary matrix transform incorporated with the conventional STBC-SM scheme, thus improving the spectral efficiency.
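The claim that a unitary transform cannot raise the peak-to-average power ratio follows from unitarity preserving symbol magnitudes. The diagonal phase-rotation matrix and the 16-PSK symbols below are an assumed, minimal illustration of that property, not the exact E-STBC-SM transform.

```python
import numpy as np

theta = np.pi / 8
U = np.array([[np.exp(1j * theta), 0],
              [0, np.exp(-1j * theta)]])     # a diagonal unitary transform

M = 16
psk = np.exp(2j * np.pi * np.arange(M) / M)  # unit-modulus 16-PSK symbols
pairs = np.stack([psk[:8], psk[8:]])         # arbitrary symbol pairs, one per antenna
rotated = U @ pairs

# Phase rotation leaves every symbol's magnitude, hence the PAPR, unchanged.
print(np.allclose(np.abs(rotated), 1.0))     # True
```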
In the rare case of the E-STBC-SM scheme configured with M = 32 and θ = π/2, an improvement of 0.2 dB in error performance was observed.

Item Error performance analysis of cross QAM and space-time labeling diversity for cross QAM.(2019) Kamdar, Muhammad Wazeer.; Xu, Hongjun.
Abstract available in the PDF.

Item Error performance analysis of n-ary Alamouti scheme with signal space diversity.(2018) Sibanda, Nathael.; Xu, Hongjun.
In this dissertation, a high-rate Alamouti scheme with Signal Space Diversity is developed to improve both the spectral efficiency and the overall error performance of wireless communication links. This scheme uses higher-order modulation techniques (M-ary quadrature amplitude modulation (M-QAM) and N-ary phase shift keying (N-PSK)). Hence, this dissertation presents the mathematical models, design methodology and theoretical analysis of this high-rate Alamouti scheme with Signal Space Diversity. To improve spectral efficiency in multiple-input multiple-output (MIMO) wireless communications, an N-ary Alamouti M-ary quadrature amplitude modulation (M-QAM) scheme is proposed in this thesis. The proposed N-ary Alamouti M-QAM scheme uses N-ary phase shift keying (N-PSK) and M-QAM. The proposed scheme is investigated in Rayleigh fading channels with additive white Gaussian noise (AWGN). Based on the union bound, a theoretical average bit error probability (ABEP) of the system is formulated. The simulation results validate the theoretical ABEP. Both theoretical and simulation results show that the proposed scheme improves spectral efficiency by 0.5 bit/sec/Hz in a 2 × 4 16-PSK Alamouti 16-QAM system compared to the conventional Alamouti scheme (16-QAM). To further improve the error performance of the proposed N-ary Alamouti M-QAM scheme, an N_T × N_R N-ary Alamouti coded M-QAM scheme with signal space diversity (SSD) is also proposed in this thesis.
In this thesis, based on the nearest neighbour (NN) approach, a theoretical closed-form expression for the ABEP is further derived in Rayleigh fading channels. Simulation results also validate the theoretical ABEP for the N-ary Alamouti M-QAM scheme with SSD. Both theoretical and simulation results further show that the 2 × 4 4-PSK Alamouti 256-QAM scheme with SSD can achieve a 0.8 dB gain compared to the 2 × 4 4-PSK Alamouti 256-QAM scheme without SSD.

Item Facial expression recognition using covariance matrix descriptors and local texture patterns.(2017) Naidoo, Ashaylin.; Tapamo, Jules-Raymond.; Khutlang, Rethabile.
Facial expression recognition (FER) is a powerful tool that is emerging rapidly due to increased computational power in current technologies. It has many applications in the fields of human-computer interaction, psychological behaviour analysis, and image understanding. However, FER is not yet fully realised due to the lack of an effective facial feature descriptor. The covariance matrix as a feature descriptor is popular in object detection and texture recognition. Its innate ability to fuse multiple local features within a domain is proving to be useful in applications such as biometrics. Also prevalent in pattern recognition are local texture patterns such as the Local Binary Pattern (LBP) and Local Directional Pattern (LDP), because of their fast computation and robustness against illumination variations. This study examines the performance of covariance feature descriptors that incorporate local texture patterns in facial expression recognition. The proposed method focuses on generating feature descriptors that extract robust and discriminative features which can counter extrinsic factors affecting facial expression recognition, such as illumination, pose, scale, rotation and occlusion. The study also explores the influence of using holistic versus component-based approaches to FER.
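The LBP operator mentioned above is simple enough to sketch directly: each pixel is coded by thresholding its 8 neighbours against the centre value. This is the textbook 3x3 LBP, shown here only to make the descriptor concrete; the bit ordering and sample patch are illustrative choices.

```python
import numpy as np

def lbp_code(patch):
    """Basic 3x3 Local Binary Pattern: threshold the 8 neighbours against the
    center pixel and read them off as an 8-bit code (clockwise from top-left)."""
    c = patch[1, 1]
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(int(v >= c) << i for i, v in enumerate(neighbours))

patch = np.array([[6, 5, 2],
                  [7, 6, 1],
                  [9, 8, 7]])
print(lbp_code(patch))   # 241: bits set where a neighbour >= the center value 6
```

A histogram of such codes over an image region is the usual LBP texture feature; in the covariance-descriptor setting, the per-pixel codes instead join the feature vector being fused.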
A novel feature descriptor referred to as Local Directional Covariance Matrices (LDCM) is proposed. The covariance descriptors consist of fused features such as location, intensity and filter responses, and include LBP and LDP in the covariance structure. Tests examine the accuracy of different variations of covariance features and the impact of segmenting the face into equal-sized blocks or special landmark regions, i.e. eyes, nose and mouth, for classification. The results on the JAFFE, CK+ and ISED facial expression databases establish that the proposed descriptor achieves a high level of performance for FER at a reduced feature size. The effectiveness of using a component-based approach with special landmarks was stable across different datasets and environments.

Item Feature regularization and learning for human activity recognition.(2018) Osayamwen, Festus Osazuwa.; Tapamo, Jules-Raymond.
Feature extraction is an essential component in the design of a human activity recognition model. However, relying on extracted features alone for learning often makes the model suboptimal. Therefore, this research work seeks to address this potential problem by investigating feature regularization. Feature regularization is used to encapsulate discriminative patterns that are needed for better and more efficient model learning. Firstly, a within-class subspace regularization approach is proposed for eigenfeature extraction and regularization in human activity recognition. In this approach, the within-class subspace is modelled using more eigenvalues from the reliable subspace to obtain a four-parameter modelling scheme. This model enables a better and truer estimation of the eigenvalues that are distorted by the small sample size effect. This regularization is done in one piece, thereby avoiding the undue complexity of modelling the eigenspectrum separately.
The whole eigenspace is used for performance evaluation because feature extraction and dimensionality reduction are done at a later stage of the evaluation process. Results show that the proposed approach has better discriminative capacity than several other subspace approaches for human activity recognition. Secondly, with the use of a likelihood prior probability, a new regularization scheme that improves the loss function of a deep convolutional neural network is proposed. The results obtained from this work demonstrate that a well regularized feature yields better class discrimination in human activity recognition. The major contribution of the thesis is the development of feature extraction strategies for determining the discriminative patterns needed for efficient model learning.

Item Fingerprint identification using distributed computing.(2012) Khanyile, Nontokozo Portia.; Dube, Erick.; Tapamo, Jules-Raymond.
Biometric systems such as face, palm and fingerprint recognition are very computationally expensive. Ever-growing biometric database sizes have created a need for faster search algorithms. High resolution images are expensive to process and slow down less powerful extraction algorithms. There is an apparent need to improve both the signal processing and the searching algorithms. Researchers have continually searched for new ways of improving recognition algorithms in order to keep up with the high pace of the scientific and information security world. Most such developments, however, are architecture- or hardware-specific and do not port well to other platforms. This research proposes a cheaper and portable alternative. Using the Single Program Multiple Data programming architecture, a distributed fingerprint recognition algorithm is developed and executed on a powerful cluster.
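The SPMD-style partitioned search used in the distributed fingerprint system can be sketched in a few lines: split the database across workers, let each report its local best match, then take the global best. Everything below (the bit-string "templates", the toy similarity function, the two-worker split) is an assumed stand-in for real minutiae matching, chosen only to show the control flow.

```python
from concurrent.futures import ThreadPoolExecutor

def similarity(template, probe):
    """Stand-in matcher: fraction of agreeing positions between toy bit-string
    templates. A real system would compare minutiae instead."""
    return sum(a == b for a, b in zip(template, probe)) / len(probe)

def search_partition(partition, probe):
    """Each node independently scans its slice and reports its best match."""
    return max(partition, key=lambda t: similarity(t[1], probe))

database = [("id0", "110010"), ("id1", "101101"),
            ("id2", "101111"), ("id3", "000110")]
probe = "101111"
partitions = [database[:2], database[2:]]        # even split across 2 "nodes"

with ThreadPoolExecutor(max_workers=2) as pool:
    local_best = list(pool.map(search_partition, partitions, [probe] * 2))

best_id, best_tpl = max(local_best, key=lambda t: similarity(t[1], probe))
print(best_id)   # "id2" -- accepted only if its score clears a threshold
```

On a real cluster the workers would be separate processes or machines; the reduce step (taking the max of the local bests) is the same.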
The first part of the parallelization distributes the image enhancement algorithm, which comprises a series of computationally intensive image processing operations. Different processing elements work concurrently on different parts of the same image in order to speed up the processing. The second part speeds up searching/matching through a parallel search. The database is partitioned as evenly as possible amongst the available processing nodes, which work independently to search their respective partitions. Each processor returns the match with the highest similarity score, and the template with the highest score among those returned is reported as the match, provided the score is above a certain threshold. The system performance with respect to response time is then formalized in a performance model which can be used to predict the performance of a distributed system given network parameters and the number of processing nodes. The proposed algorithm introduces a novel approach to memory distribution of block-wise image processing operations and discusses three different ways to process pixels along the partitioning axes of the distributed images. The distribution and parallelization of the recognition algorithm gains up to 12.5 times performance in matching and 10.2 times in enhancement.

Item Flat fingerprint classification using a rule-based technique, based on directional patterns and similar points.(2016) Dorasamy, Kribashnee.; Webb-Ray, Leandra.; Tapamo, Jules-Raymond.
Abstract available in PDF file.

Item Fusion of time of flight (ToF) camera's ego-motion and inertial navigation.(2013) Ratshidaho, Thikhathali Terence.; Tapamo, Jules-Raymond.
For mobile robots to navigate autonomously, one of the most important and challenging tasks is localisation.
Localisation refers to the process whereby a robot locates itself within a map of a known environment or with respect to a known starting point within an unknown environment. Localisation of a robot in an unknown environment is done by tracking the trajectory of the robot whilst knowing the initial pose. Trajectory estimation becomes challenging if the robot is operating in an unknown environment that has a scarcity of landmarks, is GPS denied, and is slippery and dark, such as in underground mines. This dissertation addresses the problem of estimating a robot's trajectory in underground mining environments. In the past, this problem has been addressed by using a 3D laser scanner. 3D laser scanners are expensive and consume a lot of power, even though they have high measurement accuracy and a wide field of view. For this research work, trajectory estimation is accomplished by fusing the ego-motion provided by a Time of Flight (ToF) camera with measurement data provided by a low cost Inertial Measurement Unit (IMU). The fusion is performed using the Kalman filter algorithm on a mobile robot moving on a 2D planar surface. The results show a significant improvement in the trajectory estimation. Trajectory estimation using the ToF camera only is erroneous, especially when the robot is rotating; the fused trajectory estimation algorithm is able to estimate accurate ego-motion even when the robot is rotating.

Item Gaussian mixture model classifiers for detection and tracking in UAV video streams.(2017) Pillay, Treshan.; Naidoo, Bashan.
Manual visual surveillance systems are subject to a high degree of human error and operator fatigue. The automation of such systems often employs detectors, trackers and classifiers as fundamental building blocks. Detection, tracking and classification are especially useful and challenging in Unmanned Aerial Vehicle (UAV) based surveillance systems. Previous solutions have addressed these challenges via complex classification methods.
This dissertation proposes less complex Gaussian Mixture Model (GMM) based classifiers that can simplify the process; data is represented as a reduced set of model parameters, and classification is performed in the low-dimensionality parameter space. The specification and adoption of GMM based classifiers on the UAV visual tracking feature space formed the principal contribution of the work. This methodology can be generalised to other feature spaces. This dissertation presents two main contributions in the form of submissions to ISI accredited journals. In the first paper, the objectives are demonstrated with a vehicle detector incorporating a two-stage GMM classifier applied to a single feature space, namely the Histogram of Oriented Gradients (HoG). The second paper demonstrates the objectives with a vehicle tracker using colour histograms (in RGB and HSV), GMM classifiers and a Kalman filter. The proposed works are comparable to related works, with testing performed on benchmark datasets. In the tracking domain for such platforms, tracking alone is insufficient; adaptive detection and classification can assist in search space reduction, the building of knowledge priors and improved target representations. Results show that the proposed approach improves performance and robustness. Findings also indicate potential further enhancements, such as a multi-mode tracker with global and local tracking based on a combination of both papers.

Item Granting privacy and authentication in mobile ad hoc networks.(2012) Balmahoon, Reevana.; Peplow, Roger Charles Samuel.
The topic of the research is granting privacy and authentication in Mobile Ad Hoc Networks (MANETs) that are under the authority of a certificate authority (CA) that is often not available. Privacy is implemented in the form of an anonymous identity, or pseudonym, that ideally has no link to the real identity.
Authentication and privacy are conflicting tenets of security: the former ensures a user's identity is always known and certified, while the latter hides a user's identity. The goal was to determine whether it is possible for a node to produce pseudonyms for itself that would carry the authority of the CA while being traceable by the CA, and would otherwise be completely anonymous. The first part of the dissertation places Vehicular Ad Hoc Networks (VANETs) into context, as this is the application of MANETs considered. This is followed by a detailed survey and analysis of the privacy aspects of VANETs. Thereafter, the solution is proposed, documented and analysed. Lastly, the dissertation is concluded and the contributions made are listed. The solution implements a novel approach for making proxies readily available to vehicles, and does indeed incorporate privacy and authentication in VANETs such that the pseudonyms produced are always authentic and traceable.

Item Hybrid generalized non-orthogonal multiple access for the 5G wireless networks.(2018) Zitha, Samson Manyani.; Walingo, Tom Mmbasu.
The deployment of 5G networks will lead to an increase in capacity, spectral efficiency, low latency and massive connectivity for wireless networks. They will still face the challenges of resource and power optimization, increasing spectrum efficiency and energy optimization, among others. Furthermore, the standardized technologies to mitigate these challenges still need to be developed and are a challenge in themselves. In the predecessor LTE-A networks, the orthogonal frequency division multiple access (OFDMA) scheme is used as the baseline multiple access scheme. It allows users to be served orthogonally in either time or frequency to alleviate narrowband interference and impulse noise.
The spectrum limitations of orthogonal multiple access (OMA) schemes have driven the development of non-orthogonal multiple access (NOMA) schemes to enable 5G networks to achieve high spectral efficiency and high data rates. NOMA schemes non-orthogonally co-multiplex different users on the same resource elements (REs) (i.e. time-frequency resources, OFDMA subcarriers, or spreading codes) via the power domain (PD) or the code domain (CD) at the transmitter, and successfully separate them at the receiver by applying multi-user detection (MUD) algorithms. The currently developed NOMA schemes, referred to as generalized NOMA (G-NOMA) technologies, include Interleave Division Multiple Access (IDMA), Sparse Code Multiple Access (SCMA), Low-Density Spreading Multiple Access (LDSMA), the Multi-User Shared Access (MUSA) scheme and Pattern Division Multiple Access (PDMA). These protocols are still under refinement, and their performance and applicability have not been thoroughly investigated. The first part of this work undertakes a thorough investigation and analysis of the performance of the existing G-NOMA schemes and their applicability. Generally, G-NOMA schemes achieve overloading through non-orthogonal spectrum resource allocation, which enables massive connectivity of users and devices and offers improved system spectral efficiency. Like any other technology, G-NOMA schemes need to be improved to further harvest their benefits for 5G networks, leading to the requirement of hybrid G-NOMA (HG-NOMA) schemes. The second part of this work develops an HG-NOMA scheme to alleviate the 5G challenges of resource allocation, inter- and cross-tier interference management and energy efficiency. This work develops and investigates the performance of an energy efficient HG-NOMA resource allocation scheme for a two-tier heterogeneous network that alleviates cross-tier interference and improves system throughput via spectrum resource optimization.
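The power-domain co-multiplexing and multi-user detection described above can be illustrated with the textbook two-user case: superpose two users' symbols with unequal powers, then let the near user apply successive interference cancellation (SIC). The power split, BPSK symbols and noiseless channel below are illustrative assumptions, not parameters from the thesis.

```python
import numpy as np

# Power-domain NOMA toy example: two users share one resource element.
p_far, p_near = 0.8, 0.2                  # power split (far user gets more power)
s_far, s_near = 1.0, -1.0                 # the two users' BPSK symbols
x = np.sqrt(p_far) * s_far + np.sqrt(p_near) * s_near   # superposed transmit signal

y = x                                     # noiseless channel, for clarity
s_far_hat = np.sign(y)                    # the far user's high-power symbol dominates
y_sic = y - np.sqrt(p_far) * s_far_hat    # SIC: subtract the decoded far-user signal
s_near_hat = np.sign(y_sic)               # then recover the near user's symbol
print(s_far_hat, s_near_hat)              # 1.0 -1.0
```

Both symbols are recovered from a single resource element, which is exactly the overloading that NOMA trades receiver complexity for.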
By considering the combinatorial problem of resource pattern assignment and power allocation, the HG-NOMA scheme will enable a new transmission policy that allows more than two macro-user equipment’s (MUEs) and femto-user equipment’s (FUEs) to be co-multiplexed on the same time-frequency RE increasing the spectral efficiency. The performance of the developed model is shown to be superior to the PD-NOMA and OFDMA schemes.Item Hybrid generalized non-orthogonal multiple access for the 5G wireless networks.(2018) Zitha, Samson Manyani.; Walingo, Tom Mmbasu.The deployment of 5G networks will lead to an increase in capacity, spectral efficiency, low latency and massive connectivity for wireless networks. They will still face the challenges of resource and power optimization, increasing spectrum efficiency and energy optimization, among others. Furthermore, the standardized technologies to mitigate against the challenges need to be developed and are a challenge themselves. In the current predecessor LTE-A networks, orthogonal frequency multiple access (OFDMA) scheme is used as the baseline multiple access scheme. It allows users to be served orthogonally in either time or frequency to alleviate narrowband interference and impulse noise. Further spectrum limitations of orthogonal multiple access (OMA) schemes have resulted in the development of non-orthogonal multiple access (NOMA) schemes to enable 5G networks to achieve high spectral efficiency and high data rates. NOMA schemes unorthogonally co-multiplex different users on the same resource elements (RE) (i.e. time-frequency domain, OFDMA subcarrier, or spreading code) via power domain (PD) or code domain (CD) at the transmitter and successfully separating them at the receiver by applying multi-user detection (MUD) algorithms. 
The currently developed NOMA schemes, referred to as generalized NOMA (G-NOMA) technologies, include Interleave Division Multiple Access (IDMA), Sparse Code Multiple Access (SCMA), Low-Density Spreading Multiple Access (LDSMA), Multi-User Shared Access (MUSA) and Pattern Division Multiple Access (PDMA). These protocols are still under refinement, and their performance and applicability have not been thoroughly investigated. The first part of this work therefore undertakes a thorough investigation and analysis of the performance and applicability of the existing G-NOMA schemes. Generally, G-NOMA schemes achieve overloading through non-orthogonal spectrum resource allocation, which enables massive connectivity of users and devices and improves system spectral efficiency. Like any other technology, G-NOMA schemes need further improvement to fully harvest their benefits in 5G networks, leading to the requirement for Hybrid G-NOMA (HG-NOMA) schemes. The second part of this work develops an HG-NOMA scheme to alleviate the 5G challenges of resource allocation, inter- and cross-tier interference management, and energy efficiency. It develops and investigates the performance of an energy-efficient HG-NOMA resource allocation scheme for a two-tier heterogeneous network that alleviates cross-tier interference and improves system throughput via spectrum resource optimization. By considering the combinatorial problem of resource pattern assignment and power allocation, the HG-NOMA scheme enables a new transmission policy that allows more than two macro-user equipments (MUEs) and femto-user equipments (FUEs) to be co-multiplexed on the same time-frequency RE, increasing spectral efficiency. 
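The power-domain superposition at the transmitter and successive interference cancellation (SIC) at the receiver that underlie PD-NOMA can be illustrated with a minimal two-user sketch. This is a generic textbook-style illustration, not the thesis's model; the power split, BPSK modulation and noise level are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Two users share one resource element; the weaker (far) user gets more power.
p_far, p_near = 0.8, 0.2                  # illustrative power split, p_far + p_near = 1
x_far = 2 * rng.integers(0, 2, n) - 1     # BPSK symbols for the far user
x_near = 2 * rng.integers(0, 2, n) - 1    # BPSK symbols for the near user

# Superposition coding: both users transmitted on the same RE.
tx = np.sqrt(p_far) * x_far + np.sqrt(p_near) * x_near
rx = tx + 0.05 * rng.standard_normal(n)   # mild additive white Gaussian noise

# SIC at the near receiver:
# 1) detect the far user's stronger signal, treating the near signal as noise;
far_hat = np.sign(rx)
# 2) subtract its reconstructed contribution, then detect the near user's own symbol.
residual = rx - np.sqrt(p_far) * far_hat
near_hat = np.sign(residual)

ber_near = np.mean(near_hat != x_near)
print(f"near-user BER after SIC: {ber_near:.3f}")
```

With this power split and low noise, SIC separates both users almost perfectly; HG-NOMA additionally optimizes which users share an RE and how power is assigned.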
The performance of the developed model is shown to be superior to the PD-NOMA and OFDMA schemes.
Item Improvements of local directional pattern for texture classification.(2017) Shabat, Abuobayda Mohammed Mosa.; Tapamo, Jules-Raymond.The Local Directional Pattern (LDP) method has established its effectiveness and performance compared to the popular Local Binary Pattern (LBP) method in different applications. In this thesis, several extensions and modifications of LDP are proposed with the objective of increasing its robustness and discriminative power. LDP depends on the empirical choice of three for the number of significant bits used to code the responses of the Kirsch mask operation. In a first study, we applied LDP to informal settlements using various values for the number of significant bits k. It was observed that changing the number of significant bits changed the performance, depending on the application. LDP is based on the computation of Kirsch mask response values in eight directions, but it ignores the gray value of the center pixel, which may lead to a loss of significant information. Centered Local Directional Pattern (CLDP) is introduced to solve this issue, using the value of the center pixel based on its relations with neighboring pixels. LDP also generates a code based on the absolute value of the edge response; however, the sign of the original value indicates one of two trends (positive or negative) of the gradient. To capture the gradient trend, Signed Local Directional Pattern (SLDP) and Centered-SLDP (C-SLDP) are proposed, which compute the eight edge responses based on the two different directions (positive or negative) of the gradients. The Directional Local Binary Pattern (DLBP) is also introduced, which adopts directional information to represent texture images. 
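A minimal sketch of the DLBP coding step may help make this family of descriptors concrete. The eight Kirsch masks are standard; the thresholding rule below (comparing each directional response against the center pixel's intensity) is one plausible reading of the abstract's description, and the helper names are illustrative:

```python
import numpy as np

def kirsch_masks():
    """Generate the eight 3x3 Kirsch edge masks by rotating the border weights."""
    base = np.array([[-3, -3, 5],
                     [-3,  0, 5],
                     [-3, -3, 5]])
    border = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    vals = [base[r, c] for r, c in border]
    masks = []
    for k in range(8):
        m = np.zeros((3, 3), dtype=int)
        rolled = vals[-k:] + vals[:-k] if k else vals
        for (r, c), v in zip(border, rolled):
            m[r, c] = v
        masks.append(m)
    return masks

def dlbp_code(patch):
    """DLBP code of a 3x3 patch's center pixel: each of the eight directional
    Kirsch responses is thresholded against the center intensity (assumed rule)."""
    center = patch[1, 1]
    bits = [int((m * patch).sum() > center) for m in kirsch_masks()]
    return sum(b << i for i, b in enumerate(bits))

# Toy patch with a strong vertical edge on the right.
patch = np.array([[10, 10, 200],
                  [10, 50, 200],
                  [10, 10, 200]])
print(dlbp_code(patch))
```

Plain LDP would instead keep the k = 3 largest absolute responses; the variants in the thesis differ mainly in which values are thresholded and how the sign and center pixel are used.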
This method is more stable than both LDP and LBP because it utilizes the center pixel as a threshold for the edge responses of a pixel in eight directions, instead of employing the center pixel as the threshold for the pixel intensities of the neighbors, as in the LBP method. Angled Local Directional Pattern (ALDP) is also presented, with the objective of resolving two problems of the LDP method: the choice of the number of significant bits k, and taking the center pixel value into account. It computes the angle values for the edge response of a pixel in eight directions, one vector for each angle (0°, 45°, 90°, 135°). Each angle vector contains three values; the central value in each vector is chosen as a threshold for the other two neighboring pixels. Circular Local Directional Pattern (CILDP) is also presented, with the objective of better analysis, especially of textures at different scales. The method is built around a circular shape to compute the directional edge vector using different radii. The performances of LDP, LBP, CLDP, SLDP, C-SLDP, DLBP, ALDP and CILDP are evaluated using five classifiers (k-nearest neighbour (k-NN), Support Vector Machine (SVM), Perceptron, Naive Bayes (NB), and Decision Tree (DT)) applied to two texture datasets: the Kylberg dataset and the KTH-TIPS2-b dataset. The experimental results demonstrate that the proposed methods outperform both LDP and LBP.
Item Investigating machine and deep-learning model combinations for a two-stage IDS for IoT networks.(2021) Van der Walt, André.; Quazi, Tahmid Al-Mumit.; Van Niekerk, Brett.By 2025, there will be upwards of 75 billion IoT devices connected to the internet. Notable security incidents have shown that many IoT devices are insecure or misconfigured, leaving them vulnerable, often with devastating results. AI's learning, adaptable and flexible nature can be leveraged to provide network monitoring for IoT networks. 
This work proposes a novel two-stage IDS using layered machine- and deep-learning models. The applicability of seven algorithms is investigated using the BoT-IoT dataset. After replicating four algorithms from the literature, modifications to their application are explored, along with their ability to classify in three scenarios: 1) binary attack/benign, 2) multi-class attack with benign, and 3) multi-class attack only. Three additional algorithms are also considered. The modifications are shown to achieve F1-scores higher by 22.75% and training times shorter by 35.68 seconds on average than the four replicated algorithms. Potential benefits of the proposed two-stage system are examined, showing a reduction in threat detection/identification time of 0.51 s on average and an increase in threat classification F1-score of 0.05 on average. In the second half of the dissertation, algorithm combinations layered in the two-stage system are investigated. To facilitate comparison of time metrics, the classification scenarios from the first half of the dissertation are re-evaluated on the test PC's CPU. All two-stage combinations are then tested. The results show that a CNN binary classifier at stage one combined with a KNN 4-class model at stage two performs best, outperforming the 5-class (attack and benign) system of either algorithm. This system's first stage improves upon the 5-class system's classification time by 0.25 seconds, and the benign-class F1-score is improved by 0.23, indicating a significant improvement in the false positive rate. The system achieves an overall F1-score of 0.94, showing that the two-stage system would perform well as an IDS. 
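The layered arrangement described above, a binary screen followed by an attack-type classifier trained only on attack samples, can be sketched as a simple pipeline. The synthetic data, feature count and stage-one logistic-regression stand-in (in place of the dissertation's CNN) are all illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

# Synthetic stand-in for network-flow features: class 0 = benign, 1-4 = attack types.
X = rng.standard_normal((600, 5)) + np.repeat(np.arange(5), 120)[:, None]
y = np.repeat(np.arange(5), 120)

# Stage one: binary attack/benign screen.
stage1 = LogisticRegression(max_iter=1000).fit(X, (y > 0).astype(int))

# Stage two: 4-class attack-type model, trained on attack samples only.
attack = y > 0
stage2 = KNeighborsClassifier(n_neighbors=4).fit(X[attack], y[attack])

def classify(samples):
    """Two-stage pipeline: screen for attacks first, identify attack type second."""
    flagged = stage1.predict(samples).astype(bool)
    out = np.zeros(len(samples), dtype=int)   # 0 = benign
    if flagged.any():
        out[flagged] = stage2.predict(samples[flagged])
    return out

preds = classify(X)
print("training-set accuracy:", np.mean(preds == y))
```

The design point being illustrated is that stage two never sees samples stage one already rejected, which is what shortens detection time and reduces benign false positives in the two-stage system.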
Additionally, investigations arising from findings during the evaluation of the two-stage system are presented, namely GPU data-transfer overhead, the effect of data scaling and the effect of benign samples on stage two, giving a better understanding of how the dataset interacts with AI models and how they may be improved in future work.
Item Investigation of feature extraction algorithms and techniques for hyperspectral images.(2017) Adebanjo, Hannah Morenike.; Tapamo, Jules-Raymond.Hyperspectral images (HSIs) are remote-sensed images that are characterized by very high spatial and spectral dimensions and find applications, for example, in land cover classification, urban planning and management, security and food processing. Unlike conventional three-band RGB images, their high-dimensional data space creates a challenge for traditional image processing techniques, which are usually based on the assumption that sufficient training samples exist to increase the likelihood of high classification accuracy. However, the high cost and difficulty of obtaining ground truth for hyperspectral data sets make this assumption unrealistic and necessitate alternative methods for their processing. Several techniques have been developed to explore the rich spectral and spatial information in HSIs. Specifically, feature extraction (FE) techniques are introduced in the processing of HSIs as a necessary step before classification. They aim to transform the high-dimensional data of the HSI into data of a lower dimension while retaining as much spatial and/or spectral information as possible. In this research, we develop semi-supervised FE techniques which combine features of supervised and unsupervised techniques into a single framework for the processing of HSIs. Firstly, we developed a feature extraction algorithm known as Semi-Supervised Linear Embedding (SSLE) for the extraction of features in HSIs. 
The algorithm combines supervised Linear Discriminant Analysis (LDA) and unsupervised Locally Linear Embedding (LLE) to enhance class discrimination while also preserving the properties of classes of interest. The technique was developed based on the fact that LDA extracts features from HSIs by discriminating between classes of interest, and it can only extract C-1 features when there are C classes in the image. Experiments show that the SSLE algorithm overcomes this limitation of LDA and extracts features equal in number to the classes in HSIs. Secondly, a graphical manifold dimension reduction (DR) algorithm known as Graph Clustered Discriminant Analysis (GCDA) is developed. The algorithm dynamically selects labeled samples from the pool of available unlabeled samples in order to complement the few available labeled samples in HSIs. The selection is achieved by entwining K-means clustering with a semi-supervised manifold discriminant analysis. Using two HSI data sets, experimental results show that GCDA extracts features equal in number to the classes, with high classification accuracy when compared with other state-of-the-art techniques. Furthermore, we develop a window-based partitioning approach to preserve the spatial properties of HSIs when their features are being extracted. In this approach, the HSI is partitioned along its spatial dimension into n windows and the covariance matrix of each window is computed. The covariance matrices of the windows are then merged into a single matrix using the Kalman filtering approach, so that the resulting covariance matrix may be used for dimension reduction. Experiments show that the windowing approach achieves high classification accuracy and preserves the spatial properties of HSIs. 
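The window-based partitioning step can be sketched as follows. The toy cube dimensions and function names are illustrative, and a simple sample-weighted average stands in for the Kalman-filter merge described in the abstract:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy hyperspectral cube: 40 x 40 pixels, 20 spectral bands.
cube = rng.standard_normal((40, 40, 20))
pixels = cube.reshape(-1, cube.shape[-1])

def windowed_covariance(pixels, n_windows):
    """Partition the pixels spatially into windows, compute one covariance
    matrix per window, then merge them into a single matrix (here a
    sample-weighted average; the thesis uses a Kalman-filter combination)."""
    windows = np.array_split(pixels, n_windows)
    covs = [np.cov(w, rowvar=False) for w in windows]
    weights = np.array([len(w) for w in windows], dtype=float)
    weights /= weights.sum()
    return sum(wt * c for wt, c in zip(weights, covs))

cov = windowed_covariance(pixels, n_windows=4)
print(cov.shape)
```

The merged bands-by-bands covariance matrix can then feed a standard dimension-reduction step, while the per-window computation keeps spatially local statistics from being washed out.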
For the proposed feature extraction techniques, Support Vector Machine (SVM) and Neural Network (NN) classification techniques are employed, and the performances of these two classifiers are compared. All proposed FE techniques are also shown to outperform other state-of-the-art approaches.
Item Investigation of virtual learning behaviour in an Eastern Cape high school biology course.(2003) Kavuma, Henry.; Yates, Steven.Transformation in education over the decades has failed to keep abreast of the rapidly advancing technological environment of modern society. This implies that the curricula, learning paradigms and tools employed by educational institutions are not in sync with the technologically oriented lifestyle of modern society. Learners are therefore unable to apply and assimilate their daily life experiences into the learning process. This disparity warrants radical transformation in education, so as to furnish an education system in which learners can construct their knowledge on the basis of pre-existing ideas and experiences. However, any transformation in the education approach should be complemented by the adoption of appropriate learning environments and paradigms that can capitalize on learners' life experiences and elicit the appropriate learning behaviour and attitudes for effective, life-long learning. Much of the literature reviewed affirms the efficacy of virtual learning environments as media that can facilitate effective learner-centred electronic learning suitable for modern society. They are asserted as liberators of learning with respect to instructivist ideals, information access and the confines of the physical classroom. This is confirmed by the findings of this research, which are generally in favour of the virtual learning environment's ability to enhance the learning experiences of learners but remain inconclusive on their learning outcomes.