
Masters Degrees (Computer Science)



Recent Submissions

Now showing 1 - 20 of 85
  • Item
    Solar flare recurrence prediction & visual recognition.
    (2024) Mngomezulu, Mangaliso Moses.; Gwetu, Mandlenkosi Victor.; Fonou-Dombeu, Jean Vincent.
    Solar flares are intense outbursts of radiation observable in the photosphere. The radiation flux is measured in W/m². Solar flares can kill astronauts, disrupt electrical power grids, and interrupt satellite-dependent technologies; they threaten human survival and the efficiency of technology. The reliability of solar flare prediction models is often undermined by the stochastic nature of solar flare occurrence, as shown in previous studies. The Geostationary Operational Environmental Satellite (GOES) system classifies solar flares based on their radiation flux. This study investigated how Recurrent Neural Network (RNN) models compare to their ensembles when predicting flares that emit at least 10⁻⁶ W/m² of radiation flux, known as ≥C class flares. A Long Short-Term Memory (LSTM) model and a Simple RNN homogeneous ensemble achieved similar performance, with a tied True Skill Statistic (TSS) score of 70 ± 1.5%. Calibration curves showed that ensembles are more reliable. The balanced accuracies of the Simple RNN ensemble and the LSTM are both 85%, with F1-scores of 79% and 77% respectively. Furthermore, this study proposed a framework that shows how objective function reparameterization can be used to improve binary (≥C or
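As an aside not drawn from the dissertation, the True Skill Statistic reported above has a standard definition, TSS = TPR − FPR, which can be sketched as follows (the confusion-matrix counts are hypothetical):

```python
def true_skill_statistic(tp, fn, fp, tn):
    """TSS = TPR - FPR; ranges from -1 (always wrong) to 1 (perfect)
    and, unlike plain accuracy, is insensitive to class imbalance."""
    tpr = tp / (tp + fn)  # sensitivity: fraction of real flares caught
    fpr = fp / (fp + tn)  # false alarm rate on non-flare periods
    return tpr - fpr

# Hypothetical confusion matrix for >=C flare prediction:
print(round(true_skill_statistic(tp=70, fn=30, fp=10, tn=90), 2))  # 0.6
```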
  • Item
    Hybrid genetic optimisation for quantum feature map design.
    (2024) Pellow-Jarman, Rowan Martin.; Pillay, Anban Woolaganathan.; Ilya, Sinayskiy.; Petruccione, Francesco.
    Good feature maps are crucial for machine learning kernel methods to effectively map non-linearly separable input data into a higher-dimensional feature space, thus allowing the data to be linearly separable in feature space. Recent works have proposed automating the task of quantum feature map circuit design with methods such as variational ansatz parameter optimization and genetic algorithms. A problem commonly faced by genetic algorithm methods is the high cost of computing the genetic cost function. To mitigate this cost, this work investigates the suitability of two metrics as alternatives to test set classification accuracy, which has been applied successfully as a genetic algorithm cost function for quantum feature map design in previous work. The first metric is kernel-target alignment, which has previously been used as a training metric in quantum feature map design by variational ansatz training. Kernel-target alignment is faster to evaluate than test set accuracy and does not require any data points to be reserved from the training set for its evaluation. The second metric is an estimation of kernel-target alignment which further accelerates the genetic fitness evaluation by an adjustable constant factor. The second aim of this work is to address the limited gate parameter choice available to the genetic algorithm. This is done by training the parameters of the quantum feature map circuits produced in the final generation of the genetic algorithm using COBYLA to improve either kernel-target alignment or root mean squared error. This hybrid approach is intended to complement the genetic algorithm structure optimization approach by improving the feature maps without increasing their size.
Eight new approaches are compared to the accuracy optimization approach across nine varied binary classification problems from the UCI machine learning repository, demonstrating that kernel-target alignment and its approximation produce feature map circuits enabling comparable accuracy to the original approach, with larger margins on training data that improve further with variational training.
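As a hedged illustration (not code from the thesis), kernel-target alignment for labels y ∈ {−1, +1} is the normalised Frobenius inner product between the kernel matrix K and the ideal kernel yyᵀ:

```python
import numpy as np

def kernel_target_alignment(K, y):
    """Alignment <K, yy^T>_F / (||K||_F * ||yy^T||_F); 1.0 means the
    kernel's similarity structure matches the labels perfectly."""
    Y = np.outer(y, y)                      # ideal kernel: +1 same class, -1 otherwise
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

# Toy check: a kernel built directly from the labels has alignment 1.
y = np.array([1, 1, -1, -1])
K = np.outer(y, y).astype(float)
print(kernel_target_alignment(K, y))  # 1.0
```

Because it needs only the training kernel matrix and labels, no held-out test points are required, which is the speed advantage the abstract describes.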
  • Item
    Application of ELECTRE algorithms in ontology selection.
    (2022) Sooklall, Ameeth.; Fonou-Dombeu, Jean Vincent.
    The field of artificial intelligence (AI) is expanding at a rapid pace. Ontology and the field of ontological engineering are invaluable to AI, as they provide AI the ability to capture and express complex knowledge and data in a form that supports computation, inference, reasoning, and dissemination. Accordingly, the research and applications of ontology have become increasingly widespread in recent years. However, due to the complexity involved in ontological engineering, users are encouraged to reuse existing ontologies rather than create ontologies de novo. Reuse, however, has its own difficulty: selecting appropriate ontologies is a complex task, as engineers and users may find it hard to analyse and comprehend ontologies. It is therefore crucial that techniques and methods be developed to reduce the complexity of ontology selection for reuse. Essentially, ontology selection is a Multi-Criteria Decision-Making (MCDM) problem, as there are multiple ontologies to choose from whilst considering multiple criteria. However, there has been little use of MCDM methods in solving the problem of selecting ontologies for reuse. To tackle this problem, this study looks to a prominent branch of MCDM known as ÉLimination Et Choix Traduisant la REalité (ELECTRE). ELECTRE is a family of decision-making algorithms that model and provide decision support for complex decisions comprising many alternatives with many characteristics or attributes. The ELECTRE algorithms are extremely powerful and have been applied successfully in a myriad of domains; however, they have been studied only minimally with regard to ontology ranking and selection. In this study the ELECTRE algorithms were applied to aid the selection of ontologies for reuse; in particular, three applications of ELECTRE were studied.
    The first application focused on ranking ontologies according to their complexity metrics. The ELECTRE I, II, III, and IV models were applied to rank a dataset of 200 ontologies from the BioPortal Repository, with 13 complexity metrics used as attributes. Secondly, the ELECTRE Tri model was applied to classify the 200 ontologies into three classes according to their complexity metrics. A preference-disaggregation approach was taken, and a genetic algorithm was designed to infer the thresholds and parameters for the ELECTRE Tri model. In the third application a novel ELECTRE model, named ZPLTS-ELECTRE II, was developed, combining the concept of the Z-Probabilistic Linguistic Term Set (ZPLTS) with the traditional ELECTRE II algorithm. The ZPLTS-ELECTRE II model enables multiple decision-makers to evaluate ontologies (group decision-making) and allows them to express their evaluations in natural language. The model was applied to rank nine ontologies according to five complexity metrics and five qualitative usability metrics. The results of all three applications were analysed, compared, and contrasted in order to understand the applicability and effectiveness of the ELECTRE algorithms for the task of selecting ontologies for reuse. These results offer interesting perspectives and insights for the selection and reuse of ontologies.
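For orientation, the core building block shared by the ELECTRE family is the concordance index: the weighted proportion of criteria on which one alternative is at least as good as another. A minimal sketch, with hypothetical ontology scores and weights (higher taken as better):

```python
def concordance(a, b, weights):
    """ELECTRE-style concordance index: weighted share of criteria on
    which alternative a is at least as good as alternative b."""
    total = sum(weights)
    return sum(w for x, y, w in zip(a, b, weights) if x >= y) / total

# Hypothetical normalised metric scores for two ontologies over 3 criteria:
ont_a = [0.8, 0.6, 0.9]
ont_b = [0.7, 0.7, 0.5]
print(concordance(ont_a, ont_b, weights=[2, 1, 1]))  # 0.75
```

In the full algorithms this index is combined with discordance tests and thresholds to build the outranking relation used for ranking or sorting.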
  • Item
    Blockchain-based security model for efficient data transmission and storage in cloudlet network resource environment.
    (2023) Masango, Nothile Clementine.; Ezugwu, Absalom El-Shamir.
    As mobile users’ service requirements increase, applications such as online games, virtual reality, and augmented reality demand more computation power. However, the current design of mobile devices and their associated innovations cannot accommodate such applications because of their limitations in storage, computing power, and battery life. As a result, mobile devices offload their tasks to remote cloud environments. Moreover, due to the architecture of cloud computing, where the cloud is located at the core of the network, applications experience challenges such as latency. This is a disadvantage for real-time online applications. Hence, the edge-computing-based cloudlet environment was introduced to bring resources closer to the end user, with an enhanced network quality of service. Although there is merit in deploying cloudlets at the edge of the network, closer to the user, this makes them susceptible to attacks. For this newly introduced technology to be fully adopted, effective security measures need to be incorporated into the current cloudlet computing platform. This study proposes blockchain technology as a security model for securing the data shared between mobile devices and cloudlets, with an agent layer introduced between the mobile device layer and the cloudlet layer. The implemented agent-based model uses a new consensus mechanism, proof of trust, in which trust and experience are determined by the number of coins each node (cloudlet) possesses, to select two miners. These miners participate in message verification using an elliptic curve scheme, and if they do not reach consensus, a third miner is selected to resolve the conflict. Any miner with a wrong verification loses all its coins; in this way trust and experience are controlled. The proposed solution has proven more efficient in terms of security and network performance than existing state-of-the-art implementations.
  • Item
    Gender classification using facial components.
    (2018) Bayana, Mayibongwe Handy.; Viriri, Serestina.; Angulu, Raphael.
    Gender classification is very important in facial analysis as it can be used as input into a number of systems, such as face recognition. Humans are able to classify gender with great accuracy; however, passing this ability to machines is a complex task because of many variables, such as lighting. For the purpose of this research, gender classification is approached as a binary problem involving the two classes male and female. Two datasets are used in this research: the FG-NET dataset and the Pilot Parliaments Benchmark (PPB) dataset. Two appearance-based feature extractors are used, the LBP and LDP, with the Active Shape Model (ASM) included by fusion. The classifiers used are a Support Vector Machine with a Radial Basis Function kernel and an Artificial Neural Network with backpropagation. On FG-NET an average detection accuracy of 90.6% was achieved, against 87.5% on the PPB. Gender is then detected from individual facial components such as the nose and eyes. The forehead recorded the highest accuracy with 92%, followed by the nose with 90%, the cheeks with 89.2%, and the eyes with 87%; the mouth recorded the lowest accuracy, 75%. Feature fusion is then carried out to improve classification accuracies, especially those of the mouth and eyes, which had the lowest accuracies. The eyes, with an accuracy of 87%, are fused with the forehead, at 92%, and the resulting accuracy increases to 93%. The mouth, with the lowest accuracy of 75%, is fused with the nose, which has an accuracy of 90%, and the resulting accuracy is 87%. These fusions, carried out by addition, showed improved results. Fusion is then carried out between appearance-based and shape-based features. On the FG-NET dataset the LBP and LDP achieve accuracies of 85.33% and 89.53% respectively, with the PPB recording 83.13% and 89.3% for LBP and LDP respectively.
As expected, and as shown by previous researchers, the LDP obtains higher classification accuracies than the LBP because it uses gradient information rather than pixel intensity. The vectors of the LDP and LBP are then fused with that of the ASM, dimensionality reduction is carried out, and fusion is performed by addition. On the PPB dataset, fusion of LDP and ASM records 81.56% and 94.53%, with FG-NET recording 89.53%.
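As a rough illustration of the LBP descriptor used above (not the dissertation's implementation), the basic operator thresholds each pixel's 3×3 neighbourhood at the centre value and reads the eight results as a binary code:

```python
import numpy as np

def lbp_code(patch):
    """Basic 3x3 LBP: set a bit for each neighbour >= the centre pixel,
    reading the neighbours clockwise from the top-left corner."""
    c = patch[1, 1]
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(1 << (7 - i) for i, v in enumerate(neighbours) if v >= c)

patch = np.array([[6, 5, 2],
                  [7, 6, 1],
                  [9, 8, 7]])
print(lbp_code(patch))  # 0b10001111 = 143
```

A histogram of these codes over image regions gives the texture feature vector; LDP works similarly but applies directional (Kirsch) gradient masks before encoding.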
  • Item
    A comparative study of metaheuristics for blood assignment problem.
    (2018) Govender, Prinolan.; Ezugwu, Absalom El-Shamir.
    The Blood Assignment Problem (BAP) is a real-world, NP-hard combinatorial optimization problem. The study of the BAP is significant due to the continuous demand for blood transfusion during medical emergencies. However, the formulation of this problem faces various challenges, ranging from managing critical blood shortages and limited shelf life to the blood type incompatibility that constrains the random transfusion of blood to patients. The transfusion of incompatible blood types between donor and patient can lead to adverse side effects for the patient. Usually, the sudden need for blood units arises as a result of unforeseen trauma that requires urgent medical attention. This can interrupt the supply of blood units and may result in the blood bank importing additional blood products from external sources, thereby increasing its running cost and other risk factors associated with blood transfusion, with serious consequences for medical emergencies, running costs, and the supply of blood units. Taking these factors into consideration, this study implemented five global metaheuristic optimization algorithms to solve the BAP. Each of these algorithms was hybridized with a sustainable blood assignment policy that relates to South African blood banks. The objective of this study was to minimize blood product wastage, with emphasis on expiry and on reducing the amount of importation from external sources. Extensive computational experiments were conducted over six different datasets, and the results validate the reliability and effectiveness of each of the proposed algorithms. Results were analysed across three major aspects, namely the average levels of importation, expiry across a finite time period, and the computational time experienced by each of the metaheuristic algorithms.
The numerical results obtained show that the Particle Swarm Optimization algorithm was better in terms of computational time. Furthermore, none of the algorithms experienced any form of expiry within the allotted time frame. Moreover, the results also revealed that the Symbiotic Organism Search algorithm produced the lowest average result for importation; therefore, it was considered the most reliable and proficient algorithm for the BAP.
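For context on the compatibility constraint the BAP must respect, the standard ABO/Rh red-cell rules can be sketched as a simple check (an illustrative helper, not code from the study):

```python
def compatible(donor, recipient):
    """True if the recipient can safely receive the donor's red cells
    under standard ABO/Rh compatibility rules, e.g. 'O-' or 'AB+'."""
    def parse(t):
        return t.rstrip('+-'), t.endswith('+')   # (ABO group, Rh positive?)

    d_abo, d_rh = parse(donor)
    r_abo, r_rh = parse(recipient)
    # ABO: every antigen the donor carries must be present in the recipient;
    # group O carries neither A nor B, so it can donate to anyone.
    abo_ok = d_abo == 'O' or all(ch in r_abo for ch in d_abo)
    # Rh: Rh-negative recipients must receive Rh-negative blood.
    rh_ok = r_rh or not d_rh
    return abo_ok and rh_ok

print(compatible('O-', 'AB+'))  # True  (universal red-cell donor)
print(compatible('A+', 'O-'))   # False
```

An assignment algorithm treats a pair with `compatible(...) == False` as an infeasible allocation, which is the hard constraint the metaheuristics above work within.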
  • Item
    An assessment of the component-based view for metaheuristic research.
    (2023) Achary, Thimershen.; Pillay, Anban Woolaganathan.; Jembere, Edgar.
    Several authors have recently pointed to a crisis within the metaheuristic research field, particularly the proliferation of metaphor-inspired metaheuristics. Common problems identified include using non-standard terminology, poor experimental practices, and, most importantly, the introduction of purportedly new algorithms that are only superficially different from existing ones. These issues make similarity and performance analysis, classification, and metaheuristic generation difficult for both practitioners and researchers. A component-based view of metaheuristics has recently been promoted to deal with these problems. A component-based view argues that metaheuristics are best understood in terms of their constituents or components. This dissertation presents three papers that are thematically centred on this view. The central problem for the component-based view is the identification of the components of a metaheuristic. The first paper proposes the use of taxonomies to guide the identification of metaheuristic components. We developed a general and rigorous method, TAXONOG-IMC, that takes as input an appropriate taxonomy and guides the user to identify components. The method is described in detail, an example application of the method is given, and an analysis of its usefulness is provided. The analysis shows that the method is effective and provides insights that are not possible without the proper identification of the components. The second paper argues for formal, mathematically sound representations of metaheuristics. It introduces and defends a formal representation that leverages the component-based view. The third paper demonstrates that a representation technique based on the component-based view is able to provide the basis for a similarity measure. This paper presents a method of measuring the similarity between two metaheuristic algorithms based on their representations as signal flow diagrams.
Our findings indicate that the component based view of metaheuristics provides valuable insights and allows for more robust analysis, classification and comparison.
  • Item
    Policy optimisation and generalisation for reinforcement learning agents in sparse reward navigation environments.
    (2021) Jeewa, Asad.; Pillay, Anban Woolaganathan.; Jembere, Edgar.
    Sparse reward environments are prevalent in the real world and training reinforcement learning agents in them remains a substantial challenge. Two particularly pertinent problems in these environments are policy optimisation and policy generalisation. This work is focused on the navigation task in which agents learn to navigate past obstacles to distant targets and are rewarded on completion of the task. A novel compound reward function, Directed Curiosity, a weighted sum of curiosity-driven exploration and distance-based shaped rewards, is presented. The technique allowed for faster convergence and enabled agents to gain more rewards than agents trained with the distance-based shaped rewards or curiosity alone. However, it resulted in policies that were highly optimised for the specific environment that the agents were trained on, and therefore did not generalise well to unseen environments. A training curriculum was designed for this purpose and resulted in the transfer of knowledge, when using the policy “as-is”, to unseen testing environments. It also eliminated the need for additional reward shaping and was shown to converge faster than curiosity-based agents. Combining curiosity with the curriculum provided no meaningful benefits and exhibited inferior policy generalisation.
  • Item
    A patch-based convolutional neural network for localized MRI brain segmentation.
    (2020) Vambe, Trevor Constantine.; Viriri, Serestina.; Gwetu, Mandlenkosi Victor.
    Accurate segmentation of the brain is an important prerequisite for effective diagnosis, treatment planning, and patient monitoring. The use of manual Magnetic Resonance Imaging (MRI) segmentation in treating brain medical conditions is slowly being phased out in favour of fully-automated and semi-automated segmentation algorithms, which are more efficient and objective. Manual segmentation has, however, remained the gold standard for supervised training in image segmentation. The advent of deep learning ushered in a new era in image segmentation, object detection, and image classification. The convolutional neural network has contributed the most to the success of deep learning models. Also, the availability of increased training data when using Patch Based Segmentation (PBS) has facilitated improved neural network performance. On the other hand, even though deep learning models have achieved successful results, they still suffer from over-segmentation and under-segmentation due to several reasons, including visually unclear object boundaries. Even though there have been significant improvements, there is still room for better results as all proposed algorithms still fall short of 100% accuracy rate. In the present study, experiments were carried out to improve the performance of neural network models used in previous studies. The revised algorithm was then used for segmenting the brain into three regions of interest: White Matter (WM), Grey Matter (GM), and Cerebrospinal Fluid (CSF). Particular emphasis was placed on localized component-based segmentation because both disease diagnosis and treatment planning require localized information, and there is a need to improve the local segmentation results, especially for small components. In the evaluation of the segmentation results, several metrics indicated the effectiveness of the localized approach. 
The localized segmentation resulted in the accuracy, recall, precision, null-error, false-positive rate, true-positive rate, and F1-score changing by 1.08%, 2.52%, 5.43%, 16.79%, -8.94%, 8.94%, and 3.39% respectively. Also, when the algorithm was compared against state-of-the-art algorithms, the proposed algorithm had an average predictive accuracy of 94.56%, while the next best algorithm had an accuracy of 90.83%.
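For reference, the precision, recall, and F1-score metrics reported above follow their standard definitions over per-class voxel counts; a minimal sketch with hypothetical counts (not the study's data):

```python
def precision_recall_f1(tp, fp, fn):
    """Standard per-class metrics from true-positive, false-positive,
    and false-negative voxel counts for one tissue class."""
    precision = tp / (tp + fp)          # of voxels labelled WM, how many are WM
    recall = tp / (tp + fn)             # of true WM voxels, how many were found
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical white-matter voxel counts for one scan:
p, r, f = precision_recall_f1(tp=900, fp=50, fn=100)
print(round(p, 3), round(r, 3), round(f, 3))
```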
  • Item
    Irenbus - a real-time machine learning based public transport management system.
    (2020) Skhosana, Menzi.; Ezugwu, Absalom El-Shamir.
    The era of Big Data and the Internet of Things is upon us, and it is time for developing countries to take advantage of these ideas and apply them pragmatically to solve real-world problems. As the saying goes, "data is the new oil": we believe that data can power the transportation sector the same way traditional oil does. Many problems faced daily by the public transportation sector can be resolved or mitigated through the collection of appropriate data and the application of predictive analytics. This work is primarily focused on problems affecting public transport buses. These include the unavailability of real-time information to commuters about the current status of a given bus or travel route, and the inability of bus operators to efficiently assign available buses to routes for a given day based on the expected demand for a particular route. A cloud-based system was developed to address the aforementioned problems. The system is composed of two subsystems: a mobile application for commuters that provides the current location and availability of a given bus and other related information, and which can also be used by drivers so that the bus can be tracked in real time and ridership information collected throughout the day; and a web application that serves as a dashboard for bus operators to gain insights from the collected ridership data. These were developed using the Firebase Backend-as-a-Service (BaaS) platform and integrated with a machine learning model trained on collected ridership data to predict the daily ridership for a given route. Our novel system provides a holistic solution to problems in the public transport sector, as it is highly scalable, cost-efficient, and takes full advantage of currently available technologies in comparison with previous work on this topic.
  • Item
    Optimized deep learning model for early detection and classification of lung cancer on CT images.
    (2022) Mohamed, Tehnan Ibrahem Alhasseen.; Ezugwu, Absalom El-Shamir.; Oyelade, Olaide Nathaniel.
    Recently, researchers have shown increased interest in the early diagnosis and detection of lung cancer using the characteristics of computed tomography (CT) images. Accurate classification of lung cancer helps the physician choose the targeted treatment, reducing mortality and, as a result, supporting human survival. Several studies have been carried out on lung cancer detection using convolutional neural network (CNN) models. However, it remains a challenge to improve model performance. Moreover, CNN models have limitations that affect their performance, including choosing the optimal architecture, selecting suitable model parameters, and picking the best values for weights and biases. To address the problem of selecting the best combination of weights and biases needed for the classification of lung cancer in CT images, this study proposes a hybrid of the Ebola optimization search algorithm (EOSA) and the CNN model. We propose a hybrid deep learning model with preprocessing features for lung cancer classification using the publicly accessible Iraq-Oncology Teaching Hospital/National Center for Cancer Diseases (IQ-OTH/NCCD) dataset. The proposed EOSA-CNN hybrid model was trained using 80% of the cases to obtain the optimal configuration, while the remaining 20% was used for validation. We also compared the proposed model with five similar hybrid algorithms and the traditional CNN. The results indicated that EOSA-CNN scored a classification accuracy of 0.9321. Furthermore, EOSA-CNN achieved specificities of 0.7941, 0.97951, and 0.9328, and sensitivities of 0.9038, 0.13333, and 0.9071 for the normal, benign, and malignant cases, respectively. This confirms that the hybrid algorithm provides a good solution for the classification of lung cancer.
  • Item
    Possible Models Diagrams - a new approach to teaching propositional logic.
    (1994) Clarke, Matthew Charles.; Dempster, Robert.; Grayson, Diane J.
    Abstract available in PDF.
  • Item
    Addressing traffic congestion and throughput through optimization.
    (2021) Iyoob, Mohamed Zaire.; van Niekerk, Brett.
    Traffic congestion experienced in port precincts has become prevalent in recent years, both in South Africa and internationally [1, 2, 3]. In addition to the environmental impact of the air pollution this causes, the economic effects weigh heavily on profit margins through added fuel costs and time wastage. Even though many factors contribute to the congestion experienced in port precincts and other areas, operational inefficiencies due to slow productivity and a lack of handling equipment to service trucks in port areas are a major contributor [4, 5]. While there are several optimisation approaches to addressing traffic congestion, such as Queuing Theory [6], Genetic Algorithms [7], Ant Colony Optimisation [8], and Particle Swarm Optimisation [9], traffic congestion is modelled on congested queues, making queuing theory the most suitable for this problem. Queuing theory is a discipline of optimisation that studies the dynamics of queues to determine a more optimal route to reduce waiting times. The use of optimisation to address the root cause of port traffic congestion has been lacking, with several studies focused on specific traffic zones that only address the symptoms. In addition, research into traffic around port precincts has been limited to the road side, with proposed solutions focusing on scheduling and appointment systems [25, 56], or to the sea side, focusing on managing vessel traffic congestion [30, 31, 58]. The aim of this dissertation is to close this gap through the novel design and development of Caudus, a smart queue solution that addresses traffic congestion and throughput through optimisation. The name "CAUDUS" is derived as an anagram with Latin origins meaning "remove truck congestion". Caudus has three objective functions to address congestion in the port precinct, and by extension congestion in warehousing and freight logistics environments, viz. Preventive, Reactive, and Predictive.
The preventive objective function employs Little's rule [14] to derive the algorithm for preventing congestion. Acknowledging that congestion is not always avoidable, the reactive objective function addresses the problem by leveraging Caudus' integration capability with Intelligent Transport Systems [65] in conjunction with other road-user network solutions. The predictive objective function aims to keep the environment incident-free and provides early-warning detection of possible exceptions in traffic situations that may lead to congestion. This is achieved using the algorithms derived in this study, which identify bottleneck symptoms in one traffic zone when the root cause exists in an adjoining traffic area. The Caudus simulation was developed in this study to test the derived algorithms against the different congestion scenarios. The simulation utilises HTML5 and JavaScript in the front-end GUI, with a SQL code base in the back end. The entire simulation process is triggered by a series of multi-threaded batch programs that mimic the real world by ensuring process independence for the various simulation activities. The results from the simulation demonstrate a significant reduction in the duration of congestion experienced in the port precinct. They also show a reduction in the throughput time of the trucks serviced at the port, demonstrating Caudus' novel contribution to addressing traffic congestion and throughput through optimisation. These results were also published and presented at the International Conference on Artificial Intelligence, Big Data, Computing and Data Communication Systems (icABCD 2021) under the title "CAUDUS: An Optimisation Model to Reducing Port Traffic Congestion" [84].
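As background (not the dissertation's algorithm), Little's rule states L = λW: in a stable queueing system, the average number of items in the system equals the arrival rate multiplied by the average time each item spends in the system. A minimal sketch with hypothetical port-gate figures:

```python
def littles_law_queue_length(arrival_rate, avg_time_in_system):
    """Little's rule L = lambda * W: average number of trucks in the
    system, given arrivals per hour and average hours per truck."""
    return arrival_rate * avg_time_in_system

# Hypothetical port gate: 12 trucks/hour arriving, 0.5 h average turnaround.
print(littles_law_queue_length(12, 0.5))  # 6.0 trucks in the system on average
```

A preventive rule can invert the same relation: given a target maximum queue length L, the admissible arrival rate is λ = L / W, which is the kind of threshold a gate-control algorithm can enforce.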
  • Item
    Network intrusion detection using genetic programming.
    (2018) Chareka, Tatenda Herbert.; Pillay, Nelishia.
    Network intrusion detection is a real-world problem that involves detecting intrusions on a computer network. Detecting whether a network connection is intrusive or non-intrusive is essentially a binary classification problem. However, intrusive connections can be categorised into a number of network attack classes, and the task of associating an intrusion with a particular attack type is multiclass classification. A number of artificial intelligence techniques have been used for network intrusion detection, including Evolutionary Algorithms (EAs). This thesis investigates the application of three evolutionary algorithms, namely Genetic Programming (GP), Grammatical Evolution (GE), and Multi-Expression Programming (MEP), in the network intrusion detection domain; grammatical evolution and multi-expression programming are considered variants of GP. A comparison of the effectiveness of classifiers evolved by the three EAs is performed on the publicly available KDD99 dataset, and the effectiveness of a number of fitness functions is evaluated. The results indicate that binary classifiers evolved using standard genetic programming outperformed classifiers evolved using grammatical evolution and multi-expression programming. For evolving multiclass classifiers, the different fitness functions produced classifiers with different characteristics, with some classifiers achieving higher detection rates for specific network intrusion attacks than for others. The findings also indicate that multiclass classifiers evolved using multi-expression programming and genetic programming achieved high detection rates compared to classifiers evolved using grammatical evolution.
  • Item
    An evaluation of depth camera-based hand pose recognition for virtual reality systems.
    (2018) Clark, Andrew William.; Moodley, Deshendran.; Pillay, Anban Woolaganathan.
    Camera-based hand gesture recognition for interaction in virtual reality systems promises to provide a more immersive and less distracting means of input than the usual hand-held controllers. Due to a lack of research in this area, it is unknown whether a camera can effectively distinguish hand poses made in a virtual reality environment. This research explores and measures the effectiveness of static hand pose input with a depth camera, specifically the Leap Motion controller, for user interaction in virtual reality applications. A pose set was derived by analyzing existing gesture taxonomies and Leap Motion controller-based virtual reality applications, and a dataset of these poses was constructed from data captured from twenty-five participants. Experiments on the dataset utilizing three popular machine learning classifiers were not able to classify the poses with sufficiently high accuracy, primarily due to occlusion issues affecting the input data. Therefore, a significantly smaller subset was empirically derived using a novel algorithm, which utilized a confusion matrix from the machine learning experiments as well as a table of Hamming distances between poses. This improved the recognition accuracy to above 99%, making this set more suitable for real-world use. It is concluded that while camera-based pose recognition can be reliable on a small set of poses, finger occlusion hinders the use of larger sets. Thus, alternative approaches, such as multiple input cameras, should be explored as a potential solution to the occlusion problem.
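To illustrate the Hamming-distance table mentioned above (using a hypothetical encoding, not the study's), poses can be encoded as binary finger-state vectors, where larger distances suggest poses less likely to be confused with one another:

```python
def hamming_distance(pose_a, pose_b):
    """Number of positions at which two equal-length binary
    finger-state encodings differ (1 = extended, 0 = curled)."""
    return sum(a != b for a, b in zip(pose_a, pose_b))

# Hypothetical 5-bit encodings: thumb, index, middle, ring, pinky.
fist      = [0, 0, 0, 0, 0]
point     = [0, 1, 0, 0, 0]   # index finger extended
open_hand = [1, 1, 1, 1, 1]

print(hamming_distance(fist, point))      # 1 -- easily confused pair
print(hamming_distance(fist, open_hand))  # 5 -- well-separated pair
```

A subset-selection algorithm can then prefer pose sets whose minimum pairwise distance is large, discarding near-neighbour poses that the classifier confuses.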
  • Item
    Hybrid component-based face recognition.
    (2018) Gumede, Andile Martin.; Viriri, Serestina.; Gwetu, Mandlenkosi.
    Facial recognition (FR) is a trusted biometric method for authentication. Compared to other biometrics such as the signature, which can be compromised, facial recognition is non-intrusive and can be acquired at a distance in a concealed manner. It plays a significant role in conveying the identity of a person in social interaction, and its performance largely depends on a variety of factors such as illumination, facial pose, expression, age span, hair, facial wear, and motion. In the light of these considerations, this dissertation proposes a hybrid component-based approach that seeks to utilise any successfully detected components. This research proposes a facial recognition technique that recognizes faces at the component level. It employs the texture descriptors Grey-Level Co-occurrence Matrix (GLCM), Gabor filters, Speeded-Up Robust Features (SURF) and Scale-Invariant Feature Transform (SIFT), and the shape descriptor Zernike moments. The advantage of using the texture attributes is their simplicity; however, they cannot fully characterise the face, hence the Zernike moments descriptor was used to compute the shape properties of the selected facial components. These descriptors are effective feature representations of facial components and are robust to illumination and pose changes. Experiments were performed on four state-of-the-art facial databases, FERET, FEI, SCface and CMU, and an Error-Correcting Output Code (ECOC) scheme was used for classification. The results show that component-based facial recognition is more effective than whole-face recognition, with the proposed methods achieving a recognition accuracy of 98.75%. This approach performs well compared to other component-based facial recognition approaches.
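Of the descriptors listed, the GLCM is simple enough to sketch directly: it counts how often a pixel of grey level i co-occurs with a pixel of grey level j at a fixed spatial offset, and texture features such as contrast are then read off the normalised matrix. The toy patch and the 4-level quantisation below are illustrative assumptions, not data from the dissertation.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=4):
    """Grey-Level Co-occurrence Matrix: counts how often grey level i
    appears at offset (dx, dy) from grey level j, then normalises the
    counts into joint probabilities."""
    m = np.zeros((levels, levels), dtype=float)
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(p):
    """GLCM contrast feature: sum over all (i, j) of (i - j)^2 * p(i, j).
    Large values mean many co-occurrences of dissimilar grey levels."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

# Toy 4-level "facial component" patch (values illustrative only).
patch = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [2, 2, 3, 3],
    [2, 2, 3, 3],
])
p = glcm(patch)
print(contrast(p))
```

In practice one computes several GLCM properties (contrast, energy, homogeneity, correlation) over each detected facial component and concatenates them into the component's texture feature vector.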
  • Item
    An analysis of algorithms to estimate the characteristics of the underlying population in Massively Parallel Pyrosequencing data.
    (2011) Ragalo, Anisa.; Murrell, Hugh Crozier.
    Massively Parallel Pyrosequencing (MPP) is a next-generation DNA sequencing technique that is becoming ubiquitous because it is considerably faster and cheaper, and produces a higher throughput, than long-established sequencing techniques like Sanger sequencing. The MPP methodology is also much less labor-intensive than Sanger sequencing. Indeed, MPP has become a preferred technology in experiments that seek to determine the distinctive genetic variation present in homologous genomic regions. However, a problem arises in the interpretation of the reads derived from an MPP experiment. Specifically, MPP reads are characteristically error-prone. This means that it becomes difficult to separate the authentic genomic variation underlying a set of MPP reads from variation that is a consequence of sequencing error. The difficulty of inferring authentic variation is further compounded by the fact that MPP reads are also characteristically short. As a consequence, the correct alignment of an MPP read with respect to the genomic region from which it was derived may not be intuitive. To this end, several computational algorithms that seek to correctly align MPP reads and remove their non-authentic genetic variation have been proposed in the literature. We refer to the removal of non-authentic variation from a set of MPP reads as error correction. Computational algorithms that process MPP data are classified as sequence-space algorithms and flow-space algorithms. Sequence-space algorithms work with MPP sequencing reads as raw data, whereas flow-space algorithms work with MPP flowgrams as raw data. A flowgram is an intermediate product of MPP, which is subsequently converted into a sequencing read. In theory, flow-space computations should produce more accurate results than sequence-space computations. In this thesis, we make a qualitative comparison of the distinct solutions delivered by selected MPP read alignment algorithms. 
Further, we make a qualitative comparison of the distinct solutions delivered by selected MPP error correction algorithms. Our comparisons between different algorithms within the same niche are facilitated by the design of a platform for MPP simulation, PyroSim. PyroSim is designed to encapsulate the error rate that is characteristic of MPP. We implement a selection of sequence-space and flow-space alignment algorithms in a software package, MPPAlign. We derive a quality ranking for the distinct algorithms implemented in MPPAlign through a series of qualitative comparisons. Further, we implement a selection of sequence-space and flow-space error correction algorithms in a software package, MPPErrorCorrect. Similarly, we derive a quality ranking for the distinct algorithms implemented in MPPErrorCorrect through a series of qualitative comparisons. Contrary to the view expressed in the literature, which postulates that flow-space computations are more accurate than sequence-space computations, we find that in general the sequence-space algorithms that we implement outperform the flow-space algorithms. We surmise that flow-space is a more sensitive domain for conducting computations and can only yield consistently good results under stringent quality control measures. In sequence-space, however, we find that base calling, the process that converts flowgrams (flow-space raw data) into sequencing reads (sequence-space raw data), leads to more reliable computations.
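The flow-space to sequence-space conversion (base calling) mentioned above can be sketched minimally: each flow in the cycle reports a signal roughly proportional to the homopolymer length of that nucleotide, and a naive caller rounds the signal to the nearest integer. The flow order, intensities, and rounding rule below are a deliberately simplified assumption; real base callers, and the characteristic over- and under-calling of long homopolymers, are far more subtle.

```python
FLOW_ORDER = "TACG"  # one common pyrosequencing flow cycle

def call_bases(flowgram, flow_order=FLOW_ORDER):
    """Naive base caller: round each flow intensity to the nearest
    integer homopolymer length and emit that many copies of the
    nucleotide being flowed. A zero-intensity flow emits nothing."""
    read = []
    for i, signal in enumerate(flowgram):
        length = round(signal)
        read.append(flow_order[i % len(flow_order)] * length)
    return "".join(read)

# Noisy flowgram whose ideal values would be [2, 1, 1, 3]:
print(call_bases([2.1, 0.9, 1.05, 2.8]))
```

Because rounding discards the fractional signal, information available to flow-space algorithms is lost at this step, which is exactly why flow-space computation is expected, in theory, to be the more accurate domain.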
  • Item
    Liver segmentation using 3D CT scans.
    (2018) Hiraman, Anura.; Viriri, Serestina.; Gwetu, Mandlenkosi.
    Abstract available in PDF file.
  • Item
    Improved techniques for phishing email detection based on random forest and firefly-based support vector machine learning algorithms.
    (2014) Andronicus, Ayobami Akinyelu.; Adewumi, Aderemi Oluyinka.
    Electronic fraud is one of the major challenges faced by the vast majority of online internet users today. Curbing this menace is not an easy task, primarily because of the rapid rate at which fraudsters change their mode of attack. Many techniques have been proposed in the academic literature to handle e-fraud, including blacklist, whitelist, and machine learning (ML) based techniques. Among all these techniques, ML-based techniques have proven to be the most efficient, because of their ability to detect new fraudulent attacks as they appear. There are three commonly perpetrated electronic frauds, namely email spam, phishing and network intrusion. Among these three, the greatest financial loss has been incurred owing to phishing attacks. This research investigates and reports the use of ML and nature-inspired techniques in the domain of phishing detection, with the foremost objective of developing a dynamic and robust phishing email classifier with improved classification accuracy and reduced processing time. Two approaches to phishing email detection are proposed, and two email classifiers are developed based on the proposed approaches. In the first approach, a random forest algorithm is used to construct decision trees, which are, in turn, used for email classification. The second approach introduces a novel ML method that hybridizes the firefly algorithm (FFA) and the support vector machine (SVM). The hybridized method consists of three major stages: a feature extraction phase, a hyper-parameter selection phase and an email classification phase. In the feature extraction phase, the feature vectors of all the features described in Section 3.6 are extracted and saved in a file for easy access. In the second stage, a novel hyper-parameter search algorithm, developed in this research, is used to generate an exponentially growing sequence of paired C and Gamma (γ) values. FFA is then used to optimize the generated SVM hyper-parameters and to find the best hyper-parameter pair. 
Finally, in the third phase, SVM is used to carry out the classification. This new approach addresses the problem of hyper-parameter optimization in SVM and, in turn, improves the classification speed and accuracy of SVM. Using two publicly available email datasets, experiments are performed to evaluate the performance of the two proposed phishing email detection techniques. During the evaluation of each approach, a set of features well suited to phishing detection is extracted from the training dataset and used to construct the classifiers. Thereafter, the trained classifiers are evaluated on the test dataset. The evaluations produced very good results. The RF-based classifier yielded a classification accuracy of 99.70%, an FP rate of 0.06% and an FN rate of 2.50%. The hybridized classifier (known as FFA_SVM) produced a classification accuracy of 99.99%, an FP rate of 0.01% and an FN rate of 0.00%.
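The hyper-parameter selection stage can be sketched in two parts: generating an exponentially growing sequence of candidate C and gamma values, and moving candidate "fireflies" toward the best-scoring pair. The exponent range, step size, and attraction coefficients below are illustrative assumptions; the actual FFA_SVM method scores each (C, γ) pair by SVM accuracy, which is omitted here.

```python
import random

def exp_grid(low_exp, high_exp, base=2):
    """Exponentially growing hyper-parameter sequence, e.g. 2^-5 ... 2^15,
    a common coarse search range for SVM C and gamma."""
    return [base ** e for e in range(low_exp, high_exp + 1)]

def firefly_step(pos, best, beta=0.5, alpha=0.1, rng=random.Random(0)):
    """One simplified firefly move in (log2 C, log2 gamma) space:
    a candidate is attracted toward the brightest (best-scoring)
    firefly, plus a small random perturbation."""
    return tuple(p + beta * (b - p) + alpha * (rng.random() - 0.5)
                 for p, b in zip(pos, best))

c_values = exp_grid(-5, 15)          # 21 candidate C values: 2^-5 ... 2^15
print(len(c_values), c_values[0], c_values[-1])
new_pos = firefly_step((0.0, -3.0), best=(4.0, -7.0))
print(new_pos)
```

In the full algorithm, each firefly's brightness would be the cross-validation accuracy of an SVM trained with its (C, γ) pair, and repeated moves concentrate the swarm around the best-performing region of the grid.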
  • Item
    An analysis of approaches for developing national health information systems : a case study of two sub-Saharan African countries.
    (2016) Mudaly, Thinasagree.; Moodley, D.; Pillay, Anban Woolaganathan.; Seebregts, Christopher.
    Health information systems in sub-Saharan African countries are currently characterized by significant fragmentation, duplication and limited interoperability. Incorporating these disparate systems into a coherent national health information system (NHIS) has the potential to improve operational efficiencies, decision-making and planning across the health sector. In a recent study, Coiera analysed several mature national health information systems in high-income countries and categorised the approaches for building them into a topology: top-down, bottom-up or middle-out. Coiera gave compelling arguments for countries to adopt a middle-out approach. Building national health information systems in sub-Saharan African countries poses unique and complex challenges, owing to the substantial differences between the socio-economic, political and health landscapes of these countries and those of high-income countries. Coiera's analysis did not consider the unique challenges faced by sub-Saharan African countries in building their systems. Furthermore, there is currently no framework for analysing high-level approaches for building NHIS. This makes it difficult to establish the benefits and applicability of Coiera's analysis for building NHIS in sub-Saharan African countries. The aim of this research was to develop and apply such a framework to determine which approach in Coiera's topology, if any, showed signs of being the most sustainable approach for building effective national health information systems in sub-Saharan African countries. The framework was developed through a literature analysis and validated by applying it in case studies of the development of national health information systems in South Africa and Rwanda. The result of applying the framework to the case studies was a synthesis of the current evolution of these systems, and an assessment of how well each approach in Coiera's topology supports key considerations for building them in typical sub-Saharan African countries. 
The study highlights the value of the framework for analysing sub-Saharan African countries in terms of Coiera’s topology, and concludes that, given the peculiar nature and evolution of national health information systems in sub-Saharan African countries, a middle-out approach can contribute significantly to building effective and sustainable systems in these countries, but its application in sub-Saharan African countries will differ significantly from its application in high income countries.