School of Engineering
Permanent URI for this community: https://hdl.handle.net/10413/6522
Browsing School of Engineering by Title
Now showing 1 - 20 of 1354
Item 3D modelling segmentation, quantification and visualisation of cardiovascular magnetic resonance images. (2014) Brijmohan, Yarish.; Mneney, Stanley Henry.; Rae, William Ian Duncombe.
Progress in technology in the field of magnetic resonance imaging (MRI) has provided medical experts with a tool to visualise the heart during the cardiac cycle. The heart contains four chambers, namely the left and right ventricles and the left and right atria. Each chamber plays an important role in the circulation of blood throughout the body. Imbalances in the circulatory system can lead to several cardiovascular diseases. In routine clinical practice, MRIs are produced in large quantities on a daily basis to assist in clinical diagnosis. In practice, the interpretation of these images is generally performed visually by medical experts due to the minimal number of automatic tools and software for extracting quantitative measures. Segmentation refers to the process of detecting regions within an image and associating these regions with known objects. For cardiac MRI, segmentation of the heart distinguishes between the different ventricles and atria. If segmentations of the left and right ventricles exist, doctors will be interested in quantifying the thickness of the ventricle walls, the movement of each ventricle, blood volumes, blood flow-rates, etc. Several cardiac MRI segmentation algorithms have been developed over the past 20 years. However, most of the attention in these segmentation methods was devoted to the left ventricle and its functionality, owing to its approximately cylindrical shape. Analysis of the right ventricle also plays an important role in heart disease assessment and, coupled with left ventricle analysis, will produce a more intuitive and robust diagnostic tool. Unfortunately, the crescent-like shape of the right ventricle makes its mathematical modelling difficult. Another issue associated with segmenting cardiac MRI is that the quality of images can be severely degraded by artefactual signals and image noise emanating from equipment errors, patient errors and image processing errors. The presence of these artefacts adds further difficulty for segmentation algorithms, and many of the currently available segmentation methods cannot account for all of the abovementioned categories. A further shortcoming of current segmentation algorithms is that there is no readily available standard methodology to compare the accuracy of these approaches, as each author has provided results on different cardiac MRI datasets, and segmentation done by human readers (expert segmentation) is subjective. This thesis addresses the issue of accuracy comparison by providing a framework of mathematical, statistical and clinical accuracy measures. The use of publicly available cardiac MRI datasets in which expert segmentation is performed is analysed. The framework allows the author of a new segmentation algorithm to choose a subset of the measures to test their algorithm. A clinical measure is proposed in this thesis which does not require expert segmentation of the cardiac MRI dataset, in which the stroke volumes of the left and right ventricles are compared to each other. This thesis proposes a new three-dimensional cardiac MRI segmentation algorithm that is able to segment both the left and right ventricles. This approach provides a robust technique that improves on the use of the difference of Gaussians (DoG) image filter (a generic illustrative sketch of DoG filtering appears at the end of this listing).
The main focus was to find and extract the region of interest that contains the ventricles and remove all the unwanted information, so that the DoG parameters are created from intensity profiles of this localised region. Two methods are proposed to achieve this localisation, depending on the type of cardiac MRI dataset that is present. The first method is used if the cardiac MRI dataset contains images from a single MRI view. Local and global motion maps are created per MRI slice using pixel intensities from images at all time points through the cardiac cycle. The segmentation results show a slight drop in evaluation metrics relative to state-of-the-art algorithms for the left ventricle and a significant improvement over state-of-the-art algorithms for the right ventricle using the publicly available cardiac MRI datasets. The algorithm is also robust enough to withstand the influence of image noise and simulated patient movement. The second approach to finding the region of interest is used if there are MRIs from three views present in the cardiac MRI dataset. The novel method projects ventricle segmentation in three-dimensional space from two cardiac MRI views to provide automatic ventricle localisation in the third MRI view. This method utilises an iterative approach with convergence criteria to provide final ventricle segmentation in all three MRI views. The results show an increase in segmentation accuracy per iteration and a small stroke volumetric error on the final segmentation. Finally, proposed in this thesis is a triangular surface mesh reconstruction algorithm to create the visualisation of both the left and right ventricles. The segmentation of the ventricles is extracted from the MRI per slice and combined to form a three-dimensional point set. The use of segmentation from the three orthogonal MRI views further improves the visualisation. From the three-dimensional point set, the surface mesh is constructed using Delaunay triangulation, convex hulls and alpha hulls. The volumes of the ventricles are calculated by performing a high-resolution voxelisation of the ventricle mesh, and thereafter several quantification measures are computed. The volume methodology is compared to the commonly used Simpson's method and the results illustrate that the proposed method is superior.

Item A framework for modelling the interactions between biochemical reactions and inorganic ionic reactions in aqueous systems. (2022) Brouckaert, Christopher John.; Lokhat, David.
Bio‐processes interact with the aqueous environment in which they take place. Integrated bio‐process and three‐phase (aqueous–gas–solid) multiple strong and weak acid/base system models are being developed for a range of wastewater treatment applications, including anaerobic digestion, biological sulphate reduction, autotrophic denitrification, biological desulphurization and plant‐wide wastewater treatment systems. In order to model, measure and control such integrated systems, a thorough understanding of the interaction between the bio‐processes and aqueous‐phase multiple strong and weak acid/bases is required. This thesis is based on a series of five papers that were published in Water SA during 2021 and 2022. Chapter 2 (Part 1 of the series) sets out a conceptual framework and a methodology for deriving bioprocess stoichiometric equations. It also introduces the relationship between alkalinity changes in bioprocesses and the underlying reaction stoichiometry, which is a key theme of the series.
Chapter 3 (Part 2 of the series) presents the stoichiometric equations of the major biological processes and shows how their structure can be analysed to provide insight into how bioprocesses interact with the aqueous environment. Such insight is essential for confident, effective and reliable use of model development protocols and algorithms. Where aqueous ionic chemistry is combined with biological chemistry in a bioprocess model, it is advantageous to deal with the very fast ionic reactions in an equilibrium sub‐model. Chapter 4 (Part 5 of the series) presents details of how such an equilibrium speciation sub‐model can be implemented, based on well‐known open‐source aqueous chemistry models. Specific characteristics of the speciation calculations which can be exploited to reduce the computational burden are highlighted. The approach is illustrated using the ionic equilibrium sub‐model of a plant‐wide wastewater treatment model as an example. Provided that the correct measurements are made to quantify the material content of the bioprocess products (outputs), the material content of the bioprocess reactants (inputs) can be determined from the bioprocess products via stoichiometry. The links between the modelling and measurement frameworks, which use summary measures such as chemical oxygen demand (COD) and alkalinity, are described in Parts 3 and 4 of the series, which are included as appendices to the thesis. An additional paper, presenting a case study on modelling an auto‐thermal aerobic bio‐reactor, is included as a third appendix, as it demonstrates the application of some of the principles developed in the series of papers.

Item A review of the engineering properties of concrete with paper mill waste ash — towards sustainable rigid pavement construction. (Silicon) Pillay, Deveshan L.; Oladimeji B., Olalusi; Mostafa, Mohamed M.H.
The drastic surge in urbanisation and construction-related activities is increasing the demand for cement and aggregates, especially for concrete production. Concrete is utilised for a wide variety of structural applications, including rigid pavement construction, due to its superior strength and durability performance. However, the production of cement and concrete increases the carbon footprint, and the sources of natural aggregates are being depleted. Hence, there is an increased demand for pavement designs that incorporate sustainable materials and maintain a consistent level of service. In rigid pavement construction, this can be achieved with the integration of alternate binder systems, such as paper mill ash (PMA). This paper presents a systematic review of the engineering properties of PMA as a partial cement replacement material for sustainable concrete production. The review is focused on the influence of PMA on the engineering properties of concrete. The main advantages and limitations of using PMA were highlighted and discussed. Grey areas for possible future research were also identified. Based on the superior tensile (2.68 – 3.98 MPa) and flexural (4.04 – 5.01 MPa) strength results documented in the various works of literature reviewed, it can be concluded that PMA is a feasible alternative binder material for rigid pavement applications. This, coupled with its negligible CO2e emission value, indicates that PMA is beneficial to the sustainability and serviceability states of rigid pavements.
The viewpoint of this review will be useful to researchers in their future studies and will guide stakeholders in the construction industry towards a better understanding of PMA concrete.

Item A semi-empirical formulation for determination of rain attenuation on terrestrial radio links. (2010) Odedina, Modupe Olubunmi.; Afullo, Thomas Joachim Odhiambo.
Advances in today's fast-growing communication systems have resulted in congestion in the lower frequency bands and the need for higher-capacity broadband services. This has made it inevitable for service providers to migrate to higher frequency bands so as to accommodate the ever-increasing demands on radio communication systems. However, the reliability of such systems at these frequency bands tends to be severely degraded by natural atmospheric phenomena, of which rain is the dominant factor. This is not to say that other factors have become unimportant; however, if attenuation by rain is so severe that a radio link is unavailable for use, then other factors become secondary. Therefore, it is paramount to establish a model capable of predicting the behaviour of these systems in the presence of rain. This study employs a semi-empirical approach for the formulation of rain attenuation models using knowledge of rain rate, raindrop size distribution, and signal level measurements recorded at 19.5 GHz on a horizontally polarized terrestrial radio link. The semi-empirical approach was developed by considering the scattering effect of an electromagnetic wave propagating through a medium containing raindrops. The complex forward scattering amplitudes for the raindrops are determined for all raindrop sizes at different frequencies, utilizing the Mie scattering theory on spherical dielectric raindrops. From these scattering amplitudes, the extinction cross-sections for the spherical raindrops are calculated. Applying power-law regression to the real part of the calculated extinction cross-section, power-law coefficients are determined at different frequencies. The power-law model generated from the extinction cross-section is integrated over different raindrop-size distribution models to formulate theoretical rain attenuation models. The developed rain attenuation models are used with R0.01 rain rate statistics (the rain rate exceeded for 0.01% of the time), determined for four locations in different rain climatic zones in South Africa, to calculate the specific rain attenuation. From a horizontally polarized 6.73 km terrestrial line-of-sight link in Durban, South Africa, experimental rain attenuation measurements were recorded at 19.5 GHz. These rain attenuation measurements are compared with the results obtained from the developed attenuation models with the same propagation parameters to establish the most appropriate attenuation models that describe the behaviour of radio link performance in the presence of rain. For the purpose of validating the results, they are compared with the ITU-R rain attenuation model. This study also considers the characteristics and variations associated with rain attenuation for terrestrial communication systems. This is achieved by utilizing the ITU-R power-law rain attenuation model on 5-year rain rate data obtained from the four different climatic rain zones in South Africa to estimate the cumulative distributions of rain attenuation.
From the raindrop size and 1-minute rain rate measurements recorded in Durban with a disdrometer over six months, rain events over the six months are classified into drizzle, widespread, shower and thunderstorm rain types, and the mean rain rate statistics are determined for each class of rain. The drop-size distribution for all the rain types is estimated. This research has presented a statistical analysis of rain fade data and proposed an empirical rain attenuation model for South African localities. This work has also developed theoretical rain attenuation prediction models based on the assumption that the shapes of raindrops are spherical. The results predicted from these theoretical attenuation models have shown that it is not the raindrop shapes that determine the attenuation due to rain, but rather the raindrop size distribution and the rain rate content in the drops. This thesis also provides a good interpretation of cumulative rain attenuation distributions on a seasonal and monthly basis. From these distributions, appropriate figures of fade margin are derived for various percentages of link availability in South Africa.

Item A study of rain attenuation on terrestrial paths at millimetric wavelengths in South Africa. (2006) Olubunmi, Fashuyi Modupe.; Afullo, Thomas Joachim Odhiambo.
Rain affects the design of any communication system that relies on the propagation of electromagnetic waves. Above a certain threshold frequency, attenuation due to rain becomes one of the most important limits to the performance of terrestrial line-of-sight (LOS) microwave links. Rain attenuation, which is the dominant fading mechanism at these frequencies, is governed by natural rainfall, which can vary from location to location and from year to year. In this dissertation, the ITU-R global prediction techniques for predicting the cumulative distribution of rain attenuation on terrestrial links are studied using five-year rain rate data for twelve different geographical locations in the Republic of South Africa. The specific attenuation γR (dB/km) for both horizontal and vertical polarization is determined (a generic sketch of the power-law form γR = k·R^α appears at the end of this listing). The path attenuation (dB) exceeded for 0.01% of the time is estimated using the available existing models for the twelve different geographical locations, based on the 1-minute integration time rain rate exceeded for 0.01% of the time, averaged over a period of 5 years. A comparison study is done on the available rain attenuation models: the ITU-R model, the Crane Global model, and the Moupfouma models, at different frequencies and propagation path lengths, based on the actual 1-minute integration time rain rate exceeded for 0.01% of the time averaged over a period of 5 years for each geographical location. Finally, from the actual signal attenuation measurements recorded in Durban over a period of 1 year at 19.5 GHz and a propagation path length of 6.73 km, a logarithmic attenuation model and a power attenuation model are proposed for Durban, South Africa. Recommendations for future improvement of this study are given in the concluding chapter. Radio communication designers will find the results obtained in this report useful.

Item Accelerated environmental degradation of GRP composite materials. (2004) Dlamini, Power Madoda.; Von Klemperer, Christopher Julian.; Verijenko, Viktor.
The use of fibre reinforced polymer composites and the development of structural composites have expanded rapidly in the Southern African region over the past ten years.
The long-term effect of placing these materials outdoors in the Southern African climate is unknown, with exposure data for these materials being primarily European and North American based. This study takes a broad-based approach to the problem of environmental degradation of advanced composite structures and is intended to examine different degradation mechanisms. Work performed includes: a study of literature on degradation and protective measures; identification of dominant degradation mechanisms; manufacture of specimens; accelerated environmental testing; and an assessment of the effect of the exposure on the chemical properties. The goal of this work is to produce information which can subsequently be used to determine the rate of damage, suitable methods of protection and necessary maintenance intervals for polymer composite components. The approach was: to simulate outdoor exposure within a reduced period of time; to establish correlation of results with actual outdoor exposure; and to determine how the gel coats compare with other protective methods. As part of the objectives of the study (i.e. to assess the durability of polymer matrix composite materials subjected to environmental exposure), an experimental study was carried out to establish the durability of specific gel coats against ultraviolet (UV) and moisture degradation. An investigation of the effectiveness of the various protective measures began with a review of selected gel coats available as protective coatings. Laminates with these gel coats were set up for both accelerated and natural exposure tests. Accelerated UV exposure tests of 3000, 2500, 2000, 1600, and 800 hours were performed on polyester GRP laminates with gel coats. No measurable strength loss occurred on protected laminates; there was a significant increase in yellowness on unprotected laminates; all protected specimens showed a fair retention of gloss; fibre prominence occurred on unprotected laminates; and the glass transition of samples had dropped from the normal polyester glass transition temperature range.

Item Activity of complex multifunctional organic compounds in common solvents. (2009) Moller, Bruce.; Ramjugernath, Deresh.; Rarey, Jurgen.
The models used in the prediction of activity coefficients are important tools for designing major unit operations (distillation columns, liquid-liquid extractors, etc.). In the petrochemical and chemical industry, well-established methods such as UNIFAC and ASOG are routinely employed for the prediction of the activity coefficient. These methods are, however, reliant on binary group interaction parameters which need to be fitted to reliable experimental data. It is for this reason that these methods are often not applicable to systems which involve complex molecules. In these systems, typically solid-liquid equilibria are of interest, where the solid is some pharmaceutical product or intermediate, or a molecule of similar complexity (the term complex here refers to situations where molecules contain several functional groups which are either polar, hydrogen bonding, or lead to mesomeric structures in equilibrium). In many applications, due to economic and environmental considerations, a list of no more than 20 solvents is usually considered. It is for this reason that the objective of this work is to develop a method for predicting the activity coefficient of complex multifunctional compounds in some common solvents.
The segment activity coefficient approaches proposed by Hansen, MOSCED and the NRTL-SAC models show that it should be possible to “interpolate” between solvents if suitable reference solvents are available (e.g. non-polar, polar and hydrogen bonding). Therefore it is useful to classify the different solvents into suitable categories inside which analogous behaviour should be observed. To accomplish this, a significant amount of data needs to be collected for the common solvents. Data with water as a solvent were freely available and multiple sources were found with suitable data. Both infinite dilution activity coefficient (γ∞) and SLE (Solid-Liquid Equilibrium) data were used for model development. The γ∞ data were taken from the DDB (Dortmund Data Bank) and SLE data were taken from Beilstein, Chemspider and the DDB. The limiting factor for the usage of SLE data was the availability of fusion data (heat of fusion and melting temperature) for the solute. Since γ∞ in water is essentially a pure component property, it was modelled as such, using the experience gained previously by this group. The overall RMD percentage (in ln γ∞) for the training set was 7.3 % for 630 compounds. For the test set the RMD (in ln γ∞) was 9.1 % for 25 fairly complex compounds. Typically the temperature dependence of γ∞ data is ignored when considering model development such as this. Nevertheless, the temperature dependence was investigated and it was found that a very simple general correlation showed moderate accuracy when predicting the temperature dependence of compounds with low solubility. Data for solvents other than water were very scarce, with insufficient data to develop a model with reasonable accuracy. A novel method is proposed for the alkane solvents, which allows the values in any alkane solvent to be converted to a value in the solvent hexane. The method relies on a first-principles application of the solution-of-groups concept. Quite unexpectedly, throughout the course of developing the method, several shortfalls were uncovered in the combinatorial expressions used by UNIFAC and mod. UNIFAC. These shortfalls were empirically accounted for and a new expression for the infinite dilution activity coefficient is proposed. This expression is, however, not readily applicable to mixtures and therefore requires some further attention. The method allows for the extension of the data available in hexane (chosen since it is a common solvent for complex compounds). In the same way as the γ∞ data in water, the γ∞ data in hexane were modelled as a pure component property. The overall RMD percentage (in ln γ∞) for the training set was 21.4 % for 181 compounds. For the test set the RMD (in ln γ∞) was 11.7 % for 14 fairly complex compounds. The great advantage of both these methods is that, since they are treated as pure component properties, the number of model parameters grows linearly with the number of groups, unlike with mixture models (UNIFAC, ASOG, etc.) where it grows quadratically. For both the water and the hexane method, the predictions of the method developed in this work were compared to the predictions of UNIFAC, mod. UNIFAC, COSMO-RS(OL) and COSMO-SAC. Since water and hexane are not the only solvents of practical interest, a method was developed to interpolate the alcohol behaviour based on the water and hexane behaviour. The ability to predict the infinite dilution activity coefficient in various solvents allowed for the prediction of various other properties, viz.
air-water partition coefficient, octanol-water partition coefficient, and water-alcohol cosolvent mixtures. In most cases the predictions of these properties were good, even for the fairly complex compounds tested.

Item Adaptive dynamic matrix control for a multivariable training plant. (2001) Guiamba, Isabel Remigio Ferrao.; Mulholland, Michael.
Dynamic Matrix Control (DMC) has proven to be a powerful tool for optimal regulation of chemical processes under constrained conditions. The internal model of this predictive controller is based on step response measurements at an average operating point. As the process moves away from this point, however, control becomes sub-optimal due to process non-linearity. If DMC is made adaptive, it can be expected to perform well even in the presence of uncertainties, non-linearities and time-varying process parameters. This project examines modelling and control issues for a complex multivariable industrial operator training plant, and develops and applies a method for adapting the controller on-line to account for non-linearity. A two-input/two-output sub-system of the Training Plant was considered. A special technique had to be developed to deal with the integrating nature of this system, that is, its production of ramp outputs for step inputs. The project included the commissioning of the process equipment and the addition of instrumentation and interfacing to a SCADA system which has been developed in the School of Chemical Engineering.

Item Adaptive model predictive control of renewable energy-based micro-grid. (2021) Gbadega, Peter Anuoluwapo.; Saha, Akshay Kumar.
The energy sector is facing a shift from a fossil-fuel energy system to a modern energy system focused on renewable energy and electric transport systems. New control algorithms are required to deal with the intermittent, stochastic, and distributed nature of the generation and with the new patterns of consumption. Firstly, this study proposes an adaptive model-based receding horizon control technique to address the issues associated with the energy management system (EMS) in micro-grid operations (a generic single-step receding-horizon dispatch sketch appears at the end of this listing). The essential objective of the EMS is to balance power generation and demand through energy storage for optimal operation of the renewable energy-based micro-grid. At each sampling point, the proposed control system compares the expected power produced by the renewable generators with the expected load demand and determines the scheduling of the different energy storage devices and generators for the next few hours. The control technique solves an optimization problem to minimize the running cost of the overall micro-grid operations, while satisfying the demand and taking into account technical and physical constraints. Micro-grids, like any other systems, are subject to disturbances during their normal operation; the power generated by the renewable energy sources (RESs) and the demanded power are the main disturbances acting on the micro-grid. As renewable sources are used for generation, their time-varying nature, the difficulty of predicting them, and the inability to manipulate them make them a challenge for the control system. In view of this, the study investigates the impact of incorporating disturbance predictions on the performance of the EMS based on the adaptive model predictive control (AMPC) algorithm, in order to reduce the operating costs of the micro-grid with hybrid energy storage systems.
Furthermore, adequate management of loads and electric vehicle (EV) charging can help enhance micro-grid operation. This study also introduces the concept of demand-side management (DSM), which allows customers to make decisions regarding their energy consumption and helps to reduce the peak load demand and reshape the load profile, so as to improve the efficiency of the system, reduce environmental impacts, and lower the overall operational costs. Moreover, the intermittent nature of renewable energy and random consumer behaviour introduce a stochastic component into the control problem. Therefore, in order to address this, the study utilizes an AMPC control technique, which provides some robustness to the control of systems with uncertainties. Lastly, the performance of the micro-grids used as case studies is evaluated through simulation modelling, implemented in the MATLAB/Simulink environment, and the simulation results show the accuracy and efficiency of the proposed control technique. The results also show how the AMPC can adapt to various generation scenarios, providing an optimal solution to power sharing among the distributed energy resources (DERs) while taking into consideration both the physical and operational constraints and the optimization of the imposed operational criteria.

Item Adaptive multiple symbol decision feedback for non-coherent detection. (2006) Govender, Nishkar Balakrishna.; Xu, Hongjun.; Takawira, Fambirai.
Non-coherent detection is a simple form of signal detection and demodulation for digital communications. The main drawback of this detection method is the performance penalty incurred, since the channel state information is not known at the receiver. Multiple symbol detection (MSD) is a technique employed to close the gap between coherent and non-coherent detection schemes. Differentially encoded M-ary phase shift keying (DM-PSK) is the classic modulation technique that is favourable for non-coherent detection. The main drawback of standard differential detection (SDD) has been the error floor incurred in frequency-flat fading channels. Recently a decision feedback differential detection (DFDD) scheme, which uses the concept of MSD, was proposed and offered significant performance gain over SDD in the mobile flat fading channel, almost eliminating the error floor. This dissertation investigates multiple symbol decision feedback detection schemes, and proposes alternative adaptive strategies for non-coherent detection. An adaptive algorithm utilizing the numerically stable QR decomposition that does not require training symbols is proposed, named QR-DFDD. The QR-DFDD is modified to use a simpler QR decomposition method which incorporates sliding windows: QRSW-DFDD. This structure offers good tracking performance in flat fading conditions, while achieving near-optimal DFDD performance. A bit-interleaved coded decision feedback differential demodulation (DFDM) scheme, which takes advantage of the decision feedback concept and iterative decoding, was introduced by Lampe in 2001. This low-complexity iterative demodulator relied on accurate channel statistics for optimal performance. In this dissertation an alternative adaptive DFDM is introduced using the recursive least squares (RLS) algorithm.
The alternative iterative decoding procedure makes use of the convergence properties of the RLS algorithm, which is more stable and achieves superior performance compared to the DFDM.

Item An adaptive protocol for use over meteor scatter channels. (1987) Spann, Michael Dwight.; Broadhurst, Anthony D.
Modern technology has revived interest in the once popular area of meteor scatter communications. Meteor scatter systems offer reliable communications in the 500 to 2000 km range all day, every day. Recent advances in microprocessor technology have made meteor scatter communications a viable and cost-effective method of providing modest data rate communications. A return to the basic fundamentals has revealed characteristics of meteor scatter propagation that can be used to optimize the protocols for a meteor scatter link. The duration of an underdense trail is bounded when its initial amplitude is known. The upper bound of the duration is determined by maximizing the classical underdense model. The lower bound is determined by considering the volume of sky utilized. The duration distribution between these bounds is computed and compared to measured values. The duration distribution is then used to specify a fixed-data-rate, frame-adaptive protocol which utilizes underdense trails more effectively than a non-adaptive protocol in the half-duplex environment. The performance of these protocols is verified by modelling.

Item Adaptive sedimentation and patch optimization for multi-viewed stereo reconstruction. (2015) Khuboni, Ray Leroy.; Naidoo, Bashan.
This dissertation presents two main contributions towards the Patch-based Multi-View Stereo (PMVS) algorithm. Firstly, we present an adaptive segmentation method for preprocessing input data to the PMVS algorithm. This method applies a specially developed grayscale transformation to the input to redefine the intensity histogram. The Nelder-Mead (NM) simplex method is used to adaptively locate an optimized segmentation threshold point in the modified histogram. The transformed input image is then segmented, using the acquired threshold value, into foreground and background data. This segmentation information is applied to the patch-based method to exclude the background artefacts. The results acquired indicated a reduction in cumulative error whilst achieving relatively similar results, with the benefit of reduced time and space complexity. Secondly, two improvements are made to the patch optimisation stage: both the optimisation method and the photometric discrepancy function are changed. A classical quasi-Newton BFGS method with stochastic objectives is used to incorporate curvature information into the stochastic optimisation method. The BFGS method is modified to introduce stochastic gradient differences, whilst regularising the Hessian approximation matrix to ensure a well-conditioned matrix. The proposed method is employed to solve the optimisation of newly generated patches, refining the 3D geometric orientation and depth information with respect to the visible set of images. We redefine the photometric discrepancy function to incorporate a specially developed feature space in order to address the problem of specular highlights in image datasets. Due to this modification, we are able to incorporate curvature information from those patches which were deemed to be depleted in the refinement process due to their low correlation scores.
With those patches contributing towards the refinement algorithm, we are able to accurately represent the surface of the reconstructed object or scene. This new feature space is also used in the image feature detection to realise more features. From the results, we noticed a reduction in the cumulative error and obtained results that are denser and more complete than the baseline reconstruction.

Item Adaptive techniques with cross-layer design for multimedia transmission. (2013) Vieira, Ricardo.; Xu, Hongjun.
Wireless communication is a rapidly growing field with many of its aspects undergoing constant enhancement. The use of cross-layer design (CLD) in current technologies has improved system performance in terms of Quality-of-Service (QoS) guarantees. While multimedia transmission is difficult to achieve, CLD is capable of incorporating techniques to achieve multimedia transmission without high complexity. Many systems have incorporated some form of adaptive transmission when using a cross-layer design approach. Various challenges must be overcome when transmitting multimedia traffic, the main challenge being that each traffic type, namely voice, image and data, has its own QoS requirements in terms of delay, Symbol Error Rate (SER), throughput and jitter. Recently, cross-layer design has been proposed to exchange information between different layers to optimize the overall system performance. The current literature has shown that the application layer and physical layer can be used to adequately transmit multimedia over fading channels. Using Reed-Solomon coding at the application layer and rate adaptation at the physical layer allows each media type to achieve its QoS requirement whilst being able to transmit the different media within a single packet. This dissertation therefore strives to improve traffic throughput by introducing an unconventional rate adaptation scheme and by using power adaptation to achieve the Symbol Error Rate (SER) QoS in multimedia transmission. Firstly, we introduce a system which modulates two separate sets of information with different modulation schemes. These two information sets are then concatenated and transmitted across the fading channel. The receiver uses a technique called Blind Detection to detect the modulation schemes used and then demodulates the information sets accordingly. The system uses an application layer that encodes each media type such that its QoS, in terms of SER, is achieved. Simulated results show an increase in spectral efficiency, and the system achieves the required Symbol Error Rate constraint at lower Signal-to-Noise Ratio (SNR) values. The second approach involves adapting the input power to the system rather than adapting the modulation scheme. The two power-adaptive schemes that are discussed are Water-Filling and Channel Inversion (a generic sketch of both policies appears at the end of this listing). Channel Inversion allows the SER requirement to be maintained at low SNR values, which is not possible with rate adaptation. Furthermore, the system uses an application layer to encode each media type such that its QoS is achieved. Simulated results using this design show an improvement in throughput, and the system achieves the SER constraint at lower SNR values.

Item Adsorption of heavy metals on marine algae. (2005) Mbhele, Njabulo.; Carsky, Milan.; Pienaar, D. H.
Biosorption is the property of certain types of inactive microbial biomass to bind and concentrate heavy metals from even very dilute aqueous solutions.
Biomass exhibits this property, acting just like a chemical substance, as an ion exchanger of biological origin. It is particularly the cell wall structure of certain algae that is found to be responsible for this phenomenon. In these experiments, the rate and extent of copper removal are examined as functions of parameters such as pH, initial metal concentration, biosorbent size, contact time and temperature, together with the ability of the biomass to be regenerated in sorption-desorption experiments. The metal adsorption was found to be rapid, occurring within 25 minutes. A maximum copper uptake of 30 mg of copper per g of biomass was observed under the following conditions: 100 mg/L initial concentration, 0.1 g of biomass, pH 4 and a temperature of 25°C. From this study, it was found that copper uptake increases with increasing pH, with the optimum being pH 4. Copper uptake increases substantially from 0 to 25 minutes. The metal biosorption behaviour of the raw seaweed Sargassum in six consecutive sorption-desorption cycles was also investigated in a packed-bed column, during continuous removal of copper from a 35 mg/l aqueous solution at pH 4. The sorption and desorption were carried out for an average of 85 and 15 hours, respectively, representing more than 40 days of continuous use of the biosorbent. The weight loss of biomass after this time was 13.5%. The column service time decreased from 25 hrs in the first cycle to 10 hrs in the last cycle.

Item Adsorption studies for the separation of light hydrocarbons. (2014) Govender, Inbanathan.; Ramjugernath, Deresh.; Naidoo, Paramespri.; Nelson, Wayne Michael.
Traditionally, the separation of ethylene from ethane is undertaken using a fractionation sequence. The distillation is performed at low temperatures and elevated pressures in conventional trayed fractionators. For economic feasibility, the separation scheme must be heat integrated to produce the low temperatures needed for separation – as low as 243 K. Low temperature distillation units are expensive to build and are typically only economically feasible for feed streams containing high amounts of ethylene. Adsorption provides a favourable alternative to the traditional low temperature distillation process. The availability of accurately measured adsorption data over a wide range of temperatures and pressures is vital in the design of efficient separation processes. However, reproducible binary adsorption data are not readily available in the literature, due largely to the uncertainties involved in measuring adsorption equilibria. This project involved the measurement of adsorption equilibria using two techniques – the gravimetric and the volumetric technique. Particular focus was placed on the design and commissioning of a volumetric apparatus capable of measuring binary adsorption equilibria over a range of temperatures and pressures, since the gravimetric apparatus is not capable of measuring multicomponent adsorption equilibria. The Thermodynamic Research Unit (TRU) has extensive capabilities in the field of phase equilibria, with specialized expertise in the field of vapour-liquid equilibria (VLE). The objective of this project is to develop competence in the field of adsorption equilibria by designing and commissioning new apparatus; this forms part of a larger objective to extend the capabilities of the TRU. The volumetric apparatus designed and commissioned in this study uses an innovative gas mixer to prepare binary mixtures for adsorption equilibrium measurements.
The measured data were compared to literature data to validate the measurement reproducibility of the apparatus and the accuracy of the measurement techniques used. Adsorption equilibrium data were measured for pure components and a binary system. Pure component adsorption data were measured for methane, ethane and ethylene, and the binary system of ethane + ethylene was also investigated. Measurements were performed at pressures up to 15 bar, at temperatures of 298 K and 323 K, on the adsorbent zeolite 13X. The gravimetric and volumetric apparatus both showed good reliability and reproducibility. Uncertainties in temperature and pressure were 0.1 K and 4×10⁻³ bar for the gravimetric apparatus, and 0.03 K and 0.002 bar for the volumetric apparatus, respectively. The measured equilibrium data were fitted to the Langmuir, Sips and Vacancy Solution Model (VSM) adsorption models (a generic isotherm-fitting sketch appears at the end of this listing), and the regressed parameters were used to predict binary adsorption equilibria. The Langmuir model performed the poorest across the pressure range investigated, with an average absolute deviation (AAD) as high as 5%. The deviation, however, was comparable with the experimental uncertainties reported in the literature. The Sips model improved upon the Langmuir model, with the VSM generally performing the best with an AAD of approximately 1%. The Extended Langmuir, Extended Sips and VSM all provided good predictions of the binary adsorption equilibria. The Extended Langmuir model performed best with an AAD of 3%; the Extended Sips model performed marginally poorer with an AAD of 3.05%; and the VSM performed satisfactorily with an AAD of 6%, marginally higher than the reported experimental uncertainties of 5%.

Item The advancement of the waste resource optimization and scenario evaluation model: the inclusion of socio-economic and institutional indicator. (2018) Kissoon, Sameera.; Trois, Cristina.
This study explored Novice Teacher Educators' (NTEs') experiences of Relational Learning in a private Higher Education Institution (HEI) in South Africa. The main purpose of this study was to gain a deeper understanding of how NTEs involved in initial teacher education experience Relational Learning in a private HEI. It further attempted to gain an insight into how these experiences of Relational Learning influenced their learning as teacher educators and their work as NTEs in a private HEI context. The literature used in this research highlighted the need to develop competent NTEs due to the increased demand for access to higher education institutions, public and private, the limited structured induction and mentoring for NTEs, and the limited research on the relational experiences of NTEs. Key debates on national and international higher education contexts were also foregrounded. The literature review also focused on understanding the phenomenon of Relational Learning as a progressive approach to learning through and about relationships. Relational Learning is viewed as a catalyst for learning with others. Situated Learning Theory (SLT) and Relational Cultural Theory (RCT) were employed as the theoretical framework for the study. The study focuses on six NTEs who are newly appointed teacher educators in their first three years of employment, primarily involved in the teaching of pre-service teachers (or student teachers), with a private HEI as the research context. The NTE participants moved from a school context into a HE context. This research study is a qualitative interpretive case study.
All six NTEs who participated in this research were purposively selected by the researcher. The criteria used to select participants included NTEs being in the first three years of their higher education careers and being able to access technological devices. A qualitative approach was used to generate data, and the data generation instruments used were questionnaires, individual semi-structured interviews and a collage with presentation. The data generation process took three months, and the data generated were validated for authenticity by each participant through member checking. The findings revealed that NTEs experienced many challenges in their first few months of being NTEs and considered this to be an exceptionally overwhelming shift. To overcome these difficult times, NTEs moved to develop relationships with colleagues and more often than not self-selected their mentors to guide and assist them, as there was limited structured induction and mentoring within the institution. The relationship developed between an NTE and a self-selected mentor is a growth-fostering relationship, as foregrounded in Relational Cultural Theory. The findings of this research showed that mutual relationships between NTEs and teacher educators paved the way for NTEs to become active members of communities of practice (COP). The responsibility of HEIs is to provide relational opportunities for NTEs so that they may ease into the profession. Relational opportunities such as mentoring, induction, conversations and social activities, to name a few, have a fundamental role to play in enculturating NTEs into an HEI. Relational Learning has a pivotal role to play in the growth and development of NTEs, thus improving the quality of teacher education.

Item Aerodynamic modelling and further optimisation of solar powered vehicle. (2016) Lawrence, Christopher Jon.; Bemont, Clinton Pierre.; Veale, Kirsty Lynn.
Computational fluid dynamics was used to optimise the aerodynamics of a solar powered vehicle via the addition of airflow alteration devices that interact with the boundary layer airflow. These features were designed, manufactured and applied to the vehicle while ensuring that the bulk geometry remained unmodified. The modifications had to be added to the vehicle non-invasively, and had to allow for removal during race conditions. The solar vehicle raced in both the Sasol Solar Challenge (SASC), which took place in September 2014, and the Bridgestone World Solar Challenge (WSC), which took place in September 2015. Aerodynamic drag is the single largest energy loss experienced by a solar vehicle; it is therefore essential that the aerodynamics of these vehicles be highly refined if they are to be competitive. The UKZN solar vehicle placed first in South Africa in the SASC and 13th in the WSC, indisputably outstanding results. The features to be refined were chosen to reduce aerodynamic drag caused by the wheel spokes as well as the canopy, these being a high-turbulence zone and a region of high curvature respectively. The principles applied were to reduce the turbulence caused by the wheel spokes by adding to the wheel geometry, and to add turbulence to the canopy airflow through the use of a technique commonly known as flow tripping. While turbulence caused by the wheels is undesirable, the turbulence added by flow tripping is desirable, as it reduces the size of the separated region of airflow behind the canopy, allowing for a net reduction in aerodynamic drag.
Wheel geometry alteration was done via the addition of smooth and dimpled covers, so as to mitigate the turbulence caused by the wheel spokes. Many techniques were considered to trip the airflow on the canopy; it was found that vortex generators of specific geometry and dimensions would reduce drag most effectively. Another airflow-altering device, a NACA duct, was designed and manufactured. This duct was placed on the canopy to allow airflow into the driver compartment, which enabled adherence to race rules and allowed for driver cooling and ventilation. Each wheel cover was manufactured from two layers of carbon fibre to allow a net gain in efficiency with regard to rolling resistance and drag reduction when considering the weight added by the wheel covers. The vortex generators and NACA duct were 3-D printed using ABS plastic. The wheel covers and NACA duct were applied to the car for the World Solar Challenge, while only the wheel covers were applied for the Sasol Solar Challenge. The vortex generators were not applied because the efficiency gain from their application was uncertain at the time of the race. A gain in aerodynamic efficiency with the addition of wheel covers to a front wheel was shown through CFD testing. The drag was reduced by approximately 0.5 Newtons (5 %) with regard to translational forces and 0.02 Newtons per meter (44 %) with regard to rotational forces. The addition of vortex generators resulted in a drag reduction ranging from approximately zero to three percent when considering straight airflow and crosswinds respectively.

Item Alternative approach to Power Line Communication (PLC) channel modelling and multipath characterization. (2016) Awino, Steven Omondi.; Afullo, Thomas Joachim Odhiambo.
Modelling and characterization of the Power Line Communication (PLC) channel is an active research area. The research mainly focuses on ways of fully exploiting the existing and massive power line network for communications. In order to exploit the PLC channel for effective communication solutions, the physical properties of the PLC channel need to be studied, especially for high-bandwidth signals. In this dissertation, extensive simulations and measurement campaigns for the channel transfer characteristics are carried out at the University of KwaZulu-Natal in selected offices, laboratories and workshops within the Department of Electrical, Electronic and Computer Engineering. Firstly, in chapter 4 we employ the Parallel Resonant Circuit (PRC) approach, which is based on two-wire transmission line theory, to model the power line channel. The model is developed and simulated, and measurements are done for validation in the PLC laboratory for different network topologies in the frequency domain. From the results, it is found that the PRC model produces similar results to the Series Resonant Circuit (SRC) model, and hence the model is considered suitable for PLC channel modelling and characterization. Secondly, due to the time-variant nature of the power line network, this study also presents the multipath characteristics of the PLC channel in chapter 5. We analyse the effects of the network characteristics on the received signal and derive the multipath characteristics of the PLC channel from measured channel transfer functions by evaluating the channel impulse responses (CIR). The results obtained are compared with results from other parts of the world employing a similar approach based on the Root Mean Square (RMS) delay spread, and are found to be comparable.
Based on the CIR and the extracted multipath characteristics, it is expected that further research in PLC and related topics will be inspired.

Item Alternative binder materials for rigid pavements – an investigation into the structural and sustainability effects of partial cement replacement with pulp and paper mill waste ash in concrete pavements. (2021) Pillay, Deveshan Loganathan.; Mostafa, Mohamed Mostafa Hassan.; Olalusi, Oladimeji Benedict.
Abstract available in PDF.

Item Alternative techniques for the improvement of energy efficiency in cognitive radio networks. (2016) Orumwense, Efe Francis.; Srivastava, Viranjay Mohan.; Afullo, Thomas Joachim Odhiambo.
Abstract available in PDF file.
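The 3D cardiac MRI segmentation entry above builds on the difference of Gaussians (DoG) image filter. The following is a minimal, generic sketch of DoG filtering only; the function name, sigma values and synthetic test image are illustrative assumptions and not the thesis implementation.

```python
# Generic difference-of-Gaussians (DoG) band-pass filter sketch.
# sigma values and the synthetic test image are illustrative placeholders.
import numpy as np
from scipy.ndimage import gaussian_filter

def difference_of_gaussians(image, sigma_narrow=2.0, sigma_wide=6.0):
    """Return the DoG response of a 2-D image (narrow blur minus wide blur)."""
    narrow = gaussian_filter(image.astype(float), sigma=sigma_narrow)
    wide = gaussian_filter(image.astype(float), sigma=sigma_wide)
    return narrow - wide

if __name__ == "__main__":
    # A bright disc on a noisy background stands out in the DoG response,
    # which is the band-pass property exploited when localising structures.
    rng = np.random.default_rng(0)
    img = rng.normal(0.0, 0.05, size=(128, 128))
    yy, xx = np.mgrid[:128, :128]
    img[(yy - 64) ** 2 + (xx - 64) ** 2 < 15 ** 2] += 1.0
    response = difference_of_gaussians(img)
    print("peak DoG response:", float(response.max()))
```

The band-pass nature of the DoG response is what makes roughly blob-shaped structures stand out against slowly varying background intensity.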
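For the two rain-attenuation entries above, the specific attenuation is commonly expressed in the power-law form γR = k·R^α (dB/km), with path attenuation obtained by multiplying by an effective path length. The sketch below illustrates that relationship only; the k and α values are placeholders, since the real coefficients are frequency- and polarisation-dependent and are tabulated in ITU-R Recommendation P.838.

```python
# Power-law specific rain attenuation sketch: gamma_R = k * R**alpha (dB/km).
# The coefficients below are placeholders, not ITU-R P.838 values.
def specific_attenuation(rain_rate_mm_per_h, k, alpha):
    """Specific rain attenuation in dB/km for a given point rain rate."""
    return k * rain_rate_mm_per_h ** alpha

def path_attenuation(rain_rate_mm_per_h, k, alpha, path_km, reduction_factor=1.0):
    """Path attenuation in dB over an effective path length (km)."""
    return specific_attenuation(rain_rate_mm_per_h, k, alpha) * path_km * reduction_factor

if __name__ == "__main__":
    # Hypothetical coefficients for illustration only, evaluated at R0.01 = 60 mm/h
    # over a 6.73 km path (the Durban link length quoted in the abstracts).
    k, alpha = 0.07, 1.0
    print("path attenuation (dB):", path_attenuation(60.0, k, alpha, path_km=6.73))
```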
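The adaptive model predictive control entry above schedules storage and generation over a receding horizon by solving a constrained optimisation at each sampling point. The sketch below shows one generic horizon step posed as a linear program; it is not the AMPC formulation of the thesis, and the forecasts, tariff and battery parameters are made-up numbers.

```python
# One generic receding-horizon dispatch step for a micro-grid with a battery,
# posed as a linear program. All numbers are illustrative placeholders.
import numpy as np
from scipy.optimize import linprog

def dispatch_step(load, pv, price, soc0, cap=10.0, p_max=3.0, eta=0.95):
    """Return (grid, charge, discharge) schedules over the forecast horizon."""
    T = len(load)
    # Decision vector: [grid_0..T-1, ch_0..T-1, dis_0..T-1], all >= 0.
    c = np.concatenate([price, 1e-3 * np.ones(T), 1e-3 * np.ones(T)])
    # Power balance each interval: grid + pv + dis - ch = load.
    A_eq = np.zeros((T, 3 * T))
    for t in range(T):
        A_eq[t, t] = 1.0          # grid import
        A_eq[t, T + t] = -1.0     # charging consumes power
        A_eq[t, 2 * T + t] = 1.0  # discharging supplies power
    b_eq = np.asarray(load) - np.asarray(pv)
    # State-of-charge limits: 0 <= soc0 + cumsum(eta*ch - dis/eta) <= cap.
    A_ub, b_ub = [], []
    for t in range(T):
        row = np.zeros(3 * T)
        row[T:T + t + 1] = eta
        row[2 * T:2 * T + t + 1] = -1.0 / eta
        A_ub.append(row)
        b_ub.append(cap - soc0)   # upper SOC limit
        A_ub.append(-row)
        b_ub.append(soc0)         # lower SOC limit
    bounds = [(0, None)] * T + [(0, p_max)] * (2 * T)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    x = res.x
    return x[:T], x[T:2 * T], x[2 * T:]

if __name__ == "__main__":
    load = [2.0, 2.5, 3.0, 4.0]     # forecast demand (kW), made up
    pv = [1.0, 2.5, 3.5, 0.5]       # forecast PV output (kW), made up
    price = [0.8, 0.5, 0.4, 1.2]    # tariff per interval, made up
    grid, ch, dis = dispatch_step(load, pv, price, soc0=5.0)
    print("grid import:", np.round(grid, 2))
```

In a receding-horizon scheme, only the first interval of the solution would be applied before the problem is re-solved with updated forecasts.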
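The cross-layer multimedia transmission entry above compares two classical power-adaptation policies, Water-Filling and Channel Inversion. The sketch below implements the textbook form of both over a set of fading states; the channel gains and power budget are synthetic, and the bisection-based search is just one of several equivalent ways to find the water level.

```python
# Textbook water-filling and channel-inversion power allocation sketch.
# Channel gains and the power budget are synthetic placeholders.
import numpy as np

def water_filling(gains, total_power):
    """Allocate power across channel gains to maximise the sum of log(1 + g*p)."""
    gains = np.asarray(gains, dtype=float)
    # Bisection on the water level mu, where p_i = max(0, mu - 1/g_i).
    lo, hi = 0.0, total_power + 1.0 / gains.min()
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - 1.0 / gains).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, lo - 1.0 / gains)

def channel_inversion(gains, total_power):
    """Allocate power proportional to 1/g so every state sees the same received SNR."""
    inv = 1.0 / np.asarray(gains, dtype=float)
    return total_power * inv / inv.sum()

if __name__ == "__main__":
    g = np.array([0.2, 0.8, 1.5, 3.0])   # synthetic fading power gains
    for name, p in [("water-filling", water_filling(g, 4.0)),
                    ("channel inversion", channel_inversion(g, 4.0))]:
        rate = np.log2(1.0 + g * p).sum()
        print(f"{name:18s} powers={np.round(p, 2)}  sum-rate={rate:.2f} bit/s/Hz")
```

Water-filling favours strong channel states for throughput, whereas channel inversion equalises the received SNR, which is why it can hold an SER target even in poor states.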
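The adsorption entries above regress measured equilibrium data onto Langmuir and Sips isotherms and report average absolute deviations (AAD). The sketch below shows a generic single-component fit of both isotherms with scipy; the pressure-loading data points are fabricated for illustration and do not come from the theses.

```python
# Generic Langmuir and Sips isotherm fitting sketch. Data points are fabricated.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(p, q_max, b):
    """Langmuir isotherm: q = q_max * b * p / (1 + b * p)."""
    return q_max * b * p / (1.0 + b * p)

def sips(p, q_max, b, n):
    """Sips isotherm: q = q_max * (b * p)**n / (1 + (b * p)**n)."""
    return q_max * (b * p) ** n / (1.0 + (b * p) ** n)

if __name__ == "__main__":
    p = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 15.0])   # pressure, bar (made up)
    q = np.array([1.1, 1.8, 2.6, 3.3, 3.8, 4.1])    # loading, mol/kg (made up)
    (qm_l, b_l), _ = curve_fit(langmuir, p, q, p0=[5.0, 0.5])
    (qm_s, b_s, n_s), _ = curve_fit(sips, p, q, p0=[5.0, 0.5, 1.0])
    aad_l = np.mean(np.abs(langmuir(p, qm_l, b_l) - q) / q) * 100
    aad_s = np.mean(np.abs(sips(p, qm_s, b_s, n_s) - q) / q) * 100
    print(f"Langmuir AAD {aad_l:.1f}%   Sips AAD {aad_s:.1f}%")
```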