Electronic Engineering
Permanent URI for this community: https://hdl.handle.net/10413/6532
Browsing Electronic Engineering by Date Accessioned
Item PLC implementation of online, PRBS-based tests for mechanical system parameter estimation. (2009) Rampersad, Vaughan.; Burton, Bruce.
This thesis investigates the use of correlation techniques to perform system identification tests, with the objective of developing online test methods to perform mechanical parameter extraction as well as machine diagnostics. More specifically, these test methods must be implemented on a Programmable Logic Controller (PLC) in combination with Variable Speed Drives (VSD). Models for motor-based mechanical systems are derived and other documented methods for parameter identification of mechanical systems are discussed. An investigation is undertaken into the principle that the impulse response of a system may be obtained when a test signal with an impulsive autocorrelation is injected into the system. The theory of using correlation functions to determine the numerical impulse response of a system is presented. Suitable test signals, pseudorandom binary sequences (PRBS), are analysed, and their generation and properties are discussed. Simulations are presented showing how the various properties of the PRBS test signals influence the resulting impulse response curve. Further simulations demonstrate how PRBS-based tests in conjunction with a curve-fitting method, in this case linear least squares, can provide a fair estimation of the parameters of a mechanical system. The implementation of a correlation-based online testing routine on a PLC is presented. Results from these tests are reviewed and discussed. A SCADA system that has been designed is discussed and it is shown how this system allows the user to perform diagnostics on networked drives in a distributed automation system. Identification of other mechanical phenomena such as elasticity and the non-linearity introduced by the presence of backlash is also investigated.
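The core of the correlation test can be illustrated in a few lines: a maximal-length PRBS has a near-impulsive autocorrelation, so the input/output cross-correlation approximates the plant's impulse response. Below is a minimal sketch, assuming a Fibonacci LFSR with primitive taps and a toy first-order plant standing in for the motor-drive model; it is not the thesis's PLC implementation.

```python
# Minimal sketch of the correlation test: for a +/-1 maximal-length
# PRBS, R_uu(0) = 1 and off-peak autocorrelation is near zero, so the
# cross-correlation R_uy(k) approximates the impulse response h(k).
import numpy as np

def prbs(n_bits=7, taps=(7, 6)):
    """+/-1 maximal-length sequence from a Fibonacci LFSR (taps assumed primitive)."""
    state = [1] * n_bits
    out = []
    for _ in range(2 ** n_bits - 1):
        out.append(1.0 if state[-1] else -1.0)
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [fb] + state[:-1]
    return np.array(out)

u = np.tile(prbs(), 8)                        # repeat periods for averaging
h_true = 0.2 * 0.8 ** np.arange(30)           # toy first-order impulse response
y = np.convolve(u, h_true)[:len(u)] + 0.01 * np.random.randn(len(u))

# Cross-correlate: R_uu(0) = 1 for a +/-1 sequence, so no extra scaling.
h_est = np.array([np.mean(np.roll(u, k) * y) for k in range(30)])
print(np.round(h_est[:5], 3), "vs", np.round(h_true[:5], 3))
```

A least-squares fit of a parametric motor model (inertia, friction) to `h_est` would then complete the parameter-extraction step described above.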
Item Cross layer hybrid ARQ2 : cooperative diversity. (2008) Beharie, Sannesh Rabiechand.
Cooperative communication allows single-antenna users in a multi-user wireless network to share their antennas and form a virtual multi-antenna transmitter, which leads to transmit diversity. Coded Cooperation introduced channel coding into cooperative diversity, improving on the pioneering cooperative diversity methods, which were based on a user repeating its partner's transmitted signals in a multipath fading channel environment in order to improve Bit Error Rate (BER) performance. In this dissertation Coded Cooperation is simulated and the analytical bounds are evaluated in order to understand basic cooperation principles. This is done using Rate Compatible Punctured Convolutional (RCPC) codes. Based on the understanding of these principles, a new protocol called Cross Layer Hybrid Automatic Repeat reQuest (ARQ) 2 Cooperative Diversity is developed to allow for improvements in BER and throughput. In Cross Layer Hybrid ARQ 2 Cooperation, Hybrid ARQ 2 (at the data-link layer) is combined with cooperative diversity (at the physical layer), in a cross-layer design manner, to improve the BER and throughput based on feedback from the base station on the user's initial transmissions. This is done using RCPC codes, which partition a full-rate code into sub-codewords that are transmitted as incremental packets in an effort to transmit only as much parity as is required by the base station for correct decoding of a user's information bits. This allows cooperation to occur only when it is necessary, unlike conventional Coded Cooperation, where bandwidth is wasted on cooperation when the base station has already decoded a user's information bits. The performance of Cross Layer Hybrid ARQ 2 Cooperation is quantified by BER and throughput. BER bounds of Cross Layer Hybrid ARQ 2 Cooperation are derived based on the Pairwise Error Probability (PEP) of the uplink channels as well as the different inter-user and base station Cyclic Redundancy Check (CRC) states. The BER is also simulated and confirmed using the derived bound. The throughput of this new scheme is also simulated and confirmed via analytical throughput bounds. This scheme maintains BER and throughput gains over conventional Coded Cooperation even under the worst inter-user channel conditions.
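The incremental-redundancy idea at the heart of Hybrid ARQ type II can be sketched in miniature. The following is a hypothetical illustration; the toy decoder stands in for an RCPC decoder plus CRC check and is not the scheme's actual simulator.

```python
# Hypothetical sketch of type-II hybrid ARQ with incremental redundancy:
# a rate-compatible punctured mother code is released as successive
# parity increments until the receiver's CRC passes, so no more parity
# (or partner cooperation) is spent than needed.
import random

def transmit_incremental(increments, try_decode):
    """increments: parity blocks, ordered highest-rate subset first."""
    received = []
    for i, block in enumerate(increments, start=1):
        received.append(block)        # send one more parity increment
        if try_decode(received):      # CRC passed: stop early
            return i
    return None                       # all increments sent; packet lost

# Toy decoder whose success probability grows with accumulated parity.
random.seed(3)
try_decode = lambda rx: random.random() < 0.35 * len(rx)
print("increments used:", transmit_incremental(["p1", "p2", "p3"], try_decode))
```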
Item Traffic modelling and analysis of next generation networks. (2008) Walingo, Tom Mmbasu.; Takawira, Fambirai.
Wireless communication systems have demonstrated tremendous growth over the last decade, and this growth continues unabated worldwide. The networks have evolved from analogue-based first generation systems to third generation systems and beyond. We are envisaging a Next Generation Network (NGN) that should deliver anything, anywhere, anytime, with full quality of service (QoS) guarantees. Delivering anything anywhere anytime is a challenge that is a focus for many researchers. Careful teletraffic design is required for this ambitious project to be realized. This research goes through the protocol choices, design factors, performance measures and the teletraffic analysis necessary to make the project feasible. The first significant contribution of this thesis is the development of a Call Admission Control (CAC) model as a means of achieving QoS in NGNs. The proposed CAC model uses an expanded set of admission control parameters. The existing CAC schemes focus on one major QoS parameter for CAC; the Code Division Multiple Access (CDMA) based models focus on the signal-to-interference ratio (SIR) while the Asynchronous Transfer Mode (ATM) based models focus on delay. A key element of NGNs is the inter-working of many protocols and hence the need for a diverse set of admission control parameters. The developed CAC algorithm uses an expanded set of admission control parameters (SIR, delay, etc.). The admission parameters can be generalized as broadly as the design engineer might require for a particular traffic class without rendering the analysis intractable. The second significant contribution of this thesis is the presentation of a complete teletraffic analytical model for an NGN. The NGN model addresses the following issues: firstly, an NGN call admission control algorithm with expanded admission control parameters; secondly, multiple traffic types with their diverse demands; thirdly, NGN protocol issues such as CDMA's soft capacity; and finally, scheduling on both the wired and wireless links. A full teletraffic analysis with all analytical challenges is presented. The analysis shows that an NGN teletraffic model with more traffic parameters performs better than a model with fewer traffic parameters. The third contribution of the thesis is the extension of the model to traffic arrivals that are not purely Markovian. This work presents a complete teletraffic analytical model with Batch Markovian Arrival Process (BMAP) traffic statistics, unlike the conventional Markovian types. The Markovian traffic models are deployed for analytical simplicity at the expense of realistic traffic types. With CAC, the BMAP processes become non-homogeneous. The analysis of homogeneous BMAP processes is extended to non-homogeneous processes for the teletraffic model in this thesis. This is done while incorporating all the features of the NGN network. A feasible analytical model for an NGN must combine factors from all the areas of the protocol stack. Most models only consider physical layer issues such as SIR or network layer issues such as packet delay. They address either call-level issues or packet-level issues on the network. The fourth contribution has been to incorporate the issues of the transport layer into the admission control algorithm. A complete teletraffic analysis of the network with the effects of the transport layer protocol, the Transmission Control Protocol (TCP), is performed. This is done over a wireless channel. The wireless link and the protocol are mathematically modelled; thereafter, the protocol's effect on network performance is thoroughly presented.
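The expanded-parameter admission test can be pictured as a conjunction of per-parameter capacity checks: a call is admitted only if every QoS constraint still holds with the new call added. The following is a hypothetical sketch; the class names, load model and thresholds are invented for illustration and are far simpler than the thesis's analytical model.

```python
# Hypothetical sketch of call admission control with an expanded
# parameter set (SIR, delay, ...): admit only if all constraints hold.
from dataclasses import dataclass

@dataclass
class CallRequest:
    sir_load: float    # fractional SIR "load" the call would add (invented)
    delay_load: float  # fractional delay budget the call would consume

class AdmissionController:
    def __init__(self, sir_capacity=1.0, delay_capacity=1.0):
        self.sir_used, self.delay_used = 0.0, 0.0
        self.sir_cap, self.delay_cap = sir_capacity, delay_capacity

    def admit(self, call: CallRequest) -> bool:
        ok = (self.sir_used + call.sir_load <= self.sir_cap and
              self.delay_used + call.delay_load <= self.delay_cap)
        if ok:  # reserve resources only if every check passes
            self.sir_used += call.sir_load
            self.delay_used += call.delay_load
        return ok

cac = AdmissionController()
print(cac.admit(CallRequest(sir_load=0.4, delay_load=0.3)))  # True
print(cac.admit(CallRequest(sir_load=0.7, delay_load=0.1)))  # False: SIR limit
```

Adding a new admission parameter for a traffic class then amounts to adding one more check, which mirrors the claim above that the parameter set can be broadened without changing the structure of the test.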
Item Wavelet based image compression integrating error protection via arithmetic coding with forbidden symbol and MAP metric sequential decoding with ARQ retransmission. (2010-08-27) Mahomed, Veruschia.
The phenomenal growth of digital multimedia applications has forced the communications

Item Effect of amplifier non-linearity on the performance of CDMA communication systems in a Rayleigh fading environment. (2010-08-31) Syed, Jameel.
The effect of amplifier non-linearity on the performance of a CDMA communications system

Item Repeat-punctured turbo coded cooperation. (2010-09-01) Moualeu, Jules Merlin Mouatcho.
Transmit diversity usually employs multiple antennas at the transmitter. However, many wireless devices such as mobile cellphones and Personal Digital Assistants (PDAs), to name a few, are limited by size, hardware complexity, power and other constraints to just one antenna. A new paradigm called cooperative communication, which allows single-antenna mobiles in a multi-user scenario to share their antennas, has been proposed lately. This multi-user configuration generates a virtual Multiple-Input Multiple-Output (MIMO) system, leading to transmit diversity. The basic approach to cooperation is for two single-antenna users to use each other's antennas as relays, whereby each of the users achieves diversity. Previous cooperative signaling methods encompass diverse forms of repetition of the data transmitted by the partner to the destination. A new scheme called coded cooperation [15], which integrates user cooperation with channel coding, has also been proposed. This method maintains the same code rate, bandwidth and transmit power as a similar non-cooperative system, but performs much better than previous signaling methods [13], [14] under various inter-user channel qualities. This dissertation first discusses the coded cooperation framework that has been proposed lately [19] and coded cooperation with Rate-Compatible Punctured Convolutional (RCPC) codes, and then investigates the application of turbo codes in coded cooperation. In this dissertation we propose two new cooperative diversity schemes: Repeat-Punctured Turbo Coded Cooperation and coded cooperation using Modified Repeat-Punctured Turbo Codes. Prior to that, Repeat-Punctured Turbo Codes are introduced. We characterize the performance of the two new schemes by developing analytical bounds for the bit error rate, which are confirmed by computer simulations. Finally, turbo coded cooperation using the Forced Symbol Method (FSM) is presented and validated through computer simulations under various inter-user Signal-to-Noise Ratios (SNRs).
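Puncturing, the mechanism the repeat-punctured schemes build on, can be shown in miniature: a low-rate mother code is thinned by a periodic pattern to raise its rate. A toy sketch follows, using the common textbook pattern rather than the specific patterns studied in the dissertation.

```python
# Toy sketch of puncturing a rate-1/3 turbo-style output (systematic
# bits plus two parity streams) down to rate 1/2 by transmitting the
# parity streams alternately.
def puncture_rate_half(systematic, parity1, parity2):
    out = []
    for i, s in enumerate(systematic):
        out.append(s)                                  # always keep data bit
        out.append(parity1[i] if i % 2 == 0 else parity2[i])
    return out

print(puncture_rate_half([1, 0, 1, 1], [0, 0, 1, 0], [1, 1, 0, 1]))
# -> [1, 0, 0, 1, 1, 1, 1, 1]  (8 coded bits for 4 data bits: rate 1/2)
```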
Item Survivability strategies in all optical networks. (2006) Singh, Sidharta.; Nleya, B. M.
Recent advances in fiber optics technology have enabled extremely high-speed transport

Item Rain rate and rain drop size distribution models for line-of-sight millimetric systems in South Africa. (2006) Owolawi, Pius Adewale.; Afullo, Thomas Joachim Odhiambo.
Radio frequencies at millimeter wavelengths suffer greatly from rain attenuation. It is therefore essential to study rainfall characteristics for efficient and reliable design of radio networks at frequencies above 10 GHz. These characteristics of rain are geographically dependent and need to be studied for estimation of rain-induced attenuation. The ITU-R, through Recommendations P.837 and P.838, has presented global approaches to rain-rate variation and rain-induced attenuation in line-of-sight radio links. Therefore, in this dissertation the characteristics of rainfall rate and their applications for South Africa are evaluated. The cumulative distributions of rain intensity for 12 locations in seven regions of South Africa are presented, based on five-year rainfall data. The rain rate with an integration time of 60 minutes is converted into an integration time of 1 minute in accordance with ITU-R recommendations. The resulting cumulative rain intensities and the relations between them are compared with the global figures presented in ITU-R Recommendation P.837, as well as with work in other African countries, notably by Moupfuma and Martin. Based on this work, additional rain-climatic zones are proposed alongside the five identified by the ITU-R for South Africa. Finally, the study compares semi-empirical raindrop-size distribution models such as Laws and Parsons, Marshall and Palmer, Joss, Thams and Waldvogel, and the gamma distribution with the estimated South African models.

Item Channel estimation for SISO and MIMO OFDM communications systems. (2010) Oyerinde, Olutayo Oyeyemi.; Mneney, Stanley Henry.
Telecommunications in the current information age is increasingly relying on the wireless link. This is because wireless communication has made possible a variety of services ranging from voice to data and now to multimedia. Consequently, demand for new wireless capacity is growing rapidly at an alarming rate. In a bid to cope with the challenges of increasing demand for higher data rates, better quality of service, and higher network capacity, there is a migration from Single Input Single Output (SISO) antenna technology to the more promising Multiple Input Multiple Output (MIMO) antenna technology. On the other hand, the Orthogonal Frequency Division Multiplexing (OFDM) technique has emerged as a very popular multi-carrier modulation technique to combat the problems associated with the physical properties of wireless channels, such as multipath fading, dispersion, and interference. The combination of MIMO technology with OFDM techniques, known as MIMO-OFDM systems, is considered a promising solution to enhance the data rate of future broadband wireless communication systems. This thesis addresses a major area of challenge for both SISO-OFDM and MIMO-OFDM systems: estimation of accurate channel state information (CSI) in order to make possible coherent detection of the transmitted signal at the receiver end of the system. Hence, the first novel contribution of this thesis is the development of a low-complexity adaptive algorithm that is robust against both slow and fast fading channel scenarios, in comparison with other algorithms employed in the literature, to implement a soft iterative channel estimator for a turbo equalizer-based receiver for single-antenna communication systems. Subsequently, a Fast Data Projection Method (FDPM) subspace tracking algorithm is adapted to derive a channel impulse response estimator for the implementation of Decision Directed Channel Estimation (DDCE) for Single Input Single Output Orthogonal Frequency Division Multiplexing (SISO-OFDM) systems. This is implemented in the context of a more realistic Fractionally Spaced Channel Impulse Response (FS-CIR) channel model, as opposed to the channel characterized by a Sample Spaced Channel Impulse Response (SS-CIR) widely assumed by other authors. In addition, a fast-convergence Variable Step Size Normalized Least Mean Square (VSSNLMS)-based predictor, with low computational complexity in comparison with others in the literature, is derived for the implementation of the CIR predictor module of the DDCE scheme. A novel iterative receiver structure for the FDPM-based Decision Directed Channel Estimation scheme is also designed for SISO-OFDM systems. The iterative idea is based on the turbo iterative principle. It is shown that improvement in performance can be achieved with the iterative DDCE scheme for OFDM systems in comparison with the non-iterative scheme. Lastly, the iterative receiver structure for the FDPM-based DDCE scheme earlier designed for SISO-OFDM is extended to MIMO-OFDM systems. In addition, a Variable Step Size Normalized Least Mean Square (VSSNLMS)-based channel transfer function estimator is derived in the context of the MIMO channel for the implementation of the CTF estimator module of the iterative Decision Directed Channel Estimation scheme for MIMO-OFDM systems, in place of the linear minimum mean square error (MMSE) criterion. The VSSNLMS-based channel transfer function estimator is found to show an MSE improvement of about 4 dB at an SNR of 5 dB in comparison with the linear MMSE-based channel transfer function estimator.
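The NLMS recursion underlying a VSSNLMS estimator is standard; the sketch below pairs it with a simple, invented step-size adaptation as a placeholder, since the thesis's specific variable-step-size rule is not reproduced here.

```python
# Sketch of an NLMS channel-tap estimator with a simple variable step
# size. The NLMS update is the textbook one; the step-size rule is an
# invented placeholder (larger errors raise mu, small errors decay it).
import numpy as np

def vss_nlms(x, d, taps=4, mu_min=0.05, mu_max=1.0, eps=1e-8):
    w = np.zeros(taps)
    mu = mu_max
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]        # tap-input vector
        e = d[n] - w @ u                       # a-priori estimation error
        mu = np.clip(0.9 * mu + 0.1 * min(e * e, 1.0) * mu_max,
                     mu_min, mu_max)           # placeholder adaptation
        w = w + (mu / (u @ u + eps)) * e * u   # NLMS update
    return w

rng = np.random.default_rng(0)
h = np.array([0.8, -0.4, 0.2, 0.1])            # toy channel to identify
x = rng.standard_normal(2000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
print(np.round(vss_nlms(x, d), 2))             # close to h
```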
Item Performance analysis of LAN, WAN and WLAN in Eritrea. (2006) Kakay, Osman Mohammed Osman.; Afullo, Thomas Joachim Odhiambo.
The dissertation addresses the communication issues of interconnecting the LANs of the different government sectors, and access to the global Internet. Network capacities are being purposely over-engineered in today's commercial Internet. Any network provider, be it a commercial Internet Service Provider (ISP) or an Information Technology Service department at a government, company or university site, will design network bandwidth resources in such a way that there will be virtually no data loss, even during the worst possible network utilization scenario. Thus, the service delivered by today's end-to-end wide area Internet would be perfect if it were not for the inter-domain connections, such as the Internet access link to the ISP, or peering points between ISPs. The thesis studies the performance of the network in Eritrea, presenting the problems of Local Area Networks (LANs) and Wide Area Networks (WANs), suggesting initial solutions, and investigating WAN performance through measured traffic analysis between the Asmara LAN and the Massawa LAN using queueing models (M/M/1 and M/M/2). The dissertation also uses the OPNET IT Guru simulation package to study the performance of LANs and WLANs in Eritrea. The items studied include traffic, collisions, packet loss, and queueing delay. Finally, in order to follow current trends, we study the performance of VoIP links in the Eritrean WAN environment, with a focus on five different link capacities: 28 kbps, 33 kbps, 64 kbps, and 128 kbps for voice, and 256/512 kbps for voice and data. Using the R value as a measure of the mean opinion score (MOS), we determine that the 33 kbps link would be adequate for Eritrean WANs.
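For reference, the standard M/M/1 results that this kind of queueing analysis rests on, for arrival rate λ, service rate μ and utilisation ρ = λ/μ < 1 (mean number in system L, mean sojourn time W, mean waiting time W_q):

```latex
\rho = \frac{\lambda}{\mu}, \qquad
L = \frac{\rho}{1-\rho}, \qquad
W = \frac{1}{\mu - \lambda}, \qquad
W_q = \frac{\rho}{\mu - \lambda}
```

The M/M/2 results used alongside these follow from the general multi-server (Erlang-C) formulas.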
Item Multiuser detection employing recurrent neural networks for DS-CDMA systems. (2006) Moodley, Navern.; Mneney, Stanley Henry.
Over the last decade, access to personal wireless communication networks has evolved to a point of necessity. Attached to the phenomenal growth of the telecommunications industry in recent times is an escalating demand for higher data rates and efficient spectrum utilization. This demand is fuelling the advancement of third generation (3G), as well as future, wireless networks. Current 3G technologies are adding a dimension of mobility to services that have become an integral part of modern everyday life. Wideband code division multiple access (WCDMA) is the standardized multiple access scheme for the 3G Universal Mobile Telecommunication System (UMTS). As an air interface solution, CDMA has received considerable interest over the past two decades and a great deal of current research is concerned with improving the application of CDMA in 3G systems. A key component of CDMA is multiuser detection (MUD), which is aimed at enhancing system capacity and performance by optimally demodulating multiple interfering signals that overlap in time and frequency. This is a major research problem in multipoint-to-point communications. Due to the complexity associated with optimal maximum likelihood detection, many different sub-optimal solutions have been proposed. The focus of this dissertation is the application of neural networks for MUD in a direct-sequence CDMA (DS-CDMA) system. Specifically, it explores how the Hopfield recurrent neural network (RNN) can be employed to give yet another sub-optimal solution to the optimization problem of MUD. There is great scope for neural networks in fields encompassing communications. This is primarily attributed to their non-linearity, adaptivity and key function as data classifiers. In the context of optimum multiuser detection, neural networks have been successfully employed to solve similar combinatorial optimization problems. The concepts of CDMA and MUD are discussed. The use of a vector-valued transmission model for DS-CDMA is illustrated, and common linear sub-optimal MUD schemes, as well as the maximum likelihood criterion, are reviewed. The performance of these sub-optimal MUD schemes is demonstrated. The Hopfield neural network (HNN) for combinatorial optimization is discussed. Basic concepts and techniques related to the field of statistical mechanics are introduced and it is shown how they may be employed to analyze neural classification. Stochastic techniques are considered in the context of improving the performance of the HNN. A neural-based receiver, which employs a stochastic HNN and a simulated annealing technique, is proposed. Its performance is analyzed, by way of simulation, in a communication channel that is affected by additive white Gaussian noise (AWGN). The performance of the proposed scheme is compared to that of the single-user matched filter, linear decorrelating and minimum mean-square error detectors, as well as the classical HNN and the stochastic Hopfield network (SHN) detectors. In conclusion, the feasibility of neural networks (in this case the HNN) for MUD in a DS-CDMA system is assessed by quantifying the relative performance of the proposed model using simulation results and in view of implementation issues.
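To make the MUD-to-HNN connection concrete, here is a minimal sketch for the synchronous case: the maximum-likelihood metric 2bᵀy − bᵀRb over antipodal bit vectors b is locally maximised by asynchronous sign updates. This is the plain deterministic HNN, without the stochastic and annealing refinements studied in the dissertation; the spreading codes and noise level are toy values.

```python
# Minimal sketch of Hopfield-style MUD for synchronous DS-CDMA:
# y = matched-filter outputs, R = normalised code cross-correlation
# matrix; each neuron update moves uphill on the ML metric 2b'y - b'Rb.
import numpy as np

def hopfield_mud(y, R, iters=20):
    b = np.where(y >= 0, 1.0, -1.0)       # matched-filter initial guess
    for _ in range(iters):
        for k in range(len(y)):           # asynchronous neuron updates
            net = y[k] - (R[k] @ b - R[k, k] * b[k])
            b[k] = 1.0 if net >= 0 else -1.0
    return b

rng = np.random.default_rng(1)
codes = np.sign(rng.standard_normal((3, 16)))   # 3 users, length-16 codes
R = codes @ codes.T / 16.0                      # normalised correlations
b_true = np.array([1.0, -1.0, 1.0])
y = R @ b_true + 0.1 * rng.standard_normal(3)   # noisy matched-filter bank
print(hopfield_mud(y, R))                       # recovers b_true
```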
Item Rain attenuation modelling for line-of-sight terrestrial links. (2006) Naicker, Kumaran.; Mneney, Stanley Henry.
In today's rapidly expanding communications industry, there is an ever-increasing demand for greater bandwidth, higher data rates and better spectral efficiency. As a result, current and future communication systems will need to employ advanced spatial, temporal and frequency diversity techniques in order to meet these demands. Even with the utilisation of such techniques, the congestion of the lower frequency bands will inevitably lead to increased usage of the millimetre-wave frequencies in terrestrial communication systems. Before such systems can be deployed, radio system designers require realistic and readily usable channel and propagation models at their disposal to predict the behaviour of such communication links and ensure that reliable and efficient data transmission is achieved. The scattering and attenuation of electromagnetic waves by rain is a serious problem at microwave and millimetre-wave frequencies. The conversion of rain rate to specific attenuation is a crucial step in the analysis of the total path attenuation and hence radio-link availability. It is now common practice to relate the specific attenuation and the rain rate using a simple power-law relationship. The power-law parameters are then used in the path attenuation model, where the spatial variations of rainfall are estimated by a path integration of the rain rate. These power-law parameters are strongly influenced by the drop-size distribution (DSD). Thus an examination of the various DSDs and their influence on the specific attenuation and link availability is warranted. Several models for the DSD have been suggested in the literature, from the traditional exponential to the gamma, lognormal and Weibull distributions. The type of DSD varies depending on the geographical location and rainfall type. An important requirement of the DSD is that it is consistent with the rain rate (i.e. the DSD must satisfy the rain-rate integral equation). Thus, before application in the specific attenuation calculations, normalisation needs to be performed to ensure this consistency, as done in this study. Once the specific attenuation has been evaluated for the necessary frequency and rain-rate range, path averaging is performed to predict the rain attenuation over the communication link. The final step in this dissertation is the estimation of the percentage of time of such occurrences. For this, cumulative time statistics of surface point rain rates are needed. The resulting cumulative distribution model of the fade depth and duration due to rain is a valuable tool for system designers. With such models the system designer can then determine the appropriate fade margin for the communication system and the resulting period of unavailability for the link.
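The power-law step referred to above is the standard ITU-R P.838 form: the specific attenuation γ_R (dB/km) is tied to the point rain rate R (mm/h) through coefficients k and α, which depend on frequency, polarisation and the DSD, and the path attenuation follows by applying an effective path length to a link of physical length d (with r a path reduction factor):

```latex
\gamma_R = k\,R^{\alpha}\ \ \text{[dB/km]}, \qquad
A = \gamma_R\, d_{\mathrm{eff}} = k\,R^{\alpha}\, r\, d\ \ \text{[dB]}
```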
Item A multi-objective particle swarm optimized fuzzy logic congestion detection and dual explicit notification mechanism for IP networks. (2006) Nyirenda, Clement Nthambazale.; Dawoud, Peter Dawoud Shenouda.
The Internet has experienced tremendous growth over the past two decades and with that growth have come severe congestion problems. Research efforts to alleviate the congestion problem can broadly be classified into three groups: (1) router-based congestion detection; (2) generation and transmission of congestion notification signals to the traffic sources; (3) end-to-end algorithms which control the flow of traffic between the end hosts. This dissertation largely addresses the first two groups, which are basically router-initiated. Router-based congestion detection mechanisms, commonly known as Active Queue Management (AQM), can be classified into two groups: conventional mathematical analytical techniques and fuzzy logic based techniques. Research has shown that fuzzy logic techniques are more effective and robust compared to the conventional techniques because they do not rely on the availability of a precise mathematical model of the Internet. They use linguistic knowledge and are, therefore, better placed to handle the complexities associated with the non-linearity and dynamics of the Internet. In spite of all these developments, there still exists ample room for improvement because, in practice, deployment of AQM mechanisms has been slow. In the first part of this dissertation, we study the major AQM schemes in both the conventional and the fuzzy logic domains in order to uncover the problems that have hampered their deployment in practical implementations. Based on the findings from this study, we model the Internet congestion problem as a multi-objective problem. We propose a Fuzzy Logic Congestion Detection (FLCD) algorithm which synergistically combines the good characteristics of the fuzzy approaches with those of the conventional approaches. We design the membership functions (MFs) of the FLCD algorithm automatically by using Multi-objective Particle Swarm Optimization (MOPSO), a population-based stochastic optimization algorithm. This enables the FLCD algorithm to achieve optimal performance on all the major objectives of Internet congestion control. The FLCD algorithm is compared with the basic Fuzzy Logic AQM and the Random Exponential Marking (REM) algorithms on a best-effort network. Simulation results show that the FLCD algorithm provides high link utilization whilst maintaining lower jitter and packet loss. It also exhibits higher fairness and stability compared to its basic variant and REM. We extend this concept to a Proportional Differentiated Services network environment, where the FLCD algorithm outperforms the traditional Weighted RED algorithm. We also propose self-learning and organization structures which enable the FLCD algorithm to achieve a more stable queue, lower packet losses and lower UDP traffic delay in dynamic traffic environments on both wired and wireless networks. In the second part of this dissertation, we present the congestion notification mechanisms which have been proposed for wired and satellite networks. We propose an FLCD-based dual explicit congestion notification algorithm which combines the merits of the Explicit Congestion Notification (ECN) and the Backward Explicit Congestion Notification (BECN) mechanisms. In this proposal, the ECN mechanism is invoked based on the packet marking probability while the BECN mechanism is invoked based on the BECN parameter, which helps to ensure that BECN is invoked only when congestion is severe. Motivated by the fact that TCP reacts to the congestion notification signal only once during a round-trip time (RTT), we propose an RTT-based BECN decay function. This reduces the invocation of the BECN mechanism and consequently the generation of reverse traffic during an RTT. Compared to the traditional explicit notification mechanisms, simulation results show that the new approach exhibits lower packet loss rates and higher queue stability on wired networks. It also exhibits lower packet loss rates, higher goodput and link utilization on satellite networks. We also observe that the BECN decay function reduces reverse traffic significantly on both wired and satellite networks while ensuring that performance remains virtually the same as in the algorithm without BECN traffic reduction.
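As an illustration of what a fuzzy-logic AQM stage computes, here is a deliberately small sketch: two inputs (queue length and queue growth rate) are fuzzified with triangular membership functions and a four-rule base yields a marking probability. The membership-function placements and rule weights are arbitrary, standing in for the MOPSO-optimised design described above.

```python
# Illustrative two-input fuzzy AQM sketch (arbitrary MFs, not the
# thesis's optimised controller): rule strength = min of antecedents,
# defuzzification by weighted average of singleton consequents.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return max(min((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def mark_probability(queue, rate):
    """queue, rate normalised to [0, 1]; returns a marking probability."""
    q_low, q_high = tri(queue, -0.5, 0.0, 0.6), tri(queue, 0.4, 1.0, 1.5)
    r_neg, r_pos = tri(rate, -0.5, 0.0, 0.6), tri(rate, 0.4, 1.0, 1.5)
    rules = [(min(q_low, r_neg), 0.0),   # short queue, draining: no marking
             (min(q_low, r_pos), 0.3),   # short queue, growing: mild
             (min(q_high, r_neg), 0.5),  # long queue, draining: moderate
             (min(q_high, r_pos), 1.0)]  # long queue, growing: aggressive
    w = sum(s for s, _ in rules)
    return sum(s * p for s, p in rules) / w if w > 0 else 0.0

print(round(mark_probability(queue=0.8, rate=0.7), 2))  # high marking
```

MOPSO would then search over the MF parameters (the a, b, c triples above) against the competing objectives of utilisation, loss, jitter and fairness.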
Item Cell search in frequency division duplex WCDMA networks. (2006) Rezenom, Seare Haile.; Broadhurst, Anthony D.
Wireless radio access technologies have been progressively evolving to meet the high data rate demands of consumers. The deployment and success of voice-based second generation networks were enabled through the use of the Global System for Mobile Communications (GSM) and the Interim Standard Code Division Multiple Access (IS-95 CDMA) networks. The rise of the high data rate third generation communication systems is realised by two potential wireless radio access networks, Wideband Code Division Multiple Access (WCDMA) and CDMA2000. These networks are based on the use of various types of codes to initiate, sustain and terminate the communication links. Moreover, different codes are used to separate the transmitting base stations. This dissertation focuses on the base station identification aspects of Frequency Division Duplex (FDD) WCDMA networks. Notwithstanding the ease of deployment of these networks, their asynchronous nature presents serious challenges to the designer of the receiver. One of the challenges is the identification of the base station identity by the receiver, a process called cell search. The receiver algorithms must therefore be robust to hostile radio channel conditions, Doppler frequency shifts and the detrimental effects of carrier frequency offsets. The dissertation begins by discussing the structure and the generation of WCDMA base station data, along with an examination of the effects of the carrier frequency offset. The various cell searching algorithms proposed in the literature are then discussed, and a new algorithm that exploits the correlation length structure is proposed, with simulation results presented. Another design challenge presented by WCDMA networks is the estimation of the carrier frequency offset at the receiver. Carrier frequency offsets arise due to crystal oscillator inaccuracies at the receiver, and their effect is realised when the voltage-controlled oscillator at the receiver is not oscillating at the same carrier frequency as that of the transmitter. This leads to a decrease in receiver acquisition performance. The carrier frequency offset has to be estimated and corrected before the decoding process can commence. There are different approaches in the literature to estimate and correct these offsets. The final part of the dissertation investigates FFT-based carrier frequency estimation techniques and presents a new method that reduces the estimation error.
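The generic FFT-based estimator that this line of work starts from can be sketched briefly: strip the known pilot modulation, then read the offset off the peak bin of a zero-padded FFT. The sample rate, pilot and offset below are invented values, and this is the textbook baseline rather than the thesis's improved method.

```python
# Generic sketch of FFT-based carrier frequency offset estimation.
import numpy as np

fs, f_off, N = 15_000.0, 234.0, 1024        # invented sample rate / offset
rng = np.random.default_rng(7)
n = np.arange(N)
pilot = np.ones(N)                          # assumed known pilot symbols
rx = pilot * np.exp(2j * np.pi * f_off / fs * n)
rx += 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

z = rx * np.conj(pilot)                     # strip pilot modulation
spec = np.abs(np.fft.fft(z, 8 * N))         # zero padding refines the grid
f_hat = np.argmax(spec) / (8 * N) * fs
if f_hat > fs / 2:                          # fold to the negative half
    f_hat -= fs
print(f"estimated offset: {f_hat:.1f} Hz")  # ~ 234 Hz
```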
Item Repeat-punctured turbo codes and superorthogonal convolutional turbo codes. (2007) Pillay, Narushan.; Xu, Hongjun.; Takawira, Fambirai.
The use of error-correction coding techniques in communication systems has become imperative. Due to the heavy constraints faced by systems engineers, more attention has been given to developing codes that converge closer to the Shannon theoretical limit. Turbo codes exhibit performance within a few tenths of a decibel of the theoretical limit and have motivated a great deal of good research in the channel coding area in recent years. In this dissertation, motivated by turbo codes, we study the use of three new error-correction coding schemes: Repeat-Punctured Superorthogonal Convolutional Turbo Codes, Dual-Repeat-Punctured Turbo Codes and Dual-Repeat-Punctured Superorthogonal Convolutional Turbo Codes, applied to the additive white Gaussian noise channel and the frequency non-selective, or flat, Rayleigh fading channel. The performance of turbo codes has been shown to be near the theoretical limit in the AWGN channel. By using orthogonal signaling, which allows for bandwidth expansion, the performance of the turbo coding scheme can be improved even further. Since the result is a low-rate code, the code is mainly suitable for spread-spectrum modulation applications. In conventional turbo codes the frame length is set equal to the interleaver size; however, the codeword distance spectrum of turbo codes improves with increasing interleaver size. It has been reported that the performance of turbo codes can be improved by using repetition and puncturing. Repeat-punctured turbo codes have shown a significant increase in performance at moderate to high signal-to-noise ratios. In this thesis, we study the use of orthogonal signaling and parallel concatenation together with repetition (dual and single) and puncturing to improve the performance of the superorthogonal convolutional turbo code and the conventional turbo code for reliable and effective communications. During this research, three new coding schemes were adapted from the conventional turbo code; a method to evaluate the union bounds for the AWGN channel and flat Rayleigh fading channel was also established, together with a technique for weight-spectrum evaluation.

Item Extending WiFi access for rural reach. (2007) Naidoo, Kribashnee.; Sewsunker, Rathi.
WiFi can be used to provide cost-effective last-mile IP connectivity to rural users. In an initial rollout, hotspots or hotzones can be positioned at community centres such as schools, clinics, hospitals or call-centres. The research will investigate maximizing coverage using physical and higher-layer techniques. The study will consider a typical South African rural region, with telecommunications services traffic estimates. The study will compare several IEEE 802.11 deployment options based on the requirements of the South African case in order to recommend options that improve performance.
Item Multiple antenna systems : channel capacity and low-density parity-check codes. (2005) Byers, Geoffrey James.; Takawira, Fambirai.
The demand for high data rate wireless communication systems is evident today, as indicated by the rapid growth in wireless subscribers and services. High data rate systems are bandwidth intensive, but bandwidth is an expensive and scarce commodity. The ability of future wireless systems to efficiently utilise the available bandwidth is therefore integral to their progress and development. The wireless communications channel is a harsh environment where time-varying multipath fading, noise and interference from other users and systems all contribute to the corruption of the received signal. It is difficult to overcome these problems and achieve the high data rates required using single antenna technology. Multiple-input multiple-output (MIMO) systems have recently emerged as a promising technique for achieving very large bandwidth efficiencies in wireless channels. Such a system employs multiple antennas at both the transmitter and the receiver. These systems exploit the spatial dimension of the wireless channel to achieve significant gains in terms of capacity and reliability over single antenna systems and consequently achieve high data rates. MIMO systems are currently being considered for 3rd generation cellular systems. The performance of MIMO systems is heavily dependent on the environment in which the system is utilised. For this reason a realistic channel model is essential for understanding the performance of these systems. Recent studies on the capacity of MIMO channels have focused on the effect of spatial correlation, but the joint effect of spatial and temporal correlation has not been well studied. The first part of this thesis proposes a new spatially and temporally correlated MIMO channel model which considers motion of the receiver and non-isotropic scattering at both ends of the radio link. The outage capacity of this channel is examined, where the effects of antenna spacing, array angle, degree of scattering and receiver motion are investigated. It is shown that the channel capacity still increases linearly with the number of transmit and receive antennas, despite the presence of both spatial and temporal correlation. The capacity of MIMO channels is generally investigated by simulation. Where analytical expressions have been considered for spatially correlated channels, only bounds or approximations have been used. In this thesis, closed-form analytical expressions are derived for the ergodic capacity of MIMO channels for the cases of spatial correlation at one end and at both ends of the radio link. The latter does not lend itself to numerical integration, but the former is shown to be accurate by comparison with simulation results. The proposed analysis is also very general as it is based on the transmit and receive antenna correlation matrices. Low-density parity-check (LDPC) codes have recently been rediscovered and have been shown to approach the Shannon limit and even outperform turbo codes for long block lengths. Non-binary LDPC codes have demonstrated improved performance over binary LDPC codes in the AWGN channel. Methods to optimise non-binary LDPC codes have not been well developed; only simulation-based approaches, which are not very efficient, have been employed. For this reason, a new approach is proposed which is based on extrinsic information transfer (EXIT) charts. It is demonstrated that by performing curve matching on the EXIT chart, good non-binary LDPC codes can be designed for the AWGN channel. In order to approach the theoretical capacity of MIMO channels, many space-time coded, multiple antenna (MA) systems have been considered in the literature. These systems merge channel coding and antenna diversity and exploit the benefits of both. Binary LDPC codes have demonstrated good performance in MA systems, but non-binary LDPC codes have not been considered. Therefore, the application of non-binary LDPC codes to MA systems is investigated, where the codes are optimised for the system of interest using a simulation and EXIT chart based design approach. It is shown that non-binary LDPC codes achieve a small gain in performance over binary LDPC codes in MA systems.
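The simulation baseline that such capacity studies compare against is the uncorrelated case, whose ergodic capacity is easy to estimate by Monte Carlo. A minimal sketch, assuming i.i.d. Rayleigh fading and equal power allocation (no spatial or temporal correlation, unlike the model proposed above):

```python
# Monte-Carlo sketch of the ergodic capacity of an i.i.d. Rayleigh
# MIMO channel: C = E[ log2 det(I + (rho/Nt) H H^H) ].
import numpy as np

def ergodic_capacity(nt, nr, snr_db, trials=2000, rng=None):
    rng = rng or np.random.default_rng(0)
    rho = 10 ** (snr_db / 10)
    caps = []
    for _ in range(trials):
        h = (rng.standard_normal((nr, nt)) +
             1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
        m = np.eye(nr) + (rho / nt) * h @ h.conj().T
        caps.append(np.log2(np.linalg.det(m).real))
    return np.mean(caps)

for n in (1, 2, 4):   # capacity grows roughly linearly with min(Nt, Nr)
    print(f"{n}x{n}: {ergodic_capacity(n, n, snr_db=10):.2f} bit/s/Hz")
```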
Item Key management in mobile ad hoc networks. (2005) Van der Merwe, Johannes Petrus.; McDonald, Stephen A.
Mobile ad hoc networks (MANETs) eliminate the need for pre-existing infrastructure by relying on the nodes to perform all network services. The connectivity between the nodes is sporadic due to the shared, error-prone wireless medium and frequent route failures caused by node mobility. Fully self-organized MANETs are created solely by the end-users for a common purpose in an ad hoc fashion. Forming peer-to-peer security associations in MANETs is more challenging than in conventional networks due to the lack of a central authority. This thesis is mainly concerned with peer-to-peer key management in fully self-organized MANETs. A key management protocol's primary function is to bootstrap and maintain the security associations in the network, hence to create, distribute and revoke (symmetric or asymmetric) keying material as needed by the network security services. The fully self-organized feature means that the key management protocol cannot rely on any form of off-line or on-line trusted third party (TTP). The first part of the thesis gives an introduction to MANETs and highlights MANETs' main characteristics and applications. The thesis follows with an overall perspective on the security issues in MANETs and motivates the importance of solving the key management problem in MANETs. The second part gives a comprehensive survey of the existing key management protocols in MANETs. The protocols are subdivided into groups based on their main characteristic or design strategy. Discussion and comments are provided on the strategy of each group. The discussions give insight into the state of the art and show researchers the way forward. The third part of the thesis proposes a novel peer-to-peer key management scheme for fully self-organized MANETs, called Self-Organized Peer-to-Peer Key Management (SelfOrgPKM). The scheme has low implementation complexity and provides self-organized mechanisms for certificate dissemination and key renewal without the need for any form of off-line or on-line authority. The fully distributed scheme is superior in communication and computational overhead with respect to its counterparts. All nodes send and receive the same number of messages and complete the same amount of computation. SelfOrgPKM therefore preserves the symmetric relationship between the nodes. Each node is its own authority domain, which provides an adversary with no convenient point of attack. SelfOrgPKM solves the classical routing-security interdependency problem and mitigates impersonation attacks by providing a strong one-to-one binding between a user's certificate information and public key. The proposed scheme uses a novel certificate exchange mechanism that exploits user mobility but does not rely on mobility in any way. The proposed certificate exchange mechanism is ideally suited for bootstrapping the routing security. It enables nodes to set up security associations on the network layer in a localized fashion without any noticeable time delay. The thesis also introduces two generic cryptographic building blocks as the basis of SelfOrgPKM: 1) a variant of the ElGamal-type signature scheme developed from the generalized ElGamal signature scheme introduced by Horster et al.; the modified scheme is one of the most efficient ElGamal variants, outperforming most other variants; and 2) a subordinate public key generation scheme. The thesis introduces the novel notion of subordinate public keys, which allows the users of SelfOrgPKM to perform self-organized, self-certificate revocation without changing their network identifiers/addresses. Subordinate public keys therefore eliminate the main weakness of previous efforts to solve the address ownership problem in Mobile IPv6. Furthermore, the main weakness of previous efforts to break the routing-security interdependence cycle in MANETs is also eliminated by the subordinate public key mechanism. The presented ElGamal signature variant is proved secure in the Random Oracle and Generic Security Model (ROM+GM) without making any unrealistic assumptions. It is shown how the strong security of the signature scheme supports the security of the proposed subordinate key generation scheme. Based on the secure signature scheme, a security argument for SelfOrgPKM is provided with respect to a general, active insider adversary model. The only operation of SelfOrgPKM affecting the network is the pairwise exchange of certificates. The cryptographic correctness, low implementation complexity and effectiveness of SelfOrgPKM were verified through extensive simulations using ns-2 and OpenSSL. Thorough analysis of the simulation results shows that the localized certificate exchange mechanism on the network layer has negligible impact on network performance. The simulation results also correlate with the efficiency analysis of SelfOrgPKM in an ideal network setting, i.e. assuming guaranteed connectivity. The simulation results furthermore demonstrate that network layer certificate exchanges can be triggered without extending routing protocol control packets.
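For orientation, here is a toy sketch of textbook ElGamal signing and verification, the family to which the Horster-style variant above belongs. The thesis's actual variant and its ROM+GM security proof are not reproduced; the tiny prime, assumed generator and bare hash are for illustration only and are insecure.

```python
# Toy textbook ElGamal signature sketch (parameters deliberately tiny
# and insecure; requires Python 3.8+ for pow(k, -1, m)).
import hashlib, random
from math import gcd

p = 48731                                  # small prime (illustration only)
g = 6                                      # assumed generator mod p
x = random.randrange(2, p - 1)             # private key
y = pow(g, x, p)                           # public key

def H(m: bytes) -> int:
    return int.from_bytes(hashlib.sha256(m).digest(), "big") % (p - 1)

def sign(m: bytes):
    k = random.randrange(2, p - 1)
    while gcd(k, p - 1) != 1:              # k must be invertible mod p-1
        k = random.randrange(2, p - 1)
    r = pow(g, k, p)
    s = (pow(k, -1, p - 1) * (H(m) - x * r)) % (p - 1)
    return r, s

def verify(m: bytes, r: int, s: int) -> bool:
    # Accept iff g^H(m) == y^r * r^s (mod p)
    return 0 < r < p and pow(g, H(m), p) == (pow(y, r, p) * pow(r, s, p)) % p

r, s = sign(b"hello")
print(verify(b"hello", r, s), verify(b"tampered", r, s))   # True False
```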
Item A structure from motion solution to head pose recovery for model-based video coding. (2005) Heathcote, Jonathan Michael.; Naidoo, Bashan.
Current hybrid coders such as H.261/263/264 or MPEG-1/-2 cannot always offer high quality-to-compression ratios for video transfer over the (low-bandwidth) wireless channels typical of handheld devices (such as smartphones and PDAs). Often these devices are utilised in videophone and teleconferencing scenarios, where the subjects of interest in the scene are people's faces. In these cases, an alternative coding scheme known as Model-Based Video Coding (MBVC) can be employed. MBVC systems for face scenes utilise geometrically and photorealistically accurate computer graphic models to represent head and shoulder views of people in a scene. High compression ratios are achieved at the encoder by extracting and transmitting only the parameters which represent the explicit shape and motion changes occurring on the face in the scene. With some a priori knowledge (such as the MPEG-4 standard for facial animation parameters), the transmitted parameters can be used at the decoder to accurately animate the graphical model, and a synthesised version of the scene (originally appearing at the encoder) can be output. The primary components for facial re-animation at the decoder are a set of local and global motion parameters extracted from the video sequence appearing at the encoder. Local motion describes the changes in facial expression occurring on the face. Global motion describes the three-dimensional motion of the entire head as a rigid object. Extraction of this three-dimensional global motion is often called head tracking. This thesis focuses on the tracking of rigid head pose in a monocular video sequence. The system framework utilises the recursive Structure from Motion (SfM) method of Azarbayejani and Pentland. Integral to the SfM solution are a large number of manually selected two-dimensional feature points, which are tracked throughout the sequence using an efficient image registration technique. The trajectories of the feature points are simultaneously processed by an extended Kalman filter (EKF) to stably recover camera geometry and the rigid three-dimensional structure and pose of the head. To improve estimation accuracy and stability, adaptive estimation is harnessed within the Kalman filter by dynamically varying the noise associated with each of the feature measurements. A closed-loop approach is used to constrain feature tracking in each frame: the Kalman filter's estimates of the motion and structure of the face are used to predict the trajectories of the features, thereby constraining the search space for the next frame in the video sequence. Further robustness in feature tracking is achieved through the integration of a linear appearance basis to accommodate variations in illumination or changes in aspect on the face. Synthetic experiments are performed for both the SfM and the feature tracking algorithms. The accuracy of the SfM solution is evaluated against synthetic ground truth. Further experimentation demonstrates the stability of the framework under significant noise corruption of the arriving measurement data. The accuracy of the pixel measurements obtained by the feature tracking algorithm is also evaluated against known ground truth. Additional experiments confirm feature tracking stability despite significant changes in target appearance. Experiments with real video sequences illustrate robustness of the complete head tracker to partial occlusions on the face. The SfM solution (including two-dimensional tracking) runs near real time at 12 Hz. The limits of pitch, yaw and roll (rotational) recovery are 45°, 45° and 90° respectively. Large translational recovery (especially depth) is also demonstrated. The estimated motion trajectories are validated against (publicly available) ground truth motion captured using a commercial magnetic orientation tracking system. Rigid re-animation of an overlaid wireframe face model is further used as a visually subjective analysis technique. These combined results serve to confirm the suitability of the proposed head tracker as the global (rigid) motion estimator in an MBVC system.
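The recursive estimator at the core of such an SfM tracker is the standard EKF predict/update cycle, sketched generically below. The motion model f, measurement model h and their Jacobians F and Hj are problem-specific placeholders; the adaptive per-feature measurement weighting described above would enter through the measurement-noise covariance R.

```python
# Generic extended Kalman filter predict/update step (textbook form),
# of the kind used to fuse 2-D feature trajectories into structure,
# motion and camera geometry.
import numpy as np

def ekf_step(x, P, z, f, F, h, Hj, Q, R):
    # Predict: propagate state and covariance through the motion model.
    x_pred = f(x)
    F_k = F(x)
    P_pred = F_k @ P @ F_k.T + Q
    # Update: correct the prediction with the new feature measurements.
    H_k = Hj(x_pred)
    S = H_k @ P_pred @ H_k.T + R             # innovation covariance
    K = P_pred @ H_k.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))     # innovation-weighted correction
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new
```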
Item Human motion reconstruction from video sequences with MPEG-4 compliant animation parameters. (2005) Carsky, Dan.; Naidoo, Bashan.; McDonald, Stephen A.
The ability to track articulated human motion in video sequences is essential for applications ranging from biometrics, virtual reality and human-computer interfaces to surveillance. The work presented in this thesis focuses on tracking and analysing human motion in terms of MPEG-4 Body Animation Parameters, in the context of a model-based coding scheme. Model-based coding has emerged as a potential technique for very low bit-rate video compression. This study emphasises motion reconstruction rather than photorealistic human body modelling; consequently a 3-D skeleton with 31 degrees-of-freedom was used to model the human body. Compression is achieved by analysing the input images in terms of the known 3-D model and extracting parameters that describe the relative pose of each segment. These parameters are transmitted to the decoder, which synthesises the output by transforming the default model into the correct posture. The problem comprises two main aspects: 3-D human motion capture and pose description. The goal of the 3-D human motion capture component is to generate 3-D locations of key joints on the human body without the use of special markers or sensors placed on the subject. The input sequence is acquired by three synchronised and calibrated CCD cameras. Digital image matching techniques, including cross-correlation and least squares matching, are used to find spatial correspondences between the multiple views as well as temporal correspondences in subsequent frames with sub-pixel accuracy. The tracking algorithm automates the matching process, examining each matching result and adaptively modifying the matching parameters. Key points must be manually selected in the first frame, following which the tracking commences without the intervention of the user, employing the recovered 3-D motion of the skeleton model for prediction of future states. Epipolar geometry is exploited to verify spatial correspondences in each frame before the 3-D locations of all joints are computed through triangulation to construct the 3-D skeleton. The pose of the skeleton is described by the MPEG-4 Body Animation Parameters. The subject's motion is reconstructed by applying the animation parameters to a simplified version of the default MPEG-4 skeleton. The tracking algorithm may be adapted to 2-D tracking in monocular sequences; an example of 2-D tracking of facial expressions demonstrates the flexibility of the algorithm. Further results involving tracking of separate body parts demonstrate the advantage of multiple views and the benefit of camera calibration, which simplifies the generation of 3-D trajectories and the estimation of epipolar geometry. The overall system is tested on a walking sequence where full body motion capture is performed and all 31 degrees-of-freedom of the tracked model are extracted. Results show adequate motion reconstruction (i.e. convincing to most human observers), with slight deviations due to lack of knowledge of the volumetric properties of the human body.
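The triangulation step mentioned above has a standard linear (DLT) form: each calibrated view contributes two linear constraints on the homogeneous 3-D point, and the SVD gives the least-squares solution. A minimal sketch with toy cameras standing in for the calibrated three-camera rig:

```python
# Minimal sketch of linear (DLT) triangulation: solve A X = 0 for the
# homogeneous 3-D joint position via the SVD. P1, P2 are 3x4 camera
# projection matrices.
import numpy as np

def triangulate(P1, P2, u1, u2):
    """u1, u2: (x, y) image coordinates of one joint in the two views."""
    A = np.vstack([u1[0] * P1[2] - P1[0],
                   u1[1] * P1[2] - P1[1],
                   u2[0] * P2[2] - P2[0],
                   u2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]          # right singular vector, smallest sv
    return X[:3] / X[3]                  # dehomogenise to (X, Y, Z)

def project(P, X):
    h = P @ np.append(X, 1.0)
    return h[:2] / h[2]

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # reference camera
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # 1-unit baseline
X_true = np.array([0.3, -0.2, 4.0])
print(np.round(triangulate(P1, P2, project(P1, X_true), project(P2, X_true)), 3))
# -> [ 0.3 -0.2  4. ]
```

With a third calibrated view, two more rows are appended to A, which is how the multi-camera rig described above over-determines and stabilises each joint position.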