School Mathematics, Statistics and Computer Science
Permanent URI for this community: https://hdl.handle.net/10413/6526
Browsing School Mathematics, Statistics and Computer Science by Title
Now showing 1 - 20 of 475
Item 2-generations of the sporadic simple groups. (1997) Ganief, Moegamad Shahiem.; Moori, Jamshid. A group G is said to be 2-generated if G = ⟨x, y⟩ for some non-trivial elements x, y ∈ G. In this thesis we investigate three special types of 2-generation of the sporadic simple groups. A group G is an (l, m, n)-generated group if G is a quotient group of the triangle group T(l, m, n) = ⟨x, y, z | x^l = y^m = z^n = xyz = 1⟩. Given divisors l, m, n of the order of a sporadic simple group G, we ask the question: is G an (l, m, n)-generated group? Since we are dealing with simple groups, we may assume that 1/l + 1/m + 1/n < 1. Until recently, interest in this type of generation had been limited to the role it played in genus actions of finite groups; the problem of determining the genus of a finite simple group is tantamount to maximizing the expression 1/l + 1/m + 1/n over the triples (l, m, n) for which the group is (l, m, n)-generated. Secondly, we investigate the nX-complementary generations of the finite simple groups. A finite group G is said to be nX-complementary generated if, given an arbitrary non-trivial element x ∈ G, there exists an element y ∈ nX such that G = ⟨x, y⟩. Our interest in this type of generation is motivated by a conjecture (Brenner-Guralnick-Wiegold [18]) that every finite simple group can be generated by an arbitrary non-trivial element together with another suitable element. It was recently proved by Woldar [181] that every sporadic simple group G is pA-complementary generated, where p is the largest prime divisor of |G|. In an attempt to further the theory of nX-complementary generations of the finite simple groups, we pose the following problem: which conjugacy classes nX of the sporadic simple groups are nX-complementary generated? In this thesis we provide a complete solution to this problem for the sporadic simple groups HS, McL, Co3, Co2, J1, J2, J3, J4 and Fi22. We partially answer the question of (l, m, n)-generation for the said sporadic groups. A finite non-abelian group G is said to have spread r if, for every set {x1, x2, ..., xr} of r distinct non-trivial elements, there is an element y ∈ G such that G = ⟨xi, y⟩ for all i. Our interest in this type of 2-generation comes from a problem of Brenner-Wiegold [19]: find all finite non-abelian groups with spread 1, but not spread 2. Every sporadic simple group has spread 1 (Woldar [181]), and we show that every sporadic simple group has spread 2.
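For orientation, the two definitions above can be restated compactly in LaTeX; this simply rewrites the presentation and the hyperbolicity condition already given in the abstract and adds nothing from the thesis itself.

    \[
    T(l, m, n) = \langle\, x, y, z \mid x^{l} = y^{m} = z^{n} = xyz = 1 \,\rangle ,
    \qquad
    \frac{1}{l} + \frac{1}{m} + \frac{1}{n} < 1 ,
    \]

so that G is (l, m, n)-generated precisely when it is a quotient group of T(l, m, n).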
Item Addressing traffic congestion and throughput through optimization. (2021) Iyoob, Mohamed Zaire.; van Niekerk, Brett. Traffic congestion experienced in port precincts has become prevalent in recent years, both in South Africa and internationally [1, 2, 3]. In addition to the environmental impact of air pollution, the economic effects weigh heavily on profit margins through added fuel costs and time wastage. Although many common factors contribute to congestion in port precincts and other areas, operational inefficiencies due to slow productivity and a lack of handling equipment to service trucks in port areas are a major contributor [4, 5]. While there are several optimisation approaches to addressing traffic congestion, such as queuing theory [6], genetic algorithms [7], ant colony optimisation [8] and particle swarm optimisation [9], traffic congestion is modelled in terms of congested queues, making queuing theory the most suitable for this problem. Queuing theory is a discipline of optimisation that studies the dynamics of queues in order to reduce waiting times. The use of optimisation to address the root cause of port traffic congestion has been lacking, with several studies focusing on specific traffic zones that only address the symptoms. In addition, research into traffic around port precincts has been limited to the road side, with proposed solutions focusing on scheduling and appointment systems [25, 56], or to the sea side, focusing on managing vessel traffic congestion [30, 31, 58]. The aim of this dissertation is to close this gap through the novel design and development of Caudus, a smart queue solution that addresses traffic congestion and throughput through optimisation. The name "CAUDUS" is derived as an anagram with Latin origins meaning "remove truck congestion". Caudus has three objective functions to address congestion in the port precinct and, by extension, congestion in warehousing and freight logistics environments, viz. preventive, reactive and predictive. The preventive objective function employs Little's rule [14] to derive the algorithm for preventing congestion. Acknowledging that congestion is not always avoidable, the reactive objective function addresses the problem by leveraging Caudus' integration capability with Intelligent Transport Systems [65] in conjunction with other road-user network solutions. The predictive objective function is aimed at ensuring the environment is incident free and provides early-warning detection of possible exceptions in traffic situations that may lead to congestion. This is achieved using algorithms derived in this study that identify bottleneck symptoms in one traffic zone whose root cause exists in an adjoining traffic area. The Caudus simulation was developed in this study to test the derived algorithms against the different congestion scenarios. The simulation uses HTML5 and JavaScript in the front-end GUI, with a SQL code base at the back end. The entire simulation process is triggered by a series of multi-threaded batch programs that mimic the real world by ensuring process independence for the various simulation activities. The results from the simulation demonstrate a significant reduction in the duration of congestion experienced in the port precinct. They also show a reduction in the throughput time of the trucks serviced at the port, demonstrating Caudus' novel contribution to addressing traffic congestion and throughput through optimisation. These results were also published and presented at the International Conference on Artificial Intelligence, Big Data, Computing and Data Communication Systems (icABCD 2021) under the title "CAUDUS: An Optimisation Model to Reducing Port Traffic Congestion" [84].
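Since the preventive objective function rests on Little's rule, a minimal sketch of how that rule can flag impending congestion is given below. This is an illustrative reconstruction rather than code from the Caudus system, and the arrival rate, turnaround time and capacity figures are hypothetical.

    # Little's law: L = lambda * W, where L is the expected number of trucks in
    # the system, lambda the arrival rate and W the mean time spent in the system.
    def expected_trucks(arrival_rate_per_hour: float, mean_hours_in_system: float) -> float:
        return arrival_rate_per_hour * mean_hours_in_system

    def congestion_warning(arrival_rate_per_hour: float,
                           mean_hours_in_system: float,
                           precinct_capacity: int) -> bool:
        # Flag congestion when the predicted number of trucks exceeds the
        # (hypothetical) holding capacity of the port precinct.
        return expected_trucks(arrival_rate_per_hour, mean_hours_in_system) > precinct_capacity

    if __name__ == "__main__":
        # 40 trucks/hour arriving, 1.5 hours average turnaround, 50 truck slots.
        print(congestion_warning(40.0, 1.5, precinct_capacity=50))  # True -> act preventively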
Item Adjusting the effect of integrating antiretroviral therapy and tuberculosis treatment on mortality for non-compliance: an instrumental variables analysis using a time-varying exposure. (2018) Yende-Zuma, Fortunate Nonhlanhla.; Mwambi, Henry Godwell.; Vansteelandt, Stijn. In South Africa and elsewhere, research has shown that the integration of antiretroviral therapy (ART) and tuberculosis (TB) treatment saves lives. The randomised controlled trials (RCTs) which provided this compelling evidence used an intent-to-treat (ITT) strategy as part of their primary analysis. Although ITT is protected against selection bias caused by both measured and unmeasured confounders, it can draw results towards the null and underestimate the effectiveness of treatment if there is too much non-compliance. To adjust for non-compliance, "as-treated" and "per-protocol" comparisons are commonly made. These contrast study participants according to the treatment they actually received, regardless of the arm to which they were assigned, or limit the analysis to participants who followed the protocol. Such analyses are generally biased because the subgroups which they compare often lack comparability. In view of the shortcomings of the "as-treated" and "per-protocol" analyses, our objective was to account for non-compliance by using instrumental variables (IV) analysis to estimate the effect of ART initiation during TB treatment on mortality. Furthermore, to capture the full complexity of compliance behaviour outside the TB treatment duration, we developed a novel IV methodology for a time-varying measure of compliance to ART. This is an important contribution to the IV literature, since IV methodology for the effect of a time-varying exposure on a time-to-event endpoint is currently lacking. In RCTs, IV analysis enables us to make use of the comparability offered by randomisation and thereby adjust for both measured and unmeasured confounders; it has the further advantage of yielding results that are less sensitive to random measurement error in the exposure. In order to carry out an IV analysis, one needs to identify a variable called an instrument, which must satisfy three important assumptions. To apply the IV methodology, we used data from the Starting Antiretroviral Therapy at Three Points in Tuberculosis (SAPiT) trial, conducted by the Centre for the AIDS Programme of Research in South Africa. This trial enrolled HIV and TB co-infected patients who were assigned to start ART either early or late during TB treatment, or after TB treatment completion. The results from the IV analysis demonstrate that the survival benefit of fully integrating TB treatment and ART is even higher than reported in the ITT analysis, since non-compliance has been accounted for.
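To illustrate the basic idea of using randomisation as an instrument, the sketch below computes a simple Wald-type IV estimate with numpy. It is deliberately reduced to a binary outcome and a single time-fixed exposure, so it does not reproduce the time-varying, time-to-event methodology developed in the thesis; all variable names and the simulated data are hypothetical.

    import numpy as np

    def wald_iv_estimate(z: np.ndarray, a: np.ndarray, y: np.ndarray) -> float:
        """Wald (ratio) IV estimate of the effect of exposure a on outcome y,
        using the randomised arm z as the instrument.
        z: 0/1 randomised assignment (e.g. early vs. delayed ART initiation)
        a: 0/1 exposure actually received (compliance with early ART)
        y: 0/1 outcome (e.g. death during follow-up)
        """
        itt_effect_on_outcome = y[z == 1].mean() - y[z == 0].mean()
        itt_effect_on_exposure = a[z == 1].mean() - a[z == 0].mean()
        return itt_effect_on_outcome / itt_effect_on_exposure

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        z = rng.integers(0, 2, size=1000)
        a = np.where(rng.random(1000) < 0.8, z, 1 - z)   # imperfect compliance
        y = rng.binomial(1, 0.15 - 0.05 * a)             # exposure lowers risk
        print(wald_iv_estimate(z, a, y))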
Item The adoption of Web 2.0 tools in teaching and learning by in-service secondary school teachers: the Mauritian context. (2018) Pyneandee, Marday.; Govender, Desmond Wesley.; Oogarah-Pratap, Brinda. With the current rapid increase in students' use of Web 2.0 tools, it is becoming necessary for teachers to understand this social networking phenomenon, so that they can better understand the new spaces that students inhabit and the implications for students' learning, investigate the wealth of available Web 2.0 tools, and work to incorporate some of them into their pedagogical and learning practices. Teachers are using the Internet and social networking tools in their personal lives. However, there is little empirical evidence on teachers' viewpoints and their use of social media and other online technologies to support their classroom practice. This study stemmed from the urgent need to address this gap by exploring teachers' perceptions and experience of the integration of online technologies and social media in their personal lives and professional practice, in order to find the best predictors of teachers' use of Web 2.0 tools in their professional practice. Underpinning the study is a conceptual framework consisting of core ideas found in the unified theory of acceptance and use of technology (UTAUT) and technology pedagogy and content knowledge (TPACK) models. The conceptual framework, together with a review of relevant literature, enabled the formulation of a theoretical model for understanding teachers' intention to exploit the potential of Web 2.0 tools. The model was then further developed using a mixed-method, two-phase methodology. In the first phase, a survey instrument was designed and distributed to in-service teachers following a Postgraduate Certificate in Education course at the institution where the researcher works. Using the data collected from the survey, exploratory factor analysis, correlational analysis and multiple regression analysis were used to refine the theoretical model. Other statistical methods were also used to gain further insights into teachers' perceptions of the use of Web 2.0 tools in their practices. In the second phase of the study, survey respondents were purposefully selected, based on the quantitative results, to participate in interviews. The qualitative data yielded by the interviews were used to support and enrich understanding of the quantitative findings. The constructs teacher knowledge and technology pedagogy knowledge from the TPACK model, and the constructs effort expectancy, facilitating conditions and performance expectancy, are the best predictors of teachers' intentions to use Web 2.0 tools in their professional practice. There was an interesting finding on the relationship between the UTAUT and TPACK constructs. The constructs performance expectancy and effort expectancy had a significant relationship with all the TPACK constructs (technology knowledge, technology pedagogy knowledge, pedagogical content knowledge (PCK), technology and content knowledge, and TPACK) except for content knowledge and pedagogical knowledge. The association of the TPACK construct PCK with the UTAUT constructs performance expectancy and effort expectancy was an unexpected finding, because PCK involves only pedagogy and content and has no technology component. The theoretical contribution of this study is the model of teachers' intention of future use of Web 2.0 tools in their professional practice. The predictive model, together with the other findings, enhances understanding of the nature of teachers' intention to utilise Web 2.0 tools in their professional practice. Findings from this study have implications for school infrastructure, professional development of teachers and an ICT learning environment to support the adoption of Web 2.0 tools in teaching practices, and are presented as guiding principles at the end of the study.
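The multiple regression step described above can be pictured with a short statsmodels sketch; the data file and construct column names are hypothetical placeholders for survey-derived scale scores, and the actual instrument and model in the thesis may differ.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical survey extract: one row per teacher, one column per construct score.
    df = pd.read_csv("teacher_survey_scores.csv")

    # Regress behavioural intention on selected UTAUT and TPACK construct scores.
    model = smf.ols(
        "intention_to_use ~ performance_expectancy + effort_expectancy "
        "+ facilitating_conditions + technology_knowledge + technology_pedagogy_knowledge",
        data=df,
    ).fit()
    print(model.summary())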
Item Age structured models of mathematical epidemiology. (2013) Massoukou, Rodrigue Yves M'pika.; Banasiak, Jacek. We consider a mathematical model which describes the dynamics of the spread of a directly transmitted disease in an isolated population with age structure, in an invariant habitat, where all individuals have a finite life-span; that is, the maximum age is finite, hence the mortality is unbounded. We assume that infected individuals do not recover permanently, meaning that these diseases do not convey immunity (examples are the common cold, influenza and gonorrhoea), and that the infection can be transmitted horizontally as well as vertically from adult individuals to their newborns. The model consists of a nonlinear and nonlocal system of equations of hyperbolic type. The above-mentioned model has already been analysed by many authors who assumed a constant total population. With this assumption they considered the ratios of the density and the stable age profile of the population, see [16, 31]. In this way they were able to eliminate the unbounded death rate from the model, making it easier to analyse by means of semigroup techniques. In this work we do not make such an assumption, except for the error estimates in the asymptotic analysis of a singularly perturbed problem, where we assume that the net reproduction rate satisfies R ≤ 1. For certain particular age-dependent constitutive forms of the force of infection term, solvability of the above-mentioned age-structured epidemic model is proven. In the intercohort case, we use semigroup theory to prove that the problem is well-posed in a suitable population state space of Lebesgue integrable vector valued functions and has a unique classical solution which is positive, global in time and depends continuously on the initial data. Further, we prove, under additional regularity conditions (composed of specific assumptions and compatibility conditions at the origin), that the solution is smooth. In the intracohort case, we have to consider a suitable population state space of bounded vector valued functions, on which the (unbounded) population operator cannot generate a strongly continuous semigroup and which is therefore not suitable for semigroup techniques: any strongly continuous semigroup on the space of bounded vector valued functions is uniformly continuous, see [6, Theorem 3.6]. Since, for a finite life-span of the population, the space of bounded vector valued functions is a subspace densely and continuously embedded in the state space of Lebesgue integrable vector valued functions, we can restrict the analysis of the intercohort case to the above-mentioned space of bounded vector valued functions. We prove that this state space is invariant under the action of the strongly continuous semigroup generated by the (unbounded) population operator on the state space of Lebesgue integrable vector valued functions. Further, we prove the existence and uniqueness of a mild solution to the problem. In general, different time scales can be identified in age-structured epidemiological models. In fact, if the disease is not terminal, the process of getting sick and recovering is much faster than a typical demographic process. In this work, we consider the case where recovering is much faster than getting sick, giving birth and dying. We adopt a convenient approach that carries out a preliminary theoretical analysis of the model and, in particular, identifies its time scales. Typically this allows separation of scales and aggregation of variables through asymptotic analysis based on the Chapman-Enskog procedure, to arrive at reduced models which preserve essential features of the original dynamics while being easier to analyse.
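As orientation, an age-structured SIS-type system of the general kind described above (horizontal transmission through a nonlocal force of infection, no lasting immunity) may be written as below. This is a typical textbook form given purely for illustration; it is not claimed to be the exact system analysed in the thesis.

    \[
    \begin{aligned}
    \partial_t s(a,t) + \partial_a s(a,t) &= -\mu(a)\, s(a,t) - \lambda(a,t)\, s(a,t) + \gamma(a)\, i(a,t),\\
    \partial_t i(a,t) + \partial_a i(a,t) &= -\mu(a)\, i(a,t) + \lambda(a,t)\, s(a,t) - \gamma(a)\, i(a,t),
    \end{aligned}
    \]

Here s and i are the age densities of susceptibles and infectives, μ(a) is the mortality rate (unbounded as a approaches the finite maximum age), γ(a) is the recovery rate, λ(a,t) is the force of infection depending nonlocally on i(·,t), and boundary conditions at a = 0 distribute newborns between the two classes, which is how vertical transmission enters.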
Item Age, period and cohort analysis of young adult mortality due to HIV and TB in South Africa: 1997-2015. (2019) Nkwenika, Tshifhiwa Mildred.; Mwambi, Henry Godwell.; Manda, Samuel. Young adult mortality is a major concern in South Africa, given the impact of Human Immunodeficiency Virus/Acquired Immune Deficiency Syndrome (HIV/AIDS), tuberculosis (TB), injuries and emerging non-communicable diseases (NCDs). Investigation of temporal trends in adult mortality associated with TB and HIV has often been based on age, gender, period and birth cohort separately. The overall aim of this study was to estimate the age effect across period and birth cohort, the period effect across age and birth cohort, and the birth cohort effect across age and period, on TB- and HIV-related mortality. Mortality data and mid-year population estimates were obtained from Statistics South Africa for the period 1997 to 2015. Observed HIV/AIDS deaths were adjusted for under-reporting, and adjustments for the misclassification of AIDS deaths and for the proportion of ill-defined natural causes were made. Three-year age, period and birth cohort intervals for 15-64 years, 1997-2015 and 1934-2000, respectively, were used. Age-Period-Cohort (APC) analysis using the Poisson distribution was used to compute the effects of age, period and cohort on mortality due to TB and HIV. A total of 5,825,502 adult deaths were recorded for the period 1997 to 2015, of which 910,731 (15.6%) were TB deaths and 252,101 (4.3%) were HIV deaths. A concave-down association between TB mortality and period was observed, while an upward trend was observed for HIV-related mortality. The estimated TB relative mortality showed a concave-down association with age, with a peak at 36-38 years. There was a concave-down relationship between TB relative risk and period between 1997 and 2015. Findings showed a general downward trend in TB mortality with birth cohort, with the 1934 cohort having the highest mortality rates. There was a flatter inverted U-shaped association between age and HIV-related mortality, most pronounced at 30-32 years. An inverted U-shaped relationship between HIV-related mortality and period from 1997 to 2015 was estimated, together with an inverted V-shaped relationship between birth cohort and HIV-related mortality. In summary, the study found an inverted U-shaped association between TB-related mortality and both age and period, and a general downward trend with birth cohort, for deaths reported between 1997 and 2015; for HIV-related mortality, a concave-down relationship with age and period and an inverted V-shaped relationship with birth cohort were found. After adjustment, the association between HIV-related mortality and period differs from the officially reported trend, which shows an upward progression. Our findings are based on a slightly more advanced statistical model, the Age-Period-Cohort model. Using APC analysis, we found secular trends in TB- and HIV-related mortality rates which could provide useful clues for long-term planning, monitoring and evaluation.
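The Poisson APC model referred to above is commonly written as the log-linear rate model below. This is the standard generic form, shown for orientation; it is not quoted from the thesis.

    \[
    D_{apc} \sim \mathrm{Poisson}(\mu_{apc}),
    \qquad
    \log \mu_{apc} = \log N_{apc} + \beta_0 + \alpha_a + \beta_p + \gamma_c ,
    \]

where D_{apc} and N_{apc} are the deaths and person-years in age group a, period p and birth cohort c (with c determined by p - a), and α_a, β_p and γ_c are the age, period and cohort effects. The exact linear dependence of cohort on age and period is what makes extra identification constraints necessary in APC modelling.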
Item Algebraic properties of ordinary differential equations. (1995) Leach, Peter Gavin Lawrence. In Chapter One the theoretical basis for infinitesimal transformations is presented, with particular emphasis on the central theme of this thesis, which is the invariance of ordinary differential equations, and their first integrals, under infinitesimal transformations. The differential operators associated with these infinitesimal transformations constitute an algebra under the operation of taking the Lie bracket. Some of the major results of Lie's work are recalled. The way to use the generators of symmetries to reduce the order of a differential equation and/or to find its first integrals is explained. The chapter concludes with a summary of the state of the art in the mid-seventies, just before the work described here was initiated. Chapter Two describes the growing awareness of the algebraic properties of the paradigms of differential equations. This essentially ad hoc period demonstrated that there was value in studying the Lie method of extended groups for finding first integrals, and so solutions, of equations and systems of equations. This value was emphasised by the application of the method to a class of nonautonomous anharmonic equations which did not belong to the then pantheon of paradigms. The generalised Emden-Fowler equation provided a route to major development in the theory of the conditions for the linearisation of second order equations, in addition to its own interest. The stage was now set to establish broad theoretical results and retreat from the particularism of the seventies. Chapters Three and Four deal with the linearisation theorems for second order equations and the classification of intrinsically nonlinear equations according to their algebras. The rather meagre results for systems of second order equations are recorded. In the fifth chapter the investigation is extended to higher order equations, for which there are some major departures from the pattern established at the second order level and reinforced by the central role played by these equations in a world still dominated by Newton. The classification of third order equations by their algebras is presented, but it must be admitted that the story of higher order equations is still very much incomplete. In the sixth chapter the relationships between first integrals and their algebras are explored, for both first order integrals and those of higher orders. Again the peculiar position of second order equations is revealed. In the seventh chapter the generalised Emden-Fowler equation is given a more modern and complete treatment. The final chapter looks at one of the fundamental algebras associated with ordinary differential equations, the three-element sl(2, R), which is found in all higher order equations of maximal symmetry, is a fundamental feature of the Pinney equation (which has played so prominent a role in the study of nonautonomous Hamiltonian systems in physics) and is the signature of Ermakov systems and their generalisations.

Item Algebraizing deductive systems. (1995) Van Alten, Clint Johann.; Raftery, James Gordon.; Sturm, Teo. Abstract available in PDF.

Item Analysis and numerical solutions of fragmentation equation with transport. (2012) Wetsi, Poka David.; Banasiak, Jacek.; Shindin, Sergey Konstantinovich. Fragmentation equations occur naturally in many real world problems, see [ZM85, ZM86, HEL91, CEH91, HGEL96, SLLM00, Ban02, BL03, Ban04, BA06] and references therein. The mathematical study of these equations has mostly concentrated on building existence and uniqueness theories and on the qualitative analysis of solutions (shattering); some effort has also been devoted to finding solutions analytically. In this project, we deal with the numerical analysis of the fragmentation equation with transport. First, we provide some existence results in Banach and Hilbert space settings; then we turn to the numerical analysis. For this, an approximation and interpolation theory for generalized Laguerre functions is derived. Using these results we formulate a Laguerre pseudospectral method and provide its stability and convergence analysis. The project concludes with several numerical experiments.
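For readers unfamiliar with this class of equations, one standard form of a continuous fragmentation equation with a transport (growth) term is sketched below. It is given purely for orientation and is not necessarily the exact model treated in the project.

    \[
    \frac{\partial u}{\partial t}(x,t)
    + \frac{\partial}{\partial x}\bigl[r(x)\,u(x,t)\bigr]
    = -a(x)\,u(x,t)
    + \int_{x}^{\infty} a(y)\, b(x \mid y)\, u(y,t)\, \mathrm{d}y ,
    \]

where u(x,t) is the density of particles of mass x at time t, r is the transport (growth) rate, a is the fragmentation rate and b(x | y) describes the distribution of daughter particles of mass x arising from the break-up of a particle of mass y.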
Item Analysis of a binary response: an application to entrepreneurship success in South Sudan. (2012) Lugga, James Lemi John Stephen.; Zewotir, Temesgen Tenaw. Just over half (50.6%) of the population of South Sudan lives on less than one US dollar a day. Three quarters of the population live below the poverty line (NBS, Poverty Report, 2010). Generally, effective government policy to reduce unemployment and eradicate poverty focuses on stimulating new businesses. Micro and small enterprises (MSEs) are the major source of employment and income for many in under-developed countries. The objective of this study is to identify factors that determine business success and failure in South Sudan. To achieve this objective, generalized linear models, survey logistic models, generalized linear mixed models and multiple correspondence analysis are used. The data used in this study come from the business survey conducted in 2010. The response variable, defined as business success or failure, was measured by profit or loss in the business. Fourteen explanatory variables were identified as factors contributing to business success and failure. A main-effects model consisting of the fourteen explanatory variables and three interaction effects was fitted to the data. In order to account for the complexity of the survey design, survey logistic and generalized linear mixed models were refitted to the same variables as in the main-effects model. To confirm the results from these models, we used multiple correspondence analysis.
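A minimal illustration of the kind of binary-response model described above is a logistic GLM fitted with statsmodels; the data file and the two covariates shown are hypothetical stand-ins for the survey's fourteen explanatory variables.

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical extract of the 2010 business survey: one row per enterprise,
    # with success = 1 if the business made a profit and 0 if it made a loss.
    df = pd.read_csv("business_survey_2010.csv")

    model = smf.glm(
        "success ~ owner_education + C(sector)",   # placeholder covariates
        data=df,
        family=sm.families.Binomial(),
    ).fit()
    print(model.summary())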
Item An analysis of algorithms to estimate the characteristics of the underlying population in Massively Parallel Pyrosequencing data. (2011) Ragalo, Anisa.; Murrell, Hugh Crozier. Massively Parallel Pyrosequencing (MPP) is a next generation DNA sequencing technique that is becoming ubiquitous because it is considerably faster and cheaper than long established sequencing techniques like Sanger sequencing, and produces a higher throughput. The MPP methodology is also much less labour intensive than Sanger sequencing. Indeed, MPP has become a preferred technology in experiments that seek to determine the distinctive genetic variation present in homologous genomic regions. However, a problem arises in the interpretation of the reads derived from an MPP experiment: MPP reads are characteristically error prone. This means that it becomes difficult to separate the authentic genomic variation underlying a set of MPP reads from variation that is a consequence of sequencing error. The difficulty of inferring authentic variation is further compounded by the fact that MPP reads are also characteristically short. As a consequence, the correct alignment of an MPP read with respect to the genomic region from which it was derived may not be intuitive. To this end, several computational algorithms that seek to correctly align MPP reads and to remove the non-authentic genetic variation from them have been proposed in the literature. We refer to the removal of non-authentic variation from a set of MPP reads as error correction. Computational algorithms that process MPP data are classified as sequence-space algorithms and flow-space algorithms. Sequence-space algorithms work with MPP sequencing reads as raw data, whereas flow-space algorithms work with MPP flowgrams as raw data. A flowgram is an intermediate product of MPP, which is subsequently converted into a sequencing read. In theory, flow-space computations should produce more accurate results than sequence-space computations. In this thesis, we make a qualitative comparison of the distinct solutions delivered by selected MPP read alignment algorithms. Further, we make a qualitative comparison of the distinct solutions delivered by selected MPP error correction algorithms. Our comparisons between different algorithms occupying the same niche are facilitated by the design of a platform for MPP simulation, PyroSim. PyroSim is designed to encapsulate the error rate that is characteristic of MPP. We implement a selection of sequence-space and flow-space alignment algorithms in a software package, MPPAlign, and derive a quality ranking for the distinct algorithms implemented in MPPAlign through a series of qualitative comparisons. Further, we implement a selection of sequence-space and flow-space error correction algorithms in a software package, MPPErrorCorrect, and similarly derive a quality ranking for the distinct algorithms implemented in MPPErrorCorrect. Contrary to the view expressed in the literature, which postulates that flow-space computations are more accurate than sequence-space computations, we find that in general the sequence-space algorithms that we implement outperform the flow-space algorithms. We surmise that flow-space is a more sensitive domain for conducting computations and can only yield consistently good results under stringent quality control measures. In sequence-space, however, we find that base calling, the process that converts flowgrams (flow-space raw data) into sequencing reads (sequence-space raw data), leads to more reliable computations.

Item An analysis of approaches for developing national health information systems: a case study of two sub-Saharan African countries. (2016) Mudaly, Thinasagree.; Moodley, D.; Pillay, Anban Woolaganathan.; Seebregts, Christopher. Health information systems in sub-Saharan African countries are currently characterized by significant fragmentation, duplication and limited interoperability. Incorporating these disparate systems into a coherent national health information system (NHIS) has the potential to improve operational efficiencies, decision-making and planning across the health sector. In a recent study, Coiera analysed several mature national health information systems in high-income countries and categorised the approaches for building them into a topology: top-down, bottom-up or middle-out. Coiera gave compelling arguments for countries to adopt a middle-out approach. Building national health information systems in sub-Saharan African countries poses unique and complex challenges due to the substantial differences between the socio-economic, political and health landscapes of these countries and those of high-income countries. Coiera's analysis did not consider the unique challenges faced by sub-Saharan African countries in building their systems. Furthermore, there is currently no framework for analysing high-level approaches for building an NHIS. This makes it difficult to establish the benefits and applicability of Coiera's analysis for building NHIS in sub-Saharan African countries. The aim of this research was to develop and apply such a framework to determine which approach in Coiera's topology, if any, showed signs of being the most sustainable approach for building effective national health information systems in sub-Saharan African countries. The framework was developed through a literature analysis and validated by applying it in case studies of the development of national health information systems in South Africa and Rwanda.
The result of applying the framework to the case studies was a synthesis of the current evolution of these systems, and an assessment of how well each approach in Coiera's topology supports key considerations for building them in typical sub-Saharan African countries. The study highlights the value of the framework for analysing sub-Saharan African countries in terms of Coiera's topology, and concludes that, given the peculiar nature and evolution of national health information systems in sub-Saharan African countries, a middle-out approach can contribute significantly to building effective and sustainable systems in these countries, although its application in sub-Saharan African countries will differ significantly from its application in high-income countries.

Item Analysis of cultural and ideological values transmitted by university websites. (2003) Ramakatane, Mamosa Grace.; Clarke, Patricia Ann. With the advent of globalisation and new communication technologies, it was inevitable that educational institutions would follow the advertising trend of establishing websites to market their services. This paper analyses the cultural and ideological values transmitted by such university websites. Particular focus is on issues around gender, sexual orientation, race, religion and socioeconomic status. The aim is to analyse consumer reaction to Internet messages conveyed in websites from different cultures, compare them with the intentions of the producers, and relate all of these back to ideological factors. The study deconstructs the content and messages conveyed by university websites to assess the extent to which they might subscribe to particular ideologies (whether overt or covert). The argument that there are hidden ideologies in Web design does not imply that designers or producers intended any conspiracy or deception. Rather, the study compares the organisation's intended image and ethos with that which consumers perceive through their exposure to the website. The methodology was purposive sampling of participants, consulted through face-to-face and online interviews as well as email-distributed questionnaires. The study uses the websites of two universities in the KwaZulu-Natal region of South Africa.

Item Analysis of discrete time competing risks data with missing failure causes and cured subjects. (2023) Ndlovu, Bonginkosi Duncan.; Zewotir, Temesgen Tenaw.; Melesse, Sileshi Fanta. This thesis is motivated by the limitations of the existing discrete time competing risks models vis-à-vis the treatment of data that comes with missing failure causes or sizable proportions of cured subjects. The discrete time models that have been suggested to date (Davis and Lawrance, 1989; Tutz and Schmid, 2016; Ambrogi et al., 2009; Lee et al., 2018) are cause-specific-hazard denominated. Clearly, this fact summarily disqualifies these models from consideration if data come with missing failure causes. It is also a well documented fact that naive application of cause-specific hazards to data that have a sizable proportion of cured subjects may produce downward-biased estimates of these quantities. The existing models can be considered within the multiple imputation framework (Rubin, 1987) for handling missing failure causes, but the prospects of scaling them up to handle cured subjects are minimal, if not nil. In this thesis we address these issues concerning the treatment of missing failure causes and cured subjects in discrete time settings.
Towards that end, we focus on the mixture model (Larson and Dinse, 1985) and the vertical model (Nicolaie et al., 2010), because these models possess certain properties which dovetail with the objectives of this thesis. The mixture model has been upgraded into a model that can handle cured subjects. Nicolaie et al. (2015) have demonstrated that the vertical model can handle missing failure causes as is, and Nicolaie et al. (2018) have extended the vertical model to deal with cured subjects. Our strategy in this thesis is to exploit both the mixture model and the vertical model as a launching pad for advancing discrete time models that can handle data with missing failure causes or cured subjects.
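For orientation, the discrete-time mixture factorisation underlying the Larson-Dinse approach can be written as below. This is the generic form, with a cure fraction indicated as an extra mixture component; it is stated here as a standard reference point rather than quoted from the thesis.

    \[
    P(T = t, D = j \mid x) \;=\; \pi_j(x)\; p_j(t \mid x), \qquad j = 1, \dots, J,
    \]

where π_j(x) = P(D = j | x) is the probability of eventual failure from cause j (typically modelled by multinomial logistic regression) and p_j(t | x) = P(T = t | D = j, x) is the conditional discrete failure-time distribution for that cause. A cured group can be accommodated by an additional component π_0(x) = 1 - Σ_j π_j(x) of subjects who never fail.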
Item Analysis of longitudinal binary data: an application to a disease process. (2008) Ramroop, Shaun.; Mwambi, Henry Godwell. The analysis of longitudinal binary data can be undertaken using any of three families of models, namely marginal, random-effects and conditional models. Each family of models has its own merits and demerits. The models are applied in the analysis of binary longitudinal childhood disease data, namely Respiratory Syncytial Virus (RSV) data collected from a study in Kilifi, coastal Kenya. The marginal model was fitted using generalized estimating equations (GEE). The random-effects models were fitted using 'Proc GLIMMIX' and 'NLMIXED' in SAS, and then again in Genstat. Because the data are of a state-transition type with the Markovian property, the conditional model was used to capture the dependence of the current response on the previous response, which is known as the history. The data set has two main complicating issues. Firstly, there is the question of developing a stochastically based probability model for the disease process. In the current work we use direct likelihood and generalized linear modelling (GLM) approaches to estimate important disease parameters. The force of infection and the recovery rate are the key parameters of interest. The findings of the current work are consistent and in agreement with those in White et al. (2003). The time dependence of the RSV disease process is also highlighted in the thesis by fitting monthly piecewise models for both parameters. Secondly, there is the issue of incomplete data in the analysis of longitudinal data. Commonly used methods to analyse incomplete longitudinal data include the well known available case (AC) analysis and last observation carried forward (LOCF). However, these methods rely on strong assumptions, such as missing completely at random (MCAR) for AC analysis and an unchanging profile after dropout for LOCF analysis. Such assumptions are too strong to hold in general. In recent years, methods of analysing incomplete longitudinal data with weaker assumptions, such as missing at random (MAR), have become available. We therefore make use of multiple imputation via chained equations, which requires the MAR assumption, and of maximum likelihood methods, for which the missing data mechanism becomes ignorable as soon as it is MAR. We are thus faced with the problem of incomplete repeated non-normal data, suggesting the use of at least the generalized linear mixed model (GLMM) to account for natural individual heterogeneity. The comparison of the parameter estimates obtained using the different methods of handling dropout is strongly emphasized, in order to evaluate the advantages of the different methods and approaches. The survival analysis approach was also utilized to model the data, due to the presence of multiple events per subject and the time between these events.
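As an illustration of the marginal (GEE) analysis mentioned above, the sketch below fits a binary GEE with an exchangeable working correlation using statsmodels. The data file and covariates are hypothetical, and the original analyses were carried out in SAS and Genstat rather than Python.

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical long-format data: one row per child per visit,
    # with infected = 1 if RSV infection was observed at that visit.
    df = pd.read_csv("rsv_longitudinal.csv")

    gee = smf.gee(
        "infected ~ age_months + C(season)",      # placeholder covariates
        groups="child_id",                        # repeated measures per child
        data=df,
        family=sm.families.Binomial(),
        cov_struct=sm.cov_struct.Exchangeable(),  # working correlation structure
    ).fit()
    print(gee.summary())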
Item Analysis of mixed convection in an air filled square cavity. (2010) Ducasse, Deborah S.; Sibanda, Precious. A steady-state two-dimensional mixed convection problem in an air-filled square unit cavity has been numerically investigated. Two different cases of heating are investigated and compared. In the first case, the bottom wall was uniformly heated, the side walls were linearly heated and the top moving wall was heated sinusoidally. The second case differed from the first in that the side walls were instead uniformly cooled. This investigation is an extension of the work by Basak et al. [6, 7], who investigated mixed convection in a square cavity with similar boundary conditions to the cases listed above, with the exception of the top wall, which was well insulated. In this dissertation, their work is extended to include a sinusoidally heated top wall. The nonlinear coupled equations are solved using the penalty Galerkin finite element method. Stream function and isotherm results are found for various values of the Reynolds number and the Grashof number. The strength of the circulation is seen to increase with increasing Grashof number and to decrease with increasing Reynolds number for both cases of heating. A comparison is made between the stream function and isotherm results for the two cases. The results for the rate of heat transfer in terms of the Nusselt number are discussed. Both local and average Nusselt number results are presented and discussed, the average Nusselt number being found using Simpson's 1/3rd rule. The rate of heat transfer is found to be higher at all four walls for the case of cooled side walls than for that of linearly heated side walls.

Item Analysis of models arising from heat conduction through fins using Lie symmetries and Tanh method. (2021) Bulunga, Vusi Andile.; Mhlongo, Mfanafikile Don. Abstract available in PDF.

Item Analysis of multiple control strategies for pre-exposure prophylaxis and post-infection interventions on HIV infection. (2016) Afassinou, Komi.; Chirove, Faraimunashe.; Govinder, Keshlan Sathasiva. Abstract available in PDF file.

Item Analysis of nonlinear Benjamin equation posed on the real line. (2022) Aluko, Olabisi Babatope.; Paramasur, Nabendra.; Shindin, Sergey Konstantinovich. The thesis contains a comprehensive theoretical and numerical study of the nonlinear Benjamin equation posed on the real line. We explore well-posedness of the problem in weighted settings and provide a detailed study of the existence, regularity and orbital stability of traveling wave solutions. Further, we present a comprehensive study of the Malmquist-Takenaka-Christov (MTC) computational basis and employ it for the numerical treatment of the nonstationary and stationary Benjamin equations.

Item Analysis of shear-free spherically symmetric charged relativistic fluids. (2011) Kweyama, Mandlenkosi Christopher.; Maharaj, Sunil Dutt.; Govinder, Keshlan Sathasiva. We study the evolution of shear-free spherically symmetric charged fluids in general relativity. This requires the analysis of the coupled Einstein-Maxwell system of equations. Within this framework, the master field equation to be integrated is y_xx = f(x)y^2 + g(x)y^3. We undertake a comprehensive study of this equation using a variety of approaches. Initially, we find a first integral using elementary techniques (subject to integrability conditions on the arbitrary functions f(x) and g(x)). As a result, we are able to generate a class of new solutions containing, as special cases, the models of Maharaj et al. (1996), Stephani (1983) and Srivastava (1987). The integrability conditions on f(x) and g(x) are investigated in detail for the purposes of reduction to quadratures in terms of elliptic integrals. We also obtain a Noether first integral by performing a Noether symmetry analysis of the master field equation. This provides a partial group-theoretic basis for the first integral found earlier. In addition, a comprehensive Lie symmetry analysis is performed on the field equation. Here we show that the first integral approach (and hence the Noether approach) is limited: more general results are possible when the full Lie theory is used. We transform the field equation to an autonomous equation and investigate the conditions for it to be reduced to quadrature. For each case we recover particular results that were found previously for neutral fluids. Finally we show (for the first time) that the pivotal equation, governing the existence of a Lie symmetry, is actually a fifth order purely differential equation, the solution of which generates solutions to the master field equation.
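The master field equation quoted above, together with the standard Lie point symmetry requirement used in such an analysis, can be written as follows. The prolongation condition is the usual one from Lie group analysis and is stated here for orientation rather than quoted from the thesis.

    \[
    y_{xx} = f(x)\,y^{2} + g(x)\,y^{3},
    \qquad
    X = \xi(x,y)\,\partial_x + \eta(x,y)\,\partial_y ,
    \]

where X generates a Lie point symmetry of the equation provided its second prolongation annihilates the equation on solutions:

    \[
    X^{[2]}\bigl(y_{xx} - f(x)\,y^{2} - g(x)\,y^{3}\bigr)\Big|_{y_{xx} = f(x)y^{2} + g(x)y^{3}} = 0 .
    \]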