Computer Science
Permanent URI for this community: https://hdl.handle.net/10413/6769
Browsing Computer Science by Issue Date
Now showing 1 - 20 of 124
Item Planarity testing and embedding algorithms. (1990) Carson, D. I.; Oellermann, Ortrud Ruth.
This thesis deals with several aspects of planar graphs and some of the problems associated with non-planar graphs. Chapter 1 introduces the fundamental notation and tools used in the remainder of the thesis. Graphs serve as useful models of electronic circuits, and it is often of interest to know whether a given circuit has a layout in the plane so that no two wires cross. In Chapter 2, three efficient algorithms are described for determining whether a given 2-connected graph (which may model such a circuit) is planar. The first uses a path-addition approach; although efficient, it does not have linear complexity. The second has linear complexity and uses a recursive fragment-addition technique. The third also has linear complexity and relies on a relatively new data structure, the PQ-tree, which has several important applications to planar graphs; this algorithm uses a vertex-addition technique. Chapter 3 develops the idea of modelling an electronic circuit as a graph further. Knowing that a given circuit may be placed in the plane with no wires crossing is often insufficient: some circuits have in excess of 100 000 nodes, so obtaining a description of such a layout is important. Chapter 3 studies two algorithms for obtaining such a description, both of which rely on the PQ-tree data structure. The first determines a rotational embedding of a 2-connected graph; given such an embedding, the second determines whether a convex drawing of the graph is possible and, if so, outputs one. Chapter 4 concerns graphs that have failed a planarity test of Chapter 2. This is of particular importance, since complex electronic circuits often do not admit a layout in the plane. Three approaches to the problem of a circuit modelled by a non-planar graph are studied, all of which use the PQ-tree data structure: an algorithm for finding an upper bound on the thickness of a graph, an algorithm for determining the subgraphs of a non-planar graph which are subdivisions of the Kuratowski graphs K5 and K3,3, and a new algorithm for finding an upper bound on the genus of a non-planar graph.
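Before running a full planarity test, one can apply a cheap necessary condition that follows from Euler's formula: a simple planar graph on n >= 3 vertices has at most 3n - 6 edges. The Python sketch below shows only this pre-check; it is a minimal illustration and not one of the three algorithms described above.

```python
def may_be_planar(num_vertices: int, num_edges: int) -> bool:
    """Necessary (not sufficient) planarity condition from Euler's formula:
    a simple planar graph with n >= 3 vertices has at most 3n - 6 edges.
    (For bipartite graphs the tighter bound is 2n - 4.)"""
    if num_vertices < 3:
        return True
    return num_edges <= 3 * num_vertices - 6

# K5 is rejected outright: 10 > 3*5 - 6 = 9.
assert not may_be_planar(5, 10)
# K3,3 passes this coarse test (9 <= 12) even though it is non-planar,
# which is why full algorithms such as PQ-tree vertex addition are needed.
assert may_be_planar(6, 9)
```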
Item A practical investigation of meteor-burst communications. (1991) Melville, Stuart William.; Sartori-Angus, Alan G.; Oellermann, Ortrud Ruth.
This study considers the meteor-burst communication (MBC) environment at three levels. At the lowest level, the trails themselves are studied and analysed. Individual links are then studied to determine the data throughput and wait time that might be expected at various data rates. Finally, at the top level, MBC networks are studied to provide information on the effects of routing strategies, topologies and connectivity in such networks. A significant amount of theoretical work has been done on the classification of meteor trails and the analysis of the throughput potential of the channel, while the issues of wait time on MBC links and of MBC network strategies have been largely ignored. The work presented here is based on data captured on actual monitoring links, and is intended to provide both an observational comparison to theoretical predictions in the well-researched areas and a source of base information for the others. Chapter 1 gives an overview of the field of meteor-burst communications; prior work, the advantages and disadvantages of the channel, and current application areas are discussed. Chapter 2 describes work done on the classification of observed meteor trails into distinctive 'families'. The rule-based system designed for this task is discussed, as is the eventual classification schema produced, which is far more comprehensive and consistent than previously proposed schemas. Chapter 3 deals with the throughput potential of the channel, based on the observed trails. A comparison to predicted results, for both fixed and adaptive data rates, is made, with some notable differences between predicted and observed results highlighted. The trail families with the largest contribution to the throughput capacity of the channel are identified. Chapter 4 deals with wait time in meteor-burst communications. The data rates at which wait time is minimised on the links used are found and compared to the rates at which throughput was optimised; these are found to be very different, as are the contributions of the various trail families at these rates. Chapter 5 describes a software system designed to analyse the effect of routing strategies in MBC networks, and presents initial results derived from this system. Certain features of the channel, in particular its sporadic nature, are shown to have significant effects on network performance. Chapter 6 continues the presentation of network results, concentrating on the effect of topologies and connectivity within MBC networks. Chapter 7 concludes the thesis, highlighting suggested areas for further research and summarising the more important results.

Item Packing problems on a PC. (1991) Deighton, Andrew George.; Meyerowitz, Jane Julia.
Bin packing is a problem with many applications in various industries. This thesis addresses a specific instance of this problem, known as the Container Packing problem. Special attention is paid to the Pallet Loading problem, a restricted sub-problem of the general Container Packing problem. Since the Bin Packing problem is NP-complete, it is customary to apply a heuristic in order to approximate solutions in a reasonable amount of computation time, rather than to attempt to produce optimal results with an exact algorithm. Several heuristics are examined for the problems under consideration, and the results produced by each are shown and compared where relevant.
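The abstract does not name the heuristics the thesis compares, but first-fit decreasing (FFD) is a classic bin-packing heuristic of exactly this flavour and makes the idea concrete. The sketch below is illustrative, not the thesis's implementation:

```python
def first_fit_decreasing(items: list[float], capacity: float) -> list[list[float]]:
    """First-fit decreasing: sort items largest-first, place each into the
    first bin with enough remaining space, opening a new bin if none fits."""
    bins: list[list[float]] = []
    space: list[float] = []          # remaining capacity per bin
    for item in sorted(items, reverse=True):
        for i, free in enumerate(space):
            if item <= free:
                bins[i].append(item)
                space[i] -= item
                break
        else:                        # no existing bin fits: open a new one
            bins.append([item])
            space.append(capacity - item)
    return bins
```

Sorting largest-first avoids stranding big items until last; FFD is known to use no more than roughly 11/9 times the optimal number of bins asymptotically, which is why it is a standard baseline for comparisons of this kind.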
Item Built-in tests for a real-time embedded system. (1991) Olander, Peter Andrew.
Beneath the facade of the applications code of a well-designed real-time embedded system lies intrinsic firmware that provides a fast and effective means of detecting and diagnosing inevitable hardware failures. Such failures can compromise the availability of a system, so the source of a malfunction must be identified. It is shown that the number of possible origins of all manner of failures is immense; as a result, fault models are contrived to encompass prevalent hardware faults. The complexity is further reduced by determining syndromes for particular circuitry and applying test vectors at a functional-block level. Testing phases and philosophies, together with standardisation policies, are defined to ensure that system designers comply with the underlying principles of evaluating system integrity. The three testing phases (power-on self tests at system start-up, on-line health monitoring and off-line diagnostics) are designed to ensure that the inherent test firmware remains inconspicuous during normal operation; the code becomes prominent only on the detection or diagnosis of a hardware failure. The validity of the theoretical models, standardisation policies and built-in test philosophies is illustrated by applying them to an intricate real-time system, and the architecture and software design implementing these ideas are described extensively. Standardisation policies, enhanced by the proposition of generic tests for common core components, are advocated at all hierarchical levels. The presentation of the integration of the hardware and software is aimed at portraying the moderately complex nature of the task of generating a set of built-in tests for a real-time embedded system. In spite of generic policies, the intricacies of the architecture are found to have a direct influence on software design decisions. It is thus concluded that the diagnostic objectives of the user requirements specification should be lucidly expressed by both operational and maintenance personnel for all testing phases. Disparity may exist between the system designer and the end user in their understanding of the requirements specification defining the objectives of the diagnosis, so complete collaboration between the two parties is essential throughout the development life cycle, especially during the preliminary design phase. Thereafter, the designer is able to decide on the sophistication of the system's testing capabilities.
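To give a concrete flavour of a power-on self test: the sketch below is a classic walking-ones RAM test. It is written in Python purely for illustration (firmware of this era would use C or assembler), and the thesis's actual test set is not detailed in this abstract:

```python
def walking_ones_test(mem: bytearray) -> int | None:
    """Walking-ones power-on RAM test (illustrative): write each single-bit
    pattern to every cell in the region, read it all back, and return the
    first faulty address, or None if the region passes."""
    for bit in range(8):
        pattern = 1 << bit
        for addr in range(len(mem)):
            mem[addr] = pattern
        for addr in range(len(mem)):
            if mem[addr] != pattern:
                return addr
    return None
```

Walking a single 1 across every bit position exposes stuck-at data bits; production test suites add address-line and checkerboard patterns to catch coupling and decoding faults as well.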
Item Application of backpropagation-like generative algorithms to various problems. (1992) Powell, Alan Roy.; Sartori-Angus, Alan G.
Artificial neural networks (ANNs) were originally inspired by networks of biological neurons and the interactions present in such networks. The recent revival of interest in ANNs has again focused attention on their apparent ability to solve difficult problems, such as machine vision, in novel ways. There are many types of ANNs, differing in architecture and learning algorithm, and the list grows annually. This study was restricted to feed-forward architectures and backpropagation-like (BP-like) learning algorithms. It is well known, however, that the learning problem for such networks is NP-complete. Generative and incremental learning algorithms, which have various advantages and to which the NP-completeness analysis used for BP-like networks may not apply, were therefore also studied. Various algorithms were investigated and their performance compared. Finally, the better algorithms were applied to a number of problems, including music composition, image binarization, and navigation and goal satisfaction in an artificial environment. These tasks were chosen to investigate different aspects of ANN behaviour. The results, where appropriate, were compared to those of non-ANN methods, and varied from poor to very encouraging.
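For readers unfamiliar with the baseline being compared against, here is a minimal backpropagation loop for a tiny feed-forward network on XOR. This is a sketch of the general BP family, not a reconstruction of any of the thesis's algorithms or tasks:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# XOR training set
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(scale=0.5, size=(2, 4)), np.zeros(4)  # input -> hidden
W2, b2 = rng.normal(scale=0.5, size=(4, 1)), np.zeros(1)  # hidden -> output

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                  # forward pass
    out = sigmoid(h @ W2 + b2)
    delta2 = (out - y) * out * (1 - out)      # output-layer error signal
    delta1 = (delta2 @ W2.T) * h * (1 - h)    # backpropagated to hidden layer
    W2 -= 0.5 * h.T @ delta2; b2 -= 0.5 * delta2.sum(0)   # gradient step
    W1 -= 0.5 * X.T @ delta1; b1 -= 0.5 * delta1.sum(0)
```

After training, the forward pass typically rounds to the XOR targets [0, 1, 1, 0]. Generative algorithms of the kind the thesis studies would instead grow the hidden layer during training rather than fix it at four units in advance.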
Item Speech recognition and blackboard expert systems. (1992) Loureiro, Guy Marchand.; Sartori-Angus, Alan G.
Spoken language is used by people to communicate naturally with one another. A simplistic view of the communication process is as follows: person A wishes to communicate an idea to person B. The idea, initiated in the mind of person A, is encoded into speech signals by person A's speech production mechanism, the vocal apparatus in the vocal tract. Various kinds of noise may interfere with the speech signals as they travel to person B. The resulting signal is captured by person B's speech receiving mechanism, the ear, and is then analysed and decoded into a meaningful message by person B's brain. This thesis investigates, and attempts to automate, the receiving and decoding of English sentences using a machine; that is, to perform the task of person B in the above scenario using a computer. The aim is not only to produce a sequence of phonetic sounds, but to examine the problems of building, in the 'mind of the machine', a picture of the meanings, intentions, absurdities and realities of the spoken message. The various models, algorithms and techniques of speech recognition and speech understanding systems are examined. Speech signals are captured and digitised by hardware; the digital samples are analysed, and the important distinguishing features of all speech sounds are identified and then used to classify speech sounds in subsequent spoken words. The way speech sounds are joined together to form syllables and words introduces difficult problems into the automatic recognition process: sounds are blurred, overlapped or left out owing to the effects of coarticulation. Finally, natural language processing issues, such as the importance of syntax (the structure) and semantics (the meaning) of sentences, are studied. A system to control and unite all of the above processing is considered: the blackboard expert system model of the widely reported HEARSAY-II speech recognition system is reviewed as the system with the best potential for these tasks.

Item Possible Models Diagrams - a new approach to teaching propositional logic. (1994) Clarke, Matthew Charles.; Dempster, Robert.; Grayson, Diane J.
Abstract available in PDF.

Item Modelling with Mathematica. (1994) Murrell, Hugh Crozier.; Sartori-Angus, Alan G.
In this thesis a number of mathematical models are investigated with the aid of the modelling package Mathematica. Some of the models are mechanical in nature, and some are laboratories constructed to assist researchers in a particular field. The early sections of the thesis investigate mechanical models: after the equations of motion for a model have been presented, Mathematica is employed to generate solutions, which are then used to drive animations of the model in which each frame is a graphical snapshot of the model in motion. Mathematica proves to be an ideal tool for this type of modelling, since it combines algebraic, numeric and graphics capabilities on one platform. In the later sections of the thesis, Mathematica laboratories are created for investigating models in two different fields. The first laboratory is a collection of routines for performing phase-plane analysis of planar autonomous systems of ordinary differential equations; a model of a mathematical concept called a bifurcation is investigated, and an animation of this mathematical event is produced. The second laboratory is intended to help researchers in the tomography field: a standard filtered back-projection algorithm for reconstructing images from their projections is implemented. The final section of the thesis indicates how the tomography laboratory could be used, employing wavelet theory to construct a new filter for filtered back-projection tomography.
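The thesis builds its phase-plane laboratory in Mathematica. As a rough illustration of what such a routine does, here is a minimal Python equivalent that draws the direction field of a planar autonomous system (a damped pendulum, chosen arbitrarily here, not taken from the thesis) and overlays one trajectory:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

# Planar autonomous system: x' = y, y' = -sin(x) - 0.25*y (damped pendulum).
def f(t, s):
    x, y = s
    return [y, -np.sin(x) - 0.25 * y]

# Direction field sampled on a grid
xs, ys = np.meshgrid(np.linspace(-6, 6, 25), np.linspace(-3, 3, 25))
plt.quiver(xs, ys, ys, -np.sin(xs) - 0.25 * ys, angles="xy")

# One trajectory from an initial condition, overlaid on the field
sol = solve_ivp(f, (0, 30), [4.0, 0.0], dense_output=True)
t = np.linspace(0, 30, 1000)
plt.plot(*sol.sol(t), color="red")
plt.xlabel("x"); plt.ylabel("y"); plt.title("Phase plane")
plt.show()
```

Sweeping a parameter (for example, the damping coefficient) and watching the fixed points change character in such plots is exactly the kind of bifurcation study the laboratory described above supports.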
Item The investigation into an algorithm based on wavelet basis functions for the spatial and frequency decomposition of arbitrary signals. (1994) Goldstein, Hilton.; Sartori-Angus, Alan G.
The research was directed toward the viability of an O(n) algorithm which could decompose an arbitrary signal (sound, vibration, etc.) into its time-frequency space. The well-known Fourier Transform uses sine and cosine functions (having infinite support in t) as orthonormal basis functions to decompose a signal f(t) in the time domain into F(w) in the frequency domain, where the Fourier coefficients F(w) are the contributions of each frequency in the original signal. Owing to the non-local support of these basis functions, a signal containing a sharp localised transient does not have localised coefficients, but rather coefficients that decay slowly. A further problem is that the coefficients F(w) convey no time information. The windowed, or short-time, Fourier Transform attempts to resolve the latter, but has had limited success. Wavelets are basis functions, usually mutually orthonormal, that have finite support in t and are therefore spatially local. Using non-orthogonal wavelets, the Dominant Scale Transform (DST) designed by the author decomposes a signal into its approximate time-frequency space. The associated Dominant Scale Algorithm (DSA) has O(n) complexity and is integer-based; these two characteristics make the DSA extremely efficient. The thesis also investigates the problem of converting a music signal into its equivalent music score, and the old problem of speech recognition is examined as well. The results obtained from the DST are shown to be consistent with those of other authors who have utilised other methods. The resulting DST coefficients render the DST particularly useful in speech segmentation (silence regions, voiced speech regions, and frication). Moreover, the Spectrogram Dominant Scale Transform (SDST), formulated from the DST, is shown to approximate the Fourier coefficients over fixed time intervals within vowel regions of human speech.
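The DST itself is the author's own design and is not reproduced here. To illustrate the class of algorithm (finite-support basis functions, integer arithmetic, O(n) work), the following sketch performs an integer Haar wavelet decomposition, assuming the signal length is a power of two:

```python
def haar_step(signal: list[int]) -> tuple[list[int], list[int]]:
    """One level of an integer Haar decomposition: pairwise floor-averages
    (coarse approximation) and differences (local detail). This integer
    pair is losslessly invertible. Assumes even length."""
    approx = [(a + b) // 2 for a, b in zip(signal[0::2], signal[1::2])]
    detail = [a - b for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def haar_decompose(signal: list[int]) -> list[list[int]]:
    """Full multi-level decomposition; assumes len(signal) is a power of two."""
    levels = []
    while len(signal) > 1:
        signal, detail = haar_step(signal)
        levels.append(detail)
    levels.append(signal)   # final coarse coefficient
    return levels
```

Each level costs half the work of the previous one, so the total is n + n/2 + n/4 + ... < 2n operations: the same O(n), integer-only profile claimed above for the DSA.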
Item Neural network assisted software engineered refractive fringe diagnostic of spherical shocks. (1996) Kistan, Trevor.; Sartori-Angus, Alan G.
A shock is essentially a propagating variation in the pressure or density of a medium. If the medium is transparent, such as air, and the shock radially symmetric, the refractive fringe diagnostic can be used to examine its general features. A laser beam probes the shock; the central part of the beam, refracted to different degrees by the different density features within the shock, interferes with itself and with the outer unrefracted part, creating a series of coarse and fine fringes. By examining this interference pattern one can gain insight into the density profile underlying the shock. A series of such experiments was conducted by the Plasma Physics Research Institute at the University of Natal in 1990. To model the situation computationally, the institute developed a ray-tracer which produced interference patterns for modified theoretical density profiles based on those predicted by Sedov. After numerous trials, an intensity pattern was produced which agreed approximately with experimental observations. Thus encouraged, the institute sought to determine density profiles directly from the interference patterns, but a true mathematical deconvolution proved non-trivial and is still awaited. The work presented in this thesis reconstructs the ray-tracer using software engineering techniques and achieves the desired deconvolution by training a neural network of the back-propagation type to behave as an inverse ray-tracer. The ray-tracer is first used to generate numerous density profile and interference pattern pairs; the neural network is trained on this theoretical data to produce a density profile when presented with an interference pattern. The trained network is then tested with experimental interference patterns extracted from captured images. The density profiles predicted by the neural network are fed back to the ray-tracer, and the resulting theoretically determined interference patterns are compared to those obtained experimentally. The shock is examined at various times after the initial explosion, allowing its propagation to be tracked by its evolving density profile and interference pattern. The results obtained prove superior to those first published by the institute and confirm the neural network's promise as a research tool. Instead of lengthy trial-and-error sessions with the unaided ray-tracer, experimental interference patterns can be fed directly to an appropriately trained neural network to yield an initial density profile: the network, not the researcher, does the pattern association.

Item A computer mediated system for distance education. (1996) Pillay, Nelishia.; Dempster, Robert.
A problem currently facing South Africa is the large number of poorly educated or uneducated people in many parts of the country. Distance education has proven to be an apt solution to this problem. However, one of the numerous constraints associated with studying at a distance is insufficient communication between students and lecturers, and the lack of peer interaction. The integration of Computer Mediated Communications (CMC) in the delivery of distance education courses world-wide has proved to be a means of alleviating this communication problem. The study presented in this thesis examines the technical feasibility of implementing CMC in the delivery of South African distance education courses as a solution to the communication problems experienced by distance learners in this country. For this purpose a system was developed and implemented at a South African distance education institution, namely the Natal College of Education in Pietermaritzburg, and on the basis of this implementation the technical feasibility of CMC in the instruction of distance education courses within a South African infrastructure was evaluated. As a result of this study we have been able to:
• Determine the technical problems associated with the implementation of a CMC system in a South African distance education environment.
• Identify possible solutions to these technical problems.
• Define a set of criteria which, if met by a CMC system, would ensure the technical feasibility of the system as a solution to the communication problems experienced by South African distance learners.
• Determine the effects of students' attitudes towards computers on their use of the CMC system.
• Determine the effect of CMC on students' attitudes towards computers.
• Identify any additional factors, besides technical issues, which need to be taken into account when implementing a CMC system.

Item Application of genetic algorithms to the travelling salesperson problem. (1996) McKenzie, Peter John Campbell.; Petkov, Doncho.
Genetic Algorithms (GAs) can be applied to many different problems, since they make few assumptions about the application domain and perform relatively well; they can also be modified with some success to handle a particular problem. The travelling salesperson problem (TSP) is a famous NP-hard problem in combinatorial optimisation and, as a result, has no known polynomial-time solution. The aim of this dissertation is to investigate the application of a number of GAs to the TSP, and to compare their results with those of traditional solutions to the TSP and with the results of other applications of the GA to the TSP.
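A minimal sketch of one standard way to apply a GA to the TSP follows: permutation encoding, tournament selection, order crossover and swap mutation. The abstract does not say which GA variants the dissertation compares, so treat these operators as illustrative rather than as the dissertation's method:

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour under a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def order_crossover(p1, p2):
    """OX: copy a random slice from parent 1, fill the rest in parent-2 order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    fill = [c for c in p2 if c not in child[a:b]]
    for i in range(len(p1)):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def evolve(dist, pop_size=100, generations=500, mutation_rate=0.2):
    n = len(dist)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            # tournament selection: best of three random tours, twice
            p1 = min(random.sample(pop, 3), key=lambda t: tour_length(t, dist))
            p2 = min(random.sample(pop, 3), key=lambda t: tour_length(t, dist))
            child = order_crossover(p1, p2)
            if random.random() < mutation_rate:       # swap mutation
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            nxt.append(child)
        pop = nxt
    return min(pop, key=lambda t: tour_length(t, dist))
```

Plain one-point crossover would break the permutation property (duplicating some cities and dropping others), which is why TSP-specific GAs use order-preserving operators such as OX; this is exactly the kind of problem-specific modification the abstract alludes to.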
Item Artificial neural networks for image recognition : a study of feature extraction methods and an implementation for handwritten character recognition. (1996) Moodley, Deshendran.; Ram, Vevek.; Haines, Linda Margaret.
The use of computers for digital image recognition has become quite widespread. Applications include face recognition, handwriting interpretation and fingerprint analysis. A feature vector whose dimension is much lower than that of the original image data is used to represent the image; this removes redundancy from the data and drastically cuts the computational cost of the classification stage. The most important criterion for the extracted features is that they retain as much as possible of the discriminatory information present in the original data. Feature extraction methods which have been used with neural networks include moment invariants, Zernike moments, Fourier descriptors, Gabor filters and wavelets. These, together with the Neocognitron, which incorporates feature extraction within a neural network architecture, are described, and two methods, Zernike moments and the Neocognitron, are chosen to illustrate the role of feature extraction in image recognition.

Item A support environment for the teaching of programming. (1996) Stewart, Rosanne.
This thesis examines the effectiveness of a specially constructed computer-based support environment for the teaching of computer programming to novice programmers. In order to achieve this, the following distinct activities were pursued. Firstly, an in-depth investigation of programming misconceptions, and of techniques for overcoming them, was carried out. Secondly, the educational principles gained from this investigation were used to design and implement a computer-based environment to support novice programmers learning the Pascal language. Finally, several statistical methods were used to compare students who made use of the support environment with those who did not, and the results are discussed.

Item The development of a behavioural video analysis system and a toolbox prototype. (1996) Naidoo, Throshni.; Dempster, Robert.
Behavioural research studies often include a data acquisition process, during which subjects are observed and the displayed behaviours recorded, and a subsequent analysis process, during which the observed behaviours are analysed. The data acquisition process can occur in real time or at a later stage, by referring to a recording of the original session. This dissertation addresses the latter approach through the use of computer-based digital video technology. A Windows-based video analysis system, called VAS, that was developed to assist with the acquisition of observed behaviours from video, and with their analysis, is discussed, followed by a discussion of the implementation of VAS. Finally, directions for further research are identified and discussed.

Item The application of computer technology in South African distance education. (1996) Owusu-Sekyere, Charles.; Meyerowitz, Jane Julia.
The advent of on-line Computer-Assisted Instruction and Computer Mediated Communication may improve instruction and communication in distance education at South African universities. On-line Computer-Assisted Instruction in distance education makes the reinforcement of knowledge both systematic and immediate. With instructional media such as printed text, audio cassettes, and radio and television broadcasts, the student at a distance is an isolated and passive recipient of knowledge. On-line Computer-Assisted Instruction supported by Computer Mediated Communication for interaction and feedback could close the gaps in time and distance between the teacher and the student in distance education. The current networking capabilities of the computer make it possible for such a student to interact with peers and lecturers before, during and after instructional episodes. Computer Mediated Communication can facilitate the use of electronic messaging such as electronic mail, Internet Relay Chat, list servers, Multi-User Domains and bulletin board services for interaction and feedback. This thesis investigates whether instruction and communication at South African universities with a distance education option can be improved using on-line Computer-Assisted Instruction and Computer Mediated Communication respectively. The thesis also makes proposals for their implementation in South Africa by analysing the applications of computer technology in degree-awarding distance education institutions in some developed and developing countries that use on-line Computer-Assisted Instruction and Computer Mediated Communication.

Item On case representation and indexing in a case-based reasoning system for waste management. (1997) Wortmann, Karl Lyndon.; Petkov, Doncho.; Senior, Eric.
Case-Based Reasoning is a fairly new Artificial Intelligence technique which makes use of past experience as the basis for solving new problems. Typically, a case-based reasoning system stores actual past problems and solutions in memory as cases. Owing to its ability to reason from actual experience and to save solved problems, and thus learn automatically, case-based reasoning has been found applicable to domains for which techniques such as rule-based reasoning have traditionally not been well suited, such as experience-rich, unstructured domains. This applicability has led to it becoming a viable new artificial intelligence topic from both a research and an application perspective. This dissertation concentrates on researching and implementing indexing techniques for case-based reasoning. Case representation is researched as a prerequisite for implementing indexing techniques, and pre-transportation decision making for hazardous waste handling is used as the domain for applying and testing the techniques. The field of case-based reasoning was covered in general; case representation and indexing were researched in detail. A single case representation scheme was designed and implemented, and five indexing techniques were designed, implemented and tested. Their effectiveness is assessed in relation to each other and to other reasoners, and the implications of using them as the basis for a case-based reasoning intelligent decision support system for pre-transportation decision making for hazardous waste handling are briefly assessed.
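The dissertation's five indexing techniques are not enumerated in this abstract, but the simplest retrieval baseline any index is measured against is a flat nearest-neighbour scan over attribute-value cases. The sketch below uses invented waste-handling attributes purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Case:
    """An attribute-value case; the attributes here are hypothetical,
    not taken from the dissertation."""
    features: dict[str, float]   # e.g. {"toxicity": 0.8, "volume": 0.3}
    solution: str                # e.g. a handling / transport decision

def similarity(a: dict[str, float], b: dict[str, float],
               weights: dict[str, float]) -> float:
    """Weighted nearest-neighbour similarity over shared attributes."""
    keys = a.keys() & b.keys()
    if not keys:
        return 0.0
    score = sum(weights.get(k, 1.0) * (1.0 - abs(a[k] - b[k])) for k in keys)
    return score / sum(weights.get(k, 1.0) for k in keys)

def retrieve(case_base: list[Case], query: dict[str, float],
             weights: dict[str, float]) -> Case:
    """Flat retrieval: scan every stored case and return the most similar.
    Indexing schemes exist precisely to avoid this full scan."""
    return max(case_base, key=lambda c: similarity(c.features, query, weights))
```

Because this scan is linear in the size of the case base, and a case base grows automatically as new problems are solved, indexing is what keeps retrieval practical, which is why it is the focus of the dissertation.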
Item A real time, system independent, secure, Internet based auctioning system. (2000) Brown, Cuan.; Murrell, Hugh Crozier.; Swart, Hendrika Cornelia Scott.
This thesis outlines the creation of a secure, real-time, system-independent, Internet-based auctioning application. The system has been developed to meet today's stringent requirements for secure Internet-based applications, using the latest cryptographic algorithms and development platforms. The result is a Java-based server and client auctioning application. The client application is designed to run in any common web browser, and the server to execute on any Java-enabled operating system with a web server and an Internet connection. The real-time system uses a relatively secure hybrid cryptosystem for communication, employing RSA for secure key exchange and RC6 and MARS for secure communication.
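The hybrid pattern described (asymmetric key exchange protecting a fast symmetric session cipher) looks roughly like the sketch below. RC6 and MARS are not available in common Python crypto libraries, so AES stands in for the symmetric step here; this shows the pattern, not the thesis's Java code:

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Key exchange: client wraps a fresh session key under the server's RSA key.
server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
session_key = AESGCM.generate_key(bit_length=128)
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped = server_key.public_key().encrypt(session_key, oaep)

# Bulk traffic: fast symmetric encryption under the shared session key.
nonce = os.urandom(12)
bid = b"bid: 1500; item: 42"
ciphertext = AESGCM(session_key).encrypt(nonce, bid, None)

# Server side: unwrap the session key, then decrypt the traffic.
recovered = server_key.decrypt(wrapped, oaep)
assert AESGCM(recovered).decrypt(nonce, ciphertext, None) == bid
```

The design rationale is the usual one: RSA is too slow for bulk data but solves key distribution, while the symmetric cipher is fast but needs a shared secret; combining them gives both properties.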
Item The design and simulation of routing protocols for mobile ad hoc networks. (2000) Kabeto, Mieso Denko.; Goddard, Wayne.
This thesis addresses a novel type of network known as a mobile ad hoc network: a collection of entirely mobile nodes that can establish communication in the absence of any fixed infrastructure. Envisioned applications of these networks include virtual classrooms, emergency relief operations, military tactical communications, sensor networks and community networking. Mobile ad hoc networking poses several new challenges in the design of network protocols, and this thesis focuses on the routing problem. The main challenges in designing a routing protocol for mobile ad hoc networks arise because the nodes have limited resources and because frequent topological changes occur unpredictably; moreover, there is no fixed infrastructure to support routing. Conventional routing protocols are generally unsuitable for mobile ad hoc networks, as they cannot react quickly to the changing network topology, cause excessive communication and computation, or converge very slowly and create routing loops. This thesis proposes two classes of routing schemes for mobile ad hoc networks. The first class is known as the Limited Flooding Protocol. The protocol is fully reactive and does not require the computation of routing tables; it uses the basic principles of flooding, but reduces the communication overhead by restricting packet propagation through the network. Several variations of limited flooding are considered, including deterministic, randomised and priority-based mechanisms. The main advantage of this protocol is that it can be used in networks with unpredictable topological changes and highly mobile nodes, since no routing tables need be maintained at intermediate nodes. The second class of routing protocols is based on a hierarchical clustering architecture and is intended for use in relatively low-mobility environments. The basic idea of this protocol is to partition the entire network into smaller units known as clusters, and to define routing mechanisms both within and between clusters using a hierarchical architecture. The main advantage of this architecture is the reduction of routing-information storage, communication overhead and computational overhead at each node. Discrete-event simulation is used for modelling and performance evaluation. Various options and variations of the protocols are examined in the…[Page 2 of abstract is missing.]
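A minimal sketch of the flooding core that such a protocol restricts: each packet carries a hop budget, and nodes suppress duplicates, so a broadcast reaches only a bounded neighbourhood and no routing tables are built. The deterministic, randomised and priority-based variants described above would replace the simple hop-limit test with their own forwarding decision; the structure below is illustrative, not the thesis's protocol:

```python
from collections import deque

def limited_flood(adjacency: dict[int, list[int]], source: int,
                  max_hops: int) -> set[int]:
    """Flood a packet from `source`, forwarding it at most `max_hops` times.
    `seen` plays the role of per-node duplicate suppression."""
    seen = {source}
    frontier = deque([(source, 0)])
    while frontier:
        node, hops = frontier.popleft()
        if hops == max_hops:
            continue                      # hop budget exhausted: drop, don't forward
        for neighbour in adjacency[node]:
            if neighbour not in seen:     # duplicate suppression
                seen.add(neighbour)
                frontier.append((neighbour, hops + 1))
    return seen                           # nodes the packet reached
```

Restricting the hop budget (or forwarding only probabilistically) trades delivery ratio against the broadcast storm that unrestricted flooding would cause, which is the overhead reduction the protocol aims at.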
Item Computer assisted education : design, development and evaluation. (2001) Murrell, Katherine Ann.; Meyerowitz, Jane Julia.
Educational institutions throughout the world increasingly face classes of educationally, culturally and linguistically diverse student groups. At the same time, economic constraints require these institutions to expand their student base, so they are looking to distance education and continuing education modules to meet these challenges. Simultaneously, rapid advances in desktop computing capabilities and Internet-delivered information have revived interest in Computer Assisted Education (CAE). The University of Natal is no exception to these trends: schools, departments and individual members of staff are increasingly exploring the possibility of using the University's computer infrastructure to help deliver quality education, maintain current standards and address the multiple needs of students. To investigate these issues, a CAE program was developed for use in the Nelson R. Mandela School of Medicine, in order to investigate how students would make use of the technology and to report on the development and evaluation processes of such a project, so that lessons could be learnt to inform the further development of such software at the University. To support the development of the CAE program, an extensive literature survey of current educational theory was conducted. Its objectives were to explore and understand the factors affecting the development and use of computer-based systems as an educational tool. Particular aspects considered were:
• the debate between constructivist and instructivist theory, as each applies to both the medium and the subject material;
• instructional styles, and with them the learning styles, that could be used to support the educational goals of the diverse student population;
• instructional design methodologies currently in use, as well as media production methodologies, with the goal of informing both the development of the case study and a broader understanding of the methodology that could be used for other developments (this phase of the research includes methods and criteria for the selection of authoring systems, and interface design issues in a multi-cultural, multi-lingual environment);
• the review of different evaluation strategies, in order to incorporate appropriate evaluation in the CAE case study;
• the investigation of broader sociological and historical factors that may influence the way in which CAE can be used effectively in a South African context.
The presumption was that students from historically disadvantaged backgrounds, and those with English as a second language, would be less willing to use technological interventions than those who were more likely to have had access to computers earlier in their education. The case study set out to investigate whether this presumption was valid and, if so, what elements of design and delivery could facilitate these students' use of such systems. The presumption was not borne out by the case study, which showed the exact opposite of expectations: the historically disadvantaged students were the more willing to use the module.