
Recent publications in Information Technology

Adolescents with autism spectrum disorder (ASD) face challenges in forming positive friendships. This study developed a social networking platform based on the needs of a small group of adolescents with ASD and their parents/carers and examined what potential benefits such a system could provide. We conducted seven co-design workshops with six adolescents with ASD over eight months. The team exchanged ideas and communicated through group discussions and drawings. The findings suggest that: (1) participants demonstrated self-advocacy skills through an iterative co-design process; (2) a safe and familiar environment encourages active participation from adolescents with ASD as co-designers; and (3) parents, community groups and fellow participants play a pivotal role in engaging adolescents with ASD on a social network.
This paper considers the automatic labeling of emotions in face images found on social media. Facial landmarks are commonly used to classify the emotion in a face image. However, it is difficult to accurately segment landmarks for some faces and for subtle emotions. Previous authors used a Gaussian prior to refine the landmarks, but their model often gets stuck in a local minimum. Instead, this paper proposes calibrating the landmarks with respect to the known emotion class label using principal component analysis (PCA). Next, the face image is generated from the landmarks using an image translation model. The proposed model is evaluated on the classification of facial expressions and also on underwater fish identification, and outperforms baselines in accuracy by over 20%.
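As a rough illustration of class-conditioned landmark calibration (not the authors' implementation; the landmark count, component count and data are assumptions), one could fit a PCA model per emotion class and project noisy landmarks onto the subspace of their known class:

```python
# Illustrative sketch: per-class PCA calibration of 68-point facial landmarks.
# A noisy landmark vector is projected onto the principal subspace of its known
# emotion class and reconstructed, pulling it towards the class-typical shape.
import numpy as np
from sklearn.decomposition import PCA

N_LANDMARKS = 68  # assumed landmark count

def fit_class_pca(landmarks_by_class, n_components=10):
    """Fit one PCA model per emotion class.

    landmarks_by_class: dict mapping class label -> array of shape
    (n_samples, 2 * N_LANDMARKS) of flattened (x, y) landmark coordinates.
    """
    return {label: PCA(n_components=n_components).fit(X)
            for label, X in landmarks_by_class.items()}

def calibrate(landmarks, label, class_pca):
    """Project noisy landmarks onto the subspace of their emotion class."""
    pca = class_pca[label]
    z = pca.transform(landmarks.reshape(1, -1))
    return pca.inverse_transform(z).reshape(-1, 2)

# Toy usage with random data standing in for annotated training landmarks.
rng = np.random.default_rng(0)
data = {"happy": rng.normal(size=(200, 2 * N_LANDMARKS)),
        "sad": rng.normal(size=(200, 2 * N_LANDMARKS))}
class_pca = fit_class_pca(data)
noisy = rng.normal(size=2 * N_LANDMARKS)
calibrated = calibrate(noisy, "happy", class_pca)
print(calibrated.shape)  # (68, 2)
```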
This paper proposes a realistic agent-based framework for crowd simulations that encompasses the input phase, the simulation process phase, and the output evaluation phase. To achieve this, three types of real-world data (physical, mental and visual) need to be considered. However, existing research has not used all three data types to develop an agent-based framework, since current data gathering methods are unable to collect all three types. This paper introduces a new hybrid data gathering approach that combines virtual reality and questionnaires to gather all three data types. The collected data are incorporated into the simulation model to provide realism and flexibility. The performance of the framework is evaluated and benchmarked to demonstrate its robustness and effectiveness. Various settings (self-set parameters and random parameters) are simulated to demonstrate that the framework can produce realistic, real-world-like simulations.
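For readers unfamiliar with agent-based crowd models, the following minimal sketch shows the kind of per-step agent update such frameworks build on (goal attraction plus neighbour repulsion); the parameters and update rule are illustrative assumptions, not the paper's framework.

```python
# Minimal illustrative agent-based crowd step: each agent moves towards its
# goal and is repelled by nearby agents.
import numpy as np

def step(positions, goals, dt=0.1, speed=1.3, repel_radius=0.5, repel_gain=2.0):
    """Advance all agents by one time step. positions, goals: (n_agents, 2)."""
    to_goal = goals - positions
    dist = np.linalg.norm(to_goal, axis=1, keepdims=True) + 1e-9
    velocity = speed * to_goal / dist  # desired velocity towards the goal

    # Pairwise repulsion from agents closer than repel_radius.
    diff = positions[:, None, :] - positions[None, :, :]
    d = np.linalg.norm(diff, axis=2) + 1e-9
    np.fill_diagonal(d, np.inf)
    push = repel_gain * (diff / d[..., None]) * (d[..., None] < repel_radius)
    velocity += push.sum(axis=1)

    return positions + dt * velocity

# Toy usage: 50 agents walking towards a doorway at (10, 0).
rng = np.random.default_rng(1)
pos = rng.uniform(0, 5, size=(50, 2))
goal = np.tile([10.0, 0.0], (50, 1))
for _ in range(100):
    pos = step(pos, goal)
```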
With the rapid growth of cyber attacks and cyber criminals targeting cyber-physical systems (CPSs), detecting these attacks remains challenging. It might be the worst of times, but it might also be the best of times because of the opportunities brought by machine learning (ML), in particular deep learning (DL). In general, DL delivers superior performance to conventional ML because of its layered structure and its effective algorithms for extracting useful information from training data. DL models are being rapidly adopted for detecting cyber attacks against CPSs. In this survey, we provide a holistic view of recently proposed DL solutions for cyber attack detection in the CPS context. A six-step DL-driven methodology is provided to summarize and analyze the surveyed literature on applying DL methods to detect cyber attacks against CPSs. The methodology includes CPS scenario analysis, cyber attack identification, ML problem formulation, DL model customization, data acquisition for training, and performance evaluation. The reviewed works indicate great potential for detecting cyber attacks against CPSs through DL models. Moreover, excellent performance is achieved partly because several high-quality datasets are readily available for public use. Furthermore, challenges, opportunities, and research trends are pointed out for future research.
Personalisation plays a vital role in the engagement of adult learners in online learning environments. Historically, research has focused on applying adaptive technologies, including Artificial Intelligence, without integrating those technologies within teaching and learning approaches. Academagogy is a teaching and learning approach that allows an educator to select appropriate parts from the models of pedagogy (educator-centred), andragogy (learner-centred), and heutagogy (learner-driven) for better learning outcomes. Previous studies used academagogy in face-to-face learning contexts; however, its application in online learning contexts has been limited. This paper presents our interim observations of applying academagogy to an online Information Technology course. Using mixed methods to analyse learner self-reflections, online learning analytics, learner grades, learner surveys, and learner interviews, we observed that learners progressed along the pedagogy-andragogy-heutagogy (PAH) continuum towards andragogy and heutagogy, with some exceptions. The exceptions occurred where learners regressed on the PAH continuum when they encountered problems. The exceptions provide insights into the problem areas that block learner engagement and achievement, laying a foundation for future work in personalising the online learning experience.
The Internet of Things (IoT) brings significant convenience to every aspect of our lives, including smart vehicles, smart cities, and smart homes. With the advancement of IoT technologies, IoT platforms bring many new features to IoT devices, so that these devices can not only passively monitor the environment (e.g. conventional sensors) but also interact with the physical surroundings (e.g. actuators). In this light, new safety and security problems arise from these new features. For instance, unexpected and undesirable physical interactions may occur among devices, which is known as an inter-rule vulnerability. A few works have investigated inter-rule vulnerabilities from both cyberspace and physical channels. Unfortunately, only a few research papers take advantage of run-time simulation techniques to properly model trigger-action environments. Moreover, no simulation platform is capable of modeling primary physical channels and studying the impacts of physical interactions on IoT safety and security. In this paper, we introduce TAESim, a simulation testbed to support reusable simulations in research on IoT safety and security, especially for home-automation activities that could involve unexpected interactions. TAESim operates over MATLAB/Simulink and constructs a digital twin for modeling the trigger-action environment using simulations. It is an open-access platform and can be used by the research community, government, and industry working to prevent safety and security consequences in the IoT ecosystem. To evaluate the effectiveness and efficiency of the testbed, we conduct experiments whose results show that the simulations are completed in a few seconds. We also present two case studies that report unexpected consequences.
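To make the inter-rule vulnerability concrete, the toy sketch below checks whether one automation rule's action can activate another rule's trigger through a shared physical channel. The rule format, channel names and check are illustrative assumptions and have nothing to do with TAESim's actual API.

```python
# Toy sketch of an inter-rule interaction check over a shared physical channel:
# a rule's action changes a channel, which may in turn satisfy another rule's
# trigger (e.g. motion turns on a heater, the rising temperature opens a window).
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    trigger_channel: str
    threshold: float          # value at which the trigger fires
    action_channel: str
    action_delta: float       # how much the action changes the channel

def find_chained_rules(rules):
    """Return pairs (r1, r2) where r1's action can activate r2's trigger."""
    chains = []
    for r1 in rules:
        for r2 in rules:
            if r1 is not r2 and r1.action_channel == r2.trigger_channel \
                    and r1.action_delta > 0:
                chains.append((r1.name, r2.name))
    return chains

rules = [
    Rule("heater_on_when_motion", "motion", 0.5, "temperature", +5.0),
    Rule("open_window_when_hot", "temperature", 26.0, "window_angle", +90.0),
]
print(find_chained_rules(rules))
# [('heater_on_when_motion', 'open_window_when_hot')]
```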
Online social networks (OSNs) are a rich source of information, and their data (including user-generated content) can be mined to facilitate real-world event prediction. However, the dynamic nature of OSNs and the fast-paced nature of social events or hot topics compound the challenge of event prediction. This is a key limitation in many existing approaches. For example, our evaluations of six baseline approaches (logistic regression (LR), latent Dirichlet allocation (LDA)-based LR, multi-task learning (MTL), long short-term memory (LSTM), convolutional neural networks, and a transformer-based model) on three datasets collected as part of this research (two from Twitter and one from a news collection site) reveal that the accuracy of these approaches is between 50% and 60%, and that they are not capable of utilizing new events in event predictions. Hence, in this article, we develop a novel DNN-based framework, hereafter referred to as event prediction with feedback mechanism (EPFM). Specifically, EPFM makes use of a feedback mechanism based on emerging-event detection to improve the performance of event prediction. The feedback mechanism ensembles three outlier detection processes and returns a list of new events. Some of these events are then chosen by analysts and fed into the fine-tuning process to update the predictive model. To evaluate EPFM, we conduct a series of experiments on the same three datasets; the findings show that EPFM achieves 80% accuracy in event detection and outperforms the six baseline approaches. We also validate EPFM's capability of detecting new events by empirically analyzing the feedback mechanism under different thresholds.
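The feedback step could be sketched as an ensemble of off-the-shelf outlier detectors with majority voting over candidate event embeddings; the specific detectors, features and thresholds below are our assumptions and are not claimed to match EPFM's internals.

```python
# Hedged sketch of a feedback step that ensembles three outlier detectors and
# flags candidate "new events" by majority vote.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

def detect_new_events(history_embeddings, candidate_embeddings, min_votes=2):
    """Return indices of candidates flagged as emerging events."""
    detectors = [
        IsolationForest(random_state=0).fit(history_embeddings),
        OneClassSVM(nu=0.1).fit(history_embeddings),
    ]
    votes = sum((d.predict(candidate_embeddings) == -1).astype(int)
                for d in detectors)
    # LocalOutlierFactor must be fit in novelty mode to score unseen data.
    lof = LocalOutlierFactor(novelty=True).fit(history_embeddings)
    votes += (lof.predict(candidate_embeddings) == -1).astype(int)
    return np.where(votes >= min_votes)[0]

# Toy usage with random document embeddings; flagged indices would be shown to
# analysts, and confirmed events fed back to fine-tune the predictive model.
rng = np.random.default_rng(0)
history = rng.normal(size=(500, 32))
candidates = np.vstack([rng.normal(size=(20, 32)),
                        rng.normal(loc=6.0, size=(5, 32))])  # 5 shifted "new events"
print(detect_new_events(history, candidates))
```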
In the last two years, the outbreak of COVID-19 has significantly affected human life, society, and the economy worldwide. To prevent people from contracting COVID-19 and to mitigate its spread, it is crucial to distribute complete, accurate, and up-to-date information about the pandemic to the public in a timely manner. In this article, we propose a spatial-temporally bursty-aware method called STBA for real-time detection of COVID-19 events from Twitter. STBA has three consecutive stages. In the first stage, STBA identifies a set of keywords that represent COVID-19 events according to the spatiotemporally bursty characteristics of words using Ripley's K function. STBA also filters out tweets that do not contain the keywords to reduce the interference of noise tweets on event detection. In the second stage, STBA uses online density-based spatial clustering of applications with noise (DBSCAN) to aggregate tweets that describe the same event as much as possible, which provides more information for event identification. In the third stage, STBA further utilizes the temporally bursty characteristic of event location information in the clusters to identify real-world COVID-19 events. Each stage of STBA can be regarded as a noise filter: the method gradually extracts COVID-19-related events from noisy tweet streams. To evaluate the performance of STBA, we collected over 116 million Twitter posts over 36 consecutive days (from March 22, 2020 to April 26, 2020) and labeled 501 real events in this dataset. We compared STBA with three state-of-the-art methods: EvenTweet, event detection via microblog cliques (EDMC), and GeoBurst+. The experimental results suggest that STBA outperforms GeoBurst+ by 13.8%, 12.7%, and 13.3% in terms of precision, recall, and F₁ score, respectively. STBA achieved even greater improvements compared with EvenTweet and EDMC.
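As a simplified picture of the second stage only, one could cluster keyword-bearing tweets in space and time with an offline DBSCAN (STBA uses an online variant); the feature scaling and eps values below are illustrative assumptions.

```python
# Sketch of the clustering stage: offline DBSCAN over scaled space-time features.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_tweets(lat, lon, t_minutes, km_per_deg=111.0, time_weight_km_per_min=0.5):
    """Cluster keyword-bearing tweets in space and time.

    lat, lon in degrees; t_minutes as minutes since the window start. The time
    axis is converted to kilometre-equivalent units so a single Euclidean eps
    (in km) applies to all three dimensions.
    """
    X = np.column_stack([
        lat * km_per_deg,
        lon * km_per_deg * np.cos(np.radians(lat.mean())),
        t_minutes * time_weight_km_per_min,
    ])
    return DBSCAN(eps=5.0, min_samples=5).fit_predict(X)  # label -1 marks noise

# Toy usage: a burst of nearby, near-simultaneous tweets plus scattered noise.
rng = np.random.default_rng(2)
lat = np.concatenate([13.75 + 0.01 * rng.normal(size=50), 13 + rng.uniform(0, 2, 30)])
lon = np.concatenate([100.5 + 0.01 * rng.normal(size=50), 100 + rng.uniform(0, 2, 30)])
t = np.concatenate([rng.uniform(0, 30, 50), rng.uniform(0, 600, 30)])
print(np.unique(cluster_tweets(lat, lon, t)))
```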
With the rapid development of deep learning techniques, the popularity of voice services implemented on various Internet of Things (IoT) devices is ever increasing. In this paper, we examine user-level membership inference in the problem space of voice services by designing an audio auditor that verifies whether a specific user had unwillingly contributed audio used to train an automatic speech recognition (ASR) model under strict black-box access. Using user-level representations of the input audio data and the corresponding transcribed text, our trained auditor is effective for user-level auditing. We also observe that an auditor trained on specific data generalizes well regardless of the ASR model architecture. We validate the auditor on ASR models trained with LSTM, RNN, and GRU algorithms on two state-of-the-art pipelines, the hybrid ASR system and the end-to-end ASR system. Finally, we conduct a real-world trial of our auditor on iPhone Siri, achieving an overall accuracy exceeding 80%. We hope the methodology developed in this paper and our findings can inform privacy advocates seeking to overhaul IoT privacy.
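In broad strokes, such an auditor can be framed as a binary classifier trained, shadow-model style, on per-user features derived from the target model's transcriptions. The features, classifier and synthetic data in this sketch are illustrative assumptions, not the paper's design.

```python
# Hedged sketch of a user-level membership auditor.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def user_features(error_rates, transcript_lengths):
    """Aggregate per-query statistics into one user-level feature vector."""
    return np.array([
        np.mean(error_rates), np.min(error_rates), np.std(error_rates),
        np.mean(transcript_lengths), np.std(transcript_lengths),
    ])

# Toy shadow data: "member" users tend to get lower error rates on their audio.
rng = np.random.default_rng(3)
members = [user_features(rng.uniform(0.0, 0.2, 10), rng.integers(5, 40, 10))
           for _ in range(100)]
non_members = [user_features(rng.uniform(0.1, 0.5, 10), rng.integers(5, 40, 10))
               for _ in range(100)]
X = np.vstack(members + non_members)
y = np.array([1] * 100 + [0] * 100)

auditor = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Audit an unseen user of the black-box ASR service.
query = user_features(rng.uniform(0.0, 0.2, 10), rng.integers(5, 40, 10))
print("member" if auditor.predict(query.reshape(1, -1))[0] == 1 else "non-member")
```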
Evolutionary optimization aims to tune hyper-parameters during learning in a computationally fast manner. For the optimization of multi-task problems, evolution is done by creating a unified search space with a dimensionality that can include all the tasks. Multi-task evolution is achieved via selective imitation, where two individuals with the same type of skill are encouraged to crossover. Due to the relatedness of the tasks, the resulting offspring may have a skill for a different task. In this way, we can simultaneously evolve a population where different individuals excel in different tasks. In this paper, we consider a type of evolution called Genetic Programming (GP), where the population of genes has a tree-like structure, can be of different lengths, and hence can naturally represent multiple tasks. Methods: We apply the model to multi-task neuroevolution, which aims to determine the optimal hyper-parameters of a neural network, such as the number of nodes, learning rate, and number of training epochs, using evolution. Here each gene is encoded with the hyper-parameters for a single neural network. Previously, optimization was done by enabling or disabling individual connections between neurons during evolution. This method is extremely slow and does not generalize well to new neural architectures such as Seq2Seq. To overcome this limitation, we follow a modular approach where each sub-tree in a GP can be a sub-neural architecture that is preserved during crossover across multiple tasks. Lastly, in order to leverage the inter-task covariance for faster evolutionary search, we project the features from both tasks to a common space using fuzzy membership functions. Conclusions: The proposed model is used to determine the optimal topology of a feed-forward neural network for the classification of emotions in physiological heart signals and also of a Seq2Seq chatbot that can converse with kindergarten children. We outperform baselines by over 10% in accuracy.
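To illustrate the general idea of evolving hyper-parameters across related tasks, the sketch below uses a simplified flat-genome evolutionary search with cross-task crossover rather than the paper's GP trees; the search space, fitness stand-in and parameters are assumptions.

```python
# Simplified sketch of multi-task evolutionary hyper-parameter search.
# Individuals are evaluated on their assigned task, and crossover can mix
# parents from different tasks. Fitness is a synthetic stand-in for accuracy.
import random

random.seed(0)
SPACE = {"hidden": [16, 32, 64, 128], "lr": [1e-3, 3e-3, 1e-2], "epochs": [5, 10, 20]}

def random_genome():
    return {k: random.choice(v) for k, v in SPACE.items()}

def fitness(genome, task):
    # Stand-in objective: each task prefers a different region of the space.
    target_hidden = 32 if task == 0 else 128
    return -abs(genome["hidden"] - target_hidden) - 100 * abs(genome["lr"] - 3e-3)

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def mutate(g, rate=0.2):
    return {k: (random.choice(SPACE[k]) if random.random() < rate else v)
            for k, v in g.items()}

def evolve(generations=30, pop_size=20):
    pop = [(random_genome(), i % 2) for i in range(pop_size)]  # (genome, task id)
    for _ in range(generations):
        scored = sorted(pop, key=lambda gt: fitness(*gt), reverse=True)
        parents = scored[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            (a, ta), (b, _tb) = random.sample(parents, 2)  # cross-task crossover
            children.append((mutate(crossover(a, b)), ta))
        pop = parents + children
    return sorted(pop, key=lambda gt: fitness(*gt), reverse=True)[0]

best_genome, best_task = evolve()
print(best_task, best_genome)
```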
Detecting text in scene images is a prevalent research topic. Text detection is challenging, and detectors do not transfer well across scripts, since a scene image may contain multiple scripts, each with different properties. It is therefore important to study scene text detection for particular geographical locations and the scripts used there. As no work on large-scale multi-script Thai scene text detection is found in the literature, this study focuses on multi-script text that includes Thai, English (Roman), Chinese or Chinese-like script, and Arabic, all of which can generally be seen around Thailand. Thai script contains more consonants and vowels than the Roman/English script and has its own numerals. Furthermore, the placement of letters, intonation marks, and vowels differs from English or Chinese-like script. Hence, detecting and recognising Thai text is challenging. This study proposes a multi-script dataset that includes the aforementioned scripts and numerals, along with a benchmark employing the Single Shot MultiBox Detector (SSD) and Faster Region-based Convolutional Neural Network (Faster R-CNN). The dataset contains 600 scene images recorded in Thailand, together with manual detection annotations. This study also proposes a detection technique that formulates multi-script scene text detection as a multi-class detection problem, which was found to be more effective than legacy approaches. The experimental results from employing the proposed technique on the dataset achieved encouraging precision and recall rates compared with these methods. The proposed dataset is available upon email request to the corresponding authors.
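The multi-class formulation could be set up roughly as below, with one detection class per script plus background; torchvision's Faster R-CNN is used here only as a generic stand-in for the benchmarked detectors, and the class list and dummy data are assumptions.

```python
# Sketch of treating multi-script scene text detection as multi-class detection.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

SCRIPTS = ["background", "thai", "roman", "chinese_like", "arabic", "numerals"]
model = fasterrcnn_resnet50_fpn(weights=None, weights_backbone=None,
                                num_classes=len(SCRIPTS))

# One dummy training step: images as a list of CHW tensors, targets with boxes
# in (x1, y1, x2, y2) format and a per-box script label.
images = [torch.rand(3, 480, 640)]
targets = [{
    "boxes": torch.tensor([[30.0, 40.0, 200.0, 90.0]]),
    "labels": torch.tensor([SCRIPTS.index("thai")]),
}]
model.train()
losses = model(images, targets)          # dict of classification/regression losses
total_loss = sum(losses.values())
total_loss.backward()
print({k: float(v) for k, v in losses.items()})
```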
Coroutines will be added to C++ as part of the C++20 standard. Coroutines provide native language support for asynchronous operations. This study evaluates the C++ coroutine specification from the perspective of embedded systems developers. We find that the proposed language features are generally beneficial, but that memory management of the coroutine state needs to be improved. Our experiments on an ARM Cortex-M4 microcontroller evaluate the time and memory costs of coroutines in comparison with alternatives, and we show that context switching with coroutines is significantly faster than with thread-based real-time operating systems. Furthermore, we analysed the impact of these language features on prototypical IoT sensor software. We find that the proposed language enhancements potentially bring significant benefits to programming in C++ for embedded computers, but that the implementation imposes constraints that may prevent its widespread acceptance among the embedded development community.
Background and Objective: Coronavirus disease 2019 (COVID-19) is a viral disease that causes serious pneumonia and affects different parts of the body, with severity ranging from mild to severe depending on the patient's immune system. The infection was first reported in Wuhan, China in December 2019, and it subsequently became a global pandemic, spreading rapidly around the world. As the virus spreads through human-to-human contact, it has affected our lives in a devastating way, placing vigorous pressure on public health systems, the world economy, the education sector, workplaces, and shopping malls. Preventing viral spread requires early detection of positive cases and treating infected patients as quickly as possible. The need for COVID-19 testing kits has increased, and many developing countries are facing a shortage of testing kits as new cases increase day by day. In this situation, recent research using radiology imaging techniques (such as X-ray and CT scans) can prove helpful for detecting COVID-19, as X-ray and CT scan images provide important information about the disease caused by the COVID-19 virus. The latest data mining and machine learning techniques, such as Convolutional Neural Networks (CNNs), can be applied to X-ray and CT scan images of the lungs for accurate and rapid detection of the disease, helping to mitigate the scarcity of testing kits. Methods: Hence, a novel CNN model called CoroDet for automatic detection of COVID-19 from raw chest X-ray and CT scan images is proposed in this study. CoroDet is developed to serve as an accurate diagnostic tool for 2-class classification (COVID and normal), 3-class classification (COVID, normal, and non-COVID pneumonia), and 4-class classification (COVID, normal, non-COVID viral pneumonia, and non-COVID bacterial pneumonia). Results: The performance of our proposed model was compared with ten existing techniques for COVID detection in terms of accuracy. Our proposed model produced a classification accuracy of 99.1% for 2-class classification, 94.2% for 3-class classification, and 91.2% for 4-class classification, which, to the best of our knowledge, is better than the state-of-the-art methods used for COVID-19 detection. Moreover, the X-ray dataset that we prepared for the evaluation of our method is, to our knowledge, the largest dataset for COVID detection. Conclusion: The experimental results of our proposed method, CoroDet, indicate its superiority over existing state-of-the-art methods. CoroDet may assist clinicians in making appropriate decisions for COVID-19 detection and may also mitigate the problem of scarcity of testing kits.
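For readers who want a concrete starting point, a minimal CNN classifier with a configurable number of output classes (2, 3 or 4) might look like the sketch below; this is a generic baseline, not CoroDet's actual architecture, and the image size and labels are assumptions.

```python
# Minimal illustrative CNN for chest X-ray classification with a configurable
# number of classes.
import torch
from torch import nn

class SmallXrayCNN(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):            # x: (batch, 1, H, W) grayscale X-rays
        h = self.features(x).flatten(1)
        return self.classifier(h)    # raw logits; train with CrossEntropyLoss

# Toy forward/backward pass for the 3-class setting (COVID / normal / pneumonia).
model = SmallXrayCNN(num_classes=3)
images = torch.rand(8, 1, 224, 224)
labels = torch.randint(0, 3, (8,))
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()
print(float(loss))
```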
The Internet of Health Things (IoHT) involves intelligent, low-powered, and miniaturized sensor nodes that measure physiological signals and report them to sink nodes over wireless links. IoHT has a myriad of applications in e-health and personal health monitoring. Because of the sensitivity of the data measured by the nodes and the power constraints of the sensor nodes, reliability and energy efficiency play a critical role in IoHT communication. Reliability is degraded by increased packet loss due to inefficient MAC and routing protocols, environmental interference, and body shadowing. At the same time, inefficient node selection for routing may deplete the energy resources of critical nodes. Recent advancements in cross-layer protocol optimization have proven efficient for the packet-based Internet. In this article, we propose a MAC/routing-based cross-layer protocol for reliable communication while preserving the sensor nodes' energy resources in IoHT. The proposed mechanism employs a timer-based strategy for relay node selection. The timer-based approach incorporates residual energy and the received signal strength indicator (RSSI) to preserve the vital resources of critical sensors in IoHT. The proposed approach is also extended to multiple sensor networks, where sensors in the vicinity coordinate and cooperate for data forwarding. The performance of the proposed technique is evaluated using metrics such as packet loss probability, end-to-end delay, and energy used per data packet. Extensive simulation results show that the proposed technique improves reliability and energy efficiency compared to the Simple Opportunistic Routing protocol.
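A timer-based relay election can be pictured as each candidate relay starting a back-off timer whose length decreases with residual energy and link quality, so the best candidate fires first and suppresses the others. The normalisation and weights in this sketch are illustrative assumptions, not the paper's exact formula.

```python
# Sketch of a timer-based relay election combining residual energy and RSSI.
def relay_timer(residual_energy_j, rssi_dbm, max_wait_s=0.05,
                e_max_j=5.0, rssi_min=-100.0, rssi_max=-40.0, w_energy=0.6):
    """Return the back-off time (seconds) before a node volunteers as relay."""
    e_norm = min(residual_energy_j / e_max_j, 1.0)
    r_norm = min(max((rssi_dbm - rssi_min) / (rssi_max - rssi_min), 0.0), 1.0)
    score = w_energy * e_norm + (1.0 - w_energy) * r_norm   # higher is better
    return max_wait_s * (1.0 - score)                       # better node waits less

# Candidate relays as (residual energy in J, RSSI in dBm).
candidates = {"node_a": (4.2, -55.0), "node_b": (1.1, -48.0), "node_c": (3.0, -80.0)}
timers = {n: relay_timer(e, rssi) for n, (e, rssi) in candidates.items()}
print(min(timers, key=timers.get), timers)   # the node that fires first relays
```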
The idea of cooperative caching in a cache-enabled wireless network has gained much interest due to the short service delay and improved transmission rate it offers at the user end. In this article, we consider a cooperative caching mechanism for a fog-enabled Internet of Things (IoT) network. We propose a delay-minimizing policy for fog nodes (FNs), where the goal is to reduce the service delay for the IoT nodes, also known as terminal nodes (TNs). To this end, a novel smart clustering mechanism is proposed that efficiently assigns FNs to TNs while improving the network benefit by finding a trade-off between delay and the network's energy consumption. We perform mathematical analysis and extensive simulations to highlight the potential gain of the proposed policy.
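As a toy illustration of a delay/energy trade-off in FN-to-TN assignment, one could greedily assign each TN to the FN minimising a weighted, normalised cost; the cost model, weights and capacities below are assumptions, not the paper's clustering mechanism.

```python
# Toy sketch of assigning terminal nodes (TNs) to fog nodes (FNs) by minimising
# a weighted delay/energy cost, subject to a simple per-FN capacity.
import numpy as np

def assign_tns(delay_ms, energy_mj, alpha=0.7, capacity=None):
    """delay_ms, energy_mj: (n_tn, n_fn) matrices; alpha weights delay vs energy."""
    cost = alpha * delay_ms / delay_ms.max() + (1 - alpha) * energy_mj / energy_mj.max()
    load = np.zeros(cost.shape[1], dtype=int)
    assignment = np.full(cost.shape[0], -1)
    for tn in np.argsort(cost.min(axis=1)):       # TNs with cheaper best options first
        for fn in np.argsort(cost[tn]):           # try FNs from cheapest to dearest
            if capacity is None or load[fn] < capacity:
                assignment[tn] = fn
                load[fn] += 1
                break
    return assignment

rng = np.random.default_rng(4)
delay = rng.uniform(5, 50, size=(12, 3))     # ms from each TN to each FN
energy = rng.uniform(1, 10, size=(12, 3))    # mJ per request
print(assign_tns(delay, energy, capacity=5))
```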
Optimal content caching has been an important topic in dense small cell networks. Due to spatial and temporal variation in the popularity of data, most content requests cannot be directly served by the lower tiers of the network, increasing the chance of congestion at the core network. This raises the issues of what to cache and where to cache it, especially for content with different popularity patterns in a given region. In this work, we focus on the issue of redundant caching of popular files in a cluster when designing a content allocation scheme. We formulate the problem as a stable matching problem, where the preferences of each cache entity are sent to the Macro Base Station (MBS) for stable matching. The caches share their request lists with the MBS, which then uses Irving's one-sided matching algorithm to generate a unique preference list for each caching entity such that every preference list is representative of the popular data in that region. The algorithm achieves the desired goal of efficient caching with few but smartly planned repetitions of the popular files. Results show that our proposed scheme provides better performance in terms of cache hit ratio with an increasing number of requests compared to a popularity-based scheme.
In a secret sharing scheme, a dealer, D, distributes shares of a secret, S, among a set of n participants, such that only authorised subsets of these participants can reconstruct S by pooling their shares; unauthorised subsets should gain no information. An extensively researched area within this field is how to cope with participants who arbitrarily modify their shares (i.e. cheaters). A secret sharing scheme with cheating detection capabilities (SSCD) allows participants to detect cheating at reconstruction time. The most common way of achieving this is to utilise an algebraic manipulation detection (AMD) code alongside a secret sharing scheme: the dealer essentially encodes S with an AMD code and distributes the codeword to participants, who then reconstruct the codeword and use it to detect cheating. The problem with this approach is that even if cheating is detected, the cheaters still obtain the secret. To overcome this problem, we propose a new protocol: outsourced SSCD (OSSCD). Our proposed protocol utilises the same techniques as SSCD; however, before the secret is reconstructed, participants distribute their shares among a set of special validation servers. These validation servers then perform a public computation to determine whether cheating has occurred, without reconstructing S. As a result, if cheating has occurred, the servers can halt the protocol, ensuring no one learns the secret. We present two efficient constructions of our proposed OSSCD protocol: one capable of detecting cheating with high probability and the other capable of handling many secrets simultaneously.
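As background for the share/reconstruct steps the abstract builds on, the sketch below shows standard Shamir (t, n) secret sharing over a prime field. It is not the proposed OSSCD protocol and omits the AMD encoding and the validation servers.

```python
# Background sketch: Shamir (t, n) secret sharing over GF(P).
import random

P = 2**127 - 1  # a Mersenne prime used as the field modulus

def share(secret, t, n):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(secret=123456789, t=3, n=5)
print(reconstruct(shares[:3]))        # any 3 of the 5 shares suffice
```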
In recent years, video games have become one of the most popular entertainment mediums. This can partly be attributed to advances in computer graphics and the availability, affordability, and performance of hardware, which have made modern video games the most realistic and immersive they have ever been. These games have rich stories, large open worlds, and diverse casts of fully voice-acted characters, which also means that they take up large amounts of disk space. While a large percentage of this audio is sound effects and music, modern, character-driven, open-world games contain multiple hours and many gigabytes of spoken voice audio. This paper examines how audio compression in video games poses distinctly different challenges than in telecommunications or archiving, the primary motivating factors behind the audio compression systems currently used in video games. By evaluating new, deep-learning-based methods of voice compression with video games in mind, we determine the criteria a new method must meet to surpass current methods in compression factor and quality at an acceptable level of algorithmic performance, and the directions in which new research is needed to meet these criteria.
Document sentiment classification is an area of study that has been developed for decades. However, sentiment classification of Email data is a specialized field that has not yet been thoroughly studied. Compared to typical social media and review data, Email data is characterised by length variance, duplication caused by reply and forward messages, and implicit sentiment indicators. Due to these characteristics, existing techniques are incapable of fully capturing the complex syntactic and relational structure among words and phrases in Email documents. In this study, we introduce a dependency-graph-based position encoding technique enhanced with weighted sentiment features and incorporate it into the feature representation process. We combine encoded sentiment sequence features with traditional word embedding features as input to a revised deep CNN model for Email sentiment classification. Experiments are conducted on three sets of real Email data with adequate label conversion processes. Empirical results indicate that our proposed SSE-CNN model obtained the highest accuracy rates of 88.6%, 74.3%, and 82.1% on the three experimental Email datasets, outperforming other state-of-the-art algorithms. Furthermore, our evaluations of the preprocessing and sentiment sequence encoding confirm the effectiveness of Email preprocessing and of sentiment sequence encoding with dependency-graph-based position and SWN features in improving Email document sentiment classification.
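One way to picture a dependency-graph-based position feature is as each token's distance, along the dependency tree, to the nearest sentiment-bearing word. The sketch below is illustrative only (it is not the SSE-CNN pipeline), the tiny lexicon is an assumption, and it assumes spaCy's en_core_web_sm model is installed.

```python
# Sketch of a dependency-tree distance feature to the nearest sentiment word.
import networkx as nx
import spacy

SENTIMENT_WORDS = {"disappointed", "great", "sorry", "appreciate", "urgent"}

nlp = spacy.load("en_core_web_sm")

def dependency_positions(text):
    doc = nlp(text)
    # Undirected graph over token indices, with one edge per dependency arc.
    graph = nx.Graph([(tok.i, tok.head.i) for tok in doc if tok.i != tok.head.i])
    graph.add_nodes_from(tok.i for tok in doc)
    anchors = [tok.i for tok in doc if tok.lower_ in SENTIMENT_WORDS]
    if not anchors:
        return [(tok.text, None) for tok in doc]
    dist = nx.multi_source_dijkstra_path_length(graph, anchors)
    return [(tok.text, dist.get(tok.i)) for tok in doc]

print(dependency_positions("I was disappointed that the report missed the deadline."))
```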
