This study aims to enhance diagnostic capabilities for optimising the performance of the anaerobic sewage treatment lagoon at Melbourne Water's Western Treatment Plant (WTP) through a novel machine learning (ML)-based monitoring strategy. This strategy employs ML to make accurate probabilistic predictions of biogas performance by leveraging diverse real-life operational and inspection sensor data, along with other measurement data, for asset management, decision making, and structural health monitoring (SHM). The paper commences with data analysis and preprocessing of complex, irregular datasets to facilitate efficient learning in an artificial neural network. Subsequently, a Bayesian mixture density neural network model incorporating an attention-based mechanism in a bidirectional long short-term memory (BiLSTM) network was developed. This probabilistic approach uses a distribution output layer based on the Gaussian mixture model and the Monte Carlo (MC) dropout technique to estimate data and model uncertainties, respectively. Furthermore, systematic hyperparameter optimisation revealed that the optimised model achieved a negative log-likelihood (NLL) of 0.074, significantly outperforming other configurations: its accuracy was approximately 9 times greater than the average model performance (NLL = 0.753) and 22 times greater than that of the worst-performing model (NLL = 1.677). Key factors influencing the model's accuracy, such as the input window size and the number of hidden units in the BiLSTM layer, were identified, while the number of neurons in the fully connected layer was found to have no significant impact on accuracy. Moreover, model calibration using the expected calibration error was performed to correct the model's predictive uncertainty. The findings suggest that the inherent data uncertainty contributes significantly to the overall uncertainty of the model, highlighting the need for more high-quality data to enhance learning.
This study lays the groundwork for applying ML in transforming high-value assets into intelligent structures and has broader implications for ML in asset management, SHM applications, and renewable energy sectors.
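The NLL metric used above to compare model configurations can be computed directly from a Gaussian-mixture predictive distribution. The sketch below is a minimal NumPy illustration of that computation, with made-up component parameters, not the authors' trained model:

```python
import numpy as np

def gmm_nll(y, weights, means, sigmas):
    """Mean negative log-likelihood of observations y under a 1-D
    Gaussian mixture with the given component weights, means, and
    standard deviations (lower NLL = better calibrated prediction)."""
    y = np.asarray(y, float)[:, None]                 # shape (N, 1)
    w, mu, s = (np.asarray(a, float) for a in (weights, means, sigmas))
    # Weighted Gaussian density of each component for each observation
    dens = w / (s * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((y - mu) / s) ** 2)
    return -np.mean(np.log(dens.sum(axis=1)))

# Example: a two-component mixture prediction evaluated at one observation
nll = gmm_nll([0.2], weights=[0.6, 0.4], means=[0.0, 1.0], sigmas=[0.3, 0.5])
```

In an MC-dropout setting, the same function would be applied to the mixture parameters produced by each stochastic forward pass and the results aggregated across passes.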
Autonomous rock image classification can enhance the capability of robots for geological detection and increase the scientific return, both in investigations on Earth and in planetary surface exploration on Mars. Since rock textural images are usually inhomogeneous and manually hand-crafting features is not always reliable, we propose an unsupervised feature learning method to autonomously learn the feature representation for rock images. In our tests, rock image classification using the learned features shows that they can outperform manually selected features. Self-taught learning is also proposed to learn the feature representation from a large database of unlabelled rock images of mixed classes. The learned features can then be used repeatedly for classification of any subclass. This takes advantage of the large dataset of unlabelled rock images and learns a general feature representation for many kinds of rocks. We present experimental results supporting the feasibility of self-taught learning on rock images.
•We propose to learn feature representations with K-means for rock image classification.
•We show that unsupervised feature learning is flexible and can outperform manual features in rock image classification.
•We demonstrate that self-taught learning can learn feature representations from unlabelled rock images.
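K-means feature learning of the kind highlighted above typically pairs a plain K-means dictionary with a soft-assignment encoder. The sketch below is a minimal NumPy illustration under that assumption (using a common "triangle" soft-assignment encoding), not the authors' implementation:

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means: learn k centroids from the rows of X."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each row to its nearest centroid
        labels = np.argmin(((X[:, None] - centroids) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):  # skip empty clusters
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids

def encode(patches, centroids):
    """'Triangle' soft-assignment encoding: activation is
    max(0, mean distance - distance to each centroid)."""
    d = np.sqrt(((patches[:, None] - centroids) ** 2).sum(-1))
    return np.maximum(0.0, d.mean(axis=1, keepdims=True) - d)
```

In practice the rows of `X` would be whitened image patches sampled from the unlabelled rock image database, and the encoded activations would be pooled over image regions before classification.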
A conversational agent powered by artificial intelligence, commonly known as a chatbot, is one of the most recent innovations used to provide information and services during the COVID-19 pandemic. However, the multitude of conversational agents designed specifically during the COVID-19 pandemic calls for characterization and analysis using rigorous technological frameworks and extensive systematic reviews.
This study aims to describe the general characteristics of COVID-19 chatbots and examine their system designs using an adapted design taxonomy framework.
We conducted a systematic review of the general characteristics and design taxonomy of COVID-19 chatbots, with 56 studies included in the final analysis. This review followed the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines to select papers published between March 2020 and April 2022 from various databases and search engines.
Results showed that most studies on COVID-19 chatbot design and development worldwide were conducted in Asia and Europe. Most chatbots are accessible on websites, internet messaging apps, and Android devices. The COVID-19 chatbots are further classified according to their temporal profiles, appearance, intelligence, interaction, and context to identify system design trends. From the temporal-profile perspective, almost half of the COVID-19 chatbots interact with users over several weeks and more than once, and can remember information from previous user interactions. From the appearance perspective, most COVID-19 chatbots assume the expert role, are task oriented, and have no visual or avatar representation. From the intelligence perspective, almost half of the COVID-19 chatbots are artificially intelligent and can respond to textual inputs according to a set of rules. In addition, more than half of these chatbots operate on a structured flow and do not portray any socioemotional behavior. Most chatbots can also process external data and broadcast resources. Regarding their interaction with users, most COVID-19 chatbots are adaptive, communicate through text, react to user input, are not gamified, and do not require additional human support. From the context perspective, all COVID-19 chatbots are goal oriented, although most fall under the health care application domain and are designed to provide information to the user.
The conceptualization, development, implementation, and use of COVID-19 chatbots emerged to mitigate the effects of a global pandemic in societies worldwide. This study summarized the current system design trends of COVID-19 chatbots based on 5 design perspectives, which may help developers conveniently choose a future-proof chatbot archetype that will meet the needs of the public in the face of growing demand for a better pandemic response.
Abstract Standing sway can be reduced simply by conscious effort, but the extent to which this ability changes with stance conditions is unknown. Here, the influence of stance width and vision upon the ability to voluntarily reduce sway was investigated. Fourteen subjects were asked to stand either relaxed or still. Three stance conditions (wide/narrow/tandem) were compared, with eyes open or closed. When standing still, subjects successfully reduced body sway by up to 24% (root mean square of lateral trunk velocity), primarily by attenuating their peak sway frequency (0.2–0.4 Hz). Standing still was associated with a mean increase in ankle muscle co-contraction, but the extent of this increase did not correlate with the ability to reduce sway for individual subjects. Within each stance condition, subjects who swayed more when relaxed also displayed the greatest scope for sway reduction when asked to stand still. However, the opposite trend was observed across conditions: as relaxed sway increased, the capacity for sway reduction was reduced. Hence, voluntary control was lowest during tandem stance and greatest with feet apart, an effect augmented by eye closure. The results show that the degree to which sway can be voluntarily modified is not fixed, but reflects the difficulty of the standing task.
This paper presents an overview of integrating new research outcomes into the development of a structural health monitoring strategy for the floating cover at the Western Treatment Plant (WTP) in Melbourne, Australia. The size of this floating cover, which spans an area of approximately 470 m × 200 m, combined with the hazardous environment and its exposure to extreme weather conditions, only allows for monitoring techniques based on remote sensing. The floating cover is deformed by the accumulation of sewage matter beneath it. Our research has shown that the only reliable data for constructing a predictive model to support the structural health monitoring of this critical asset is obtained directly from the actual floating cover at the sewage treatment plant. Our recent research outcomes lead us towards conceptualising an advanced engineering analysis tool designed to support the future creation of a digital twin for the floating cover at the WTP. Foundational work demonstrates the effectiveness of an unmanned aerial vehicle (UAV)-based photogrammetry methodology in generating a digital elevation model of the large floating cover. A substantial set of data has been acquired through regular UAV flights, presenting opportunities to leverage this information for a deeper understanding of the interactions between operational conditions and the structural response of the floating cover. This paper discusses the current findings and their implications, clarifying how these outcomes contribute to the ongoing development of an advanced digital twin for the floating cover.
In various engineering applications, remote sensing images such as digital elevation models (DEMs) and orthomosaics provide a convenient means of generating 3D representations of physical assets, enabling the discovery of new insights and analyses. However, the presence of noise and artefacts, particularly unwanted natural features, poses significant challenges, and their removal requires the application of filtering techniques prior to conducting analysis. Unmanned aerial vehicle-based photogrammetry is used at Melbourne Water’s Western Treatment Plant as a cost-effective and efficient method of inspecting the floating covers on the anaerobic lagoons. The focus of interest is the elevation profile of the floating covers for these sewage-processing lagoons and its implications for sub-surface scum accumulation, which can compromise the structural integrity of the engineered assets. However, unwanted artefacts due to trapped rainwater, debris, dirt, and other irrelevant structures can significantly distort the elevation profile. In this study, a machine learning algorithm is utilised to group distinct features on the floating cover based on an image segmentation process. An unsupervised k-means clustering algorithm is employed, which operates on a stacked 4D array composed of the elevation of the DEM and the RGB channels of the associated orthomosaic. In the cluster validation process, seven cluster groups were considered optimal based on the Calinski–Harabasz criterion. Furthermore, by utilising the k-means method as a filtering technique, three clusters contain features related to the elevations associated with the floating cover membrane, collectively representing 84% of the asset, with each cluster contributing at least 19% of the asset. The artefact groups constitute less than 6% of the asset and exhibit significantly different features, colour characteristics, and statistical measurements from those of the membrane groups.
The study found notable improvements using the k-means filtering method, including a 59.4% average reduction in outliers and a 36.3% decrease in standard deviation compared to the raw data. Additionally, employing the proposed method in the scum hardness analysis improved the correlation strength by 13.1% and removed artefacts amounting to approximately 16% of the total asset, in contrast to a 3.6% improvement with the median filtering method. This improved imaging will deliver significant benefits when integrating the imagery into deep learning models for structural health monitoring and asset performance assessment.
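The clustering step described above can be sketched concretely: per-pixel features are formed by stacking the DEM elevation with the RGB channels, and candidate cluster counts are compared with the Calinski–Harabasz criterion. The following is a minimal NumPy illustration of those two pieces (not the production pipeline, which would also normalise channels and run k-means itself):

```python
import numpy as np

def stack_features(dem, rgb):
    """Stack a DEM (H, W) with an RGB orthomosaic (H, W, 3) into a
    per-pixel 4-channel feature matrix of shape (H*W, 4)."""
    h, w = dem.shape
    return np.column_stack([dem.reshape(-1, 1), rgb.reshape(h * w, 3)])

def calinski_harabasz(X, labels):
    """Calinski-Harabasz index: ratio of between- to within-cluster
    dispersion, scaled by degrees of freedom (higher = better)."""
    n, k = len(X), len(np.unique(labels))
    overall = X.mean(axis=0)
    between = within = 0.0
    for c in np.unique(labels):
        Xc = X[labels == c]
        between += len(Xc) * ((Xc.mean(axis=0) - overall) ** 2).sum()
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum()
    return (between / (k - 1)) / (within / (n - k))
```

Cluster validation then amounts to running k-means for a range of k values on the stacked features and selecting the k with the highest index, here seven.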
Two oxidation techniques that afford high yields of monomers and dimers were used to more accurately estimate the syringyl to guaiacyl (S:G) ratio of hardwood lignins. Permanganate oxidation of the woodmeal after a CuO pre-hydrolysis step gave poor results, and this was attributed to preferential oxidation and degradation of syringyl nuclei by CuO. However, this procedure did provide a good estimate of the percentages of both S and G phenylpropane (C₉) units that were uncondensed. When the total S and G products from nitrobenzene oxidation (NBO) of the uncondensed fractions were corrected, credible S:G ratios were obtained. These ratios were in good agreement with results from KMnO₄ oxidation of dissolved kraft lignin without CuO pre-hydrolysis. The corrected NBO method was used to determine the S:G ratio of 13 poplars, and the values ranged from 1.01 to 1.68. Unlike results from other investigations, an excellent linear correlation (R² = 0.846) was obtained for a decreasing lignin content (28% to 16.5%) with an increase in the S:G ratio.
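The reported coefficient of determination comes from an ordinary least-squares line fitted to paired lignin-content and S:G measurements. A minimal NumPy sketch of that computation (the data values below are purely illustrative, not the study's measurements):

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a simple least-squares line:
    R^2 = 1 - SS_res / SS_tot."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slope, intercept = np.polyfit(x, y, 1)      # fit y = slope*x + intercept
    residuals = y - (slope * x + intercept)
    ss_res = (residuals ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

# Hypothetical S:G ratios vs. lignin content (%) for illustration only
r2 = r_squared([1.0, 1.2, 1.4, 1.6], [27.0, 24.0, 20.0, 17.0])
```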
•The Newborn Hearing Screening Reference Center (NHSRC), Philippine National Ear Institute (PNEI), and the National Telehealth Center (NTHC) of the National Institutes of Health UP Manila developed the Philippine Electronic National Newborn Hearing Screening Registry (ENNHSR).
•The usability of the ENNHSR, as well as user perspectives and satisfaction on the training modules, was determined using surveys and time and motion studies.
•The accuracy in encoding patient data was 92%, while that for hearing screening results was 88.64%.
•The system usability scale (SUS) score of the ENNHSR was computed at 75.5, falling within grade B.
•Participants noted that when using the ENNHSR, patient data and results were easy to find, and that information was streamlined and easy to track.
The study determined the usability of the online and offline versions of the Philippine Electronic National Newborn Hearing Screening Registry (ENNHSR) as well as user perspectives and satisfaction on the training modules and the online and offline systems. The specific objectives were to document the steps in creating the systems and training modules; to evaluate the user training manual and video training modules; to conduct accuracy and time and motion studies on data entry; and to determine user perspectives and satisfaction.
With the combined efforts of the staff of the Newborn Hearing Screening Reference Center (NHSRC), Philippine National Ear Institute (PNEI), and the National Telehealth Center (NTHC) of the National Institutes of Health UP Manila, the development of the online and offline versions of the ENNHSR took six (6) months, from January 2021 to June 2021, to complete. Creation of the user manual and training modules took three (3) months, from July 2021 to September 2021. The pilot of the systems was carried out in 2 Zoom conferencing sessions with the participation of 28 existing certified newborn hearing center users with different roles, backgrounds, and demographics from all over the Philippines. Written evaluations as well as focused group discussions on the training modules and the database were conducted during the sessions. The effectiveness of the training modules was determined using a 10-point learning check. The time taken and accuracy in encoding each data field per user were also determined.
All 28 participants were able to attend and actively participate in the required Zoom conferencing sessions and to submit the 2 evaluation surveys for the training modules and the ENNHSR. During the learning check, 26 of the 28 participants (93%) passed. The surgical intervention module took the longest time to encode, while the fastest module to complete was that for speech therapy. The mean time to complete all modules was 3382 s, or around 56 min, with individual times ranging from 32 to 104 min. A screener would need 18 min, while an implant programmer who is a clinical audiologist would need 52 min to enter data. The accuracy in encoding patient data was 92%, while that for hearing screening results was 88.64%. The system usability scale (SUS) score of the ENNHSR, computed as the average of the individual SUS scores, was 75.5, falling within grade B (corresponding to a percentile score range of 74.1 to 77.1). Most of the participants noted that patient data and results were easy to find and that information was streamlined and easy to track.
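The SUS score reported above follows the standard SUS scoring rule: each of the 10 items is rated 1 to 5; odd-numbered items contribute (rating − 1), even-numbered items contribute (5 − rating), and the sum is scaled by 2.5 to give a 0–100 score. A minimal sketch of that rule (illustrative only, not the study's survey data):

```python
def sus_score(responses):
    """Standard SUS scoring for one respondent's 10 item ratings (1-5).
    Odd items (index 0, 2, ...) contribute (rating - 1); even items
    contribute (5 - rating); the total is scaled by 2.5 to 0-100."""
    assert len(responses) == 10
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5

def overall_sus(all_responses):
    """Overall score as the average of individual SUS scores, as in
    the study's computation."""
    scores = [sus_score(r) for r in all_responses]
    return sum(scores) / len(scores)
```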
Data gathering and analysis both play important roles in health management, policy implementation, and quality assurance. We were able to uncover areas where the system performed well: effectively, efficiently, and with satisfaction. We recognize that not all possible problems can be detected with a small number of participants and a limited variety of information. This testing serves both to record and benchmark current usability and to identify areas where improvements must be made.