Automated surveillance systems are becoming a critical requirement in the maritime domain due to the continuous expansion of maritime security threats. Although several automated systems have been developed, detecting maritime threats is becoming more challenging because of the constantly changing tactics seafarers adopt to evade detection. Machine learning algorithms are a popular choice for detecting maritime threats based on vessel abnormalities. This paper categorizes security threats according to three processing levels: abnormal activities, behaviors, and intents, and presents the available machine learning techniques for detecting these threats, including several deep learning techniques, which are the current trend in abnormality detection. Supervised and unsupervised learning techniques used in the literature are discussed, and the advantages and disadvantages of each approach in the context of maritime surveillance are examined in detail. Supervised learning was used predominantly for detecting relatively simple abnormal behaviors and intents, such as movement abnormalities. These methods yielded higher accuracy than unsupervised learning methods, which achieved 80–95% accuracy. Supervised learning methods achieve between 93% and 99% accuracy, with the highest accuracy obtained by the support vector machine (SVM), while the convolutional neural network (CNN), the best-performing deep learning method, achieves 91%. Furthermore, this analysis suggests that supervised deep learning methods such as CNNs and long short-term memory (LSTM) networks will be the future trend in developing highly accurate maritime surveillance systems capable of detecting a wider range of maritime threats.
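The abstract above reports SVM as the most accurate supervised method for classifying abnormal vessel behavior. As an illustration only, not the surveyed systems' implementation, the sketch below trains a linear SVM with the Pegasos sub-gradient algorithm on invented two-feature vessel tracks (e.g. speed and course deviation); all data, names, and hyperparameters here are assumptions for the example.

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Minimal linear SVM trained with the Pegasos sub-gradient method.
    X: list of feature vectors; y: labels in {-1, +1}."""
    rng = random.Random(seed)
    d = len(X[0])
    w = [0.0] * d
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(X)), len(X)):
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            # regularization shrink, then hinge-loss step if the margin is violated
            w = [(1 - eta * lam) * wj for wj in w]
            if margin < 1:
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
    return w

def predict(w, x):
    """Sign of the linear score: +1 = abnormal, -1 = normal (toy convention)."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1

# Hypothetical tracks: (speed, course-deviation) pairs, labels invented.
X = [[1.0, 5.0], [2.0, 6.0], [6.0, 1.0], [7.0, 2.0]]
y = [-1, -1, 1, 1]
w = train_linear_svm(X, y)
```

In practice the surveyed systems would use kernelized SVMs from an established library rather than this from-scratch linear variant.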
The use of deep learning models for the network intrusion detection task has been an active area of research in cybersecurity. Although several excellent surveys cover the growing body of research on this topic, the literature lacks an objective comparison of the different deep learning models within a controlled environment, especially on recent intrusion detection datasets. In this paper, we first introduce a taxonomy of deep learning models in intrusion detection and summarize the research papers on this topic. Then we train and evaluate four key deep learning models - feed-forward neural network, autoencoder, deep belief network and long short-term memory network - for the intrusion classification task on two legacy datasets (KDD 99, NSL-KDD) and two modern datasets (CIC-IDS2017, CIC-IDS2018). Our results suggest that deep feed-forward neural networks yield desirable evaluation metrics on all four datasets in terms of accuracy, F1-score, and training and inference time. The results also indicate that two popular semi-supervised learning models, autoencoders and deep belief networks, do not perform better than supervised feed-forward neural networks. The implementation and the complete set of results have been released for future use by the research community. Finally, we discuss the issues in the research literature that were revealed in the survey and suggest several potential future directions for research in machine learning methods for intrusion detection.
• Gives a taxonomy and survey of deep learning models for intrusion detection.
• Evaluates four deep learning models on four intrusion detection datasets.
• Feed-forward neural networks perform best across all metrics on all datasets.
• Discusses issues in intrusion detection research and future directions.
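The feed-forward network highlighted above can be sketched in miniature. This is a toy illustration, not the paper's released implementation: a one-hidden-layer sigmoid network trained by plain backpropagation with cross-entropy loss on an invented two-feature benign-vs-attack dataset.

```python
import math, random

def train_ffnn(X, y, hidden=4, lr=0.5, epochs=2000, seed=1):
    """Toy feed-forward network (one hidden layer, sigmoid activations)
    trained with per-sample gradient descent for binary classification."""
    rng = random.Random(seed)
    d = len(X[0])
    W1 = [[rng.uniform(-1, 1) for _ in range(d)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    W2 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    for _ in range(epochs):
        for x, t in zip(X, y):
            # forward pass
            h = [sig(sum(w * xi for w, xi in zip(row, x)) + b)
                 for row, b in zip(W1, b1)]
            out = sig(sum(w * hi for w, hi in zip(W2, h)) + b2)
            # backward pass: with cross-entropy loss the output delta is (out - t)
            d_out = out - t
            d_h = [d_out * W2[j] * h[j] * (1 - h[j]) for j in range(hidden)]
            for j in range(hidden):
                W2[j] -= lr * d_out * h[j]
                for i in range(d):
                    W1[j][i] -= lr * d_h[j] * x[i]
                b1[j] -= lr * d_h[j]
            b2 -= lr * d_out
    def predict(x):
        h = [sig(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(W1, b1)]
        return sig(sum(w * hi for w, hi in zip(W2, h)) + b2)
    return predict

# Invented flows: (normalized duration, normalized bytes); 1 = attack.
X = [[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.8]]
y = [0, 0, 1, 1]
classify = train_ffnn(X, y)
```

Real intrusion detection models are trained with minibatched optimizers on tens of features from datasets such as CIC-IDS2017; this sketch only shows the mechanics.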
Supermarket refrigeration systems are integral to food security and the global economy. Their massive scale, characterized by numerous evaporators, remote condensers, miles of intricate piping, and high working pressure, frequently leads to problematic leaks. Such leaks can have severe consequences, impacting not only the profits of the supermarkets but also the environment. With the advent of Industry 4.0 and machine learning techniques, data-driven automatic fault detection and diagnosis methods are becoming increasingly popular in managing supermarket refrigeration systems. This paper presents a novel leak-detection framework explicitly designed for supermarket refrigeration systems. This framework is capable of identifying both slow and catastrophic leaks, each exhibiting unique behaviours. A noteworthy feature of the proposed solution is its independence from the refrigerant level in the receiver, which is a common dependency in many existing leak-detection solutions. Instead, it focuses on parameters that are universally present in supermarket refrigeration systems. The approach utilizes the categorical gradient boosting regression model and a thresholding algorithm, focusing on leak-sensitive features as target features. These include the coefficient of performance, subcooling temperature, superheat temperature, mass flow rate, compression ratio, and energy consumption. In the case of slow leaks, only the coefficient of performance shows a response. However, for catastrophic leaks, all parameters except energy consumption demonstrate responses. This method detects slow leaks with an average F1 score of 0.92 within five days of occurrence. The catastrophic leak detection yields F1 scores of 0.7200 for the coefficient of performance, 1.0000 for the subcooling temperature, 0.4118 for the superheat temperature, 0.6957 for the mass flow rate, and 0.8824 for the compression ratio.
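The thresholding step described above, applied to residuals between an observed leak-sensitive feature (e.g. the coefficient of performance) and a model's prediction of it, might look roughly like the following sketch. The window size, the k-sigma rule, and all data are assumptions for illustration; the actual framework pairs the threshold with a categorical gradient boosting regressor as the predictor, which a constant baseline stands in for here.

```python
def detect_leak(observed, predicted, window=10, k=3.0):
    """Flag a leak when the rolling mean of residuals (observed minus
    predicted) drifts beyond k standard deviations of the residuals
    measured during an initial known-healthy window."""
    residuals = [o - p for o, p in zip(observed, predicted)]
    baseline = residuals[:window]
    mu = sum(baseline) / len(baseline)
    var = sum((r - mu) ** 2 for r in baseline) / len(baseline)
    sigma = var ** 0.5 or 1e-9  # guard against a perfectly flat baseline
    alerts = []
    for i in range(window, len(residuals)):
        recent = residuals[i - window + 1 : i + 1]
        m = sum(recent) / window
        if abs(m - mu) > k * sigma:
            alerts.append(i)  # sample index where drift exceeds threshold
    return alerts

# Synthetic COP trace: healthy oscillation around 3.0, then a leak-like drop.
observed = [3.0 + 0.01 * ((-1) ** i) for i in range(40)] + [2.5] * 20
predicted = [3.0] * 60  # placeholder for the gradient-boosting prediction
alerts = detect_leak(observed, predicted)
```

Averaging residuals over a window trades detection latency for robustness to sensor noise, which matters for the slow leaks the paper targets.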
Simultaneous localization and map-building (SLAM) continues to draw considerable attention in the robotics community due to the advantages it can offer in building autonomous robots. It examines the ability of an autonomous robot, starting in an unknown environment, to incrementally build an environment map and simultaneously localize itself within this map. Recent advances in computer vision have contributed a whole class of solutions for the challenge of SLAM. This paper surveys contemporary progress in SLAM algorithms, especially those using computer vision as the main sensing means, i.e., visual SLAM. We categorize and introduce these visual SLAM techniques under four main frameworks: Kalman filter (KF)-based, particle filter (PF)-based, expectation-maximization (EM)-based, and set membership-based schemes. Important topics of SLAM involving the different frameworks are also presented. This article complements other surveys in this field by being current as well as by reviewing a large body of research in the area of vision-based SLAM, which has not previously been covered. It clearly identifies the inherent relationship between state estimation via the KF versus the PF and EM techniques, all of which are derived from Bayes' rule. In addition to the probabilistic methods covered in other surveys, non-probabilistic approaches are also included.
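The KF-based framework mentioned above reduces, in one dimension, to a two-step Bayes update: predict inflates the belief's variance by process noise, and update fuses the measurement with the prior via the Kalman gain. A minimal sketch, assuming a static scalar state and invented noise values q and r:

```python
def kalman_1d(measurements, q=1e-4, r=0.04, x0=0.0, p0=1.0):
    """1-D Kalman filter: each step is Bayes' rule with Gaussian beliefs."""
    x, p = x0, p0  # belief mean and variance
    estimates = []
    for z in measurements:
        p = p + q               # predict: static state, uncertainty grows
        k = p / (p + r)         # Kalman gain = prior var / total var
        x = x + k * (z - x)     # update: posterior mean (Bayesian fusion)
        p = (1 - k) * p         # posterior variance shrinks
        estimates.append(x)
    return estimates

# Noisy measurements of a landmark coordinate (invented values).
est = kalman_1d([1.1, 0.9, 1.05, 0.95, 1.0])
```

Visual SLAM systems run the multivariate analogue (EKF) over a joint state of robot pose and landmark positions; the particle filter replaces the Gaussian belief with weighted samples but performs the same Bayes update.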
Fluorescence microscopic analysis of newly replicated DNA has revealed discrete granular sites of replication (RS). The average size and number of replication sites from early to mid S-phase suggest that each RS contains numerous replicons clustered together. We are using fluorescence laser scanning confocal microscopy in conjunction with multidimensional image analysis to gain more precise information about RS and their spatial-temporal dynamics. Using a newly improved image segmentation program, we report an average of ∼1,100 RS after a 5-min pulse labeling of 3T3 mouse fibroblast cells in early S-phase. Pulse-chase-pulse double labeling experiments reveal that RS take ∼45 min to complete replication. Appropriate calculations suggest that each RS contains an average of 1 Mbp of DNA, or ∼6 average-sized replicons. Double pulse-double chase experiments demonstrate that the DNA sequences replicated at individual RS are precisely maintained temporally and spatially as the cell progresses through the cell cycle and into subsequent generations. By labeling replicated DNA at the G1/S borders for two consecutive cell generations, we show that the DNA synthesized at early S-phase is replicated at the same time and sites in the next round of replication.
Rapid advances in smart devices tremendously facilitate our day-to-day lives. However, these devices can be exploited remotely via existing cyber vulnerabilities to cause disruption at the physical infrastructure level. In this paper, we discover a novel distributed and stealthy attack that uses malicious actuation of a large number of small-scale loads residing within a distribution network (DN). This attack is capable of cumulatively violating the underlying operational system limits, leading to widespread and prolonged disruptions. A key element of this attack is the efficient use of attack resources, planned via Stackelberg games. To mitigate this type of attack, we propose a countermeasure strategy that adaptively suppresses the adverse effects of the attack when it is detected in a timely manner. The effectiveness of the proposed mitigation strategy is demonstrated via theoretical convergence studies, practical evaluations, and comparisons with state-of-the-art strategies using realistic load flow and DN infrastructure models.
Automated human chromosome segmentation and feature extraction aim to improve the overall quality of genetic disorder diagnosis by addressing the limitations of tedious manual processes, such as expertise dependence, time inefficiency, observer variability, and fatigue errors. Nevertheless, significant differences caused by staining methods, chromosome damage which may occur during imaging, cell and staining debris, inhomogeneity, weak boundaries, morphological variations, premature sister chromatid separation, as well as the presence of overlapping, touching, dicentric, and bent chromosomes pose challenges in automated human chromosome segmentation and feature extraction. This review paper extensively discusses how the approaches presented in the literature have addressed these challenges, along with their strengths and limitations. Human chromosome segmentation algorithms are presented under four broad categories: thresholding, clustering, active contours, and convex-concave points-based methods. Chromosome feature extraction methods are discussed under two main categories, based on banding pattern and geometry. In addition, new insights for the improvement of fully automated karyotyping are provided.
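As a minimal illustration of the first segmentation category, thresholding, the sketch below implements Otsu's classic method in pure Python over a flattened list of grey-level pixel values. It stands in for the more elaborate chromosome-specific thresholding methods the review covers, which must additionally handle inhomogeneity and weak boundaries.

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: choose the grey level that maximizes the
    between-class variance of the background/foreground split."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_b = sum_b = 0.0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_b += hist[t]          # background weight (levels <= t)
        if w_b == 0:
            continue
        w_f = total - w_b       # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b       # background mean
        m_f = (sum_all - sum_b) / w_f  # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic bimodal image: dark chromosome pixels vs. bright background.
pixels = [10] * 50 + [200] * 50
t = otsu_threshold(pixels)
```

Because it is global, plain Otsu struggles with uneven staining; that is precisely why the review also surveys clustering, active-contour, and convex-concave point methods.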
• Analyzing Auditory Brainstem Responses (ABRs) is subjective and time-consuming for clinicians.
• We systematically reviewed the literature and found 34 articles that applied machine learning to the analysis of ABRs and met the inclusion criteria of the review.
• Three application categories of ABRs were found, and the use of ML for each category has been reviewed separately.
• A clear and comprehensive overview is provided of the ML techniques applied to ABRs, and several challenges are identified.
• Potential avenues are suggested to further explore ML paradigms for analysing ABRs, providing opportunities for future researchers.
The application of machine learning algorithms for assessing the auditory brainstem response has gained interest over recent years with a considerable number of publications in the literature. In this systematic review, we explore how machine learning has been used to develop algorithms to assess auditory brainstem responses. A clear and comprehensive overview is provided to allow clinicians and researchers to explore the domain and the potential translation to clinical care.
The systematic review was performed according to PRISMA guidelines. A search was conducted of the PubMed, IEEE-Xplore, and Scopus databases, focusing on human studies that have used machine learning to assess auditory brainstem responses. The search covered the period from January 1, 1990, to April 3, 2021. The Covidence systematic review platform (www.covidence.org) was used throughout the process.
A total of 5812 studies were found through the database search, and 451 duplicates were removed. The title and abstract screening process further reduced the article count to 89, and in the subsequent full-text screening, 34 articles met our full inclusion criteria.
Three categories of applications were found, namely neurologic diagnosis, hearing threshold estimation, and other (does not relate to neurologic or hearing threshold estimation). Neural networks and support vector machines were the most commonly used machine learning algorithms in all three categories. Only one study had conducted a clinical trial to evaluate the algorithm after development. Challenges remain in the amount of data required to train machine learning models. Suggestions for future research avenues are mentioned with recommended reporting methods for researchers.