The widespread acceptance and growth of the Internet and mobile technologies have revolutionized our existence. At the same time, the world is witnessing and suffering from technologically aided crime. These threats, including but not limited to hacking and intrusion, are a major concern for security experts, and effective intrusion detection remains a challenge of continuing research interest. This paper's main contribution is a host-based intrusion detection system using a C4.5-based detector built on the popular Consolidated Tree Construction (CTC) algorithm, which works efficiently in the presence of class-imbalanced data. An improved version of the random sampling mechanism, called Supervised Relative Random Sampling (SRRS), is proposed to generate a balanced sample from a highly class-imbalanced dataset at the detector's pre-processing stage. Moreover, an improved multi-class feature selection mechanism has been designed and developed as a filter component to select the most informative features of the IDS datasets for efficient intrusion detection. The proposed IDS has been validated against state-of-the-art intrusion detection systems. The results show accuracies of 99.96% and 99.95% on the NSL-KDD and CICIDS2017 datasets, respectively, using 34 features.
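The abstract does not spell out the SRRS algorithm itself; as a minimal, hypothetical sketch of the general idea of supervised balanced sampling at a pre-processing stage, a per-class random draw might look like the following (the function name and parameters are illustrative, not the paper's):

```python
import random
from collections import defaultdict

def balanced_sample(records, labels, per_class, seed=0):
    """Draw an equal-sized random sample from each class.

    A simplified stand-in for supervised balanced sampling; the actual
    SRRS mechanism described in the paper is more elaborate (it samples
    relative to class frequencies), and is not reproduced here."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for rec, lab in zip(records, labels):
        by_class[lab].append(rec)
    sample, sample_labels = [], []
    for lab, recs in by_class.items():
        k = min(per_class, len(recs))   # cap at the class's actual size
        sample.extend(rng.sample(recs, k))
        sample_labels.extend([lab] * k)
    return sample, sample_labels
```

On a 90/10 imbalanced input, `balanced_sample(..., per_class=10)` returns 10 records from each class, giving the detector a balanced training view.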
Molecular inversion probes (MIPs) enable cost-effective multiplexed targeted gene resequencing in large cohorts. However, the design of individual MIPs is a critical parameter governing the performance of this technology with respect to capture uniformity and specificity. MIPgen is a user-friendly package that simplifies the process of designing custom MIP assays for arbitrary targets. New logistic and SVM-derived models enable in silico prediction of assay success, and assay redesign exhibits improved coverage uniformity relative to previous methods, which in turn improves the utility of MIPs for cost-effective targeted sequencing for candidate gene validation and for diagnostic sequencing in a clinical setting.
MIPgen is implemented in C++. Source code and accompanying Python scripts are available at http://shendurelab.github.io/MIPGEN/.
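To illustrate the flavor of an in silico logistic model of assay success (MIPgen's actual models and features are not detailed here), the sketch below fits a logistic regression on synthetic probe features; the feature names (GC content, arm Tm mismatch, target length) and the toy success rule are assumptions for demonstration only:

```python
# Hypothetical sketch: logistic prediction of MIP capture success.
# Data, features, and the labeling rule are synthetic illustrations,
# not MIPgen's trained model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.uniform(0.2, 0.8, n),   # GC content of the capture target
    rng.uniform(0.0, 8.0, n),   # Tm mismatch between probe arms (deg C)
    rng.uniform(100, 250, n),   # target length (bp)
])
# Toy rule: probes with moderate GC and small arm Tm mismatch "succeed"
y = ((np.abs(X[:, 0] - 0.5) < 0.15) & (X[:, 1] < 4.0)).astype(int)

model = LogisticRegression().fit(X, y)
# Probability of success for a candidate probe design
p_success = model.predict_proba([[0.5, 1.0, 150]])[0, 1]
```

In a real pipeline, such a score would be used to rank or filter candidate probe designs before synthesis.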
Advanced driver assistance systems (ADAS) need to account for the driver's awareness of the environment to be used effectively. This study examines the impact of environmental features (e.g., visual complexity, object density, roadway type, lighting) on drivers' situation awareness (SA) through a controlled study with 40 participants. Using a split-plot design, participants were shown 30 out of 75 real-world driving scenarios displayed in a driving simulator environment. Participants' responses to Situation Awareness Global Assessment Technique (SAGAT) queries on the type and coordinates of objects in the scene were used to calculate SA scores, and a hurdle model was developed to estimate these scores. The key findings highlight visual complexity as a significant predictor of SA scores: this predictor was easy to compute and captured both the complexity of objects that affect road safety and the visual clutter in the background. The model showed that drivers were able to identify at least one object of interest even in environments with high visual complexity and many objects. A higher proportion of vulnerable road users was associated with a greater likelihood of a non-zero SA score, but the SA score itself was lower than in environments with higher proportions of cars. These findings provide insight into the environmental factors to be considered in SA predictive models.
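A hurdle model, as used above, combines a binary part (did the participant identify any object at all?) with a count part for the positive scores. A minimal sketch on synthetic data, assuming two illustrative predictors (visual complexity, object density) and simplifying the count part to an untruncated Poisson regression:

```python
# Sketch of a two-part hurdle model for SA scores on synthetic data.
# Predictors and coefficients are invented for illustration; the study's
# actual model specification is not reproduced here.
import numpy as np
from sklearn.linear_model import LogisticRegression, PoissonRegressor

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([
    rng.uniform(0, 1, n),     # visual complexity (normalized)
    rng.integers(1, 20, n),   # object density (objects in scene)
])
lam = np.exp(0.5 + 1.2 * X[:, 0] + 0.05 * X[:, 1])
y = rng.poisson(lam) * rng.binomial(1, 0.8, n)   # zero-inflated counts

# Part 1: probability of clearing the "hurdle" (non-zero SA score)
hurdle = LogisticRegression().fit(X, (y > 0).astype(int))
# Part 2: count model fit only on the positive observations
positives = y > 0
count = PoissonRegressor().fit(X[positives], y[positives])

# Expected SA score = P(score > 0) * E[score | score > 0]
x_new = np.array([[0.7, 12]])
expected_sa = hurdle.predict_proba(x_new)[0, 1] * count.predict(x_new)[0]
```

The two-part structure is what lets the model capture the study's finding that some factors affect whether any object is identified, while others affect how many.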
The monkeypox virus causes a zoonotic illness with a tropical distribution in Africa and, now, around the world. The disease is spread through contact with infected animals or humans, and can also spread from person to person through close contact with respiratory or bodily fluids. Fever, swollen lymph nodes, blisters, and crusted rashes characterize the disease, and the incubation period is five to twenty-one days. The rash caused by infection is difficult to distinguish from that of varicella and smallpox. Laboratory investigations are essential for diagnosis and surveillance, and novel tests are required for more accurate and faster diagnosis. Antiviral drugs are being used to treat monkeypox. Scarring and other sequelae are prevalent in survivors, with the case fatality rate varying from 1% to 11%. The virus was found in monkeys at a Danish research facility in 1958, from which the term 'monkeypox' is derived; the first human case was identified in a child in the Democratic Republic of the Congo (DRC) in 1970. The World Health Organisation (WHO) has recently declared monkeypox a public health emergency of international concern. This manuscript reviews the various aspects of monkeypox disease and its allopathic as well as alternative treatment options, and serves as a resource for healthcare professionals, researchers, and the general public.
The high leakage current has been one of the critical issues in SRAM-based Field Programmable Gate Arrays (FPGAs). In recent works, resistive non-volatile memories (NVMs) have been utilized to tackle this issue, thanks to their superior energy efficiency and fast power-on speed. Phase Change Memory (PCM) is one of the most promising resistive NVMs, with the advantages of low cost, high density and a high resistance ratio. However, most reported PCM-based FPGAs suffer from significant active leakage power and reliability issues. This paper presents a PCM-based non-volatile SRAM (nvSRAM) with low active leakage power and high reliability, achieved by biasing the PCM cells at 0 V during FPGA operation. Compared to the state of the art, the proposed nvSRAM-based 4-input look-up table (LUT) achieves a 174-fold reduction in active leakage power and a 15,000-fold increase in retention time. In addition, the proposed nvSRAM-based FPGA system significantly accelerates the loading speed to less than 1 ns with 2.54 fJ/cell loading energy.
In Mixed-Criticality (MC) systems, where each task has multiple Worst-Case Execution Times (WCETs) corresponding to the system operation modes, estimating appropriate WCETs for tasks in lower-criticality (LO) modes is essential to improve the system's timing behavior. While numerous studies focus on determining WCETs in the high-criticality mode, determining the appropriate WCET in the LO mode poses significant challenges and, due to its inherent complexity, has been addressed in only a few research works. This article introduces ESOMICS, a novel scheme for obtaining appropriate WCETs for LO modes, in which we propose an ML-based approach that estimates WCET from analysis of the application's source code, with the model trained on a comprehensive data set. The experimental results show a significant improvement in utilization, by up to 23.3% compared to state-of-the-art works, while the mode-switching probability is bounded by 7.19% in the worst-case scenario.
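To make concrete why LO-mode WCETs drive utilization, the toy sketch below computes per-mode utilization for a task set in which each high-criticality task carries two WCETs; the task parameters are invented, and the check shown is only the basic per-mode utilization bound, not ESOMICS or a full schedulability test:

```python
# Toy mixed-criticality task set: (wcet_lo, wcet_hi, period, criticality).
# In LO mode every task runs with its LO WCET; after a mode switch to HI,
# LO tasks are dropped and HI tasks are budgeted at their HI WCETs.
tasks = [
    (2.0, 5.0, 20.0, "HI"),
    (3.0, 7.0, 30.0, "HI"),
    (4.0, 4.0, 25.0, "LO"),
]

def utilization(tasks, mode):
    u = 0.0
    for wcet_lo, wcet_hi, period, crit in tasks:
        if mode == "HI" and crit == "LO":
            continue  # LO tasks are not guaranteed in HI mode
        wcet = wcet_hi if (mode == "HI" and crit == "HI") else wcet_lo
        u += wcet / period
    return u

u_lo = utilization(tasks, "LO")  # 2/20 + 3/30 + 4/25 = 0.36
u_hi = utilization(tasks, "HI")  # 5/20 + 7/30 ≈ 0.483
```

Smaller (but still safe) LO WCETs shrink `u_lo`, which is exactly the slack that a better LO-mode WCET estimate frees up for additional load.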
The massive parallelism that Graphics Processing Units (GPUs) provide for compute-intensive tasks makes them attractive for real-time systems such as autonomous vehicles, which run heavy Machine Learning (ML) and Computer Vision applications that benefit from the computing power of GPUs. However, such systems need a guarantee of timing predictability: the Worst-Case Execution Time (WCET) of each application must be estimated tightly and safely so that every application can be scheduled before its deadline, avoiding catastrophic consequences. As more applications use GPUs, running many applications simultaneously on the same GPU becomes necessary. To provide predictable performance for applications running in parallel, execution must be WCET-aware, which GPUs do not fully support in a multitasking environment. Nvidia recently added a feature called the Multi-Process Service (MPS), which allows different applications to run simultaneously in the same CUDA context by partitioning the GPU's compute resources. Using this feature, we can measure the interference from co-running GPU applications to estimate WCET. In this paper, we propose a novel technique to estimate the WCET of a GPU kernel using an ML approach. Our approach is based on the application's source code, and the model is trained on a large data set. The approach is flexible and can be applied to different GPU-sharing mechanisms. We let a victim kernel and enemy kernels execute in parallel on the GPU to obtain the maximum interference from the enemies when estimating the victim kernel's WCET; enemy kernels are chosen to cause the greatest slowdown by contending for the victim kernel's resources. We compare our implementation with state-of-the-art approaches to show its effectiveness.
Our ML approach reduces estimation time by 99% in most cases, since inference takes only seconds to predict WCET, and the resource consumption required is minimal compared with traditional approaches because we do not need to execute the application on the GPU for hours. Although our approach does not offer safety guarantees because of its empirical nature, the predicted WCETs were always higher than any observed execution time across all benchmarks, with a maximum observed overestimation factor of 11x.
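The core idea above, predicting WCET from static kernel features instead of hours of measurement, can be sketched as follows. The features, synthetic training data, regressor choice, and safety margin are all illustrative assumptions; the papers train on real source-code features and real timing data:

```python
# Hypothetical sketch: WCET prediction from static GPU-kernel features,
# with an empirical pessimism margin on top. Not a safe bound.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n = 300
X = np.column_stack([
    rng.integers(100, 10_000, n),  # instruction count
    rng.integers(10, 1_000, n),    # global memory operations
    rng.integers(32, 1024, n),     # threads per block
])
# Synthetic "observed worst-case time" (ms): grows with work and traffic
y = 0.001 * X[:, 0] + 0.01 * X[:, 1] + rng.normal(0, 0.5, n)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

SAFETY_MARGIN = 1.5  # empirical pessimism factor, an assumption here

def estimate_wcet(features):
    """Predict a WCET estimate (ms) for a kernel's static features."""
    return SAFETY_MARGIN * model.predict([features])[0]
```

Once trained, `estimate_wcet` answers in milliseconds of CPU time, which is where the reported 99% reduction in estimation time comes from relative to measurement-based campaigns.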
Increased focus on sustainability and energy decentralization has positively impacted the adoption of nanogrids. With this tremendous growth, load forecasting has become crucial for their daily operation. Since nanogrid loads vary widely, with sudden usage of large household electrical appliances, existing forecasting models, which mainly target less volatile loads, may not work well. Moreover, abrupt operation of electrical appliances in a nanogrid, even for short durations and especially in "Peak Hours", raises the energy cost substantially. In this paper, an ANN model with dynamic feature selection is developed to predict the hour-ahead load of nanogrids based on meteorological data and a load lag of 1 h (t-1). In addition, by thresholding the predicted load against the average load of previous hours, peak loads and their time indices are accurately identified. Numerical testing shows that the developed model can predict nanogrid loads with a Mean Square Error (MSE) of 0.03 kW, a Mean Absolute Percentage Error (MAPE) of 9%, and a coefficient of variation (CV) of 11.9%, and yields an average of 20% daily energy cost savings by shifting peak load to off-peak hours.
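The peak-identification step described above, thresholding the predicted load against the average of previous hours, can be sketched directly; the window length and threshold factor below are assumptions, as the abstract does not specify them:

```python
def find_peaks(predicted_load, window=3, factor=1.5):
    """Return indices of hours whose predicted load exceeds `factor`
    times the mean load of the preceding `window` hours.

    `window` and `factor` are illustrative tuning parameters."""
    peaks = []
    for t in range(window, len(predicted_load)):
        baseline = sum(predicted_load[t - window:t]) / window
        if predicted_load[t] > factor * baseline:
            peaks.append(t)
    return peaks

# Hour-ahead predictions in kW; hours 4 and 7 spike against recent usage
hourly = [1.0, 1.1, 0.9, 1.0, 3.2, 1.1, 1.0, 2.9]
```

Flagged hours (and their appliance loads) are then candidates for shifting to off-peak slots, which is the mechanism behind the reported cost savings.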
This study presents compressive strength predictions for Ground Granulated Blast Furnace Slag (GGBS) based cement mortars, in which GGBS partially replaces ordinary Portland cement, containing two different types of M-sand (normal M-sand and white M-sand). Mortar cubes of the defined mix ratios are examined for compressive strength at 7, 14 and 28 days. An Artificial Neural Network (ANN) is a useful tool for predicting strength from such data, making the work considerably easier. The compressive strength results of the GGBS-based cement mortars with the two types of M-sand at different ages were fed into the ANN toolbox in MATLAB to obtain strength predictions. Experimental results indicated that the GGBS-based mortar with white M-sand outperformed that with normal M-sand. The compressive strengths predicted by the ANN framework were in good agreement with the experimental results.
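The workflow above, feeding mix parameters and curing age into an ANN and reading out predicted strength, can be sketched as follows. The study uses MATLAB's ANN toolbox; scikit-learn stands in here, and the input encoding, network size, and synthetic data are all assumptions:

```python
# Hypothetical sketch of ANN compressive-strength prediction.
# Inputs: GGBS replacement (%), sand type (0 = normal M-sand,
# 1 = white M-sand), curing age (days); output: strength (MPa).
# The data are synthetic, not the paper's measurements.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 120
X = np.column_stack([
    rng.uniform(0, 50, n),       # GGBS replacement level (%)
    rng.integers(0, 2, n),       # sand type flag
    rng.choice([7, 14, 28], n),  # curing age (days)
])
# Toy strength relation: grows with age, small bonus for white M-sand
y = 20 + 0.4 * X[:, 2] + 2.0 * X[:, 1] + 0.05 * X[:, 0] + rng.normal(0, 1, n)

model = make_pipeline(
    StandardScaler(),            # scale inputs before the network
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0),
).fit(X, y)

# Predicted 28-day strength for a 30% GGBS mix with white M-sand
pred = model.predict([[30, 1, 28]])[0]
```

The same fitted model can then be queried across mix ratios and ages to interpolate strengths between the tested 7-, 14- and 28-day points.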
Energy consumption is a crucial domain in energy system management. Recently, a rapid rise in energy consumption has been observed throughout the world. Thus, almost every nation devises strategies and models to limit energy usage in various areas, ranging from large buildings to industrial firms and vehicles. With technological advancements, computational intelligence models have successfully contributed to predicting energy consumption. Machine learning and deep learning-based models enhance precision and robustness compared to traditional approaches, making them more reliable. This article reviews the various computational intelligence approaches currently utilized to predict energy consumption. An extensive survey procedure is conducted and presented in this study, and relevant works are discussed. Different criteria are considered during the aggregation of the relevant studies. The authors' perspective, future trends and various novel approaches are also presented as part of the discussion. This article thereby lays a foundation for further research on energy prediction.