Load forecasting is a vital part of smart grids, predicting the required electrical power using artificial intelligence (AI). Deep learning is broadly used for load forecasting in the smart grid, typically through artificial neural networks (ANNs). Generally, deep learning in the smart grid requires massive data aggregation or centralization and significant computational time. This paper presents a survey of deep learning-based load forecasting techniques from 2015 to 2020. The survey groups the studies by their deep learning techniques, Distributed Deep Learning (DDL) techniques, Back Propagation (BP)-based works, and non-BP-based works in the load forecasting process. From the survey, it was determined that reducing the dependency on data aggregation would be beneficial for lowering computational time in load forecasting. Therefore, a conceptual model of DDL for smart grids is presented, in which the HSIC (Hilbert-Schmidt Independence Criterion) bottleneck technique is incorporated to provide higher accuracy.
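The HSIC bottleneck trains each layer to maximize dependence with the labels while minimizing dependence with the inputs, replacing backpropagation; the dependence measure itself is the Hilbert-Schmidt Independence Criterion. As a minimal sketch of that measure (kernel choice, bandwidth, and data here are illustrative, not taken from the surveyed paper), the biased empirical HSIC estimate can be computed as:

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-d2 / (2 * sigma**2))

def hsic(X, Y, sigma=1.0):
    # Biased empirical HSIC: trace(K H L H) / (n - 1)^2
    n = X.shape[0]
    K = rbf_kernel(X, sigma)
    L = rbf_kernel(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
y_dep = x + 0.1 * rng.normal(size=(200, 1))   # strongly dependent on x
y_ind = rng.normal(size=(200, 1))             # independent of x

print(hsic(x, y_dep), hsic(x, y_ind))  # dependence score is far larger for y_dep
```

The estimator is always non-negative, and near zero for independent samples, which is what makes it usable as a layer-wise training signal.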
In recent times, the Heterogeneous Network (HetNet) has achieved indoor capacity and coverage through the deployment of small cells, i.e., femtocells (HeNodeBs). These HeNodeBs are plug-and-play Customer Premises Equipment associated with the internet protocol backhaul to the macrocell (macro-eNodeB). The random placement of HeNodeBs deployed co-channel with the macro-eNodeB causes severe system performance degradation. These HeNodeBs are therefore identified as the most significant cause of interference in Orthogonal Frequency-Division Multiple-Access based HetNets due to the restricted co-channel deployment. The CTI in such systems can significantly reduce the throughput, and outages can rise to unacceptable levels, leading to severe system performance degradation in HetNets. This paper presents a novel HGBBDSA-CTI approach capable of strategically allocating subcarriers, thereby improving throughput and reducing outage; the enhanced system performance mitigates CTI issues in HetNets. This paper also analyses the time complexity of the proposed HGBBDSA algorithm and compares it with the Genetic Algorithm-based Dynamic Subcarrier Allocation (DSA) and the Particle Swarm Optimization-based DSA. The key target of this study is to allocate the unoccupied subcarriers by sharing them among the HeNodeBs, and thereby to enhance system performance metrics such as HeNodeB throughput, the average throughput of HeNodeB users, and outage. The simulation results show that the proposed HGBBDSA-CTI approach enhances the average throughput by 92.05 and 74.44%, the throughput by 30.50 and 74.34%, and reduces the outage rate by 52.9 and 50.76% compared with the existing approaches. The results also indicate that the proposed HGBBDSA approach has lower time complexity than the existing approaches.
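The abstract does not specify the HGBBDSA algorithm itself, but the underlying task, sharing unoccupied subcarriers among HeNodeBs to raise Shannon-capacity throughput, can be illustrated with a hypothetical greedy baseline. All gains, demands, and the noise figure below are invented for the example:

```python
import math, random

def allocate_subcarriers(gains, demand):
    """Greedy DSA sketch: give each unoccupied subcarrier to the HeNodeB
    with the best channel gain that still has unmet demand.
    gains[h][s] = channel gain of HeNodeB h on subcarrier s (illustrative)."""
    n_henb, n_sc = len(gains), len(gains[0])
    alloc = {h: [] for h in range(n_henb)}
    for s in range(n_sc):
        # candidates that still need subcarriers, best gain first
        cands = sorted((h for h in range(n_henb) if len(alloc[h]) < demand[h]),
                       key=lambda h: gains[h][s], reverse=True)
        if cands:
            alloc[cands[0]].append(s)
    return alloc

def throughput(alloc, gains, noise=1e-3):
    # Shannon capacity per allocated subcarrier (unit bandwidth)
    return {h: sum(math.log2(1 + gains[h][s] / noise) for s in scs)
            for h, scs in alloc.items()}

random.seed(1)
gains = [[random.random() for _ in range(12)] for _ in range(3)]
alloc = allocate_subcarriers(gains, demand=[4, 4, 4])
print(alloc, throughput(alloc, gains))
```

A metaheuristic such as the paper's hybrid scheme, or the GA- and PSO-based DSAs it is compared against, would search over such allocations instead of committing greedily.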
Extension principles for picture fuzzy sets Hasan, Mohammad Kamrul; Ali, Md. Yasin; Sultana, Abeda ...
Journal of Intelligent & Fuzzy Systems, 01/2023, Volume 44, Issue 4
Journal Article
Peer-reviewed
A picture fuzzy set (PFS) is a recently developed tool for dealing with uncertainties in problems where the opinions are of yes, no, neutral, and refusal types. The extension principle is one of the key tools for describing uncertainties: it provides a general method for carrying classical mathematical concepts over to fuzzy quantities, and it has numerous applications in various areas of real life. However, there has been little work on extension principles for picture fuzzy sets. In this article, new extension principles, namely the minimal extension principle and the average extension principle, are proposed for picture fuzzy sets. Various properties of the minimal and average extension principles for PFSs are established, and some properties of Zadeh's extension principle for PFSs are proved. Finally, arithmetic operations for PFSs based on the average extension principle are developed with numerical illustrations.
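For orientation, a Zadeh-type extension principle for PFSs is commonly written as follows; the notation here is the standard one for picture fuzzy sets, not necessarily the exact formulation of this article:

```latex
% A = \{ (x,\ \mu_A(x),\ \eta_A(x),\ \nu_A(x)) : x \in X \},
% with \mu_A(x) + \eta_A(x) + \nu_A(x) \le 1, under a mapping f : X \to Y:
\mu_{f(A)}(y) =
\begin{cases}
  \sup_{x \in f^{-1}(y)} \mu_A(x), & f^{-1}(y) \neq \emptyset,\\
  0, & \text{otherwise},
\end{cases}
\qquad
\nu_{f(A)}(y) =
\begin{cases}
  \inf_{x \in f^{-1}(y)} \nu_A(x), & f^{-1}(y) \neq \emptyset,\\
  1, & \text{otherwise},
\end{cases}
```

with the neutrality degree $\eta_{f(A)}$ handled analogously by an infimum, so that the constraint $\mu + \eta + \nu \le 1$ is preserved on $Y$. The article's minimal and average extension principles replace the supremum/infimum aggregation with minimum- and average-based variants.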
Modern communication networks and digital control techniques are used in a smart grid. The first step is to classify the features of several communication networks and conduct a comparative investigation of the communication networks applicable to the smart grid. The integration of distributed generation has increased significantly as global energy demand rises and sustainable energy for electric vehicles and renewable energies is pursued worldwide. Additional explanations for this surge include environmental concerns, the reforming of the power sector, and advances in small-scale electricity generation technologies. Smart monitoring and control of interconnected systems are required to successfully integrate distributed generation into an existing conventional power system, and electric-vehicle-based smart grid technologies are capable of playing this part. Smart grids are crucial to avoid becoming locked into an obsolete energy infrastructure, to draw in new investment sources, and to build an effective and adaptable grid system. To achieve reliable, high-quality power systems, it is also necessary to apply intelligent grid technologies at the bulk power generation and transmission levels. This paper presents smart-grid-applicable communication networks and electric vehicles empowering distributed generation systems. Additionally, we address some constraints and challenges and make recommendations that will give proper guidelines for academics and researchers to resolve the current issues.
A Review on Text Steganography Techniques Majeed, Mohammed Abdul; Sulaiman, Rossilawati; Shukur, Zarina ...
Mathematics (Basel), 11/2021, Volume 9, Issue 21
Journal Article
Peer-reviewed
Open Access
There has been a persistent requirement for safeguarding documents and the data they contain, either in printed or electronic form. This is because the fabrication and faking of documents is prevalent globally, resulting in significant losses for individuals, societies, and industrial sectors, in addition to national security. Therefore, individuals are concerned about protecting their work and avoiding these unlawful actions. Different techniques, such as steganography, cryptography, and coding, have been deployed to protect valuable information. Steganography is an appropriate method, in which the user is able to conceal a message inside another message (the cover media). Most research on steganography utilizes cover media such as videos, images, and sounds. Notably, text steganography is usually not given priority because of the difficulty of identifying redundant bits in a text file. To embed information within a document, its attributes must be changed. These attributes may be non-displayed characters, spaces, resized fonts, or purposeful misspellings scattered throughout the text. However, such changes can be detectable by an attacker or other third party because of the minor alterations to the document. To address this issue, the document must be changed in such a manner that the change is not visible to the eye but can still be decoded using a computer. In this paper, an overview of existing research in this area is provided. First, we provide basic information about text steganography and its general procedure. Next, three classes of text steganography are explained: statistical and random generation, format-based methodologies, and linguistics. The techniques related to each class are analyzed, particularly the manner in which each provides a unique strategy for hiding secret data.
Furthermore, we review the existing works in the development of approaches and algorithms related to text steganography; this review is not exhaustive, and covers research published from 2016 to 2021. This paper aims to assist fellow researchers by compiling the current methods, challenges, and future directions in this field.
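The format-based class mentioned above hides bits in a document's layout rather than its words. As a minimal sketch of the idea (a generic toy scheme, not one of the surveyed algorithms), inter-word spacing can encode one bit per gap: a single space for 0, a double space for 1:

```python
def embed(cover: str, bits: str) -> str:
    """Encode one bit per inter-word gap: ' ' = 0, '  ' = 1."""
    words = cover.split()
    if len(bits) > len(words) - 1:
        raise ValueError("cover text too short for payload")
    out = [words[0]]
    for i, w in enumerate(words[1:]):
        gap = "  " if i < len(bits) and bits[i] == "1" else " "
        out.append(gap + w)
    return "".join(out)

def extract(stego: str, n_bits: int) -> str:
    """Read the gaps back: double space -> 1, single space -> 0."""
    bits, i = [], 0
    while i < len(stego) and len(bits) < n_bits:
        if stego[i] == " ":
            if i + 1 < len(stego) and stego[i + 1] == " ":
                bits.append("1")
                i += 2
            else:
                bits.append("0")
                i += 1
        else:
            i += 1
    return "".join(bits)

cover = "text steganography hides data in plain sight without arousing suspicion"
stego = embed(cover, "10110")
print(extract(stego, 5))  # -> "10110"
```

The scheme also illustrates the detectability problem the abstract raises: any tool that normalizes whitespace destroys or exposes the payload, which is why the surveyed literature pursues less visible carriers.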
Ensuring the security of critical Industrial Internet of Things (IIoT) systems is of utmost importance, with a primary focus on identifying cyber-attacks using Intrusion Detection Systems (IDS). Deep learning (DL) techniques are frequently utilized in the anomaly detection components of IDSs. However, these models often generate high false-positive rates, and their decision-making rationale remains opaque, even to experts. Gaining insights into the reasons behind an IDS's decision to block a specific packet can aid cybersecurity professionals in assessing the system's effectiveness and creating more cyber-resilient solutions. In this paper, we offer an explainable ensemble DL-based IDS to improve the transparency and robustness of DL-based IDSs in IIoT networks. The framework incorporates the Shapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) methods to elucidate the decisions made by DL-based IDSs, providing valuable insights to the experts responsible for maintaining IIoT network security and developing more cyber-resilient systems. The ToN_IoT dataset was used to evaluate the efficacy of the suggested framework. As a baseline intrusion detection system, an extreme learning machine (ELM) model was implemented and compared with other models. Experiments show the effectiveness of ensemble learning in improving the results.
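The core idea behind LIME-style explanations is simple enough to sketch directly: perturb the instance, query the black-box model, weight the perturbed samples by proximity, and fit a local linear surrogate whose coefficients act as feature importances. The sketch below uses a toy two-feature "model" in place of an IDS and plain NumPy least squares instead of the lime library; every name and constant is illustrative:

```python
import numpy as np

def black_box(X):
    # Stand-in for a trained IDS: P(attack) driven mostly by feature 0
    return 1 / (1 + np.exp(-(3 * X[:, 0] - 0.5 * X[:, 1])))

def lime_like(instance, predict, n_samples=2000, scale=0.5, seed=0):
    """LIME-style local explanation: sample around the instance, weight by
    proximity, fit a weighted linear surrogate; coefficients = importances."""
    rng = np.random.default_rng(seed)
    Z = instance + scale * rng.normal(size=(n_samples, instance.size))
    y = predict(Z)
    d = np.linalg.norm(Z - instance, axis=1)
    w = np.exp(-(d ** 2) / (2 * scale ** 2))       # proximity kernel
    A = np.hstack([Z, np.ones((n_samples, 1))])    # intercept column
    W = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * W, y * np.sqrt(w), rcond=None)
    return coef[:-1]  # per-feature local importance

x0 = np.array([0.2, 0.1])
importances = lime_like(x0, black_box)
print(importances)  # feature 0 dominates, matching the model's true weights
```

SHAP differs in that it attributes the prediction via Shapley values over feature coalitions rather than a single local linear fit, but both produce per-feature scores an analyst can inspect.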
While the cloudification of networks with a micro-services-oriented design is a well-known feature of 5G, the 6G era of networks is closely related to intelligent network orchestration and management. Consequently, artificial intelligence (AI), machine learning (ML), and deep learning (DL) have a big part to play in the 6G paradigm that is being imagined. Future end-to-end automation of networks requires proactive threat detection, the use of clever mitigation strategies, and confirmation that 6G networks will be self-sustaining. To strengthen and consolidate the role of AI in safeguarding 6G networks, this article explores how AI may be employed in 6G security. To achieve this, a novel anomaly detection system for 6G networks (AD6GNs) based on ensemble learning (EL) for communication networks was redeveloped in this study. The first stage in the EL-ADCN process is pre-processing. The second stage is feature selection, which applies a reimplemented hybrid approach combining ensemble learning with correlation-based feature selection and random forest (CFS-RF). UNSW_NB2015, CIC_IDS2017, NSL_KDD, and CICDDOS2019 are the four datasets; each is reduced in dimensionality, and the best feature subset for each is determined separately. Hybrid EL techniques are used in the third step to detect intrusions. Average voting is employed as the aggregation method, and two classifiers, support vector machines (SVM) and random forests (RF), are adapted as the EL algorithms for bagging and AdaBoost, respectively. The last step tests the system using both binary and multi-class classification. The best experimental results were obtained by applying 30, 35, 40, and 40 features of the reimplemented system to the four datasets NSL_KDD, UNSW_NB2015, CIC_IDS2017, and CICDDOS2019. For the NSL_KDD dataset, the accuracy was 99.5% with a false alarm rate of 0.0038; for the UNSW_NB2015 dataset, the accuracy was 99.9% with a false alarm rate of 0.0076; and for the CIC_IDS2017 dataset, the accuracy was 99.8% with a false alarm rate of 0.0009. Finally, the accuracy was 99.95426% for the CICDDOS2019 dataset, with a false alarm rate of 0.00113.
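The aggregation step described above, averaging the class probabilities of an SVM and a random forest, corresponds to soft voting in scikit-learn. The sketch below shows only that voting step on synthetic data; the bagging/AdaBoost wrappers, the intrusion datasets, and all hyperparameters from the paper are omitted or replaced with illustrative stand-ins:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for an intrusion dataset (features/labels illustrative)
X, y = make_classification(n_samples=1000, n_features=20, n_informative=8,
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

# Soft voting averages the two classifiers' predicted class probabilities
ensemble = VotingClassifier(
    estimators=[("svm", SVC(probability=True, random_state=42)),
                ("rf", RandomForestClassifier(n_estimators=100, random_state=42))],
    voting="soft")
ensemble.fit(X_tr, y_tr)
acc = ensemble.score(X_te, y_te)
print(f"ensemble accuracy: {acc:.3f}")
```

Note that `probability=True` is required on the SVC so that soft voting has probabilities to average; with `voting="hard"` the ensemble would instead take a majority vote over predicted labels.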
Handoff management is an indispensable component in supporting network mobility. A handoff situation arises when the Mobile Router (MR) or Mobile Node (MN) crosses between different wireless communication access technologies. At the time of inter-technology handoff, a multiple-interface MR can exploit multihoming features such as enhanced availability and traffic load balancing with seamless flow distribution. These multihoming features are largely responsible for reducing network delays during inter-technology handoff. This article proposes a multihoming-based Mobility Management in Proxy NEMO (MM-PNEMO) scheme that exploits the benefits of using multiple interfaces. To support the proposed scheme, a numerical framework is developed to assess the performance of the MM-PNEMO scheme. The performance is evaluated in a state-of-the-art numerical simulation approach focusing on the key success metrics of signalling cost and packet delivery cost, which together determine the total handoff cost. The numerical simulation results show that the proposed MM-PNEMO scheme reduces the average handoff cost to 60% of that of the existing NEMO Basic Support Protocol (NEMO-BSP) and PNEMO.
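The cost structure named in the abstract, total handoff cost as signalling cost plus packet delivery cost, can be made concrete with a toy numeric model. Every constant below is hypothetical, not a parameter value from the paper; the point is only to show why shrinking the disruption window via a second interface cuts the total:

```python
def handoff_cost(signalling_msgs, msg_cost, handoff_latency, pkt_rate, pkt_cost):
    """Total handoff cost = signalling cost + packet-delivery cost.
    Packets arriving during the handoff latency must be buffered/tunnelled.
    All constants are illustrative, not the paper's parameter values."""
    signalling = signalling_msgs * msg_cost
    delivery = handoff_latency * pkt_rate * pkt_cost
    return signalling + delivery

# Single-interface MR: traffic stalls for the full attach + binding latency
single = handoff_cost(signalling_msgs=6, msg_cost=2.0,
                      handoff_latency=1.5, pkt_rate=100, pkt_cost=0.1)
# Multihomed MR: the secondary interface keeps flows alive, so the effective
# disruption window (and the buffered traffic) shrinks
multi = handoff_cost(signalling_msgs=6, msg_cost=2.0,
                     handoff_latency=0.4, pkt_rate=100, pkt_cost=0.1)
print(single, multi)  # 27.0 16.0: same signalling, much smaller delivery cost
```

In this toy setting the signalling term is unchanged, so the entire saving comes from the shorter disruption window, which mirrors the multihoming argument the abstract makes.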