Blockchains guarantee data integrity through the consensus of distributed ledgers maintained by multiple validation nodes called miners. For this reason, any blockchain system can be critically disabled by a malicious attack from a majority of the nodes (e.g., a 51% attack). Such attacks are more likely to succeed when the number of nodes required for consensus is small. Recently, as blockchains have grown very large (making them difficult to store, send, receive, and manage), sharding has been considered as a technology to improve the transaction throughput and scalability of blockchains. Sharding distributes block validators into disjoint sets that process transactions in parallel. As a result, each shard group has fewer validators, which makes shard-based blockchains more vulnerable to 51% attacks than blockchains that do not use sharding. To solve this problem, this paper proposes a trust-based shard distribution (TBSD) scheme that assigns potentially malicious nodes in the network to different shards, preventing malicious nodes from gaining a dominating influence over the consensus of a single shard. TBSD prevents malicious miners from gathering in one shard by integrating a trust management system with a genetic algorithm (GA). First, the trust of every node is computed based on previous consensus results. Then, a GA is used to compute a shard distribution set that prevents collusion among malicious miners. The performance evaluation shows that the proposed TBSD scheme results in a shard distribution with a higher level of fairness than existing schemes, which provides improved protection against malicious attacks.
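To make the GA step concrete, the following Python sketch evolves a shard assignment that balances aggregate distrust across shards. The trust values, the variance-based fitness function, and the GA hyperparameters are illustrative assumptions for exposition, not the exact formulation used in TBSD.

```python
import random

NUM_NODES, NUM_SHARDS = 40, 4
POP_SIZE, GENERATIONS, MUT_RATE = 60, 200, 0.02

# Hypothetical trust scores in [0, 1]; in TBSD these come from prior consensus results.
trust = [random.random() for _ in range(NUM_NODES)]
distrust = [1.0 - t for t in trust]

def fitness(assign):
    """Prefer assignments that spread total distrust evenly across shards."""
    load = [0.0] * NUM_SHARDS
    for node, shard in enumerate(assign):
        load[shard] += distrust[node]
    mean = sum(load) / NUM_SHARDS
    variance = sum((l - mean) ** 2 for l in load) / NUM_SHARDS
    return -variance  # lower variance -> fairer distribution

def tournament(pop, k=3):
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]  # uniform crossover

def mutate(assign):
    return [random.randrange(NUM_SHARDS) if random.random() < MUT_RATE else s
            for s in assign]

# Standard generational GA loop: selection, crossover, mutation.
pop = [[random.randrange(NUM_SHARDS) for _ in range(NUM_NODES)]
       for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP_SIZE)]

best = max(pop, key=fitness)
print("best fairness (neg. variance):", fitness(best))
```

Elitism, repair operators, or constraints on shard sizes could be layered on top, but the core loop of selection, crossover, and mutation remains the same.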
Internet-of-Things (IoT) networks generate massive amounts of data while supporting various applications, so the security and protection of IoT data are very important. In particular, blockchain technology supporting IoT networks is considered one of the most secure and scalable database storage solutions. However, existing blockchain systems suffer from scalability problems due to low throughput and high resource consumption, and from security problems due to malicious attacks. Several studies have proposed blockchain technologies that improve either scalability or the security level, but few studies improve both at the same time. In addition, most existing studies do not consider malicious attack scenarios in the consensus process, which degrades the blockchain security level. To solve the scalability and security problems simultaneously, this paper proposes a Dueling Double Deep-Q-network with Prioritized experience replay (D3P) based secure trust-based delegated consensus blockchain (TDCB-D3P) scheme that optimizes blockchain performance by applying deep reinforcement learning (DRL) technology. The TDCB-D3P scheme uses a trust system with a delegated consensus algorithm to ensure the security level and reduce computing costs. In addition, DRL is used to compute the optimum blockchain parameters under dynamic network states and to maximize the transactions per second (TPS) performance and security level. The simulation results show that the TDCB-D3P scheme provides superior TPS and resource consumption performance. Furthermore, in blockchain networks with malicious nodes, the simulation results show that the proposed scheme significantly improves the security level compared to existing blockchain schemes by effectively reducing the influence of malicious nodes.
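As a rough sketch of the learning machinery named in the title, the following PyTorch code shows a minimal dueling Q-network and a Double-DQN target computation. The state encoding (e.g., network load and node trust) and layer sizes are assumptions for illustration; the prioritized experience replay component of D3P is omitted for brevity.

```python
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    """Minimal dueling Q-network: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""
    def __init__(self, state_dim, num_actions, hidden=128):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)                 # state-value stream V(s)
        self.advantage = nn.Linear(hidden, num_actions)   # advantage stream A(s,a)

    def forward(self, state):
        h = self.feature(state)
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=-1, keepdim=True)

def double_dqn_target(online, target, reward, next_state, gamma=0.99):
    """Double-DQN target: the online net picks the action,
    the (slowly updated) target net evaluates it."""
    with torch.no_grad():
        best = online(next_state).argmax(dim=-1, keepdim=True)
        return reward + gamma * target(next_state).gather(-1, best).squeeze(-1)
```

In a prioritized replay buffer, transitions would additionally be sampled in proportion to their TD error rather than uniformly, which is the remaining "P" in D3P.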
High levels of scalability and reliability are needed to support massive Internet-of-Things (IoT) services. In particular, blockchains can be effectively used to safely manage data from large-scale IoT networks. However, current blockchain systems have low transactions per second (TPS) rates and scalability limitations that make them unsuitable for such networks. To solve these issues, this article proposes a deep Q-network shard-based blockchain (DQNSB) scheme that dynamically finds the optimal throughput configuration. In this article, a novel analysis of sharded blockchain latency and a security-level characterization are provided. Using the analysis equations, the DQNSB scheme estimates the level of maliciousness and adapts the blockchain parameters to enhance the security level, considering the extent of malicious attacks on the consensus process. To achieve this, deep reinforcement learning (DRL) agents are trained to find the optimal system parameters in response to the network status and to adaptively optimize the system throughput and security level. The simulation results show that the proposed DQNSB scheme provides a much higher TPS than existing DRL-enabled blockchain technology while maintaining a high security level.
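Security-level analyses of sharded blockchains are commonly based on the probability that random node sampling places too many malicious nodes in one shard. The sketch below computes such a hypergeometric tail; it illustrates the general idea, not necessarily the exact analysis equations of the article, and the example numbers are assumptions.

```python
from math import comb

def shard_failure_prob(total_nodes, malicious, shard_size, threshold):
    """Probability that a randomly sampled shard contains at least
    `threshold` malicious nodes (hypergeometric tail)."""
    total = comb(total_nodes, shard_size)
    return sum(comb(malicious, k) * comb(total_nodes - malicious, shard_size - k)
               for k in range(threshold, min(malicious, shard_size) + 1)) / total

# Example: 600 nodes, 20% malicious, shards of 100, BFT threshold of one third.
p = shard_failure_prob(600, 120, 100, threshold=34)
print(f"per-shard failure probability: {p:.3e}")
```

Varying the shard size in this model shows the core trade-off the DRL agent navigates: smaller shards raise throughput but also raise the per-shard failure probability.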
Circuit design plays an essential role in all consumer electronics products. Printed circuit board (PCB) and very-large-scale integration (VLSI) circuit design requires optimizing the placement of electronic components and the wire routing that connects them. To date, circuit routing has largely been performed manually by experts, which greatly increases human resource costs and design time. Such heuristic circuit designs are not optimized and may contain errors, which is why automated circuit routing algorithms are important. However, it is difficult to obtain an optimal solution for circuit routing because it is an NP-hard problem. In addition, poor circuit routing increases the wire length of the circuit, which increases circuit cost and weight and degrades performance. To achieve routing optimization, many techniques have been proposed, some of which apply artificial intelligence (AI) to improve overall performance and reduce design time. Accordingly, in this paper, the routing problems in PCB and VLSI design are explained, and the technologies proposed to solve them are introduced. In particular, a detailed investigation and analysis of AI technologies grafted into circuit routing algorithms is presented, along with the key considerations for AI-based routing algorithms.
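As background on the classical grid-routing techniques such a survey covers, here is a minimal sketch of the Lee maze-routing algorithm, a breadth-first search that finds a shortest rectilinear path around obstacles. This is illustrative background only, not an algorithm proposed in the paper.

```python
from collections import deque

def lee_route(grid, start, goal):
    """Classic Lee maze routing: BFS on a grid where 1 = blocked cell.
    Returns a shortest rectilinear path from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:  # backtrace the path via parent pointers
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in parent:
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(lee_route(grid, (0, 0), (2, 0)))
```

The exhaustive wavefront expansion is what makes Lee routing optimal per net but slow at scale, which is one motivation for the AI-assisted routers the survey examines.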
In this article, a deep reinforcement learning (DRL) control scheme is proposed to satisfy the strict Quality-of-Service (QoS) requirements of ultra-reliable low-latency communication (URLLC) and enhanced mobile broadband (eMBB) using 5G multiple radio access technology (RAT)-based partial offloading and multi-access edge computing (MEC) resource allocation. In the proposed scheme, the user equipment (UE) makes optimal offloading decisions while the MEC server dynamically adjusts its resources based on offloading requests from multiple UEs using DRL technology. The aim of the proposed scheme is to minimize the energy consumption of the UEs while maximizing the system utility (SU) performance, which is composed of the spectral efficiency (SE) and the offloading success rate (OSR) of the MEC server. In addition, multiagent distributed learning and best experience push (BEP) techniques are used to enhance the learning efficiency of the DRL framework. The simulation results show that the proposed scheme provides improved SU and energy consumption performance compared to the benchmark offloading schemes.
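Below is a minimal sketch of a generic partial-offloading energy/delay model of the kind such schemes optimize. The constants (transmit power, effective switched capacitance kappa) and the parallel-execution assumption are illustrative, not the article's exact system model.

```python
def offload_metrics(task_bits, cycles_per_bit, x, f_local, f_mec, rate_bps,
                    p_tx=0.2, kappa=1e-27):
    """Partial offloading: a fraction x of the task goes to the MEC server.
    Returns (UE energy in joules, completion delay in seconds).
    The CPU energy model (kappa * f^2 per cycle) is a common assumption."""
    local_bits = (1.0 - x) * task_bits
    offload_bits = x * task_bits
    t_local = local_bits * cycles_per_bit / f_local
    e_local = kappa * f_local ** 2 * local_bits * cycles_per_bit
    t_tx = offload_bits / rate_bps          # uplink transmission time
    e_tx = p_tx * t_tx                      # transmission energy at the UE
    t_mec = offload_bits * cycles_per_bit / f_mec
    # Local execution and offloading proceed in parallel.
    delay = max(t_local, t_tx + t_mec)
    return e_local + e_tx, delay

energy, delay = offload_metrics(task_bits=8e6, cycles_per_bit=100, x=0.6,
                                f_local=1e9, f_mec=10e9, rate_bps=50e6)
print(f"UE energy: {energy:.4f} J, delay: {delay:.4f} s")
```

The DRL agent effectively learns how x (and the MEC resource shares) should move as the rate, server load, and task sizes fluctuate, which a static formula like this cannot capture.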
Advancements in metaverse modeling & simulation (M&S) are becoming possible due to the realization of extended reality (XR) systems, which combine mixed reality (MR) with advanced human-computer interaction (HCI) devices. To use these technologies appropriately, demanding system requirements, such as high data rates and low latency, must be satisfied. For this, fifth-generation (5G) New Radio (NR) technology, which achieves high data rates, low latency, and massive connectivity, can be applied. Although the mmWave technology adopted in 5G can deliver high data rates by using high frequencies, it is easily vulnerable to blockage. To alleviate this problem, Multipath Transmission Control Protocol (MPTCP), which uses multiple Transmission Control Protocol (TCP) subflows, can be used. However, MPTCP suffers from reordering delay caused by out-of-order packet arrivals. To overcome this issue, this paper extends the linked increases algorithm (LIA) into the minimized reordering delay LIA (MRLIA) scheme, a new MPTCP congestion control scheme that increases the data rate by minimizing the reordering delay while using network resources fairly. Through simulation, it is confirmed that the proposed method provides improved goodput and latency compared to existing methods.
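Since MRLIA extends LIA, the baseline coupled-increase rule of standard LIA (RFC 6356) is useful context. The sketch below computes the per-ACK congestion-window increase for each subflow; MRLIA's reordering-delay modification is not reproduced here, as its details are beyond this abstract.

```python
def lia_alpha(subflows):
    """RFC 6356 coupled increase factor for MPTCP LIA.
    subflows: list of (cwnd_in_segments, rtt_in_seconds)."""
    total = sum(cwnd for cwnd, _ in subflows)
    best = max(cwnd / rtt ** 2 for cwnd, rtt in subflows)
    denom = sum(cwnd / rtt for cwnd, rtt in subflows) ** 2
    return total * best / denom

def lia_increase(subflows, i):
    """Per-ACK cwnd increase (in segments) on subflow i: the coupled term
    is capped so the MPTCP flow is never more aggressive than plain TCP."""
    total = sum(cwnd for cwnd, _ in subflows)
    alpha = lia_alpha(subflows)
    cwnd_i = subflows[i][0]
    return min(alpha / total, 1.0 / cwnd_i)

# Two subflows: a fast low-RTT path and a slower one.
flows = [(10.0, 0.02), (6.0, 0.08)]
print([lia_increase(flows, i) for i in range(len(flows))])
```

The RTT-weighted coupling is exactly what MRLIA tunes: by shaping how quickly each subflow's window grows, it can reduce the RTT imbalance that causes out-of-order arrivals at the receiver.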
With the advent of 5G, the development of extended reality (XR) technology, which combines augmented reality (AR), virtual reality (VR), and advanced human-computer interaction (HCI) technology, is considered one of the key technologies of future metaverse engineering. In particular, XR real-time modeling and simulation (M&S) devices that can be applied to various fields (e.g., emergency training simulations) involve tasks with large amounts of data to process. However, if an XR task is processed only by wireless user equipment (UE), the UE's energy may be quickly depleted and the quality of service (QoS) may not be satisfied. To solve these problems, this paper proposes a partial offloading optimization scheme based on multi-access edge computing (MEC). In addition, deep reinforcement learning (DRL) is used to reflect the dynamic state of the MEC system and to minimize the delay. The simulation results show that the proposed scheme optimizes the delay performance by efficiently offloading the XR tasks.
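For intuition about the offloading decision the DRL agent learns, a simplified closed-form split is sketched below: with static parameters and parallel local/offload execution, delay is minimized when both branches finish together. The parameter values are hypothetical, and the paper's DRL approach exists precisely because real channel and server states are dynamic rather than fixed.

```python
def optimal_split(task_bits, cycles_per_bit, f_local, f_mec, rate_bps):
    """Delay-optimal offloading fraction when local execution and
    (transmit + edge compute) run in parallel: pick x so both finish
    at the same time.  A simplification for exposition only."""
    cycles = task_bits * cycles_per_bit
    t_local_full = cycles / f_local                      # all-local delay
    t_off_full = task_bits / rate_bps + cycles / f_mec   # all-offload delay
    x = t_local_full / (t_local_full + t_off_full)
    delay = (1.0 - x) * t_local_full
    return x, delay

x, delay = optimal_split(8e6, 100, f_local=1e9, f_mec=10e9, rate_bps=50e6)
print(f"offload fraction: {x:.3f}, completion delay: {delay:.3f} s")
```

When the uplink rate or MEC capacity changes, the optimal x moves with it; tracking that moving target online is the role of the DRL controller.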
The advancement of autonomous driving and Advanced Driver Assistance System (ADAS) technology is improving driving for the general public. However, due to a lack of public trust in self-driving vehicles, the actual use of Autonomous Vehicles (AVs) is still surprisingly limited. Motivated by this need, this paper proposes a customized ADAS that takes each driver's driving style into account so that individuals feel more comfortable with autonomous driving. In this paper, a novel customized ADAS algorithm using a Support Vector Machine (SVM) with high classification accuracy is proposed, which categorizes drivers into assertive and defensive driving styles. Since the importance of the ADAS parameters that affect driving propensity varies with each driver's driving situation, this paper compares and analyzes drivers' driving styles in three driving scenarios. Each driver's driving data is collected using CARLA, an Unreal Engine-based realistic simulator that can imitate real-world scenarios. Based on this precise categorization, the ADAS sensors can enable more advanced driving safety support. The proposed scheme is particularly significant since the present state of the art in autonomous driving is at Level 3, which calls for sophisticated functions that can assist drivers through ADAS technology.
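A minimal scikit-learn sketch of the SVM classification step is shown below. The per-driver features, synthetic data, and RBF kernel are illustrative stand-ins; in the paper, the features are derived from CARLA driving logs across three scenarios.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical per-driver features: mean speed (km/h), mean throttle,
# hard-braking rate, mean headway (s).  Labels: 0 = defensive, 1 = assertive.
rng = np.random.default_rng(0)
scales = [8.0, 0.1, 0.05, 0.4]
defensive = rng.normal([50, 0.3, 0.05, 2.5], scales, size=(100, 4))
assertive = rng.normal([80, 0.7, 0.30, 1.0], scales, size=(100, 4))
X = np.vstack([defensive, assertive])
y = np.array([0] * 100 + [1] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
# Scaling before the SVM matters because the features have very different ranges.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```

The synthetic classes here are deliberately well separated; with real driving logs, feature selection per scenario is what determines the classification accuracy the paper reports.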