The demand for a new generation of high-temperature dielectric materials for capacitive energy storage has been driven by the rise of high-power applications such as electric vehicles, aircraft, and pulsed power systems, where the power electronics are exposed to elevated temperatures. Polymer dielectrics are lightweight, scalable, mechanically flexible, and highly reliable, and offer high dielectric strength, but they are limited to relatively low operating temperatures. The limited energy density of existing polymer nanocomposite-based dielectrics at high temperatures also presents a major barrier to achieving significant reductions in the size and weight of energy devices. Here we report sandwich structures as an efficient route to high-temperature dielectric polymer nanocomposites that simultaneously possess high dielectric constant and low dielectric loss. In contrast to the conventional single-layer configuration, the rationally designed sandwich-structured polymer nanocomposites integrate the complementary properties of spatially organized multicomponents in a synergistic fashion to raise the dielectric constant and thereby greatly improve discharged energy densities while retaining low loss and high charge–discharge efficiency at elevated temperatures. At 150 °C and 200 MV m−1, an operating condition relevant to electric vehicle applications, the sandwich-structured polymer nanocomposites outperform state-of-the-art polymer-based dielectrics in terms of energy density, power density, charge–discharge efficiency, and cyclability. The excellent dielectric and capacitive properties of these polymer nanocomposites may pave the way for widespread applications in modern electronics and power modules operating under harsh conditions.
Accounting has always been influenced by digital technology, although much of that influence has consisted of replacing analogue instruments with digital versions. A blockchain is a digital ledger used to record transactions between different participants in a network. It is an internet-based, peer-to-peer distributed ledger that contains all transactions since its inception. Blockchain technology has the potential to transform business by enabling the transfer of valuable digital assets, such as bitcoin, without the need for a third-party intermediary. Blockchain can be viewed as a type of database or digital ledger and is widely used by financial organizations; it is a distributed ledger that keeps records of immutable and verifiable data. The technology permits decentralized ledger transactions to be produced without the intervention of a third party, and this decentralization gives such networks a high degree of protection. The aim of this study is to investigate the various decision-making factors affecting the adoption of blockchain technology in the field of accounting. The results show that security and privacy, transparency and auditability, immutability, cost reduction, real-time transactions, and flexibility are the factors most likely to influence the adoption of blockchain technology. The results also indicate that Quorum, SAP HANA, and Ethereum are the most consistent and trusted blockchain platforms, being found the most suitable, secure, and robust.
Received: 25 October 2021 / Accepted: 2 February 2022 / Published: 5 March 2022
Blockchain, as the underlying technology of cryptocurrencies, has attracted significant attention and has been adopted in numerous applications, such as the smart grid and the Internet of Things. However, blockchain faces a significant scalability barrier, which limits its ability to support services with frequent transactions. Edge computing, on the other hand, is introduced to extend cloud resources and services to the edge of the network, but it currently faces challenges in decentralized management and security. Integrating blockchain and edge computing into one system can enable reliable access and control of the network, storage, and computation distributed at the edges, hence providing a large scale of network servers, data storage, and validity computation near the end users in a secure manner. Despite the promise of integrated blockchain and edge computing systems, scalability enhancement, self-organization, functions integration, resource management, and new security issues remain to be addressed before widespread deployment. In this survey, we investigate the work that has been done to enable integrated blockchain and edge computing systems and discuss the research challenges. We identify several vital aspects of the integration of blockchain and edge computing: motivations, frameworks, enabling functionalities, and challenges. Finally, some broader perspectives are explored.
Today, cloud computing applications are rapidly constructed from services belonging to different cloud providers and service owners. This work presents an inter-cloud elasticity framework that focuses on cloud load balancing based on dynamic virtual machine reconfiguration whenever variations in load or in user request volume are observed. We design a dynamic reconfiguration system, called the inter-cloud load balancer (ICLB), that scales virtual resources up or down (thus providing automated elasticity) while eliminating service downtimes and communication failures. It includes an inter-cloud load balancer that distributes incoming user HTTP traffic across multiple instances of inter-cloud applications and services, and it performs dynamic reconfiguration of resources according to real-time requirements. The experimental analysis covers different topologies, showing how real-time traffic variation (using real-world workloads) affects resource utilization and demonstrating better resource usage in the inter-cloud setting.
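As a rough illustration of the behavior described above, the following Python sketch distributes incoming requests round-robin across instances and adds or removes replicas when load crosses a threshold. The class name, thresholds, and API are hypothetical simplifications, not the actual ICLB framework, which additionally performs dynamic VM reconfiguration.

```python
from itertools import cycle

class InterCloudLoadBalancer:
    """Minimal round-robin balancer over instances from multiple clouds.

    Hypothetical sketch: only illustrates request distribution and a
    naive threshold-based scale-up/down (elasticity) policy.
    """

    def __init__(self, instances):
        self.instances = list(instances)
        self._rr = cycle(self.instances)

    def route(self, request):
        # Round-robin dispatch of an incoming HTTP request.
        instance = next(self._rr)
        return instance, request

    def scale(self, load, high=0.8, low=0.2):
        # Elasticity: add a replica when load is high, drop one when low.
        if load > high:
            self.instances.append(f"vm-{len(self.instances)}")
        elif load < low and len(self.instances) > 1:
            self.instances.pop()
        self._rr = cycle(self.instances)
```

A caller would route each HTTP request through `route()` and periodically feed an observed utilization figure into `scale()`; the real framework instead reconfigures VMs to avoid downtime during scaling.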
Bitcoin and Ethereum, respectively the first and second generations of blockchains, exhibit two main problems, mostly connected to the increase of network traffic and load on their respective networking and service models: scalability and interoperability. To solve these issues, several technologies have been introduced, paving the way to the so-called third-generation blockchains, which are divided into three main categories: (1) Layer 1 solutions, (2) rollups, and (3) side-chains. We present a validated framework for the evaluation and comparison of these categories, based on the three main non-functional aspects (reflecting a trilemma) that discriminate their use for the design and orchestration of complex blockchain-oriented service applications, namely scalability, decentralization, and security.
The performance evaluation of video transport mechanisms becomes increasingly important as encoded video accounts for growing portions of the network traffic. Compared to the widely studied MPEG-4 encoded video, the recently adopted H.264 video coding standards include novel mechanisms, such as hierarchical B frame prediction structures and highly efficient quality-scalable coding, that have important implications for network transport. This tutorial introduces a trace-based evaluation methodology for the network transport of H.264 encoded video. We first give an overview of H.264 video coding, and then present the trace structures for capturing the characteristics of H.264 encoded video. We give an overview of the typical video traffic and quality characteristics of H.264 encoded video. Finally, we explain how to account for the H.264-specific coding mechanisms, such as hierarchical B frames, in networking studies.
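The trace-based methodology can be illustrated with a toy example. The snippet below parses a simplified frame-size trace and computes the mean bitrate at a fixed frame rate; the three-column format is a hypothetical simplification, as real traces also carry timestamps, frame types in decoding order, and quality (e.g., PSNR) columns.

```python
def parse_trace(lines):
    """Parse a simple video trace: one frame per line, 'index type size_bytes'.

    Hypothetical column layout, not the tutorial's actual trace format.
    """
    frames = []
    for line in lines:
        idx, ftype, size = line.split()
        frames.append((int(idx), ftype, int(size)))
    return frames

def mean_bitrate_bps(frames, fps=30):
    # Average bitrate in bits per second for a constant frame rate.
    total_bits = sum(size * 8 for _, _, size in frames)
    duration_s = len(frames) / fps
    return total_bits / duration_s
```

A networking study would feed such per-frame sizes into a simulated link to measure loss and delay, which is what makes traces a lightweight stand-in for full encoded bitstreams.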
The Internet of Things (IoT) makes smart campuses more amenable to cloud services. The availability of cloud resources to users remains a fundamental challenge. Existing research presents several auto-scaling techniques to scale resources as user demand increases. However, users of auto-scaled servers still experience service disruption, delayed responses, and service bursts. The prevailing burst-management frameworks are limited in that they burden the existing auto-scaled machines with cost estimation and resource allocation. This research presents a 3-axis auto-scaling framework for load balancing and resource allocation that incorporates a dedicated cost estimator and allocator (on the z-axis). The cost-estimation server maintains a log of the current load estimates of the vertical and horizontal servers and scales out new user requests when the vertical threshold is breached. In its data structure, the cost estimator keeps track of the resources currently available at both vertical and horizontal servers; based on this historical information of available resources and on incoming resource requests, it makes allocation decisions according to the demand-and-supply scenario. The general responsibilities of the servers are resource pooling, request-queue development, burst identification, automatic scaling, and load balancing. The cost estimator prioritizes vertical servers for resource allocation and switches to a horizontal server when the vertical server reaches 75% of its resource quota. The study simulates 1000 user requests from a smart campus, adopts a state-of-the-art ensemble with a bagging strategy, and effectively handles class imbalance.
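The 75% spillover rule described above can be sketched as follows. This is a minimal, hypothetical rendering of the z-axis cost estimator's allocation decision (class and method names are assumptions, not the paper's API): vertical capacity is preferred until it reaches 75% of its quota, after which requests spill to the horizontal server, and requests exceeding both capacities are flagged as bursts.

```python
class CostEstimator:
    """Sketch of the z-axis cost estimator/allocator (hypothetical API)."""

    def __init__(self, vertical_quota, horizontal_quota):
        self.vertical_quota = vertical_quota
        self.horizontal_quota = horizontal_quota
        self.vertical_used = 0
        self.horizontal_used = 0
        self.log = []  # running log of allocation decisions

    def allocate(self, units):
        # Prefer the vertical server while it stays under 75% of its quota.
        if self.vertical_used + units <= 0.75 * self.vertical_quota:
            self.vertical_used += units
            target = "vertical"
        # Otherwise spill over to the horizontal server if it has capacity.
        elif self.horizontal_used + units <= self.horizontal_quota:
            self.horizontal_used += units
            target = "horizontal"
        else:
            target = "rejected"  # burst: no capacity left anywhere
        self.log.append((target, units))
        return target
```

Keeping this bookkeeping on a dedicated server, rather than on the auto-scaled machines themselves, is the framework's answer to the burden identified in prior burst-management work.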
We revisit large kernel design in modern convolutional neural networks (CNNs). Inspired by recent advances in vision transformers (ViTs), we demonstrate that using a few large convolutional kernels instead of a stack of small kernels can be a more powerful paradigm. We suggest five guidelines, e.g., applying re-parameterized large depthwise convolutions, for designing efficient, high-performance large-kernel CNNs. Following the guidelines, we propose RepLKNet, a pure CNN architecture whose kernel size is as large as 31×31, in contrast to the commonly used 3×3. RepLKNet greatly closes the performance gap between CNNs and ViTs, e.g., achieving results comparable or superior to the Swin Transformer on ImageNet and a few typical downstream tasks, with lower latency. RepLKNet also scales well to big data and large models, obtaining 87.8% top-1 accuracy on ImageNet and 56.0% mIoU on ADE20K, which is very competitive among state-of-the-art models of similar size. Our study further reveals that, in contrast to small-kernel CNNs, large-kernel CNNs have much larger effective receptive fields and higher shape bias rather than texture bias. Code and models are available at https://github.com/megvii-research/RepLKNet.
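The re-parameterization guideline rests on the linearity of convolution: a small kernel applied in parallel with a large one can be absorbed into the large kernel at inference time by zero-padding it to the same size and adding. The single-channel NumPy sketch below illustrates that identity (function names are ours, and this is a toy version, not the actual multi-channel, BatchNorm-fused RepLKNet implementation):

```python
import numpy as np

def conv2d(x, k):
    """Naive 'same'-padded 2-D cross-correlation for a single channel."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def merge_kernels(large, small):
    # Structural re-parameterization: the parallel small kernel is
    # absorbed into the large one by zero-padding it to the same size.
    merged = large.copy()
    kh, kw = small.shape
    ch = (large.shape[0] - kh) // 2
    cw = (large.shape[1] - kw) // 2
    merged[ch:ch + kh, cw:cw + kw] += small
    return merged
```

Because `conv2d(x, large) + conv2d(x, small)` equals `conv2d(x, merge_kernels(large, small))`, the small branch costs nothing at inference while still easing optimization during training.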
Graph Convolutional Networks (GCNs) have achieved extraordinary success in learning representations of nodes in graphs. However, regarding Heterogeneous Information Networks (HINs), existing HIN-oriented GCN methods still suffer from two deficiencies: (1) they cannot flexibly explore all possible meta-paths and extract the most useful ones for each target object, which hinders both effectiveness and interpretability; (2) before performing aggregation, they often require additional time-consuming pre-processing operations, which increase the computational complexity. To address these issues, we propose an interpretable and efficient Heterogeneous Graph Convolutional Network (ie-HGCN) to learn the representations of objects in HINs. It is designed as a hierarchical aggregation architecture, i.e., object-level aggregation followed by type-level aggregation. The new architecture can automatically evaluate all possible meta-paths within a length limit, and discover and exploit the most useful ones for each target object, i.e., at fine granularity. It also reduces the computational cost by avoiding additional time-consuming pre-processing operations. Theoretical analysis shows its ability to evaluate the usefulness of all possible meta-paths, its connection to the spectral graph convolution on HINs, and its quasi-linear time complexity. Extensive experiments on four real network datasets demonstrate its interpretability and efficiency, as well as its superiority over thirteen baselines.
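The two-level scheme can be sketched in miniature: neighbors are first summarized within each object type, and the per-type summaries are then combined with attention weights. This is a deliberately simplified illustration with hypothetical function names; the actual ie-HGCN uses learned type-specific projections and graph convolutions rather than plain averaging.

```python
import numpy as np

def object_level_aggregate(features_by_type):
    """Object-level step (simplified): mean-pool neighbor features
    within each object type into one summary vector per type."""
    return {t: np.mean(f, axis=0) for t, f in features_by_type.items()}

def type_level_aggregate(type_vectors, attention):
    """Type-level step: combine per-type summaries with
    softmax-normalized attention scores."""
    types = sorted(type_vectors)
    scores = np.array([attention[t] for t in types])
    weights = np.exp(scores) / np.exp(scores).sum()
    return sum(w * type_vectors[t] for w, t in zip(weights, types))
```

Inspecting the learned type-level weights is what gives the architecture its per-object interpretability: each target object reveals which neighbor types (and, stacked over layers, which meta-paths) mattered for its representation.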
Power converters with multi-objective functionality offer ample scope for research in the power electronics field. Accordingly, a bidirectional multilevel buck rectifier (BMBR) is proposed for power factor correction applications. Unlike existing bidirectional multilevel rectifiers, the proposed BMBR has excellent load scalability and stability. Additionally, the proposed BMBR synthesizes a multilevel voltage across its input terminals to shape the supply current and reduce its harmonic content. Further, the proposed BMBR eliminates the need for an additional output voltage sensor. Its continuous-current-mode operation also reduces the requirement for capacitive and inductive filters at its input and output sides, respectively. All the power switches in the proposed BMBR operate at a lower switching frequency and have equal peak inverse voltage. In addition, the BMBR is capable of feeding single as well as double similar/dissimilar loads simultaneously without compromising converter stability. Experimental findings confirm the superiority of the proposed BMBR under steady-state and dynamic operation.