•Preparation methods of graphene-like materials from biomass were summarized.
•The mechanisms and product characteristics of different methods were discussed.
•Common characterization instruments to determine the structure were discussed.
•Tailored designs of graphene-like materials need further investigation.
Two-dimensional graphene materials have attracted much attention worldwide because of their superior performance in electronic devices, sensors, and energy storage. However, their application is limited by high cost and insufficient production, so a simple and environmentally friendly preparation process is highly needed. Designed pyrolysis of biomass precursors can yield graphene-like materials. This review summarizes typical processes for synthesizing graphene-like materials from biomass carbonization via pyrolysis, including salt-based activation, chemical blowing, template-based confinement, coupling with hydrothermal carbonization pretreatment, post-exfoliation, and other methods. The operation of these methods and the performance of the obtained graphene-like materials are highlighted, and the scalability of the techniques and the applications of biomass-derived graphene-like carbon are discussed. Advanced characterization methods, such as SEM, TEM, AFM, Raman spectroscopy, and XPS, used to determine the graphene-like structure and graphitization degree are also covered. Finally, current challenges and future perspectives for the synthesis of these graphene-like materials are outlined.
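As a brief illustration of how Raman data indicate graphitization degree, the following sketch computes the widely used I_D/I_G intensity ratio; the band windows and array format are assumptions for illustration, not values prescribed by the review.

```python
# Minimal sketch (not from the review): estimating the Raman I_D/I_G
# intensity ratio, a common proxy for defect density and graphitization
# degree in graphene-like carbons. Assumes a spectrum given as
# (wavenumber, intensity) numpy arrays; band windows are typical
# literature values, introduced here only as assumptions.
import numpy as np

def id_ig_ratio(wavenumber, intensity):
    """Return the I_D/I_G peak-intensity ratio of a Raman spectrum."""
    d_band = (wavenumber > 1300) & (wavenumber < 1400)  # D band ~1350 cm^-1
    g_band = (wavenumber > 1530) & (wavenumber < 1630)  # G band ~1580 cm^-1
    return intensity[d_band].max() / intensity[g_band].max()

# A lower I_D/I_G generally indicates fewer defects, i.e. a higher
# degree of graphitization.
```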
In the past decade, cryptocurrencies such as Bitcoin and Litecoin have developed rapidly. Blockchain, the underlying technology of these digital cryptocurrencies, has attracted great attention from academia and industry. Blockchain offers many desirable features, such as trustlessness, transparency, anonymity, democracy, automation, decentralization, and security. Despite these promising features, scalability remains a key barrier to the wide use of blockchain technology in real business environments. In this article, we focus on the scalability issue and provide a brief survey of recent studies on scalable blockchain systems. We first discuss the scalability issue from the perspectives of throughput, storage, and networking. Then, existing enabling technologies for scalable blockchain systems are presented. We also discuss research challenges and future research directions for scalable blockchain systems.
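To make the throughput side of the scalability issue concrete, the following back-of-the-envelope calculation uses commonly cited Bitcoin parameters; these figures are illustrative assumptions, not numbers from this survey, but they show why classical blockchains sustain only a handful of transactions per second.

```python
# Rough throughput bound for a Nakamoto-style blockchain
# (illustrative Bitcoin-like figures, assumed for this sketch).
block_size_bytes = 1_000_000   # ~1 MB block size limit
avg_tx_bytes     = 250         # rough average transaction size
block_interval_s = 600         # ~10 min target block interval

tps = block_size_bytes / avg_tx_bytes / block_interval_s
print(f"~{tps:.1f} transactions per second")   # ~6.7 TPS
```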
The demand for a new generation of high-temperature dielectric materials toward capacitive energy storage has been driven by the rise of high-power applications such as electric vehicles, aircraft, ...and pulsed power systems where the power electronics are exposed to elevated temperatures. Polymer dielectrics are characterized by being lightweight, and their scalability, mechanical flexibility, high dielectric strength, and great reliability, but they are limited to relatively low operating temperatures. The existing polymer nanocomposite-based dielectrics with a limited energy density at high temperatures also present a major barrier to achieving significant reductions in size and weight of energy devices. Here we report the sandwich structures as an efficient route to high-temperature dielectric polymer nanocomposites that simultaneously possess high dielectric constant and low dielectric loss. In contrast to the conventional single-layer configuration, the rationally designed sandwich-structured polymer nanocomposites are capable of integrating the complementary properties of spatially organized multicomponents in a synergistic fashion to raise dielectric constant, and subsequently greatly improve discharged energy densities while retaining low loss and high charge–discharge efficiency at elevated temperatures. At 150 °C and 200 MV m−1, an operating condition toward electric vehicle applications, the sandwich-structured polymer nanocomposites outperform the state-of-the-art polymer-based dielectrics in terms of energy density, power density, charge–discharge efficiency, and cyclability. The excellent dielectric and capacitive properties of the polymer nanocomposites may pave a way for widespread applications in modern electronics and power modules where harsh operating conditions are present.
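For context on why raising the dielectric constant while keeping loss low boosts stored energy, the discharged energy density of a dielectric follows from the electric displacement; the closing numerical example uses an illustrative permittivity, not a figure reported in the paper.

```latex
% Discharged energy density of a dielectric, and its linear limit:
\[
  U_e \;=\; \int_0^{D_{\max}} E \,\mathrm{d}D
  \qquad\Longrightarrow\qquad
  U_e \;=\; \tfrac{1}{2}\,\varepsilon_0\,\varepsilon_r\,E^2
  \quad \text{(linear dielectric)}
\]
% Illustrative numbers (assumed, not from the paper): \varepsilon_r = 10
% at E = 200 MV/m gives
% U_e = 0.5 * 8.854e-12 * 10 * (2e8)^2 ≈ 1.77e6 J/m^3 ≈ 1.77 J/cm^3.
```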
Cyber–physical systems are increasingly complex and are integrated into modern societies via critical infrastructure systems, products, and services. Consequently, these complex systems must function reliably under various scenarios, from physical failures due to aging to cyber attacks. However, developing effective strategies to restore disrupted cyber–physical infrastructure systems remains a major challenge. Although a growing number of papers evaluate recovery planning in cyber–physical infrastructure networks, a comprehensive literature review focusing on mathematical modeling and optimization methods is still lacking. This study therefore critically analyzes the literature on optimization techniques for recovery planning of cyber–physical infrastructure networks after a disruption, synthesizing key findings on the current methods in this domain. A total of 152 relevant research papers are reviewed following an extensive assessment of all major scientific databases. The main mathematical modeling practices and optimization methods are identified for both deterministic and stochastic formulations and categorized by solution approach (exact, heuristic, metaheuristic), objective function, and network size. Having identified the gaps, we present a set of future trends for both the methodology and the application of optimization algorithms. Overall, there is a need to shift toward scalable optimization algorithms, empowered by data-driven methods and machine learning, to provide reliable and computationally efficient decision support for decision-makers and practitioners.
•Conducted a comprehensive analysis of optimization techniques for cyber–physical infrastructures facing disruptions.
•Reviewed 152 papers after an extensive assessment of major scientific databases.
•Identified optimization practices for deterministic and stochastic formulations.
•Analyzed models regarding objective functions, decision variables, and constraints.
•Proposed possible future directions for modeling and solution algorithms.
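As a hedged illustration of the deterministic formulations such a review categorizes, a common shape for a recovery-scheduling model is sketched below; the symbols are generic and are not drawn from any single reviewed paper.

```latex
% Generic deterministic recovery-scheduling shape (illustrative symbols):
% y_{it} = 1 if component i is repaired in period t, x_{it} = 1 if it is
% operational, \Phi measures network functionality, b_t is the per-period
% repair budget, and c_i is the repair cost of component i.
\[
  \max_{x,\,y} \; \sum_{t=1}^{T} \Phi\!\left(x_{\cdot t}\right)
  \quad \text{s.t.} \quad
  x_{it} \le x_{i,t-1} + y_{it}, \qquad
  \sum_{i} c_i\, y_{it} \le b_t, \qquad
  x_{it},\, y_{it} \in \{0,1\}.
\]
```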
Accounting has always been influenced by digital technology, although most of that influence has involved replacing analogue instruments with digital versions. A blockchain is a digital ledger used to record transactions between different participants in a network: an internet-based, peer-to-peer distributed ledger that contains all transactions since its inception. Blockchain technology has the potential to revolutionize how businesses operate, based on the concept of transmitting valuable digital assets such as bitcoin without the need for a third-party intermediary. Blockchain can be considered a type of database or digital ledger and is widely used by many financial organizations; it is a distributed ledger that keeps immutable and verifiable records. The technology permits decentralized ledger transactions to be produced without the intervention of a third party, and this decentralization gives the network a high degree of protection. The aim of this study is to investigate the decision-making factors affecting the adoption of blockchain technology in the field of accounting. The results show that security and privacy, transparency and auditability, immutability, cost reduction, real-time transactions, and flexibility are the most influential factors in the adoption of blockchain technology. The results also indicate that Quorum, SAP HANA, and Ethereum are the most consistent and trusted blockchain platforms, being the most suitable, secure, and robust.
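To illustrate the immutability and verifiability the study attributes to blockchain ledgers, here is a minimal hash-chain sketch; it is a toy model only, and production platforms such as Quorum or Ethereum add consensus, signatures, and Merkle trees on top.

```python
# Minimal sketch of why a blockchain ledger is immutable and verifiable:
# each block stores the hash of its predecessor, so altering any past
# record breaks every later hash.
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's canonical JSON form."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

ledger = [{"prev": "0" * 64, "tx": "genesis"}]
for tx in ["A pays B 10", "B pays C 4"]:
    ledger.append({"prev": block_hash(ledger[-1]), "tx": tx})

# Verification: every stored "prev" must match the hash of the prior block.
assert all(ledger[i]["prev"] == block_hash(ledger[i - 1])
           for i in range(1, len(ledger)))
```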
Computing Continuum (CC) systems are challenged to ensure the intricate requirements of each computational tier. Given the system's scale, the Service Level Objectives (SLOs) that express these requirements must be disaggregated into smaller parts that can be decentralized. We present our framework for collaborative edge intelligence, enabling individual edge devices to (1) develop a causal understanding of how to enforce their SLOs and (2) transfer knowledge to speed up the onboarding of heterogeneous devices. Through collaboration, they (3) increase the scope of SLO fulfillment. We implemented the framework and evaluated a use case in which a CC system is responsible for ensuring Quality of Service (QoS) and Quality of Experience (QoE) during video streaming. Our results show that edge devices required only ten training rounds to ensure four SLOs and that the underlying causal structures were rationally explainable. New types of devices can be added a posteriori; the framework allowed them to reuse existing models even though the device type had been unknown. Finally, rebalancing the load within a device cluster allowed individual edge devices to recover their SLO compliance after a network failure, raising it from 22% to 89%.
•Identifying causal relations between environmental metrics and SLO fulfillment.
•Transferring and combining ML models between heterogeneous devices.
•Offloading load according to devices' capabilities to fulfill SLOs.
•Creating hierarchical structures in the CC for requirements assurance.
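As a rough illustration of the load-rebalancing idea in the abstract above, the sketch below redistributes a failed device's streams to peers with SLO headroom; all names and numbers are hypothetical and are not the framework's actual API.

```python
# Hypothetical sketch: rebalancing stream-processing load within an edge
# cluster so each device can still fulfill its SLOs after a failure.
from dataclasses import dataclass

@dataclass
class EdgeDevice:
    name: str
    capacity: float   # max streams the device can serve within its SLOs
    load: float       # streams currently assigned

    def slo_headroom(self) -> float:
        return self.capacity - self.load

def rebalance(cluster, orphaned_load):
    """Redistribute load from a failed device to peers with headroom."""
    for dev in sorted(cluster, key=lambda d: d.slo_headroom(), reverse=True):
        take = min(dev.slo_headroom(), orphaned_load)
        dev.load += take
        orphaned_load -= take
        if orphaned_load <= 0:
            break
    return orphaned_load  # > 0 means the SLOs cannot all be met

cluster = [EdgeDevice("edge-1", 10, 6), EdgeDevice("edge-2", 8, 3)]
unplaced = rebalance(cluster, orphaned_load=5)  # e.g. after a network failure
```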
Today, cloud computing applications are rapidly constructed from services belonging to different cloud providers and service owners. This work presents an inter-cloud elasticity framework that focuses on cloud load balancing based on dynamic virtual machine reconfiguration when variations in load or in the volume of user requests are observed. We design a dynamic reconfiguration system, called the inter-cloud load balancer (ICLB), that scales virtual resources up or down (thus providing automated elasticity) while eliminating service downtimes and communication failures. It includes an inter-cloud load balancer for distributing incoming HTTP user traffic across multiple instances of inter-cloud applications and services, and it dynamically reconfigures resources according to real-time requirements. The experimental analysis covers different topologies, showing how real-time traffic variation (using real-world workloads) affects resource utilization and demonstrating better resource usage in the inter-cloud setting.
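A minimal sketch of the kind of elasticity decision such a balancer makes is given below; the utilization thresholds and function names are assumptions for illustration, not the ICLB's actual parameters.

```python
# Hypothetical sketch of a threshold-based elasticity rule: decide the
# instance count for one inter-cloud service from observed utilization.
def desired_instances(current, avg_utilization, high=0.8, low=0.3):
    """Return the new instance count for a service tier."""
    if avg_utilization > high:                 # overloaded: scale up
        return current + 1
    if avg_utilization < low and current > 1:  # underused: scale down
        return current - 1
    return current                             # within bounds: no change

# Simple round-robin dispatch of incoming HTTP requests across instances.
def pick_instance(instances, request_counter):
    return instances[request_counter % len(instances)]
```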
Deterministic concurrency control avoids the expensive two-phase commit in distributed databases and can resolve the single-thread bottleneck of transaction processing in blockchain systems. Most existing deterministic concurrency control protocols rely on prior knowledge of each transaction's read-write set, which is impractical in most cases. The state-of-the-art deterministic concurrency control protocols Aria and DOCC remove this limitation; however, they scale poorly across multiple nodes and cores.
To solve these scalability issues, we propose Dodo, a novel deterministic concurrency control protocol. Dodo processes transactions in batches, and each batch passes through three phases. In the first phase, transactions are executed at read-committed isolation and staged. In the second phase, transactions are validated for read-write conflicts. In the third phase, only the continuous un-conflicted transactions at the head of the batch are committed. For an aborted transaction that will be rerun in the next batch, we utilize its write set from the previous execution to reduce read-write conflicts in the next batch. Dodo thus has the following benefits. First, Dodo does not rely on prior knowledge of the read-write set. Second, Dodo commits transactions in pre-determined orders (TIDs), providing high multi-node scalability. Third, Dodo runs transactions in each phase concurrently, and aborted transactions are re-executed in a conflict-free manner, enabling high multi-core scalability. In addition, we propose two optimism-based improvements, lazy decision and early-write visibility, to reduce aborts and blocking. Our evaluation shows that Dodo outperforms Aria and DOCC by up to 16.5x and 8.0x, respectively, in a single-node setting and scales well in a multi-node setting.
•We propose Dodo, a novel deterministic concurrency control protocol.
•Dodo solves the scalability issues of state-of-the-art protocols (Aria and DOCC).
•Dodo outperforms Aria and DOCC by up to 16.5x in multi-core settings.
•Dodo has better multi-node scalability than Aria.
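To make the three-phase batch concrete, here is a simplified, single-node sketch of the commit logic described in the abstract; the real protocol is distributed and runs each phase concurrently, so this is an illustration under stated assumptions, not Dodo's implementation.

```python
# Simplified sketch of Dodo-style three-phase batch processing. Each
# transaction is assumed to be a function that, given a snapshot, returns
# (read_set, staged_writes) where staged_writes is a {key: value} dict.
def run_batch(txns, db):
    snapshot = dict(db)

    # Phase 1: execute every transaction against the batch-start snapshot,
    # staging writes and recording read/write sets.
    staged = [txn(snapshot) for txn in txns]  # [(reads, writes), ...]

    # Phase 2: validate. A transaction conflicts if it read a key written
    # by an earlier (lower-TID) transaction in the same batch.
    conflicted, written = [], set()
    for reads, writes in staged:
        conflicted.append(bool(reads & written))
        written |= writes.keys()

    # Phase 3: commit only the continuous un-conflicted prefix, in TID
    # order; the rest are aborted and rerun in the next batch, where their
    # previous write sets help avoid repeated conflicts.
    committed, aborted, head_ok = [], [], True
    for tid, ((reads, writes), bad) in enumerate(zip(staged, conflicted)):
        head_ok = head_ok and not bad
        if head_ok:
            db.update(writes)
            committed.append(tid)
        else:
            aborted.append(tid)
    return committed, aborted
```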
The Internet of Things (IoT) makes cloud services more convenient for smart campuses, yet the availability of cloud resources to users remains a fundamental challenge. Existing research presents several auto-scaling techniques to scale resources as users' demands increase; however, users of auto-scaled servers still experience service disruption, delayed responses, and service bursts. Prevailing burst-management frameworks are limited in that they burden the existing auto-scaled machines with cost estimation and resource allocation. This research presents a 3-axis auto-scaling framework for load balancing and resource allocation that incorporates a dedicated cost estimator and allocator on the z-axis. The cost-estimation server maintains a log of the load estimates of the vertical and horizontal servers and scales new user requests when the vertical threshold is breached. In its data structure, the cost estimator tracks the resources currently available on both the vertical and horizontal servers and weighs historical information on available resources against new resource requests according to the demand-and-supply scenario. The general responsibilities of the servers are resource pooling, request-queue management, burst identification, automatic scaling, and load balancing. The cost estimator prioritizes the vertical server for resource allocation and switches to the horizontal server when the vertical server reaches 75% of its resource quota. The study simulates 1000 user requests from a smart campus, adopts a state-of-the-art ensemble with a bagging strategy, and effectively handles class imbalance.
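A minimal sketch of the 75%-quota allocation rule described above follows; the resource units, names, and data structures are hypothetical, introduced only for illustration.

```python
# Hypothetical sketch of the z-axis cost estimator's allocation rule:
# prefer the vertical server until it reaches 75% of its resource quota,
# then switch to horizontal scaling.
VERTICAL_QUOTA = 100.0  # abstract resource units (assumed)

def allocate(request_units, vertical_used, horizontal_pool):
    """Route a resource request to the vertical or a horizontal server."""
    if vertical_used + request_units <= 0.75 * VERTICAL_QUOTA:
        return "vertical", vertical_used + request_units
    # Vertical server past its 75% quota: pick the least-loaded horizontal.
    target = min(horizontal_pool, key=horizontal_pool.get)
    horizontal_pool[target] += request_units
    return target, vertical_used

pool = {"h1": 10.0, "h2": 25.0}
placement, v_used = allocate(8.0, vertical_used=70.0, horizontal_pool=pool)
```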