Despite the massive interest and recent developments in the field of nanomedicine, only a limited number of formulations have found their way to the clinic. This shortcoming reveals the challenges facing the clinical translation of this technology. In the current article, we summarize and evaluate the status, market situation, and clinical profiles of the reported nanomedicines, the shortcomings limiting their clinical translation, as well as some approaches designed to break through this barrier. Moreover, some emerging technologies that have the potential to compete with nanomedicines are highlighted. Lastly, we identify the key factors that should be considered for nanomedicine-related research to be clinically translatable. These can be classified into five areas: rational design during the research and development stage, the recruitment of representative preclinical models, careful design of clinical trials, the development of specific and uniform regulatory protocols, and calls for non-classic sponsorship. This new field of endeavor was firmly established during the last two decades, and more in-depth progress is expected in the coming years.
This paper applies an interlayer restoration deep neural network (IRDNN) to scalable high efficiency video coding (SHVC) to improve visual quality and coding efficiency. This is the first work to combine a deep neural network (DNN) with SHVC. Considering the coding architecture of SHVC, we design a multi-frame, multi-layer neural network that restores the interlayer of SHVC by utilizing the adjacent reconstructed frames of both the base layer (BL) and the enhancement layer (EL). Moreover, we analyze the temporal motion relationship of frames within one layer and the compression degradation relationship of frames between different layers, and propose a synergistic mechanism of motion restoration and compression restoration in our IRDNN. The network can generate an interlayer of higher quality to serve EL coding and thus enhance coding efficiency. We construct a large-scale dataset covering various quality degradations for the task of interlayer restoration in SHVC. The experimental results show that with our implementation on SHVC, the EL Bjøntegaard delta bit-rate (BD-BR) reduction is 9.291% and 6.007% in signal-to-noise ratio scalability and spatial scalability, respectively. The code is available at https://github.com/icecherylXuli/IRDNN.
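As a purely illustrative sketch of the general idea (fusing adjacent BL and EL reconstructions to restore an interlayer frame), the following minimal PyTorch model is an assumption on our part and not the authors' IRDNN; the inputs, layer sizes, and residual design are placeholders only.

import torch
import torch.nn as nn

class InterlayerRestorer(nn.Module):
    # Toy residual CNN: concatenates the upsampled BL frame with adjacent BL/EL
    # reconstructions and predicts a correction for the interlayer frame.
    def __init__(self, num_inputs=4, channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_inputs, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1))

    def forward(self, bl_up, bl_prev, el_prev, el_prev2):
        # Each input is an (N, 1, H, W) luma plane; the output refines the upsampled BL frame.
        x = torch.cat([bl_up, bl_prev, el_prev, el_prev2], dim=1)
        return bl_up + self.net(x)

frames = [torch.randn(1, 1, 64, 64) for _ in range(4)]
restored = InterlayerRestorer()(*frames)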
In the above article [1], in the discussion section it is stated: "However, there are still challenges related to the scalability and key distribution problem that need to be addressed before symmetric key solutions can be adopted." The authors would like to clarify that these challenges can be successfully met in an independently verified, provably secure way using proprietary methods.
The full-replication data storage mechanism commonly utilized in existing blockchains is a barrier to system scalability, since it retains a copy of the entire blockchain at each node, so that the overall storage consumption per block is O(n) with n participants. Another drawback is that this mechanism may limit throughput in permissioned blockchains. Moreover, due to the existence of Byzantine nodes, existing partitioning methods, though widely adopted in distributed systems for decades, cannot be applied directly to blockchain systems, so it is critical to devise new storage mechanisms for them. This article proposes a novel storage engine, called BFT-Store, that enhances storage scalability by integrating erasure coding with a Byzantine Fault Tolerance (BFT) consensus protocol. First, BFT-Store reduces the storage consumption per block to O(1) for the first time, which enlarges the overall storage capacity as more nodes join the blockchain. Second, we design an efficient online re-encoding protocol for storage scale-out and a hybrid replication scheme to enhance read performance. Theoretical analysis and extensive experimental results illustrate the scalability, availability, and efficiency of BFT-Store via an implementation in the open-source permissioned blockchain Tendermint.
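A rough back-of-the-envelope illustration of the storage argument (the coding parameters below are our assumption for the sketch, e.g. a Reed-Solomon-style (n, k) code with k = n - 2f tolerating f < n/3 Byzantine nodes, not a statement of BFT-Store's exact parameters): with full replication every one of the n nodes stores the block B, whereas with erasure coding each node stores a single coded chunk of size |B|/k, so

S_{\mathrm{rep}} = n\,|B| = O(n), \qquad S_{\mathrm{ec}} = n\cdot\frac{|B|}{k} = \frac{n}{n-2f}\,|B| \le 3\,|B| = O(1) \quad \text{with respect to } n.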
Consensus is a fundamental problem of distributed computing. While this problem has been known since 1985 to be unsolvable in asynchronous failure-prone systems, protocols have been designed over the past three decades to solve consensus under various assumptions. Today, with the recent advent of blockchains, various consensus implementations have been proposed to make replicas reach agreement on the order of transactions updating what is often referred to as a distributed ledger. However, very little work has been devoted to exploring the theoretical ramifications of these implementations. As a result, existing proposals are sometimes misunderstood, and it is often unclear whether the problems arising during their executions are due to implementation bugs or to more fundamental design issues.
In this paper, we discuss the mainstream blockchain consensus algorithms and how the classic Byzantine consensus can be revisited for the blockchain context. In particular, we discuss proof-of-work consensus and illustrate the differences between the Bitcoin and the Ethereum proof-of-work consensus algorithms. Based on these definitions, we warn about the dangers of using these blockchains without understanding precisely the guarantees their consensus algorithm offers. In particular, we survey attacks against the Bitcoin and the Ethereum consensus algorithms. We finally discuss the advantage of the recent Blockchain Byzantine consensus definition over previous definitions, and the promises offered by emerging consistent blockchains.
•We compare the different consensus problems tackled by blockchains, the distributed computing literature, and a more recent definition.
•We propose a formalization of the Bitcoin and Ethereum consensus algorithms.
•We warn about the dangers of using these blockchains without understanding precisely the guarantees their consensus offers.
•We present a survey of attacks against proof-of-work blockchain systems.
The n-queen problem represents a classic challenge in artificial intelligence (AI) research. It involves the placement of n queens on an n × n chessboard such that no queen threatens another. This problem has long fascinated mathematicians and computer scientists alike due to its inherent complexity. It is well known that as n increases, the problem becomes more challenging and falls into the NP problem class. Given the computational demands of the problem, parallel methods are of critical importance, yet scalable and parallel approaches for the n-queen problem remain to be developed. The majority of existing graph-based methods attempt to parallelize a recursive sequential algorithm. However, the unpredictable nature of these algorithms makes them challenging to parallelize on modern computer architectures. Consequently, we have selected an iterative algorithm from the literature to facilitate parallelization. This paper presents an approach to parallelization that differs from traditional matrix-based strategies: the n-queen graph is distributed among a network of nodes, ensuring effective load balancing through dynamic partitioning and real-time computation. Our distributed algorithm, designed for the maximum clique problem on the n-queen graph, operates with true concurrency, obviating the need for resource or data sharing. Our evaluation shows that the parallel algorithm outperforms a state-of-the-art sequential algorithm in terms of task completion time, that the speedups are nearly ideal, and that the workloads are distributed evenly across the network nodes. Furthermore, the results show high scalability, with task completion times decreasing as the number of nodes increases.
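For readers unfamiliar with the reduction behind the "n-queen graph", the following minimal Python sketch (an illustration, not the paper's distributed algorithm) builds the graph whose vertices are board squares and whose edges connect mutually non-attacking squares; any clique of size n in this graph is then a valid n-queens placement.

from itertools import combinations

def n_queen_graph(n):
    # Vertices: board squares (row, col); edges: pairs of squares whose queens do not attack each other.
    squares = [(r, c) for r in range(n) for c in range(n)]
    def compatible(a, b):
        return a[0] != b[0] and a[1] != b[1] and abs(a[0] - b[0]) != abs(a[1] - b[1])
    edges = {frozenset((a, b)) for a, b in combinations(squares, 2) if compatible(a, b)}
    return squares, edges

def is_clique(placement, edges):
    # A clique of size n in the n-queen graph is exactly a valid n-queens placement.
    return all(frozenset((a, b)) in edges for a, b in combinations(placement, 2))

squares, edges = n_queen_graph(4)
print(is_clique([(0, 1), (1, 3), (2, 0), (3, 2)], edges))  # True: a 4-clique, i.e. a 4-queens solution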
The metaverse, as an evolving paradigm of the next-generation Internet, aims to build a fully immersive, hyper spatiotemporal, and self-sustaining virtual shared space for humans to play, work, and socialize in. Driven by recent advances in emerging technologies such as extended reality, artificial intelligence, and blockchain, the metaverse is stepping from science fiction toward an upcoming reality. However, severe privacy invasions and security breaches (inherited from the underlying technologies or emerging in the new digital ecology) can impede its wide deployment. At the same time, a series of fundamental challenges (e.g., scalability and interoperability) arise in metaverse security provisioning owing to the intrinsic characteristics of the metaverse, such as immersive realism, hyper spatiotemporality, sustainability, and heterogeneity. In this paper, we present a comprehensive survey of the fundamentals, security, and privacy of the metaverse. Specifically, we first investigate a novel distributed metaverse architecture and its key characteristics with ternary-world interactions. Then, we discuss the security and privacy threats, present the critical challenges of metaverse systems, and review the state-of-the-art countermeasures. Finally, we draw open research directions for building future metaverse systems.
We compare the long-time error bounds and spatial resolution of finite difference methods with different spatial discretizations for the Dirac equation with small electromagnetic potentials characterized by ε ∈ (0, 1], a dimensionless parameter. We begin with the simple and widely used finite difference time domain (FDTD) methods and establish rigorous error bounds for them, valid up to times of O(1/ε). In the error estimates, we pay particular attention to how the errors depend explicitly on the mesh size h and the time step τ as well as on the small parameter ε. Based on these results, in order to obtain "correct" numerical solutions up to times of O(1/ε), the ε-scalability (or meshing strategy requirement) of the FDTD methods should be taken as h = O(ε^{1/2}) and τ = O(ε^{1/2}). To improve the spatial resolution capacity, we apply the Fourier spectral method to discretize the Dirac equation in space. Error bounds of the resulting finite difference Fourier pseudospectral (FDFP) methods show that they exhibit uniform spatial errors in the long-time regime, which are optimal in space as suggested by Shannon's sampling theorem. Extensive numerical results are reported to confirm the error bounds and demonstrate that they are sharp.
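A back-of-the-envelope reading of this meshing strategy (assuming a bounded computational domain [a, b], an assumption made only for this sketch): to reach times of O(1/ε) with the FDTD methods one needs roughly

M = \frac{b-a}{h} = O(\varepsilon^{-1/2}) \ \text{grid points per spatial dimension}, \qquad N = \frac{T/\varepsilon}{\tau} = O(\varepsilon^{-3/2}) \ \text{time steps},

whereas the spatial error of the FDFP methods is uniform in ε, so h can be chosen independently of ε.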
At present, and increasingly so in the future, much of the captured visual content will not be seen by humans. Instead, it will be used for automated machine vision analytics and may require ...occasional human viewing. Examples of such applications include traffic monitoring, visual surveillance, autonomous navigation, and industrial machine vision. To address such requirements, we develop an end-to-end learned image codec whose latent space is designed to support scalability from simpler to more complicated tasks. The simplest task is assigned to a subset of the latent space (the base layer), while more complicated tasks make use of additional subsets of the latent space, i.e., both the base and enhancement layer(s). For the experiments, we establish a 2-layer and a 3-layer model, each of which offers input reconstruction for human vision, plus machine vision task(s), and compare them with relevant benchmarks. The experiments show that our scalable codecs offer 37%-80% bitrate savings on machine vision tasks compared to best alternatives, while being comparable to state-of-the-art image codecs in terms of input reconstruction.
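As an illustrative sketch of latent-space scalability (a generic toy model, not the authors' codec; all layer shapes and the channel split are assumptions), the latent is split along channels into a base part, decoded alone for a machine vision task, and an enhancement part needed only for full input reconstruction.

import torch
import torch.nn as nn

class ScalableLatentCodec(nn.Module):
    # Toy model: channel-wise split of the latent into a base layer (machine task)
    # and an enhancement layer (only needed for human-viewable reconstruction).
    def __init__(self, latent_ch=192, base_ch=64, num_classes=1000):
        super().__init__()
        self.base_ch = base_ch
        self.encoder = nn.Conv2d(3, latent_ch, kernel_size=5, stride=4, padding=2)
        self.task_head = nn.Sequential(          # uses the base layer only
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(base_ch, num_classes))
        self.decoder = nn.ConvTranspose2d(latent_ch, 3, kernel_size=4, stride=4)

    def forward(self, x):
        y = self.encoder(x)                       # full latent
        y_base = y[:, : self.base_ch]             # base subset, transmitted first
        task_logits = self.task_head(y_base)      # machine vision from the base layer
        x_hat = self.decoder(y)                   # reconstruction needs base + enhancement
        return task_logits, x_hat

logits, recon = ScalableLatentCodec()(torch.randn(1, 3, 256, 256))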
The concepts of ‘scaling,’ ‘scalability,’ and ‘scale-up’ are increasingly used in business research and practice. However, the literature reveals a range of definitions for each, and often their meanings are only implied. This diminishes the ability to build cumulative and meaningful insight, and to conduct research, on each concept. In this editorial, we offer a systematic review that assesses and harmonizes prior definitions of these important concepts. This allows us to define and differentiate between (a) scaling as an organizational process, (b) scalability as an ordinary organizational capability, and (c) scale-up as a phase of organizational development. Complementing and extending existing scholarly work, we develop a rich agenda for scaling-related research in entrepreneurship.
•Systematic literature review of 57 articles (2001–2023) on organizational scaling
•Extant definitions for scaling, scalability, and scale-up are assessed
•New definitions for scaling, scalability, and scale-up are offered
•Research agenda developed for research on the above concepts