In episode 217 of Software Engineering Radio, host Charles Anderson talks with James Turnbull, a software developer and security specialist who's vice president of services at Docker. Lightweight Docker containers are rapidly becoming a tool for deploying microservice-based architectures.
Docker container virtualization technology is being widely adopted in cloud computing environments because of its light weight and efficiency. However, it requires adequate control and management via an orchestrator. As a result, cloud providers are adopting the open-source Kubernetes platform as the standard orchestrator of containerized applications. To ensure applications' availability, Kubernetes relies on the Raft protocol's replication mechanism. Despite its simplicity, Raft assumes that machines fail only by shutting down. Such crash failures are rarely the only cause of a machine's malfunction: software errors or malicious attacks can cause machines to exhibit Byzantine (i.e., arbitrary) behavior and thereby corrupt the accuracy and availability of the replication protocol. In this paper, we propose a Kubernetes multi-Master Robust (KmMR) platform to overcome this limitation. KmMR is based on the adaptation and integration of the BFT-SMaRt fault-tolerant replication protocol into the Kubernetes environment. Unlike the Raft protocol, BFT-SMaRt is resistant to both Byzantine and non-Byzantine faults. Experimental results show that KmMR is able to guarantee the continuity of services even when the total number of tolerated faults is exceeded. In addition, under such conditions KmMR provides, on average, a consensus time 1000 times shorter than that achieved by the conventional platform (with Raft). Finally, we show that KmMR incurs only a small additional cost in terms of resource consumption compared to the conventional platform.
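The crash-fault vs. Byzantine-fault distinction drawn above comes down to standard quorum arithmetic: a crash-fault-tolerant protocol like Raft needs n = 2f + 1 replicas to tolerate f faults, while a Byzantine-fault-tolerant protocol like BFT-SMaRt needs n = 3f + 1. A minimal sketch of that textbook relationship (illustrative only, not code from the paper):

```python
def crash_tolerance(n: int) -> int:
    """Max crash faults a Raft-style cluster of n replicas tolerates (n = 2f + 1)."""
    return (n - 1) // 2

def byzantine_tolerance(n: int) -> int:
    """Max Byzantine faults a BFT-SMaRt-style cluster of n replicas tolerates (n = 3f + 1)."""
    return (n - 1) // 3

# Same cluster size, very different guarantees:
for n in (4, 5, 7):
    print(f"n={n}: crash f={crash_tolerance(n)}, byzantine f={byzantine_tolerance(n)}")
```

This is why a 3-node Raft cluster that tolerates one crash fault tolerates zero Byzantine faults: a BFT deployment needs at least four replicas for f = 1.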
• Docker, Kubernetes.
• DDoS attack, Byzantine faults.
• Raft protocol, BFT-SMaRt.
• State machine replication.
Genome assemblies are foundational for understanding the biology of a species. They provide a physical framework for mapping additional sequences, thereby enabling characterization of, for example, genomic diversity and differences in gene expression across individuals and tissue types. Quality metrics for genome assemblies gauge both the completeness and contiguity of an assembly and help provide confidence in downstream biological insights. To compare quality across multiple assemblies, a set of common metrics are typically calculated and then compared to one or more gold standard reference genomes. While several tools exist for calculating individual metrics, applications providing comprehensive evaluations of multiple assembly features are, perhaps surprisingly, lacking. Here, we describe a new toolkit that integrates multiple metrics to characterize both assembly and gene annotation quality in a way that enables comparison across multiple assemblies and assembly types.
Our application, named GenomeQC, is an easy-to-use and interactive web framework that integrates various quantitative measures to characterize genome assemblies and annotations. GenomeQC provides researchers with a comprehensive summary of these statistics and allows for benchmarking against gold standard reference assemblies.
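The abstract mentions contiguity metrics without naming one; the standard contiguity statistic for assemblies is N50 (the length L such that contigs of length ≥ L cover at least half the assembly). A minimal sketch of that computation, offered as illustration rather than GenomeQC's actual implementation:

```python
def n50(contig_lengths):
    """N50: the contig length L such that contigs of length >= L
    together cover at least half of the total assembly length."""
    lengths = sorted(contig_lengths, reverse=True)
    half_total = sum(lengths) / 2
    covered = 0
    for length in lengths:
        covered += length
        if covered >= half_total:
            return length
    return 0  # empty input

# Toy assembly of five contigs (lengths in bp):
print(n50([100, 80, 60, 40, 20]))  # the 80 bp contig pushes coverage past half
```

A higher N50 indicates fewer, longer contigs, i.e. a more contiguous assembly, which is why it is a common axis for benchmarking against gold standard references.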
The GenomeQC web application is implemented in R/Shiny version 1.5.9 and Python 3.6 and is freely available at https://genomeqc.maizegdb.org/ under the GPL license. All source code and a containerized version of the GenomeQC pipeline are available in the GitHub repository https://github.com/HuffordLab/GenomeQC.
The Multi-access Edge Computing (MEC) and Fog Computing paradigms are enabling the opportunity to have middleboxes either statically or dynamically deployed at network edges acting as local proxies with virtualized resources for supporting and enhancing service provisioning in edge localities. However, migration of edge-enabled services poses significant challenges in the edge computing environment. In this paper, we propose an edge computing platform architecture that supports service migration with different options of granularity (either entire service/data migration, or proactive application-aware data migration) across heterogeneous edge devices (either MEC-based servers or resource-poor Fog devices) that host virtualized resources (Docker Containers). The most innovative elements of the technical contribution of our work include i) the possibility to select either an application-agnostic or an application-aware approach, ii) the possibility to choose the appropriate application-aware approach (e.g., based on data access frequencies), iii) an automatic edge services placement support with the aim of finding a more effective placement with low energy consumption, and iv) the in-lab experimentation of the performance achieved over rapidly deployable environments with resource-limited edges such as Raspberry Pi devices.
Many bioinformatic applications need to exploit the capabilities of several computational resources to effectively access and process large and distributed datasets. In this context, Grid computing has been widely used to face unprecedented challenges in Computational Biology, at the cost of complex workarounds needed to make applications run successfully. The Grid computing paradigm, in fact, has always suffered from a lack of flexibility. Although this has been partially solved by Cloud computing, the on-demand approach is far removed from the original idea of volunteer computing that boosted the Grid paradigm. One way to overcome the impossibility of creating custom environments for running applications in Grid is containerization technology. In this paper, we describe our experience in exploiting a Docker-based approach to run, in a Grid environment, a novel, computationally intensive bioinformatic application that models the DNA spatial conformation inside the nucleus of eukaryotic cells. Results assess the feasibility of this approach in terms of performance and of the effort needed to run large experiments.
• With more flexibility, the Grid can be a Cloud competitor, since it is free.
• Docker-based solutions can help in customizing the Grid environment.
• Udocker can extend the Grid's utility, since it works without root permissions.
• We model the spatial DNA conformation in the nucleus using a graph-based approach.
• Preselecting Grid resources can improve computation efficiency and scalability.
The Tactile Internet (TI) can be regarded as the next evolution in the world of communication. With its envisioned purpose and potential in shaping the economy, industry, and society, this paradigm aims to bring a new dimension to life by enabling humans to interact with machines remotely and in real time with haptic and kinesthetic feedback. However, to translate this into reality, the Tactile Internet will need to meet the stringent requirements of extremely low latency in conjunction with ultra-high reliability, availability, and security. This poses a challenge to available communication systems: achieving a round-trip delay within a 1 to 10 millisecond time bound that enables the timely delivery of critical tactile and haptic sensations.
This paper aims to evaluate the Real-Time Transport Protocol (RTP) through an emulation framework. It integrates containerization using Linux-based Docker Containers with the NS-3 Network Simulator to conceptualize a haptic teleoperation system. The framework is then used to test the protocol’s feasibility for delivering texture haptic data between master and slave domains in accordance with the end-to-end delay requirements specified by the IEEE 1918.1 standard. The results show that timely provision of haptic data is achievable, with an average round-trip delay of 17.8493 ms measured in the emulation experiment. As such, the results satisfy the IEEE 1918.1 constraints for medium-dynamic environment use cases.
We present a small multiples approach to analyze an ensemble of molecular dynamics trajectories related to reactions of astrochemical interest. A tiled visualization tool is being developed on the MANDELBROT platform housed at Maison de la Simulation in Saclay (France). Instead of one huge screen, the TileViz software presents the outputs of any visualization tool side by side. Scientists are able to analyze multiple simulations at the same time, with varied parameters, or to visually compare similar results. We have included in our tool the VMD molecular dynamics viewer, making it useful for the broad molecular dynamics community. Here, as a case study, we applied the approach to chemical dynamics trajectories computed by some of us with the aim of understanding possible synthetic pathways for the formation of complex organic molecules in space.
Collecting and preserving the logs of smart environments connected to cloud storage is challenging due to the black-box nature and multi-tenancy of cloud models, which can compromise log secrecy and privacy. Existing work on log secrecy and confidentiality depends on cloud-assisted models, but these models are prone to multi-stakeholder collusion problems. This study proposes 'PLAF', a holistic and automated architecture for proactive forensics in the Internet of Things (IoT) that provides security- and privacy-aware distributed edge node log preservation by tackling the multi-stakeholder issue in a fog-enabled cloud. We have developed a test-bed implementing the aforementioned specification by incorporating many state-of-the-art technologies in one place. We used Holochain to preserve log integrity, provenance, log verifiability, trust admissibility, and ownership non-repudiation. We introduced the privacy-preserving automation of log probing via non-malicious command-and-control botnets in the container environment. For continuous and robust integration of IoT microservices, we used Docker containerization technology. For secure storage and session establishment for log validation, Paillier homomorphic encryption and SSL with Curve25519 are used, respectively. We performed a security and performance analysis of the proposed PLAF architecture and showed that, under stress conditions, the automatic log harvesting running in containers gives a 95% confidence interval. Moreover, we show that log preservation via Holochain can be performed on ARM-based architectures such as the Raspberry Pi in far less time than with RSA and blockchain.
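The Paillier scheme mentioned above is additively homomorphic: multiplying two ciphertexts yields a ciphertext of the sum of the plaintexts, which is what allows encrypted log counters to be aggregated without decryption. A toy sketch of that property with deliberately tiny, insecure parameters (illustration only, not PLAF's implementation):

```python
from math import gcd, lcm
import random

# Toy Paillier keypair -- tiny primes, NOT secure, for illustration only.
p, q = 17, 19
n = p * q                      # public modulus
n2 = n * n
lam = lcm(p - 1, q - 1)        # private key lambda
g = n + 1                      # standard generator choice
mu = pow(lam, -1, n)           # valid because g = n + 1

def encrypt(m: int) -> int:
    """E(m) = g^m * r^n mod n^2, with random r coprime to n."""
    r = random.choice([x for x in range(2, n) if gcd(x, n) == 1])
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """D(c) = L(c^lambda mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    return (((pow(c, lam, n2) - 1) // n) * mu) % n

# Additive homomorphism: ciphertext product decrypts to plaintext sum.
c1, c2 = encrypt(41), encrypt(58)
assert decrypt((c1 * c2) % n2) == (41 + 58) % n
```

In a real deployment the primes would be hundreds of digits long and a vetted library would be used; the point here is only the aggregate-without-decrypting property.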
With the prevalence of big-data-driven applications, such as face recognition on smartphones and tailored recommendations from Google Ads, we are on the road to a lifestyle with significantly more intelligence than ever before. Various neural network powered models run at the back end of this intelligence to enable quick responses to users. Supporting those models requires substantial cloud-based computational resources, e.g., CPUs and GPUs. Cloud providers charge their clients by the amount of resources they occupy, so clients have to balance budget against quality of experience (e.g., response time). The budget leans on individual business owners, and the required Quality of Experience (QoE) depends on the usage scenarios of different applications. For instance, an autonomous vehicle requires a real-time response, but unlocking your smartphone can tolerate delays. However, cloud providers fail to offer a QoE-based option to their clients. In this paper, we propose DQoES, a differentiated quality-of-experience scheduler for deep learning inference. DQoES accepts clients' specifications of targeted QoEs and dynamically adjusts resources to approach those targets. Through extensive cloud-based experiments, we demonstrate that DQoES can schedule multiple concurrent jobs with respect to various QoEs and achieve up to 8x more satisfied models compared to the existing system.
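The core loop described above — accept a response-time target, observe actual latency, and nudge the job's resource allocation toward the target — can be sketched as a simple proportional controller. All names, gains, and thresholds below are hypothetical illustrations, not DQoES's actual algorithm:

```python
def adjust_cpu_quota(quota: float, observed_ms: float, target_ms: float,
                     step: float = 0.1, lo: float = 0.1, hi: float = 4.0) -> float:
    """One control step of a hypothetical QoE-driven scheduler:
    grow the CPU quota when the job misses its response-time target,
    shrink it when there is comfortable slack, and clamp to [lo, hi]."""
    if observed_ms > target_ms:          # QoE target missed: add resources
        quota *= 1 + step
    elif observed_ms < 0.8 * target_ms:  # ample slack: release resources
        quota *= 1 - step
    return max(lo, min(hi, quota))

# A job targeting 100 ms that currently answers in 120 ms gets more CPU:
q = adjust_cpu_quota(1.0, observed_ms=120, target_ms=100)
```

A latency-critical job (e.g., the autonomous-vehicle example) would be given a tight `target_ms` and thus hold onto resources, while a delay-tolerant job's quota drains toward the floor, which is the differentiated behavior the scheduler is after.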
Distributed Cloud environments increasingly rely on Cloud applications composed of heterogeneous microservices. Cloud service providers strive to provide high quality of service (QoS), and response time is one of the key QoS attributes for microservices. The dynamism of microservice ecosystems necessitates runtime adaptations and microservice rescheduling to avoid performance degradation. Existing works target rescheduling in hypervisor-based systems, while ignoring the influence of the configuration parameters of container-based microservices. In an effort to address these challenges, this article describes a novel microservice rescheduling framework, throttling and interaction-aware anticorrelated rescheduling for microservices, to proactively perform rescheduling activities whilst ensuring timely service responses. Based on periodic monitoring of the performance attributes, the framework schedules container migrations. Considering the exponentially large solution space, a metaheuristic approach based on multiverse optimization is developed to generate a near-optimal mapping of microservices to datacenter resources. Experimental results indicate that our framework provides superior performance, with a reduction of up to 13.97% in average response time compared with systems with no support for rescheduling.