Microservice architecture is a service-oriented paradigm that enables the decomposition of cumbersome monolithic software systems. Using microservice design principles, it is possible to develop flexible, scalable, reusable, and loosely coupled software that can be containerized and deployed in a distributed edge/cloud environment. The flexible deployment of microservices in an edge environment improves system performance through dynamic service function placement and chaining, potentially yielding latency reduction, fault tolerance, scalability, efficient resource utilization, and reduced cost and energy consumption. On the other hand, the virtualization and containerization of microservices add processing and communication overheads. Therefore, to evaluate the end-to-end performance of a microservice-based system, we need an end-to-end mathematical formulation of the overall microservice-based network system. Incorporating the virtualization overhead, here we provide an end-to-end mathematical formulation covering the system parameters latency, throughput, computational resource usage, and energy consumption. We then evaluate the formulation in a testbed environment with the Microservice-based SDN (MSN) framework, which decomposes the Software-Defined Networking (SDN) controller into microservices deployed as Docker containers. The final result validates the presented mathematical model of the system's dynamic behavior, which can be used to design microservice-based systems.
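The abstract does not reproduce the formulation itself; as a hedged illustration only, an end-to-end latency model for a chain of $n$ microservices that incorporates virtualization overhead could take an additive form (the symbols below are illustrative assumptions, not the paper's own notation):

```latex
T_{\mathrm{e2e}} \;=\; \sum_{i=1}^{n}\bigl(T_{\mathrm{proc},i} + T_{\mathrm{virt},i}\bigr)
\;+\; \sum_{i=1}^{n-1} T_{\mathrm{net},i}
```

where $T_{\mathrm{proc},i}$ is the processing time of the $i$-th microservice, $T_{\mathrm{virt},i}$ its container/virtualization overhead, and $T_{\mathrm{net},i}$ the network delay between consecutive services in the chain.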
The Internet of Things (IoT) attracts an ever-growing audience, and the topic is popular in both science and industry. Among its many research directions, one is efficient deployment with respect to resource utilization; another is containerization within IoT platforms. A question common to both is how CPU core affinity affects the performance of containerized platforms. Plenty of papers have been dedicated to containerization, including in the IoT context, but none of them focused on core affinity. As this survey analyzes the scalability and stability of a platform under different core-container configurations, using the DeviceHive IoT platform, it brings novelty to this area. The most interesting observations were made for configurations with the same number of nodes but varying core affinity. These observations may be useful during the architecture planning phase for containerized IoT platforms.
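Core affinity of the kind studied here can be set at the container level (e.g., Docker's `--cpuset-cpus` flag) or at the process level. A minimal process-level sketch using Python's standard library (a Linux-only API; core numbers chosen for illustration):

```python
import os

# Inspect the CPUs this process is currently allowed to run on.
allowed = os.sched_getaffinity(0)  # 0 refers to the calling process
print(f"initially allowed CPUs: {sorted(allowed)}")

# Pin the process to a single core (core 0), mimicking a
# one-core container configuration such as `docker run --cpuset-cpus=0 ...`.
os.sched_setaffinity(0, {0})
print(f"pinned to: {sorted(os.sched_getaffinity(0))}")

# Restore the original affinity mask.
os.sched_setaffinity(0, allowed)
```

Benchmarking the same workload under different affinity masks is one way to reproduce, at process granularity, the core-versus-container comparisons the survey performs at platform level.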
Thanks to the development of cloud platforms and containerization technologies, the spectrum of opportunities for deploying complex multi-component systems is extremely wide. On the one hand, the available tools unify the solution of complex problems; on the other hand, they lack sufficient means for automatic synthesis and analysis. The article presents an approach that unifies and automates the task of building deployment and initialization code for multi-component software systems. The proposed solution is based on applicative computing systems and abstract algebraic structures that are interpreted in various ways to generate different parts of the software system.
In a LoRaWAN network, the backend is generally distributed as Software as a Service (SaaS) based on container technology, and recently, a containerized version of the LoRaWAN node stack has also become available. Exploiting the disaggregation of LoRaWAN components, this paper focuses on the emulation of complex end-to-end architectures and infrastructures for smart city scenarios, leveraging lightweight virtualization technology. The fundamental metrics for gaining insight into, and evaluating the scaling complexity of, the emulated scenario are defined. The methodology is then applied to use cases taken from a real LoRaWAN application in a smart city with hundreds of nodes. As a result, the proposed container-based approach allows for the following: (i) deployment of functionalities on diverse distributed hosts; (ii) use of the very same software running on real nodes; (iii) simple configuration and management of the emulation process; (iv) affordable costs. Both on-premises and cloud servers are considered as emulation platforms to evaluate the resource requirements and emulation cost of the proposed approach. For instance, emulating one hour of an entire LoRaWAN network with hundreds of nodes requires very affordable hardware that, if realized on a cloud-based computing platform, may cost less than USD 1.
Cloud-native computing principles such as virtualization and orchestration are key to transferring to the promising paradigm of edge computing. The challenges of containerization, operational models, and the scarce availability of established tools make a thorough review indispensable. Therefore, the authors describe the practical methods and tools found in the literature as well as in current community-led development projects, and thoroughly outline the future directions of the field. Container virtualization and its orchestration through Kubernetes have dominated the cloud computing domain, while major efforts have recently focused on adapting these technologies to the edge. Such initiatives have addressed either slimming down container engines and developing purpose-built operating systems, or developing smaller Kubernetes (K8s) distributions and edge-focused adaptations (such as KubeEdge). Finally, new workload virtualization approaches, such as WebAssembly modules, together with the joint orchestration of these heterogeneous workloads, seem to be the topics to watch in the short to medium term.