•The outcomes of a systematic mapping study on cloud-native applications (CNA) are presented.
•Identified principles, architectures, and methodologies for CNA are explained.
•Existing engineering trends for cloud-native applications are summarized.
•Research implications of existing CNA studies are discussed.
•Promising future research directions for CNA engineering are proposed.
It is commonly understood that cloud-native applications (CNA) are intentionally designed for the cloud. Although this understanding is broadly shared, it does not explain what a cloud-native application exactly is. The term "cloud-native" was used quite frequently in the early days of cloud computing (around 2006), a usage that seems rather obvious nowadays, but it then disappeared almost completely. In recent years, however, the term has reappeared with increasing frequency and momentum. This paper summarizes the outcomes of a systematic mapping study analyzing research papers covering "cloud-native" topics, research questions, and engineering methodologies. We summarize research focuses and trends in cloud-native application engineering approaches. Furthermore, we provide a definition of the term "cloud-native application" that takes all findings and insights of the analyzed publications, as well as already existing and well-defined terminology, into account.
There has been great effort to evaluate software quality with proper tools and methods across development environments that change over time. The Quality Model for Object-Oriented Design (QMOOD) is a validated model for the quality assessment of object-oriented software. The model associates quality metrics gathered from the source code with quality attributes in use to produce a quality measurement. However, the model should be revised for recent multi-client software, including native client applications, because metric-gathering tools are lacking in such environments. More specifically, it is sometimes not possible to gather all quality properties required by QMOOD on every native development platform of client applications. Hence, even though different client applications share the same design, implementation quality cannot be monitored for quality assurance. Analyzing and simplifying the metric set may alleviate this challenge and enable a convenient quality assessment. Thus, we propose to change the operational aspect of QMOOD by inserting an additional layer, Data Analytic, into the hierarchical structure of the conventional model. Accordingly, we discuss a case study involving five native client applications. For this purpose, the design quality of one client application is assessed to validate the appropriateness of the design, the data analytics on the metric set are implemented, and the proposed data-oriented simplified QMOOD is applied to the other client applications. Finally, the proposed approach successfully alleviated the metric-gathering problems for multi-client applications while applying QMOOD.
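QMOOD's core computation, a weighted sum of normalized design metrics yielding a quality-attribute score, can be sketched as follows. The metric values and weights here are illustrative assumptions chosen for the example, not the coefficients used in the study above:

```python
def quality_attribute(metrics, weights):
    """Compute one QMOOD-style quality attribute as a weighted sum of
    normalized design metrics. Weights may be negative (e.g. coupling
    typically lowers reusability)."""
    return sum(weights[name] * value for name, value in metrics.items()
               if name in weights)

# Illustrative metric values (normalized to [0, 1]) and weights for a
# reusability-style attribute; both are assumptions for this sketch.
metrics = {"coupling": 0.4, "cohesion": 0.8, "messaging": 0.6, "design_size": 0.5}
weights = {"coupling": -0.25, "cohesion": 0.25, "messaging": 0.5, "design_size": 0.5}
score = quality_attribute(metrics, weights)
```

Simplifying the metric set, as the paper proposes, would correspond to dropping entries from `weights` that cannot be gathered on a given client platform.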
The evolution of cloud computing into a service utility, along with the pervasive adoption of the IoT paradigm, has driven significant growth in the need for computational and storage services. The traditional use of cloud services, focused on consuming a single provider, is no longer adequate due to several shortcomings, the risk of vendor lock-in being a critical one. We are witnessing a paradigm shift from the use of a single cloud provider to the combination of multiple cloud service types, which affects the way applications are designed, developed, deployed, and operated over such heterogeneous ecosystems. The result is a pronounced heterogeneity of architectures, methods, tools, and frameworks coping with the multi-cloud application concept. The goal of this study is manifold. Firstly, it characterizes the multi-cloud concept from the application development perspective by reviewing existing definitions of multi-cloud native applications in the literature. Secondly, it sets up the basis for the architectural characterization of this kind of application. Finally, it highlights several open research issues drawn from the analysis carried out. To achieve this, we conducted a systematic literature review (SLR) in which a large set of primary studies published between 2011 and 2021 was studied and classified. The in-depth analysis revealed five main research trends for improving the development and operations (DevOps) lifecycle of "multi-cloud native applications". The paper finishes with directions for future work and research challenges to be addressed by the software community.
Coverage-guided fuzzing is one of the most popular approaches to detecting bugs in programs. Existing work has shown that coverage metrics are a crucial factor in guiding a fuzzer's exploration of its targets. A fine-grained coverage metric can help fuzzing detect more bugs and trigger more execution states. Cloud-native applications written in Golang play an important role in the modern computing paradigm. However, existing fuzzers for Golang still employ coarse-grained block coverage metrics, and there is no fuzzer designed specifically for cloud-native applications, which hinders bug detection in them. Moreover, using fine-grained coverage metrics introduces more seeds and can even lead to seed explosion, especially in large targets such as cloud-native applications.
Therefore, we employ an accurate edge coverage metric in a fuzzer for Golang, which achieves finer test granularity and more accurate coverage information than block coverage metrics. To mitigate the seed explosion caused by fine-grained coverage metrics and large target sizes, we propose smart seed selection and adaptive task scheduling algorithms based on a variant of the classical adversarial multi-armed bandit (AMAB) algorithm. An extensive evaluation of our prototype on 16 targets in real-world cloud-native infrastructures shows that our approach detects 233% more bugs than go-fuzz, achieving an average coverage improvement of 100.7%. Our approach effectively mitigates seed explosion, reducing the number of generated seeds by 41% while introducing only 14% performance overhead.
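The bandit-based seed selection described above can be illustrated with Exp3, the classical adversarial multi-armed bandit algorithm: each seed is an arm, and the reward is (for example) normalized new-edge coverage from mutating that seed. This is a minimal sketch of plain Exp3, not the paper's variant or implementation; the class and parameter names are assumptions:

```python
import math
import random

class Exp3SeedScheduler:
    """Pick which fuzzing seed to mutate next via Exp3.
    Rewards are assumed to lie in [0, 1] (e.g. normalized new coverage)."""

    def __init__(self, num_seeds, gamma=0.1):
        self.gamma = gamma                 # exploration rate
        self.weights = [1.0] * num_seeds   # one weight per seed

    def probabilities(self):
        # Mix the weight-proportional distribution with uniform exploration.
        total = sum(self.weights)
        k = len(self.weights)
        return [(1 - self.gamma) * w / total + self.gamma / k
                for w in self.weights]

    def select(self, rng=random):
        # Sample a seed index according to the Exp3 distribution.
        r, acc = rng.random(), 0.0
        probs = self.probabilities()
        for i, p in enumerate(probs):
            acc += p
            if r <= acc:
                return i
        return len(probs) - 1

    def update(self, seed_index, reward):
        # Importance-weighted reward estimate keeps the update unbiased
        # even though only the chosen seed's reward is observed.
        p = self.probabilities()[seed_index]
        estimate = reward / p
        self.weights[seed_index] *= math.exp(
            self.gamma * estimate / len(self.weights))
```

A fuzzing loop would call `select()`, mutate the chosen seed, measure coverage gain, and feed it back via `update()`, so seeds that keep producing new edges are scheduled more often.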
Federated learning (FL) is a decentralized machine learning (ML) method that enables model training while preserving privacy. FL is gaining attention because it avoids transferring data to the server, facilitating decentralized learning of the traditional ML model. Despite its potential, an FL project is significantly more challenging to develop than centralized ML methods owing to decentralized local data. We propose FedOps, federated learning operations for constructing systematic FL projects by extending machine learning operations (MLOps) to apply effectively to FL while preserving its core process. To address the complexity of FL implementation, we developed the FedOps platform, which manages the whole lifecycle of FedOps-based projects in an FL context. We also investigated methods to identify performance degradation factors in FL and suggest an approach for improvement. The FedOps platform provides an analysis tool for client heterogeneity, called chunk-bench. This tool enables researchers and engineers to gain insight into systems heterogeneity by executing a test on only a small chunk of each client's data, in the shortest time possible, while tracking systems heterogeneity across the clients. By addressing systems heterogeneity, the FedOps platform achieved a 13%-43% improvement in communication cost-to-accuracy and a 20%-68% improvement in time-to-accuracy. We believe the FedOps platform offers an optimal solution for the end-to-end development of FL projects, significantly improving both computational and communication efficiency.
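The chunk-bench idea, probing each client with a small slice of its data to estimate systems heterogeneity, can be sketched as a simple timing probe. The function names, tuple layout, and chunk size below are illustrative stand-ins, not the FedOps API:

```python
import time

def chunk_bench(train_step, client_data, chunk_size=32):
    """Time one training step on a small chunk of a client's data to
    estimate that client's relative compute speed. `train_step` is any
    callable taking a data chunk; chunk_size is an assumed default."""
    chunk = client_data[:chunk_size]
    start = time.perf_counter()
    train_step(chunk)
    return time.perf_counter() - start

def rank_clients(clients, train_step):
    """Rank client ids from fastest to slowest probe time; the slowest
    clients are potential stragglers for the FL round."""
    times = {cid: chunk_bench(train_step, data)
             for cid, data in clients.items()}
    return sorted(times, key=times.get)
```

Running the probe on a small chunk rather than the full local dataset is what keeps the measurement cheap enough to repeat across many clients.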
Nowadays the major trend in IT dictates deploying applications in the cloud, cutting monolithic software into small, easily manageable and developable components, and running them in a microservice scheme. With these choices come the questions of which cloud service types to choose from the several available options, and how to distribute the monolith so as to best exploit the selected cloud features. We propose a model that represents monolithic applications in a novel way and focuses on key properties that are crucial in the development of cloud-native applications. The model focuses on the organization of scaling units, and it accounts for the cost of provisioned resources in scale-out periods and for invocation delays among the application components. We analyze disaggregated monolithic applications deployed in a cloud offering both Container-as-a-Service (CaaS) and Function-as-a-Service (FaaS) platforms. We showcase the efficiency of the proposed optimization solution by presenting the reduction in operating costs in an illustrative example. We propose to group components with similarly low scale together in CaaS, while running dynamically scaled components in FaaS. By doing so, the price is decreased because unnecessary memory provisioning is eliminated, while application response time shows no degradation.
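The grouping rule above can be sketched as a burstiness heuristic: components whose load stays close to its average are co-located in a provisioned CaaS container, while components with spiky, dynamically scaling load go to pay-per-invocation FaaS. The peak-to-average threshold and the component tuple layout are illustrative assumptions, not the paper's optimization model:

```python
def partition_components(components, burstiness_threshold=3.0):
    """Split components into a co-located CaaS group and FaaS functions.
    Each component is (name, avg_load, peak_load) in requests/s; the
    threshold is an assumed cut-off, not a derived optimum."""
    caas, faas = [], []
    for name, avg_load, peak_load in components:
        if peak_load / avg_load <= burstiness_threshold:
            caas.append(name)   # steady load: provision memory once, share it
        else:
            faas.append(name)   # bursty load: pay per invocation instead
    return caas, faas
```

Co-locating the steady components means their provisioned memory is sized near the actual average load, which is where the elimination of unnecessary memory provisioning, and hence the cost reduction, comes from.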