This study discusses wireless data flow controller design accounting for input signal saturation and an unknown delay. A main focus is to maintain low real-time computational complexity, motivated by the need to run a very large number of controllers in a single network node, e.g. in wireless 5G data flow control applications. Linear quadratic Gaussian design is therefore first applied to compute feedback and feedforward gains for assumed nominal delays, without accounting for the saturation caused by the one-directional data flow. Gridding of design penalties, system parameters and delays is then used to pre-compute and tabulate a subset of the ℒ2 stability region by repeated evaluation of the Popov criterion. This step secures a second requirement: robust stability with respect to the uncertain feedback delay. The design is validated by simulation using a detailed model of a networked wireless data flow controller used in cellular communications.
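As a rough numerical illustration of the tabulation step, the sketch below grids a hypothetical uncertain loop delay and tests the classical Popov inequality Re[(1 + jωq)G(jω)] + 1/k > 0 over a frequency grid. The plant G, sector bound k, Popov multiplier q, and delay range are illustrative stand-ins, not the paper's actual design.

```python
import numpy as np

def popov_holds(G, k, q, omegas):
    """Check the Popov inequality Re[(1 + j*w*q) * G(jw)] + 1/k > 0
    on a frequency grid, for a memoryless sector nonlinearity in [0, k]."""
    w = np.asarray(omegas)
    lhs = ((1 + 1j * w * q) * G(w)).real + 1.0 / k
    return bool(np.all(lhs > 0))

def G(w, tau):
    """Hypothetical loop: first-order lag with an input delay tau."""
    return np.exp(-1j * w * tau) / (0.2j * w + 1)

omegas = np.logspace(-2, 3, 2000)
for tau in np.linspace(0.0, 0.2, 5):          # grid the uncertain delay
    ok = popov_holds(lambda w: G(w, tau), k=1.0, q=0.1, omegas=omegas)
    print(f"tau={tau:.2f}: Popov inequality satisfied = {ok}")
```

Delays for which the inequality holds over the grid would be tabulated offline, so the online controller only performs a table lookup.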
A Survey on Data-Flow Testing. Su, Ting; Wu, Ke; Miao, Weikai, et al. ACM Computing Surveys, vol. 50, no. 1, January 2018. Journal article, peer-reviewed.
Data-flow testing (DFT) is a family of testing strategies designed to verify the interactions between each program variable's definition and its uses. Such a test objective of interest is referred to as a def-use pair. DFT selects test data with respect to various test adequacy criteria (i.e., data-flow coverage criteria) to exercise each pair. The original conception of DFT was introduced by Herman in 1976. Since then, a number of studies have been conducted, both theoretically and empirically, to analyze DFT's complexity and effectiveness. Over the past four decades, DFT has received sustained attention, and various approaches have been proposed from different angles to pursue automatic and efficient data-flow testing. This survey presents a detailed overview of data-flow testing, including challenges and approaches in enforcing and automating it: (1) it introduces the data-flow analysis techniques that are used to identify def-use pairs; (2) it classifies and discusses techniques for data-flow-based test data generation, such as search-based testing, random testing, collateral-coverage-based testing, symbolic-execution-based testing, and model-checking-based testing; (3) it discusses techniques for tracking data-flow coverage; (4) it presents several DFT applications, including software fault localization, web security testing, and specification consistency checking; and (5) it summarizes recent advances and discusses future research directions toward more practical data-flow testing.
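To make the def-use pair objective concrete, here is a hypothetical snippet annotated with its pairs; the function and assertions are illustrative only.

```python
# Hypothetical snippet annotated with the def-use pairs of variable x.
def absolute(v):
    x = v          # def d1 of x
    if x < 0:      # use of x (pair: d1 -> this use)
        x = -x     # use of x, then def d2 of x
    return x       # use of x (pairs: d1 -> here and d2 -> here)

# All-uses coverage needs test data exercising both pairs ending at `return x`:
assert absolute(3) == 3     # covers pair d1 -> return
assert absolute(-3) == 3    # covers pair d2 -> return
```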
Most software attacks subvert the intended data flow of a program by exploiting memory corruption vulnerabilities. Data-Flow Integrity (DFI) is a generic defense against such attacks. Its security guarantee mainly depends on the accuracy of the static Data-Flow Graph (DFG) generated from Data-Flow Analysis (DFA), but the static DFG is conservatively over-approximated due to the imprecision of DFA. Hence a natural question is: what is the real protective power of DFI, and how can it be measured? In this work, we first evaluate the effectiveness of DFI based on a constructed memory corruption offense-defense model and the proposed attack, Data-Flow Bending (DFB). We show, through a proof-of-concept exploit, how DFB corrupts memory data while adhering to DFI. Furthermore, we verify the feasibility of state-of-the-art data-oriented attacks in the presence of DFI using practical cases. Our work indicates that DFI may be ineffective against the exploitation of memory corruption vulnerabilities in certain circumstances, and that DFB can circumvent DFI to carry out memory corruption attacks.
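The following toy sketch shows the general shape of a DFI check at runtime: instrumented stores record a write ID, and instrumented loads verify that the last writer belongs to the statically computed set of allowed definitions. All IDs, addresses, and helper names are hypothetical; real DFI instruments machine code, not Python.

```python
# Toy DFI enforcement: stores record a write ID; loads check that the last
# writer is in the statically computed set (all IDs/addresses hypothetical).
last_writer = {}                      # runtime table: address -> write ID

def dfi_write(memory, addr, value, write_id):
    last_writer[addr] = write_id      # instrumented store records its ID
    memory[addr] = value

def dfi_read(memory, addr, allowed_ids):
    # Instrumented load: abort if the last writer is outside the static DFG.
    if last_writer.get(addr) not in allowed_ids:
        raise RuntimeError("DFI violation: unexpected data flow")
    return memory[addr]

mem = {}
dfi_write(mem, 0x10, 42, write_id=7)
print(dfi_read(mem, 0x10, allowed_ids={7, 9}))   # ok: prints 42
```

The over-approximation the paper exploits corresponds to `allowed_ids` containing more write IDs than any legitimate execution ever needs.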
Dynamic Configuration of Partitioning in Spark Applications. Gounaris, Anastasios; Kougka, Georgia; Tous, Ruben, et al. IEEE Transactions on Parallel and Distributed Systems, vol. 28, no. 7, 1 July 2017. Journal article, peer-reviewed, open access.
Spark has become one of the main options for large-scale analytics running on top of shared-nothing clusters. This work aims to make a deep dive into the parallelism configuration and shed light on the behavior of parallel Spark jobs. It is motivated by the fact that running a Spark application on all the available processors does not necessarily imply lower running time, while it may entail a waste of resources. We first propose analytical models for expressing the running time as a function of the number of machines employed. We then take another step, namely to present novel algorithms for configuring dynamic partitioning with a view to minimizing resource consumption without sacrificing running time beyond a user-defined limit. The problem we target is NP-hard. To tackle it, we propose a greedy approach after introducing the notions of dependency graphs and of the benefit from modifying the degree of partitioning at a stage; complementarily, we investigate a randomized approach. Our polynomial solutions are capable of judiciously using the resources potentially at the user's disposal and strike interesting trade-offs between running time and resource consumption. Their efficiency is thoroughly investigated through experiments based on real execution data.
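For context, the minimal PySpark sketch below shows the two knobs such algorithms manipulate: the global shuffle-parallelism setting and per-stage repartitioning. The values here are illustrative and do not reproduce the paper's greedy or randomized algorithms.

```python
# Minimal sketch of per-stage partitioning control in PySpark
# (parameter values are illustrative only).
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("partitioning-demo")
         .config("spark.sql.shuffle.partitions", "64")  # shuffle parallelism
         .getOrCreate())

rdd = spark.sparkContext.parallelize(range(1_000_000), numSlices=128)
# Reduce parallelism before a stage to save resources without a full shuffle:
coalesced = rdd.coalesce(32)
result = coalesced.map(lambda x: x * x).sum()
print(result, coalesced.getNumPartitions())
```

The paper's algorithms decide, per stage, whether changing the degree of partitioning (as `coalesce` does above) yields a net benefit under the user's running-time limit.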
Data flow systems allow the user to design a flow diagram that specifies the relations between system components which process, filter or visually present the data. Visualization systems may benefit from user-defined data flows, as an analysis typically consists of rendering multiple plots on demand and performing different types of interactive queries across coordinated views. In this paper, we propose VisFlow, a web-based visualization framework for tabular data that employs a specific type of data flow model called the subset flow model. VisFlow focuses on interactive queries within the data flow, overcoming the interactivity limitations of past computational data flow systems. In particular, VisFlow applies embedded visualizations and supports interactive selections, brushing and linking within a visualization-oriented data flow. The model requires all data transmitted by the flow to be a subset of data items (i.e. groups of table rows) of some original input table, so that rendering properties can be assigned to the subset unambiguously for tracking and comparison. VisFlow features the analysis flexibility of a flow diagram, and at the same time reduces diagram complexity and improves usability. We demonstrate the capability of VisFlow in two case studies with domain experts on real-world datasets, showing that VisFlow is capable of accomplishing a considerable set of visualization and analysis tasks. The VisFlow system is available as open source on GitHub.
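A toy rendering of the subset flow model, with hypothetical node names: every edge carries a subset of row indices of one original table, so rendering properties can be attached to subsets unambiguously.

```python
# Toy subset flow model: nodes consume and emit subsets of row indices
# of one original table (node names are hypothetical).
rows = [{"city": "NYC", "temp": 31}, {"city": "LA", "temp": 75},
        {"city": "SF", "temp": 58}]

def filter_node(subset, predicate):
    """A flow node: maps an input subset to an output subset."""
    return {i for i in subset if predicate(rows[i])}

def render_node(subset, color):
    """A visualization node: assigns a rendering property to a subset."""
    return {i: color for i in subset}

everything = set(range(len(rows)))
warm = filter_node(everything, lambda r: r["temp"] > 50)
print(render_node(warm, "red"))   # e.g. {1: 'red', 2: 'red'}
```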
Context-sensitive methods of program analysis increase the precision of interprocedural analysis by achieving the effect of call inlining. These methods have been defined using different formalisms and hence appear as algorithms that are very different from each other. Some methods traverse a call graph top-down, whereas others traverse it bottom-up first and then top-down. Some define contexts explicitly, whereas some do not. Some of them directly compute data flow values, while others first compute summary functions and then use them to compute data flow values. Further, different methods place different kinds of restrictions on the data flow frameworks they support. As a consequence, it is difficult to compare the ideas behind these methods in spite of the fact that they solve essentially the same problem. We argue that these incomparable views resemble those of the proverbial blind men describing an elephant (the elephant here being context sensitivity), and that they make it difficult for a non-expert reader to form a coherent picture of context-sensitive data flow analysis.
We bring out this whole-elephant view of context sensitivity in program analysis by proposing a unified model of context sensitivity that provides a clean separation between computation of contexts and computation of data flow values. Our model captures the essence of context sensitivity and defines simple soundness and precision criteria for context-sensitive methods. It facilitates declarative specifications of context-sensitive methods, insightful comparisons between them, and reasoning about their soundness and precision. We demonstrate this by instantiating our model to many known context-sensitive methods.
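As a minimal, hypothetical illustration of what context sensitivity buys (not the proposed model itself), consider distinguishing two call sites of the same function:

```python
# Toy example of why context sensitivity matters (hypothetical code).
def identity(x):
    return x

a = identity(1)      # call site c1
b = identity("s")    # call site c2

# A context-insensitive analysis merges both calls: `identity` appears to
# return int-or-str, so both a and b get the imprecise value {int, str}.
# A context-sensitive analysis keeps one summary per calling context
# (e.g., per call string [c1] vs. [c2]) and recovers a: int, b: str,
# which is exactly the effect of inlining each call.
print(type(a), type(b))   # <class 'int'> <class 'str'>
```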
Workflow technology has become a standard solution for managing increasingly complex business processes. Successful business process management depends on effective workflow modeling and analysis. One of the important aspects of workflow analysis is the data-flow perspective because, given a syntactically correct process sequence, errors can still occur during workflow execution due to incorrect data-flow specifications. However, there have been only scant treatments of the data-flow perspective in the literature, and no formal methodologies are available for systematically discovering data-flow errors in a workflow model. As an indication of this research gap, existing commercial workflow management systems do not provide tools for data-flow analysis at design time. In this paper, we provide a data-flow perspective for detecting data-flow anomalies such as missing data, redundant data, and potential data conflicts. Our data-flow framework includes two basic components, data-flow specification and data-flow analysis, which add more analytical rigor to business process management.
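A minimal sketch of the kind of check such an analysis performs, assuming a hypothetical three-task workflow: data read before any task produces it is flagged as missing, and data produced but never consumed is flagged as redundant. This illustrates the anomaly classes, not the paper's formal methodology.

```python
# Toy data-flow anomaly detection over a workflow specification.
# Each task: (name, data items read, data items written), in execution order.
tasks = [
    ("receive_order", set(),                 {"order"}),
    ("check_credit",  {"order", "limit"},    {"approved", "score"}),
    ("ship",          {"order", "approved"}, set()),
]

defined, produced, consumed = set(), set(), set()
for name, reads, writes in tasks:
    missing = reads - defined            # read before anything produced it
    if missing:
        print(f"{name}: missing data {sorted(missing)}")
    defined |= writes
    produced |= writes
    consumed |= reads

print("redundant (produced, never consumed):", sorted(produced - consumed))
# -> check_credit: missing data ['limit']
# -> redundant (produced, never consumed): ['score']
```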
Students have difficulty learning data flow diagram (DFD) material because of the scarcity of references and the constraints on accessing other reference sources. In addition, students have difficulty understanding the notations and symbols used in making DFDs. This interactive multimedia project therefore aims to produce a computer-based learning product that students at the Instituto Superior Cristal can use as a learning resource for making data flow diagrams. The resulting interactive multimedia product is an .exe file, making it easy for students to use. This development research uses the Lee & Owens (2004) model, which has five stages: Analysis, Design, Development, Implementation, and Evaluation. The validity of the interactive multimedia product was tested by media and subject-matter experts, who judged the multimedia feasible to use. Practicality and attractiveness tests were carried out by informatics students at the Instituto Superior Cristal, who responded positively to the interactive multimedia in both small-group and large-group trials. The interactive multimedia product on data flow diagram material is therefore declared valid, practical, and interesting for students to use in the learning process at the Instituto Superior Cristal.
Many sketches based on estimator sharing have been proposed to estimate per-flow cardinality in data streams with huge numbers of flows. However, existing sketches suffer from large estimation errors because they allocate the same memory size to each estimator without considering the skewed cardinality distribution. Here, a filtering method called SuperFilter is proposed to enhance existing sketches. SuperFilter intelligently identifies high-cardinality flows in the data stream and records them with the large estimator, while other low-cardinality flows are recorded using a traditional sketch with small estimators. The experimental results show that SuperFilter can reduce the average absolute error of cardinality estimation by over 81% compared with existing approaches.
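A toy version of the filtering idea, with a hypothetical promotion threshold and with exact sets standing in for the probabilistic estimators (e.g., HyperLogLog-style counters) a real sketch would use:

```python
# Toy cardinality filter: promote suspected high-cardinality flows to large
# per-flow estimators; keep the rest in a shared small-estimator structure.
HEAVY_THRESHOLD = 100        # hypothetical promotion threshold

small_sketch = {}            # flow -> small estimator (here: a capped set)
heavy_flows = {}             # flow -> large estimator (here: an exact set)

def record(flow, element):
    if flow in heavy_flows:
        heavy_flows[flow].add(element)
        return
    s = small_sketch.setdefault(flow, set())
    s.add(element)
    if len(s) >= HEAVY_THRESHOLD:        # flow looks high-cardinality
        heavy_flows[flow] = small_sketch.pop(flow)

def estimate(flow):
    return len(heavy_flows.get(flow) or small_sketch.get(flow, ()))

for i in range(150):
    record("flow-A", i)      # promoted to a large estimator at 100 elements
record("flow-B", 1)
print(estimate("flow-A"), estimate("flow-B"))   # 150 1
```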
As static data-flow analysis becomes able to report increasingly complex bugs, using an ever-growing set of complex internal rules encoded into flow functions, the analysis tools themselves grow more and more complex. As a result, for users to be able to effectively use those tools on specific codebases, the tools require special configuration, a task which in industry is typically performed by individual developers or dedicated teams. To efficiently use and configure static analysis tools, developers need to build a certain understanding of the analysis' rules, i.e., how the underlying analyses interpret the analyzed code and their reasoning for reporting certain warnings. In this article, we explore how to assist developers in understanding the analysis' warnings and finding weaknesses in the analysis' rules. To this end, we introduce the concept of rule graphs, which expose to the developer selected information about the internal rules of data-flow analyses. We have implemented rule graphs on top of a taint analysis and show how the graphs can support the above-mentioned tasks. Our user study and empirical evaluation show that using rule graphs helps developers understand analysis warnings more accurately than using simple warning traces, and that rule graphs can help developers identify causes for false positives in analysis rules.
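To give a flavor of what exposing internal rules can mean, the toy taint analysis below records, for each tainted fact, the chain of rules that produced it, a crude linearized stand-in for the article's rule graphs; all rule names and the statement format are hypothetical.

```python
# Toy forward taint analysis that remembers which rules fired for each fact.
SOURCES, SANITIZERS, SINKS = {"get_param"}, {"escape"}, {"run_query"}

def analyze(stmts):
    """stmts: (assigned variable or None, callee name, argument variable)."""
    taint = {}                                   # variable -> rule trail
    for lhs, call, arg in stmts:
        if call in SINKS and arg in taint:       # rule: tainted value at sink
            print(f"warning at {call}, rule trail: {taint[arg]}")
        elif call in SOURCES:
            taint[lhs] = ["source:" + call]      # rule: source adds taint
        elif call in SANITIZERS:
            taint.pop(arg, None)                 # rule: sanitizer kills taint
        elif lhs and arg in taint:
            taint[lhs] = taint[arg] + ["propagate:" + call]

analyze([("x", "get_param", None),
         ("q", "concat",    "x"),
         (None, "run_query", "q")])
# -> warning at run_query, rule trail: ['source:get_param', 'propagate:concat']
```

Presenting the rule trail alongside the warning is what lets a developer judge whether a report is a true finding or a weakness in the rules themselves.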