The Real-Time Linux Kernel. Reghenzani, Federico; Massari, Giuseppe; Fornaciari, William
ACM Computing Surveys, 01/2020, Volume 52, Issue 1
Journal Article, Peer reviewed
The increasing functional and nonfunctional requirements of real-time applications, the advent of mixed-criticality computing, and the necessity of reducing costs are leading to growing interest in employing COTS hardware in real-time domains. In this scenario, the Linux kernel is emerging as a valuable solution on the software side, thanks to its rich support for hardware devices and peripherals, along with a well-established programming environment. However, Linux was developed as a general-purpose operating system, and several approaches have since been proposed to introduce actual real-time capabilities into the kernel. Among these, the PREEMPT_RT patch, developed by the kernel maintainers, aims to increase the predictability and reduce the latencies of the kernel by directly modifying the existing kernel code. This article provides a survey of the state-of-the-art approaches for building real-time Linux-based systems, with a focus on PREEMPT_RT, its evolution, and the challenges that should be addressed to move PREEMPT_RT one step ahead. Finally, we present some applications and use cases that have already benefited from the introduction of this patch.
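As a concrete, hedged illustration of what such a kernel enables, the sketch below shows how a user-space task might request the SCHED_FIFO real-time scheduling class on Linux and measure its own wake-up latency. This is not taken from the article; the priority value, period, and iteration count are arbitrary choices, and the script needs root (or CAP_SYS_NICE) to run.

```python
import os
import time

def become_realtime(priority: int = 80) -> None:
    """Request SCHED_FIFO for this process (requires root or CAP_SYS_NICE).

    On a PREEMPT_RT kernel, a SCHED_FIFO task can preempt most kernel
    activity, which is what keeps its wake-up latencies bounded.
    """
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))

def worst_wakeup_latency(period_s: float = 0.001, iterations: int = 1000) -> float:
    """Run a fixed-period loop and return the worst observed oversleep."""
    worst = 0.0
    next_wake = time.monotonic()
    for _ in range(iterations):
        next_wake += period_s
        time.sleep(max(0.0, next_wake - time.monotonic()))
        worst = max(worst, time.monotonic() - next_wake)
    return worst

if __name__ == "__main__":
    become_realtime()
    print(f"worst wake-up latency: {worst_wakeup_latency() * 1e6:.1f} us")
```

On a stock kernel this worst-case figure can spike badly under load; the point of PREEMPT_RT is to keep it tightly bounded.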
Despite a major increase in the range and number of software offerings now available to help researchers produce evidence syntheses, there is currently no generic tool for producing figures to display and explore the risk‐of‐bias assessments that routinely take place as part of systematic review. However, tools such as the R programming environment and Shiny (an R package for building interactive web apps) have made it straightforward to produce new tools to help in producing evidence syntheses. We present a new tool, robvis (Risk‐Of‐Bias VISualization), available as an R package and web app, which facilitates rapid production of publication‐quality risk‐of‐bias assessment figures. We present a timeline of the tool's development and its key functionality.
The number of students taking high school computer science classes is growing. Increasingly, these students are learning with graphical, block-based programming environments either in place of or prior to traditional text-based programming languages. Despite their growing use in formal settings, relatively little empirical work has been done to understand the impacts of using block-based programming environments in high school classrooms. In this article, we present the results of a 5-week, quasi-experimental study comparing isomorphic block-based and text-based programming environments in an introductory high school programming class. The findings from this study show students in both conditions improved their scores between pre- and postassessments; however, students in the blocks condition showed greater learning gains and a higher level of interest in future computing courses. Students in the text condition viewed their programming experience as more similar to what professional programmers do and as more effective at improving their programming ability. No difference was found between students in the two conditions with respect to confidence or enjoyment. The implications of these findings with respect to pedagogy and design are discussed, along with directions for future work.
The paper presents a way of determining the dimensions of the elements of a planar linkage when imposing certain conditions on the movement of a component piston, aiming at obtaining a mechanism whose total mass is minimal. A computer program that simulates the kinematics of the analyzed mechanism has been developed by the authors in the Maple programming environment. The NLPSolve function from Maple's Optimization package has been used to establish the dimensions of the elements that ensure the minimal mass of the linkage. Finally, some simulation results are presented.
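The authors use Maple's NLPSolve for the constrained minimization; as a rough analogue only, here is a hypothetical Python/SciPy sketch of the same idea: minimize a linkage's total mass over its link lengths subject to a motion requirement on the piston. The mass model, the toy slider-crank stroke constraint, and every constant below are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

RHO = 7850.0            # material density, kg/m^3 (illustrative)
SECTION_AREA = 1e-4     # uniform link cross-section, m^2 (illustrative)
REQUIRED_STROKE = 0.20  # required piston stroke, m (invented constraint)

def total_mass(lengths: np.ndarray) -> float:
    """Total link mass: density * cross-section * summed lengths."""
    return RHO * SECTION_AREA * float(np.sum(lengths))

def piston_stroke(lengths: np.ndarray) -> float:
    """Toy kinematics: a slider-crank's stroke is twice the crank length."""
    crank, _rod = lengths
    return 2.0 * crank

result = minimize(
    total_mass,
    x0=np.array([0.15, 0.40]),           # initial crank and rod lengths, m
    bounds=[(0.05, 0.50), (0.10, 1.00)], # geometric limits on each link
    constraints=[{"type": "eq",
                  "fun": lambda L: piston_stroke(L) - REQUIRED_STROKE}],
)
print("optimal lengths (m):", result.x, "mass (kg):", total_mass(result.x))
```

With an equality constraint present, SciPy defaults to the SLSQP solver, which plays the role NLPSolve plays in the paper.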
Over the last few years, the integration of coding activities for children in K-12 education has flourished. In addition, novel technological tools and programming environments have offered new opportunities and increased the need to design effective learning experiences. This paper presents a design-based research (DBR) approach conducted over two years, based on constructionism-based coding experiences for children, following the four stages of DBR. Three iterations (cycles) were designed and examined in total, with participants aged 8–17 years old, using mixed methods. Over the two years, we conducted workshops in which students used a block-based programming environment (i.e., Scratch) and collaboratively created a socially meaningful artifact (i.e., a game). The study identifies nine design principles that can help us to achieve higher engagement during the coding activity. Moreover, positive attitudes and high motivation were found to result in the better management of cognitive load. Our contribution lies in the theoretical grounding of the results in constructionism and the emerging design principles. In this way, we provide both theoretical and practical evidence of the value of constructionism-based coding activities.
• Design-based research approach to investigate constructionism-based coding activities for children.
• Identify elements of children's engagement in constructionism-based coding activities.
• Theoretical grounding of the findings on constructionism theory.
• Instructional design principles to facilitate constructionism-based coding activities for children.
Quantum chemistry is a discipline which relies heavily on very expensive numerical computations. The scaling of correlated wave function methods lies, in their standard implementation, between O(N^5) and O(e^N), where N is proportional to the system size. Therefore, performing accurate calculations on chemically meaningful systems requires (i) approximations that can lower the computational scaling and (ii) efficient implementations that take advantage of modern massively parallel architectures. Quantum Package is an open-source programming environment for quantum chemistry specially designed for wave function methods. Its main goal is the development of determinant-driven selected configuration interaction (sCI) methods and multireference second-order perturbation theory (PT2). The determinant-driven framework allows the programmer to include any arbitrary set of determinants in the reference space, hence providing greater methodological freedom. The sCI method implemented in Quantum Package is based on the CIPSI (Configuration Interaction using a Perturbative Selection made Iteratively) algorithm, which complements the variational sCI energy with a PT2 correction. Additional external plugins have recently been added to perform calculations with multireference coupled cluster theory and range-separated density-functional theory. All the programs are developed with the IRPF90 code generator, which simplifies collaborative work and the development of new features. Quantum Package strives to allow easy implementation and experimentation of new methods, while making parallel computation as simple and efficient as possible on modern supercomputer architectures. Currently, the code routinely runs on roughly 2,000 CPU cores, with tens of millions of determinants in the reference space; moreover, we have been able to scale up to 12,288 cores to test its parallel efficiency. In the present manuscript, we also introduce some key new developments: (i) a renormalized second-order perturbative correction for efficient extrapolation to the full CI limit and (ii) a stochastic version of the CIPSI selection performed simultaneously with the PT2 calculation at no extra cost.
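To make the CIPSI idea concrete, here is a deliberately toy Python sketch, not Quantum Package's determinant-driven code: the "Hamiltonian" is a random symmetric matrix, the reference space is diagonalized variationally, each external row gets an Epstein-Nesbet-style second-order contribution, and the largest contributors are selected into the reference space for the next iteration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Toy stand-in for a Hamiltonian in a determinant basis:
# diagonally dominant random symmetric matrix.
H = rng.normal(scale=0.05, size=(n, n))
H = (H + H.T) / 2.0 + np.diag(np.sort(rng.uniform(0.0, 10.0, n)))

selected = [0]             # reference space starts from one "determinant"
for it in range(10):
    # Variational step: diagonalize H restricted to the reference space.
    sub = H[np.ix_(selected, selected)]
    evals, evecs = np.linalg.eigh(sub)
    e_var, c = evals[0], evecs[:, 0]

    # Perturbative step: second-order contribution of each external
    # determinant a:  e_a = |<a|H|Psi>|^2 / (E_var - H_aa).
    external = np.setdiff1d(np.arange(n), selected)
    coupling = H[np.ix_(external, selected)] @ c
    e_pt2 = coupling**2 / (e_var - H[external, external])

    print(f"iter {it}: dets={len(selected):4d}  "
          f"E_var={e_var:.6f}  E_var+PT2={e_var + e_pt2.sum():.6f}")
    if abs(e_pt2.sum()) < 1e-10:
        break
    # Selection step: promote the external determinants with the largest
    # perturbative contributions into the reference space.
    order = np.argsort(np.abs(e_pt2))[::-1]
    selected.extend(external[order[:20]].tolist())

print("exact ground state:", np.linalg.eigvalsh(H)[0])
```

In such a toy run, E_var + PT2 tracks the full diagonalization result far earlier than E_var alone, which is the behavior the extrapolation to the full CI limit exploits.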
The Hurst exponent makes it possible to classify time series according to their level of stochasticity. To calculate it, rescaled range analysis (R/S analysis) is applied. The article presents the R/S analysis algorithm. A program has been developed in the LabVIEW programming environment, and its performance has been verified on model signals.
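Since the paper's implementation is in LabVIEW, the following Python/NumPy sketch is only a hypothetical restatement of the classical R/S procedure it describes: split the series into blocks of size n, compute the rescaled range R/S in each block, and estimate H as the slope of log(mean R/S) versus log(n).

```python
import numpy as np

def hurst_rs(series, min_window: int = 8) -> float:
    """Estimate the Hurst exponent via rescaled range (R/S) analysis."""
    series = np.asarray(series, dtype=float)
    sizes, rs_means = [], []
    n = min_window
    while n <= len(series) // 2:
        rs_vals = []
        for start in range(0, len(series) - n + 1, n):
            block = series[start:start + n]
            z = np.cumsum(block - block.mean())  # cumulative mean-adjusted sum
            r = z.max() - z.min()                # range of the cumulative sum
            s = block.std()                      # block standard deviation
            if s > 0.0:
                rs_vals.append(r / s)
        if rs_vals:
            sizes.append(n)
            rs_means.append(np.mean(rs_vals))
        n *= 2                                   # double the window each pass
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_means), 1)
    return slope

# Model-signal check: white noise should give H near 0.5 (uncorrelated);
# persistent signals give H > 0.5, anti-persistent ones H < 0.5.
rng = np.random.default_rng(1)
print(hurst_rs(rng.normal(size=4096)))
```

The paper's performance check on model signals presumably follows the same logic: feed signals with a known H and confirm the estimate.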
50 Years of Data Science. Donoho, David
Journal of Computational and Graphical Statistics, 10/2017, Volume 26, Issue 4
Journal Article, Peer reviewed, Open access
More than 50 years ago, John Tukey called for a reformation of academic statistics. In "The Future of Data Analysis," he pointed to the existence of an as-yet unrecognized science, whose subject of interest was learning from data, or "data analysis." Ten to 20 years ago, John Chambers, Jeff Wu, Bill Cleveland, and Leo Breiman independently once again urged academic statistics to expand its boundaries beyond the classical domain of theoretical statistics; Chambers called for more emphasis on data preparation and presentation rather than statistical modeling; and Breiman called for emphasis on prediction rather than inference. Cleveland and Wu even suggested the catchy name "data science" for this envisioned field. A recent and growing phenomenon has been the emergence of "data science" programs at major universities, including UC Berkeley, NYU, MIT, and most prominently, the University of Michigan, which in September 2015 announced a $100M "Data Science Initiative" that aims to hire 35 new faculty. Teaching in these new programs has significant overlap in curricular subject matter with traditional statistics courses; yet many academic statisticians perceive the new programs as "cultural appropriation." This article reviews some ingredients of the current "data science moment," including recent commentary about data science in the popular media, and about how/whether data science is really different from statistics. The now-contemplated field of data science amounts to a superset of the fields of statistics and machine learning, which adds some technology for "scaling up" to "big data." This chosen superset is motivated by commercial rather than intellectual developments. Choosing in this way is likely to miss out on the really important intellectual event of the next 50 years. Because all of science itself will soon become data that can be mined, the imminent revolution in data science is not about mere "scaling up," but instead the emergence of scientific studies of data analysis science-wide. In the future, we will be able to predict how a proposal to change data analysis workflows would impact the validity of data analysis across all of science, even predicting the impacts field-by-field. Drawing on work by Tukey, Cleveland, Chambers, and Breiman, I present a vision of data science based on the activities of people who are "learning from data," and I describe an academic field dedicated to improving that activity in an evidence-based manner. This new field is a better academic enlargement of statistics and machine learning than today's data science initiatives, while being able to accommodate the same short-term goals. Based on a presentation at the Tukey Centennial Workshop, Princeton, NJ, September 18, 2015.
Psi4NumPy demonstrates the use of efficient computational kernels from the open-source Psi4 program through the popular NumPy library for linear algebra in Python to facilitate the rapid development of clear, understandable Python computer code for new quantum chemical methods, while maintaining a relatively low execution time. Using these tools, reference implementations have been created for a number of methods, including self-consistent field (SCF), SCF response, many-body perturbation theory, coupled-cluster theory, configuration interaction, and symmetry-adapted perturbation theory. Furthermore, several reference codes have been integrated into Jupyter notebooks, allowing background, underlying theory, and formula information to be associated with the implementation. Psi4NumPy tools and associated reference implementations can lower the barrier for future development of quantum chemistry methods. These implementations also demonstrate the power of the hybrid C++/Python programming approach employed by the Psi4 program.
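As a flavor of this approach, not the actual Psi4NumPy reference code, here is a sketch of how an MP2-style correlation energy collapses into a few np.einsum calls once the integrals are available as NumPy arrays. Random tensors stand in for the MO energies and (ia|jb) integrals that Psi4 would normally supply.

```python
import numpy as np

# Hypothetical small system: occupied and virtual orbital counts.
nocc, nvir = 5, 15
rng = np.random.default_rng(0)

# Stand-ins for Psi4-provided data: MO energies and the (ov|ov) block of
# two-electron integrals in chemists' notation, (ia|jb).
eps_occ = np.sort(rng.uniform(-2.0, -0.5, nocc))
eps_vir = np.sort(rng.uniform(0.1, 2.0, nvir))
ovov = rng.normal(scale=0.05, size=(nocc, nvir, nocc, nvir))
ovov = ovov + ovov.transpose(2, 3, 0, 1)     # enforce (ia|jb) = (jb|ia)

# Energy denominators e_i - e_a + e_j - e_b, built by broadcasting.
denom = (eps_occ[:, None, None, None] - eps_vir[None, :, None, None]
         + eps_occ[None, None, :, None] - eps_vir[None, None, None, :])

# Closed-shell MP2 correlation energy:
#   E2 = sum_{iajb} (ia|jb) * [2 (ia|jb) - (ib|ja)] / (e_i - e_a + e_j - e_b)
t = ovov / denom
e_mp2 = (2.0 * np.einsum("iajb,iajb->", t, ovov)
         - np.einsum("iajb,ibja->", t, ovov))
print(f"toy MP2-style correlation energy: {e_mp2:.6f}")
```

The readability win is the point: the einsum strings mirror the spin-summed formula term by term, which is what makes such code useful in a Jupyter notebook alongside the theory it implements.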
A survey of software refactoring. Mens, T.; Tourwé, T.
IEEE Transactions on Software Engineering, 02/2004, Volume 30, Issue 2
Journal Article, Peer reviewed, Open access
We provide an extensive overview of existing research in the field of software refactoring. This research is compared and discussed based on a number of different criteria: the refactoring activities that are supported, the specific techniques and formalisms that are used for supporting these activities, the types of software artifacts that are being refactored, the important issues that need to be taken into account when building refactoring tool support, and the effect of refactoring on the software process. A running example is used to explain and illustrate the main concepts.
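The survey's running example is its own; purely as a stand-in illustration of the kind of refactoring activity it catalogs, here is a minimal "extract method" in Python, where parsing is pulled out of a function that mixed parsing with totaling, leaving observable behavior unchanged.

```python
# Before: one function mixes line parsing with totaling.
def report_total_before(lines):
    total = 0.0
    for line in lines:
        parts = line.split(",")
        if len(parts) != 2 or not parts[0].strip() or not parts[1].strip():
            continue
        total += float(parts[1])
    return total

# After "extract method": parsing gets its own well-named helper. The
# observable behavior is identical, which is the defining property of a
# refactoring.
def parse_entry(line):
    """Return (name, amount) for a valid 'name,amount' line, else None."""
    parts = line.split(",")
    if len(parts) != 2 or not parts[0].strip() or not parts[1].strip():
        return None
    return parts[0].strip(), float(parts[1])

def report_total_after(lines):
    total = 0.0
    for line in lines:
        entry = parse_entry(line)
        if entry is not None:
            _name, amount = entry
            total += amount
    return total

# Behavior preservation check: both versions agree on the same input.
data = ["rent, 1200.0", "bad line", "food, 300.5"]
assert report_total_before(data) == report_total_after(data) == 1500.5
```

Tool support for transformations like this, and guarantees that they preserve behavior, are among the issues the survey treats in depth.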