This document presents a comprehensive study of "Atom", a popular open-source code editor with many built-in features that support a multitude of programming environments and provide a more productive toolset for developers.
Single-cell RNA sequencing (scRNA-seq) can be used to gain insights into cellular heterogeneity within complex tissues. However, various technical artifacts can be present in scRNA-seq data and should be assessed before performing downstream analyses. While several tools have been developed to perform individual quality control (QC) tasks, they are scattered in different packages across several programming environments. Here, to streamline the process of generating and visualizing QC metrics for scRNA-seq data, we built the SCTK-QC pipeline within the singleCellTK R package. The SCTK-QC workflow can import data from several single-cell platforms and preprocessing tools and includes steps for empty droplet detection, generation of standard QC metrics, prediction of doublets, and estimation of ambient RNA. It can run from the command line, within the R console, on a cloud platform, or through an interactive graphical user interface. Overall, the SCTK-QC pipeline streamlines and standardizes the process of performing QC for scRNA-seq data.
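As a rough illustration of the kind of console workflow the pipeline exposes, the sketch below assumes the importCellRanger(), runDropletQC(), runCellQC(), and reportCellQC() functions described in the singleCellTK documentation; exact function and argument names may differ between package versions, and the input path is a placeholder.

```r
# Illustrative sketch only: function names follow the singleCellTK
# documentation as we understand it and may differ between versions.
library(singleCellTK)

# Import raw droplet-level counts from a Cell Ranger run (path is a placeholder)
sce <- importCellRanger(cellRangerDirs = "sample1/")

# Droplet-level QC, including empty-droplet detection
sce <- runDropletQC(sce)

# Cell-level QC: standard metrics, doublet prediction, ambient RNA estimation
sce <- runCellQC(sce)

# Summarize the computed QC metrics in an HTML report
reportCellQC(sce)
```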
The design of new materials with useful properties is becoming increasingly important. The machine-learning tools Materials Genome Integration System Phase and Property Analysis (MIPHA) and rMIPHA (based on the R programming environment) have been independently developed to accelerate the process of materials discovery via a data-driven materials research approach. In the present work, MIPHA and rMIPHA are applied to steel for machine-learning-based 2D/3D microstructural analysis, direct property prediction, and properties-to-microstructure inverse analysis. The results demonstrate that the prediction models deliver satisfactory performance. The inverse exploration of microstructures related to desired target properties (e.g., stress–strain curve, tensile strength, and total elongation) was realized. MIPHA and rMIPHA are still being improved, and microstructure-to-processing inverse analysis is expected to be realized in the future.
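MIPHA and rMIPHA are not documented here, so the R sketch below only illustrates the general pattern the abstract describes: fit a forward model from microstructural descriptors to a target property, then search candidate microstructures whose predicted property lies near a desired target. All data, variable names, and the random-forest model are illustrative assumptions, not the MIPHA/rMIPHA interface.

```r
# Generic illustration of forward prediction + inverse exploration;
# this is NOT the MIPHA/rMIPHA API, just the underlying idea.
library(randomForest)

# Hypothetical data: microstructural descriptors paired with tensile strength
set.seed(1)
descriptors <- data.frame(
  ferrite_fraction = runif(200, 0.2, 0.9),
  grain_size_um    = runif(200, 2, 30)
)
tensile_strength <- 400 + 600 * (1 - descriptors$ferrite_fraction) -
  5 * descriptors$grain_size_um + rnorm(200, sd = 20)

# Forward model: microstructure -> property
fit <- randomForest(x = descriptors, y = tensile_strength)

# Inverse exploration: scan candidate microstructures and keep those whose
# predicted strength is near a desired target (e.g., 800 MPa)
grid <- expand.grid(
  ferrite_fraction = seq(0.2, 0.9, by = 0.05),
  grain_size_um    = seq(2, 30, by = 2)
)
grid$pred  <- predict(fit, grid)
candidates <- grid[abs(grid$pred - 800) < 10, ]
head(candidates)
```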
R is an open‐source programming environment for statistical computing and graphics, extended by numerous contributed packages. The packages currently used for biodiversity research each address limited, particular aspects of biodiversity, and most focus on the number and abundance of species.
I present an r package named adiv that provides additional methods to measure and analyse biodiversity. adiv contains approaches to quantify species‐based, trait‐based (functional) and phylogenetic diversity (a) within communities (α diversity), (b) between communities (β diversity) and (c) partitioned over space and time (α, β and γ levels of diversity). The partitioning approaches make it possible to evaluate whether the observed levels of α and β diversity could have been obtained by chance. Moreover, groups of biological entities (e.g. species of the same clade or with similar biological characteristics) that drive each level of diversity (α, β and γ) can be identified via ordination analyses.
Although the package focuses on interspecific diversity in its current state, the developed approaches can also be applied to analyse intraspecific diversity or, at another level, ecosystem diversity. More generally, the functions can be applied in any discipline interested in the concept of diversity, such as economics or linguistics. Indeed, all available approaches can be easily applied at other scales and to other disciplines provided that the data have the required format: a matrix of abundance or presence/absence data of some entities in some collections and information on the differences between the entities.
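To make the required data format concrete, the base-R example below computes Rao's quadratic entropy, Q = Σᵢⱼ pᵢ pⱼ dᵢⱼ, one classical index of this family, from an abundance matrix and a between-entity dissimilarity matrix. adiv provides dedicated, validated functions for this and many related indices; the code is only a format illustration, not the package's interface.

```r
# Collections (rows) x entities (columns): abundance data
abund <- matrix(c(10, 5, 0,
                   2, 8, 4),
                nrow = 2, byrow = TRUE,
                dimnames = list(c("site1", "site2"), c("spA", "spB", "spC")))

# Pairwise dissimilarities between entities (functional or phylogenetic)
d <- matrix(c(0.0, 0.4, 0.9,
              0.4, 0.0, 0.7,
              0.9, 0.7, 0.0), nrow = 3,
            dimnames = list(colnames(abund), colnames(abund)))

# Rao's quadratic entropy per collection: Q = sum_ij p_i * p_j * d_ij
rao_Q <- apply(abund, 1, function(x) {
  p <- x / sum(x)
  as.numeric(t(p) %*% d %*% p)
})
rao_Q
```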
adiv aims to complement existing r packages to provide scientists with a wide variety of diversity indices, as each index reflects a very specific facet of biodiversity. adiv will grow in the future to integrate as many validated approaches for biodiversity analysis as possible, not yet available in r. As it includes both traditional and recent viewpoints on how biodiversity should be evaluated, adiv offers a promising platform where methods to analyse biodiversity can be developed and compared in terms of their statistical behaviour and biological relevance. Applications of the most relevant tools for a given study aim will eventually improve research on human‐driven variations in biodiversity.
The implementation of software for the rapid assessment of uncertainty in the metrological control of measuring instruments used in the field of state metrological control and supervision was considered, with application to the calibration and evaluation of the actual metrological characteristics of operating viscometers. The use of EXCEL and other software within a special methodology we developed for automating uncertainty calculations was examined; this significantly simplified the work and reduced the time spent by specialists in the metrological laboratory. Basic requirements for comparing software for calculating and estimating uncertainty within the mathematical models of validated techniques are determined. The experimental results of the comparative analysis, taking the set tasks into account, are presented, identifying the most common and effective universal software for calculating the uncertainty of experimental results. The object of the study is universal software for estimating the uncertainty of measurements and other metrological characteristics of viscometers, structured by type of measurement, in a graphical programming environment. The software can be used both in laboratories and on industrial premises: the workplace can be a metrological laboratory, whose working area is a bench with instruments, or a production room. As a rule, accredited laboratories do not have dedicated metrologists or specialists who fully understand the foundations of mathematical statistics and the concept of measurement uncertainty, and the complexity and volume of the calculations required to assess measurement uncertainty demand significant laboratory resources.
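As a minimal illustration of the kind of calculation such software automates, the R snippet below evaluates a GUM-style uncertainty budget for repeated viscometer readings: a Type A component from the spread of the readings, a Type B component from an assumed instrument resolution, and an expanded uncertainty with coverage factor k = 2. All numerical values are invented for illustration.

```r
# GUM-style uncertainty budget for repeated viscometer readings (values invented)
readings <- c(101.2, 101.5, 101.1, 101.4, 101.3)   # mPa*s

n      <- length(readings)
mean_v <- mean(readings)

# Type A: standard uncertainty of the mean from repeated observations
u_A <- sd(readings) / sqrt(n)

# Type B: assumed resolution of 0.1 mPa*s, rectangular distribution
resolution <- 0.1
u_B <- (resolution / 2) / sqrt(3)

# Combined and expanded uncertainty (coverage factor k = 2, ~95 % coverage)
u_c <- sqrt(u_A^2 + u_B^2)
U   <- 2 * u_c

cat(sprintf("Result: %.2f +/- %.2f mPa*s (k = 2)\n", mean_v, U))
```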
This study aims to advocate that a visual programming environment offering graphical items and states of a computational problem could be helpful in supporting programming learning with computational problem-solving. A visual problem-solving environment for programming learning was developed, and 158 college students participated in a computational problem-solving activity. The students' activities of designing, composing, and testing solutions were recorded as log data for later analysis. To initially unveil the students' practice and strategies exhibited in the visual problem-solving environment, this study proposed several indicators to quantitatively represent students' computational practice (Sequence, Selection, Simple iteration, Nested iteration, and Testing), computational design (Problem decomposition, Abutment composition, and Nesting composition), and computational performance (Goal attainment and Program size). Using cluster analysis, several empirical patterns regarding the students' programming learning with computational problem-solving were identified. Furthermore, comparisons of computational design and computational performance among the different patterns of computational practice were conducted. Considering the relations of students' computational practice to computational design and performance, evidence-based suggestions on the design of supportive programming environments for novice programmers are discussed.
•A visual problem-solving environment was proposed to support programming learning.
•Students exhibited different patterns of computational practice in the environment.
•Patterns of computational practice were correlated with computational design and performance.
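The clustering step described above can be reproduced in spirit with standard tools. The R sketch below, with invented indicator values and illustrative column names, groups students by their computational-practice indicators using k-means; this is one common choice of cluster-analysis method and may differ from the study's exact procedure.

```r
# Illustrative cluster analysis of computational-practice indicators
# (data and indicator values are invented; the study's method may differ).
set.seed(42)
students <- data.frame(
  sequence    = rpois(158, 12),
  selection   = rpois(158, 4),
  simple_iter = rpois(158, 3),
  nested_iter = rpois(158, 1),
  testing     = rpois(158, 6)
)

# Standardize indicators so each contributes equally to the distance metric
scaled <- scale(students)

# k-means with, e.g., 3 clusters as candidate practice patterns
km <- kmeans(scaled, centers = 3, nstart = 25)

# Cluster sizes and mean indicator profile per cluster
table(km$cluster)
aggregate(students, by = list(cluster = km$cluster), FUN = mean)
```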
We conducted a training experiment on computer coding aimed at probing the enhancement of ICT skills among pre-service teachers in Morocco. To that end, we developed and implemented training sessions using a visual programming tool (Scratch) with 63 prospective teachers at the Faculty of Educational Sciences (FSE) and the Regional Center for Education and Training Professions (CRMEF) in Nador, Morocco. During these sessions, trainees were introduced to algorithmic thinking and implemented teaching sequences in their specialty subjects using Scratch. Pre and post surveys were conducted to measure the evolution of the trainees' perceptions towards the integration of computer coding in the teaching and learning of their specialties. The analysis of the surveys showed the potential of integrating computer coding in the development of learners' transversal skills. The training revealed different possibilities of exploiting visual block-based programming environments in the teaching and learning process.
Artificial intelligence-based technology has the potential to augment the work of human programmers. This article discusses some capabilities built around generative artificial intelligence and large language models that impact programming education.
Microclimates are the thermal and hydric environments organisms actually experience, and estimates of them are increasingly needed in environmental research. The availability of global weather and terrain datasets, together with increasingly sophisticated microclimate modelling tools, makes the prospect of a global, web‐based microclimate estimation procedure feasible.
We have developed such an approach, implemented in the r programming environment, which integrates existing r packages for obtaining terrain and sub‐daily atmospheric forcing data (elevatr and rncep) and two complementary microclimate modelling packages (NicheMapR and microclima). The procedure can be used to generate NicheMapR’s hourly time‐series outputs of above‐ and below‐ground conditions, including convective and radiative environments, soil temperature, soil moisture and snow cover, for a single point, using microclima to account for local topographic and vegetation effects. Alternatively, it can use microclima to produce high‐resolution grids of near‐surface temperatures, using NicheMapR to derive calibration coefficients normally obtained from experimental data.
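A rough sketch of how such an integrated call might look in R is given below; it assumes the get_dem() and runauto() helpers described in the microclima documentation, which drive NicheMapR with terrain and atmospheric forcing inputs. The coordinates, dates, and habitat label are placeholders, and argument names may differ between package versions.

```r
# Sketch only: argument names follow the microclima documentation as we
# understand it and may differ between package versions.
library(microclima)
library(NicheMapR)

# Digital elevation model for the area of interest (placeholder coordinates)
dem <- get_dem(lat = 50.17, long = -5.10, resolution = 30)

# Integrated NicheMapR + microclima run producing hourly near-surface
# temperature grids over the chosen period (placeholder dates and habitat)
mc <- runauto(
  r       = dem,
  dstart  = "01/06/2010",
  dfinish = "30/06/2010",
  hgt     = 0.05,                     # height above ground (m)
  l       = NA, x = NA,               # vegetation parameters estimated if NA
  habitat = "Barren or sparsely vegetated"
)

# mc contains the hourly gridded outputs; see the package documentation
# for the exact components returned.
```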
We validate this integrated approach against a series of microclimate observations used previously in the tests of the respective models and show equivalent performance.
It is thus now feasible to produce realistic estimates of microclimate at fine (<30 m) spatial and temporal scales anywhere on Earth, from 1957 to the present.