Systematic reviews are vital to the pursuit of evidence-based medicine within healthcare. Screening titles and abstracts (T&Ab) for inclusion in a systematic review is an intensive, and often collaborative, step. The use of appropriate tools is therefore important. In this study, we identified and evaluated the usability of software tools that support T&Ab screening for systematic reviews within healthcare research.
We identified software tools using three search methods: a web-based search; a search of the online "systematic review toolbox"; and screening of references in existing literature. We included tools that were accessible and available for testing at the time of the study (December 2018), did not require specific computing infrastructure and provided basic screening functionality for systematic reviews. Key properties of each software tool were identified using a feature analysis adapted for this purpose. This analysis included a weighting developed by a group of medical researchers, thereby prioritising the most relevant features. The highest scoring tools from the feature analysis were then included in a user survey, in which we further investigated the suitability of the tools for supporting T&Ab screening amongst systematic reviewers working in medical research.
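The weighted feature analysis described above can be sketched as a simple scoring function. The feature names, weights, and scores below are illustrative assumptions, not the study's actual instrument; the only point is how reviewer-assigned weights turn per-feature scores into a percentage of the maximum attainable score.

```python
# Hypothetical weighted feature analysis: each tool gets a score in [0, 1]
# per feature, features carry reviewer-assigned weights, and the total is
# normalised to a percentage of the maximum attainable score.
def feature_score(tool_features, weights):
    total = sum(weights[f] * tool_features.get(f, 0.0) for f in weights)
    maximum = sum(weights.values())
    return 100.0 * total / maximum

# Illustrative weights and scores (not the study's real data)
weights = {"deduplication": 3, "blinding": 2, "export": 1}
tool = {"deduplication": 1.0, "blinding": 1.0, "export": 0.5}
print(round(feature_score(tool, weights), 1))  # 91.7
```

Under this kind of scheme, a cut-off such as the study's 75% threshold simply selects tools whose normalised weighted score exceeds the bar.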
Fifteen tools met our inclusion criteria. They varied considerably in cost, scope and intended user community. Six of the identified tools (Abstrackr, Colandr, Covidence, DRAGON, EPPI-Reviewer and Rayyan) scored higher than 75% in the feature analysis and were included in the user survey. Of these, Covidence and Rayyan were the most popular with the survey respondents. Their usability scored highly across a range of metrics, with all surveyed researchers (n = 6) stating that they would be likely (or very likely) to use these tools in the future.
Based on this study, we would recommend Covidence and Rayyan to systematic reviewers looking for suitable, easy-to-use tools to support T&Ab screening within healthcare research. These two tools consistently demonstrated good alignment with user requirements. We acknowledge, however, the role of some of the other tools we considered in providing more specialist features that may be of great importance to many researchers.
Physical designers typically employ heuristics to solve challenging problems in global routing. However, these heuristic solutions are not adaptable to ever-changing fabrication demands, and their effectiveness is limited by the experience and creativity of designers. Reinforcement learning (RL) is an effective method for tackling sequential optimization problems owing to its ability to adapt and learn through trial and error; it can therefore produce policies that handle complex tasks. This work presents an RL framework for global routing that incorporates a self-learning model called RL-Ripper. The primary function of RL-Ripper is to identify the nets that should be ripped up and rerouted in order to decrease the total number of short violations. We show that the proposed RL-Ripper framework reduces the number of short violations on the ISPD 2018 benchmarks compared to the state-of-the-art global router CUGR. Moreover, RL-Ripper reduced the total number of short violations after the first iteration of detailed routing over the baseline while remaining on par in wirelength, VIAs, and runtime. The framework's major impact is providing a novel learning-based approach to global routing that can be replicated for newer technologies.
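The core idea of learning which nets to rip up and reroute can be caricatured with tabular RL. This is a toy sketch under invented assumptions (two nets, a fixed reward equal to the number of shorts removed), not the authors' implementation, which learns over real routing state:

```python
import random

# Toy rip-up-and-reroute selection: a value table estimates, per net, the
# expected reduction in short violations from rerouting it; nets are chosen
# epsilon-greedily and estimates are updated from the observed reward.
def select_net(values, epsilon, rng):
    if rng.random() < epsilon:
        return rng.choice(list(values))      # explore: try a random net
    return max(values, key=values.get)       # exploit: best estimate so far

def update(values, net, reward, alpha=0.5):
    values[net] += alpha * (reward - values[net])

def observed_reward(net):
    # pretend rerouting net_b removes two shorts and net_a removes none
    return 2.0 if net == "net_b" else 0.0

rng = random.Random(0)
values = {"net_a": 0.0, "net_b": 0.0}
for net in list(values):                     # try each net once to initialise
    update(values, net, observed_reward(net))
for _ in range(20):
    net = select_net(values, epsilon=0.2, rng=rng)
    update(values, net, observed_reward(net))
print(max(values, key=values.get))  # net_b
```

The trial-and-error loop is what lets such a policy adapt when the reward structure (i.e. the fabrication rules) changes, which is the advantage the abstract claims over fixed heuristics.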
Low-cost air pollutant sensors suffer from several interferences due to variations in climatic conditions. Recent studies have sought calibration solutions based on different regression and classification machine-learning algorithms. The present work brings together the implementation of these algorithms and the extraction of their performance metrics in a single open-source tool. Both the input data and the parameters for each algorithm are configured automatically. This feature makes the tool compatible with any input dataset and removes the need to interact with complex code.
In this paper, the structure of an open-source tool for evaluating the calibration techniques used in low-cost air pollutant monitors is introduced. Different algorithms can be configured and used, such as regression techniques and machine-learning classifiers. In addition, the tool was implemented to be compatible with any input dataset. These features remove the need for the user to interact with complex code.
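The simplest of the regression techniques such a tool might evaluate is an ordinary least-squares calibration of the raw sensor signal against a reference monitor. The data below are made up for illustration; real calibrations would also include climatic covariates such as temperature and humidity:

```python
# Minimal linear calibration sketch: fit ref ~ slope * raw + intercept
# by ordinary least squares from co-located sensor/reference readings.
def fit_linear(raw, ref):
    n = len(raw)
    mx = sum(raw) / n
    my = sum(ref) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(raw, ref))
    sxx = sum((x - mx) ** 2 for x in raw)
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

raw = [10, 20, 30, 40]   # low-cost sensor output (illustrative)
ref = [12, 22, 32, 42]   # co-located reference monitor (illustrative)
slope, intercept = fit_linear(raw, ref)
print(slope, intercept)  # 1.0 2.0
```

Performance metrics (e.g. RMSE on held-out data) computed for each candidate algorithm are then what the tool reports for comparison.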
Cascading failure in electric power systems is a complicated problem for which a variety of models, software tools, and analytical tools have been proposed but are difficult to verify. Benchmarking and validation are necessary to understand how closely a particular modeling method corresponds to reality, what engineering conclusions may be drawn from a particular tool, and what improvements need to be made to the tool in order to reach valid conclusions. The community needs to develop test cases tailored to cascading failure, which are central to practical benchmarking and validation. In this paper, the IEEE PES working group on cascading failure reviews and synthesizes how benchmarking and validation can be done for cascading failure analysis, summarizes and reviews the cascading test cases available to the international community, and makes recommendations for improving the state of the art.
Recent developments in data science and machine learning have inspired a new wave of research into data-driven modeling for mathematical optimization of process applications. This paper first considers essential conditions for robustness to uncertainties and accurate extrapolation, which are required to integrate surrogates into process optimization. Next, we consider two perspectives for developing process engineering surrogates: a surrogate-led and a mathematical programming-led approach. As these data-driven surrogate models must be integrated into a larger process optimization problem, we discuss the verification problem, i.e., checking that the optimum of the surrogate corresponds to the optimum of the truth model. The paper investigates two case studies on surrogate-based optimization for heat exchanger network synthesis and drill scheduling.
•We consider essential conditions for robustness to uncertainties and accurate extrapolation.
•We investigate two perspectives for developing process engineering surrogates.
•For the surrogate-led perspective, we discuss heat exchanger network synthesis.
•For the mathematical programming-led approach, we discuss drill scheduling.
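The verification problem mentioned above can be illustrated in one dimension. The truth function, samples, and tolerance below are all invented for the sketch: fit a quadratic surrogate through three samples of the truth model, optimise the surrogate analytically, and then check the truth value at the surrogate's optimiser against the best sampled truth value.

```python
import math

# Made-up "truth" model: roughly quadratic with a small nonlinearity
def truth(x):
    return (x - 1.0) ** 2 + 0.05 * math.sin(5 * x)

def quad_vertex(xs, ys):
    # coefficients of the parabola through three points (Cramer's rule),
    # returning its stationary point x* = -b / (2a)
    (x0, x1, x2), (y0, y1, y2) = xs, ys
    denom = (x0 - x1) * (x0 - x2) * (x1 - x2)
    a = (x2 * (y1 - y0) + x1 * (y0 - y2) + x0 * (y2 - y1)) / denom
    b = (x2**2 * (y0 - y1) + x1**2 * (y2 - y0) + x0**2 * (y1 - y2)) / denom
    return -b / (2 * a)

xs = (0.0, 1.0, 2.0)
ys = tuple(truth(x) for x in xs)
x_star = quad_vertex(xs, ys)        # optimum of the quadratic surrogate
gap = truth(x_star) - min(ys)       # verification: evaluate the truth model
print(round(x_star, 3), round(gap, 5))
```

If the gap exceeded a tolerance, the surrogate would be refined (e.g. by resampling near its optimum) before trusting its optimum as the process optimum.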
Nowadays, collaborative learning in a virtual environment is highly relevant, especially in distance education and during the ongoing Covid-19 pandemic. Higher education institutions are therefore striving to develop software tools for collaborative online learning that support large numbers of students working simultaneously. For this purpose, it is important to collect information about the ongoing process of the collaborative work, especially learning and interaction data, e.g. how the students interact with the other group members and whether or how they exchange information with the teachers. These data are then analysed with learning analytics methods within the software tools, and the results are used to support learners and teachers. In this paper, an architecture is proposed that enables collaborative writing by hundreds of students divided into many groups. It uses the synergy of the learning environment Moodle and the online editor Etherpad Lite. The required software tools can be easily integrated into it. A prototype of the architecture and the first required methods of data collection and learning analytics have already been developed and successfully tested in a first pilot deployment with about 300 students. The long-term goal of this project is to support collaborative writing in near real time using self-developed software.
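A first learning-analytics step over such interaction data could look like the sketch below. The record shape (one event per pad edit, tagged with group and author) and the event data are assumptions for illustration, not the project's actual schema:

```python
from collections import Counter

# Hypothetical interaction records: one event per collaborative-writing edit
events = [
    {"group": "G1", "author": "alice"},
    {"group": "G1", "author": "bob"},
    {"group": "G1", "author": "alice"},
    {"group": "G2", "author": "carol"},
]

# Count contributions per student, e.g. to flag inactive group members
edits_per_author = Counter(e["author"] for e in events)
print(edits_per_author["alice"])  # 2
```

Aggregations like this, recomputed continuously as edit events stream in from Etherpad Lite, are the kind of near-real-time feedback the abstract's long-term goal describes.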
Nowadays, learning in groups in a virtual environment is highly relevant, all the more so in distance teaching and during the ongoing Covid-19 pandemic. Universities are therefore striving to develop software tools for collaborative online learning and for supporting large numbers of students working simultaneously. To this end, it is important to collect information about the ongoing process of the collaborative work, in particular learning and interaction data, i.e. data showing how the students interact with the other group members and whether or how they exchange information with the teachers. These collected data are then analysed with learning analytics methods using the software tools, and the results are used to support students and teachers. The present work proposes an architecture that facilitates studying in the area of collaborative writing for hundreds of students divided into many groups. It exploits the synergy of the learning environment Moodle and the online editor Etherpad Lite. The required software tools can be easily integrated into it. A prototype of the architecture and initial fundamental methods of data collection and learning analytics have already been developed and successfully tested in a first pilot deployment with about 300 students. The long-term goal of the project is to support collaborative writing in near real time using self-developed software.
popsicleR workflow. The package is composed of seven main functions to perform exploration of quality-control metrics, filtering of low-quality cells, identification of cell doublets, data ...normalization, removal of technical and biological biases, cell clustering, and cell annotation.
•Pre-processing of single cell RNA-seq data requires both computational skills and biological sensibility.
•Effective pre-processing requires the manual combination of different computational strategies to quantify QC metrics.
•Currently, no set of methods has been unanimously agreed upon for pre-processing single cell RNA-seq data.
•popsicleR is an R package for the interactive pre-processing and quality control of single cell RNA-seq data.
•popsicleR's main functions integrate pre-processing methods derived from widely used computational workflows.
The advent of single-cell sequencing is providing unprecedented opportunities to disentangle tissue complexity and investigate cell identities and functions. However, the analysis of single cell data is a challenging, multi-step process that requires both advanced computational skills and biological sensibility. When dealing with single cell RNA-seq (scRNA-seq) data, the presence of technical artifacts, noise, and biological biases makes it necessary to first identify, and eventually remove, unreliable signals from low-quality cells and unwanted sources of variation that might affect the efficacy of subsequent downstream modules. Pre-processing and quality control (QC) of scRNA-seq data is a laborious process consisting of the manual combination of different computational strategies to quantify QC metrics and define optimal sets of pre-processing parameters.
Here we present popsicleR, an R package to interactively guide both skilled and unskilled command-line users in the pre-processing and QC analysis of scRNA-seq data. The package integrates, into several main wrapper functions, methods derived from widely used pipelines for the estimation of quality-control metrics, filtering of low-quality cells, data normalization, removal of technical and biological biases, and for cell clustering and annotation. popsicleR starts from either the output files of the Cell Ranger pipeline from 10X Genomics or from a feature-barcode matrix of raw counts generated from any scRNA-seq technology. Open-source code, installation instructions, and a case study tutorial are freely available at https://github.com/bicciatolab/popsicleR.
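A typical cell-filtering rule applied during this kind of QC can be sketched in a few lines. The thresholds and toy counts below are illustrative assumptions, not popsicleR's defaults, and popsicleR itself is an R package; the sketch only shows the logic of the filtering step:

```python
# Illustrative QC rule: drop cells with too few detected counts or with a
# high fraction of mitochondrial reads (thresholds are made up).
def passes_qc(counts, mito_counts, min_counts=500, max_mito_frac=0.2):
    total = sum(counts)
    return total >= min_counts and (mito_counts / total) <= max_mito_frac

# cell -> (per-gene counts, mitochondrial counts); values are invented
cells = {
    "cell_1": ([300, 400, 500], 100),   # high counts, ~8% mito  -> keep
    "cell_2": ([100, 50, 80], 10),      # too few counts         -> drop
    "cell_3": ([600, 700, 800], 700),   # ~33% mitochondrial     -> drop
}
kept = [c for c, (cnt, mito) in cells.items() if passes_qc(cnt, mito)]
print(kept)  # ['cell_1']
```

In an interactive workflow, the user would inspect the distribution of these QC metrics first and pick the thresholds accordingly rather than hard-coding them.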
This article describes the author's unsuccessful attempt to use current generative artificial intelligence tools to reduce instructors' workloads by creating course syllabi that reference no-cost resources.
In shotgun proteomics, peptide and protein identification is most commonly conducted using database search engines, the method of choice when reference protein sequences are available. Despite its widespread use, the database-driven approach is limited, mainly because of its static search space. In contrast, de novo sequencing derives peptide sequence information in an unbiased manner, using only the fragment ion information from the tandem mass spectra. In recent years, with improvements in MS instrumentation, various new methods have been proposed for de novo sequencing. This review article provides an overview of existing de novo sequencing algorithms and software tools, ranging from peptide sequencing to sequence-to-protein mapping. Various use cases in which de novo sequencing was successfully applied are described. Finally, limitations of current methods are highlighted and new directions are discussed for a wider acceptance of de novo sequencing in the community.
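The fragment ion information de novo sequencing works from can be made concrete by computing the b- and y-ion mass ladders of a candidate peptide from monoisotopic residue masses. The residue table below is restricted to the amino acids of the example peptide for brevity:

```python
# Monoisotopic residue masses (Da) for the residues of the example peptide
RESIDUE = {"P": 97.05276, "E": 129.04259, "T": 101.04768,
           "I": 113.08406, "D": 115.02694}
PROTON, WATER = 1.00728, 18.01056

def fragment_ladders(peptide):
    b, y = [], []
    prefix = 0.0
    for aa in peptide:                        # b-ions: N-terminal fragments
        prefix += RESIDUE[aa]
        b.append(prefix + PROTON)
    suffix = 0.0
    for aa in reversed(peptide):              # y-ions: C-terminal fragments
        suffix += RESIDUE[aa]
        y.append(suffix + WATER + PROTON)
    return b, y

b_ions, y_ions = fragment_ladders("PEPTIDE")
print(f"{b_ions[0]:.3f}")  # 98.060  (b1 = proline residue + proton)
```

De novo algorithms invert this computation: from the observed m/z ladder in a tandem mass spectrum, they infer the residue masses (and hence the sequence) whose predicted ladders best explain the peaks.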
Wind turbines are complicated systems with different aerodynamic and electromechanical aspects. An integrated platform that covers the design, simulation, and experimental evaluation of wind energy conversion systems is very helpful for designing, developing, and examining the performance of different wind turbine sub-systems. Previous studies lack such a platform, and this study aims to fill that gap. In this study, the blades of a 5.5 kW fixed-speed stall-regulated wind turbine are first designed and then employed in simulation and emulation. Like the simulation setup, the emulator uses the AeroDyn and FAST software tools to model the aerodynamic and mechanical aspects of the turbine in a laboratory environment. The emulator is capable of reproducing the static and dynamic behaviour of the turbine in the laboratory, similar to real turbines. For simulation, the electrical parts are implemented in MATLAB/Simulink, whereas the real electrical parts are used in the emulator. The performance of the turbine with the designed blades is investigated in simulation and emulation considering both a simple hub-height and a turbulent wind profile, generated by the TurbSim software tool based on the IEC standards. Moreover, the start-up process of the wind turbine is evaluated using the wind turbine emulator and the results are discussed.
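The static behaviour such an emulator reproduces follows the standard rotor power relation P = ½·ρ·A·Cp·v³. The blade radius and power coefficient below are illustrative assumptions chosen to land near the stated 5.5 kW class, not the paper's actual design data:

```python
import math

# Rotor power from the actuator-disc relation P = 0.5 * rho * A * Cp * v^3
def rotor_power(radius_m, wind_speed_ms, cp, rho=1.225):
    area = math.pi * radius_m ** 2          # swept area A (m^2)
    return 0.5 * rho * area * cp * wind_speed_ms ** 3

# Illustrative numbers: 3 m blades, 10 m/s wind, Cp = 0.4
p = rotor_power(radius_m=3.0, wind_speed_ms=10.0, cp=0.4)
print(round(p / 1000, 2), "kW")  # 6.93 kW
```

The cubic dependence on wind speed is why the turbulent TurbSim profiles matter: small speed fluctuations translate into large power swings that the emulator must track dynamically.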