The emerging field of the Internet of Things (IoT) offers an unprecedented opportunity for a wide spectrum of applications. However, most applications have been integrating IoT devices through proprietary mechanisms and closed technology stacks. The monolithic, mostly vendor-specific development architecture leads to soaring customization costs and limited component reusability, and it impedes full-fledged IoT applications in cross-organizational, general-purpose, and rapidly changing scenarios. This research intends to provide a coherent architecture that enables interoperable, low-cost, and user-customizable IoT rapid prototyping. Under this architecture, each IoT component, either a physical device or a control logic, is abstracted into an independent web service that is described by a set of transferable states. By concatenating a valid chain of state transfers between web services, IoT components are further assembled into customizable applications. In this research, a Finite-State-Machine (FSM) model driven architecture is established, and a typical implementation of the proposed architecture, i.e., the Hyper Sensor Markup Language (HSML), is provided. We also discuss two practical use cases and related evaluations.
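The core abstraction above can be sketched in a few lines: each component exposes its states and the transfers allowed between them, and an application is a chain of valid transfers across components. This is a minimal illustrative sketch; the class and action names are invented here and are not part of HSML.

```python
# Illustrative sketch of the FSM abstraction: an IoT component (device or
# control logic) is a finite-state machine, and applications are built by
# chaining valid state transfers across components. Names are hypothetical.

class FSMComponent:
    def __init__(self, name, initial, transfers):
        # transfers: {(from_state, action): to_state}
        self.name = name
        self.state = initial
        self.transfers = transfers

    def transfer(self, action):
        key = (self.state, action)
        if key not in self.transfers:
            raise ValueError(f"{self.name}: no transfer {action!r} from {self.state!r}")
        self.state = self.transfers[key]
        return self.state

# Two components composed into a tiny application: a motion sensor whose
# trigger is chained to a lamp, expressed purely as state transfers.
sensor = FSMComponent("sensor", "idle", {("idle", "detect"): "triggered",
                                         ("triggered", "reset"): "idle"})
lamp = FSMComponent("lamp", "off", {("off", "turn_on"): "on",
                                    ("on", "turn_off"): "off"})

def run_chain(chain):
    # chain: ordered (component, action) pairs; the chain is valid only if
    # every transfer is defined for the component's current state.
    for component, action in chain:
        component.transfer(action)

run_chain([(sensor, "detect"), (lamp, "turn_on")])
print(sensor.state, lamp.state)  # triggered on
```

Because each component is independent and only its transferable states are visible, recomposing an application is a matter of rewriting the chain rather than modifying vendor code.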
• IoT development trends and open challenges in rapid prototyping.
• Analysis of the model driven composition approach and its potential benefits for IoT rapid prototyping.
• Possible architecture of an FSM model driven IoT rapid prototyping framework and its implementation.
Development of interactive web applications to deposit, visualize and analyze biological datasets is a major subject of bioinformatics. R is a programming language for data science and is also one of the most popular languages used in biological data analysis and bioinformatics. However, building interactive web applications was a great challenge for R users before the Shiny package was developed by the RStudio company in 2012. By compiling R code into HTML, CSS and JavaScript code, Shiny has made it remarkably easy to build web applications for the large R community in bioinformatics and even for non-programmers. Over 470 biological web applications have been developed with R/Shiny to date. To further promote the utilization of R/Shiny, we review the development of biological web applications with R/Shiny, including eminent biological web applications built with R/Shiny, the basic steps to build an R/Shiny application, commonly used R packages for building the interface and server of R/Shiny applications, deployment of R/Shiny applications in the cloud, and online resources for R/Shiny.
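The "basic steps" mentioned above all revolve around Shiny's split between a declarative UI and a server function that maps inputs to rendered outputs. The following is a hypothetical, stdlib-only Python sketch of that pattern, not Shiny's actual API: the dict keys and function names are invented purely to illustrate the structure an app author fills in.

```python
# Hypothetical sketch of the ui/server pattern Shiny popularized (NOT the
# real Shiny API): the UI declares inputs and outputs, and the server
# computes each output from the current input values.

ui = {
    "inputs": {"n_genes": 100},   # e.g. a slider choosing how many genes to plot
    "outputs": ["summary"],       # e.g. a text panel
}

def server(inputs):
    # Compute every declared output from the current inputs.
    return {"summary": f"Showing top {inputs['n_genes']} genes"}

def run_app(ui, server):
    # A real framework re-runs server() reactively whenever an input
    # changes; this sketch renders the outputs once.
    rendered = server(ui["inputs"])
    return {name: rendered[name] for name in ui["outputs"]}

print(run_app(ui, server))  # {'summary': 'Showing top 100 genes'}
```

In actual R/Shiny code the same roles are played by a `fluidPage()`-style UI object and a `server(input, output)` function passed to `shinyApp()`; the reactive re-execution is what the framework provides on top of this skeleton.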
The paper presents the results of a comparative analysis of two competing web application technologies: ASP.NET MVC from Microsoft and JavaServer Faces (JSF) supported by Oracle. The research was done by implementing two applications with the same functionality using the same MySQL database. The most commonly used ORM tools were applied: Hibernate for JSF and Entity Framework for ASP.NET MVC. The technologies were compared with respect to application structure, ease of implementation, support of the development environment, community support, graphical interface components, and database performance.
The worldwide expansion of internet technologies and the World Wide Web (WWW) has witnessed a booming rise in the popularity and adoption of Web Applications (WA). Current technological advancement has allowed web applications to become more innovative and practical in managing born-digital content. This requires developers to continue to expand their assessment repertoire to provide valuable and actionable feature coverage. This study demonstrates User Experience Assessment (UXA) as part of the formative assessment of the Re-CRUD console framework. The Re-CRUD console framework is a code automation tool for web application development containing integrated records management features that help information professionals manage digital content effectively. The assessment's primary goal was to get detailed feedback from information professionals on the Re-CRUD feature coverage, to make Re-CRUD more pleasant for developers and content friendly. We conducted contextual discussions using the think-aloud protocol and usability testing with experts in WA development and information professionals. The findings revealed a positive review of the Re-CRUD feature coverage and code generation procedure, but a less favourable review of the authentication policy and audit trail. The feedback is used to improve Re-CRUD feature coverage and increase code automation productivity.
The World Wide Web was originally meant as a global information exchange, but it has since morphed into the largest available application platform. Especially during the past decade, mobile usage has been rising while the size of websites and applications has grown steadily, making size an important target for optimization. In this article, we look into a new primitive called resumability. Resumability allows developers to avoid the caveats of earlier approaches, such as hydration, by embedding some of the required data straight into the HTML markup delivered to the client. The client then resumes execution as the application becomes interactive. The technique allows frameworks to apply well-known techniques, such as code-splitting, automatically, thereby reducing developer effort. By considering past developments and a couple of concrete examples, we propose resumability as a new primitive for web application development. Furthermore, we also discuss potential research directions for those wanting to understand the topic in greater detail.
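The contrast with hydration can be made concrete with a small sketch: the server serializes application state into the delivered markup, and the client recovers it from the markup instead of re-running initialization code. This is a stdlib-only illustration of the idea; the attribute names are invented here and do not belong to any real framework.

```python
import json

# Sketch of resumability: the server embeds serialized state (and a handler
# reference) directly in the HTML, so the client resumes from the markup
# rather than re-executing initialization ("hydration"). Attribute names
# such as data-state/data-handler are illustrative only.

def render(state):
    # Server side: embed the state in the delivered markup.
    payload = json.dumps(state)
    return f'<button data-state=\'{payload}\' data-handler="increment">+1</button>'

def resume(html):
    # Client side: recover the state from the markup; no application code
    # runs until the user actually interacts with the element.
    start = html.index("data-state='") + len("data-state='")
    end = html.index("'", start)
    return json.loads(html[start:end])

html = render({"count": 41})
state = resume(html)
state["count"] += 1   # the lazily loaded handler runs only on first click
print(state)          # {'count': 42}
```

The benefit for size is that the handler referenced in the markup can live in its own code-split chunk and is fetched only if the interaction ever happens, which a resumable framework can arrange automatically.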
Constructed wetlands (CWs) are engineering systems recognized as an efficient and sustainable option for wastewater treatment. Due to the growing interest in CWs for waste management, the number of works analyzing their footprint and impact has risen in recent years. Thus, the study of these systems and their components in the construction, operation, and demolition phases is important to characterize the technology and achieve a fully environmentally friendly approach. Until now, no complete tools for measuring both direct and indirect greenhouse gas (GHG) emissions in CWs have been reported in the field. Some efforts in this line are Life Cycle Assessment tools, which can be economically expensive and usually require specific training. Therefore, this work aims to present a web application as an open and complete tool for the estimation of GHG emissions in CWs, including both direct and indirect emissions and considering all the stages involved in their management.
• A web-based application for calculating CO2 emissions in CW systems was developed.
• The tool is freely accessible at https://crftpr.herokuapp.com/.
• It provides the emissions generated in the construction, operation, and demolition stages.
• Validation tests and a running example have also been performed.
• The accuracy of the application has been demonstrated.
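The kind of calculation such a tool performs can be sketched as a stage-by-stage sum of activity amounts multiplied by emission factors, covering both direct emissions (e.g. CH4 from the wetland itself) and indirect ones. All factors and quantities below are placeholder values for illustration, not data from the application.

```python
# Hedged sketch of life-cycle GHG accounting for a constructed wetland:
# total = sum over stages of (amount x emission factor). All numbers are
# illustrative placeholders, not values used by the actual web application.

EMISSION_FACTORS = {                 # kg CO2-eq per unit (illustrative)
    "construction": {"concrete_m3": 250.0, "gravel_t": 5.0},
    "operation":    {"direct_ch4_kg": 27.0, "electricity_kwh": 0.4},
    "demolition":   {"diesel_l": 2.7},
}

def ghg_total(inventory):
    """Sum emissions for an inventory of the form {stage: {item: amount}}."""
    total = 0.0
    for stage, items in inventory.items():
        for item, amount in items.items():
            total += EMISSION_FACTORS[stage][item] * amount
    return total

cw = {
    "construction": {"concrete_m3": 12, "gravel_t": 40},
    "operation":    {"direct_ch4_kg": 15, "electricity_kwh": 500},
    "demolition":   {"diesel_l": 30},
}
print(ghg_total(cw))  # total kg CO2-eq across all three stages
```

Separating the factor table from the inventory is what lets a single tool cover construction, operation, and demolition in one pass, and makes direct versus indirect contributions easy to report per stage.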
The article presents the results of the web applications development effectiveness on the Java Enterprise Edition platform using JavaServer Faces and Spring Boot. The comparative analysis was ...performed using the specially prepared test applications, implemented in both technologies.
The World Wide Web has become a common platform for interactive software development. Most web applications feature custom user interfaces used by millions of people every day. Information architecture addresses the structural design of information to build quality web applications with improved usability of content, navigation, and findability. One of the most frequently utilized information architecture methods is card sorting, an affordable, user-centered approach for eliciting and evaluating categories and navigable items. Card sorting facilitates decision-making during the development process based on users' mental models of a given application domain. However, although the qualitative analysis of card sorts has become common practice in information architecture, the quantitative analysis of card sorting is less widely applied. The reason for this gap is that quantitative analysis often requires the use of customized techniques to extract meaningful information for decision-making. To facilitate this process and support the structuring of information, we propose a methodology for the quantitative analysis of card-sorting results in this paper. The suggested approach can be systematically applied to provide clues and support for decisions. These might significantly impact the design and, thus, the final quality of the web application. Therefore, the approach includes proper goodness values that enable comparisons among the results of the methods and techniques used and ensure the suitability of the analyses performed. Two publicly available datasets were used to demonstrate the key issues related to the interpretation of card-sorting results and the overall suitability and validity of the proposed methodology.
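One common first step in quantitative card-sort analysis is a pairwise similarity (co-occurrence) matrix: for each pair of cards, count how many participants placed them in the same group. The sketch below uses made-up sorts and is only an illustration of this generic technique, not the methodology proposed in the paper.

```python
from itertools import combinations

# Build a co-occurrence count for card pairs from open card sorts.
# Each participant's sort is a dict: group label -> list of cards.
# The data below is invented for illustration.

sorts = [
    {"account": ["login", "profile"], "content": ["search", "news"]},
    {"user": ["login", "profile", "search"], "media": ["news"]},
    {"auth": ["login"], "browse": ["profile", "search", "news"]},
]

def cooccurrence(sorts):
    counts = {}
    for sort in sorts:
        for group in sort.values():
            # Sort card names so each unordered pair has one canonical key.
            for a, b in combinations(sorted(group), 2):
                counts[(a, b)] = counts.get((a, b), 0) + 1
    return counts

sim = cooccurrence(sorts)
print(sim[("login", "profile")])  # 2 of 3 participants grouped these together
```

Matrices like this feed directly into hierarchical clustering or dendrogram views, which is where goodness measures become necessary to compare alternative groupings.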
Headache disorders are an important health burden, having a large health-economic impact worldwide. Current treatment and follow-up processes are often archaic, creating opportunities for computer-aided and decision support systems to increase their efficiency. Existing systems are mostly completely data-driven, and the underlying models are a black box, undermining interpretability and transparency, which are key factors for deployment in a clinical setting.
In this paper, a decision support system is proposed, composed of three components: (i) a cross-platform mobile application to capture the data required from patients to formulate a diagnosis, (ii) an automated diagnosis support module that generates an interpretable decision tree, based on data semantically annotated with expert knowledge, in order to support physicians in formulating the correct diagnosis, and (iii) a web application that lets the physician efficiently interpret captured data and learned insights by means of visualizations.
We show that decision tree induction techniques achieve competitive accuracy rates, compared to other black- and white-box techniques, on a publicly available dataset referred to as migbase. Migbase contains aggregated information on headache attacks from 849 patients. Each sample is labeled with one of three possible primary headache disorders. We demonstrate that we are able to reduce the classification error by more than 10%, a statistically significant improvement (p ≤ 0.05), by balancing the dataset using prior expert knowledge. Furthermore, we achieve high accuracy rates by using features extracted with the Weisfeiler-Lehman kernel, which is completely unsupervised. This makes it an ideal approach to solve a potential cold start problem.
Decision trees are the perfect candidate for the automated diagnosis support module. They achieve predictive performance competitive with other techniques on the migbase dataset and are, foremost, completely interpretable. Moreover, the incorporation of prior knowledge increases both the predictive performance and the transparency of the resulting predictive model on the studied dataset.
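The interpretability argument is easy to see in code: a decision tree is directly readable as clinical rules, and the decision path for a patient doubles as an explanation. The tree below is hand-written for illustration; its features and thresholds are invented and are not learned from migbase, but it has the shape an induced tree for the three primary headache disorders might take.

```python
# Illustrative hand-written decision tree over three primary headache
# disorders. Features and thresholds are hypothetical, not from migbase.

def classify(attack):
    if attack["duration_h"] >= 4:
        if attack["nausea"]:
            return "migraine"
        return "tension-type"
    if attack["unilateral_orbital_pain"]:
        return "cluster"
    return "tension-type"

def explain(attack):
    # The decision path itself is the explanation shown to the physician.
    path = [f"duration_h >= 4: {attack['duration_h'] >= 4}"]
    if attack["duration_h"] >= 4:
        path.append(f"nausea: {attack['nausea']}")
    else:
        path.append(f"unilateral_orbital_pain: {attack['unilateral_orbital_pain']}")
    return path

attack = {"duration_h": 6, "nausea": True, "unilateral_orbital_pain": False}
print(classify(attack))  # migraine
print(explain(attack))
```

This is exactly what a black-box model cannot offer: the physician can audit each branch against clinical criteria, and expert knowledge can be injected by constraining or re-weighting branches, which is the mechanism behind the reported accuracy and transparency gains.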