In the decremental single-source shortest paths problem, the goal is to maintain distances from a fixed source s to every vertex v in an m-edge graph undergoing edge deletions. In this paper, we conclude a long line of research on this problem by showing a near-optimal deterministic data structure that maintains (1 + ε)-approximate distance estimates and runs in m^{1+o(1)} total update time. Our result, in particular, removes the oblivious adversary assumption required by the previous breakthrough result of Henzinger et al. (FOCS'14), which leads to our second result: the first almost-linear-time algorithm for (1 − ε)-approximate min-cost flow in undirected graphs where capacities and costs can be taken over edges and vertices. Previously, algorithms for max flow with vertex capacities, or for min-cost flow with any capacities, required super-linear time. Our result essentially completes the picture for approximate flow in undirected graphs. The key technique behind the first result is a novel framework that allows us to treat low-diameter graphs like expanders. This lets us harness expander properties while bypassing the shortcomings of expander decomposition, which almost all previous expander-based algorithms had to deal with. For the second result, we break the notorious flow-decomposition barrier of the multiplicative-weight-update framework using randomization.
Recent methodological developments in plant phenotyping, as well as the growing importance of its applications in plant science and breeding, are resulting in a fast accumulation of multidimensional data. There is great potential for expediting both discovery and application if these data are made publicly available for analysis. However, the collection and storage of phenotypic observations are not yet sufficiently governed by standards that would ensure interoperability among data providers and precisely link specific phenotypes to the associated genomic sequence information. This lack of standards is mainly a result of the large variability of phenotyping protocols, the multitude of phenotypic traits that are measured, and the dependence of these traits on the environment. This paper discusses the current state of standardization in the area of phenomics, points out the problems and shortcomings, and presents the areas that would benefit from improvement in this field. In addition, the foundations of work that could remedy the situation are proposed, and practical solutions developed by the authors are introduced.
The antigenic diversity of human influenza viruses represents a challenge to the development of vaccines with durable immune protection. In addition, small-molecule anti-influenza drugs can bring clinical relief to influenza patients, but the emergence of drug-resistant viruses can rapidly limit the effectiveness of such drugs. In the past decade, a number of human monoclonal antibodies have been described that can bind to and neutralize a broad range of influenza A and B viruses. Most of these monoclonal antibodies are directed against the viral hemagglutinin (HA) stalk, and some have now been evaluated in early- to mid-stage clinical trials. An important conclusion from these clinical studies is that HA stalk-specific antibodies are safe and can reduce influenza symptoms. In addition, examples of bi- and multi-specific anti-influenza antibodies are discussed, although such antibodies have not yet progressed into clinical testing. In the future, antibody-based therapies might become part of our arsenal to prevent and treat influenza.
•Many human IgG monoclonal antibodies directed against conserved epitopes of influenza A and B HA have been described.•Broadly neutralizing recombinant HA-specific IgG antibodies are safe in healthy individuals and influenza A virus patients.•Intravenously administered broadly neutralizing human IgG monoclonal antibodies can reduce influenza symptoms and virus loads.•Engineered antibody-derived biologicals against conserved influenza virus antigens may be developed clinically in the future.
•SFINGE 3D, a novel benchmark for evaluating online gesture detection and recognition.•A 13-gesture dictionary and a detection-and-recognition task over 72 gesture sequences.•Different approaches to test the benchmark: visual rendering and convolutional neural networks.•Different approaches to test the benchmark: geometry-based and dissimilarity-based classifiers.
In recent years gesture recognition has become an increasingly interesting topic for both research and industry. While interaction with a device through a gestural interface is a promising idea in several applications, especially in the industrial field, some of the issues related to the task are still considered a challenge. In the scientific literature, a relevant amount of work has recently been presented on the problem of detecting and classifying gestures from the 3D trajectories of hand joints, which can be captured by cheap devices installed on head-mounted displays and desktop computers. The methods proposed so far can achieve very good results on benchmarks requiring the offline supervised classification of segmented gestures of a particular kind, but are not usually tested on the more realistic task of finding gesture executions within a continuous hand-tracking session.
In this paper, we present a novel benchmark, SFINGE 3D, aimed at evaluating online gesture detection and recognition. The dataset is composed of a dictionary of 13 segmented gestures used as a training set and 72 trajectories each containing 3–5 of the 13 gestures, performed in continuous tracking, padded with random hand movements acting as noise. The presented dataset, captured with a head-mounted Leap Motion device, is particularly suitable to evaluate gesture detection methods in a realistic use-case scenario, as it allows the analysis of online detection performance on heterogeneous gestures, characterized by static hand pose, global hand motions, and finger articulation.
We exploited SFINGE 3D to compare two different approaches to online detection and classification, one based on visual rendering and Convolutional Neural Networks, and the other based on geometry-based handcrafted features and dissimilarity-based classifiers. We discuss the results, analyzing the strengths and weaknesses of the methods, and derive useful hints for their improvement.
Software programs can be written in different but functionally equivalent ways. Even though previous research has compared specific formatting elements to find out which alternatives affect code legibility, seeing the bigger picture of what makes code more or less legible is challenging.
We aim to find which formatting elements have been investigated in empirical studies and which alternatives were found to be more legible for human subjects.
We conducted a systematic literature review and identified 15 papers containing human-centric studies that directly compared alternative formatting elements. We analyzed and organized these formatting elements using a card-sorting method.
We identified 13 formatting elements (e.g., indentation) and 33 levels of formatting elements (e.g., two-space indentation), which are about formatting styles, spacing, block delimiters, long or complex code lines, and word boundary styles. While some levels were found to be statistically better than other equivalent ones in terms of code legibility, e.g., appropriate use of indentation with blocks, others were not, e.g., formatting layout. For identifier style, we found divergent results, where one study found a significant difference in favor of camel case, while another study found a positive result in favor of snake case.
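To make the notion of "levels of a formatting element" concrete, the identifier-style element can be illustrated with a pair of functionally equivalent snippets; this example is purely illustrative and is not drawn from the reviewed studies.

```python
# Two functionally equivalent functions that differ only in one formatting
# element (identifier style): camelCase vs. snake_case.

# Level 1: camelCase identifiers
def computeTotalPrice(itemPrices, taxRate):
    subTotal = sum(itemPrices)
    return subTotal * (1 + taxRate)

# Level 2: snake_case identifiers
def compute_total_price(item_prices, tax_rate):
    sub_total = sum(item_prices)
    return sub_total * (1 + tax_rate)

# Both levels compute the same result; only legibility differs.
assert computeTotalPrice([10, 20], 0.5) == compute_total_price([10, 20], 0.5)
```

The studies in question compare how quickly and accurately human subjects read such equivalent variants, not what the code computes.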
The number of identified papers, some of which are outdated, and the many null and contradictory results emphasize the relative lack of work in this area and underline the importance of more research. There is much to be understood about how formatting elements influence code legibility before the creation of guidelines and automated aids to help developers make their code more legible.
•This is a systematic literature review to find the most legible formatting elements.•From a set of 4,914 documents, we found and examined 15 scientific papers.•We identified 13 formatting elements and 33 alternative levels.•Researchers found statistically significant results for 9 formatting elements, but some with divergent results.•Our results highlight that the area is immature and many studies are inconclusive.
Patient triage is crucial in emergency departments, ensuring timely and appropriate care based on correctly evaluating the emergency grade of patient conditions. Traditional triage is generally performed by a human operator based on their own experience and on information gathered during the patient management process; such decisions can be subjective and prone to errors in emergency-level assignment. A growing interest has recently been focused on leveraging artificial intelligence (AI) to develop algorithms that maximize information gathering and minimize errors in patient triage processing. We define and implement an AI-based module to manage patients' emergency code assignments in emergency departments. It uses historical data from the emergency department to train the medical decision-making process. Data containing relevant patient information, such as vital signs, symptoms, and medical history, are used to classify patients into triage categories. Experimental results demonstrate that the proposed algorithm achieves high accuracy, outperforming traditional triage methods. Using the proposed method, healthcare professionals can predict a severity index to guide patient management and resource allocation.
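As a loose sketch of the kind of classification involved (not the paper's actual module), a minimal nearest-neighbour predictor over hypothetical historical triage records might look as follows; the features, values, and codes are invented for illustration.

```python
import math

# Hypothetical historical records:
# (heart_rate, respiratory_rate, systolic_bp) -> triage code (1 = critical ... 4 = minor)
history = [
    ((130, 30, 85), 1),
    ((115, 24, 95), 2),
    ((95, 18, 120), 3),
    ((80, 14, 125), 4),
    ((125, 28, 90), 1),
    ((88, 16, 130), 4),
]

def predict_code(vitals, k=3):
    """Assign the majority triage code among the k closest historical cases."""
    nearest = sorted(history, key=lambda rec: math.dist(vitals, rec[0]))[:k]
    votes = [code for _, code in nearest]
    return max(set(votes), key=votes.count)

print(predict_code((120, 26, 92)))  # close to the critical cases -> 1
```

A production system would of course use richer features (symptoms, medical history) and a trained model rather than raw distances, as the abstract describes.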
Substation alarm data are processed by combining an Artificial Neural Network (ANN) with a Finite State Machine (FSM). First, to reduce the complexity of ANN model construction, the alarm sequence is simplified by merging homologous and complementary events. Second, the ANN weight-matrix model and learning algorithm are constructed, and the logical reasoning and knowledge representation for faults and abnormalities of four equipment types (circuit breaker, line, bus, and transformer) are acquired by training and testing on data samples. Third, a correlation analysis of the fault set is carried out and an FSM model is built to record the alarm process, finally forming the comprehensive analysis results. The results show that the method is fast, fault-tolerant, and has strong learning ability, and it is of great significance for solving the online fault diagnosis problem of large-scale power systems.
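The alarm-sequence simplification step can be sketched as follows; the event names and the exact merging rules are illustrative assumptions, not the paper's precise procedure.

```python
# Simplify an alarm sequence by (a) dropping homologous events, i.e.
# immediate duplicates of the same signal/state, and (b) cancelling
# complementary pairs, i.e. a 'raise' later followed by its 'clear'.

def simplify(events):
    """events: list of (signal, state) tuples, state in {'raise', 'clear'}."""
    out = []
    for sig, state in events:
        if out and out[-1] == (sig, state):             # homologous repeat
            continue
        if state == 'clear' and (sig, 'raise') in out:  # complementary pair
            out.remove((sig, 'raise'))
            continue
        out.append((sig, state))
    return out

alarms = [('CB1_trip', 'raise'), ('CB1_trip', 'raise'),
          ('bus_volt_low', 'raise'), ('CB1_trip', 'clear')]
print(simplify(alarms))  # -> [('bus_volt_low', 'raise')]
```

After this reduction, only the persistent abnormal signals remain as input to the ANN stage, which is the stated motivation for the step.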
The aim of this study is to identify and evaluate the indicators of the happy city in affordable housing projects. The Aftab town in Tehran, Iran, has been chosen as a case study. The research method of this study is descriptive-analytic. To collect the research data, the field survey method (including the completion of household questionnaires) was used. T-tests, factor analysis, and multivariable regression were applied in SPSS-22 software for data analysis. The results showed that the status of the indicators of a happy city in the Mehr Housing project of Aftab town of Parand is not favourable. Furthermore, the identified indicators of the happy city affect the happiness of the inhabitants in the following order of priority: the sense of happiness regarding physical and spatial interactions, the local government's support of local residents, the quality of the business environment, the quality of local services, the quality of the artificial and natural environment, and the sense of happiness as a result of social and work relationships. According to the results, the most important indicator of the level of happiness for residents of the Mehr housing projects in Parand city is physical and spatial interactions.
•Proposed the novel Radial Intersection Count Image descriptor.•Proposed a distance function capable of largely ignoring clutter.•Proposed the clutterbox experiment aimed at quantifying clutter.•Described efficient algorithms for generating and comparing RICI descriptors.•Showed that the Spin Image support radius may not improve performance.
A novel shape descriptor for cluttered scenes is presented, the Radial Intersection Count Image (RICI), and is shown to significantly outperform the classic Spin Image (SI) and 3D Shape Context (3DSC) in both uncluttered and, more significantly, cluttered scenes. It is also faster to compute and compare. The clutter resistance of the RICI is mainly due to the design of a novel distance function, capable of disregarding clutter to a great extent. As opposed to the SI and 3DSC, which both count point samples, the RICI uses intersection counts with the mesh surface, and is therefore noise-free. For efficient RICI construction, novel algorithms of general interest were developed. These include an efficient circle-triangle intersection algorithm and an algorithm for projecting a point into SI-like (α, β) coordinates. The 'clutterbox experiment' is also introduced as a better way of evaluating descriptors' response to clutter. The SI, 3DSC, and RICI are evaluated in this framework and the advantage of the RICI is clearly demonstrated.
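The intersection-counting idea can be sketched in a simplified 2D analogue: for concentric circles around a reference point, count how often each circle is crossed by the edges of a shape. The actual descriptor intersects circles with 3D mesh triangles; the shape, radii, and crossing test below are illustrative simplifications (tangent and fully-enclosed cases are ignored).

```python
import math

def crossing_count(center, radius, segments):
    """Count segments whose endpoints lie on opposite sides of the circle."""
    count = 0
    for p, q in segments:
        d1, d2 = math.dist(center, p), math.dist(center, q)
        if min(d1, d2) < radius < max(d1, d2):
            count += 1
    return count

# A 2x2 square outline, viewed from a reference point at one of its corners.
square = [((0, 0), (2, 0)), ((2, 0), (2, 2)),
          ((2, 2), (0, 2)), ((0, 2), (0, 0))]

# One count per radius yields a small radial descriptor of the shape.
descriptor = [crossing_count((0, 0), r, square) for r in (1, 2.5, 3)]
print(descriptor)  # -> [2, 2, 0]
```

Because the counts come from exact surface crossings rather than point samples, nearby clutter only adds to the counts rather than perturbing them with sampling noise, which is what the RICI distance function exploits.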