Land use and land cover (LULC) mapping is often undertaken by national mapping agencies, where these LULC products are used for different types of monitoring and reporting applications. Updating of LULC databases is often done on a multi-year cycle due to the high costs involved, so changes are only detected when mapping exercises are repeated. Consequently, the information on LULC can quickly become outdated and hence may be incorrect in some areas. In the current era of big data and Earth observation, change detection algorithms can be used to identify changes in urban areas, which can then be used to automatically update LULC databases on a more continuous basis. However, the change detection algorithm must be validated before the changes can be committed to authoritative databases such as those produced by national mapping agencies. This paper outlines a change detection algorithm for identifying construction sites, which represent ongoing changes in land use, developed in the framework of the LandSense project. We then use volunteered geographic information (VGI) captured through mapathons from a range of different groups of contributors to validate these changes. In total, 105 contributors were involved in the mapathons, producing a total of 2778 observations. The contributors were grouped into six different user profiles, and the results were analyzed to understand the impact of user experience on the accuracy assessment. Overall, the results show that the change detection algorithm is able to identify changes in residential land use to an adequate level of accuracy (85%), but changes in infrastructure and industrial sites had lower accuracies (57% and 75%, respectively), requiring further improvements. In terms of user profiles, the experts in LULC from local authorities, the researchers in LULC at the French national mapping agency (IGN), and the first-year students with a basic knowledge of geographic information systems had the highest overall accuracies (86.2%, 93.2%, and 85.2%, respectively). Differences in how the users approached the task also emerged, e.g., local authorities used knowledge and context to try to identify types of change, while those with no knowledge of LULC (i.e., ordinary citizens) were quicker to choose ‘Unknown’ when the visual interpretation of a class was more difficult.
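The per-class and per-profile accuracies reported above reduce to agreement rates between contributor labels and the reference change class. The following is a minimal sketch of that computation, not the LandSense pipeline; the record layout and the toy observations are hypothetical.

```python
from collections import defaultdict

def accuracy_by_key(observations, key_index):
    """Agreement rate between contributor labels and reference labels,
    grouped by one field of each record (e.g. user profile or change class).
    Each record is (user_profile, reference_class, contributor_label)."""
    correct, total = defaultdict(int), defaultdict(int)
    for record in observations:
        group = record[key_index]
        total[group] += 1
        if record[2] == record[1]:  # contributor label matches reference
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical validation records, not the actual mapathon data.
obs = [
    ("local_authority", "residential", "residential"),
    ("citizen", "infrastructure", "unknown"),
    ("student", "industrial", "industrial"),
]
print(accuracy_by_key(obs, 0))  # accuracy per user profile
print(accuracy_by_key(obs, 1))  # accuracy per LULC change class
```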
Introduction
Dual diagnosis among patients with primary psychotic disorders is frequent and causes diagnostic and treatment challenges. In clinical practice, differentiating between substance-induced psychoses and independent (primary) psychoses when the patient is actively using addictive drugs is difficult, especially in the acute phase of the psychosis.
Objectives
The aim of the study is to identify clinical data relevant for differentiating between primary psychoses triggered by addictive drug misuse and substance-induced psychoses, using psychometric scales.
Methods
The study was conducted on 111 patients divided into four samples: 28 dual diagnosis psychotic patients (DD), 27 bipolar patients (BD), 25 schizoaffective patients (SCA), and 31 patients with schizophrenia (SCZ). The subjects were assessed using scales for the severity of psychiatric symptoms, cognitive function, and social acuity (theory of mind): the BPRS-E (Brief Psychiatric Rating Scale – Expanded), MoCA (Montreal Cognitive Assessment), CBS (Cambridge Behavioral Scale), and RMET (Reading the Mind in the Eyes Test). The tests were performed when patients were in the remission phase of the psychosis.
Results
BPRS-E scores showed significant differences between DD subjects and patients from the other three samples (primary psychoses). CBS revealed significant differences between the DD subjects and patients with schizophrenia spectrum psychoses (SCA and SCZ). RMET identified significant differences between DD and BD patients.
Conclusions
Although differentiating between substance-induced and primary psychoses remains a difficult task, social acuity assessment performed in remitted patients may be helpful in guiding the clinician to establish a more accurate diagnosis.
Data quality assessment of OpenStreetMap (OSM) data can be carried out by comparing it with reference spatial data (e.g., authoritative data). However, when reference data are lacking, the spatial accuracy is unknown. The aim of this work is therefore to propose a framework to infer the relative spatial accuracy of OSM data using machine learning methods. Our approach is based on the hypothesis that there is a relationship between extrinsic and intrinsic quality measures. Thus, starting from a multi-criteria data matching, the process seeks to establish a statistical relationship between measures of the extrinsic quality of OSM (i.e., obtained by comparison with reference spatial data) and measures of its intrinsic quality (i.e., derived from the OSM features themselves) in order to estimate the extrinsic quality of an unevaluated OSM dataset. The approach was applied to OSM buildings. On our dataset, the resulting regression model predicts the values of the extrinsic quality indicators with 30% less variance than an uninformed predictor.
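The statistical relationship described above can be framed as supervised regression: intrinsic OSM indicators as predictors, an extrinsic quality measure (obtained by matching against the reference) as target. Below is a minimal sketch with synthetic data; the feature names are illustrative assumptions, not the paper's actual indicators. Note that R² measures variance explained relative to an uninformed (mean) predictor, so R² ≈ 0.30 corresponds to the "30% less variance" result.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Hypothetical intrinsic indicators per OSM building, e.g.
# [n_versions, n_contributors, n_tags, vertex_count].
X = rng.random((500, 4))
# Synthetic stand-in for an extrinsic quality measure obtained by
# matching against a reference database (e.g. positional discrepancy).
y = 2.0 * X[:, 0] + rng.normal(0, 1, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Fraction of target variance explained beyond the mean predictor.
print("R^2:", r2_score(y_test, model.predict(X_test)))
```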
Importing open spatial data into the OpenStreetMap (OSM) project is a practice that has existed since the beginning of the project. The rapid development and multiplication of collaborative mapping tools and open data have led to the growth of massive data imports into OSM. The goal of this paper is to study the evolution of these massive imports over time. We propose an approach in three steps: classification of the sources used to edit features in the OSM platform, including those massively imported; classification of modifications; and identification of evolution patterns. The approach mixes global analysis (i.e., sources and modifications are classified) and feature-based analysis (i.e., imported features are analyzed with respect to their evolution over time). The approach is applied to three OSM datasets chosen for their heterogeneity in terms of complexity, imports, and spatial and temporal characteristics. The results show sustained editing activity on imported features, with the ratio between geometry edits and semantic edits depending on the feature type; roads are the features concentrating the most activity.
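One simple way to operationalize the geometry/semantic split when classifying modifications is to diff successive versions of each imported feature: a coordinate change counts as a geometry edit, a tag change as a semantic edit. A minimal sketch under that assumption follows; the version structure is a simplified stand-in, not the OSM full-history file format.

```python
def classify_edits(versions):
    """Count geometry and semantic edits across successive versions of
    one feature. Each version is a dict with 'geometry' (tuple of
    coordinates) and 'tags' (dict) keys."""
    geometry_edits = semantic_edits = 0
    for prev, curr in zip(versions, versions[1:]):
        if prev["geometry"] != curr["geometry"]:
            geometry_edits += 1
        if prev["tags"] != curr["tags"]:
            semantic_edits += 1
    return geometry_edits, semantic_edits

# Hypothetical history of one imported road feature.
history = [
    {"geometry": ((0, 0), (1, 0)), "tags": {"highway": "residential"}},
    {"geometry": ((0, 0), (1, 1)), "tags": {"highway": "residential"}},
    {"geometry": ((0, 0), (1, 1)),
     "tags": {"highway": "residential", "name": "Rue A"}},
]
print(classify_edits(history))  # (1, 1): one geometry edit, one semantic edit
```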
In this article, the authors present a case study on the methodology used to continuously improve the performance of a quality management system, carried out in a company producing devices and automation systems. To start this process, the internal audit subsystem and the management review process were first streamlined. To improve the non-compliant processes within the quality management system, an improvement strategy was established based on the stages of the Six Sigma methodology, namely the DMAIC model. Following the application of the methodology proposed by the authors, process improvements of between 44% and 96% were obtained, depending on the process type.
To evaluate the quality of OSM data, similarities between OSM features and their homologous features in a reference database are relevant metrics. However, reference databases do not exist everywhere or are not freely available. Thus, data quality assessment methods that rely only on intrinsic indicators (i.e., based on the data itself, without considering external information) would be useful in these cases. This article specifically uses the radial distance as a target quality metric to measure the quality of shapes. Its aim is to build a random-forest-based classification method that predicts whether this distance is higher or lower than a specified threshold, using only intrinsic indicators as inputs. The classification algorithm is evaluated on a first dataset by computing the ROC (Receiver Operating Characteristic) curve and using the AUC (Area Under the Curve) as an evaluation metric. The transferability of the resulting algorithm is then evaluated by measuring its performance on a second, distinct dataset. The experiments show that the algorithm performs reasonably well on both datasets and that intrinsic indicators give relevant information for inferring comparison-based shape quality (i.e., the radial distance).
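The classification step described above (is the radial distance above or below a threshold?) maps directly onto a standard random-forest workflow evaluated with ROC/AUC. Below is a minimal sketch on synthetic data; the intrinsic shape indicators and the threshold value are placeholder assumptions, not the paper's actual feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Hypothetical intrinsic shape indicators per building footprint,
# e.g. [vertex_count, area, elongation, convexity].
X = rng.random((600, 4))
radial_distance = X[:, 2] + rng.normal(0, 0.3, 600)  # synthetic target metric
y = (radial_distance > 0.5).astype(int)  # 1 = distance above threshold

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
clf = RandomForestClassifier(random_state=1).fit(X_train, y_train)

# AUC summarises the ROC curve: the probability that a random
# above-threshold shape is ranked above a random below-threshold one.
scores = clf.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, scores))
```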
Companies routinely perform life tests on their products. Each of these life tests typically involves testing several units simultaneously, with interest in the times to failure. Two aspects often associated with lifetime data that make the development of a control-charting procedure more demanding are that the data tend to be non-normally distributed and censored. In this paper, one-sided lower and upper likelihood-ratio-based cumulative sum (CUSUM) control-charting procedures are developed for Type I right-censored Weibull lifetime data to monitor changes in the scale parameter, also known as the characteristic life, for a fixed value of the Weibull shape parameter. Because a decrease in the characteristic life indicates a decrease in the mean lifetime of a product, the one-sided lower CUSUM chart is the main focus. We illustrate the development and implementation of the chart and evaluate its properties through a simulation study. The proposed CUSUM chart is compared with an exponentially weighted moving-average (EWMA) chart using steady-state average run length (ARL) performance. The CUSUM chart is shown to perform better than the EWMA chart in detecting the shifts for which it is designed.
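As a rough sketch of the idea, a generic likelihood-ratio CUSUM for Type I right-censored Weibull data with fixed shape β accumulates, per observation, the log-likelihood ratio between an out-of-control scale θ1 and the in-control scale θ0, and signals when the running maximum exceeds a control limit h. The version below is one increment per unit (a per-test variant would sum the units of each test); the parameter values, censoring time, and limit h are illustrative assumptions and may differ from the paper's exact design.

```python
import math

def weibull_llr(t, failed, beta, theta0, theta1):
    """Log-likelihood-ratio increment for one unit under a Weibull
    model with fixed shape beta, testing scale theta1 against theta0.
    `failed` is True for a failure observed at t, False for Type I
    right-censoring at t (the test truncation time)."""
    diff = t**beta * (theta0**-beta - theta1**-beta)
    if failed:
        return beta * math.log(theta0 / theta1) + diff
    return diff  # only the survival-ratio term for a censored unit

def lower_cusum(observations, beta, theta0, theta1, h):
    """One-sided lower CUSUM (theta1 < theta0): signals when the
    statistic exceeds h, indicating a drop in characteristic life."""
    c = 0.0
    for i, (t, failed) in enumerate(observations, start=1):
        c = max(0.0, c + weibull_llr(t, failed, beta, theta0, theta1))
        if c > h:
            return i  # index of the signalling observation
    return None  # no signal

# Illustrative design: shape 2, in-control scale 100, tuned to detect
# a drop to 70, censoring time 120, control limit h = 1.5.
obs = [(35.0, True), (120.0, False), (28.0, True), (22.0, True), (40.0, True)]
print(lower_cusum(obs, beta=2.0, theta0=100.0, theta1=70.0, h=1.5))
```

Early failures push the statistic up while units surviving to the censoring time pull it down, which is exactly the asymmetry a lower chart needs to detect a decrease in the characteristic life.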
Purpose
To evaluate the feasibility of using deformable image co-registration in three-phase adaptive dose-painting-by-numbers (DPBN) for head-and-neck cancer, and to report dosimetric data and preliminary clinical results.
Material and methods
Between November 2010 and October 2011, 10 patients with non-metastatic head-and-neck cancer were enrolled in this phase I clinical trial, in which treatment was adapted every ten fractions. Each patient was treated with three DPBN plans based on: a pretreatment 18F-FDG-PET scan (phase I: fractions 1–10), a per-treatment 18F-FDG-PET/CT scan acquired after 8 fractions (phase II: fractions 11–20), and a per-treatment 18F-FDG-PET/CT scan acquired after 18 fractions (phase III: fractions 21–30). The median prescription dose was 70.2 Gy to the dose-painted target (fractions 1–30) and 40 Gy to the elective neck (fractions 1–20). Deformable image co-registration was used for automatic region-of-interest propagation and dose summation of the three treatment plans.
Results
All patients (all men; median age 68, range 48–74 years) completed treatment without any break or acute G ⩾ 4 toxicity. Target volume reductions (mean (range)) between the pre-treatment CT and the CT on the last day of treatment were 72.3% (57.9–98.4) and 46.3% (11.0–73.1) for the GTV and PTVhigh_dose, respectively. Acute G3 toxicity was limited to dysphagia in 3/10 patients and mucositis in 2/10 patients; none of the patients lost ⩾20% of their weight. At a median follow-up of 13 (range 7–22) months, 9 patients had no evidence of disease.
Conclusions
Three-phase adaptive 18F-FDG-PET-guided dose painting by numbers using currently available tools is feasible. Irradiation of smaller target volumes might have contributed to the mild acute toxicity, with no measurable decrease in tumor response.