LHCb's Experiment Control System will handle the configuration, monitoring, and operation of all experimental equipment involved in the various activities of the experiment. A control framework (based on an industrial SCADA system) allowing the integration of the various devices into a coherent hierarchical system is being developed in common for the four Large Hadron Collider (LHC) experiments. The aim of this paper is to demonstrate that the same architecture and tools can be used to control and monitor all the different types of devices, from front-end electronics boards to temperature sensors to algorithms in an event filter farm, thus providing LHCb with a homogeneous control system and a coherent interface to all parts of the experiment.
It is commonly recognized that the observed increase in global mean annual air temperature is strongly related to the increase in global carbon dioxide concentration C, and that both of these variables are related to global development. It remains unclear, however, to what degree local mean annual urban air temperature T is affected by local variables such as annual precipitation depth P and urban area extent A. This study assumes that A is a proxy of local development and C is a proxy of global development, and investigates the commingled effects of A, P, and C on T by using long-term annual data observed over the years 1881–2019 at the Modena Observatory in Italy. Linear relationships between T, C, and A are found to be spurious, since all these series have a monotonically increasing trend with time. Parametric analytic models like logistic functions are found to lack flexibility. Smoothing splines can give insights into the strength of the relationships but not into their shape, which defines the functional relationship between the variables. Advanced nonlinear models like generalized additive models, instead, are found to combine flexibility with a parametric form and therefore appear to be suitable models for explaining the complex effects of A, P, and C on T. The different models are evaluated using traditional goodness-of-fit statistics like R², AIC, and BIC, together with a new index of relation (IR), which is introduced to jointly evaluate the goodness of fit of relationships between variables that may be either dependent or independent.
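The spuriousness of linear relationships between co-trending series can be illustrated with a minimal sketch (synthetic data, not the Modena series): two independently generated upward-trending series correlate strongly in levels, while their year-to-year differences do not.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1881, 2020)
t = (years - years[0]).astype(float)

# Two synthetic series (not the observed data) that trend upward independently.
temperature = 10.0 + 0.02 * t + rng.normal(0.0, 0.3, t.size)
co2 = 290.0 + 0.8 * t + rng.normal(0.0, 5.0, t.size)

r_levels = np.corrcoef(temperature, co2)[0, 1]                   # spuriously high
r_diffs = np.corrcoef(np.diff(temperature), np.diff(co2))[0, 1]  # near zero
```

Differencing removes the shared monotonic trend, and the apparent association largely disappears, which is the symptom of a spurious regression.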
In the context of the ongoing United Nations Framework Convention on Climate Change (UNFCCC) process, it seems important to focus attention not only on global mean surface air temperature (GMST) but also on the climate of specific regions, in order to gain insights into the dynamics of the changes, the timescales of the periodic components, the local trends, and the relationships between climatic variables in the region of interest. This is important for scientists as well as for policymakers. This paper provides an analysis of the changes in local air temperature and precipitation depth in exceptionally long observational records and examines the relationships between these two variables. The focus is on monthly values. Maximum temperature, minimum temperature, temperature range, and cumulative precipitation depth are considered. The wavelet analysis shows that the scale of variation differs between temperature and precipitation and that the behavior of the temperature range diverges from that of the minimum and maximum values. The timescale of important changes in the long-term trend is, however, similar. Results also suggest that the main mode of variability is persistent through time in the series of temperature maximum, minimum, and range, but not in precipitation depth. This is clear evidence of climate change. All series show variances that change over time and are, as expected, nonstationary. The analysis of the wavelet coherence shows that the relationship between precipitation and temperature evolves through time and that its intensity varies across time scales. The association between these climatic variables is particularly strong in the last decade. It is noteworthy that the coherence analysis suggests that temperature leads precipitation, and not the other way around. This highlights the impact of global warming on the hydrologic cycle and on related human activities.
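The idea that wavelet power localizes the scale of variation can be sketched with a crude single-scale Morlet transform computed by direct convolution (an illustrative sketch, not the wavelet-coherence machinery used in the paper): a periodic component produces large power at the matching scale and little elsewhere.

```python
import numpy as np

def morlet_power(x, scale, w0=6.0):
    # Crude single-scale continuous Morlet wavelet transform by direct
    # convolution; illustrative only, not a full CWT implementation.
    t = np.arange(-4 * scale, 4 * scale + 1, dtype=float)
    wavelet = np.exp(1j * w0 * t / scale - 0.5 * (t / scale) ** 2) / np.sqrt(scale)
    coef = np.convolve(x - x.mean(), np.conj(wavelet[::-1]), mode="same")
    return np.abs(coef) ** 2

n = np.arange(512)
signal = np.sin(2 * np.pi * n / 12.0)   # a 12-sample periodicity

# Power concentrates at the scale matching the oscillation.
power_matched = morlet_power(signal, scale=12).mean()
power_mismatched = morlet_power(signal, scale=3).mean()
```

For the Morlet wavelet with w0 = 6 the Fourier period is approximately equal to the scale, so scale 12 matches the 12-sample cycle while scale 3 does not.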
For clustering objects, we often collect not only continuous variables but binary attributes as well. This paper proposes a model-based clustering approach with mixed binary and continuous variables, where each binary attribute is generated by a latent continuous variable that is dichotomized with a suitable threshold value, and where the scores of the latent variables are estimated from the binary data. In economics, such variables are called utility functions, and the assumption is that the binary attributes (the presence or absence of a public service or utility) are determined by low and high values of these functions. In genetics, the latent response is interpreted as the ‘liability’ to develop a qualitative trait or phenotype. The estimated scores of the latent variables, together with the observed continuous ones, allow the use of a multivariate Gaussian mixture model for clustering, instead of a mixture of discrete and continuous distributions. After describing the method, this paper presents results on both simulated and real-case data and compares the performance of the multivariate Gaussian mixture model with that of a mixture of joint multivariate and multinomial distributions. Results show that the former model outperforms the latter for variables with different scales, both in terms of classification error rate and reproduction of the cluster means.
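The identity underlying the latent-variable construction can be sketched as follows: if a binary attribute arises by thresholding a standard normal latent score at τ, then τ is recoverable from the observed frequency of the attribute as Φ⁻¹(1 − p̂). A minimal simulation with a hypothetical threshold of 0.5:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
tau_true = 0.5                            # hypothetical threshold, for illustration
latent = rng.normal(size=100_000)         # latent continuous scores (standard normal)
binary = (latent > tau_true).astype(int)  # observed binary attribute

p_hat = binary.mean()                     # sample frequency of the attribute
tau_hat = norm.ppf(1.0 - p_hat)           # threshold recovered from the frequency
```

With the threshold in hand, latent scores consistent with the binary data can be imputed, which is what makes an all-Gaussian mixture model applicable.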
Published studies dealing with health promotion activities, such as the improvement of physical activity and healthy eating, for workers and students prove the effectiveness of these preventive interventions. The consequent benefits include better prevention of cardiovascular risk and an improvement in quality of life. Considering this, an intervention aimed at promoting healthy eating and non-sedentary lifestyles has been implemented within an Italian university: the aim of the present research is to evaluate its effectiveness. The intervention consisted of a targeted asynchronous two-hour e-learning course on healthy eating and non-sedentary lifestyles. The attendants were 2004 university students and employees. We conducted two surveys, before and after the training intervention, and used the responses obtained to evaluate the effectiveness of the intervention. We applied different statistical methods, including unpaired t-tests and nonparametric tests, principal component analysis, and cluster analysis. Our results indicate that post-training knowledge improved significantly compared with pre-training knowledge (7.3 vs. 8.7, p < 0.001). Moreover, the whole sample showed an improved awareness of the importance of healthy behaviors and an improved perception of the University as an institution promoting a healthy lifestyle. Through the principal component analysis, we identified a unidimensional latent factor named “health and behaviors”. The cluster analysis highlighted that the sub-group reporting the lowest scores in the survey before the training was the one with the highest improvement after the intervention. To the best of our knowledge, this is the first Italian study testing, before and after a health promotion intervention, the knowledge, attitudes, and behaviors towards healthy lifestyles of a group of students and workers.
Moreover, we also evaluated the pre- and post-intervention perceived health status, as well as the level of engagement of the attendants, with respect to their colleagues and management in an educational institution promoting wellbeing. The conclusions of our study support the need for further adoption of health promotion training interventions, similar to the one we performed, in order to improve healthy eating and non-sedentary behaviors among workers and students.
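The pre/post comparison described above can be sketched with an unpaired (Welch) t-test on hypothetical data: the group means echo the reported 7.3 and 8.7, but the sample sizes and standard deviations below are invented purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical scores on a 0-10 knowledge scale; only the two means
# come from the study, everything else is assumed.
pre = rng.normal(loc=7.3, scale=1.2, size=300)
post = rng.normal(loc=8.7, scale=1.0, size=300)

t_stat, p_value = stats.ttest_ind(post, pre, equal_var=False)  # Welch's unpaired t-test
```

`equal_var=False` selects Welch's variant, which does not assume equal variances in the two groups, a sensible default when comparing independent pre- and post-intervention samples.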
Neural network modeling for small datasets can be justified from a theoretical point of view according to some of Bartlett's results, showing that the generalization performance of a multilayer perceptron (MLP) depends more on the L1 norm ||c||₁ of the weights between the hidden layer and the output layer than on the total number of weights. In this article we investigate some geometrical properties of MLPs and, drawing on linear projection theory, we propose an equivalent number of degrees of freedom to be used in neural model selection criteria like the Akaike information criterion and the Bayesian information criterion, and in the unbiased estimation of the error variance. This measure proves to be much smaller than the total number of parameters of the network usually adopted, and it does not depend on the number of input variables. Moreover, this concept is compatible with Bartlett's results and with similar ideas long associated with projection-based models and kernel models. Some numerical studies involving both real and simulated datasets are presented and discussed.
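Both ingredients can be sketched in a few lines of numpy (all sizes are hypothetical, chosen for illustration): in a linear projection model the effective degrees of freedom equal the trace of the hat matrix, i.e. the dimension of the projection space, and for an MLP the quantity in Bartlett's bound, ||c||₁, is far smaller than the raw parameter count usually plugged into AIC/BIC.

```python
import numpy as np

rng = np.random.default_rng(3)

# Part 1: effective degrees of freedom of a linear projection equal the
# trace of its hat matrix, i.e. the number of independent columns.
X = rng.normal(size=(100, 5))
H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat (projection) matrix
df_projection = np.trace(H)            # equals 5, the column-space dimension

# Part 2: for a hypothetical MLP, contrast ||c||_1 (hidden-to-output
# weights only) with the total number of weights.
n_inputs, n_hidden = 20, 10
W_hidden = rng.normal(size=(n_hidden, n_inputs))  # input-to-hidden weights
c = rng.normal(scale=0.1, size=n_hidden)          # hidden-to-output weights
total_params = W_hidden.size + c.size             # 210 raw weights
c_l1 = np.abs(c).sum()                            # much smaller than 210
```

The point is that a complexity measure tied to the projection geometry (or to ||c||₁) can be far below the raw weight count, which changes AIC/BIC-style model comparisons for small datasets.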
Recent developments in multivariate smoothing methods provide a rich collection of feasible models for nonparametric multivariate data analysis. Among the most interpretable are those with smoothed additive terms. The construction of various methods and algorithms for computing the models has been the main concern in the literature in this area. Fewer results are available, instead, on the validation of the computed fit, and many applications of nonparametric methods end up computing and comparing the generalized validation error or related indexes. This article reviews the behaviour of some of the best-known multivariate nonparametric methods, based on subset selection and on projection, when exact collinearity or multicollinearity (near collinearity) is present in the input matrix. It shows the possible aliasing effects in the computed fits of some selection methods and explores the properties of the projection spaces reached by projection methods, in order to help data analysts select the best model in the case of ill-conditioned input matrices. Two simulation studies and a real data set application are presented to further illustrate the effects of collinearity or multicollinearity on the fit.
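The aliasing induced by near collinearity can be sketched with synthetic data: two almost identical columns yield a huge condition number and unstable individual coefficients, while their sum (the joint effect) remains well determined.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=1e-6, size=n)   # nearly collinear with x1
X = np.column_stack([x1, x2])

cond = np.linalg.cond(X)                   # huge: the matrix is ill conditioned

y = x1 + rng.normal(scale=0.1, size=n)     # true model uses x1 only
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# The individual coefficients are aliased between x1 and x2 and can be
# arbitrarily large, but beta[0] + beta[1] recovers the joint effect (~1).
```

This is the practical danger the article addresses: selection methods may pick either column (or wild coefficient pairs) while the fitted values, and hence validation-error indexes, barely change.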
Tools for assessing decoding skill in students attending elementary grades are of fundamental importance for guaranteeing an early identification of reading-disabled students and reducing both the primary negative effects (on learning) and the secondary negative effects (on the development of the personality) of this disability. This article presents results obtained by administering existing standardized tests of reading and a new screening procedure to about 1,500 students in the elementary grades in Italy. It is found that the variables measuring speed and accuracy in all administered reading tests are not Gaussian, and therefore the threshold values used for classifying a student as a normal decoder or as an impaired decoder must be estimated from the empirical distribution of these variables rather than from the percentiles of the normal distribution. It is also found that decoding speed and decoding accuracy can be measured either in a 1-minute procedure or in much longer standardized tests. The screening procedure and the tests administered are found to be equivalent insofar as they carry the same information. Finally, it is found that speed and accuracy act as complementary effects in the measurement of decoding ability. On the basis of this last finding, the study introduces a new composite indicator of the student’s performance, which combines speed and accuracy in the measurement of decoding ability.
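The distributional point can be sketched with synthetic skewed scores (lognormal data chosen for illustration, not the study's): for a positively skewed speed measure, a normal-theory 5th percentile can even turn negative, while the empirical percentile stays meaningful.

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical skewed reading-speed scores (e.g. seconds per word).
scores = rng.lognormal(mean=0.0, sigma=0.6, size=5000)

# Cut-off under a (wrong) normality assumption: mean - 1.645 * sd ...
normal_p5 = scores.mean() - 1.645 * scores.std()
# ... versus the empirical 5th percentile of the observed distribution.
empirical_p5 = np.percentile(scores, 5)
```

With skewed data the two thresholds differ substantially, so a student could be misclassified if the normal-theory cut-off were used, which is why the article estimates thresholds from the empirical distribution.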
We introduce new similarity measures between two subjects, with reference to variables with multiple categories. In contrast to traditionally used similarity indices, they also take into account the frequency of the categories of each attribute in the sample. This feature is useful when dealing with rare categories, since it makes sense to differently evaluate the pairwise presence of a rare category from the pairwise presence of a widespread one. A weighting criterion for each category derived from Shannon’s information theory is suggested. There are two versions of the weighted index: one for independent categorical variables and one for dependent variables. The suitability of the proposed indices is shown in this paper using both simulated and real world data sets.
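A minimal sketch of the weighting idea (not the paper's exact indices): weight a match on a category by its Shannon information −log₂(p), so that sharing a rare category contributes more to the similarity than sharing a widespread one.

```python
import math

def info_similarity(subj_a, subj_b, freqs):
    # Sum of Shannon information -log2(p) over the variables on which the two
    # subjects share the same category: sharing a rare category contributes
    # more than sharing a widespread one. (Illustrative sketch only.)
    total = 0.0
    for var, cat in subj_a.items():
        if subj_b[var] == cat:
            total += -math.log2(freqs[var][cat])
    return total

# Hypothetical attribute: presence of a public library, rare in the sample.
freqs = {"library": {"yes": 0.05, "no": 0.95}}
s_rare = info_similarity({"library": "yes"}, {"library": "yes"}, freqs)    # rare match
s_common = info_similarity({"library": "no"}, {"library": "no"}, freqs)    # common match
```

Here two subjects sharing the rare category score −log₂(0.05) ≈ 4.32, while sharing the widespread one scores only −log₂(0.95) ≈ 0.07, exactly the asymmetry the proposed indices exploit.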