Monitoring driver fatigue, inattention, and lack of sleep is very important in preventing motor vehicle accidents. A visual system for automatic driver vigilance has to address two fundamental problems. First, it has to analyze the sequence of images and detect whether the driver's eyes are open or closed; then it has to evaluate the temporal occurrence of open eyes to estimate the driver's visual attention level. In this paper we propose a visual approach that solves both problems. A neural classifier is applied to recognize the eyes in the image, selecting two candidate regions that might contain the eyes by using iris geometrical information and symmetry. The novelty of this work is that the algorithm works on complex images without constraints on the background, skin color segmentation and so on. Several experiments were carried out on images of subjects with different eye colors, some of them wearing glasses, under different lighting conditions. Tests show robustness with respect to situations such as partially occluded eyes, head rotation and so on. In particular, when applied to images where people have their eyes closed, the proposed algorithm correctly reveals the absence of open eyes. Next, the analysis of eye occurrence in image sequences is carried out with a probabilistic model to recognize anomalous behaviors such as driver inattention or sleepiness. Image sequences acquired in the laboratory and while people were driving a car were used to test the driver behavior analysis and demonstrate the effectiveness of the whole approach.
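The temporal analysis of eye occurrence described above can be sketched with a simple PERCLOS-style measure: the fraction of closed-eye frames over a sliding window. This is only an illustrative stand-in for the paper's probabilistic model; the function names, window size, and threshold are assumptions.

```python
# Hypothetical sketch: estimating vigilance from per-frame eye-state
# detections with a sliding-window closed-eye ratio (PERCLOS-like).
# Window size and threshold are illustrative, not from the paper.

def closed_eye_ratio(eye_states, window):
    """eye_states: list of booleans, True = eyes detected open.
    Returns the fraction of closed-eye frames over the last `window` frames."""
    recent = eye_states[-window:]
    return sum(1 for is_open in recent if not is_open) / len(recent)

def is_drowsy(eye_states, window=30, threshold=0.4):
    """Flag anomalous behavior when closed-eye frames dominate the window."""
    return closed_eye_ratio(eye_states, window) > threshold
```

In a real system the per-frame booleans would come from the eye classifier, and the threshold would be tuned on sequences labeled as attentive or drowsy.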
A large number of methods for circle detection have been studied in recent years for several image processing applications. The application context considered in this work is the soccer game. In sequences of soccer images it is very important to identify the ball in order to verify the goal event. This is a challenging domain, as a great number of problems have to be faced, such as occlusions, shadows, objects similar to the ball, real-time processing and so on. In this work a visual framework addressing the above-stated problems, mainly considering real-time computational aspects, has been developed. The ball detection algorithm has to be very simple in terms of processing time and also efficient in terms of false positive rate. Our framework consists of two sequential steps for solving the ball recognition problem: the first step uses a modified version of the directional circle Hough transform to detect the region of the image that is the best candidate to contain the ball; in the second step a neural classifier is applied to the selected region to confirm whether the ball has been properly detected or a false positive has been found. Techniques such as background subtraction and ball tracking have been applied in order to restrict the search for the ball to limited areas of the image. Different lighting conditions have been considered, as they introduce strong modifications in the appearance of the ball in the image: when the image sequences are taken in natural light, as the light source is strictly directional, the ball, due to self-shading, appears as a spherical cap; this case has been taken into account and the search for the ball has been modified in order to manage this situation. A large number of experiments have been carried out, showing that the proposed method obtains a high detection score.
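The core idea of the directional circle Hough transform in the first step can be sketched as follows: each edge point casts a vote for a candidate circle centre at the known radius along its gradient direction, so a true circle accumulates many votes at one centre. This is a minimal sketch under assumed inputs, not the paper's modified implementation.

```python
# Illustrative sketch of a directional circle Hough transform:
# edge points vote for a centre at distance `radius` along their
# gradient direction. Input format and radius are assumptions.
import math
from collections import Counter

def hough_circle_center(edge_points, radius):
    """edge_points: iterable of (x, y, gx, gy) with gradient components.
    Returns ((cx, cy), votes) for the most voted centre, or None."""
    votes = Counter()
    for x, y, gx, gy in edge_points:
        norm = math.hypot(gx, gy)
        if norm == 0:
            continue
        cx = round(x - radius * gx / norm)
        cy = round(y - radius * gy / norm)
        votes[(cx, cy)] += 1
    return votes.most_common(1)[0] if votes else None

# Synthetic edge points on a circle of radius 5 centred at (20, 20);
# gradients point radially outward from the centre.
pts = []
for k in range(36):
    a = 2 * math.pi * k / 36
    pts.append((20 + 5 * math.cos(a), 20 + 5 * math.sin(a),
                math.cos(a), math.sin(a)))
best = hough_circle_center(pts, 5)
```

Voting only along the gradient direction (rather than over the full circle of possible centres) is what keeps the per-edge-point cost constant, which matches the real-time constraint stressed in the abstract.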
Archaeological trace extraction in aerial or satellite data is a difficult task for automatic algorithms due to the traces' similarity to other image artifacts, their poor boundary information, discontinuities and so on. In this paper we propose a modified region-based active contour approach for archaeological trace identification that overcomes the limits of standard methods regarding region uniformity and different consistencies with respect to the background. The proposed approach introduces a directional energy model into the minimization of the conventional energy term used in existing active contour approaches. The local trace direction is estimated automatically after an initial unconstrained evolution of the region. Then, an iterative block-based directional procedure is introduced to limit the application of the modified method to local and adjacent areas and to allow the processing of large images in which the traces may have complex intersections or follow a curved trajectory. Finally, in order to reduce the initialization dependence problem, we propose the use of one seed point per trace as the initial curve. Tests on the extraction of archaeological traces such as centuriations and ancient roads, visible as crop marks, have demonstrated that the proposed method and the developed MATLAB-based Graphical User Interface (GUI) help unskilled and semi-skilled users in their archaeological trace mapping operations and improve their detection precision.
► The problem of archaeological trace extraction has been considered. ► A directional energy model has been introduced in an active contour model. ► An iterative local procedure has been proposed. ► Simulated and real experiments have been carried out. ► Comparisons with other approaches from the literature have been provided.
The detection of internal defects in composite materials with non-destructive techniques is an important requirement both for quality checks during the production phase and for in-service inspection during maintenance operations. Visual inspection allows only the analysis of the surface characteristics of materials; therefore, if internal faults occur inside composite structures, a deeper analysis is required. A comparison between the reactions of different materials to ultrasonic signals can be used to highlight differences in the internal structures and also to detect the depth position of these anomalies. However, ultrasonic data are difficult to interpret, since they require the analysis of a continuous signal for each point of the material under consideration. An automatic procedure is necessary to manage large data sets and to extract significant differences between them.
In this paper, we address the problem of automatic inspection of composite materials using an ultrasonic technique. We consider two main steps for interpreting ultrasonic data: the pre-processing technique necessary to normalize the signals of composite structures with different thicknesses and the classification techniques used to compare the ultrasonic signals and detect classes of similar points.
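The two steps above can be sketched with a toy pre-processing and comparison stage: resample each A-scan to a common length, scale it to unit peak (so different thicknesses and gains become comparable), and then measure signal similarity. The resampling scheme and similarity measure here are illustrative assumptions, not the paper's actual techniques.

```python
# Hypothetical sketch of the two-step pipeline: length/amplitude
# normalisation of ultrasonic signals, then a simple similarity score.
import math

def normalize(signal, length=64):
    """Resample to a fixed length (nearest-neighbour) and scale to unit peak,
    so signals from structures of different thickness become comparable."""
    n = len(signal)
    resampled = [signal[int(i * n / length)] for i in range(length)]
    peak = max(abs(v) for v in resampled) or 1.0
    return [v / peak for v in resampled]

def similarity(a, b):
    """Cosine similarity between two equally long normalised signals;
    1.0 means identical shape, values near 0 suggest different structure."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0
```

Grouping points whose pairwise similarity exceeds a threshold would then yield the "classes of similar points" mentioned above; a real system would use a trained classifier instead of a fixed threshold.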
Breast cancer is the leading cause of female malignancy worldwide. Effective early detection through imaging studies remains critical to decreasing mortality rates, particularly in women at high risk of developing breast cancer. Breast Magnetic Resonance Imaging (MRI) is a common diagnostic tool in the management of breast diseases, especially for high-risk women. However, during this examination, both normal and abnormal breast tissues enhance after contrast material administration. Specifically, normal breast tissue enhancement is known as background parenchymal enhancement: it may represent breast activity and depends on several factors, varying in degree and distribution across patients as well as in the same patient over time. While a light degree of normal breast tissue enhancement generally causes no interpretative difficulties, a higher degree may make it difficult to detect and classify breast lesions at MRI even for experienced radiologists. In this work, we investigate the use of statistical measurements to automatically characterize the enhancement trend of the whole breast area in both normal and abnormal tissues, independently of the presence of background parenchymal enhancement, so as to provide a diagnostic support tool for radiologists in MRI analysis.
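One simple statistic of the kind the abstract alludes to is the per-region enhancement slope: a least-squares fit of signal intensity against acquisition time after contrast administration. The paper's actual statistical measurements are not specified here, so this is only an assumed illustration.

```python
# Illustrative example (not the paper's measures): characterising an
# enhancement trend by the least-squares slope of intensity vs. time.
from statistics import mean

def enhancement_slope(times, intensities):
    """Ordinary least-squares slope of intensity over acquisition time.
    A steep positive slope indicates strong early enhancement."""
    t_bar, y_bar = mean(times), mean(intensities)
    num = sum((t - t_bar) * (y - y_bar) for t, y in zip(times, intensities))
    den = sum((t - t_bar) ** 2 for t in times)
    return num / den
```

Computed per voxel or per region, such a statistic gives a spatial map of enhancement behaviour that can be compared between normal parenchyma and suspected lesions.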
This paper describes the design and performance of the timing detector developed by the TOTEM Collaboration for the Roman Pots (RPs) to measure the Time-Of-Flight (TOF) of the protons produced in central diffractive interactions at the LHC. The measurement of the TOF of the protons allows the determination of the longitudinal position of the proton interaction vertex and its association with one of the vertices reconstructed by the CMS detectors. The TOF detector is based on single-crystal Chemical Vapor Deposition (scCVD) diamond plates and is designed to measure the protons' TOF with about 50 ps time precision. This upgrade to the TOTEM apparatus will be used in LHC Run 2 and will tag central diffractive events up to an interaction pileup of about 1. Dedicated fast, low-noise electronics for signal amplification has been developed. The digitization of the diamond signal is performed by sampling the waveform. In conclusion, after introducing the physics studies that will most profit from the addition of these new detectors, we discuss in detail the optimization and the performance of the first TOF detector installed in the LHC in November 2015.
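The vertex reconstruction principle is simple enough to state explicitly: for the two protons detected in symmetric stations on opposite sides of the interaction point, the longitudinal vertex position follows from their arrival-time difference as z = c·(t₋ − t₊)/2. The sketch below is an illustration of this kinematic relation, not TOTEM software.

```python
# Illustrative sketch (not TOTEM code): longitudinal vertex position
# from the arrival-time difference of the two protons measured in
# Roman Pots at equal distances on opposite sides of the IP.
C = 299792458.0  # speed of light, m/s

def vertex_z(t_plus, t_minus):
    """t_plus, t_minus: proton arrival times (s) at symmetric stations.
    A vertex displaced by z toward the + side shortens that proton's
    path by z and lengthens the other's, so z = c * (t_minus - t_plus) / 2."""
    return C * (t_minus - t_plus) / 2.0
```

With the quoted ~50 ps precision per arm, the time difference has an uncertainty of about 50 ps·√2, giving a vertex resolution of roughly c·50 ps/√2, i.e. on the order of a centimetre, which is what makes the association with CMS-reconstructed vertices feasible at low pileup.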
Scientific progress in artificial intelligence and robotics has enabled precision viticulture to pursue sustainability and improve the final yield. For instance, monitoring the canopy volume of each plant can help ensure the correct ripening of the bunches. In this context, this paper proposes a novel approach for the characterization of biomass volume using images acquired in a vineyard with the low-cost Azure Kinect RGB-D camera. Semantic image segmentation is implemented using three encoder–decoder deep architectures (U-Net, DeepLabV3+, and MANet) to produce accurate masks of the vine leaf structure. In a transfer learning approach, a public dataset acquired with the Intel RealSense D435 depth camera is used to train the segmentation networks. Then, a complete pipeline to estimate possible changes in biomass volume is presented. Experiments are run to analyze the biomass removed during the trimming process of grapevine plants. The best segmentation result is obtained by the U-Net architecture with a ResNet50 backbone, showing an accuracy of 92.10%, even though the training and test sets consist of images acquired by different cameras. However, the DeepLabV3+ network with a ResNeXt50 backbone, which scores an accuracy of 90.25% on the test set, gives the best estimate of the removed biomass and requires the shortest training time. These outcomes prove the potential of this automatic approach for controlling leaf growth and ensuring sustainable viticulture practices.
•Low-cost RGB-D cameras for biomass characterization before and after the trimming process. •Deep learning-based semantic segmentation networks designed for high generalization. •Combination of multimodal data (color and depth) to extract only significant features. •Registration of point clouds to allow the exact characterization of the removed biomass.
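The volume-change step of such a pipeline can be sketched as a voxel-occupancy comparison: voxelise the segmented leaf point clouds captured before and after trimming (assumed already registered) and take the difference of occupied volumes. The voxel size, point format, and function names below are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch: biomass-change estimate as the difference of
# occupied-voxel volumes between registered before/after point clouds.

def voxel_volume(points, voxel=0.05):
    """points: iterable of (x, y, z) coordinates in metres.
    Returns occupied volume (m^3) = distinct voxel count * voxel^3."""
    occupied = {(int(x // voxel), int(y // voxel), int(z // voxel))
                for x, y, z in points}
    return len(occupied) * voxel ** 3

def removed_biomass(before, after, voxel=0.05):
    """Volume lost between the two acquisitions (positive = biomass removed)."""
    return voxel_volume(before, voxel) - voxel_volume(after, voxel)
```

Voxelisation makes the estimate robust to non-uniform point density, which varies strongly with camera distance in RGB-D acquisitions; the voxel size trades spatial resolution against sensitivity to registration error.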
Safety in aeronautics could be improved if continuous checks were guaranteed during the in-service inspection of aircraft. However, until now, the maintenance costs of doing so have proved prohibitive. For this reason there is great interest in the development of low-cost non-destructive inspection techniques that can be applied during normal routine tests. The analysis of the internal defects (not detectable by visual inspection) of aircraft composite materials is a difficult task unless invasive techniques are applied. In this paper, we address the problem of inspecting composite materials through the automatic analysis of thermographic data. The analysis of the time/space variations in a sequence of thermographic images allows the identification of internal defects in composite materials that could not otherwise be detected. A neural network was trained to extract the information that characterises a range of internal defects in different types of composite materials. After the training phase the same neural network was applied to all the points of a sequence of thermographic images. The experimental results demonstrate the ability of the method not only to recognize regions containing defects but also to identify the contour regions that cannot be associated with either a defective or a sound region.
The TOTEM experiment has made a precise measurement of the elastic proton–proton differential cross-section at the centre-of-mass energy √s = 8 TeV, based on a high-statistics data sample obtained with the β⁎ = 90 m optics. Both the statistical and systematic uncertainties remain below 1%, except for the t-independent contribution from the overall normalisation. This unprecedented precision makes it possible to exclude a purely exponential differential cross-section in the range of four-momentum transfer squared 0.027 < |t| < 0.2 GeV² with a significance greater than 7σ. Two extended parametrisations, with quadratic and cubic polynomials in the exponent, are shown to be well compatible with the data. Using them for the extrapolation of the differential cross-section to t = 0, and further applying the optical theorem, yields total cross-section estimates of (101.5 ± 2.1) mb and (101.9 ± 2.1) mb, respectively, in agreement with previous TOTEM measurements.
This paper is concerned with modeling earthquake-induced ground accelerations and simulating the dynamic response of linear structures through the principles of stochastic dynamics. A fully evolutionary approach, with nonstationarity in both amplitude and frequency content, is proposed in order to define the seismic action, based on seismological information in the form of a small number of input parameters commonly available in deterministic or probabilistic seismic design situations. The signal is obtained by filtering Gaussian white noise. The finite duration and time-varying amplitude properties are obtained by using a suitable envelope function. Using a subset of the records from the PEER-NGA strong-motion database, together with time-series analysis tools extended to nonstationary processes, the key transfer-function properties, in terms of circular frequency, damping ratio and spectral intensity factor, are identified. A regression analysis is conducted, for practical and flexible application of this model, in order to empirically relate the identified time-varying filter parameters to the characteristics defining earthquake scenarios, such as magnitude, rupture distance and soil type. A validation study and a parametric investigation using elastic response spectra are also included. Results show that the final seismic model can reproduce, with satisfactory accuracy, the characteristics of acceleration records in a region over a broad range of response periods.
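The filtered-white-noise construction above can be sketched in a few lines: modulate Gaussian white noise with an amplitude envelope and pass it through a damped second-order filter, whose natural frequency and damping stand in for the identified time-varying filter parameters. All parameter values below are assumed for illustration; they are not the paper's identified values, and the filter here is time-invariant for simplicity.

```python
# Illustrative sketch (assumed constant parameters): a synthetic
# accelerogram from envelope-modulated Gaussian white noise filtered
# by a damped second-order system x'' + 2*zeta*omega*x' + omega^2*x = w.
import math
import random

def envelope(t, a=1.0, b=2.0, c=0.5):
    """Saragoni-Hart-type amplitude envelope: power-law rise, exponential decay."""
    return a * (t ** b) * math.exp(-c * t)

def simulate_accelerogram(duration=20.0, dt=0.01, omega=10.0, zeta=0.6, seed=1):
    """Central-difference integration of the filter driven by modulated noise;
    the filter's acceleration response is recorded as ground acceleration."""
    rng = random.Random(seed)
    x_prev, x_curr = 0.0, 0.0
    accel = []
    for k in range(int(duration / dt)):
        w = rng.gauss(0.0, 1.0) * envelope(k * dt)
        x_next = (2 * x_curr - x_prev
                  + dt * dt * (w
                               - 2 * zeta * omega * (x_curr - x_prev) / dt
                               - omega ** 2 * x_curr))
        accel.append((x_next - 2 * x_curr + x_prev) / (dt * dt))
        x_prev, x_curr = x_curr, x_next
    return accel
```

The full evolutionary model would additionally let omega and zeta vary with time according to the regression on magnitude, rupture distance, and soil type; here they are fixed only to keep the sketch short.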
► Simulation of earthquake ground motions at a given site. ► Stochastic approach for generating acceleration time-history. ► Probabilistic response spectra. ► NGA database. ► Seismological scenario.