Developing better agricultural monitoring capabilities based on Earth Observation data is critical for strengthening food production information and market transparency. The Sentinel-2 mission offers the optimal capacity for regional to global agriculture monitoring in terms of resolution (10–20 m), revisit frequency (five days) and coverage (global). In this context, the European Space Agency launched the "Sentinel-2 for Agriculture" project in 2014, which aims to prepare the exploitation of Sentinel-2 data for agriculture monitoring through the development of open-source processing chains for relevant products. The project generated an unprecedented data set, made of "Sentinel-2 like" time series and in situ data acquired in 2013 over 12 globally distributed sites. The Earth Observation time series were mostly built on the SPOT4 (Take 5) data set, which was specifically designed to simulate Sentinel-2, complemented by Landsat 8 and RapidEye imagery. Images were pre-processed to Level 2A and the quality of the resulting time series was assessed. In situ data on cropland, crop type and biophysical variables were shared by site managers, most of whom belong to the "Joint Experiment for Crop Assessment and Monitoring" network. This data set allowed the methodologies that will form the core of the future "Sentinel-2 for Agriculture" system to be tested and compared across sites.
The convergence of new EO data flows, new methodological developments and cloud computing infrastructure calls for a paradigm shift in operational agriculture monitoring. The Copernicus Sentinel-2 mission, providing a systematic 5-day revisit cycle and free data access, opens a completely new avenue for near real-time, crop-specific monitoring at parcel level over large countries. This research investigated the feasibility of proposing methods and developing an open-source system able to generate, at national scale, cloud-free composites, dynamic cropland masks, crop type maps and vegetation status indicators suitable for most cropping systems. The resulting Sen2-Agri system automatically ingests and processes Sentinel-2 and Landsat 8 time series in a seamless way to derive these four products, thanks to streamlined processes based on machine learning algorithms and quality-controlled in situ data. It embeds a set of key principles proposed to address the new challenges arising from countrywide 10 m resolution agriculture monitoring. The full-scale demonstration of this system for three entire countries (Ukraine, Mali, South Africa) and five local sites distributed across the world was a major challenge, met successfully despite the availability of only one Sentinel-2 satellite in orbit. In situ data were collected for calibration and validation in a timely manner, allowing the production of the four Sen2-Agri products over all the demonstration sites. The independent validation of the monthly cropland masks provided overall accuracy values higher than 90% for most sites, and already higher than 80% as early as mid-season. The crop type maps depicting the five main crops of the considered study sites were also successfully validated: overall accuracy values were higher than 80%, and F1 scores of the different crop type classes were most often higher than 0.65.
These results pave the way for a countrywide, crop-specific monitoring system at parcel level, bridging the gap between parcel visits and national-scale assessment. The full-scale demonstration clearly highlights the operational capacity of the Sen2-Agri system to exploit in near real-time the observations acquired by the Sentinel-2 mission over very large areas. Scaling this open-source system on cloud computing infrastructure becomes instrumental to support market transparency while building national monitoring capacity, as requested by the AMIS and GEOGLAM G-20 initiatives.
•First-ever national crop mapping at 10 m for Mali, Ukraine and South Africa
•Near real-time agriculture monitoring at parcel level made operational nationwide
•Demonstration across the world of multi-sensor EO exploitation for crop monitoring
•Sentinel-2 time series mapping crop type at 10 m resolution along the growing season
•Sen2-Agri: an innovative system to monitor crops in any country around the globe
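The cloud-free compositing step mentioned in the abstract can be illustrated with a minimal per-pixel selection rule: for each pixel, keep the observation from the acquisition date with the lowest cloud score. This is only a toy sketch of the general idea; the function name, data layout and scores are invented and do not reproduce the actual Sen2-Agri compositing algorithm.

```python
# Hypothetical sketch of per-pixel cloud-free compositing: for every pixel,
# keep the reflectance from the date whose cloud score is lowest.
def cloud_free_composite(time_series):
    """time_series: list of (reflectance_grid, cloud_score_grid) pairs,
    one per acquisition date; grids are 2-D lists of equal shape."""
    rows = len(time_series[0][0])
    cols = len(time_series[0][0][0])
    composite = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # choose the date with the lowest cloud probability at (r, c)
            best = min(time_series, key=lambda date: date[1][r][c])
            composite[r][c] = best[0][r][c]
    return composite

# toy example: two dates over a 1x2 scene; date 0 is cloudy on pixel (0, 1)
dates = [
    ([[0.1, 0.8]], [[0.0, 0.9]]),   # (reflectance, cloud score)
    ([[0.12, 0.15]], [[0.1, 0.1]]),
]
print(cloud_free_composite(dates))  # → [[0.1, 0.15]]
```

A production system would of course operate on raster bands rather than nested lists, and would combine cloud, shadow and snow masks, but the per-pixel "best observation" selection is the core of the idea.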
Bioluminescence imaging (BLI) offers the possibility to study and image biology at molecular scale in small animals, with applications in oncology or gene expression studies. Here we present a novel model-based approach to 3D animal tracking from monocular video which allows the quantification of the bioluminescence signal on freely moving animals. The 3D animal pose and the illumination are dynamically estimated through minimization of an objective function with constraints on the bioluminescence signal position. Derived from an inverse-problem formulation, the objective function enables explicit use of temporal continuity and shading information, while handling important self-occlusions and time-varying illumination. In this model-based framework, we include a constraint on the 3D position of the bioluminescence signal to enforce tracking of the biologically produced signal. The minimization is done efficiently using a quasi-Newton method, with a rigorous derivation of the objective function gradient. Promising experimental results demonstrate the potential of our approach for accurate 3D measurement on freely moving animals.
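The quasi-Newton idea used for the pose and illumination estimation can be illustrated in one dimension: Newton steps are taken on the gradient, with the second derivative replaced by a finite-difference (secant) approximation so no explicit Hessian is needed. The objective below is a toy stand-in, not the paper's actual cost function.

```python
# Minimal 1-D quasi-Newton (secant) sketch: find a stationary point of an
# objective from its gradient alone, approximating curvature from two
# successive gradient samples instead of computing a Hessian.
def quasi_newton_1d(grad, x0, x1, tol=1e-10, max_iter=100):
    g0, g1 = grad(x0), grad(x1)
    for _ in range(max_iter):
        if abs(g1 - g0) < 1e-15:
            break
        # secant step: curvature ~ (g1 - g0) / (x1 - x0)
        x2 = x1 - g1 * (x1 - x0) / (g1 - g0)
        if abs(x2 - x1) < tol:
            return x2
        x0, g0 = x1, g1
        x1, g1 = x2, grad(x2)
    return x1

# toy objective f(x) = (x - 3)^2 with gradient 2(x - 3); minimum at x = 3
x_star = quasi_newton_1d(lambda x: 2 * (x - 3), 0.0, 1.0)
print(round(x_star, 6))  # → 3.0
```

In higher dimensions, methods such as BFGS generalize this curvature update to a full approximate inverse Hessian, which is why an analytic gradient derivation (as in the paper) is the key ingredient.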
Optical imaging is an efficient means to measure biological signals. However, it can suffer from low spatial and temporal resolution, while deformable animal displacements can also significantly degrade the localization of the measurements. In this paper, we propose a novel approach to perform fusion of cinematic flow and optical imaging towards enhancement of the biological signal. To this end, fusion is reformulated as a population (all vs. all) registration problem where the two (spatially aligned) signals are registered in time using the same deformation field. Implicit silhouette and landmark matching are considered for the cinematic images and are combined with global, statistical, congealing-type measurements of the optical one. The problem is reformulated using a discrete MRF, where optical imaging costs are expressed in singleton (global) potentials, while smoothness constraints as well as cinematic measurements are expressed through pairwise potentials. Promising experimental results demonstrate the potential of our approach.
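The discrete MRF formulation described above can be made concrete with a toy energy function: a labeling's cost combines singleton potentials (data costs, standing in here for the optical-imaging term) and pairwise potentials (a smoothness penalty between neighboring sites). All values below are invented for illustration and chosen to be exact in binary floating point.

```python
# Toy discrete-MRF energy: unary (data) costs plus a Potts smoothness
# penalty paid whenever two neighboring sites take different labels.
def mrf_energy(labels, unary, pairwise_weight, edges):
    """labels: one integer label per site.
    unary: unary[site][label] = data cost of that assignment.
    edges: list of (i, j) neighbor pairs."""
    energy = sum(unary[i][l] for i, l in enumerate(labels))
    energy += sum(pairwise_weight for (i, j) in edges
                  if labels[i] != labels[j])
    return energy

# 3 sites in a chain, 2 labels; the smooth labeling is cheaper overall
unary = [[0.25, 1.0], [0.5, 0.75], [0.25, 1.5]]
edges = [(0, 1), (1, 2)]
print(mrf_energy([0, 0, 0], unary, 0.5, edges))  # → 1.0
print(mrf_energy([0, 1, 0], unary, 0.5, edges))  # → 2.25 (two disagreements)
```

Inference then amounts to searching for the labeling of minimum energy, typically with graph cuts or message passing on the neighborhood graph.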
Bioluminescence imaging (BLI) allows detection of biological functions in genetically modified cells, bacteria, or animals expressing a luciferase (e.g., firefly, Renilla, or aequorin). Given the high sensitivity and minimal toxicity of BLI, studies on molecular events can be performed noninvasively in living rodents. To date, detection of bioluminescence in living animals has required long exposure times that are incompatible with studies on dynamic signaling pathways or non-anaesthetized, freely moving animals. Here we develop an imaging system that allows: (1) bioluminescence to be recorded at a rate of
using a third-generation intensified charge-coupled device (CCD) camera running in photon-counting mode, and (2) coregistration of a video image from a second CCD camera under infrared lighting. The sensitivity of this instrument permits studies with subsecond temporal resolution in non-anaesthetized and unrestrained mice expressing firefly luciferase, and imaging of calcium signaling in transgenic mice expressing green fluorescent protein (GFP) aequorin. This imaging system enables studies on signal transduction, tumor growth, gene expression, or infectious processes in non-anaesthetized and freely moving animals.
This article describes a practical Earth Observation use case that would benefit from quantum computing. We analyze three quantum neural network algorithms and implement one of them on the EuroSAT dataset. We compare the algorithms with respect to complexity and degree of quantization. We believe that the algorithms we propose will be useful for the remote sensing community when quantum computing technologies become widely available.
Supervised machine learning techniques are widely used for hyperspectral image segmentation. A typical simple classification scheme for such images probabilistically assigns a label to each individual pixel, omitting information about the pixel's surroundings. In order to achieve better classification results for real-world images, one has to reconcile the local label obtained from the classifier with the classes of the pixel's neighborhood. A popular way to do this is through a probabilistic graphical model, where label distributions for individual pixels are mapped onto a graph of neighborhood relations. One way to realize this approach is to use Ising models, where class probability is mapped to spin energy and class-class interaction is mapped to the coupling between spins. By finding low-energy states of such an Ising model, we can perform post-processing of segmented images. In this work we present how this post-processing can be implemented using a quantum annealer.
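The Ising post-processing described above can be prototyped classically before targeting quantum hardware: here a short simulated-annealing loop (a stand-in for the quantum annealer) searches for low-energy states of spins whose local fields encode classifier confidence and whose couplings reward neighbor agreement. All parameters and data below are invented toy values.

```python
# Classical simulated-annealing sketch of the Ising post-processing:
# minimize E(s) = -sum_i h[i]*s[i] - J * sum_(i,j) s[i]*s[j]
# over spins s[i] in {-1, +1}.
import math
import random

def anneal(h, J, edges, sweeps=200, seed=0):
    """h: per-site field (e.g., classifier log-odds), J: coupling strength,
    edges: neighbor pairs. Returns an (approximately) minimal-energy state."""
    rng = random.Random(seed)
    n = len(h)
    s = [rng.choice([-1, 1]) for _ in range(n)]
    nbrs = {i: [] for i in range(n)}
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    for sweep in range(sweeps):
        temp = max(0.01, 2.0 * (1 - sweep / sweeps))  # linear cooling
        for i in range(n):
            # energy change if spin i is flipped
            delta = 2 * s[i] * (h[i] + J * sum(s[j] for j in nbrs[i]))
            if delta <= 0 or rng.random() < math.exp(-delta / temp):
                s[i] = -s[i]
    return s

# 4-pixel chain: strong evidence for class +1 everywhere except a noisy
# site 2, which the smoothness coupling should pull back into agreement
h = [1.0, 1.0, -0.2, 1.0]          # per-pixel classifier log-odds
spins = anneal(h, J=0.5, edges=[(0, 1), (1, 2), (2, 3)])
print(spins)  # → [1, 1, 1, 1]
```

On a quantum annealer the same fields and couplings would be submitted as the problem Hamiltonian; the classical loop only replaces the sampling step, not the Ising formulation itself.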
Motion-based enhancement of optical imaging
Savinaud, M.; Paragios, N.; Maitrejean, S.
2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, June 2009
Conference Proceeding
Optical imaging offers the possibility to study and image biology at molecular scale in small animals, with applications such as oncology or gene expression studies. New devices enable cinematic acquisitions, but in some cases the signals suffer from weak spatial information. The aim of this paper is to improve the localization of the luminescent data for a freely moving animal using motion information obtained with an additional camera. To this end, we propose an approach that performs a non-rigid registration on the scene video and uses the corresponding motion vectors to filter the optical signal. Due to the lack of contrast and texture, our method introduces silhouette constraints and landmarks on the mouse skin within a variational framework. Motion is represented using thin-plate splines and the objective function is optimized using a conjugate gradient descent approach. Experimental results demonstrate the potential of the proposed framework.
Orfeo ToolBox is an open-source project for state-of-the-art remote sensing, including a fast image viewer, applications callable from the command line, Python or QGIS, and a powerful C++ API. This article is an introduction to the Orfeo ToolBox's flagship features from the point of view of the two communities it brings together: remote sensing and software engineering.
Among preclinical imaging approaches, optical techniques on small animals provide functional information about a biological phenomenon as well as its localization. Recent developments make it possible to use these methods for imaging awake animals, bringing the physiological conditions closer to those of the organism's normal functioning. This thesis focused on the optimal use of this modality through original analysis and processing methods. The problems raised by the fusion of cinematic flows and bioluminescence data led us to propose complementary approaches for estimating the animal's motion. An implicit representation of the information extracted from the animal video makes it possible to build a robust criterion to minimize. Adding a global criterion measuring the compactness of the optical signal allows the acquired multi-channel data to be considered as a whole, increasing registration accuracy. Both models yield relevant, experimentally validated results. To overcome the constraints of the planar observation of our data, we designed a method for estimating the animal's 3D motion from a pre-computed model. Thanks to a simultaneous multi-view acquisition system, a constraint on the estimated source position can be added to make the tracking of poses from the video robust. Experimental results show the potential of this method for providing accurate 3D measurements on awake animals.
Optical imaging techniques have, for many years, played a large part in preclinical studies. The luminescence signal can now be recorded with a short time resolution, which enables studies with freely moving animals. This is an improvement, as several studies have highlighted the impact of anesthetic agents and animal handling when performing studies in physiological conditions. In this thesis, we define tools, based on computer vision methods, that make it possible to express the potential of this modality. In some cases, animal movement and low signal lead to weak localization of the signal. We therefore propose to improve the localization of the optical data for a freely moving animal by using a motion field obtained from the multi-channel data. First, we introduce silhouette constraints and landmarks on the mouse skin within a variational framework. To take all data into account in the registration framework, we combine the previously defined criteria with global ones that measure the compactness of the signal distribution. Fusion is formulated in a discrete population framework, which produces strong experimental results in comparison to the pairwise method. In the last part, we propose an original approach to enable 3D optical imaging of a freely moving animal. We present a novel model-based method for animal tracking from monocular video which allows 3D measurement of the signal. The 3D animal pose and the illumination are dynamically estimated through minimization of an objective function with constraints on the signal position. Experimental results demonstrate the potential of our approach for accurate 3D measurement on freely moving animals.