• We designed an automated framework to attribute forest loss to specific drivers.
• We used expert-labeled Landsat 8 imagery to train a deep learning model in Indonesia.
• The model achieved high agreement with experts on data from 2012 to 2019.
• We used it to attribute drivers of 2+ million loss events in Indonesia from 2012 to 2019.
• We show key driver trends which align with corporate pledges and regulatory measures.
Deforestation is a leading contributor to greenhouse gas emissions globally. Understanding the direct drivers of forest loss is essential for developing targeted forest conservation and management policies. However, these data are difficult to collect at scale because forest loss drivers are complex and identifying them accurately requires expertise. To address this challenge, we developed a deep learning model called ForestNet that uses publicly available satellite imagery to automatically classify the drivers of primary forest loss. We validated ForestNet on a test set of expert-annotated forest loss events and showed that it achieved high performance across four major driver classes. We used ForestNet to identify these drivers for over 2 million forest loss events in Indonesia between 2012 and 2019, a significant improvement in spatial and temporal resolution over previously available data. We found that plantations and smallholder agriculture were the primary direct drivers of deforestation in Indonesia during this period, accounting for 64% of total forest loss. Deforestation increased steadily from 2001 to 2009, peaked from 2009 to 2012, and has declined steadily since, trends that we found are driven primarily by changes in plantation-driven deforestation. Our approach can serve as a general framework for scalably attributing deforestation to specific drivers and can be extended to other regions of interest, providing a flexible and cost-effective way for countries to regularly monitor, understand, and address their unique and dynamic drivers of deforestation.
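The core step of the approach above is mapping an image of a forest loss event to one of the four driver classes. The following is a minimal illustrative sketch, not the actual ForestNet architecture: a linear classifier over pooled band statistics stands in for the CNN, and the class names beyond the two named in the abstract (plantation, smallholder agriculture) are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical driver classes: the first two are named in the abstract;
# the others are placeholders for the remaining major driver categories.
CLASSES = ["plantation", "smallholder_agriculture", "grassland_shrubland", "other"]

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def classify_patch(patch, W, b):
    # Stand-in for a CNN: pool the satellite bands over the loss patch,
    # then apply a linear classifier with a softmax over driver classes.
    features = patch.mean(axis=(1, 2))        # (bands,)
    return softmax(features @ W + b)          # (num_classes,)

bands, size = 7, 64                           # e.g. Landsat 8 reflectance bands
patch = rng.normal(size=(bands, size, size))  # pixels inside one forest loss event
W = rng.normal(size=(bands, len(CLASSES)))    # untrained weights, for shape only
b = np.zeros(len(CLASSES))

probs = classify_patch(patch, W, b)
driver = CLASSES[int(np.argmax(probs))]
```

A trained model would replace the random weights and pooled features with learned convolutional features; the output remains a probability distribution over driver classes for each loss event.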
Over the past two decades, measurements of the cosmic microwave background (CMB) have provided profound insight into the nature of the universe. Detailed information about the composition and evolution of the universe is encoded in the temperature and polarization anisotropy of the CMB, measurements of which have enabled powerful tests of cosmological theory. In this thesis we present two studies of the CMB using data from the South Pole Telescope (SPT). We first present a measurement of the temperature power spectrum of the CMB from the 2500 square degree SPT-SZ survey using data from the first camera mounted on the SPT. This measurement and its cosmological interpretation were published in a pair of papers. Relative to all previous experiments at the time of publication, this analysis improved the precision of the power spectrum measurement over the entire range of angular multipoles reported (650 < ℓ < 3000). In combination with large angular scale measurements from WMAP7, these data provided several important constraints: the most significant detection of gravitational lensing of the CMB at the time (8.1σ), the first > 5σ detection of dark energy from CMB data alone, the first > 5σ detection of a scalar spectral index below unity (in combination with external data sets), the tightest constraints on tensor modes at the time (r < 0.11 at 95% C.L., in combination with external data sets), and interesting constraints on neutrino physics and other extensions to the ΛCDM cosmological model. Second, we measure the CMB gravitational lensing potential and its power spectrum using data from the polarization-sensitive camera SPTpol, the second camera installed on the SPT. We use a quadratic estimator technique, which exploits the statistical anisotropy induced by lensing in the CMB temperature and polarization fields to reconstruct the lensing potential. We measure the power spectrum of the lensing potential and find that it is well fit by a fiducial ΛCDM model.
This measurement rejects the no-lensing hypothesis at 14σ. Using polarization data alone, we reject the no-lensing hypothesis at 5.9σ. This is the highest signal-to-noise map of the CMB lensing potential to date. The quadratic estimator analysis developed here paves the way for future analyses of data from SPTpol and the third-generation experiment SPT-3G.
In the application of machine learning to remote sensing, labeled data is often scarce or expensive, which impedes the training of powerful models like deep convolutional neural networks. Although unlabeled data is abundant, recent self-supervised learning approaches are ill-suited to the remote sensing domain. In addition, most remote sensing applications currently use only a small subset of the multi-sensor, multi-channel information available, motivating the need for fused multi-sensor representations. We propose a new self-supervised training objective, Contrastive Sensor Fusion, which exploits coterminous data from multiple sources to learn useful representations of every possible combination of those sources. This method uses information common across multiple sensors and bands by training a single model to produce a representation that remains similar when any subset of its input channels is used. Using a dataset of 47 million unlabeled coterminous image triplets, we train an encoder to produce semantically meaningful representations from any possible combination of channels from the input sensors. These representations outperform fully supervised ImageNet weights on a remote sensing classification task and improve as more sensors are fused. Our code is available at https://storage.cloud.google.com/public-published-datasets/csf_code.zip.
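The training objective described above (a representation that remains similar under any subset of input channels) can be sketched as a contrastive loss over two random channel subsets of the same scene. The NumPy sketch below is an illustration, not the paper's implementation: the encoder (mean-pool plus linear projection), subset sizes, and InfoNCE-style loss with a temperature are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    # Shared encoder (toy): global average pool over spatial dims,
    # linear projection, then L2 normalization.
    pooled = x.mean(axis=(2, 3))              # (batch, channels)
    z = pooled @ W                            # (batch, dim)
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def subset_view(x, keep):
    # Zero out all channels except those in `keep`, mimicking the use
    # of only a subset of the available sensor channels.
    mask = np.zeros(x.shape[1])
    mask[keep] = 1.0
    return x * mask[None, :, None, None]

def info_nce(z1, z2, tau=0.1):
    # Contrastive objective: two channel-subset views of the same scene
    # are positives; all other scenes in the batch are negatives.
    logits = z1 @ z2.T / tau                  # (batch, batch)
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

batch, channels, h, w, dim = 8, 6, 16, 16, 32
images = rng.normal(size=(batch, channels, h, w))
W = rng.normal(size=(channels, dim))          # untrained encoder weights

keep_a = rng.choice(channels, size=3, replace=False)
keep_b = rng.choice(channels, size=4, replace=False)
z_a = encode(subset_view(images, keep_a), W)
z_b = encode(subset_view(images, keep_b), W)
loss = info_nce(z_a, z_b)
```

Minimizing such a loss pushes the encoder to extract information shared across sensors and bands, so that any channel subset maps to a similar representation.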
Next-generation cosmic microwave background (CMB) experiments will have lower noise and therefore increased sensitivity, enabling improved constraints on fundamental physics parameters such as the sum of neutrino masses and the tensor-to-scalar ratio r. Achieving competitive constraints on these parameters requires high signal-to-noise extraction of the projected gravitational potential from the CMB maps. Standard methods for reconstructing the lensing potential employ the quadratic estimator (QE). However, the QE performs suboptimally at the low noise levels expected in upcoming experiments. Other methods, like maximum likelihood estimators (MLE), are under active development. In this work, we demonstrate reconstruction of the CMB lensing potential with deep convolutional neural networks (CNN), specifically a ResUNet. The network is trained and tested on simulated data, and otherwise contains no explicit parametrization of the physical processes of the CMB or gravitational lensing. We show that, over a wide range of angular scales, ResUNets recover the input gravitational potential with a higher signal-to-noise ratio than the QE method, reaching levels comparable to analytic approximations of MLE methods. We demonstrate that the network outputs quantifiably different lensing maps when given input CMB maps generated with different cosmologies. We also show that the reconstructed lensing map can be used for cosmological parameter estimation. This application of CNNs offers a few innovations at the intersection of cosmology and machine learning. First, while training and regressing on images, we predict a continuous-variable field rather than discrete classes. Second, we are able to establish uncertainty measures for the network output that are analogous to standard methods. We expect this approach to excel in capturing hard-to-model non-Gaussian astrophysical foreground and noise contributions.
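The image-to-image regression framing noted above (predicting a continuous field rather than discrete classes) can be illustrated with a single residual convolution block, the basic unit from which a ResUNet is built. This is a toy NumPy sketch, not the paper's network: random maps stand in for simulated CMB data, and the kernel sizes and shapes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv2d_same(x, k):
    # Naive 3x3 'same' convolution on a single-channel map (illustration only).
    h, w = x.shape
    pad = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * pad[i:i + h, j:j + w]
    return out

def residual_block(x, k1, k2):
    # Minimal residual unit: two convolutions with a skip connection.
    y = np.maximum(conv2d_same(x, k1), 0.0)   # ReLU nonlinearity
    return x + conv2d_same(y, k2)

# Toy input map and target lensing potential (random stand-ins for simulations).
t_map = rng.normal(size=(32, 32))
phi_true = rng.normal(size=(32, 32))

k1 = rng.normal(size=(3, 3)) * 0.1
k2 = rng.normal(size=(3, 3)) * 0.1
phi_pred = residual_block(t_map, k1, k2)

# Continuous-field regression: the loss compares fields pixel by pixel,
# in contrast to cross-entropy over discrete class labels.
mse = np.mean((phi_pred - phi_true) ** 2)
```

A full ResUNet stacks many such blocks with downsampling and upsampling paths, but the output is the same kind of object: a map with one continuous value per pixel, trained against the known input potential of the simulation.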
Characterizing the processes leading to deforestation is critical to the development and implementation of targeted forest conservation and management policies. In this work, we develop a deep learning model called ForestNet to classify the drivers of primary forest loss in Indonesia, a country with one of the highest deforestation rates in the world. Using satellite imagery, ForestNet identifies the direct drivers of deforestation in forest loss patches of any size. We curate a dataset of Landsat 8 satellite images of known forest loss events paired with driver annotations from expert interpreters. We use the dataset to train and validate the models and demonstrate that ForestNet substantially outperforms other standard driver classification approaches. In order to support future research on automated approaches to deforestation driver classification, the dataset curated in this study is publicly available at https://stanfordmlgroup.github.io/projects/forestnet.