The augmented Lagrangian method (ALM) has been widely used for solving constrained optimization problems. In practice, the subproblems for updating the primal variables within the ALM framework can usually only be solved inexactly. The convergence and local convergence speed of the ALM have been extensively studied. However, the global convergence rate of the inexact ALM has remained open for problems with nonlinear inequality constraints. In this paper, we study general convex programs with both equality and inequality constraints. For these problems, we establish the global convergence rate of the inexact ALM and estimate its iteration complexity in terms of the number of gradient evaluations needed to produce a primal and/or primal-dual solution of a specified accuracy. We first establish an ergodic convergence rate result for the inexact ALM with constant or geometrically increasing penalty parameters. Based on this result, we then apply Nesterov's optimal first-order method to each primal subproblem and estimate the iteration complexity of the inexact ALM. We show that if the objective is convex, then O(ε^{-1}) gradient evaluations suffice to guarantee a primal ε-solution in terms of both the primal objective and the feasibility violation. If the objective is strongly convex, this improves to O(ε^{-1/2} |log ε|). Producing a primal-dual ε-solution requires more gradient evaluations in the convex case, namely O(ε^{-4/3}), while in the strongly convex case the count remains O(ε^{-1/2} |log ε|). Finally, we establish a nonergodic convergence rate result for the inexact ALM with geometrically increasing penalty parameters; this result holds only for the primal problem. We show that the nonergodic iteration complexity is of the same order as the ergodic one. Numerical experiments on quadratically constrained quadratic programming compare the performance of the inexact ALM under different settings.
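The outer loop of an inexact ALM can be sketched on a toy problem. Below is a minimal illustration, assuming the equality-constrained convex program min 0.5*||x||^2 s.t. Ax = b; the inner solver (plain gradient descent), the constant penalty beta, and all tolerances are illustrative choices, not the schedules analyzed in the paper.

```python
import numpy as np

def inexact_alm(A, b, beta=10.0, outer_iters=30, inner_iters=500, tol=1e-10):
    """Inexact ALM sketch for: min 0.5*||x||^2  s.t.  A x = b."""
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    # Smoothness constant of the augmented Lagrangian in x.
    L = 1.0 + beta * np.linalg.norm(A, 2) ** 2
    for _ in range(outer_iters):
        # Inexactly minimize the augmented Lagrangian
        #   0.5*||x||^2 + y^T (A x - b) + 0.5*beta*||A x - b||^2
        # by gradient descent with fixed step 1/L, stopping early.
        for _ in range(inner_iters):
            g = x + A.T @ y + beta * (A.T @ (A @ x - b))
            if np.linalg.norm(g) <= tol:   # inexact stopping rule
                break
            x -= g / L
        y += beta * (A @ x - b)            # multiplier (dual) update
    return x, y

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))
b = rng.standard_normal(3)
x_sol, y_sol = inexact_alm(A, b)
print("feasibility violation:", np.linalg.norm(A @ x_sol - b))
```

The complexity results in the paper instead pair this outer loop with Nesterov's accelerated method on each subproblem, under constant or geometrically increasing penalties.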
On solving a convex-concave bilinear saddle-point problem (SPP), many works have studied the complexity of first-order methods. These results are all upper complexity bounds, which determine at most how many iterations guarantee a solution of desired accuracy. In this paper, we pursue the opposite direction by deriving lower complexity bounds for first-order methods on large-scale SPPs. Our results apply to methods whose iterates lie in the linear span of past first-order information, as well as to more general methods that produce their iterates in an arbitrary manner based on first-order information. We first work on affinely constrained smooth convex optimization, a special case of SPP. In contrast to gradient methods on unconstrained problems, we show that first-order methods on affinely constrained problems generally cannot be accelerated from the known convergence rate O(1/t) to O(1/t^2), and moreover that O(1/t) is optimal for convex problems. We also prove that for strongly convex problems, O(1/t^2) is the best possible convergence rate, whereas gradient methods achieve linear convergence on unconstrained problems. We then extend these results to general SPPs. It turns out that our lower complexity bounds match several established upper complexity bounds in the literature; hence they are tight and indicate the optimality of several existing first-order methods.
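For concreteness, a convex-concave bilinear SPP can be written in the generic form below (the notation is mine; the paper's formulation may differ in details):

```latex
\min_{x}\ \max_{y}\;\; f(x) + \langle A x,\, y\rangle - g(y),
```

with f and g convex. Affinely constrained smooth convex optimization, min f(x) s.t. Ax = b, is recovered as the special case g(y) = <b, y>, since the inner maximization over the unconstrained y then forces Ax = b.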
Well below 2 °C — Xu, Yangyang; Ramanathan, Veerabhadran
Proceedings of the National Academy of Sciences (PNAS), 09/2017, Vol. 114, No. 39
Journal Article · Peer reviewed · Open access
The historic Paris Agreement calls for limiting global temperature rise to "well below 2 °C." Because of uncertainties in emission scenarios, climate, and carbon cycle feedback, we interpret the Paris Agreement in terms of three climate risk categories and bring in considerations of low-probability (5%) high-impact (LPHI) warming in addition to the central (~50% probability) value. The current risk category of dangerous warming is extended to more categories, defined here as follows: >1.5 °C as dangerous; >3 °C as catastrophic; and >5 °C as unknown, implying beyond catastrophic, including existential threats. With unchecked emissions, the central warming can reach the dangerous level within three decades, with the LPHI warming becoming catastrophic by 2050. We outline a three-lever strategy to limit the central warming below the dangerous level and the LPHI warming below the catastrophic level, both in the near term (<2050) and in the long term (2100): the carbon neutral (CN) lever to achieve zero net emissions of CO₂, the super pollutant (SP) lever to mitigate short-lived climate pollutants, and the carbon extraction and sequestration (CES) lever to thin the atmospheric CO₂ blanket. Pulling on both the CN and SP levers and bending the emissions curve by 2020 can keep the central warming below dangerous levels. To limit the LPHI warming below dangerous levels, the CES lever must also be pulled to extract as much as 1 trillion tons of CO₂ before 2100, both to limit preindustrial-to-2100 cumulative net CO₂ emissions to 2.2 trillion tons and to bend the warming curve to a cooling trend.
Though highly motivated to slow the climate crisis, governments may struggle to impose costly policies on entrenched interest groups, resulting in a greater need for negative emissions. Here, we model wartime-like crash deployment of direct air capture (DAC) as a policy response to the climate crisis, calculating funding, net CO₂ removal, and climate impacts. An emergency DAC program, with investment of 1.2-1.9% of global GDP annually, removes 2.2-2.3 GtCO₂ yr⁻¹ in 2050, 13-20 GtCO₂ yr⁻¹ in 2075, and 570-840 GtCO₂ cumulatively over 2025-2100. Compared to a future in which policy efforts to control emissions follow current trends (SSP2-4.5), DAC substantially hastens the onset of net-zero CO₂ emissions (to 2085-2095) and peak warming (to 2090-2095); yet warming still reaches 2.4-2.5 °C in 2100. Such massive CO₂ removals hinge on near-term investment to boost the future capacity for upscaling. DAC is most cost-effective when using electricity sources already available today: hydropower and natural gas with renewables; fully renewable systems are more expensive because their low load factors do not allow efficient amortization of capital-intensive DAC plants.
Nonconvex optimization arises in many areas of computational science and engineering. However, most nonconvex optimization algorithms are only known to have local convergence or subsequence convergence properties. In this paper, we propose an algorithm for nonconvex optimization and establish its global convergence (of the whole sequence) to a critical point. In addition, we give its asymptotic convergence rate and numerically demonstrate its efficiency. In our algorithm, the variables of the underlying problem are treated either as one block or as multiple disjoint blocks. It is assumed that each non-differentiable component of the objective function, or each constraint, applies to only one block of variables. The differentiable components of the objective function, however, can involve multiple blocks of variables together. Our algorithm updates one block of variables at a time by minimizing a certain prox-linear surrogate, along with an extrapolation to accelerate convergence. The order of update can be either deterministically cyclic or randomly shuffled in each cycle. In fact, our convergence analysis only requires that each block be updated at least once in every fixed number of iterations. We show global convergence (of the whole sequence) to a critical point under fairly loose conditions including, in particular, the Kurdyka–Łojasiewicz condition, which is satisfied by a broad class of nonconvex/nonsmooth applications. These results, of course, remain valid when the underlying problem is convex. We apply our convergence results to the coordinate descent iteration for nonconvex regularized linear regression, as well as to a modified rank-one residue iteration for nonnegative matrix factorization, and show that both applications have global convergence. Numerically, we tested our algorithm on nonnegative matrix and tensor factorization problems, where random shuffling clearly improves the chance of avoiding low-quality local solutions.
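The block update just described can be sketched on nonnegative matrix factorization, min over X, Y >= 0 of 0.5*||M - XY||_F^2: extrapolate a block, take a gradient step on the smooth part, then apply the prox of the nonnegativity constraint (a projection). The constant extrapolation weight omega and problem sizes below are illustrative, not the paper's safeguarded choices.

```python
import numpy as np

def nmf_bcd(M, r, iters=200, omega=0.3):
    """Two-block prox-linear iteration with extrapolation for NMF (sketch)."""
    m, n = M.shape
    rng = np.random.default_rng(1)
    X, Y = rng.random((m, r)), rng.random((r, n))
    Xp, Yp = X.copy(), Y.copy()       # previous iterates for extrapolation
    for _ in range(iters):
        # X-block: extrapolate, gradient step with step 1/L, then project.
        Xe = X + omega * (X - Xp)
        Lx = max(np.linalg.norm(Y @ Y.T, 2), 1e-12)   # Lipschitz const. in X
        G = (Xe @ Y - M) @ Y.T
        Xp, X = X, np.maximum(Xe - G / Lx, 0.0)
        # Y-block: the same prox-linear step with the roles swapped.
        Ye = Y + omega * (Y - Yp)
        Ly = max(np.linalg.norm(X.T @ X, 2), 1e-12)
        G = X.T @ (X @ Ye - M)
        Yp, Y = Y, np.maximum(Ye - G / Ly, 0.0)
    return X, Y

M = np.random.default_rng(2).random((20, 15))
X, Y = nmf_bcd(M, r=5)
rel_err = np.linalg.norm(M - X @ Y) / np.linalg.norm(M)
```

Updating the two blocks cyclically as above is the simplest instance; the analysis in the paper also covers random shuffling and only requires each block to be updated at least once in every fixed number of iterations.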
Multi-way data arise in many applications such as electroencephalography classification, face recognition, text mining, and hyperspectral data analysis. Tensor decomposition is commonly used to find the hidden factors and elicit the intrinsic structures of multi-way data. This paper considers sparse nonnegative Tucker decomposition (NTD), which decomposes a given tensor into the product of a core tensor and several factor matrices under sparsity and nonnegativity constraints. An alternating proximal gradient method is applied to solve the problem, and the algorithm is then modified for sparse NTD with missing values. The per-iteration cost of the algorithm scales well with the data size, and global convergence is established under fairly loose conditions. Numerical experiments on both synthetic and real-world data demonstrate its superiority over a few state-of-the-art methods for (sparse) NTD from partial and/or full observations. The MATLAB code, along with demos, is accessible from the author's homepage.
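The combined sparsity-plus-nonnegativity constraint in such decompositions admits a simple proximal operator, a one-sided soft-threshold. The sketch below shows the standard prox of lam*||v||_1 restricted to the nonnegative orthant; it is a generic building block, not necessarily the paper's exact update.

```python
import numpy as np

def prox_sparse_nonneg(v, lam):
    """Prox of lam*||v||_1 + indicator(v >= 0) with unit step:
    argmin_{x >= 0} 0.5*||x - v||^2 + lam*sum(x) = max(v - lam, 0)."""
    return np.maximum(v - lam, 0.0)

v = np.array([-0.5, 0.2, 1.3])
out = prox_sparse_nonneg(v, 0.3)  # entries at or below the threshold vanish
```

Applied entrywise to a factor matrix after each gradient step, this simultaneously enforces nonnegativity and promotes sparsity.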
The Asian monsoon (AM) played an important role in the dynastic history of China, yet it remains unknown whether AM-mediated shifts in Chinese societies affected earth surface processes to the point of exceeding natural variability. Here, we present a dust storm intensity record dating back to the first unified dynasty of China (the Qin Dynasty, 221-207 B.C.E.). Marked increases in dust storm activity coincided with unified dynasties with large populations during strong AM periods. By contrast, reduced dust storm activity corresponded to decreased population sizes and periods of civil unrest, coeval with a weakened AM. The strengthened AM may have facilitated the development of Chinese civilizations, which destabilized the topsoil and thereby increased the dust storm frequency. Beginning at least 2,000 years ago, human activities may have started to overtake natural climatic variability as the dominant control on dust storm activity in eastern China.
As the largest emitter in the world, China recently pledged to reach a carbon peak before 2030 and carbon neutrality before 2060, which could accelerate progress in mitigating negative climate change effects. In this study, we used the Minimum Complexity Earth Simulator and a semi-empirical statistical model to quantify the global mean temperature and sea-level rise (SLR) response under a suite of emission pathways constructed to cover various carbon peak and carbon neutrality years in China. The results show that China will require a carbon emission reduction rate of no less than 6%/year and a growth rate of more than 10%/year in carbon capture capacity to achieve carbon neutrality by 2060. Carbon peak years and peak emissions contribute significantly to mitigating climate change in the near term, while carbon neutrality years are more influential in the long term. Mitigation due to China's recent pledge alone will avoid 0.16 °C-0.21 °C of warming by 2100 and also lessen the cumulative warming above the 1.5 °C level. When accompanied by coordinated international efforts to reach global carbon neutrality before 2070, the 2 °C target can be achieved. However, the 1.5 °C target requires additional efforts, such as global-scale adoption of negative emission technology for CO₂, as well as a deep cut in non-CO₂ GHG emissions. Collectively, the efforts of adopting negative emission technology and curbing all greenhouse gas emissions will reduce global warming by 0.9 °C-1.2 °C by 2100, and also reduce SLR by 49-59 cm in 2200, compared to a baseline mitigation pathway already aiming at 2 °C. Our findings suggest that while China's ambitious carbon-neutral pledge contributes to the Paris Agreement's targets, additional major efforts will be needed, such as reaching an earlier and lower CO₂ emission peak, developing negative emission technology for CO₂, and cutting other non-CO₂ GHGs such as N₂O, CH₄, O₃, and HFCs.
We have developed a miniature two-photon microscope equipped with an axial scanning mechanism and a long-working-distance miniature objective to enable multi-plane imaging over a volume of 420 × 420 × 180 μm³ at a lateral resolution of ~1 μm. Together with a detachable design that permits long-term recurring imaging, our miniature two-photon microscope can help decipher neuronal mechanisms in freely behaving animals.
Thermal stress poses a major public health threat in a warming world, especially to disadvantaged communities. At the population group level, human thermal stress is heavily affected by landscape heterogeneities such as terrain, surface water, and vegetation. High-spatial-resolution thermal-stress indices, containing more detailed spatial information, are greatly needed to characterize the spatial pattern of thermal stress and enable a better understanding of its impacts on public health, tourism, and study and work performance. Here, we present a 0.1° × 0.1° gridded dataset of multiple thermal stress indices derived from the newly available ECMWF ERA5-Land and ERA5 reanalysis products over South and East Asia from 1981 to 2019. This high-spatial-resolution database of human thermal stress indices over South and East Asia (HiTiSEA), which contains the daily mean, maximum, and minimum values of the UTCI, MRT, and eight other widely adopted indices, is suitable for both indoor and outdoor applications and allows researchers and practitioners to investigate the spatial and temporal evolution of human thermal stress and its impacts on densely populated regions over South and East Asia at a finer scale.