In this paper, we consider the problem of controlling a diffusion process pertaining to an opioid epidemic dynamical model with random perturbation so as to prevent it from leaving a given bounded open domain. In particular, we assume that the random perturbation enters only through the dynamics of the susceptible group in the compartmental model of the opioid epidemic dynamics; as a result, the corresponding diffusion is degenerate, and we further assume that the associated diffusion operator is hypoelliptic. This hypoellipticity assumption implies that the corresponding diffusion process admits a transition probability density function with the strong Feller property. Here, we minimize the asymptotic exit rate of such a controlled diffusion process from the given bounded open domain and derive the Hamilton–Jacobi–Bellman equation for the corresponding optimal control problem, which is closely related to a nonlinear eigenvalue problem. Finally, we also prove a verification theorem that provides a sufficient condition for optimal control.
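As a hedged sketch (the abstract does not state the exact equations), the link between the minimal asymptotic exit rate and a nonlinear eigenvalue problem typically takes the following form, with $\mathcal{L}^u$ the controlled generator, $D$ the bounded open domain, and $\tau$ the first exit time:

```latex
\[
\min_{u \in U} \bigl[\mathcal{L}^{u} \phi^{*}(x)\bigr] + \lambda^{*} \phi^{*}(x) = 0,
\quad x \in D, \qquad
\phi^{*} > 0 \ \text{in } D, \quad \phi^{*} = 0 \ \text{on } \partial D,
\]
\[
\lambda^{*} \;=\; -\lim_{t \to \infty} \frac{1}{t}
\log \mathbb{P}_{x}\{\tau > t\},
\qquad
\tau = \inf\{t \ge 0 : X_t \notin D\},
\]
```

so that the minimized exit rate $\lambda^{*}$ appears as a principal eigenvalue of the controlled generator with Dirichlet boundary conditions.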
In this paper, we consider the problem of asymptotic exit control for a prescription opioid epidemic model that describes the interaction between the regular prescription or addictive use of opioid drugs and the processes of rehabilitation and of relapsing into opioid drug use. In particular, our interest is in the situation in which the optimal control effort, appearing linearly in the opioid epidemic model, is interpreted as the rate at which susceptible individuals are effectively removed from the population due to an opioid-related intervention policy, while a small perturbing noise enters through the dynamics of the susceptible group in the population compartmental model. To this end, we introduce a mathematical apparatus that minimizes the asymptotic exit rate with which the solution of such a stochastically perturbed prescription opioid epidemic model exits from a given bounded open domain. Moreover, under certain conditions, we provide an admissible optimal control for the corresponding optimal control problem that optimally effects the removal of susceptible or recovered individuals from the population dynamics.
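To make the setup concrete, here is a minimal Euler–Maruyama sketch of a schematic susceptible/addicted/recovered compartmental model in which a linear control `nu` removes susceptibles and a small noise enters only the susceptible dynamics. All parameter values and the compartment structure are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def simulate(nu, eps=0.01, T=50.0, dt=0.01, seed=0):
    """Euler-Maruyama simulation of a schematic S-A-R opioid model.

    nu  : removal rate of susceptibles (the control, entering linearly)
    eps : small noise intensity, entering only the S dynamics
    All rates below are hypothetical, chosen only for illustration.
    """
    rng = np.random.default_rng(seed)
    Lam, beta, mu, gamma = 0.2, 0.8, 0.1, 0.2  # inflow, contact, exit, recovery
    n = int(T / dt)
    S, A, R = 0.8, 0.1, 0.1
    path = np.empty((n, 3))
    for k in range(n):
        dW = rng.normal(0.0, np.sqrt(dt))
        # random perturbation enters only through the susceptible group
        S += (Lam - beta * S * A - mu * S - nu * S) * dt + eps * dW
        A += (beta * S * A - (mu + gamma) * A) * dt
        R += (gamma * A - mu * R) * dt
        path[k] = (S, A, R)
    return path

# A larger removal rate nu suppresses the addicted compartment:
addicted_uncontrolled = simulate(nu=0.0)[-1, 1]
addicted_controlled = simulate(nu=0.5)[-1, 1]
```

In this toy version, raising `nu` drains the susceptible pool below the endemic threshold, so the addicted compartment decays; the paper's exit-rate criterion asks how such a control shapes the time the trajectory spends inside a given domain.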
Given a solution of a controlled martingale problem, it is shown under general conditions that there exists a solution having Markov controls which has the same cost as the original solution. This result is then used to show that the original stochastic control problem is equivalent to a linear program over a space of measures under a variety of optimality criteria. Existence and characterization of optimal Markov controls then follows. An extension of Echeverria's theorem characterizing stationary distributions for (uncontrolled) Markov processes is obtained as a corollary. In particular, this extension covers diffusion processes with discontinuous drift and diffusion coefficients.
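Schematically (under assumptions beyond what the abstract states), the equivalent linear program over occupation measures on the state–action space $E \times U$, for a running cost $c$ and extended generator $A$ with domain $\mathcal{D}$, reads:

```latex
\[
\text{minimize } \int_{E \times U} c \, d\mu
\quad \text{over measures } \mu \ge 0
\quad \text{subject to} \quad
\int_{E \times U} A f \, d\mu = 0 \ \ \forall f \in \mathcal{D},
\qquad \mu(E \times U) = 1.
\]
```

The constraint $\int A f \, d\mu = 0$ is the Echeverria-type stationarity condition: with the control marginal fixed, it characterizes stationary distributions of the corresponding (uncontrolled) Markov process.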
It is proved that the finite dimensional marginal distribution of a controlled nondegenerate diffusion at a prescribed set of time instants can also be attained by using a control from a much smaller class of controls called ‘nearly Markov controls’.
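In symbols (a paraphrase, not the paper's notation): for any admissible control $u$ and prescribed time instants $0 \le t_1 < \dots < t_k$, there exists a nearly Markov control $\hat{u}$ with

```latex
\[
\operatorname{Law}\bigl(X^{u}_{t_1}, \dots, X^{u}_{t_k}\bigr)
= \operatorname{Law}\bigl(X^{\hat{u}}_{t_1}, \dots, X^{\hat{u}}_{t_k}\bigr).
\]
```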
This paper considers a non‐Markov control problem arising in a financial market where asset returns depend on hidden factors. The problem is non‐Markov because nonlinear filtering is required to make ...inference on these factors, and hence the associated dynamic program effectively takes the filtering distribution as one of its state variables. This is of significant difficulty because the filtering distribution is a stochastic probability measure of infinite dimension, and therefore the dynamic program has a state that cannot be differentiated in the traditional sense. This lack of differentiability means that the problem cannot be solved using a Hamilton–Jacobi–Bellman equation. This paper will show how the problem can be analyzed and solved using backward stochastic differential equations, with a key tool being the problem's dual formulation.
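For orientation, a backward stochastic differential equation of the generic form used in such approaches (the specific driver in the paper may differ) is

```latex
\[
Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\, ds - \int_t^T Z_s \, dW_s,
\qquad 0 \le t \le T,
\]
```

where the pair $(Y, Z)$ is adapted and $\xi$ is the terminal condition. Working backward from $\xi$ via the martingale representation sidesteps the need to differentiate a value function in the infinite-dimensional, measure-valued state.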
Studies of dynamic economic models often rely on each agent having a smooth value function and a well-defined optimal strategy. For time-homogeneous optimal control problems with a one-dimensional diffusion, we prove that the corresponding value function must be twice continuously differentiable under Lipschitz, growth, and non-vanishing-volatility conditions. Under similar conditions, the value function of any optimal stopping problem is shown to be (once) continuously differentiable. We also provide sufficient conditions, based on comparative statics and differential methods, for the existence of an optimal control in the sense of strong solutions. The results are applied to growth, experimentation, and dynamic contracting settings.
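For orientation (symbols are generic, not the paper's): with a one-dimensional controlled diffusion $dX_t = \mu(X_t, a_t)\,dt + \sigma(X_t, a_t)\,dW_t$, discount rate $r$, and flow payoff $\pi$, the HJB equation whose classical ($C^2$) solvability corresponds to the asserted smoothness of the value function is

```latex
\[
r V(x) = \sup_{a \in A} \Bigl\{ \pi(x, a) + \mu(x, a)\, V'(x)
  + \tfrac{1}{2}\, \sigma^2(x, a)\, V''(x) \Bigr\},
\]
```

and the non-vanishing-volatility condition $\sigma^2(x,a) \ge \underline{\sigma}^2 > 0$ is what rules out degeneracy of the second-order term.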
Risk-Sensitive Markov Control Processes
Shen, Yun; Stannat, Wilhelm; Obermayer, Klaus
SIAM Journal on Control and Optimization, 01/2013, Volume 51, Issue 5
Journal Article, Peer reviewed
We introduce a general framework for measuring risk in the context of Markov control processes with risk maps on general Borel spaces that generalize known concepts of risk measures in mathematical finance, operations research, and behavioral economics. Within the framework, applying weighted norm spaces to also incorporate unbounded costs, we study two types of infinite-horizon risk-sensitive criteria, discounted total risk and average risk, and solve the associated optimization problems by dynamic programming. For the discounted case, we propose a new discount scheme, which is different from the conventional form but consistent with the existing literature, while for the average risk criterion, we state Lyapunov-like stability conditions that generalize known conditions for Markov chains to ensure the existence of solutions to the optimality equation.
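Schematically (the paper's precise risk-map definitions on Borel spaces are more general), with a one-step risk map $\rho_{x,a}$ replacing the expectation, the discounted total-risk optimality equation takes the form

```latex
\[
V(x) = \inf_{a \in A(x)} \rho_{x,a}\bigl( c(x, a) + \beta\, V(X') \bigr),
\]
```

where $X'$ is the next state under the controlled transition kernel and $\beta \in (0,1)$ is the discount factor; for the choice $\rho_{x,a} = \mathbb{E}_{x,a}$ this reduces to the standard discounted Bellman equation.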