The Vanishing Discount Method for Stochastic Control: A Linear Programming Approach

Date
2023-08-01
Author
Hospital, Brian
Department
Mathematics
Advisor(s)
Richard H Stockbridge
Abstract
Under consideration are convergence results between the optimality criteria of two infinite-horizon stochastic control problems: the long-term average problem and the $\alpha$-discounted problem, where $\alpha \in (0,1]$ is a given discount rate. The objects under control are those stochastic processes that arise as (relaxed) solutions to a controlled martingale problem; such controlled processes, subject to a given budget constraint, comprise the feasible sets for the two stochastic control problems. In this dissertation, we define and characterize the expected occupation measures associated with each of these stochastic control problems, and then reformulate each problem as an equivalent linear program over a space of such measures. We then establish sufficient conditions under which the long-term average linear program can be ``asymptotically approximated'' by the $\alpha$-parameterized family of (suitably normalized) $\alpha$-discounted linear programs as $\alpha \downarrow 0$. This approach is known as the vanishing discount method. To state these conditions precisely, our analysis turns to set-valued mappings called correspondences. In particular, once the appropriate framework is established, our main results can be stated in a manner similar to that of Berge's Theorem.
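As a heuristic illustration of the vanishing discount relation described above (the notation here is illustrative and not taken from the dissertation), write $J_{\mathrm{avg}}$ and $J_\alpha$ for the optimal values of the two linear programs over expected occupation measures:

```latex
% Illustrative sketch; symbols c, mu, mu_alpha are assumed notation.
% Long-term average LP over expected (stationary) occupation measures mu:
J_{\mathrm{avg}} \;=\; \inf_{\mu} \int c \, d\mu .
% alpha-discounted LP over expected discounted occupation measures mu_alpha
% (which have total mass 1/alpha, so alpha * mu_alpha is a probability measure):
J_\alpha \;=\; \inf_{\mu_\alpha} \int c \, d\mu_\alpha .
% The vanishing discount method asserts that, under suitable conditions,
% the normalized discounted values converge to the average value:
\lim_{\alpha \downarrow 0} \, \alpha \, J_\alpha \;=\; J_{\mathrm{avg}} .
```

The normalization by $\alpha$ reflects the classical Abelian relationship between discounted and average criteria; the dissertation's contribution lies in sufficient conditions under which this limit holds at the level of the linear programs themselves.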
Subject
control
probability
stochastic
Permanent Link
http://digital.library.wisc.edu/1793/93309
Type
dissertation
