Continuous Time Markov Decision Processes

Continuous Time Markov Decision Processes
Author: Xianping Guo, Onésimo Hernández-Lerma
Publisher: Springer Science & Business Media
Total Pages: 240
Release: 2009-09-18
Genre: Mathematics
ISBN: 9783642025471

Download Continuous Time Markov Decision Processes Book in PDF, Epub and Kindle

Continuous-time Markov decision processes (MDPs), also known as controlled Markov chains, are used for modeling decision-making problems that arise in operations research (for instance, inventory, manufacturing, and queueing systems), computer science, communications engineering, control of populations (such as fisheries and epidemics), and management science, among many other fields. This volume provides a unified, systematic, self-contained presentation of recent developments on the theory and applications of continuous-time MDPs. The MDPs in this volume include most of the cases that arise in applications, because they allow unbounded transition and reward/cost rates. Much of the material appears for the first time in book form.
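
For reference, the continuous-time MDP model described here is conventionally specified by a denumerable state space, action sets, transition rates, and a reward (or cost) rate. A minimal sketch of the alpha-discounted criterion, in our own notation rather than the book's:

% Sketch of a continuous-time MDP (notation assumed, not quoted from the book).
% States i in S (denumerable), actions a in A(i), transition rates q(j|i,a) with
% q(j|i,a) >= 0 for j != i and sum_j q(j|i,a) = 0, reward rate r(i,a), discount rate alpha > 0.
\[
  V_\alpha(i,\pi) \;=\; \mathbb{E}_i^{\pi}\!\left[\int_0^{\infty} e^{-\alpha t}\, r\bigl(x(t),a(t)\bigr)\,dt\right],
  \qquad
  V_\alpha^{*}(i) \;=\; \sup_{\pi} V_\alpha(i,\pi).
\]

Allowing q(j|i,a) and r(i,a) to be unbounded, as this volume does, is what makes the analysis delicate: the usual uniformization route to an equivalent discrete-time model is no longer available.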

Selected Topics on Continuous Time Controlled Markov Chains and Markov Games

Selected Topics on Continuous Time Controlled Markov Chains and Markov Games
Author: Tomás Prieto-Rumeau, Onésimo Hernández-Lerma
Publisher: World Scientific
Total Pages: 292
Release: 2012-03-16
Genre: Mathematics
ISBN: 9781908977632

Download Selected Topics on Continuous Time Controlled Markov Chains and Markov Games Book in PDF, Epub and Kindle

This book concerns continuous-time controlled Markov chains, also known as continuous-time Markov decision processes. They form a class of stochastic control problems in which a single decision-maker wishes to optimize a given objective function. The book is also concerned with Markov games, where two decision-makers (or players) each try to optimize their own objective function. Both kinds of decision-making problems appear in a large number of applications in economics, operations research, engineering, and computer science, among other areas. An extensive, self-contained, up-to-date analysis of basic optimality criteria (such as discounted and average reward) and advanced optimality criteria (e.g., bias, overtaking, sensitive discount, and Blackwell optimality) is presented. Particular emphasis is placed on the application of the results: algorithmic and computational issues are discussed, and applications to population models and epidemic processes are shown. The book is addressed to students and researchers in the fields of stochastic control and stochastic games. It may also be of interest to undergraduate and beginning graduate students, because the reader is not assumed to have an advanced mathematical background: a working knowledge of calculus, linear algebra, probability, and continuous-time Markov chains should suffice to understand the contents of the book.

Contents: Introduction; Controlled Markov Chains; Basic Optimality Criteria; Policy Iteration and Approximation Theorems; Overtaking, Bias, and Variance Optimality; Sensitive Discount Optimality; Blackwell Optimality; Constrained Controlled Markov Chains; Applications; Zero-Sum Markov Games; Bias and Overtaking Equilibria for Markov Games.

Readership: Graduate students and researchers in the fields of stochastic control and stochastic analysis.

Keywords: Markov Decision Processes; Continuous-Time Controlled Markov Chains; Stochastic Dynamic Programming; Stochastic Games.

Key Features:
1. The book presents a reader-friendly, extensive, self-contained, and up-to-date analysis of advanced optimality criteria for continuous-time controlled Markov chains and Markov games. Most of the material is quite recent (published in high-impact journals during the last five years) and appears in book form for the first time.
2. The book introduces approximation theorems which, in particular, allow the reader to obtain numerical approximations of the solutions to several control problems of practical interest. To the best of our knowledge, this is the first time that such computational issues are studied for denumerable-state continuous-time controlled Markov chains. The book thus strikes a balance between theoretical results on the one hand and applications and computational issues on the other.
3. Books that analyze continuous-time controlled Markov chains usually restrict themselves to bounded transition and reward rates, in which case the problem can be reduced to a discrete-time model by the uniformization technique. Here, however, the transition and reward rates may be unbounded, so uniformization cannot be used; in models of practical interest the transition and reward rates are, typically, unbounded.

Reviews: "The book contains a large number of recent research results on CMCs and Markov games and puts them in perspective. It is written in a very conscious manner, contains detailed proofs of all main results, as well as extensive bibliographic remarks. The book is a very valuable piece of work for researchers on continuous-time CMCs and Markov games." (Zentralblatt MATH)
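
The third key feature above turns on the uniformization technique, so a small illustration may help. With bounded rates, choosing any Lambda at least as large as the largest exit rate makes P(a) = I + Q(a)/Lambda a stochastic matrix, and the alpha-discounted continuous-time problem becomes an equivalent discrete-time MDP with discount factor Lambda/(alpha+Lambda) and one-step reward r(i,a)/(alpha+Lambda). The Python sketch below applies this to a made-up admission-control queue; the model, rates, rewards, and buffer size are our own assumptions for illustration and do not come from the book.

# Illustrative sketch only: uniformization of a small continuous-time MDP
# with bounded transition rates, then value iteration on the equivalent
# discrete-time problem. The queue model and all numbers are assumptions.
import numpy as np

N = 10                  # buffer size (assumed)
lam, mu = 1.5, 2.0      # arrival and service rates (assumed)
alpha = 0.1             # continuous-time discount rate
S, A = N + 1, 2         # states 0..N jobs; actions: 0 = reject, 1 = accept

# Transition-rate matrices q[a] (rows sum to zero) and reward rates r[i, a].
q = np.zeros((A, S, S))
r = np.zeros((S, A))
for i in range(S):
    for a in range(A):
        if a == 1 and i < N:
            q[a, i, i + 1] += lam          # accepted arrivals
        if i > 0:
            q[a, i, i - 1] += mu           # service completions
        q[a, i, i] = -q[a, i].sum()        # conservative generator
        r[i, a] = 3.0 * a - 1.0 * i        # admission reward minus holding cost

# Uniformization: Lambda bounds the exit rates, P(a) = I + q(a)/Lambda is stochastic.
Lambda = max(-q[a, i, i] for a in range(A) for i in range(S))
P = np.stack([np.eye(S) + q[a] / Lambda for a in range(A)])
beta = Lambda / (alpha + Lambda)           # discrete-time discount factor
r_unif = r / (alpha + Lambda)              # rescaled one-step reward

# Value iteration on the uniformized discrete-time MDP.
V = np.zeros(S)
for _ in range(5000):
    Q_sa = r_unif + beta * np.einsum('aij,j->ia', P, V)   # (state, action) values
    V_new = Q_sa.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

print("discounted values:", np.round(V, 3))
print("accept arrivals in state i?", Q_sa.argmax(axis=1))

This equivalence is exactly what is lost when the transition rates are unbounded, the case the authors emphasize: no finite Lambda dominates the exit rates, so the continuous-time problem has to be analyzed directly.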

Markov Decision Processes

Markov Decision Processes
Author: Martin L. Puterman
Publisher: John Wiley & Sons
Total Pages: 684
Release: 2014-08-28
Genre: Mathematics
ISBN: 9781118625873

Download Markov Decision Processes Book in PDF, Epub and Kindle

The Wiley-Interscience Paperback Series consists of selected books that have been made more accessible to consumers in an effort to increase global appeal and general circulation. With these new unabridged softcover volumes, Wiley hopes to extend the lives of these works by making them available to future generations of statisticians, mathematicians, and scientists.

"This text is unique in bringing together so many results hitherto found only in part in other texts and papers. . . . The text is fairly self-contained, inclusive of some basic mathematical results needed, and provides a rich diet of examples, applications, and exercises. The bibliographical material at the end of each chapter is excellent, not only from a historical perspective, but because it is valuable for researchers in acquiring a good perspective of the MDP research potential." (Zentralblatt für Mathematik)

". . . it is of great value to advanced-level students, researchers, and professional practitioners of this field to have now a complete volume (with more than 600 pages) devoted to this topic. . . . Markov Decision Processes: Discrete Stochastic Dynamic Programming represents an up-to-date, unified, and rigorous treatment of theoretical and computational aspects of discrete-time Markov decision processes." (Journal of the American Statistical Association)

Continuous Time Markov Decision Processes

Continuous Time Markov Decision Processes
Author: Alexey Piunovskiy, Yi Zhang
Publisher: Springer Nature
Total Pages: 605
Release: 2020-11-09
Genre: Mathematics
ISBN: 9783030549879

Download Continuous Time Markov Decision Processes Book in PDF, Epub and Kindle

This book offers a systematic and rigorous treatment of continuous-time Markov decision processes, covering both theory and possible applications to queueing systems, epidemiology, finance, and other fields. Unlike most books on the subject, much attention is paid to problems with functional constraints and to the realizability of strategies. Three major methods of investigation are presented, based on dynamic programming, linear programming, and reduction to discrete-time problems. Although the main focus is on models with total (discounted or undiscounted) cost criteria, models with average cost criteria and with impulsive controls are also discussed in depth. The book is self-contained: a separate chapter is devoted to Markov pure jump processes, and the appendices collect the requisite background on real analysis and applied probability. All the statements in the main text are proved in detail. Researchers and graduate students in applied probability, operational research, statistics and engineering will find this monograph interesting, useful and valuable.
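
For orientation, the dynamic programming method mentioned in this blurb rests on an optimality (Bellman) equation for the value function; a standard sketch for the alpha-discounted cost model, in our notation and leaving aside the technical conditions required when the rates are unbounded:

\[
  \alpha V^{*}(i) \;=\; \inf_{a \in A(i)} \Bigl[\, c(i,a) \;+\; \sum_{j \in S} q(j \mid i,a)\, V^{*}(j) \,\Bigr], \qquad i \in S,
\]

where c(i,a) is the cost rate and q(j|i,a) are the transition rates. Roughly speaking, the linear programming method recasts the same problem over occupation measures, and the reduction method transforms it into a discrete-time problem observed at the jump epochs of the process.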

Markov Decision Processes with Their Applications

Markov Decision Processes with Their Applications
Author: Qiying Hu, Wuyi Yue
Publisher: Springer Science & Business Media
Total Pages: 305
Release: 2007-09-14
Genre: Business & Economics
ISBN: 9780387369518

Download Markov Decision Processes with Their Applications Book in PDF, Epub and Kindle

Written by two leading researchers in the Far East, this text examines Markov decision processes, also called stochastic dynamic programming, and presents fresh applications in the optimal control of discrete event systems, optimal replacement, and optimal allocation in sequential online auctions.

Continuous Time Markov Chains and Applications

Continuous Time Markov Chains and Applications
Author: G. George Yin, Qing Zhang
Publisher: Springer Science & Business Media
Total Pages: 442
Release: 2012-11-14
Genre: Mathematics
ISBN: 9781461443469

Download Continuous Time Markov Chains and Applications Book in PDF, Epub and Kindle

This book gives a systematic treatment of singularly perturbed systems that naturally arise in control and optimization, queueing networks, manufacturing systems, and financial engineering. It presents results on asymptotic expansions of solutions of Kolmogorov forward and backward equations, properties of functional occupation measures, exponential upper bounds, and functional limit results for Markov chains with weak and strong interactions. To bridge the gap between theory and applications, a large portion of the book is devoted to applications in controlled dynamic systems, production planning, and numerical methods for controlled Markovian systems with large-scale and complex structures arising in real-world problems. This second edition has been updated throughout and includes two new chapters on asymptotic expansions of solutions for backward equations and hybrid LQG problems. The chapters on analytic and probabilistic properties of two-time-scale Markov chains have been almost completely rewritten, and the notation has been streamlined and simplified. This book is written for applied mathematicians, engineers, operations researchers, and applied scientists. Selected material from the book can also be used for a one-semester advanced graduate-level course in applied probability and stochastic processes.
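
For context, the Kolmogorov equations mentioned here govern the transition matrices P(s,t) of a (possibly nonhomogeneous) Markov chain with generator Q(t); a standard sketch in our notation:

\[
  \frac{\partial}{\partial t} P(s,t) \;=\; P(s,t)\,Q(t) \quad \text{(forward)},
  \qquad
  \frac{\partial}{\partial s} P(s,t) \;=\; -\,Q(s)\,P(s,t) \quad \text{(backward)},
  \qquad P(s,s) = I.
\]

The two-time-scale (singularly perturbed) setting treated in the book typically takes a generator of the form

\[
  Q^{\varepsilon}(t) \;=\; \frac{\widetilde{Q}(t)}{\varepsilon} \;+\; \widehat{Q}(t), \qquad \varepsilon \downarrow 0,
\]

where the fast part (scaled by 1/epsilon) drives transitions within groups of strongly interacting states and the slow part drives transitions between them; the asymptotic expansions mentioned above describe the corresponding probability distributions as epsilon becomes small.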