Controlled Markov Chains, Graphs and Hamiltonicity

Author: Jerzy A. Filar
Publisher: Now Publishers Inc
Total Pages: 95
Release: 2007
Genre: Mathematics
ISBN: 9781601980885

Download Controlled Markov Chains Graphs and Hamiltonicity Book in PDF, Epub and Kindle

"Controlled Markov Chains, Graphs & Hamiltonicity" summarizes a line of research that maps certain classical problems of discrete mathematics, such as the Hamiltonian Cycle and the Travelling Salesman problems, into convex domains where continuum analysis can be carried out.

Hamiltonian Cycle Problem and Markov Chains

Author: Vivek S. Borkar,Vladimir Ejov,Jerzy A. Filar,Giang T. Nguyen
Publisher: Springer Science & Business Media
Total Pages: 205
Release: 2012-04-23
Genre: Business & Economics
ISBN: 9781461432326

Download Hamiltonian Cycle Problem and Markov Chains Book in PDF, Epub and Kindle

This research monograph summarizes a line of research that maps certain classical problems of discrete mathematics and operations research, such as the Hamiltonian Cycle and the Travelling Salesman Problems, into convex domains where continuum analysis can be carried out. Arguably, the inherent difficulty of these now classical problems stems precisely from the discrete nature of the domains in which they are posed. The convexification of domains underpinning these results is achieved by assigning a probabilistic interpretation to key elements of the original deterministic problems. In particular, the approaches summarized here build on a technique that embeds the Hamiltonian Cycle and Travelling Salesman Problems in a structured singularly perturbed Markov decision process. The unifying idea is to interpret subgraphs traced out by deterministic policies (including Hamiltonian cycles, if any) as extreme points of a convex polyhedron in a space filled with randomized policies.

This innovative approach has now evolved to the point where there are many results, both theoretical and algorithmic, that exploit the nexus between graph-theoretic structures and both probabilistic and algebraic entities of related Markov chains. The latter include moments of first return times, limiting frequencies of visits to nodes, and the spectra of certain matrices traditionally associated with the analysis of Markov chains. However, these results and algorithms are dispersed over many research papers appearing in journals catering to disparate audiences. As a result, the published manuscripts are often written in a very terse manner and use disparate notation, making it difficult for new researchers to build on the many reported advances. Hence the main purpose of this book is to present a concise and yet easily accessible synthesis of the majority of the theoretical and algorithmic results obtained so far.
In addition, the book discusses numerous open questions and problems that arise from this body of work and have yet to be fully solved. The approach casts the Hamiltonian Cycle Problem in a mathematical framework that permits analytical concepts and techniques, not hitherto used in this context, to be brought to bear, further clarifying both the underlying difficulty (the NP-completeness) of this problem and the relative exceptionality of truly difficult instances. Finally, the material is arranged so that the introductory chapters require very little mathematical background and discuss instances of graphs with interesting structures that motivated much of the research on this topic. More difficult results are introduced later and are illustrated with numerous examples.
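The unifying idea above, interpreting deterministic policies as extreme points of a polytope of randomized policies, can be illustrated on a toy graph. The following is a hedged sketch, not code from the monograph; the 4-node adjacency list is an assumed example. Each deterministic policy picks one outgoing arc per node, and a policy is Hamiltonian exactly when following it from node 0 traces a cycle through every node.

```python
from itertools import product

# assumed toy example: out-neighbours of each node in a small directed graph
adj = {0: [1, 3], 1: [2, 0], 2: [3, 1], 3: [0, 2]}

def is_hamiltonian(policy):
    """Trace the deterministic policy from node 0; it is Hamiltonian iff
    the walk returns to 0 after visiting every node exactly once."""
    seen, node = set(), 0
    for _ in range(len(policy)):
        seen.add(node)
        node = policy[node]
    return node == 0 and len(seen) == len(policy)

# enumerate all deterministic policies: these are the extreme points of the
# polytope of randomized policies for this graph
policies = [dict(zip(adj, choice)) for choice in product(*adj.values())]
ham = [p for p in policies if is_hamiltonian(p)]
print(len(policies), len(ham))  # total policies vs. Hamiltonian ones
```

On this toy graph there are 2^4 = 16 deterministic policies, of which exactly two trace Hamiltonian cycles (one per orientation); the book's approach studies how such policies sit inside the convex set of randomized policies.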

Markov Processes and Controlled Markov Chains

Author: Zhenting Hou,Jerzy A. Filar,Anyue Chen
Publisher: Springer Science & Business Media
Total Pages: 501
Release: 2013-12-01
Genre: Mathematics
ISBN: 9781461302650

Download Markov Processes and Controlled Markov Chains Book in PDF, Epub and Kindle

The general theory of stochastic processes and the more specialized theory of Markov processes evolved enormously in the second half of the last century. In parallel, the theory of controlled Markov chains (or Markov decision processes) was being pioneered by control engineers and operations researchers. Researchers in Markov processes and controlled Markov chains have long been aware of the synergies between these two subject areas. However, this may be the first volume dedicated to highlighting these synergies and, almost certainly, it is the first volume that emphasizes the contributions of the vibrant and growing Chinese school of probability. The chapters in this book reflect both the maturity and the vitality of modern-day Markov processes and controlled Markov chains. They also provide an opportunity to trace the connections that have emerged between the work done by members of the Chinese school of probability and the work done by European, US, Central and South American, and Asian scholars.

Analytic Perturbation Theory and Its Applications

Author: Konstantin E. Avrachenkov,Jerzy A. Filar,Phil G. Howlett
Publisher: SIAM
Total Pages: 384
Release: 2013-12-11
Genre: Mathematics
ISBN: 9781611973136

Download Analytic Perturbation Theory and Its Applications Book in PDF, Epub and Kindle

Mathematical models are often used to describe complex phenomena such as climate change dynamics, stock market fluctuations, and the Internet. These models typically depend on estimated values of key parameters that determine system behavior. Hence it is important to know what happens when these values are changed. The study of single-parameter deviations provides a natural starting point for this analysis in many special settings in the sciences, engineering, and economics. The difference between the actual and nominal values of the perturbation parameter is small but unknown, and it is important to understand the asymptotic behavior of the system as the perturbation tends to zero. This is particularly true in applications with an apparent discontinuity in the limiting behavior: the so-called singularly perturbed problems. Analytic Perturbation Theory and Its Applications includes a comprehensive treatment of analytic perturbations of matrices, linear operators, and polynomial systems, particularly the singular perturbation of inverses and generalized inverses. It also offers original applications in Markov chains, Markov decision processes, and optimization, including applications to Google PageRank and the Hamiltonian cycle problem as well as input retrieval in linear control systems, and it provides a problem section in every chapter to aid in course preparation.
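The singular-perturbation phenomenon described above can be seen in a PageRank-style example. This is an illustrative sketch under assumed data (the toy transition matrix and the uniform-jump perturbation are not taken from the book): a reducible chain has no unique stationary distribution, but blending in a small uniform jump with weight eps restores uniqueness, and the behavior as eps tends to zero is the object of interest.

```python
def stationary(P, eps, iters=10000):
    """Power iteration for pi = pi * G with G = (1-eps)*P + eps*U,
    where U is the uniform (all entries 1/n) matrix."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [(1 - eps) * sum(pi[i] * P[i][j] for i in range(n)) + eps / n
              for j in range(n)]
    return pi

# assumed toy chain: two closed classes, so P alone is reducible and its
# stationary distribution is not unique
P = [[0.9, 0.1, 0.0, 0.0],
     [0.5, 0.5, 0.0, 0.0],
     [0.0, 0.0, 0.5, 0.5],
     [0.0, 0.0, 0.5, 0.5]]

# for every eps > 0 the perturbed chain has a unique stationary distribution;
# shrinking eps reveals the limiting behavior
for eps in (0.1, 0.01, 0.001):
    pi = stationary(P, eps)
    print(eps, [round(x, 3) for x in pi])
```

The point of the exercise is that the limit as eps tends to zero is a well-defined distribution even though the unperturbed chain has infinitely many stationary distributions; this discontinuity between the perturbed family and the eps = 0 problem is exactly what "singularly perturbed" means.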

Selected Topics on Continuous Time Controlled Markov Chains and Markov Games

Author: Tomás Prieto-Rumeau,Onésimo Hernández-Lerma
Publisher: World Scientific
Total Pages: 292
Release: 2012-03-16
Genre: Mathematics
ISBN: 9781908977632

Download Selected Topics on Continuous Time Controlled Markov Chains and Markov Games Book in PDF, Epub and Kindle

This book concerns continuous-time controlled Markov chains, also known as continuous-time Markov decision processes. They form a class of stochastic control problems in which a single decision-maker wishes to optimize a given objective function. This book is also concerned with Markov games, where two decision-makers (or players) each try to optimize their own objective function. Both decision-making processes appear in a large number of applications in economics, operations research, engineering, and computer science, among other areas. An extensive, self-contained, up-to-date analysis of basic optimality criteria (such as discounted and average reward) and advanced optimality criteria (e.g., bias, overtaking, sensitive discount, and Blackwell optimality) is presented. Particular emphasis is placed on the application of these results: algorithmic and computational issues are discussed, and applications to population models and epidemic processes are shown. This book is addressed to students and researchers in the fields of stochastic control and stochastic games. Moreover, it may also be of interest to undergraduate and beginning graduate students, since no advanced mathematical background is assumed: a working knowledge of calculus, linear algebra, probability, and continuous-time Markov chains should suffice to understand the contents of the book.

Contents: Introduction; Controlled Markov Chains; Basic Optimality Criteria; Policy Iteration and Approximation Theorems; Overtaking, Bias, and Variance Optimality; Sensitive Discount Optimality; Blackwell Optimality; Constrained Controlled Markov Chains; Applications; Zero-Sum Markov Games; Bias and Overtaking Equilibria for Markov Games.

Readership: Graduate students and researchers in the fields of stochastic control and stochastic analysis.
Keywords: Markov Decision Processes; Continuous-Time Controlled Markov Chains; Stochastic Dynamic Programming; Stochastic Games.

Key Features:
- This book presents a reader-friendly, extensive, self-contained, and up-to-date analysis of advanced optimality criteria for continuous-time controlled Markov chains and Markov games. Most of the material is quite recent (published in high-impact journals during the last five years), and it appears in book form for the first time.
- This book introduces approximation theorems which, in particular, allow the reader to obtain numerical approximations of the solution to several control problems of practical interest. To the best of our knowledge, this is the first time that such computational issues are studied for denumerable-state continuous-time controlled Markov chains. Hence, the book has an adequate balance between theoretical results on the one hand and applications and computational issues on the other.
- Books that analyze continuous-time controlled Markov chains usually restrict themselves to the case of bounded transition and reward rates, which can be reduced to discrete-time models by using the uniformization technique. In our case, however, the transition and reward rates may be unbounded, and so the uniformization technique cannot be used. Note that in models of practical interest the transition and reward rates are typically unbounded.

Reviews: "The book contains a large number of recent research results on CMCs and Markov games and puts them in perspective. It is written in a very conscious manner, contains detailed proofs of all main results, as well as extensive bibliographic remarks. The book is a very valuable piece of work for researchers on continuous-time CMCs and Markov games." (Zentralblatt MATH)
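The uniformization technique mentioned in the description, which reduces a bounded-rate continuous-time chain to a discrete-time one, can be sketched in a few lines. This is an illustrative example with an assumed toy generator, not material from the book: given a generator Q whose exit rates are bounded by Lam, the matrix P = I + Q/Lam is a proper discrete-time transition matrix.

```python
def uniformize(Q, Lam):
    """Uniformization: P = I + Q / Lam, valid when Lam >= max_i |Q[i][i]|.
    The resulting P is a stochastic matrix of the discrete-time chain
    embedded at the jump times of a rate-Lam Poisson clock."""
    n = len(Q)
    return [[(1.0 if i == j else 0.0) + Q[i][j] / Lam for j in range(n)]
            for i in range(n)]

# assumed toy 3-state generator: off-diagonal rates, rows summing to zero
Q = [[-2.0,  1.0,  1.0],
     [ 3.0, -4.0,  1.0],
     [ 0.0,  2.0, -2.0]]

Lam = 4.0  # at least the largest exit rate (here 4, from state 1)
P = uniformize(Q, Lam)
print(P)  # nonnegative entries, each row summing to one
```

When rates are unbounded no finite Lam works, which is why the unbounded-rate models treated in this book cannot be handled this way.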

Controlled Markov Processes

Author: Evgeniĭ Borisovich Dynkin,Alexander Adolph Yushkevich
Publisher: Springer
Total Pages: 320
Release: 1979
Genre: Mathematics
ISBN: UOM:39015013837011

Download Controlled Markov Processes Book in PDF, Epub and Kindle

This book is devoted to the systematic exposition of the contemporary theory of controlled Markov processes with discrete time parameter or, in another terminology, multistage Markovian decision processes. We discuss the applications of this theory to various concrete problems. Particular attention is paid to mathematical models of economic planning that take account of stochastic factors. The authors strove to construct the exposition in such a way that a reader interested in the applications can get through the book with a minimal mathematical apparatus. On the other hand, a mathematician will find, in the appropriate chapters, a rigorous theory of general control models, based on advanced measure theory, analytic set theory, measurable selection theorems, and so forth. We have abstained from the manner of presentation of many mathematical monographs, in which one presents immediately the most general situation and only then discusses simpler special cases and examples. Wishing to separate out difficulties, we introduce new concepts and ideas in the simplest setting, where they already begin to work. Thus, before considering control problems on an infinite time interval, we investigate in detail the case of the finite interval. Here we first study in detail models with finite state and action spaces, a case not requiring a departure from the realm of elementary mathematics, and at the same time illustrating the most important principles of the theory.

Markov Chains and Stochastic Stability

Author: Sean Meyn,Richard L. Tweedie
Publisher: Cambridge University Press
Total Pages: 595
Release: 2009-04-02
Genre: Mathematics
ISBN: 9781139477970

Download Markov Chains and Stochastic Stability Book in PDF, Epub and Kindle

Meyn and Tweedie is back! The bible on Markov chains in general state spaces has been brought up to date to reflect developments in the field since 1996 - many of them sparked by publication of the first edition. The pursuit of more efficient simulation algorithms for complex Markovian models, or algorithms for computation of optimal policies for controlled Markov models, has opened new directions for research on Markov chains. As a result, new applications have emerged across a wide range of topics including optimisation, statistics, and economics. New commentary and an epilogue by Sean Meyn summarise recent developments and references have been fully updated. This second edition reflects the same discipline and style that marked out the original and helped it to become a classic: proofs are rigorous and concise, the range of applications is broad and knowledgeable, and key ideas are accessible to practitioners with limited mathematical background.

Discrete Time Markov Chains

Author: George Yin,Qing Zhang
Publisher: Springer Science & Business Media
Total Pages: 372
Release: 2005
Genre: Business & Economics
ISBN: 038721948X

Download Discrete Time Markov Chains Book in PDF, Epub and Kindle

Focusing on discrete-time Markov chains, the contents of this book are an outgrowth of some of the authors' recent research. The motivation stems from existing and emerging applications in optimization and control of complex hybrid Markovian systems in manufacturing, wireless communication, and financial engineering. Much effort in this book is devoted to designing system models arising from these applications, analyzing them via analytic and probabilistic techniques, and developing feasible computational algorithms that reduce the inherent complexity. This book presents results including asymptotic expansions of probability vectors, structural properties of occupation measures, exponential bounds, aggregation and decomposition and the associated limit processes, and the interface of discrete-time and continuous-time systems. One of its salient features is a diverse range of applications to filtering, estimation, control, optimization, Markov decision processes, and financial engineering. This book will be an important reference for researchers in the areas of applied probability, control theory, and operations research, as well as for practitioners who use optimization techniques. Part of the book can also be used in a graduate course on applied probability, stochastic processes, and applications.