ISBN-13: 9788876422423 / English / Paperback / 1996 / 185 pp.
The mathematical theory of discrete-time decision processes, also known as stochastic control, rests on two major ideas: backward induction and conditioning. It has a large number of applications in almost all branches of the natural sciences. The aim of these notes is to give a self-contained introduction to this theory and its applications. Our intention was to give a global and mathematically precise picture of the subject and to present well-motivated examples. We cover systems with complete or partial information as well as with complete or partial observation. We have tried to present in a unified way several topics, such as dynamic programming equations, stopping problems, stabilization, the Kalman-Bucy filter, the linear regulator, adaptive control and option pricing. The notes discuss a large variety of models rather than concentrating on general existence theorems.
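To illustrate the first of the two ideas named above, here is a minimal sketch (not taken from the book) of backward induction for a finite-horizon stochastic control problem with finitely many states and actions. All names (reward, transition, horizon, backward_induction) and the numbers in the usage example are hypothetical placeholders, assuming a model where transition[a, x, y] is the probability of moving from state x to state y under action a.

# Illustrative sketch: backward induction / dynamic programming equation
#   V_n(x) = max_a [ r(x, a) + E[ V_{n+1}(X_{n+1}) | X_n = x, action a ] ]
# solved backwards in time from the terminal condition V_N = 0.
import numpy as np

def backward_induction(reward, transition, horizon):
    """reward:     (n_states, n_actions) array, one-step reward r(x, a)
    transition: (n_actions, n_states, n_states) array of transition probabilities
    horizon:    number of decision stages N
    Returns the value functions V_0, ..., V_N and the optimal stage policies."""
    n_states, n_actions = reward.shape
    values = np.zeros((horizon + 1, n_states))    # terminal value V_N(x) = 0
    policies = np.zeros((horizon, n_states), dtype=int)
    for n in range(horizon - 1, -1, -1):
        # Q[x, a] = r(x, a) + sum_y P(y | x, a) * V_{n+1}(y)
        q = reward + np.einsum('axy,y->xa', transition, values[n + 1])
        values[n] = q.max(axis=1)                 # optimal value at stage n
        policies[n] = q.argmax(axis=1)            # maximizing action at stage n
    return values, policies

if __name__ == "__main__":
    # Tiny made-up example: 2 states, 2 actions, 3 stages.
    rng = np.random.default_rng(0)
    r = rng.random((2, 2))
    p = rng.random((2, 2, 2))
    p /= p.sum(axis=2, keepdims=True)             # normalize to probability rows
    V, pi = backward_induction(r, p, horizon=3)
    print("V_0 =", V[0], "first-stage policy =", pi[0])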