Download Optimal Control Problems Arising in Mathematical Economics free in PDF and EPUB format. You can also read Optimal Control Problems Arising in Mathematical Economics online and write a review.

This book is devoted to the study of two large classes of discrete-time optimal control problems arising in mathematical economics. Nonautonomous optimal control problems of the first class are determined by a sequence of objective functions and a sequence of constraint maps. They correspond to a general model of economic growth. We are interested in turnpike properties of approximate solutions and in the stability of the turnpike phenomenon under small perturbations of objective functions and constraint maps. The second class of autonomous optimal control problems corresponds to another general class of models of economic dynamics which includes the Robinson–Solow–Srinivasan model as a particular case. In Chap. 1 we discuss turnpike properties for a large class of discrete-time optimal control problems studied in the literature and for the Robinson–Solow–Srinivasan model. In Chap. 2 we introduce the first class of optimal control problems and study its turnpike property. This class of problems is also discussed in Chaps. 3–6. In Chap. 3 we study the stability of the turnpike phenomenon under small perturbations of the objective functions. Analogous results for problems with discounting are considered in Chap. 4. In Chap. 5 we study the stability of the turnpike phenomenon under small perturbations of the objective functions and the constraint maps. Analogous results for problems with discounting are established in Chap. 6. The results of Chaps. 5 and 6 are new. The second class of problems is studied in Chaps. 7–9. In Chap. 7 we study the turnpike properties. The stability of the turnpike phenomenon under small perturbations of the objective functions is established in Chap. 8. In Chap. 9 we establish the stability of the turnpike phenomenon under small perturbations of the objective functions and the constraint maps. The results of Chaps. 8 and 9 are new. In Chap. 10 we study optimal control problems related to a model of knowledge-based endogenous economic growth, show the existence of trajectories of unbounded economic growth, and provide estimates for the growth rate.
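The turnpike property referred to throughout can be stated informally as follows (a generic formulation in the style of the turnpike literature, not a quotation from the book): there is a distinguished state $\bar{x}$ such that approximate optimal trajectories spend all but a horizon-independent number of periods near it.

```latex
% Generic turnpike property: there exist a state \bar{x} and, for every
% \epsilon > 0, an integer K (independent of the horizon T) such that
% every approximate solution (x_t)_{t=0}^{T} satisfies
\| x_t - \bar{x} \| \le \epsilon
  \quad \text{for all } t \in \{K, K+1, \dots, T - K\}.
```

Only the first and last $K$ periods may deviate from the turnpike $\bar{x}$, no matter how long the horizon $T$ is.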
A rigorous introduction to optimal control theory, with an emphasis on applications in economics. This book bridges optimal control theory and economics, discussing ordinary differential equations, optimal control, game theory, and mechanism design in one volume. Technically rigorous and largely self-contained, it provides an introduction to the use of optimal control theory for deterministic continuous-time systems in economics. The theory of ordinary differential equations (ODEs) is the backbone of the theory developed in the book, and Chapter 2 offers a detailed review of basic concepts in the theory of ODEs, including the solution of systems of linear ODEs, state-space analysis, potential functions, and stability analysis. Following this, the book covers the main results of optimal control theory, in particular necessary and sufficient optimality conditions; game theory, with an emphasis on differential games; and the application of control-theoretic concepts to the design of economic mechanisms. Appendixes provide a mathematical review and full solutions to all end-of-chapter problems. The material is presented at three levels: single-person decision making; games, in which a group of decision makers interact strategically; and mechanism design, which is concerned with a designer's creation of an environment in which players interact to maximize the designer's objective. The book focuses on applications; the problems are an integral part of the text. It is intended for use as a textbook or reference for graduate students, teachers, and researchers interested in applications of control theory beyond its classical use in economic growth. The book will also appeal to readers interested in a modeling approach to certain practical problems involving dynamic continuous-time models.
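The stability analysis reviewed in Chapter 2 reduces, for a linear system $\dot{x} = Ax$, to checking the eigenvalues of $A$. A minimal sketch of this criterion (illustrative code, not taken from the book; the matrices are made-up examples):

```python
import numpy as np

def is_asymptotically_stable(A):
    """A linear system x' = A x is asymptotically stable iff every
    eigenvalue of A has strictly negative real part."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

# A damped oscillator x'' + 2x' + 5x = 0 in first-order form:
# eigenvalues -1 +/- 2i, so the origin is asymptotically stable.
A_stable = np.array([[0.0, 1.0],
                     [-5.0, -2.0]])

# A saddle point: eigenvalues +1 and -1, so the origin is unstable.
A_saddle = np.array([[1.0, 0.0],
                     [0.0, -1.0]])

print(is_asymptotically_stable(A_stable))  # True
print(is_asymptotically_stable(A_saddle))  # False
```

The same eigenvalue test underlies the phase-diagram and saddle-path arguments that recur in continuous-time economic models.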
Since its initial publication, this text has defined courses in dynamic optimization taught to economics and management science students. The two-part treatment covers the calculus of variations and optimal control. 1998 edition.
As is well known, Pontryagin's maximum principle and Bellman's dynamic programming are the two principal and most commonly used approaches in solving stochastic optimal control problems. An interesting phenomenon one can observe from the literature is that these two approaches have been developed separately and independently. Since both methods are used to investigate the same problems, a natural question one will ask is the following: (Q) What is the relationship between the maximum principle and dynamic programming in stochastic optimal controls? There was some research (prior to the 1980s) on the relationship between these two. Nevertheless, the results were usually stated in heuristic terms and proved under rather restrictive assumptions, which were not satisfied in most cases. In the statement of a Pontryagin-type maximum principle there is an adjoint equation, which is an ordinary differential equation (ODE) in the (finite-dimensional) deterministic case and a stochastic differential equation (SDE) in the stochastic case. The system consisting of the adjoint equation, the original state equation, and the maximum condition is referred to as an (extended) Hamiltonian system. On the other hand, in Bellman's dynamic programming, there is a partial differential equation (PDE), of first order in the (finite-dimensional) deterministic case and of second order in the stochastic case. This is known as a Hamilton-Jacobi-Bellman (HJB) equation.
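In the deterministic finite-dimensional case described above, the two objects can be written down explicitly; the following is a standard textbook formulation (sign conventions vary across sources) for the state equation $\dot{x} = f(t,x,u)$ and payoff $\int_0^T L(t,x,u)\,dt + h(x(T))$:

```latex
% Hamiltonian and extended Hamiltonian system (maximum principle):
H(t,x,u,p) = \langle p, f(t,x,u) \rangle + L(t,x,u), \qquad
\dot{x}^{*} = f(t,x^{*},u^{*}), \quad
\dot{p} = -\frac{\partial H}{\partial x}(t,x^{*},u^{*},p), \quad
u^{*}(t) \in \arg\max_{u} H\bigl(t,x^{*}(t),u,p(t)\bigr).

% Hamilton--Jacobi--Bellman equation (dynamic programming):
-\frac{\partial V}{\partial t}(t,x)
  = \sup_{u}\Bigl\{ L(t,x,u)
      + \bigl\langle \nabla_{x} V(t,x),\, f(t,x,u) \bigr\rangle \Bigr\},
\qquad V(T,x) = h(x).
```

In the stochastic case an additional second-order term of the form $\tfrac12\operatorname{tr}\bigl(\sigma\sigma^{\top} D_x^2 V\bigr)$ appears inside the supremum, which is why the HJB equation becomes second order there.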
This book is devoted to the study of optimal control problems arising in forest management, an important and fascinating topic in mathematical economics studied by many researchers over the years. The volume studies the forest management problem by analyzing a class of optimal control problems that contains it and showing the existence of optimal solutions over infinite horizon. It also studies the structure of approximate solutions on finite intervals and their turnpike properties, as well as the stability of the turnpike phenomenon and the structure of approximate solutions on finite intervals in the regions close to the end points. The book is intended for mathematicians interested in optimization theory, optimal control, and their applications to economic theory.
Optimal control methods are used to determine optimal ways to control a dynamic system. The theoretical work in this field serves as a foundation for the book, which the authors have applied to business management problems developed from their research and classroom instruction. Sethi and Thompson have provided the management science and economics communities with a thoroughly revised edition of their classic text on Optimal Control Theory. The new edition has been carefully refined, with close attention to the presentation of the text and graphic material. Chapters cover a range of topics including finance, production and inventory problems, marketing problems, machine maintenance and replacement, problems of optimal consumption of natural resources, and applications of control theory to economics. The book contains new results that were not available when the first edition was published, as well as an expansion of the material on stochastic optimal control theory.
Providing an introduction to stochastic optimal control in infinite dimension, this book gives a complete account of the theory of second-order HJB equations in infinite-dimensional Hilbert spaces, focusing on its applicability to associated stochastic optimal control problems. It features a general introduction to optimal stochastic control, including basic results (e.g. the dynamic programming principle) with proofs, and provides examples of applications. A complete and up-to-date exposition of the existing theory of viscosity solutions and regular solutions of second-order HJB equations in Hilbert spaces is given, together with an extensive survey of other methods, with a full bibliography. In particular, Chapter 6, written by M. Fuhrman and G. Tessitore, surveys the theory of regular solutions of HJB equations arising in infinite-dimensional stochastic control, via BSDEs. The book is of interest to both pure and applied researchers working in the control theory of stochastic PDEs, and in PDEs in infinite dimension. Readers from other fields who want to learn the basic theory will also find it useful. The prerequisites are: standard functional analysis, the theory of semigroups of operators and its use in the study of PDEs, some knowledge of the dynamic programming approach to stochastic optimal control problems in finite dimension, and the basics of stochastic analysis and stochastic equations in infinite-dimensional spaces.
This book presents applications of geometric optimal control to real life biomedical problems with an emphasis on cancer treatments. A number of mathematical models for both classical and novel cancer treatments are presented as optimal control problems with the goal of constructing optimal protocols. The power of geometric methods is illustrated with fully worked out complete global solutions to these mathematically challenging problems. Elaborate constructions of optimal controls and corresponding system responses provide great examples of applications of the tools of geometric optimal control and the outcomes aid the design of simpler, practically realizable suboptimal protocols. The book blends mathematical rigor with practically important topics in an easily readable tutorial style. Graduate students and researchers in science and engineering, particularly biomathematics and more mathematical aspects of biomedical engineering, would find this book particularly useful.
This book is devoted to the study of classes of optimal control problems arising in economic growth theory, related to the Robinson–Solow–Srinivasan (RSS) model. The model was introduced in the 1960s by economists Joan Robinson, Robert Solow, and Thirukodikaval Nilakanta Srinivasan and was further studied by Robinson, Nobuo Okishio, and Joseph Stiglitz. Since then, the study of the RSS model has become an important element of economic dynamics. In this book, two large general classes of optimal control problems, both of them containing the RSS model as a particular case, are presented for study. For these two classes, a turnpike theory is developed and the existence of solutions to the corresponding infinite horizon optimal control problems is established. The book contains 9 chapters. Chapter 1 discusses turnpike properties for some optimal control problems that are known in the literature, including problems corresponding to the RSS model. The first class of optimal control problems is studied in Chaps. 2–6. In Chap. 2, infinite horizon optimal control problems with nonautonomous optimality criteria are considered. The utility functions, which determine the optimality criterion, are nonconcave. This class of models contains the RSS model as a particular case. The stability of the turnpike phenomenon of the one-dimensional nonautonomous concave RSS model is analyzed in Chap. 3. The following chapter takes up the study of a class of autonomous nonconcave optimal control problems, a subclass of problems considered in Chap. 2. The equivalence of the turnpike property and the asymptotic turnpike property, as well as the stability of the turnpike phenomenon, is established. Turnpike conditions and the stability of the turnpike phenomenon for nonautonomous problems are examined in Chap. 5, with Chap. 6 devoted to the study of the turnpike properties for the one-dimensional nonautonomous nonconcave RSS model. 
Here, too, the utility functions that determine the optimality criterion are nonconcave. The class of RSS models is identified with a complete metric space of utility functions. Using the Baire category approach, the turnpike phenomenon is shown to hold for most of the models. Chapter 7 begins the study of the second large class of autonomous optimal control problems, and turnpike conditions are established. The stability of the turnpike phenomenon for this class of problems is investigated further in Chaps. 8 and 9.
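The turnpike phenomenon discussed above is easy to observe numerically. The sketch below uses a toy one-sector growth model with log utility and full depreciation rather than the RSS model itself (all parameter values are illustrative assumptions): backward induction over a capital grid produces an optimal path that spends the middle of the horizon near the golden-rule stock $k^{*} = \alpha^{1/(1-\alpha)}$, regardless of the endpoints.

```python
import numpy as np

# Toy finite-horizon growth model: max sum_{t=0}^{T-1} ln(c_t),
# with c_t = k_t**alpha - k_{t+1} >= 0 (full depreciation, no
# discounting). This is NOT the RSS model, just an illustration.
alpha, T = 0.3, 60
grid = np.linspace(0.01, 0.5, 400)           # capital grid
V = np.zeros(len(grid))                      # terminal value V_T = 0
policy = np.zeros((T, len(grid)), dtype=int)

for t in range(T - 1, -1, -1):               # backward induction
    c = grid[:, None] ** alpha - grid[None, :]   # c[i, j] = f(k_i) - k_j
    with np.errstate(invalid="ignore", divide="ignore"):
        val = np.where(c > 0, np.log(c), -np.inf) + V[None, :]
    policy[t] = np.argmax(val, axis=1)
    V = val[np.arange(len(grid)), policy[t]]

# Roll the optimal path forward from a low initial capital stock.
i = 0
path = [grid[i]]
for t in range(T):
    i = policy[t][i]
    path.append(grid[i])

k_star = alpha ** (1 / (1 - alpha))          # golden-rule (turnpike) stock
print(abs(path[T // 2] - k_star))            # small: the path hugs the turnpike
```

Only the first few periods (climbing up from the low initial stock) and the last few (running capital down, since terminal capital has no value) deviate from $k^{*}$; lengthening $T$ only lengthens the middle stretch spent on the turnpike.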
Optimal control theory is a technique being used increasingly by academic economists to study problems involving optimal decisions in a multi-period framework. This textbook is designed to make the difficult subject of optimal control theory easily accessible to economists while at the same time maintaining rigour. Economic intuitions are emphasized, and examples and problem sets covering a wide range of applications in economics are provided to assist in the learning process. Theorems are clearly stated and their proofs are carefully explained. The development of the text is gradual and fully integrated, beginning with simple formulations and progressing to advanced topics such as control parameters, jumps in state variables, and bounded state space. For greater economy and elegance, optimal control theory is introduced directly, without recourse to the calculus of variations. The connection with the latter and with dynamic programming is explained in a separate chapter. A second purpose of the book is to draw the parallel between optimal control theory and static optimization. Chapter 1 provides an extensive treatment of constrained and unconstrained maximization, with emphasis on economic insight and applications. Starting from basic concepts, it derives and explains important results, including the envelope theorem and the method of comparative statics. This chapter may be used for a course in static optimization. The book is largely self-contained. No previous knowledge of differential equations is required.
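The envelope theorem mentioned above has a compact statement worth recording (a standard formulation, not a quotation from this book): if $V(a) = \max_x f(x,a)$ and $x^{*}(a)$ denotes the maximizer, then

```latex
V'(a) = \frac{\partial f}{\partial a}\bigl(x^{*}(a),\, a\bigr),
```

that is, the parameter's effect on the optimal value can be computed holding the choice variable fixed at its optimum. In the constrained case $\max_x f(x,a)$ subject to $g(x,a)=0$, with Lagrangian $\mathcal{L} = f + \lambda g$, the same derivative equals $\partial \mathcal{L}/\partial a$ evaluated at the optimum; comparative statics then follow by differentiating the first-order conditions.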