6.231: Dynamic Programming and Stochastic Control - Fall 2002
Prerequisites: A good introductory probability course (including basic knowledge of Markov chains) and mathematical maturity.
Textbook: Bertsekas, D. P. Dynamic Programming and Optimal Control. 2nd ed. Belmont, MA: Athena Scientific, 2000. ISBN: 1886529094.
Description: The course covers the basic models and solution techniques for problems of sequential decision making under uncertainty (stochastic control). We will also discuss approximation methods for problems involving large state spaces.

Grading:
- Homework, approximately every week (30%). A small number of homework assignments will include a computational component.
- Two quizzes (35% each)
Policies:
- Late homework will be accepted only in extraordinary circumstances, and may even then be penalized.
- You may work on homework problems in groups of 2-3 people. However, you must always write up the solutions on your own. Similarly, you may use references or other sources to help solve homework problems, but you must write up the solutions on your own and cite your sources. Copying solutions or code, in whole or in part, from other students or any other source without acknowledgment will be considered academic dishonesty.