Optimal Control Theory
Optimal control theory is a mathematical discipline that is widely used to study the neural control of movement. This chapter presents a mathematical introduction to optimal control theory, covering Bellman equations, Hamilton-Jacobi-Bellman equations, Riccati equations, and the Kalman filter. It also examines the duality between optimal control and optimal estimation, and, finally, describes optimal control models and suggests future research directions.
Keywords: optimal control theory, neural control, movement, Bellman equations, Riccati equations, Kalman filter, optimal estimation
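The topics named above (the Bellman recursion and the Riccati equations of the linear-quadratic case) can be illustrated with a short numerical sketch. The following example is not taken from the chapter: it assumes a hypothetical discrete-time linear system with quadratic costs, with dynamics matrices, costs, and horizon chosen purely for illustration, and computes the optimal feedback gains by the standard backward Riccati recursion.

```python
import numpy as np

# Hypothetical system (illustrative values only):
# x_{t+1} = A x_t + B u_t, cost = sum_t (x_t' Q x_t + u_t' R u_t) + x_T' Qf x_T
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)           # state cost
R = np.array([[0.01]])  # control cost
Qf = np.eye(2)          # terminal cost
T = 50                  # horizon

# Backward Riccati recursion: the optimal cost-to-go is V_t(x) = x' S_t x,
# and the optimal control law is linear feedback u_t = -K_t x_t.
S = Qf
gains = []
for t in reversed(range(T)):
    K = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
    S = Q + A.T @ S @ (A - B @ K)
    gains.append(K)
gains.reverse()

# Simulate the closed loop from an arbitrary initial state.
x = np.array([[1.0], [0.0]])
for t in range(T):
    u = -gains[t] @ x
    x = A @ x + B @ u
print("final state:", x.ravel())
```

The same backward pass is the linear-quadratic special case of dynamic programming: the Riccati update plays the role of the Bellman equation, with the quadratic form S_t summarizing the cost-to-go exactly.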