Research on Control and Decision Making using Markov Decision Processes
Prof. Steve Marcus
Control and decision-making methods are used whenever some quantity, such as temperature, speed, throughput, or delay, must be made to behave in some desirable way over time. In particular, Markov Decision Process (MDP) models are widely used for modeling sequential decision-making under uncertainty. Such problems inevitably involve randomness and arise in engineering, economics, computer science, and the social sciences. MDP models, like other control systems, achieve better performance by taking measurements and using feedback: the decision maker incorporates these measurements and looks ahead to anticipate the future evolution of the system.
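To make the idea of looking ahead concrete, here is a minimal sketch of value iteration, the classical dynamic-programming method for solving an MDP exactly. The two-state "machine maintenance" model below is purely illustrative and not a problem from Prof. Marcus's research; it only shows how a Bellman backup combines current rewards with anticipated future values.

```python
import numpy as np

# Hypothetical two-state MDP (not from the research described above):
# states:  0 = machine OK, 1 = machine broken
# actions: 0 = run,        1 = repair
# P[a, s, s'] = transition probability, R[a, s] = expected one-step reward
P = np.array([
    [[0.9, 0.1],    # run: an OK machine stays OK w.p. 0.9
     [0.0, 1.0]],   # run: a broken machine stays broken
    [[1.0, 0.0],    # repair: an OK machine stays OK
     [0.8, 0.2]],   # repair: a broken machine is fixed w.p. 0.8
])
R = np.array([
    [1.0,  0.0],    # run: earn 1 while the machine works
    [0.5, -0.5],    # repair: costly, especially when broken
])
gamma = 0.95        # discount factor

V = np.zeros(2)
for _ in range(1000):
    # Bellman backup: one-step lookahead over all actions,
    # Q[a, s] = R[a, s] + gamma * sum_{s'} P[a, s, s'] * V[s']
    Q = R + gamma * (P @ V)
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmax(axis=0)   # best action in each state
print("optimal values:", V)
print("optimal policy:", policy)
```

For this toy model the computed policy is to run the machine while it works and repair it when it breaks. The difficulty Marcus's group addresses is that tabulating `P`, `R`, and `V` this way becomes infeasible when the state space is very large.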
At the University of Maryland, Prof. Marcus and his colleagues are studying a wide range of theoretical and applied problems of control and decision making using MDP models. Problems of practical importance are often very large and difficult to solve, even with significant computing resources. In particular, Marcus and his team have been exploring sampling-, simulation-, and population-based numerical algorithms to overcome the computational difficulties of calculating optimal solutions. The team has been investigating the application of such algorithms to problems in communication networks, semiconductor manufacturing, finance, and statistics.
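The following sketch illustrates the general simulation-based idea in the simplest possible setting: rather than solving Bellman equations over an enormous state space, one estimates a policy's value by averaging discounted returns over sampled trajectories. The toy admission-control queue and all of its parameters are hypothetical, chosen only to make the sketch runnable; this is not one of the group's algorithms.

```python
import random

# Illustrative simulation-based policy evaluation (assumed toy model,
# not a problem or algorithm from the research described above).

def step(state, action, rng):
    """One transition of a toy admission-control queue with capacity 10."""
    arrival = 1 if action == 1 and rng.random() < 0.6 else 0  # admit a job?
    served = 1 if state > 0 and rng.random() < 0.5 else 0
    next_state = min(10, max(0, state + arrival - served))
    reward = 2.0 * arrival - 0.1 * next_state  # revenue minus holding cost
    return next_state, reward

def policy(state):
    """A fixed threshold policy: admit new jobs only while the queue is short."""
    return 1 if state < 5 else 0

def estimate_value(start, n_traj=2000, horizon=200, gamma=0.95, seed=0):
    """Monte Carlo estimate of the discounted value of `policy` at `start`."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_traj):
        s, ret, disc = start, 0.0, 1.0
        for _ in range(horizon):
            s, r = step(s, policy(s), rng)
            ret += disc * r
            disc *= gamma
        total += ret
    return total / n_traj

print("estimated value from an empty queue:", estimate_value(0))
```

The key point is that the estimator only needs to simulate transitions, never to enumerate the state space, which is what makes sampling-based methods attractive for the large practical problems mentioned above.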
Marcus' research has been supported by the National Science Foundation (NSF), the Air Force Office of Scientific Research, the Semiconductor Research Corporation, SEMATECH International, and the Department of Defense.
For further details, please see www.isr.umd.edu/~marcus