Learn about decision making -- optimal decisions, gathering information, dealing with adversaries, policies and plans, reinforcement learning -- to complement your data science / machine learning skills.
Data science has tended to focus on understanding, analyzing, and learning from data sets. But in addition to better understanding the world, an intelligent agent needs to know how to act optimally given what it knows. That is the domain of decision sciences.
There are many interesting frameworks and techniques for deciding what best to do, and we will explore a few of them.
Decision theory: how an individual decides what to do given a model of the world, a set of actions it might take, and preferences over how the world might react to its actions
Actions and information: how does an agent reason about what it does and does not know, and when is it optimal for an agent to act to improve its state of information?
Adversaries: deciding what to do in environments where your actions affect how other agents behave
Sequential decisions, plans, and policies: in real life, an agent rarely makes a single decision; instead, it builds plans or policies to achieve its objectives. What do these look like, how are they built, and what does it mean for them to be optimal?
Reinforcement learning: taking action, getting feedback, and using that feedback to act better in the future -- it’s learning and decision making at the same time!
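To make the last topic concrete, here is a minimal sketch of that take-action, get-feedback loop: tabular Q-learning on a hypothetical five-state chain, where the agent starts on the left and earns a reward only for reaching the rightmost state. The environment, state count, and learning parameters are all illustrative assumptions, not material from the course.

```python
import random

# Hypothetical toy environment: states 0..4 in a chain, reward on reaching state 4.
N_STATES = 5
ACTIONS = [-1, +1]                # move left or right along the chain
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1   # illustrative learning parameters

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Deterministic transition; reward 1.0 only on reaching the last state."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward

def greedy(s):
    # Break ties randomly so early episodes explore both directions.
    return max(ACTIONS, key=lambda a: (Q[(s, a)], random.random()))

random.seed(0)
for _ in range(500):              # episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit current estimates, occasionally explore
        a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(s)
        s2, r = step(s, a)
        # Q-learning update: nudge Q toward reward + discounted best future value
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy moves right at every non-terminal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

The feedback signal (the reward) is all the agent ever sees, yet the update rule propagates it backward through the chain until every state "knows" which way to go: learning and decision making at the same time.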
Understand how concepts from the decision sciences complement the analytic and learning parts of data science.
Understand basic concepts of optimal decision making, acting to gather information, acting in the presence of adversaries, and building policies, and see how these ideas are applied in practice.
Know where to look for deeper understanding of key decision science concepts and technologies.
Be familiar with basic concepts and notation of probability theory. The course will not be deep mathematically, but familiarity with the language and notation will make it easier to follow the lecture and discussion.
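As a rough gauge of the level assumed, here is the kind of manipulation you should be comfortable with: marginals, conditionals, and Bayes' rule on a small joint distribution. The rain/wet-grass distribution below is a made-up example, not from the course.

```python
# Hypothetical joint distribution P(rain, grass) over two binary variables.
P = {("rain", "wet"): 0.3, ("rain", "dry"): 0.1,
     ("no_rain", "wet"): 0.1, ("no_rain", "dry"): 0.5}

# Marginals: sum the joint over the other variable.
p_rain = sum(p for (r, w), p in P.items() if r == "rain")   # P(rain) = 0.4
p_wet = sum(p for (r, w), p in P.items() if w == "wet")     # P(wet) = 0.4

# Conditional: P(wet | rain) = P(rain, wet) / P(rain)
p_wet_given_rain = P[("rain", "wet")] / p_rain              # 0.75

# Bayes' rule: P(rain | wet) = P(wet | rain) P(rain) / P(wet)
p_rain_given_wet = p_wet_given_rain * p_rain / p_wet
print(p_rain_given_wet)
```

If notation like P(A | B) and computations like these feel familiar, you have the background needed to follow the lecture and discussion.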