Seminar on History and Philosophy of Science
Abstract: Many philosophers have recently applied a decision-theoretic approach to problems in epistemology. This framework is sometimes referred to as Epistemic Utility Theory. Since risk and risk sensitivity play a substantial role in ordinary rational choice, it is natural to consider what role these concepts will play in the normative assessment of an agent's beliefs. To date, there is very little literature on this topic and no agreed-upon understanding of what epistemic risk consists in and how, if at all, it differs from ordinary risk. The broader project I am interested in aims to answer these questions by developing a general theory of epistemic risk according to which an agent's risk function reflects their relative sensitivity to false-positive errors as against false-negative errors. In this talk, I consider a subset of this project: given a prior set of beliefs, what is the least risky set of beliefs for an agent to hold after undergoing some learning experience? In other words, what is dynamic epistemic risk? And what role does, or should, it play in epistemic utility theory?