Can getting more information ever make a rational agent worse off, not better, once you factor in real-world costs, group disagreement, and the way inquiry can change the very decision you face?
My links: https://linktr.ee/frictionphilosophy.
1. Guest
Ted Seidenfeld is Herbert A. Simon University Professor of Philosophy and Statistics at Carnegie Mellon University, and his work focuses on decision theory, statistics, and related topics.
2. Interview Summary
Seidenfeld frames the conversation around a classic decision-theoretic result associated with Jack Good: in a simplified Bayesian/expected-utility setting, if there is an experiment whose possible outcomes would rationally lead you to act differently, then (given a “free” chance to learn) it is instrumentally rational to delay and gather that information, since it has expected value precisely by guiding action. But his main aim is to stress that this is a mathematical theorem with substantive assumptions, and once you relax them the “value of information” conclusion can flip: additional information can, by your current lights, predictably make decision-making worse rather than better.
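(A minimal numerical sketch of the Good-style result, using made-up states, acts, and forecast probabilities rather than anything from the interview: deciding after a free signal can never have lower expected utility than deciding now, and attaching a cost to the signal can flip the comparison.)

```python
# Toy decision problem: two states, two acts, and a noisy but cost-free signal.
states = ["wet", "dry"]
prior = {"wet": 0.3, "dry": 0.7}

# Utility of each act in each state (illustrative values).
utility = {
    "take_umbrella":  {"wet": 1.0, "dry": 0.6},
    "leave_umbrella": {"wet": 0.0, "dry": 1.0},
}

# Likelihood of each signal given each state (a fallible forecast).
likelihood = {
    "forecast_rain":  {"wet": 0.8, "dry": 0.2},
    "forecast_clear": {"wet": 0.2, "dry": 0.8},
}

def expected_utility(act, prob):
    return sum(prob[s] * utility[act][s] for s in states)

# Expected utility of choosing the best act right now.
eu_now = max(expected_utility(a, prior) for a in utility)

# Expected utility of observing the free signal first, then choosing.
eu_after = 0.0
for signal, lik in likelihood.items():
    p_signal = sum(prior[s] * lik[s] for s in states)
    posterior = {s: prior[s] * lik[s] / p_signal for s in states}
    eu_after += p_signal * max(expected_utility(a, posterior) for a in utility)

print(f"decide now:           {eu_now:.3f}")
print(f"observe, then decide: {eu_after:.3f}")  # never lower, per the Good-style theorem

# Once the signal carries a cost (money, spoiled suspense, destabilized
# cooperation), the comparison can flip:
cost = 0.2
print(f"with signal cost {cost}: {eu_after - cost:.3f}")
```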
He then walks through several ways those assumptions fail. One route is social: if the “agent” is really a group with multiple probability/utility perspectives, new evidence can surface latent disagreements and turn prior unanimity into polarization, forcing compromises that both parties regard as worse than the pre-inquiry choice. That raises the question of whether inquiry is worth it when it predictably destabilizes collective action. Another route concerns what counts as ‘cost-free’ information: if your utilities place value on uncertainty itself (the theater mystery example), information can be costly simply by spoiling an experience. He also emphasizes ‘moral hazard’ and ‘act–state dependence’, where the very act of setting up or pursuing inquiry changes the relevant state of the world (or your future dispositions), so dominance-style reasoning breaks down and the Good-style theorem no longer applies.
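(A toy illustration with invented numbers, not the example discussed in the episode: two agents who share a prior but read the evidence in opposite ways both prefer the safe act before inquiry, yet after the signal each prefers a different risky act, so any compromise looks worse to each than the pre-inquiry consensus.)

```python
# Two states, one safe act, and two opposed bets (illustrative payoffs).
states = ["H", "T"]
acts = {"safe":  {"H": 1.0, "T": 1.0},
        "bet_H": {"H": 1.5, "T": 0.0},
        "bet_T": {"H": 0.0, "T": 1.5}}

agents = {
    # Both start at P(H) = 0.5 but disagree about what the signal "s" indicates.
    "agent_1": {"prior": {"H": 0.5, "T": 0.5}, "lik_s": {"H": 0.9, "T": 0.1}},
    "agent_2": {"prior": {"H": 0.5, "T": 0.5}, "lik_s": {"H": 0.1, "T": 0.9}},
}

def best_act(prob):
    # Act with the highest expected utility under the given probabilities.
    return max(acts, key=lambda a: sum(prob[x] * acts[a][x] for x in states))

for name, ag in agents.items():
    prior = ag["prior"]
    p_s = sum(prior[x] * ag["lik_s"][x] for x in states)
    posterior = {x: prior[x] * ag["lik_s"][x] / p_s for x in states}
    print(name, "before:", best_act(prior), "| after signal:", best_act(posterior))
# Both choose "safe" before inquiry; afterwards one wants bet_H and the other bet_T.
```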
The discussion later uses Newcomb’s paradox as a case study: Seidenfeld notes that two-boxing looks dominant unless you are prepared to endorse choice-dependent conditional probabilities (the “reverse” conditionals), and he argues that a predictor’s track record by itself does not automatically justify those probabilities. He presses the point with a market/auction thought experiment about selling ownership of the “one-box” outcome, meant to test whether the purported conditionals genuinely guide action. From there he pivots to a deeper worry about agency: too much self-knowledge (for example, knowing you are an expected-utility maximizer with fixed probabilities/utilities) threatens the idea that you face live options at all, and he is skeptical about assigning probabilities to your own acts in a decision problem. He finally situates these tensions historically via Kenneth Arrow and Leonard J. Savage, suggesting that attempts to generalize Bayesian rationality from individuals to cooperative groups (even under a unanimity constraint) run into impossibility-style pressures that leave “compromise” looking unstable unless you relax parts of the Savage framework.
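(A back-of-the-envelope version of the two computations, using the standard illustrative payoffs and an assumed 99% predictor accuracy: the evidential calculation conditions the prediction on the choice via the “reverse” conditionals, while the dominance calculation holds the opaque box’s contents fixed.)

```python
MILLION, THOUSAND = 1_000_000, 1_000
accuracy = 0.99  # assumed reliability of the predictor's track record

# Evidential expected value: condition the prediction on your own choice.
ev_one_box = accuracy * MILLION + (1 - accuracy) * 0
ev_two_box = accuracy * THOUSAND + (1 - accuracy) * (MILLION + THOUSAND)

print(f"evidential EV, one-box: {ev_one_box:,.0f}")  # 990,000
print(f"evidential EV, two-box: {ev_two_box:,.0f}")  #  11,000

# Dominance: for any fixed content of the opaque box, two-boxing pays
# exactly THOUSAND more than one-boxing.
for opaque in (0, MILLION):
    print(f"opaque box = {opaque:>9,}: one-box {opaque:,} vs two-box {opaque + THOUSAND:,}")

# The dispute is over whether the track-record-based conditionals above are
# the right probabilities to use once the prediction is already fixed.
```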
3. Interview Chapters
00:00 - Introduction
00:45 - Is ignorance bliss?
11:22 - Cost-free information
16:22 - Sample case
19:27 - Moral hazard
27:20 - Newcomb’s problem
38:25 - Dominance argument
44:07 - Deliberation and prediction
48:32 - Causalist rejoinder
50:32 - Variation
55:27 - Paying to avoid cost-free information
59:19 - Simpson’s paradox
1:07:57 - Group decisions
1:22:58 - Imprecise preferences
1:25:21 - Imprecise credences
1:27:52 - Other models
1:39:01 - Causal Bayesian networks
1:43:49 - What does causation add?
1:48:57 - Relevance
1:51:27 - Backward intervention
1:54:02 - Application to groups
1:59:27 - Sleeping Beauty problem
2:10:02 - Thirder argument
2:14:12 - Halfer solution
2:15:55 - Ambiguity of ‘now’
2:18:00 - Betting odds
2:26:00 - Value of philosophy
2:29:57 - Conclusion