Nicolle and I Are Moving to London
I’m very excited to start as Lecturer at KCL in April 2022!
Conference and Memorial in Honor of Isaac Levi
Workshop on Induction and Probability
September 17, 10am-5pm
Ludwigstr. 31, Room 021
A small workshop on recent work on induction and probability.
Rush Stewart (MCMP) - On the Possibility of Testimonial Justice, 10:30 AM
Abstract. Recent impossibility theorems for fair risk assessment extend to the domain of epistemic justice. We translate the relevant model, demonstrating that the problems of fair risk assessment and just credibility assessment are structurally the same. We motivate the fairness criteria involved in the theorems as appropriate in the setting of testimonial justice. Any account of testimonial justice that implies the fairness/justice criteria must be abandoned on pain of triviality.
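The abstract does not name the impossibility theorems it builds on; a standard example of such a result, stated here only for orientation (this is Kleinberg, Mullainathan, and Raghavan's formulation for risk scores, which may differ in detail from the model Stewart translates), is the following. Let r be a risk score, Y the binary outcome, and G_1, G_2 two groups with base rates b_i = \Pr(Y = 1 \mid G_i). Consider:

\[ \text{Calibration within groups: } \Pr(Y = 1 \mid r = p,\, G_i) = p \text{ for all } p \text{ and each } i; \]
\[ \text{Balance for the negative class: } \mathbb{E}[r \mid Y = 0,\, G_1] = \mathbb{E}[r \mid Y = 0,\, G_2]; \]
\[ \text{Balance for the positive class: } \mathbb{E}[r \mid Y = 1,\, G_1] = \mathbb{E}[r \mid Y = 1,\, G_2]. \]

All three conditions can hold simultaneously only in the degenerate cases of equal base rates (b_1 = b_2) or perfect prediction (r = Y almost surely).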
Michael Nielsen (Columbia University) - Speed-Optimal Induction and Dynamic Coherence, 12:30 PM
Abstract. A standard way to challenge convergence-based accounts of inductive success is to claim that they are too weak to constrain inductive inferences in the short term. We respond to such a challenge by answering some questions raised by Juhl (1994). When it comes to predicting limiting relative frequencies in Reichenbach's framework, we show that speed-optimal convergence, a long-run success condition, induces dynamic coherence in the short term.
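For background (not part of the abstract): in Reichenbach's framework the canonical inductive method is the straight rule, which estimates the limiting relative frequency of an event by its observed relative frequency,

\[ \hat{p}_n = \frac{1}{n} \sum_{i=1}^{n} x_i, \qquad x_i \in \{0, 1\}, \]

and a method succeeds on a data stream just in case its estimates converge to \lim_n \hat{p}_n whenever that limit exists. Roughly, a method is speed-optimal if no rival convergent method reaches the limit systematically faster, and it is dynamically coherent if its successive estimates hang together as the conditional probabilities of a single probability measure do; the result links the former to the latter.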
Gregory Wheeler (Frankfurt School of Finance & Management) - Full Conditional Probabilities Lead to Indeterminacy in Probability Values, 2:00 PM
Abstract. The purpose of this paper is to show that if one adopts conditional probabilities as the primitive concept of probability, one must accept that at least some probability values may be indeterminate, and that some probability questions may fail to have numerically precise answers.
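For context, a full conditional probability in the sense of Dubins (1975) is a two-place function P(\cdot \mid \cdot), defined for every event A and non-empty event B, satisfying the standard axioms (stated here for orientation; the paper's own formulation may differ in detail):

\[ P(\cdot \mid B) \text{ is a finitely additive probability with } P(B \mid B) = 1; \]
\[ P(A \cap C \mid B) = P(A \mid C \cap B)\, P(C \mid B) \quad \text{whenever } C \cap B \neq \emptyset. \]

On this approach P(A \mid B) is well defined even when B has unconditional probability zero, which is exactly where the question of indeterminate values gets its grip.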
MCMP Conference: Computational Modeling in Philosophy
June 22-23, 2018.
Columbia Workshop on Probability and Learning
On Saturday, April 8th, the Formal Philosophy Group at Columbia will host a workshop on issues related to probability and learning. The lineup, abstracts, and schedule are below.
Gordon Belot (Michigan) - Typical!, 10am
Abstract. This talk falls into three short stories. The overarching themes are: (i) that the notion of typicality is protean; (ii) that Bayesian technology is both more and less rigid than is sometimes thought.
Simon Huttegger (Irvine LPS) - Schnorr Randomness and Lévy's Martingale Convergence Theorem, 11:45am
Abstract. Much recent work in algorithmic randomness concerns characterizations of randomness in terms of the almost everywhere behavior of suitably effectivized versions of functions from analysis or probability. In this talk, we take a look at Lévy's Martingale Convergence Theorem from this perspective. Lévy's theorem is of fundamental importance to Bayesian epistemology. We note that much of Pathak, Rojas, and Simpson's work on Schnorr randomness and the Lebesgue Differentiation Theorem in the Euclidean context carries over to Lévy's Martingale Convergence Theorem in the Cantor space context. We discuss the methodological choices one faces in choosing the appropriate mode of effectivization and the potential bearing of these results on Schnorr's critique of Martin-Löf. We also discuss the consequences of our result for the Bayesian model of learning.
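For reference, the classical theorem at issue (in its standard textbook form, not as stated in the talk) says: if (\mathcal{F}_n) is an increasing sequence of \sigma-fields, \mathcal{F}_\infty = \sigma(\bigcup_n \mathcal{F}_n), and X is integrable, then

\[ \mathbb{E}[X \mid \mathcal{F}_n] \longrightarrow \mathbb{E}[X \mid \mathcal{F}_\infty] \quad \text{almost surely and in } L^1. \]

Taking X to be the indicator of a hypothesis, this is the Bayesian's guarantee that posterior probabilities converge with probability one as evidence accumulates; the effectivization question is on exactly which individual data streams (e.g., the Schnorr random ones) convergence holds.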
Deborah Mayo (VA Tech) - Probing With Severity: Beyond Bayesian Probabilism and Frequentist Performance, 2:45pm
Abstract. Getting beyond today’s most pressing controversies over statistical methods and irreproducible findings requires scrutinizing underlying statistical philosophies. Two main philosophies about the roles of probability in statistical inference are probabilism and performance (in the long run). The first assumes that we need a method of assigning probabilities to hypotheses; the second assumes that the main function of statistical methods is to control long-run performance. I offer a third goal: controlling and evaluating the probativeness of methods. A statistical inference, in this conception, takes the form of inferring hypotheses to the extent that they have been well or severely tested. A report of poorly tested claims must also be part of an adequate inference. I show how the “severe testing” philosophy clarifies and avoids familiar criticisms and abuses of significance tests and cognate methods (e.g., confidence intervals). Severity may be threatened in three main ways: fallacies of rejection and non-rejection, unwarranted links between statistical and substantive claims, and violations of model assumptions. I illustrate with some controversies surrounding the use of significance tests in the discovery of the Higgs particle in high energy physics.
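Mayo's severity requirement can be put as follows (a paraphrase of her usual formulation, not text from the talk): data x provide good evidence for claim C only if C passes a severe test with x, that is, only if x accords with C and

\[ \Pr(\text{the test yields a result less accordant with } C \text{ than } x \text{ does};\ C \text{ is false}) \text{ is high.} \]

A claim that would probably have "passed" even if false has been poorly probed, however well the data happen to fit it.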
Teddy Seidenfeld (CMU) - Radically Elementary Imprecise Probability Based on Extensive Measurement, 4:30pm
Abstract. This presentation begins with motivation for "precise" non-standard probability. Using two old challenges, involving (i) symmetry of probabilistic relevance and (ii) respect for weak dominance, I contrast the following three approaches to conditional probability given a (non-empty) "null" event, together with their three associated decision theories.
Approach #1 – Full Conditional Probability Distributions (Dubins, 1975) conjoined with Expected Utility.
Approach #2 – Lexicographic Probability conjoined with Lexicographic Expected Value (e.g., Blume et al., 1991).
Approach #3 – Non-standard Probability and Expected Utility based on Non-Archimedean Extensive Measurement (Narens, 1974).
The second part of the presentation discusses progress we've made using Approach #3 within a context of Imprecise Probability.
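To illustrate why "null" events make trouble in the first place (an example added for orientation, not drawn from the abstract): let a fair coin be tossed infinitely often and let B be the event that every toss lands tails. Then P(B) = 0, so the ratio definition

\[ P(A \mid B) = \frac{P(A \cap B)}{P(B)} \]

is undefined, although intuitively P(\text{first toss is tails} \mid B) = 1. Each approach above supplies a verdict: Approach #1 takes P(\cdot \mid B) as primitive, Approach #2 breaks the tie with a subordinate measure in a lexicographic hierarchy, and Approach #3 assigns B a positive infinitesimal probability, so that the ratio definition applies after all.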