Priors from Language Models
Using foundation models as a source of Bayesian prior information.
Summary
This project studies whether large language models can be used to elicit structured prior information when expert priors are expensive, unavailable, or inconsistent.
Research Question
Can foundation models provide priors that are useful enough to support downstream Bayesian decision-making and experimentation?
Methodology
- Prior elicitation from large language models (see the sketch after this list)
- Evaluation against expert-derived priors
- Focus on practical downstream usefulness rather than on prompting novelty alone
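To make the elicitation and evaluation steps concrete, here is a minimal sketch of how a language model could be asked for a parametric prior, which is then used in a standard conjugate update and compared against an expert prior. Everything specific in it is an assumption for illustration: the `query_llm` stub, the prompt wording, the conversion-rate example, the Beta(3, 12) "expert" prior, and the grid-based KL comparison are hypothetical and are not the project's actual protocol.

```python
import json

import numpy as np
from scipy import stats


def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a foundation-model API call.

    A real implementation would wrap a chat-completion client; here we
    return a canned response so the sketch runs offline.
    """
    return '{"alpha": 2.0, "beta": 8.0}'


# Ask the model for the parameters of a Beta prior on a conversion rate.
prompt = (
    "Act as a Bayesian statistician. Propose a Beta(alpha, beta) prior "
    "for the conversion rate of a new email campaign. Reply as JSON "
    "with keys 'alpha' and 'beta'."
)
params = json.loads(query_llm(prompt))
llm_prior = stats.beta(params["alpha"], params["beta"])

# Conjugate Beta-Binomial update once data arrive.
successes, failures = 12, 88
posterior = stats.beta(params["alpha"] + successes,
                       params["beta"] + failures)

# Compare the elicited prior against a (hypothetical) expert prior via a
# simple grid estimate of the KL divergence.
expert_prior = stats.beta(3.0, 12.0)
x = np.linspace(1e-4, 1.0 - 1e-4, 2000)
p, q = llm_prior.pdf(x), expert_prior.pdf(x)
kl = np.sum(p * np.log(p / q)) * (x[1] - x[0])

print(f"LLM prior mean:  {llm_prior.mean():.3f}")
print(f"Posterior mean:  {posterior.mean():.3f}")
print(f"KL(LLM || expert) ~= {kl:.3f}")
```

One design point the sketch illustrates: asking the model for a named parametric family as structured JSON keeps the elicited prior machine-parseable and directly usable in a downstream Bayesian update, rather than leaving it as free-form text.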
Key Contributions
- Recasts language models as tools for prior construction
- Connects foundation models with Bayesian workflow questions
- Supports future work on structured priors for sequential learning
Outputs
- Publication: Selby et al. (2025)