Priors from Language Models

Using foundation models as a source of Bayesian prior information.

Summary

This project studies whether large language models can be used to elicit structured prior information when expert elicitation is expensive, experts are unavailable, or their judgments are inconsistent.

Research Question

Can foundation models provide priors that are useful enough to support downstream Bayesian decision-making and experimentation?

Methodology

  • Prior elicitation from large language models
  • Evaluation against expert-derived priors
  • Focus on practical downstream usefulness rather than prompting novelty alone
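One way to make the second step concrete is to compare an elicited prior against an expert prior with a divergence measure. The sketch below is a minimal, hypothetical example: it assumes the LLM's answer has already been parsed into Beta distribution parameters (the `llm_prior` values are invented for illustration) and estimates the KL divergence to an equally hypothetical expert prior by Monte Carlo, using only the Python standard library.

```python
import math
import random

def log_beta_pdf(x, a, b):
    """Log density of a Beta(a, b) distribution at x, for 0 < x < 1."""
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return log_norm + (a - 1) * math.log(x) + (b - 1) * math.log(1 - x)

def kl_beta_mc(a1, b1, a2, b2, n=50_000, seed=0):
    """Monte Carlo estimate of KL(Beta(a1, b1) || Beta(a2, b2)).

    Draws samples from the first distribution and averages the
    log-density ratio between the two distributions.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.betavariate(a1, b1)
        total += log_beta_pdf(x, a1, b1) - log_beta_pdf(x, a2, b2)
    return total / n

# Hypothetical values: an elicited prior parsed from an LLM response,
# and an expert prior with the same mean but more concentration.
llm_prior = (2.0, 8.0)      # Beta(2, 8), mean 0.2
expert_prior = (3.0, 12.0)  # Beta(3, 12), mean 0.2, tighter

divergence = kl_beta_mc(*llm_prior, *expert_prior)
print(f"KL(LLM prior || expert prior) ~ {divergence:.3f}")
```

A small divergence here would indicate that the elicited prior is close to the expert's; in practice one would also evaluate downstream quantities, such as posterior inferences or decisions, rather than the prior alone.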

Key Contributions

  • Recasts language models as tools for prior construction
  • Connects foundation models with Bayesian workflow questions
  • Supports future work on structured priors for sequential learning

Outputs


  1. Had enough of experts? Elicitation and evaluation of Bayesian priors from large language models
     David Selby, Kai Spriestersbach, Yuichiro Iwashita, and 6 more authors
     Stat, 2025 (earlier workshop version at NeurIPS BDU 2024)