### Finally, a promising alternative approach
## Foundation Models
_trained on broad data at scale, and adaptable to a wide range of downstream tasks_
* promise to store knowledge about the world expressed in text form
* have a high capacity for learning
* can be trained on all data available in text form
* one day a general abstraction for everything?
Bommasani et al., 2021: https://arxiv.org/abs/2108.07258
### What do you gain by adding priors to your model?
* Fairness
* by fighting unwanted bias
* Explainability / Interpretability
* by stating your priors (global interpretability)
  * without necessarily sacrificing accuracy
  * sometimes even improving it
* Trust
* by making better predictions
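As a minimal illustration of "stating your priors" (the names, data, and prior strength below are hypothetical, not from the source): a zero-mean Gaussian prior on the weights of a linear model is equivalent to L2 (ridge) regularization. The prior is stated explicitly in the code, making the model's inductive bias globally interpretable, and on small noisy data it can even improve predictions rather than hurt accuracy.

```python
import numpy as np

# Hypothetical toy data: 20 samples, 5 features, only the first feature matters.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
true_w = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
y = X @ true_w + rng.normal(scale=0.5, size=20)

def fit(X, y, prior_strength=0.0):
    """MAP estimate under a zero-mean Gaussian prior on the weights:
    solves (X^T X + lambda * I) w = X^T y  (ridge regression)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + prior_strength * np.eye(d), X.T @ y)

w_no_prior = fit(X, y)                    # maximum likelihood (no prior)
w_prior = fit(X, y, prior_strength=5.0)   # MAP with explicit Gaussian prior

# The stated prior shrinks the weight vector toward zero,
# suppressing spurious weights on the irrelevant features.
print(np.linalg.norm(w_no_prior), np.linalg.norm(w_prior))
```

The same idea scales up: fairness constraints or domain knowledge can be encoded as explicit penalty terms, so the model's bias is written down in the loss rather than hidden in the data.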