Where does [[induct|inductive]] bias in [[machine learning]] come from?
![[induct#^inductive-bias]]
![[tradeoffs in model selection.png|500]]
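a minimal sketch of inductive bias (assuming the usual toy setup, not from the note): two hypothesis classes fit the same training data exactly yet generalize differently off the training range — the bias comes from the class, not the data.

```python
import numpy as np

# toy data: three points on the line y = 2x
X = np.array([1.0, 2.0, 3.0])
y = 2.0 * X

# hypothesis class A: linear least squares — extrapolates the trend
slope, intercept = np.polyfit(X, y, 1)
linear_pred = slope * 10.0 + intercept  # ≈ 20

# hypothesis class B: 1-nearest-neighbour — never leaves the seen labels
nn_pred = y[np.argmin(np.abs(X - 10.0))]  # = 6

print(linear_pred, nn_pred)
```

both achieve zero training error; the query at x = 10 is where the prior (linearity vs locality) shows up.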
deep learning theory is also relevant for [[AI safety]]:
If we can understand training processes at a mechanistic level,
we can make testable interventions to address issues like generalization ([[covariate shift|inner alignment]] failures).
# [[philosophy]]
[[2022BarakUneasyRelationshipDeep|The uneasy relationship between deep learning and (classical) statistics]]
[[2001BreimanStatisticalModelingTwo|Statistical modeling: The two cultures]]
generally a big [[operational definition|theory gap]] between practice and proofs; very little is rigorously known
want to find [[Pareto frontier]] between expressivity of function class and generality of distributional assumption
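the expressivity side of that tradeoff can be sketched with polynomial regression (a stand-in example, not from the source): a more expressive class fits the training sample better but generalizes worse once it outruns the data.

```python
import numpy as np

rng = np.random.default_rng(0)

# ground truth is a simple quadratic; training labels are noisy
def f(x):
    return x ** 2

x_train = np.linspace(-1, 1, 10)
y_train = f(x_train) + 0.1 * rng.standard_normal(10)
x_test = np.linspace(-1, 1, 100)

# compare an underexpressive, a matched, and an overexpressive class
errors = {}
for degree in (1, 2, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    errors[degree] = np.mean((np.polyval(coeffs, x_test) - f(x_test)) ** 2)

print(errors)  # degree 2 has the lowest test error
```

degree 1 underfits (wrong class), degree 9 interpolates the noise; the matched class wins — the note's Pareto frontier is this picture with "distributional assumption" on the other axis.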
# [[algorithm]]ic perspective
there exist estimation problems analogous to decision problems in [[computational complexity theory]]
modes of thinking: [[worst-case analysis|upper bound]]s and [[lower bound]]s
## modelling
(formulating algorithmic questions that are mathematically tractable and practically meaningful)
- formalize algorithms
- from upper bounds: articulate properties of data that make learning tractable
- test robustness of existing models: find a good tradeoff between generality and algorithmic results
algorithmic lens on data science is transferable to other domains
# sources
[[COMPSCI 224]]
https://en.wikipedia.org/wiki/Upper_and_lower_bounds