# [[latent variable]]
a [[statistical model]]:
we model the [[covariate|observed data]] as drawn from a ([[parameter]]ized) [[conditional density]] $\boldsymbol{x} \sim f[\theta_{\text{X}}, \boldsymbol{z}]$
where $\boldsymbol{z} \sim \pi[\theta_{\text{Z}}]$ are the unobserved [[dimensionality reduction|latent variable]]s that generate the observation.
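a minimal sketch of such a model (a linear-Gaussian choice for $\pi[\theta_{\text{Z}}]$ and $f[\theta_{\text{X}}, \boldsymbol{z}]$; dimensions and parameters below are illustrative):
```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative parameters: theta_Z is implicit (standard normal prior),
# theta_X = (W, b, sigma)
d_z, d_x = 2, 5                  # latent / observed dimensions (arbitrary)
W = rng.normal(size=(d_x, d_z))  # loading matrix
b = np.zeros(d_x)                # offset
sigma = 0.1                      # observation noise scale

# z ~ pi[theta_Z]: standard Gaussian prior over the latent variable
z = rng.normal(size=d_z)

# x ~ f[theta_X, z]: Gaussian centred on a linear function of the latent z
x = W @ z + b + sigma * rng.normal(size=d_x)
```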
*perceptual aliasing* issue:
the case where two distinct $\boldsymbol{z}_{0}, \boldsymbol{z}_{1}$ generate the same observed data $\boldsymbol{x}$
(ie in the deterministic case, $f$ is not [[injective]])
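a minimal deterministic example of aliasing (the map $f$ here is illustrative):
$$
f(z) = z^{2}, \qquad z_{0} = 1,\; z_{1} = -1 \;\Rightarrow\; f(z_{0}) = f(z_{1}) = 1
$$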
# cf [[supervised]] learning
[[latent variable]]s are *unobserved*.
instead, to learn the relationship between *observed* [[covariate]] variables
and *observed* [[response]] variables,
see [[supervised]] learning;
see [[signal noise decomposition]] and [[additive noise]]
for an example assumption on the [[probability density function|likelihood]]
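for instance (a standard additive-Gaussian-noise form; $g$ and $\sigma$ here are illustrative), the [[response]] decomposes into signal plus noise, which fixes the likelihood:
$$
y = g[\theta](\boldsymbol{x}) + \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, \sigma^{2})
\;\Longrightarrow\;
\Pr(y \mid \boldsymbol{x}, \theta) = \mathcal{N}\!\left(y ;\, g[\theta](\boldsymbol{x}),\, \sigma^{2}\right)
$$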
# details
[[Bayes rule]] gives the [[posterior]] (for a single data point $\boldsymbol{x}$)
$$
\tilde{\pi}[\theta, \boldsymbol{x}](\boldsymbol{z}) = \frac{f[\theta_{\text{X}}, \boldsymbol{z}](\boldsymbol{x}) \, \pi[\theta_{\text{Z}}](\boldsymbol{z})}{\Pr(\boldsymbol{x} \mid \theta)}
$$
^posterior
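a concrete instance of this [[posterior]] (a conjugate scalar-Gaussian choice for $\pi$ and $f$; $\sigma$ is illustrative):
$$
z \sim \mathcal{N}(0, 1), \quad x \mid z \sim \mathcal{N}(z, \sigma^{2})
\;\Longrightarrow\;
\tilde{\pi}[\theta, x](z) = \mathcal{N}\!\left(z ;\, \frac{x}{1 + \sigma^{2}},\, \frac{\sigma^{2}}{1 + \sigma^{2}}\right)
$$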
note this nice relationship between
1. [[covariate|observed data]] [[probability density function|log likelihood]],
2. [[Kullback-Leibler divergence|kld]] from the [[variational inference|variational approximation]] to the [[posterior]],
3. the [[evidence lower bound|elbo]]:
![[evidence lower bound#^elbo-decomposition]]
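written out (with $q$ the [[variational inference|variational approximation]] and $\mathcal{L}$ the [[evidence lower bound|elbo]]; notation here is illustrative):
$$
\log \Pr(\boldsymbol{x} \mid \theta) = \mathcal{L}[q, \theta](\boldsymbol{x}) + D_{\text{KL}}\!\left(q \,\middle\|\, \tilde{\pi}[\theta, \boldsymbol{x}]\right)
$$
since the left-hand side does not depend on $q$, maximizing the [[evidence lower bound|elbo]] over $q$ minimizes the [[Kullback-Leibler divergence|kld]] to the [[posterior]].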
# observation
in the context of a [[stochastic process]]:
often there is some "underlying" [[stochastic process]] $s_{t}$
but we only observe some function $o_{t} = O(s_{t}, \omega_{t+1})$
where $\omega_{t+1}$ is [[exogeneous|exogenous]] information (typically [[noise]]).
such a process is called a [[hidden Markov model]]
or a [[partially observed Markov decision process]]
in the [[sequential decision making]] case.
otherwise (if we observe the [[state]])
we just have a [[Markov chain]] / [[Markov decision process]].
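a minimal sketch of such an observation model (a two-state [[hidden Markov model]] with additive Gaussian observation [[noise]]; transition matrix, emission means, and noise scale are illustrative):
```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative two-state Markov chain over the hidden state s_t
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # transition probabilities, row = current state
mu = np.array([-1.0, 1.0])   # emission mean for each hidden state
sigma = 0.5                  # observation noise scale

T = 100
s = np.zeros(T, dtype=int)   # underlying process s_t (unobserved)
for t in range(1, T):
    s[t] = rng.choice(2, p=P[s[t - 1]])

# o_t = O(s_t, omega_{t+1}) = mu[s_t] + omega_{t+1}, with omega exogenous noise
omega = sigma * rng.normal(size=T)
o = mu[s] + omega
```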