Bayesian Inference
Let $X_1$ be the vector of observable random variables, $X_2$ the vector of latent random variables, and $\Theta$ the vector of parameters. Then

$$ f(x_2,\theta|x_1)=\frac{f(x_1|x_2,\theta)\,f(x_2|\theta)\,f(\theta)}{f(x_1)} $$
Maximum Likelihood Estimation
Suppose we have a set of independent and identically distributed (i.i.d.) data $X_1,\dots,X_n$. Maximum likelihood estimation picks the parameter that maximizes the log-likelihood:

$$ \hat{\theta}_{MLE}=\underset{\theta}{argmax}~\sum\limits_{X_i} \log p(X_i|\theta) $$
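As a minimal sketch of this idea (the notes do not fix a specific model, so a Bernoulli coin-flip model and the data below are assumed for illustration), the MLE can be found by maximizing the summed log-likelihood over a grid of candidate parameters; for the Bernoulli case it coincides with the sample mean:

```python
import math

# Hypothetical i.i.d. Bernoulli(theta) observations (coin flips).
data = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]

def log_likelihood(theta, xs):
    """Sum of log p(x_i | theta) for Bernoulli data."""
    return sum(math.log(theta) if x == 1 else math.log(1.0 - theta) for x in xs)

# Grid search over theta in (0, 1).
grid = [i / 1000 for i in range(1, 1000)]
theta_mle = max(grid, key=lambda t: log_likelihood(t, data))

# For Bernoulli data the closed-form MLE is the sample mean.
print(theta_mle, sum(data) / len(data))
```

The grid search is deliberately naive; it exists only to make the "maximize the log-likelihood" step concrete, and in practice one would use the closed form or a numerical optimizer.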
Maximum A Posteriori Estimation
Maximum a posteriori estimation incorporates prior knowledge: it maximizes a posterior function rather than the likelihood alone. Specifically, by Bayes' rule,
$$ p(\theta|x)=\frac{p(x|\theta)p(\theta)}{p(x)} $$
the MAP estimate is
$$ \hat{\theta}_{MAP}=\underset{\theta}{argmax}~ p(x|\theta)p(\theta)=\underset{\theta}{argmax}~{\sum\limits_{X_i} \log p(X_i|\theta) + \log p(\theta)} $$
Compared with maximum likelihood estimation, the MAP objective therefore has one extra term: the log-prior $\log p(\theta)$.
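To make the extra log-prior term concrete, here is a sketch under the same assumed Bernoulli model, with an assumed Beta(2, 2) prior on $\theta$ (one pseudo-head and one pseudo-tail of prior knowledge). The objective is exactly the MAP formula above: log-likelihood plus log-prior, up to the prior's normalizing constant, which does not affect the argmax:

```python
import math

# Same hypothetical Bernoulli data as before; Beta(a, b) prior on theta.
data = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
a, b = 2.0, 2.0  # assumed prior pseudo-counts

def log_posterior(theta, xs):
    """log p(x | theta) + log p(theta), up to the Beta normalizing constant."""
    ll = sum(math.log(theta) if x == 1 else math.log(1.0 - theta) for x in xs)
    log_prior = (a - 1) * math.log(theta) + (b - 1) * math.log(1.0 - theta)
    return ll + log_prior

grid = [i / 1000 for i in range(1, 1000)]
theta_map = max(grid, key=lambda t: log_posterior(t, data))

# Closed form for the Beta-Bernoulli model: (heads + a - 1) / (n + a + b - 2).
s, n = sum(data), len(data)
print(theta_map, (s + a - 1) / (n + a + b - 2))
```

Note how the prior pulls the estimate away from the raw sample mean toward the prior's mode of 0.5, which is the practical effect of the added $\log p(\theta)$ term.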
Bayesian Inference
The two methods above return only a single point estimate of $\theta$.
Bayesian inference extends the MAP approach by maintaining a distribution over the parameters $\theta$ instead of making a single direct estimate. Not only does this encode the maximum (a posteriori) value of the parameters given the data, it also provides the posterior expectation as another parameter estimate, along with variance information as a measure of estimation quality or confidence.
Specifically, given the data, denote:

- $X$: observed data
- $\theta$: latent variables
Here, compared with
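The full-posterior view can be sketched with the same assumed Beta-Bernoulli model, where conjugacy makes the posterior a Beta distribution in closed form, so the expectation, variance, and mode (the MAP value) are all directly available:

```python
# Conjugate Beta-Bernoulli model: with a Beta(a0, b0) prior and Bernoulli
# likelihood, the posterior over theta is Beta(a0 + heads, b0 + tails).
data = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]   # hypothetical observations
a0, b0 = 2.0, 2.0                        # assumed Beta prior

s, n = sum(data), len(data)
a_post, b_post = a0 + s, b0 + (n - s)    # posterior is Beta(a_post, b_post)

# Unlike MLE/MAP, we keep the whole distribution and can read off summaries:
post_mean = a_post / (a_post + b_post)                        # expectation
post_var = (a_post * b_post) / ((a_post + b_post) ** 2 * (a_post + b_post + 1))
post_mode = (a_post - 1) / (a_post + b_post - 2)              # the MAP value

print(post_mean, post_var, post_mode)
```

This illustrates the point in the text: the posterior's mode recovers the MAP estimate, while the mean offers an alternative estimate and the variance quantifies confidence in it.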