Talk:Statistics/Probability/Bayesian

Better Explanation
"The opposite of "Bayesian" is sometimes referred to as "Classical Statistics"

I think this ought to be better explained.

What?
It would be nice if we explained WHY we're doing the things we're doing in this section. Where did that formula come from? Thin air? I saw a lot of this in the standard deviation article also.

Bayesian?
Well, this example uses Bayes' theorem but it isn't really Bayesian. In Bayesian analysis one typically considers an unknown parameter, say $\theta$, as being a random variable characterised by a continuous distribution. The prior distribution $p(\theta)$ is often chosen to be quite flat, but it can be peaked to reflect a priori information. The posterior distribution $p(\theta \mid x)$, where $x$ is the data, is more sharply peaked. If there's sufficient data to overwhelm the prior, the posterior is mainly dependent on the data, not the prior.

Bayes' theorem in this context is $$p(\theta \mid x) \propto p(x \mid \theta) \, p(\theta)$$. The normalising factor (to bring the total probability mass to 1) can be found by integration (marginalisation), so Bayes' theorem in full is:

$$p(\theta \mid x) = \frac{p(x \mid \theta) \, p(\theta)} {\int p(x \mid \theta) \, p(\theta) \, d\theta }$$

where the integral is taken over all possible values of $\theta$.
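As a worked instance of the formula above (my addition: none of these choices are specified in the original comment), take a binomial likelihood with $x$ successes in $n$ trials and a conjugate Beta$(a, b)$ prior. The integral in the denominator then has a closed form:

$$p(\theta \mid x) = \frac{\theta^{x}(1-\theta)^{n-x} \, \theta^{a-1}(1-\theta)^{b-1}}{\int_0^1 \theta^{x}(1-\theta)^{n-x} \, \theta^{a-1}(1-\theta)^{b-1} \, d\theta} = \frac{\theta^{x+a-1}(1-\theta)^{n-x+b-1}}{B(x+a,\; n-x+b)}$$

where $B(\cdot,\cdot)$ is the Beta function (the binomial coefficient cancels between numerator and denominator). So the posterior is itself a Beta distribution, Beta$(x+a,\, n-x+b)$.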

For instance, if you were trying to determine whether a coin is fair, $\theta$ would be the probability-of-heads parameter of the binomial distribution. But things can get a lot more complicated than this, with multiple parameters and hierarchical models.
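To make the coin example concrete, here is a minimal numerical sketch (my addition; the data of 7 heads in 10 flips and the flat prior are made up for illustration). It evaluates the posterior on a grid of $\theta$ values and does the marginalisation integral numerically, mirroring the formula above:

```python
import numpy as np

# Grid approximation of the posterior for a coin's heads-probability theta.
# Hypothetical data for illustration: 7 heads in 10 flips; flat prior on [0, 1].
heads, flips = 7, 10

theta = np.linspace(0.0, 1.0, 1001)   # grid of candidate values for theta
dtheta = theta[1] - theta[0]

prior = np.ones_like(theta)                                 # flat prior p(theta)
likelihood = theta**heads * (1.0 - theta)**(flips - heads)  # binomial kernel p(x | theta)

# Bayes' theorem: multiply prior by likelihood, then normalise by the integral,
# here approximated as a Riemann sum over the grid.
unnorm = likelihood * prior
posterior = unnorm / (unnorm.sum() * dtheta)

print("posterior mode:", theta[np.argmax(posterior)])       # ~0.7 with a flat prior
print("P(theta > 0.5 | data):", posterior[theta > 0.5].sum() * dtheta)
```

With the conjugate Beta prior worked out above, this same posterior comes out in closed form; the grid approach only earns its keep when no conjugate form exists, and for the multi-parameter hierarchical models mentioned above one would move to something like MCMC instead.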

--84.9.67.105 (talk) 15:10, 11 April 2008 (UTC)