Chapter 4 Bayesian Machinery - Bayesian Hierarchical Models in Ecology
$$\Pr(\theta \mid y) = \frac{\Pr(y \mid \theta)\,\Pr(\theta)}{\Pr(y)}$$

where
The posterior distribution (often abbreviated as the posterior) is simply a way of
saying the result of computing Bayes' Rule for a set of data and parameters.
Because we don't get point estimates for answers, we correctly call it a distribution,
and we add the term posterior because this is the distribution produced at the end.
You can think of the posterior as a statement about the probability of the
parameter value given the data you observed.
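For a one-parameter model, Bayes' Rule can be computed by brute force on a grid, which makes the posterior-as-distribution idea concrete. The following is a minimal Python sketch (the data, the flat prior, and all names here are illustrative assumptions, not from the text): suppose a species was detected in 7 of 10 surveys and we want the posterior for the detection probability p.

```python
import numpy as np

# Hypothetical data: 7 detections in 10 surveys (illustrative only).
# Compute the posterior for detection probability p on a grid.
p_grid = np.linspace(0, 1, 1001)          # candidate parameter values
prior = np.ones_like(p_grid)              # flat prior on p
likelihood = p_grid**7 * (1 - p_grid)**3  # binomial kernel for 7 of 10
unnormalized = likelihood * prior
posterior = unnormalized / unnormalized.sum()  # normalize so it sums to 1

# The posterior is a distribution over p, not a point estimate:
print(p_grid[np.argmax(posterior)])  # most probable value, 0.7
```

The whole `posterior` array is the answer; summaries like the most probable value or a credible interval are read off the distribution afterward.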
https://bookdown.org/steve_midway/BHME/Ch3.html
Simple models may not need informative priors; complex models may need
informative priors. You may not use informative priors when you are starting to
model. Regardless, always think about your priors, explore how they work, and be
prepared to defend them to reviewers and other peers.
“So far there are only few articles in ecological journals that have actually used this
asset of Bayesian statistics.” - Marc Kery (2010)
The normalizing constant is the quantity that scales the area under the curve to 1.
While this may seem technical (and it is), this is what allows us to interpret
Bayesian output probabilistically. The normalizing constant is a high-dimensional
integral that in most cases cannot be solved analytically. But we need it, so we
have to approximate it by simulation. To do this, we use Markov chain Monte Carlo
(MCMC).

Markov chain: transitions from one state to another (dependency)
Monte Carlo: chance applied to the transition (randomness)
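Those two ingredients can be seen in a few lines of code. Below is a minimal Metropolis sampler in Python, targeting a standard normal density for illustration (the target, step size, and starting value are assumptions made for this sketch, not something from the text): each proposal depends on the current state (the Markov chain part) and is accepted or rejected at random (the Monte Carlo part).

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(theta):
    # Unnormalized log density: standard normal, for illustration.
    return -0.5 * theta**2

# Metropolis sampler: each new state depends only on the current one
# (Markov chain), and the move is accepted or rejected by chance (Monte Carlo).
def metropolis(n_iter=10000, step=1.0):
    chain = np.empty(n_iter)
    theta = 5.0  # deliberately poor starting value
    for i in range(n_iter):
        proposal = theta + rng.normal(0, step)   # depends on current state
        if np.log(rng.uniform()) < log_target(proposal) - log_target(theta):
            theta = proposal                     # accept by chance
        chain[i] = theta
    return chain

chain = metropolis()
print(chain.mean(), chain.std())  # roughly 0 and 1, matching the target
```

Note that the sampler never needed the normalizing constant: only the ratio of (unnormalized) densities enters the acceptance rule, which is exactly why MCMC sidesteps the intractable integral.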
Figure 4.4: MCMC samplers are designed to sample parameter space with a
combination of dependency and randomness.
2. But she doesn't know the overall population; she can only ask the current
island's residents their population and the populations of adjacent islands.
One iteration includes as many random draws as there are parameters in the
model; in other words, the chain for each parameter is updated by using the last
value sampled for each of the other parameters, which is referred to as full
conditional sampling.
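Full conditional sampling is easiest to see in a Gibbs sampler. The sketch below (in Python, with an illustrative bivariate normal target with correlation 0.8; all values are assumptions for this example) shows one iteration making one draw per parameter, each conditioned on the most recent value of the other.

```python
import numpy as np

rng = np.random.default_rng(42)
rho = 0.8  # correlation between the two parameters (illustrative)

# Gibbs sampling: one iteration makes one random draw per parameter, each
# from its full conditional given the latest values of the other parameters.
def gibbs(n_iter=20000):
    x = y = 0.0
    draws = np.empty((n_iter, 2))
    sd = np.sqrt(1 - rho**2)
    for i in range(n_iter):
        x = rng.normal(rho * y, sd)  # x | y : uses the latest y
        y = rng.normal(rho * x, sd)  # y | x : uses the x just drawn
        draws[i] = x, y
    return draws

draws = gibbs()
print(np.corrcoef(draws.T)[0, 1])  # close to rho = 0.8
```

The key detail is inside the loop: the draw for `y` uses the `x` sampled in the same iteration, which is what "updated by using the last value sampled for each of the other parameters" means in practice.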
Although the nuts and bolts of MCMC can get very detailed and may go beyond the
operational knowledge you need to run models, there are some practical issues
that you will need to be comfortable handling, including initial values, burn-in,
convergence, and thinning.
4.3.4 Burn-in
Figure 4.6: Burn-in is the initial MCMC sampling that may take place outside of the
highest probability region for a parameter.
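In practice, handling burn-in amounts to discarding the first chunk of each chain before summarizing. The Python sketch below fakes a chain with a simple autoregressive process started far from the high-probability region (the starting value, chain length, and burn-in length are all illustrative assumptions, not recommendations from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "chain" started far from the high-probability region: an AR(1)
# process that drifts from 50 toward a stationary mean of 0 (illustrative).
n_iter, n_burn = 5000, 500
chain = np.empty(n_iter)
theta = 50.0  # bad initial value
for i in range(n_iter):
    theta = 0.9 * theta + 0.1 * rng.normal()
    chain[i] = theta

# Discard the burn-in before summarizing the posterior.
posterior = chain[n_burn:]
print(chain.mean(), posterior.mean())  # the un-trimmed mean is inflated
```

Had the burn-in samples been kept, the early drift from the bad initial value would bias every posterior summary.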
4.3.5 Convergence
Figure 4.8: Non-convergence is not always obvious. These chains are not
converging despite overlapping.
Convergence Diagnostics
4. Others
Figure 4.9: Histograms and density plots are a good way to visualize convergence.
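Visual checks can be backed up with a numerical diagnostic. A common one is the Gelman-Rubin statistic (often written R-hat), which compares between-chain and within-chain variance; values near 1 suggest the chains have converged to the same distribution. Below is a minimal Python sketch of the basic (unsplit) version of that statistic; the simulated chains are illustrative assumptions, not output from any real model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Gelman-Rubin R-hat: compares between-chain and within-chain variance.
# Values near 1.0 suggest the chains agree; much larger values do not.
def rhat(chains):
    chains = np.asarray(chains)              # shape: (m chains, n iterations)
    m, n = chains.shape
    w = chains.var(axis=1, ddof=1).mean()    # mean within-chain variance
    b = n * chains.mean(axis=1).var(ddof=1)  # between-chain variance
    var_hat = (n - 1) / n * w + b / n        # pooled variance estimate
    return np.sqrt(var_hat / w)

good = rng.normal(0, 1, size=(4, 1000))              # four chains, same target
bad = good + np.array([[0.0], [0.0], [0.0], [5.0]])  # one chain stuck elsewhere
print(rhat(good), rhat(bad))  # near 1.0 vs. much larger than 1
```

Like the overlapping-but-not-converging chains in Figure 4.8, a diagnostic such as this can flag problems that a casual glance at trace plots misses.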
4.3.6 Thinning
There is some artistry in MCMC, or at least some decisions that need to be made by
the modeler. Your final number of samples in your posterior is often much smaller
than your total iterations, because in handling the MCMC iterations you will need to
eliminate some samples (e.g., burn-in and thinning). Many MCMC adjustments you
make will not result in major changes, and this is typically a good thing because it
means you are in the parameter space you need to be in. Other times, you will
have a model issue and some MCMC adjustment will make a (good) difference.
Because computation is cheap, especially for simple models, it is common to
overdo the iterations a little. This is OK.
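Thinning itself is a one-line operation: keep every k-th sample. The Python sketch below builds a deliberately autocorrelated toy chain (the AR(1) process and the choice of k = 10 are illustrative assumptions) and shows how thinning reduces the dependence between retained samples.

```python
import numpy as np

rng = np.random.default_rng(7)

# A correlated toy chain (AR(1), for illustration): successive samples are
# highly dependent, so consecutive draws carry redundant information.
n_iter = 20000
chain = np.empty(n_iter)
theta = 0.0
for i in range(n_iter):
    theta = 0.95 * theta + rng.normal()
    chain[i] = theta

# Thinning: keep every k-th sample to reduce autocorrelation (and storage).
k = 10
thinned = chain[::k]

def lag1(x):
    # Lag-1 autocorrelation of a sequence.
    return np.corrcoef(x[:-1], x[1:])[0, 1]

print(lag1(chain), lag1(thinned))  # high for the raw chain, much lower thinned
```

The thinned chain is shorter, which is exactly why the final posterior sample count is smaller than the total number of iterations.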
And here is a great simulator for exploring visually how changes in MCMC
settings play out: https://chi-feng.github.io/mcmc-demo/app.html