Convergence and Efficiency Evaluation
Mauricio Garnier-Villarreal
Source: vignettes/convergence_efficiency.Rmd
Introduction
When Bayesian models are estimated with a Markov chain Monte Carlo (MCMC) sampler, estimation does not stop once some convergence criterion has been met. The sampler runs for as long as requested (determined by the burnin and sample arguments), and afterwards you need to evaluate the convergence and efficiency of the estimated posterior distributions. You should only analyze the results if convergence has been achieved, as judged by the metrics described below.
For this example we will use the Industrialization and Political Democracy example (Bollen 1989).
model <- '
# latent variable definitions
ind60 =~ x1 + x2 + x3
dem60 =~ a*y1 + b*y2 + c*y3 + d*y4
dem65 =~ a*y5 + b*y6 + c*y7 + d*y8
# regressions
dem60 ~ ind60
dem65 ~ ind60 + dem60
# residual correlations
y1 ~~ y5
y2 ~~ y4 + y6
y3 ~~ y7
y4 ~~ y8
y6 ~~ y8
'
fit <- bsem(model, data=PoliticalDemocracy,
            std.lv=TRUE, meanstructure=TRUE, n.chains=3,
            burnin=500, sample=1000)
Convergence
The primary convergence diagnostic is \(\hat{R}\), which compares the between- and within-chain samples of model parameters and other univariate quantities of interest (Vehtari et al. 2021). If chains have not mixed well (i.e., the between- and within-chain estimates do not agree), \(\hat{R}\) is larger than 1. We recommend running at least three chains by default and only using the posterior samples if \(\hat{R} < 1.05\) for all parameters.
blavaan presents the \(\hat{R}\) reported by the underlying MCMC program, either Stan or JAGS (Stan by default). We can obtain the \(\hat{R}\) values from the summary() function, and we can also extract them with the blavInspect() function:
blavInspect(fit, "rhat")
## ind60=~x1 ind60=~x2 ind60=~x3 a b c
## 1.0018012 1.0010839 1.0012642 0.9997710 0.9994897 1.0009188
## d a b c d dem60~ind60
## 0.9999628 0.9997710 0.9994897 1.0009188 0.9999628 1.0004135
## dem65~ind60 dem65~dem60 y1~~y5 y2~~y4 y2~~y6 y3~~y7
## 1.0012209 1.0003844 1.0005313 1.0003462 0.9997678 1.0002566
## y4~~y8 y6~~y8 x1~~x1 x2~~x2 x3~~x3 y1~~y1
## 1.0007237 1.0034017 0.9999348 1.0010513 0.9990210 0.9997839
## y2~~y2 y3~~y3 y4~~y4 y5~~y5 y6~~y6 y7~~y7
## 1.0004307 0.9993354 0.9999113 1.0001222 1.0010015 1.0013247
## y8~~y8 x1~1 x2~1 x3~1 y1~1 y2~1
## 1.0042064 1.0022089 1.0019341 1.0013299 1.0005310 1.0005599
## y3~1 y4~1 y5~1 y6~1 y7~1 y8~1
## 1.0007579 1.0011991 1.0003426 1.0001690 1.0007672 1.0002924
With large models it can be cumbersome to look over all of these entries. We can instead find the largest \(\hat{R}\) to verify that all are below \(1.05\) (the "psrf" argument, short for potential scale reduction factor, is a synonym for "rhat"):
max(blavInspect(fit, "psrf"))
## [1] 1.004206
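Equivalently, a quick programmatic check (a sketch in base R, applied to the fitted object above) confirms that every parameter satisfies the cutoff:

```r
# TRUE if every parameter's Rhat is below the 1.05 cutoff
all(blavInspect(fit, "rhat") < 1.05)
```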
If all \(\hat{R} < 1.05\), we can establish that the MCMC chains have converged to a stable solution. If the model has not converged, you might increase the number of burnin iterations:
fit <- bsem(model, data=PoliticalDemocracy,
            std.lv=TRUE, meanstructure=TRUE, n.chains=3,
            burnin=1000, sample=1000)
and/or change the model priors with the dpriors() function. These steps address models that fail to converge because they need more iterations, or because of a mismatch between the priors and the data. As a rule of thumb, we seldom see a model require more than 1,000 burnin samples in Stan. If your model is not converging after 1,000 burnin samples, it is likely that the default prior distributions clash with your data. This can happen, e.g., if your variables contain values in the 100s or 1000s.
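As an illustration of this, modified priors can be supplied to bsem() via the dpriors argument. The specific prior strings below are examples, not recommendations; see ?dpriors for the parameter labels (e.g., lambda for loadings):

```r
# sketch: widen the default loading priors
# (illustrative values; adapt to the scale of your data)
mydp <- dpriors(lambda = "normal(0,5)")

fit <- bsem(model, data=PoliticalDemocracy,
            std.lv=TRUE, meanstructure=TRUE, n.chains=3,
            burnin=1000, sample=1000, dpriors=mydp)
```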
Efficiency
We should also evaluate the efficiency of the posterior samples. Effective sample size (ESS) is a useful measure for sampling efficiency, and is well defined even if the chains do not have finite mean or variance (Vehtari et al. 2021).
In short, the posterior samples produced by MCMC are autocorrelated. This means that, if you draw 500 posterior samples, you do not have 500 independent pieces of information about the posterior distribution, because the samples are autocorrelated. The ESS metric is like a currency conversion, telling us how much our autocorrelated samples are worth if they were converted to independent samples. In blavaan we can print it from the summary() function with the neff argument:
summary(fit, neff=T)
We can also extract the ESS values by themselves with the blavInspect() function:
blavInspect(fit, "neff")
## ind60=~x1 ind60=~x2 ind60=~x3 a b c
## 1916.348 1772.486 2074.781 1465.364 1725.618 1610.086
## d a b c d dem60~ind60
## 1710.565 1465.364 1725.618 1610.086 1710.565 2672.235
## dem65~ind60 dem65~dem60 y1~~y5 y2~~y4 y2~~y6 y3~~y7
## 2791.356 2820.097 1811.400 2280.915 2736.333 2531.033
## y4~~y8 y6~~y8 x1~~x1 x2~~x2 x3~~x3 y1~~y1
## 2091.584 1126.891 2136.111 1679.905 3783.613 2439.939
## y2~~y2 y3~~y3 y4~~y4 y5~~y5 y6~~y6 y7~~y7
## 2545.773 3355.568 2242.123 1697.890 1843.425 1979.426
## y8~~y8 x1~1 x2~1 x3~1 y1~1 y2~1
## 1059.274 1114.143 1065.650 1189.917 1228.787 1793.128
## y3~1 y4~1 y5~1 y6~1 y7~1 y8~1
## 1571.198 1340.890 1310.634 1268.696 1192.775 1165.473
ESS is a sample size, so it should be at least 100 (optimally, much more than 100) times the number of chains in order for estimates of the posterior means and quantiles to be reliable. In this example, because we have 3 chains, we would want to see at least neff=300 for every parameter.
And we can easily find the lowest ESS with the min()
function:
min(blavInspect(fit, "neff"))
## [1] 1059.274
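Putting the two diagnostics together, a small helper function (a sketch, not part of blavaan) can flag a fitted model that fails either the convergence or the efficiency check:

```r
# sketch: combined convergence/efficiency check for a blavaan fit,
# using the 1.05 Rhat cutoff and 100-per-chain ESS rule of thumb
check_fit <- function(fit, rhat_cut = 1.05, ess_per_chain = 100) {
  n_chains <- blavInspect(fit, "n.chains")
  c(converged = max(blavInspect(fit, "rhat")) < rhat_cut,
    efficient = min(blavInspect(fit, "neff")) >= ess_per_chain * n_chains)
}

check_fit(fit)
```

For the fit above, both checks pass: the largest \(\hat{R}\) is about 1.004 and the smallest ESS is about 1059, well above the 300 threshold for 3 chains.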