Introduction

An advantage of BSEM is that priors can be used to set up soft constraints in the model: instead of fixing a parameter to 0, we estimate it with a strong prior centered at 0. The parameter is still estimated, but the prior restricts the range of plausible values.

Muthén and Asparouhov (2012) suggested this approach as a way to estimate all possible cross-loadings in a CFA. If the posterior distribution of a restricted parameter escapes the strong prior (i.e., concentrates on values the prior considers implausible), this can be interpreted as a suggested model modification: the parameter should be less restricted, or the prior distribution should be relaxed.

In this tutorial we show how to estimate a CFA in which all possible cross-loadings are restricted by strong priors.

Cross-loadings

We will show an example with the Holzinger and Swineford (1939) data. First we will estimate the regular model with no cross-loadings and default priors.

library(blavaan)

HS.model <- ' visual  =~ x1 + x2 + x3
              textual =~ x4 + x5 + x6
              speed   =~ x7 + x8 + x9 '

fit_df <- bcfa(HS.model, data = HolzingerSwineford1939,
               std.lv = TRUE, meanstructure = TRUE)

We can see the overall model results with the summary() function, looking at the posterior distributions of the factor loadings, correlations, intercepts, and variances.

summary(fit_df)
## blavaan 0.5.6.1316 ended normally after 1000 iterations
## 
##   Estimator                                      BAYES
##   Optimization method                             MCMC
##   Number of model parameters                        30
## 
##   Number of observations                           301
## 
##   Statistic                                 MargLogLik         PPP
##   Value                                      -3871.046       0.000
## 
## Parameter Estimates:
## 
## 
## Latent Variables:
##                    Estimate  Post.SD pi.lower pi.upper     Rhat    Prior       
##   visual =~                                                                    
##     x1                0.912    0.086    0.744    1.086    1.001    normal(0,10)
##     x2                0.500    0.082    0.340    0.660    1.000    normal(0,10)
##     x3                0.661    0.078    0.512    0.815    1.000    normal(0,10)
##   textual =~                                                                   
##     x4                1.002    0.058    0.891    1.119    1.000    normal(0,10)
##     x5                1.116    0.064    0.995    1.242    1.000    normal(0,10)
##     x6                0.928    0.056    0.823    1.042    1.001    normal(0,10)
##   speed =~                                                                     
##     x7                0.619    0.077    0.463    0.765    1.001    normal(0,10)
##     x8                0.733    0.079    0.573    0.887    1.000    normal(0,10)
##     x9                0.681    0.079    0.530    0.840    1.000    normal(0,10)
## 
## Covariances:
##                    Estimate  Post.SD pi.lower pi.upper     Rhat    Prior       
##   visual ~~                                                                    
##     textual           0.449    0.067    0.311    0.576    1.000     lkj_corr(1)
##     speed             0.462    0.086    0.292    0.622    1.000     lkj_corr(1)
##   textual ~~                                                                   
##     speed             0.278    0.071    0.135    0.417    1.000     lkj_corr(1)
## 
## Intercepts:
##                    Estimate  Post.SD pi.lower pi.upper     Rhat    Prior       
##    .x1                4.936    0.067    4.803    5.064    0.999    normal(0,32)
##    .x2                6.089    0.069    5.961    6.230    0.999    normal(0,32)
##    .x3                2.251    0.065    2.123    2.385    1.000    normal(0,32)
##    .x4                3.061    0.067    2.929    3.193    1.000    normal(0,32)
##    .x5                4.340    0.074    4.194    4.481    1.000    normal(0,32)
##    .x6                2.186    0.064    2.061    2.312    1.000    normal(0,32)
##    .x7                4.185    0.062    4.062    4.309    1.000    normal(0,32)
##    .x8                5.527    0.058    5.416    5.638    0.999    normal(0,32)
##    .x9                5.374    0.057    5.261    5.488    1.000    normal(0,32)
##     visual            0.000                                                    
##     textual           0.000                                                    
##     speed             0.000                                                    
## 
## Variances:
##                    Estimate  Post.SD pi.lower pi.upper     Rhat    Prior       
##    .x1                0.551    0.127    0.290    0.791    1.000 gamma(1,.5)[sd]
##    .x2                1.153    0.107    0.954    1.383    1.000 gamma(1,.5)[sd]
##    .x3                0.859    0.100    0.674    1.062    1.000 gamma(1,.5)[sd]
##    .x4                0.378    0.050    0.284    0.484    1.000 gamma(1,.5)[sd]
##    .x5                0.453    0.061    0.337    0.578    1.000 gamma(1,.5)[sd]
##    .x6                0.365    0.045    0.281    0.458    1.000 gamma(1,.5)[sd]
##    .x7                0.820    0.092    0.653    1.016    1.000 gamma(1,.5)[sd]
##    .x8                0.502    0.095    0.317    0.697    1.001 gamma(1,.5)[sd]
##    .x9                0.567    0.093    0.374    0.738    1.002 gamma(1,.5)[sd]
##     visual            1.000                                                    
##     textual           1.000                                                    
##     speed             1.000

Next, we will add all possible cross-loadings with a strong prior of N(0, σ = 0.08). The prior centers the loadings around 0 and allows them little space to move.
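
To see how restrictive this prior is, we can compute its central 95% interval with base R's qnorm(); this is only a check on the prior itself, not part of the model.

## About 95% of the prior mass lies within roughly +/- 0.16,
## so a priori the cross-loadings are kept very close to 0.
qnorm(c(.025, .975), mean = 0, sd = .08)
## [1] -0.1567971  0.1567971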

HS.model.cl <- ' visual  =~ x1 + x2 + x3
                 textual =~ x4 + x5 + x6
                 speed   =~ x7 + x8 + x9

                 ## Cross-loadings
                 visual  =~ prior("normal(0,.08)")*x4 + prior("normal(0,.08)")*x5 + prior("normal(0,.08)")*x6 +
                            prior("normal(0,.08)")*x7 + prior("normal(0,.08)")*x8 + prior("normal(0,.08)")*x9
                 textual =~ prior("normal(0,.08)")*x1 + prior("normal(0,.08)")*x2 + prior("normal(0,.08)")*x3 +
                            prior("normal(0,.08)")*x7 + prior("normal(0,.08)")*x8 + prior("normal(0,.08)")*x9
                 speed   =~ prior("normal(0,.08)")*x1 + prior("normal(0,.08)")*x2 + prior("normal(0,.08)")*x3 +
                            prior("normal(0,.08)")*x4 + prior("normal(0,.08)")*x5 + prior("normal(0,.08)")*x6 '

fit_cl <- bcfa(HS.model.cl, data = HolzingerSwineford1939,
               std.lv = TRUE, meanstructure = TRUE)

It is important that, for each factor, the first variable after =~ is one whose loading we expect to be far from 0. In the model above, we therefore specified the regular CFA loadings first (which we expect to be large), and the loadings with small-variance priors on separate lines. This matters because, in blavaan, the first loading is either constrained to be positive or fixed to 1 (depending on std.lv); if the posterior distribution of that constrained loading is centered near 0, we may run into identification problems. Reverse-coded variables can also be problematic here, because a positivity constraint on a reverse-coded loading can force the other loadings of that factor to take negative values. If you use informative priors in this situation, you should verify that the prior density is on the correct side of 0, as in the sketch below.
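
For instance, a minimal sketch with hypothetical indicators y1-y3, where y3 is reverse-coded, would center the informative prior on negative values:

## Hypothetical example: y3 is reverse-coded, so its informative
## prior sits below 0 instead of being centered at 0.
rev.model <- ' f1 =~ y1 + y2 + prior("normal(-0.5,.2)")*y3 '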

After estimation, we can look at the summary() of this model and evaluate the cross-loadings. In particular, the posterior mean (Estimate) and credible interval show whether any of the cross-loadings seem large enough to be kept in the model.

summary(fit_cl)
## blavaan 0.5.6.1316 ended normally after 1000 iterations
## 
##   Estimator                                      BAYES
##   Optimization method                             MCMC
##   Number of model parameters                        48
## 
##   Number of observations                           301
## 
##   Statistic                                 MargLogLik         PPP
##   Value                                      -3858.912       0.138
## 
## Parameter Estimates:
## 
## 
## Latent Variables:
##                    Estimate  Post.SD pi.lower pi.upper     Rhat    Prior       
##   visual =~                                                                    
##     x1                0.759    0.097    0.581    0.963    1.000    normal(0,10)
##     x2                0.567    0.089    0.394    0.741    1.000    normal(0,10)
##     x3                0.771    0.095    0.586    0.955    1.001    normal(0,10)
##   textual =~                                                                   
##     x4                0.982    0.066    0.860    1.115    1.000    normal(0,10)
##     x5                1.156    0.072    1.020    1.302    1.000    normal(0,10)
##     x6                0.893    0.061    0.779    1.018    1.000    normal(0,10)
##   speed =~                                                                     
##     x7                0.727    0.085    0.571    0.900    1.000    normal(0,10)
##     x8                0.793    0.084    0.635    0.961    1.000    normal(0,10)
##     x9                0.542    0.073    0.400    0.687    0.999    normal(0,10)
##   visual =~                                                                    
##     x4                0.032    0.057   -0.080    0.139    1.000   normal(0,.08)
##     x5               -0.073    0.063   -0.199    0.048    1.000   normal(0,.08)
##     x6                0.063    0.055   -0.046    0.166    0.999   normal(0,.08)
##     x7               -0.130    0.063   -0.259   -0.010    1.000   normal(0,.08)
##     x8               -0.007    0.066   -0.139    0.119    1.000   normal(0,.08)
##     x9                0.194    0.058    0.073    0.309    1.000   normal(0,.08)
##   textual =~                                                                   
##     x1                0.110    0.064   -0.013    0.234    0.999   normal(0,.08)
##     x2                0.007    0.059   -0.110    0.123    1.000   normal(0,.08)
##     x3               -0.085    0.062   -0.213    0.031    1.000   normal(0,.08)
##     x7                0.016    0.061   -0.107    0.129    1.000   normal(0,.08)
##     x8               -0.038    0.061   -0.161    0.081    0.999   normal(0,.08)
##     x9                0.032    0.056   -0.079    0.141    0.999   normal(0,.08)
##   speed =~                                                                     
##     x1                0.045    0.066   -0.085    0.174    0.999   normal(0,.08)
##     x2               -0.048    0.064   -0.173    0.073    0.999   normal(0,.08)
##     x3                0.029    0.066   -0.099    0.161    1.000   normal(0,.08)
##     x4               -0.005    0.057   -0.116    0.104    0.999   normal(0,.08)
##     x5                0.008    0.062   -0.114    0.127    1.001   normal(0,.08)
##     x6                0.001    0.056   -0.112    0.110    1.001   normal(0,.08)
## 
## Covariances:
##                    Estimate  Post.SD pi.lower pi.upper     Rhat    Prior       
##   visual ~~                                                                    
##     textual           0.373    0.091    0.190    0.543    0.999     lkj_corr(1)
##     speed             0.351    0.109    0.120    0.545    1.000     lkj_corr(1)
##   textual ~~                                                                   
##     speed             0.255    0.104    0.049    0.453    0.999     lkj_corr(1)
## 
## Intercepts:
##                    Estimate  Post.SD pi.lower pi.upper     Rhat    Prior       
##    .x1                4.935    0.068    4.803    5.065    1.000    normal(0,32)
##    .x2                6.089    0.070    5.946    6.226    0.999    normal(0,32)
##    .x3                2.250    0.066    2.121    2.384    0.999    normal(0,32)
##    .x4                3.061    0.066    2.928    3.185    1.000    normal(0,32)
##    .x5                4.340    0.075    4.197    4.487    1.000    normal(0,32)
##    .x6                2.186    0.064    2.059    2.308    1.002    normal(0,32)
##    .x7                4.187    0.062    4.066    4.306    1.000    normal(0,32)
##    .x8                5.528    0.060    5.410    5.645    1.000    normal(0,32)
##    .x9                5.375    0.056    5.266    5.484    0.999    normal(0,32)
##     visual            0.000                                                    
##     textual           0.000                                                    
##     speed             0.000                                                    
## 
## Variances:
##                    Estimate  Post.SD pi.lower pi.upper     Rhat    Prior       
##    .x1                0.680    0.107    0.469    0.891    1.000 gamma(1,.5)[sd]
##    .x2                1.091    0.109    0.885    1.321    0.999 gamma(1,.5)[sd]
##    .x3                0.713    0.112    0.497    0.936    1.001 gamma(1,.5)[sd]
##    .x4                0.389    0.051    0.297    0.491    1.000 gamma(1,.5)[sd]
##    .x5                0.411    0.066    0.289    0.543    0.999 gamma(1,.5)[sd]
##    .x6                0.374    0.043    0.295    0.465    1.000 gamma(1,.5)[sd]
##    .x7                0.712    0.094    0.524    0.900    0.999 gamma(1,.5)[sd]
##    .x8                0.434    0.092    0.243    0.610    1.000 gamma(1,.5)[sd]
##    .x9                0.589    0.063    0.475    0.723    0.999 gamma(1,.5)[sd]
##     visual            1.000                                                    
##     textual           1.000                                                    
##     speed             1.000

We suggest not simply checking whether the credible interval excludes 0 (which mimics a null-hypothesis test), but evaluating whether the interval bound closer to 0 is far enough from 0 to be practically relevant, rather than merely different from 0.
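
One way to pull the cross-loadings out for this kind of inspection is through the parameter table. The sketch below assumes the lavaan-style parTable() accessor (which blavaan fits inherit) and that the prior labels are stored in the table's prior column:

## Extract all loadings and keep only those that received the
## small-variance prior, then inspect their posterior means.
pt <- parTable(fit_cl)
cross <- subset(pt, op == "=~" & prior == "normal(0,.08)")
cross[, c("lhs", "op", "rhs", "est")]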

Caveats

The model with all possible cross-loadings should not be kept as the final analysis model; it should be used as an intermediate step for making decisions about model changes. This is for two main reasons: (1) the model is overfitted and will show good overall fit merely because it includes a large number of nuisance parameters. In this example the posterior predictive p-value goes from ppp = 0 to ppp = 0.138, not because the model is better theoretically, but because we are inflating the model fit. And (2), small-variance priors can prevent the detection of important misspecifications in Bayesian confirmatory factor analysis, because the misfit is diluted across the many nuisance parameters, obscuring underlying problems in the model (Jorgensen et al. 2019).
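
If a more formal comparison of the two models is desired, blavaan's blavCompare() function contrasts the fits on predictive criteria (WAIC and LOO), which penalize the extra nuisance parameters instead of rewarding them:

## Compare the original model and the cross-loading model on
## WAIC / LOO rather than relying on the inflated ppp.
blavCompare(fit_df, fit_cl)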

References

Holzinger, K. J., and F. A. Swineford. 1939. A Study of Factor Analysis: The Stability of a Bi-Factor Solution. Supplementary Educational Monograph 48. Chicago: University of Chicago Press.
Jorgensen, Terrence D., Mauricio Garnier-Villarreal, Sunthud Pornprasertmanit, and Jaehoon Lee. 2019. “Small-Variance Priors Can Prevent Detecting Important Misspecifications in Bayesian Confirmatory Factor Analysis.” In Quantitative Psychology: The 83rd Annual Meeting of the Psychometric Society, New York, NY, 2018, edited by Marie Wiberg, Steven Culpepper, Rianne Janssen, Jorge González, and Dylan Molenaar, 265:255–63. Springer Proceedings in Mathematics & Statistics. New York, NY, US: Springer. https://doi.org/10.1007/978-3-030-01310-3_23.
Muthén, Bengt, and Tihomir Asparouhov. 2012. “Bayesian Structural Equation Modeling: A More Flexible Representation of Substantive Theory.” Psychological Methods 17 (3): 313–35. https://doi.org/10.1037/a0026802.