
Introduction

An advantage of BSEM is that we can use priors to set up soft constraints in the model: instead of fixing a parameter, we estimate it with a strong (informative) prior. The parameter is still estimated, but the prior restricts the range of values it can plausibly take.

Muthén and Asparouhov (2012) suggested this approach as a way to estimate all possible cross-loadings in a CFA. If the posterior distribution of a restricted parameter includes values outside the range implied by the strong prior, this can be interpreted as a suggested model modification: either the parameter should be less restricted, or the prior distribution should be relaxed.

In this tutorial we show how to estimate a CFA in which all possible cross-loadings are restricted by strong priors.

Cross-loadings

We will show an example with the Holzinger and Swineford (1939) data. After loading blavaan, we first estimate the regular model with no cross-loadings and default priors.

library(blavaan)

HS.model <- ' visual  =~ x1 + x2 + x3
              textual =~ x4 + x5 + x6
              speed   =~ x7 + x8 + x9 '

fit_df <- bcfa(HS.model, data = HolzingerSwineford1939,
               std.lv = TRUE, meanstructure = TRUE)

We can see the overall model results with the summary() function, looking at the posterior distributions for the factor loadings, correlations, intercepts, and variances.

summary(fit_df)
## blavaan 0.5.4.1266 ended normally after 1000 iterations
## 
##   Estimator                                      BAYES
##   Optimization method                             MCMC
##   Number of model parameters                        30
## 
##   Number of observations                           301
## 
##   Statistic                                 MargLogLik         PPP
##   Value                                      -3871.184       0.000
## 
## Parameter Estimates:
## 
## 
## Latent Variables:
##                    Estimate  Post.SD pi.lower pi.upper     Rhat    Prior       
##   visual =~                                                                    
##     x1                0.909    0.085    0.740    1.082    1.000    normal(0,10)
##     x2                0.500    0.082    0.336    0.658    0.999    normal(0,10)
##     x3                0.663    0.078    0.517    0.818    0.999    normal(0,10)
##   textual =~                                                                   
##     x4                1.000    0.058    0.891    1.120    1.000    normal(0,10)
##     x5                1.113    0.064    0.995    1.244    1.000    normal(0,10)
##     x6                0.926    0.054    0.827    1.034    1.000    normal(0,10)
##   speed =~                                                                     
##     x7                0.616    0.075    0.466    0.762    0.999    normal(0,10)
##     x8                0.728    0.077    0.567    0.881    1.000    normal(0,10)
##     x9                0.685    0.079    0.538    0.844    1.000    normal(0,10)
## 
## Covariances:
##                    Estimate  Post.SD pi.lower pi.upper     Rhat    Prior       
##   visual ~~                                                                    
##     textual           0.447    0.063    0.320    0.567    1.000     lkj_corr(1)
##     speed             0.465    0.084    0.295    0.626    1.000     lkj_corr(1)
##   textual ~~                                                                   
##     speed             0.280    0.069    0.137    0.406    1.000     lkj_corr(1)
## 
## Intercepts:
##                    Estimate  Post.SD pi.lower pi.upper     Rhat    Prior       
##    .x1                4.936    0.068    4.808    5.068    1.001    normal(0,32)
##    .x2                6.088    0.069    5.948    6.222    0.999    normal(0,32)
##    .x3                2.250    0.066    2.122    2.378    1.001    normal(0,32)
##    .x4                3.062    0.067    2.928    3.191    1.004    normal(0,32)
##    .x5                4.341    0.075    4.193    4.485    1.002    normal(0,32)
##    .x6                2.187    0.063    2.063    2.313    1.002    normal(0,32)
##    .x7                4.185    0.064    4.060    4.309    0.999    normal(0,32)
##    .x8                5.526    0.058    5.412    5.639    1.000    normal(0,32)
##    .x9                5.372    0.059    5.261    5.485    1.001    normal(0,32)
##     visual            0.000                                                    
##     textual           0.000                                                    
##     speed             0.000                                                    
## 
## Variances:
##                    Estimate  Post.SD pi.lower pi.upper     Rhat    Prior       
##    .x1                0.553    0.125    0.298    0.784    1.000 gamma(1,.5)[sd]
##    .x2                1.152    0.109    0.955    1.381    1.000 gamma(1,.5)[sd]
##    .x3                0.854    0.097    0.673    1.051    1.000 gamma(1,.5)[sd]
##    .x4                0.379    0.049    0.286    0.478    1.001 gamma(1,.5)[sd]
##    .x5                0.454    0.059    0.343    0.574    1.000 gamma(1,.5)[sd]
##    .x6                0.362    0.044    0.280    0.452    1.000 gamma(1,.5)[sd]
##    .x7                0.822    0.091    0.654    1.017    1.000 gamma(1,.5)[sd]
##    .x8                0.508    0.093    0.332    0.697    1.000 gamma(1,.5)[sd]
##    .x9                0.563    0.096    0.359    0.742    1.001 gamma(1,.5)[sd]
##     visual            1.000                                                    
##     textual           1.000                                                    
##     speed             1.000

Next, we will add all possible cross-loadings, each with a strong prior of \(N(0, \sigma = 0.08)\). This prior centers the cross-loadings around 0 and allows them little room to move away from it.
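To see how strong this prior is, we can check the central 95% range it implies for each cross-loading (a quick check in base R, not part of the model code):

# 95% of the prior mass for a cross-loading lies roughly between -0.16 and 0.16
qnorm(c(.025, .975), mean = 0, sd = 0.08)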

HS.model.cl <- ' visual  =~ x1 + x2 + x3
              textual =~ x4 + x5 + x6
              speed   =~ x7 + x8 + x9

              ## Cross-loadings
              visual  =~ prior("normal(0,.08)")*x4 + prior("normal(0,.08)")*x5 +
                         prior("normal(0,.08)")*x6 + prior("normal(0,.08)")*x7 +
                         prior("normal(0,.08)")*x8 + prior("normal(0,.08)")*x9
              textual =~ prior("normal(0,.08)")*x1 + prior("normal(0,.08)")*x2 +
                         prior("normal(0,.08)")*x3 + prior("normal(0,.08)")*x7 +
                         prior("normal(0,.08)")*x8 + prior("normal(0,.08)")*x9
              speed   =~ prior("normal(0,.08)")*x1 + prior("normal(0,.08)")*x2 +
                         prior("normal(0,.08)")*x3 + prior("normal(0,.08)")*x4 +
                         prior("normal(0,.08)")*x5 + prior("normal(0,.08)")*x6 '

fit_cl <- bcfa(HS.model.cl, data = HolzingerSwineford1939,
               std.lv = TRUE, meanstructure = TRUE)

It is important that, for each factor, the first variable after =~ is one whose loading we expect to be far from 0. So, in the model above, we specified the regular CFA part first (whose loadings we expect to be larger), and then the loadings with small-variance priors on separate lines. This matters because, in blavaan, the first loading of each factor is either constrained to be positive or fixed to 1 (depending on std.lv). If the posterior distribution of that constrained loading is centered near 0, we may run into identification problems. Reverse-coded variables can also be problematic here, because a positive constraint on a reverse-coded loading can lead the other loadings to take negative values. If you use informative priors in this situation, you should verify that the prior density is on the correct side of 0.
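For example, if an indicator is reverse-coded and we expect its loading to be negative, the informative prior can be centered below 0 so that its density sits on the correct side. A minimal sketch using the same prior() modifier as above; x3r and the prior values are purely hypothetical:

## Hypothetical: x3r is a reverse-coded indicator of the visual factor,
## so its loading gets an informative prior with most of its mass below 0
HS.model.rev <- ' visual  =~ x1 + x2 + prior("normal(-0.7,.3)")*x3r
                  textual =~ x4 + x5 + x6
                  speed   =~ x7 + x8 + x9 '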

After estimation, you can look at the summary() of this model and evaluate the cross-loadings. You can specifically see whether any of the cross-loadings seem large enough to suggest that they should be kept in the model, by looking at the posterior mean (Estimate) and credible interval.

summary(fit_cl)
## blavaan 0.5.4.1266 ended normally after 1000 iterations
## 
##   Estimator                                      BAYES
##   Optimization method                             MCMC
##   Number of model parameters                        48
## 
##   Number of observations                           301
## 
##   Statistic                                 MargLogLik         PPP
##   Value                                      -3858.824       0.130
## 
## Parameter Estimates:
## 
## 
## Latent Variables:
##                    Estimate  Post.SD pi.lower pi.upper     Rhat    Prior       
##   visual =~                                                                    
##     x1                0.764    0.099    0.576    0.982    1.001    normal(0,10)
##     x2                0.566    0.091    0.393    0.746    0.999    normal(0,10)
##     x3                0.770    0.096    0.588    0.962    1.000    normal(0,10)
##   textual =~                                                                   
##     x4                0.984    0.064    0.862    1.114    1.000    normal(0,10)
##     x5                1.155    0.073    1.018    1.302    1.000    normal(0,10)
##     x6                0.894    0.062    0.777    1.019    1.000    normal(0,10)
##   speed =~                                                                     
##     x7                0.729    0.086    0.565    0.898    1.000    normal(0,10)
##     x8                0.794    0.087    0.624    0.970    1.002    normal(0,10)
##     x9                0.545    0.073    0.407    0.687    0.999    normal(0,10)
##   visual =~                                                                    
##     x4                0.031    0.057   -0.081    0.143    1.000   normal(0,.08)
##     x5               -0.073    0.063   -0.193    0.050    1.000   normal(0,.08)
##     x6                0.062    0.054   -0.043    0.165    1.000   normal(0,.08)
##     x7               -0.132    0.064   -0.255   -0.003    1.000   normal(0,.08)
##     x8               -0.009    0.069   -0.141    0.127    1.001   normal(0,.08)
##     x9                0.190    0.061    0.072    0.312    1.000   normal(0,.08)
##   textual =~                                                                   
##     x1                0.108    0.065   -0.021    0.230    1.000   normal(0,.08)
##     x2                0.007    0.060   -0.113    0.124    1.000   normal(0,.08)
##     x3               -0.086    0.062   -0.208    0.031    0.999   normal(0,.08)
##     x7                0.016    0.063   -0.108    0.136    1.000   normal(0,.08)
##     x8               -0.039    0.063   -0.168    0.082    1.000   normal(0,.08)
##     x9                0.031    0.054   -0.078    0.134    1.000   normal(0,.08)
##   speed =~                                                                     
##     x1                0.042    0.065   -0.083    0.169    1.000   normal(0,.08)
##     x2               -0.049    0.062   -0.173    0.069    1.000   normal(0,.08)
##     x3                0.025    0.065   -0.108    0.151    0.999   normal(0,.08)
##     x4               -0.006    0.056   -0.113    0.108    0.999   normal(0,.08)
##     x5                0.006    0.061   -0.118    0.126    0.999   normal(0,.08)
##     x6               -0.000    0.054   -0.108    0.101    0.999   normal(0,.08)
## 
## Covariances:
##                    Estimate  Post.SD pi.lower pi.upper     Rhat    Prior       
##   visual ~~                                                                    
##     textual           0.378    0.092    0.194    0.546    0.999     lkj_corr(1)
##     speed             0.359    0.112    0.131    0.561    1.001     lkj_corr(1)
##   textual ~~                                                                   
##     speed             0.258    0.105    0.043    0.451    0.999     lkj_corr(1)
## 
## Intercepts:
##                    Estimate  Post.SD pi.lower pi.upper     Rhat    Prior       
##    .x1                4.936    0.067    4.807    5.062    1.000    normal(0,32)
##    .x2                6.089    0.070    5.954    6.226    1.000    normal(0,32)
##    .x3                2.252    0.064    2.123    2.376    0.999    normal(0,32)
##    .x4                3.062    0.068    2.928    3.198    1.000    normal(0,32)
##    .x5                4.342    0.075    4.198    4.490    1.000    normal(0,32)
##    .x6                2.188    0.064    2.060    2.314    1.000    normal(0,32)
##    .x7                4.186    0.062    4.064    4.308    0.999    normal(0,32)
##    .x8                5.528    0.058    5.416    5.644    1.001    normal(0,32)
##    .x9                5.375    0.057    5.266    5.487    1.000    normal(0,32)
##     visual            0.000                                                    
##     textual           0.000                                                    
##     speed             0.000                                                    
## 
## Variances:
##                    Estimate  Post.SD pi.lower pi.upper     Rhat    Prior       
##    .x1                0.676    0.109    0.455    0.879    0.999 gamma(1,.5)[sd]
##    .x2                1.093    0.108    0.900    1.321    1.000 gamma(1,.5)[sd]
##    .x3                0.719    0.113    0.496    0.943    1.000 gamma(1,.5)[sd]
##    .x4                0.387    0.049    0.293    0.488    1.000 gamma(1,.5)[sd]
##    .x5                0.411    0.064    0.290    0.539    0.999 gamma(1,.5)[sd]
##    .x6                0.373    0.044    0.292    0.462    1.000 gamma(1,.5)[sd]
##    .x7                0.716    0.096    0.531    0.908    1.000 gamma(1,.5)[sd]
##    .x8                0.436    0.093    0.242    0.617    1.001 gamma(1,.5)[sd]
##    .x9                0.588    0.067    0.466    0.726    0.999 gamma(1,.5)[sd]
##     visual            1.000                                                    
##     textual           1.000                                                    
##     speed             1.000

We suggest not simply looking at whether the credible interval excludes 0 (analogous to a null-hypothesis test), but also evaluating whether the interval bound closer to 0 is far enough from 0 for the cross-loading to be practically relevant, rather than merely different from 0.
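One way to screen the cross-loadings programmatically is to work from the fitted object's parameter table. The sketch below assumes the usual blavaan conventions that the est column holds posterior means, the se column holds posterior standard deviations, and the prior column carries the prior strings shown in the summary; the normal-approximation interval it computes may differ slightly from the quantile-based interval printed by summary().

# Parameter table of the cross-loading model (blavaan objects extend lavaan,
# so lavaan's parTable() applies)
pt <- parTable(fit_cl)

# Keep the loadings and separate out the ones with the small-variance prior
loads <- pt[pt$op == "=~", c("lhs", "op", "rhs", "prior", "est", "se")]
cross <- loads[loads$prior == "normal(0,.08)", ]

# Approximate 95% interval from posterior mean and SD (normal approximation)
cross$lower <- cross$est - 1.96 * cross$se
cross$upper <- cross$est + 1.96 * cross$se

# Distance from 0 of the interval bound closer to 0; 0 when the interval covers 0
cross$min_dist <- ifelse(sign(cross$lower) == sign(cross$upper),
                         pmin(abs(cross$lower), abs(cross$upper)), 0)

cross[order(-cross$min_dist), ]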

Caveats

The model with all possible cross-loadings should not be kept as the final analysis model; it should instead be used as an intermediate step for making decisions about model changes. This is for two main reasons. First, this model is overfitted and will show good overall fit simply because of the large number of nuisance parameters: in this example the posterior predictive p-value goes from ppp = 0 to ppp = 0.13, not because the model is theoretically better, but because we are inflating the model fit. Second, small-variance priors can prevent the detection of important misspecifications in Bayesian confirmatory factor analysis, because they can obscure underlying problems by diluting them across a large number of nuisance parameters (Jorgensen et al. 2019).

References

Holzinger, K. J., and F. A. Swineford. 1939. A Study of Factor Analysis: The Stability of a Bi-Factor Solution. Supplementary Educational Monograph 48. Chicago: University of Chicago Press.
Jorgensen, Terrence D., Mauricio Garnier-Villarreal, Sunthud Pornprasertmanit, and Jaehoon Lee. 2019. “Small-Variance Priors Can Prevent Detecting Important Misspecifications in Bayesian Confirmatory Factor Analysis.” In Quantitative Psychology: The 83rd Annual Meeting of the Psychometric Society, New York, NY, 2018, edited by Marie Wiberg, Steven Culpepper, Rianne Janssen, Jorge González, and Dylan Molenaar, 265:255–63. Springer Proceedings in Mathematics & Statistics. New York, NY, US: Springer. https://doi.org/10.1007/978-3-030-01310-3_23.
Muthén, Bengt, and Tihomir Asparouhov. 2012. “Bayesian Structural Equation Modeling: A More Flexible Representation of Substantive Theory.” Psychological Methods 17 (3): 313–35. https://doi.org/10.1037/a0026802.