Introduction

The Probability of Direction (pd) is an index of effect existence, ranging from 0% to 100%, representing the certainty with which an effect goes in a particular direction (i.e., is positive or negative) (Makowski et al. 2019). Beyond its simplicity of interpretation, understanding, and computation, this index also presents other interesting properties:

- It is independent from the model: it is solely based on the posterior distributions and does not require any additional information from the data or the model.
- It is robust to the scale of both the response variable and the predictors.
- It is strongly correlated with the frequentist p-value, and can thus be used to draw parallels and give some reference to readers not familiar with Bayesian statistics.

The pd can be interpreted as the probability that a parameter (described by its posterior distribution) is above or below a chosen cutoff, i.e., an explicit hypothesis. It is mathematically defined as the proportion of the posterior distribution that satisfies the specified hypothesis. Although differently expressed, this index is fairly similar to (i.e., is strongly correlated with) the frequentist p-value.
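Since the pd is just a proportion of posterior draws, it can be sketched in a few lines. The following is a minimal, language-agnostic illustration (shown in Python rather than R, with simulated draws standing in for real MCMC output):

```python
import random

random.seed(1)
# Simulated posterior draws for a single parameter (hypothetical values;
# a real analysis would use the draws extracted from the fitted model).
draws = [random.gauss(0.25, 0.18) for _ in range(3000)]

def pd(draws, cutoff=0.0):
    """Probability of direction: the proportion of posterior draws on the
    dominant side of `cutoff`, so pd always lies between 0.5 and 1."""
    p_above = sum(d > cutoff for d in draws) / len(draws)
    return max(p_above, 1 - p_above)

print(round(pd(draws), 2))
```

With these simulated draws (mean 0.25, SD 0.18), roughly 92% of the mass lies above zero, so the pd is about 0.92.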

Probability of Direction (pd)

For this example we will use the Industrialization and Political Democracy data set (Bollen 1989). We will first estimate the latent regression model:

model <- '
  # latent variable definitions
     ind60 =~ x1 + x2 + x3
     dem60 =~ a*y1 + b*y2 + c*y3 + d*y4
     dem65 =~ a*y5 + b*y6 + c*y7 + d*y8

  # regressions
    dem60 ~ ind60
    dem65 ~ ind60 + dem60

  # residual correlations
    y1 ~~ y5
    y2 ~~ y4 + y6
    y3 ~~ y7
    y4 ~~ y8
    y6 ~~ y8
'

library(blavaan)

fit <- bsem(model, data = PoliticalDemocracy,
            std.lv = TRUE, meanstructure = TRUE)

We can then look at the overall model results with the summary() function; in this case we also ask for the standardized estimates and \(R^2\):

summary(fit, standardize = TRUE, rsquare = TRUE)
## blavaan 0.5.5.1289 ended normally after 1000 iterations
## 
##   Estimator                                      BAYES
##   Optimization method                             MCMC
##   Number of model parameters                        42
##   Number of equality constraints                     4
## 
##   Number of observations                            75
## 
##   Statistic                                 MargLogLik         PPP
##   Value                                             NA       0.022
## 
## Parameter Estimates:
## 
## 
## Latent Variables:
##                    Estimate  Post.SD pi.lower pi.upper   Std.lv  Std.all
##   ind60 =~                                                              
##     x1                0.705    0.071    0.579    0.859    0.705    0.922
##     x2                1.533    0.139    1.285    1.831    1.533    0.972
##     x3                1.273    0.139    1.018    1.575    1.273    0.872
##   dem60 =~                                                              
##     y1         (a)    1.460    0.172    1.139    1.807    1.786    0.762
##     y2         (b)    1.729    0.222    1.312    2.198    2.115    0.584
##     y3         (c)    1.815    0.200    1.429    2.222    2.220    0.701
##     y4         (d)    1.946    0.192    1.599    2.333    2.380    0.789
##   dem65 =~                                                              
##     y5         (a)    1.460    0.172    1.139    1.807    2.300    0.811
##     y6         (b)    1.729    0.222    1.312    2.198    2.724    0.769
##     y7         (c)    1.815    0.200    1.429    2.222    2.859    0.837
##     y8         (d)    1.946    0.192    1.599    2.333    3.065    0.870
##      Rhat    Prior       
##                          
##     1.000    normal(0,10)
##     1.001    normal(0,10)
##     1.001    normal(0,10)
##                          
##     1.000    normal(0,10)
##     1.000    normal(0,10)
##     1.000    normal(0,10)
##     1.000    normal(0,10)
##                          
##     1.000                
##     1.000                
##     1.000                
##     1.000                
## 
## Regressions:
##                    Estimate  Post.SD pi.lower pi.upper   Std.lv  Std.all
##   dem60 ~                                                               
##     ind60             0.704    0.169    0.389    1.051    0.576    0.576
##   dem65 ~                                                               
##     ind60             0.248    0.177   -0.106    0.588    0.158    0.158
##     dem60             0.865    0.127    0.623    1.114    0.671    0.671
##      Rhat    Prior       
##                          
##     1.000    normal(0,10)
##                          
##     0.999    normal(0,10)
##     1.000    normal(0,10)
## 
## Covariances:
##                    Estimate  Post.SD pi.lower pi.upper   Std.lv  Std.all
##  .y1 ~~                                                                 
##    .y5                0.757    0.441   -0.013    1.720    0.757    0.300
##  .y2 ~~                                                                 
##    .y4                1.773    0.844    0.342    3.733    1.773    0.325
##    .y6                2.220    0.785    0.810    3.918    2.220    0.333
##  .y3 ~~                                                                 
##    .y7                1.309    0.705    0.040    2.888    1.309    0.310
##  .y4 ~~                                                                 
##    .y8                0.372    0.495   -0.534    1.448    0.372    0.116
##  .y6 ~~                                                                 
##    .y8                1.094    0.712   -0.209    2.613    1.094    0.278
##      Rhat    Prior       
##                          
##     1.000       beta(1,1)
##                          
##     1.000       beta(1,1)
##     0.999       beta(1,1)
##                          
##     0.999       beta(1,1)
##                          
##     1.000       beta(1,1)
##                          
##     0.999       beta(1,1)
## 
## Intercepts:
##                    Estimate  Post.SD pi.lower pi.upper   Std.lv  Std.all
##    .x1                5.056    0.089    4.881    5.230    5.056    6.615
##    .x2                4.798    0.182    4.447    5.155    4.798    3.043
##    .x3                3.559    0.171    3.237    3.900    3.559    2.438
##    .y1                5.465    0.276    4.913    5.988    5.465    2.331
##    .y2                4.252    0.420    3.427    5.104    4.252    1.173
##    .y3                6.562    0.365    5.837    7.253    6.562    2.072
##    .y4                4.448    0.346    3.782    5.116    4.448    1.475
##    .y5                5.137    0.328    4.477    5.754    5.137    1.811
##    .y6                2.981    0.406    2.178    3.752    2.981    0.842
##    .y7                6.198    0.404    5.389    6.983    6.198    1.814
##    .y8                4.044    0.409    3.252    4.853    4.044    1.148
##     ind60             0.000                               0.000    0.000
##    .dem60             0.000                               0.000    0.000
##    .dem65             0.000                               0.000    0.000
##      Rhat    Prior       
##     1.006    normal(0,32)
##     1.006    normal(0,32)
##     1.005    normal(0,32)
##     1.003    normal(0,32)
##     1.003    normal(0,32)
##     1.001    normal(0,32)
##     1.004    normal(0,32)
##     1.002    normal(0,32)
##     1.004    normal(0,32)
##     1.005    normal(0,32)
##     1.003    normal(0,32)
##                          
##                          
##                          
## 
## Variances:
##                    Estimate  Post.SD pi.lower pi.upper   Std.lv  Std.all
##    .x1                0.087    0.023    0.047    0.136    0.087    0.149
##    .x2                0.137    0.082    0.004    0.321    0.137    0.055
##    .x3                0.511    0.100    0.346    0.734    0.511    0.240
##    .y1                2.310    0.589    1.307    3.602    2.310    0.420
##    .y2                8.664    1.575    5.980   12.231    8.664    0.660
##    .y3                5.101    1.076    3.277    7.473    5.101    0.509
##    .y4                3.427    0.923    1.814    5.455    3.427    0.377
##    .y5                2.754    0.654    1.669    4.222    2.754    0.342
##    .y6                5.122    1.037    3.333    7.340    5.122    0.408
##    .y7                3.506    0.876    2.083    5.461    3.506    0.300
##    .y8                3.020    0.877    1.425    4.968    3.020    0.243
##     ind60             1.000                               1.000    1.000
##    .dem60             1.000                               0.669    0.669
##    .dem65             1.000                               0.403    0.403
##      Rhat    Prior       
##     1.000 gamma(1,.5)[sd]
##     0.999 gamma(1,.5)[sd]
##     1.000 gamma(1,.5)[sd]
##     1.000 gamma(1,.5)[sd]
##     1.000 gamma(1,.5)[sd]
##     0.999 gamma(1,.5)[sd]
##     1.000 gamma(1,.5)[sd]
##     1.000 gamma(1,.5)[sd]
##     0.999 gamma(1,.5)[sd]
##     0.999 gamma(1,.5)[sd]
##     1.000 gamma(1,.5)[sd]
##                          
##                          
##                          
## 
## R-Square:
##                    Estimate
##     x1                0.851
##     x2                0.945
##     x3                0.760
##     y1                0.580
##     y2                0.340
##     y3                0.491
##     y4                0.623
##     y5                0.658
##     y6                0.592
##     y7                0.700
##     y8                0.757
##     dem60             0.331
##     dem65             0.597

To calculate the probability of direction we will use a function from the package brms (Bürkner 2017). For this, we will need to extract the posterior draws as a matrix:

mc_out <- as.matrix(blavInspect(fit, "mcmc", add.labels = FALSE))
dim(mc_out)
## [1] 3000   42
colnames(mc_out)
##  [1] "ly_sign[1]"    "ly_sign[2]"    "ly_sign[3]"    "ly_sign[4]"   
##  [5] "ly_sign[5]"    "ly_sign[6]"    "ly_sign[7]"    "ly_sign[4]"   
##  [9] "ly_sign[5]"    "ly_sign[6]"    "ly_sign[7]"    "bet_sign[1]"  
## [13] "bet_sign[2]"   "bet_sign[3]"   "Theta_cov[1]"  "Theta_cov[2]" 
## [17] "Theta_cov[3]"  "Theta_cov[4]"  "Theta_cov[5]"  "Theta_cov[6]" 
## [21] "Theta_var[1]"  "Theta_var[2]"  "Theta_var[3]"  "Theta_var[4]" 
## [25] "Theta_var[5]"  "Theta_var[6]"  "Theta_var[7]"  "Theta_var[8]" 
## [29] "Theta_var[9]"  "Theta_var[10]" "Theta_var[11]" "Nu_free[1]"   
## [33] "Nu_free[2]"    "Nu_free[3]"    "Nu_free[4]"    "Nu_free[5]"   
## [37] "Nu_free[6]"    "Nu_free[7]"    "Nu_free[8]"    "Nu_free[9]"   
## [41] "Nu_free[10]"   "Nu_free[11]"

It is also important to note that the parameters in the posterior draws are named after the underlying Stan object names instead of the (b)lavaan parameter names. This is due to the argument add.labels = FALSE, used here to avoid trouble with parameter names that contain tildes or equal signs. You can see which parameter each Stan name corresponds to with the partable() function, as follows:

pt <- partable(fit)[,c("lhs","op","rhs","pxnames")]
pt
##      lhs op   rhs       pxnames
## 1  ind60 =~    x1    ly_sign[1]
## 2  ind60 =~    x2    ly_sign[2]
## 3  ind60 =~    x3    ly_sign[3]
## 4  dem60 =~    y1    ly_sign[4]
## 5  dem60 =~    y2    ly_sign[5]
## 6  dem60 =~    y3    ly_sign[6]
## 7  dem60 =~    y4    ly_sign[7]
## 8  dem65 =~    y5    ly_sign[4]
## 9  dem65 =~    y6    ly_sign[5]
## 10 dem65 =~    y7    ly_sign[6]
## 11 dem65 =~    y8    ly_sign[7]
## 12 dem60  ~ ind60   bet_sign[1]
## 13 dem65  ~ ind60   bet_sign[2]
## 14 dem65  ~ dem60   bet_sign[3]
## 15    y1 ~~    y5  Theta_cov[1]
## 16    y2 ~~    y4  Theta_cov[2]
## 17    y2 ~~    y6  Theta_cov[3]
## 18    y3 ~~    y7  Theta_cov[4]
## 19    y4 ~~    y8  Theta_cov[5]
## 20    y6 ~~    y8  Theta_cov[6]
## 21    x1 ~~    x1  Theta_var[1]
## 22    x2 ~~    x2  Theta_var[2]
## 23    x3 ~~    x3  Theta_var[3]
## 24    y1 ~~    y1  Theta_var[4]
## 25    y2 ~~    y2  Theta_var[5]
## 26    y3 ~~    y3  Theta_var[6]
## 27    y4 ~~    y4  Theta_var[7]
## 28    y5 ~~    y5  Theta_var[8]
## 29    y6 ~~    y6  Theta_var[9]
## 30    y7 ~~    y7 Theta_var[10]
## 31    y8 ~~    y8 Theta_var[11]
## 32 ind60 ~~ ind60          <NA>
## 33 dem60 ~~ dem60          <NA>
## 34 dem65 ~~ dem65          <NA>
## 35    x1 ~1          Nu_free[1]
## 36    x2 ~1          Nu_free[2]
## 37    x3 ~1          Nu_free[3]
## 38    y1 ~1          Nu_free[4]
## 39    y2 ~1          Nu_free[5]
## 40    y3 ~1          Nu_free[6]
## 41    y4 ~1          Nu_free[7]
## 42    y5 ~1          Nu_free[8]
## 43    y6 ~1          Nu_free[9]
## 44    y7 ~1         Nu_free[10]
## 45    y8 ~1         Nu_free[11]
## 46 ind60 ~1                <NA>
## 47 dem60 ~1                <NA>
## 48 dem65 ~1                <NA>
## 49  .p4. ==  .p8.          <NA>
## 50  .p5. ==  .p9.          <NA>
## 51  .p6. == .p10.          <NA>
## 52  .p7. == .p11.          <NA>

For this example we will focus on the regressions between factors:

pt[pt$op=="~",]
##      lhs op   rhs     pxnames
## 12 dem60  ~ ind60 bet_sign[1]
## 13 dem65  ~ ind60 bet_sign[2]
## 14 dem65  ~ dem60 bet_sign[3]

Now we can calculate pd with the hypothesis() function from brms. We can ask specific questions of the posterior distributions; for example, we may want to know what proportion of the regression dem65~ind60 is higher than 0. The function requires two arguments: the posterior draws (mc_out) and a hypothesis ("bet_sign[2] > 0"). We also add the alpha argument, which specifies the alpha level used for the credible intervals:

hypothesis(mc_out, "bet_sign[2] > 0", alpha = 0.05)
## New names:
##  `ly_sign[4]` -> `ly_sign[4]...4`
##  `ly_sign[5]` -> `ly_sign[5]...5`
##  `ly_sign[6]` -> `ly_sign[6]...6`
##  `ly_sign[7]` -> `ly_sign[7]...7`
##  `ly_sign[4]` -> `ly_sign[4]...8`
##  `ly_sign[5]` -> `ly_sign[5]...9`
##  `ly_sign[6]` -> `ly_sign[6]...10`
##  `ly_sign[7]` -> `ly_sign[7]...11`
## Hypothesis Tests for class :
##          Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio Post.Prob
## 1 (bet_sign[2]) > 0     0.25      0.18    -0.05     0.54       11.1      0.92
##   Star
## 1     
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.

The estimate presents the mean of the posterior distribution, together with the respective measures of variability (posterior SD and credible interval). Post.Prob is the pd under the stated hypothesis, so in this example we can say that 92% of the posterior distribution of dem65~ind60 is higher than 0. This is equivalent to a one-tailed test. Evid.Ratio is the evidence ratio for the hypothesis; when the hypothesis is of the form \(a > b\), the evidence ratio is the ratio of the posterior probability of \(a > b\) to the posterior probability of \(a < b\).
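The relationship between Post.Prob and Evid.Ratio can be sketched directly from the definition above. A minimal Python illustration (with toy draws, not the actual posterior from this model):

```python
def evidence_ratio(draws, cutoff=0.0):
    """Evidence ratio for the one-sided hypothesis `parameter > cutoff`:
    the posterior probability of the hypothesis divided by the posterior
    probability of its complement (infinite if every draw satisfies it)."""
    p = sum(d > cutoff for d in draws) / len(draws)
    return float("inf") if p == 1 else p / (1 - p)

# Toy draws in which 92% of the mass lies above zero, mimicking the
# Post.Prob of 0.92 reported above; the ratio is 0.92 / 0.08 ≈ 11.5,
# close to the Evid.Ratio of 11.1 from hypothesis().
draws = [0.3] * 92 + [-0.1] * 8
print(evidence_ratio(draws))
```

The small discrepancy with the reported 11.1 is expected: hypothesis() computes the ratio from the actual draws, and Monte Carlo error moves it slightly.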

In another example, we want to know what proportion of the regression dem60~ind60 is higher than 0. Here we can see that 100% of the posterior probability is higher than 0; in such a case Evid.Ratio = Inf, which happens when the whole distribution fulfills the hypothesis.

hypothesis(mc_out, "bet_sign[1] > 0", alpha = 0.05)
## New names:
##  `ly_sign[4]` -> `ly_sign[4]...4`
##  `ly_sign[5]` -> `ly_sign[5]...5`
##  `ly_sign[6]` -> `ly_sign[6]...6`
##  `ly_sign[7]` -> `ly_sign[7]...7`
##  `ly_sign[4]` -> `ly_sign[4]...8`
##  `ly_sign[5]` -> `ly_sign[5]...9`
##  `ly_sign[6]` -> `ly_sign[6]...10`
##  `ly_sign[7]` -> `ly_sign[7]...11`
## Hypothesis Tests for class :
##          Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio Post.Prob
## 1 (bet_sign[1]) > 0      0.7      0.17     0.44     0.99        Inf         1
##   Star
## 1    *
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.

In another possible case of interest, you could use this to test differences between parameters; for example, we can test if dem60~ind60 is higher than dem65~ind60. Here we see that 98% of the posterior states that dem60~ind60 is higher than dem65~ind60, and the mean of the difference (dem60~ind60 - dem65~ind60) is Estimate = 0.46:

hypothesis(mc_out, "bet_sign[1] - bet_sign[2] > 0", alpha = 0.05)
## New names:
##  `ly_sign[4]` -> `ly_sign[4]...4`
##  `ly_sign[5]` -> `ly_sign[5]...5`
##  `ly_sign[6]` -> `ly_sign[6]...6`
##  `ly_sign[7]` -> `ly_sign[7]...7`
##  `ly_sign[4]` -> `ly_sign[4]...8`
##  `ly_sign[5]` -> `ly_sign[5]...9`
##  `ly_sign[6]` -> `ly_sign[6]...10`
##  `ly_sign[7]` -> `ly_sign[7]...11`
## Hypothesis Tests for class :
##                 Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (bet_sign[1]-bet_... > 0     0.46      0.25     0.06     0.88      39.54
##   Post.Prob Star
## 1      0.98    *
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.

Region of Practical Equivalence (ROPE)

Note that so far we have only tested hypotheses against 0, which would be equivalent to frequentist null-hypothesis tests. But we can test against any other value. Bayesian inference is not based on statistical significance, where effects are tested against “zero”. Indeed, the Bayesian framework offers a probabilistic view of the parameters, allowing assessment of the uncertainty related to them. Thus, rather than concluding that an effect is present when it simply differs from zero, we would conclude that the probability of being outside a specific range that can be considered as “practically no effect” (i.e., a negligible magnitude) is sufficient. This range is called the region of practical equivalence (ROPE).

Indeed, statistically, the probability of a posterior distribution being different from 0 does not make much sense (the probability of it being different from a single point being infinite). Therefore, the idea underlying ROPE is to let the user define an area around the null value enclosing values that are equivalent to the null value for practical purposes (Kruschke and Liddell 2018).
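Computationally, a ROPE test is again just a proportion of posterior draws, this time relative to an interval rather than a point. A minimal Python sketch (with simulated draws, not the actual posterior from this model):

```python
import random

random.seed(42)
# Simulated posterior draws for a standardized effect (hypothetical values).
draws = [random.gauss(0.25, 0.18) for _ in range(3000)]

def prop_outside_rope(draws, low=-0.1, high=0.1):
    """Proportion of posterior draws falling outside the region of
    practical equivalence [low, high]."""
    return sum(d < low or d > high for d in draws) / len(draws)

print(round(prop_outside_rope(draws), 2))
```

A high proportion outside the ROPE supports a practically relevant effect; a high proportion inside it supports practical equivalence to the null.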

For these examples, we would change the value tested against; a common recommendation is to use |0.1| as the minimally relevant value for standardized regressions. In this case we find that about 80% of the posterior is above 0.1:

hypothesis(mc_out, "bet_sign[2] > .1", alpha = 0.05)
## New names:
##  `ly_sign[4]` -> `ly_sign[4]...4`
##  `ly_sign[5]` -> `ly_sign[5]...5`
##  `ly_sign[6]` -> `ly_sign[6]...6`
##  `ly_sign[7]` -> `ly_sign[7]...7`
##  `ly_sign[4]` -> `ly_sign[4]...8`
##  `ly_sign[5]` -> `ly_sign[5]...9`
##  `ly_sign[6]` -> `ly_sign[6]...10`
##  `ly_sign[7]` -> `ly_sign[7]...11`
## Hypothesis Tests for class :
##               Hypothesis Estimate Est.Error CI.Lower CI.Upper Evid.Ratio
## 1 (bet_sign[2])-(.1) > 0     0.15      0.18    -0.15     0.44       4.02
##   Post.Prob Star
## 1       0.8     
## ---
## 'CI': 90%-CI for one-sided and 95%-CI for two-sided hypotheses.
## '*': For one-sided hypotheses, the posterior probability exceeds 95%;
## for two-sided hypotheses, the value tested against lies outside the 95%-CI.
## Posterior probabilities of point hypotheses assume equal prior probabilities.

89% vs. 95% CI

Most commonly, and following the frequentist tradition, you will see the 95% credible interval reported. Using 89% is another popular choice, and it was the default in some packages for a long time. How did that start?

Naturally, when it came to choosing the CI level to report by default, people started using 95%, the arbitrary convention used in the frequentist world. However, some authors suggested that 95% might not be the most appropriate for Bayesian posterior distributions, potentially lacking stability if not enough posterior samples are drawn (McElreath 2020).

The proposition was to use 90% instead of 95%. However, McElreath (2020) suggested that if we were to use arbitrary thresholds in the first place, why not use 89%? Moreover, 89 is the highest prime number that does not exceed the already unstable 95% threshold. What does that have to do with anything? Nothing, but it reminds us of the total arbitrariness of these conventions (McElreath 2020).

You can set this with the alpha argument in the hypothesis() function, or use it as a reference value when interpreting Post.Prob.
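The alpha level and the interval level are two sides of the same quantile computation (level = 1 - alpha). As a language-agnostic illustration (a Python sketch, not part of blavaan or brms), an equal-tailed credible interval just trims the tails of the sorted draws:

```python
def credible_interval(draws, level=0.89):
    """Equal-tailed credible interval: trim (1 - level) / 2 of the
    sorted posterior draws from each tail."""
    s = sorted(draws)
    tail = (1 - level) / 2
    lo = s[int(tail * len(s))]
    hi = s[int((1 - tail) * len(s)) - 1]
    return lo, hi

# Toy "posterior": 1000 evenly spaced draws on [0, 1], so the 89%
# interval should be close to (0.055, 0.945).
draws = [i / 999 for i in range(1000)]
lo, hi = credible_interval(draws, level=0.89)
```

An 89% interval corresponds to alpha = 0.11 in hypothesis(); the instability warning above is about how precisely these tail quantiles are estimated from a finite number of draws.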

Caveats

Although this allows testing hypotheses in a similar manner to the frequentist null-hypothesis testing framework, we strongly argue against using arbitrary cutoffs (e.g., p < .05) to determine the ‘existence’ of an effect.

ROPE is sensitive to scale, so make sure the value of interest is meaningful on the respective scale. For this, standardized parameters are useful, as they are on a commonly used scale.

References

Bollen, Kenneth A. 1989. Structural Equations with Latent Variables. Wiley Series in Probability and Mathematical Statistics. John Wiley & Sons, Inc.
Bürkner, Paul-Christian. 2017. “brms: An R Package for Bayesian Multilevel Models Using Stan.” Journal of Statistical Software 80 (1): 1–28. https://doi.org/10.18637/jss.v080.i01.
Kruschke, John K., and Torrin M. Liddell. 2018. “The Bayesian New Statistics: Hypothesis Testing, Estimation, Meta-Analysis, and Power Analysis from a Bayesian Perspective.” Psychonomic Bulletin & Review 25 (1): 178–206. https://doi.org/10.3758/s13423-016-1221-4.
Makowski, Dominique, Mattan S. Ben-Shachar, S. H. Annabel Chen, and Daniel Lüdecke. 2019. “Indices of Effect Existence and Significance in the Bayesian Framework.” Frontiers in Psychology 10: 2767. https://doi.org/10.3389/fpsyg.2019.02767.
McElreath, Richard. 2020. Statistical Rethinking: A Bayesian Course with Examples in R and Stan. 2nd ed. CRC Texts in Statistical Science. Boca Raton: Taylor & Francis, CRC Press.