The model object used in these analyses is stored in ‘results/Psychometric_Bayesian_model_object.rds’. To use it, load it into an object named ‘Bayes.fit’.
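
A minimal sketch of loading it with readRDS() (assuming the working directory is the project root):

#load the saved brms fit into the object name used throughout this document
Bayes.fit <- readRDS('results/Psychometric_Bayesian_model_object.rds')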

library('readr')
library('tidyr')
library('magrittr')
## 
## Attaching package: 'magrittr'
## The following object is masked from 'package:tidyr':
## 
##     extract
library('rstan')
## Loading required package: StanHeaders
## Loading required package: ggplot2
## rstan (Version 2.21.2, GitRev: 2e1f913d3ca3)
## For execution on a local, multicore CPU with excess RAM we recommend calling
## options(mc.cores = parallel::detectCores()).
## To avoid recompilation of unchanged Stan programs, we recommend calling
## rstan_options(auto_write = TRUE)
## 
## Attaching package: 'rstan'
## The following object is masked from 'package:magrittr':
## 
##     extract
## The following object is masked from 'package:tidyr':
## 
##     extract
rstan_options(auto_write = TRUE)
options(mc.cores = parallel::detectCores())
Sys.setenv(LOCAL_CPPFLAGS = '-march=corei7 -mtune=corei7')
library('brms')
## Loading required package: Rcpp
## Loading 'brms' package (version 2.14.0). Useful instructions
## can be found by typing help('brms'). A more detailed introduction
## to the package is available through vignette('brms_overview').
## 
## Attaching package: 'brms'
## The following object is masked from 'package:rstan':
## 
##     loo
## The following object is masked from 'package:stats':
## 
##     ar
cd.long <- read_delim('colour_discriminate_long_format.txt', delim="\t")
## Parsed with column specification:
## cols(
##   Colour.difference = col_double(),
##   background = col_character(),
##   stimuli = col_character(),
##   ind = col_character(),
##   batch = col_character(),
##   sex = col_character(),
##   variable = col_character(),
##   of = col_double(),
##   success = col_logical(),
##   target.same = col_logical(),
##   chick = col_character(),
##   Colour.difference.s = col_double()
## )

Success in each trial is either TRUE or FALSE.

summary(cd.long)
##  Colour.difference  background          stimuli              ind           
##  Min.   :0.300     Length:6560        Length:6560        Length:6560       
##  1st Qu.:0.900     Class :character   Class :character   Class :character  
##  Median :1.600     Mode  :character   Mode  :character   Mode  :character  
##  Mean   :2.034                                                             
##  3rd Qu.:2.500                                                             
##  Max.   :5.000                                                             
##     batch               sex              variable               of       
##  Length:6560        Length:6560        Length:6560        Min.   : 1.00  
##  Class :character   Class :character   Class :character   1st Qu.:22.00  
##  Mode  :character   Mode  :character   Mode  :character   Median :34.00  
##                                                           Mean   :29.69  
##                                                           3rd Qu.:38.00  
##                                                           Max.   :40.00  
##   success        target.same        chick           Colour.difference.s
##  Mode :logical   Mode :logical   Length:6560        Min.   :-1.1948    
##  FALSE:1349      FALSE:3200      Class :character   1st Qu.:-0.7814    
##  TRUE :5211      TRUE :3360      Mode  :character   Median :-0.2991    
##                                                     Mean   : 0.0000    
##                                                     3rd Qu.: 0.3210    
##                                                     Max.   : 2.0434

Individual chicken is a factor. We added “chick” to account for the fact that “ind” names shared across experiments actually belong to different chicks (“A1” is as common as “Svensson” in the world of chicks).

cd.long$chick  <- as.factor(cd.long$chick)#chick is also a factor
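
As a quick, hedged check of this distinction (not part of the original scripts), the number of distinct “chick” labels should exceed the number of distinct “ind” labels if the same “ind” names recur across experiments:

#more chicks than "ind" labels means names were reused across experiments
c(ind = length(unique(cd.long$ind)), chick = length(unique(cd.long$chick)))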

We add a column that states whether the target was the same colour type as the background or a different one (e.g. green on green = ‘same’). ‘diff’ is the default reference level (factor levels default to alphabetical order).

cd.long$target <- ifelse(cd.long$target.same, 'same', 'diff')
cd.long$target <- as.factor(cd.long$target)#this is a factor
levels(cd.long$target)
## [1] "diff" "same"

We relevel this so that ‘same’ is the reference level instead.

cd.long$target <- relevel(cd.long$target,'same')

Set the reference conditions: ‘same’ is now the reference level for target, and ‘green’ remains the reference level for background.

cd.long$background <- as.factor(cd.long$background)
levels(cd.long$background)
## [1] "green"  "orange"

We also split the conditions into two factors with an interaction: backgrounds could be green or orange, and targets could be the same colour type as their background or a different one.
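
A quick cross-tabulation (a sketch, not part of the original analysis) shows the resulting 2 × 2 design:

#trial counts for each background x target combination
with(cd.long, table(background, target))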

Fit a Non-linear Model with All Fixed Effects

Formulae and factors are arranged so that the reference condition is background_green & target_same (green on green). This is determined by the reference levels of both factors in the data frame. The order of “background” and “target” in the formula determines the coefficient naming and which factor’s independent effects are estimated first, i.e. background*target gives: 1st background, 2nd target, 3rd background:target. With broad, unbiased priors this ordering effect is negligible.
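
To see the coefficient naming and ordering this produces, one can inspect the fixed-effect design matrix (an illustrative check only):

#column names follow the order: background, then target, then sex, then their interactions
colnames(model.matrix(~ background * target * sex, data = cd.long))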

Threshold and width are estimated on a log scale, which keeps them positive and gives their random effects free rein. Lapse is estimated on a logit scale, which likewise allows free estimation of random effects while keeping the rate in [0, 1].

Model 1, intercept is background_green, target_same:

modnm1 <- 'TW.Model-bg_target_ACCEPTED'#distinguish it from others
frm1 <- bf( #Bayes formula
    formula = success ~ base + #guess rate
        (1 - inv_logit(lapse) - base) * #curve region
        inv_logit(4.39 * (Colour.difference - exp(threshold)) /
                      exp(width)), #threshold-width curve
    base ~ 1, #baseline (guess rate) has a single mean
    lapse ~ 1 + (1|chick) + (1|batch), #lapse rate varies by chick and batch
    #threshold coefficients depend on the fixed effects, chick and batch
    threshold ~ background*target*sex + (1|chick) + (1|batch),
    #width coefficients depend on the fixed effects, chick and batch
    width ~ background*target*sex + (1|chick) + (1|batch),
    nl = TRUE) #the parameters combine non-linearly, so brms treats this as a non-linear formula
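
For intuition, the non-linear predictor above can be written as a plain R function of its parameters; a minimal sketch (the helper name psychometric() is ours, not part of the original scripts):

#predicted probability of success for a given colour difference,
#with threshold and width on the log scale and lapse on the logit scale
psychometric <- function(Colour.difference, base, lapse, threshold, width) {
  inv_logit <- function(x) 1 / (1 + exp(-x))
  base + (1 - inv_logit(lapse) - base) *
    inv_logit(4.39 * (Colour.difference - exp(threshold)) / exp(width))
}
psychometric(1:5, base = 0.5, lapse = -3, threshold = 0, width = 0)#rises from ~0.73 towards ~0.95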

Select Some Priors

Model 1: random effects of lapse; Intercept: background_green & target_same (green target on green background).

We list the parameters that can be assigned priors, along with their defaults.

get_prior(frm1, data = cd.long)
##                 prior class                                coef group resp dpar
##  student_t(3, 0, 2.5) sigma                                                    
##                (flat)     b                                                    
##                (flat)     b                           Intercept                
##                (flat)     b                                                    
##                (flat)     b                           Intercept                
##  student_t(3, 0, 2.5)    sd                                                    
##  student_t(3, 0, 2.5)    sd                                     batch          
##  student_t(3, 0, 2.5)    sd                           Intercept batch          
##  student_t(3, 0, 2.5)    sd                                     chick          
##  student_t(3, 0, 2.5)    sd                           Intercept chick          
##                (flat)     b                                                    
##                (flat)     b                    backgroundorange                
##                (flat)     b            backgroundorange:sexmale                
##                (flat)     b         backgroundorange:targetdiff                
##                (flat)     b backgroundorange:targetdiff:sexmale                
##                (flat)     b                           Intercept                
##                (flat)     b                             sexmale                
##                (flat)     b                          targetdiff                
##                (flat)     b                  targetdiff:sexmale                
##  student_t(3, 0, 2.5)    sd                                                    
##  student_t(3, 0, 2.5)    sd                                     batch          
##  student_t(3, 0, 2.5)    sd                           Intercept batch          
##  student_t(3, 0, 2.5)    sd                                     chick          
##  student_t(3, 0, 2.5)    sd                           Intercept chick          
##                (flat)     b                                                    
##                (flat)     b                    backgroundorange                
##                (flat)     b            backgroundorange:sexmale                
##                (flat)     b         backgroundorange:targetdiff                
##                (flat)     b backgroundorange:targetdiff:sexmale                
##                (flat)     b                           Intercept                
##                (flat)     b                             sexmale                
##                (flat)     b                          targetdiff                
##                (flat)     b                  targetdiff:sexmale                
##  student_t(3, 0, 2.5)    sd                                                    
##  student_t(3, 0, 2.5)    sd                                     batch          
##  student_t(3, 0, 2.5)    sd                           Intercept batch          
##  student_t(3, 0, 2.5)    sd                                     chick          
##  student_t(3, 0, 2.5)    sd                           Intercept chick          
##      nlpar bound       source
##                       default
##       base            default
##       base       (vectorized)
##      lapse            default
##      lapse       (vectorized)
##      lapse            default
##      lapse       (vectorized)
##      lapse       (vectorized)
##      lapse       (vectorized)
##      lapse       (vectorized)
##  threshold            default
##  threshold       (vectorized)
##  threshold       (vectorized)
##  threshold       (vectorized)
##  threshold       (vectorized)
##  threshold       (vectorized)
##  threshold       (vectorized)
##  threshold       (vectorized)
##  threshold       (vectorized)
##  threshold            default
##  threshold       (vectorized)
##  threshold       (vectorized)
##  threshold       (vectorized)
##  threshold       (vectorized)
##      width            default
##      width       (vectorized)
##      width       (vectorized)
##      width       (vectorized)
##      width       (vectorized)
##      width       (vectorized)
##      width       (vectorized)
##      width       (vectorized)
##      width       (vectorized)
##      width            default
##      width       (vectorized)
##      width       (vectorized)
##      width       (vectorized)
##      width       (vectorized)

Only base has a very informative prior.

prr1 <- c(
  #very restrictive prior for guess rate, centred on 0.5
    prior(beta(250,250), nlpar= 'base', lb = 0.25, ub = 0.75),
  #lapse rate is unbiased, but cannot be more than 27%
    prior(normal(-3,10), nlpar= 'lapse', ub = -1),
  # use the default prior for the random effects of lapse,
  # student_t(3, 0, 2.5), by leaving
  # prior(..., nlpar = 'lapse', class = 'sd') unassigned
  #Both threshold and width should be positive numbers, probably ≈1
  #i.e. exp(0) = 1
  #beware, bounds on threshold and width priors
  #affect their coefficients (so don't apply bounds)
  #     signif(exp(qnorm(c(0.025, 0.975), 0, 3)),2)
  # [1] 2.8e-03 3.6e+02 #broad range of potential values for fixed effects
    prior(normal(0,3), nlpar= 'threshold', class = 'b'),
    prior(normal(0,3), nlpar = 'width', class = 'b'),
  #Coefficient parameters, centred on 0
  #(<0 = param smaller, >0 = param larger)
    prior(normal(0,3), nlpar= 'threshold', coef= 'backgroundorange'),
    prior(normal(0,3), nlpar= 'threshold', coef= 'targetdiff'),
    prior(normal(0,3), nlpar= 'threshold', coef= 'backgroundorange:targetdiff'),
  # use the default prior for random effects of threshold, unassigned
    prior(normal(0,3), nlpar= 'width', coef= 'backgroundorange'),
    prior(normal(0,3), nlpar= 'width', coef= 'targetdiff'),
    prior(normal(0,3), nlpar= 'width', coef= 'backgroundorange:targetdiff')#,
  # use the default prior for random effects of width, unassigned
)
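
The quantities quoted in the comments above can be checked directly (a quick sketch):

plogis(-1)#maximum possible lapse rate, about 0.27
qbeta(c(0.025, 0.975), 250, 250)#base (guess rate) is tightly centred on 0.5
signif(exp(qnorm(c(0.025, 0.975), 0, 3)), 2)#broad plausible range for threshold and width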

Inspect Stan Code

stc1 <- make_stancode( 
             formula = frm1,
             data = cd.long, family = bernoulli("identity"), 
             prior = prr1 )
stc1
## // generated with brms 2.14.0
## functions {
## }
## data {
##   int<lower=1> N;  // total number of observations
##   int Y[N];  // response variable
##   int<lower=1> K_base;  // number of population-level effects
##   matrix[N, K_base] X_base;  // population-level design matrix
##   int<lower=1> K_lapse;  // number of population-level effects
##   matrix[N, K_lapse] X_lapse;  // population-level design matrix
##   int<lower=1> K_threshold;  // number of population-level effects
##   matrix[N, K_threshold] X_threshold;  // population-level design matrix
##   int<lower=1> K_width;  // number of population-level effects
##   matrix[N, K_width] X_width;  // population-level design matrix
##   // covariate vectors for non-linear functions
##   vector[N] C_1;
##   // data for group-level effects of ID 1
##   int<lower=1> N_1;  // number of grouping levels
##   int<lower=1> M_1;  // number of coefficients per level
##   int<lower=1> J_1[N];  // grouping indicator per observation
##   // group-level predictor values
##   vector[N] Z_1_lapse_1;
##   // data for group-level effects of ID 2
##   int<lower=1> N_2;  // number of grouping levels
##   int<lower=1> M_2;  // number of coefficients per level
##   int<lower=1> J_2[N];  // grouping indicator per observation
##   // group-level predictor values
##   vector[N] Z_2_lapse_1;
##   // data for group-level effects of ID 3
##   int<lower=1> N_3;  // number of grouping levels
##   int<lower=1> M_3;  // number of coefficients per level
##   int<lower=1> J_3[N];  // grouping indicator per observation
##   // group-level predictor values
##   vector[N] Z_3_threshold_1;
##   // data for group-level effects of ID 4
##   int<lower=1> N_4;  // number of grouping levels
##   int<lower=1> M_4;  // number of coefficients per level
##   int<lower=1> J_4[N];  // grouping indicator per observation
##   // group-level predictor values
##   vector[N] Z_4_threshold_1;
##   // data for group-level effects of ID 5
##   int<lower=1> N_5;  // number of grouping levels
##   int<lower=1> M_5;  // number of coefficients per level
##   int<lower=1> J_5[N];  // grouping indicator per observation
##   // group-level predictor values
##   vector[N] Z_5_width_1;
##   // data for group-level effects of ID 6
##   int<lower=1> N_6;  // number of grouping levels
##   int<lower=1> M_6;  // number of coefficients per level
##   int<lower=1> J_6[N];  // grouping indicator per observation
##   // group-level predictor values
##   vector[N] Z_6_width_1;
##   int prior_only;  // should the likelihood be ignored?
## }
## transformed data {
## }
## parameters {
##   vector<lower=0.25,upper=0.75>[K_base] b_base;  // population-level effects
##   vector<upper=-1>[K_lapse] b_lapse;  // population-level effects
##   vector[K_threshold] b_threshold;  // population-level effects
##   vector[K_width] b_width;  // population-level effects
##   vector<lower=0>[M_1] sd_1;  // group-level standard deviations
##   vector[N_1] z_1[M_1];  // standardized group-level effects
##   vector<lower=0>[M_2] sd_2;  // group-level standard deviations
##   vector[N_2] z_2[M_2];  // standardized group-level effects
##   vector<lower=0>[M_3] sd_3;  // group-level standard deviations
##   vector[N_3] z_3[M_3];  // standardized group-level effects
##   vector<lower=0>[M_4] sd_4;  // group-level standard deviations
##   vector[N_4] z_4[M_4];  // standardized group-level effects
##   vector<lower=0>[M_5] sd_5;  // group-level standard deviations
##   vector[N_5] z_5[M_5];  // standardized group-level effects
##   vector<lower=0>[M_6] sd_6;  // group-level standard deviations
##   vector[N_6] z_6[M_6];  // standardized group-level effects
## }
## transformed parameters {
##   vector[N_1] r_1_lapse_1;  // actual group-level effects
##   vector[N_2] r_2_lapse_1;  // actual group-level effects
##   vector[N_3] r_3_threshold_1;  // actual group-level effects
##   vector[N_4] r_4_threshold_1;  // actual group-level effects
##   vector[N_5] r_5_width_1;  // actual group-level effects
##   vector[N_6] r_6_width_1;  // actual group-level effects
##   r_1_lapse_1 = (sd_1[1] * (z_1[1]));
##   r_2_lapse_1 = (sd_2[1] * (z_2[1]));
##   r_3_threshold_1 = (sd_3[1] * (z_3[1]));
##   r_4_threshold_1 = (sd_4[1] * (z_4[1]));
##   r_5_width_1 = (sd_5[1] * (z_5[1]));
##   r_6_width_1 = (sd_6[1] * (z_6[1]));
## }
## model {
##   // likelihood including all constants
##   if (!prior_only) {
##     // initialize linear predictor term
##     vector[N] nlp_base = X_base * b_base;
##     // initialize linear predictor term
##     vector[N] nlp_lapse = X_lapse * b_lapse;
##     // initialize linear predictor term
##     vector[N] nlp_threshold = X_threshold * b_threshold;
##     // initialize linear predictor term
##     vector[N] nlp_width = X_width * b_width;
##     // initialize non-linear predictor term
##     vector[N] mu;
##     for (n in 1:N) {
##       // add more terms to the linear predictor
##       nlp_lapse[n] += r_1_lapse_1[J_1[n]] * Z_1_lapse_1[n] + r_2_lapse_1[J_2[n]] * Z_2_lapse_1[n];
##     }
##     for (n in 1:N) {
##       // add more terms to the linear predictor
##       nlp_threshold[n] += r_3_threshold_1[J_3[n]] * Z_3_threshold_1[n] + r_4_threshold_1[J_4[n]] * Z_4_threshold_1[n];
##     }
##     for (n in 1:N) {
##       // add more terms to the linear predictor
##       nlp_width[n] += r_5_width_1[J_5[n]] * Z_5_width_1[n] + r_6_width_1[J_6[n]] * Z_6_width_1[n];
##     }
##     for (n in 1:N) {
##       // compute non-linear predictor values
##       mu[n] = nlp_base[n] + (1 - inv_logit(nlp_lapse[n]) - nlp_base[n]) * inv_logit(4.39 * (C_1[n] - exp(nlp_threshold[n])) / (exp(nlp_width[n])));
##     }
##     target += bernoulli_lpmf(Y | mu);
##   }
##   // priors including all constants
##   target += beta_lpdf(b_base | 250, 250)
##     - 1 * log_diff_exp(beta_lcdf(0.75 | 250, 250), beta_lcdf(0.25 | 250, 250));
##   target += normal_lpdf(b_lapse | -3, 10)
##     - 1 * normal_lcdf(-1 | -3, 10);
##   target += normal_lpdf(b_threshold[1] | 0, 3);
##   target += normal_lpdf(b_threshold[2] | 0, 3);
##   target += normal_lpdf(b_threshold[3] | 0, 3);
##   target += normal_lpdf(b_threshold[4] | 0, 3);
##   target += normal_lpdf(b_threshold[5] | 0, 3);
##   target += normal_lpdf(b_threshold[6] | 0, 3);
##   target += normal_lpdf(b_threshold[7] | 0, 3);
##   target += normal_lpdf(b_threshold[8] | 0, 3);
##   target += normal_lpdf(b_width[1] | 0, 3);
##   target += normal_lpdf(b_width[2] | 0, 3);
##   target += normal_lpdf(b_width[3] | 0, 3);
##   target += normal_lpdf(b_width[4] | 0, 3);
##   target += normal_lpdf(b_width[5] | 0, 3);
##   target += normal_lpdf(b_width[6] | 0, 3);
##   target += normal_lpdf(b_width[7] | 0, 3);
##   target += normal_lpdf(b_width[8] | 0, 3);
##   target += student_t_lpdf(sd_1 | 3, 0, 2.5)
##     - 1 * student_t_lccdf(0 | 3, 0, 2.5);
##   target += std_normal_lpdf(z_1[1]);
##   target += student_t_lpdf(sd_2 | 3, 0, 2.5)
##     - 1 * student_t_lccdf(0 | 3, 0, 2.5);
##   target += std_normal_lpdf(z_2[1]);
##   target += student_t_lpdf(sd_3 | 3, 0, 2.5)
##     - 1 * student_t_lccdf(0 | 3, 0, 2.5);
##   target += std_normal_lpdf(z_3[1]);
##   target += student_t_lpdf(sd_4 | 3, 0, 2.5)
##     - 1 * student_t_lccdf(0 | 3, 0, 2.5);
##   target += std_normal_lpdf(z_4[1]);
##   target += student_t_lpdf(sd_5 | 3, 0, 2.5)
##     - 1 * student_t_lccdf(0 | 3, 0, 2.5);
##   target += std_normal_lpdf(z_5[1]);
##   target += student_t_lpdf(sd_6 | 3, 0, 2.5)
##     - 1 * student_t_lccdf(0 | 3, 0, 2.5);
##   target += std_normal_lpdf(z_6[1]);
## }
## generated quantities {
## }

Prior predictive checks

Check that the priors are sensible. This samples from the priors only (not the data); the distribution of simulated fits indicates the range of values the model can take and where it is weighted (biased) before seeing the data.

Prior.fit <- brm( formula = frm1,
               data = cd.long, family = bernoulli("identity"), 
             prior = prr1,
                 control = list(adapt_delta = 0.9999),
             sample_prior = "only",
                 inits = 0,
                 iter = 1000  )
## Compiling Stan program...
## Start sampling

We can visualize the range of plausible models using conditional effects.

Prior.cond <- conditional_effects(Prior.fit, spaghetti = TRUE, effects = "Colour.difference")
plot(Prior.cond)

The plot above shows model fits drawn from the priors alone, without the data. Many curves are possible, including many different slopes, though there is a clear concentration around certain values.

Run the Model

Runs of 200 or 2,000 iterations give results similar to 10,000. Random effects of threshold and width should change together (ideally via an explicitly modelled correlation; see the sketch below), and setting the initial values to 0 is a work-around.
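
In brms, correlated group-level effects across the non-linear parameters could be requested with the |<ID>| syntax, e.g. (1 | c | chick) in both the threshold and width formulas. A hedged sketch of that alternative formula (not the accepted model fitted below) is:

#alternative to frm1 with correlated chick effects on threshold and width
frm1.cor <- bf(
    formula = success ~ base + (1 - inv_logit(lapse) - base) *
        inv_logit(4.39 * (Colour.difference - exp(threshold)) / exp(width)),
    base ~ 1,
    lapse ~ 1 + (1|chick) + (1|batch),
    threshold ~ background*target*sex + (1 | c | chick) + (1|batch),
    width ~ background*target*sex + (1 | c | chick) + (1|batch),
    nl = TRUE)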

Bayes.fit <- brm( formula = frm1,
               data = cd.long, family = bernoulli("identity"), 
             prior = prr1,
             #high adapt_delta: smaller sampler steps, fewer divergences
                control = list(adapt_delta = 0.99),
                inits = 0,
                iter = 2000  )
## Compiling Stan program...
## Start sampling
## Warning: There were 2240 transitions after warmup that exceeded the maximum treedepth. Increase max_treedepth above 10. See
## http://mc-stan.org/misc/warnings.html#maximum-treedepth-exceeded
## Warning: Examine the pairs() plot to diagnose sampling problems

Inspect the model

plot(Bayes.fit,type="dens_overlay", pars = "^b_")

Derive summary information

summary(Bayes.fit)
##  Family: bernoulli 
##   Links: mu = identity 
## Formula: success ~ base + (1 - inv_logit(lapse) - base) * inv_logit(4.39 * (Colour.difference - exp(threshold))/(exp(width))) 
##          base ~ 1
##          lapse ~ 1 + (1 | chick) + (1 | batch)
##          threshold ~ background * target * sex + (1 | chick) + (1 | batch)
##          width ~ background * target * sex + (1 | chick) + (1 | batch)
##    Data: cd.long (Number of observations: 6560) 
## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
##          total post-warmup samples = 4000
## 
## Group-Level Effects: 
## ~batch (Number of levels: 4) 
##                         Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS
## sd(lapse_Intercept)         0.75      0.63     0.04     2.36 1.00     1129
## sd(threshold_Intercept)     0.36      0.45     0.01     1.51 1.00     1063
## sd(width_Intercept)         0.65      0.66     0.03     2.50 1.00     1180
##                         Tail_ESS
## sd(lapse_Intercept)         1494
## sd(threshold_Intercept)     1927
## sd(width_Intercept)         1572
## 
## ~chick (Number of levels: 32) 
##                         Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS
## sd(lapse_Intercept)         0.71      0.21     0.34     1.17 1.00     1064
## sd(threshold_Intercept)     0.28      0.06     0.18     0.42 1.00     1557
## sd(width_Intercept)         0.32      0.17     0.02     0.68 1.00     1058
##                         Tail_ESS
## sd(lapse_Intercept)         1038
## sd(threshold_Intercept)     2345
## sd(width_Intercept)         1053
## 
## Population-Level Effects: 
##                                               Estimate Est.Error l-95% CI
## base_Intercept                                    0.48      0.02     0.44
## lapse_Intercept                                  -3.28      0.50    -4.43
## threshold_Intercept                              -0.22      0.37    -0.94
## threshold_backgroundorange                        0.29      0.51    -0.79
## threshold_targetdiff                              0.53      0.21     0.11
## threshold_sexmale                                -0.08      0.24    -0.56
## threshold_backgroundorange:targetdiff            -0.51      0.30    -1.12
## threshold_backgroundorange:sexmale               -0.06      0.32    -0.70
## threshold_targetdiff:sexmale                      0.25      0.33    -0.40
## threshold_backgroundorange:targetdiff:sexmale    -0.11      0.46    -1.01
## width_Intercept                                   0.03      0.64    -1.30
## width_backgroundorange                           -0.92      0.93    -2.73
## width_targetdiff                                  0.95      0.37     0.27
## width_sexmale                                    -0.03      0.55    -1.08
## width_backgroundorange:targetdiff                -0.09      0.73    -1.55
## width_backgroundorange:sexmale                    0.33      0.75    -1.14
## width_targetdiff:sexmale                         -0.37      0.71    -1.83
## width_backgroundorange:targetdiff:sexmale         0.61      0.99    -1.29
##                                               u-95% CI Rhat Bulk_ESS Tail_ESS
## base_Intercept                                    0.52 1.00     3946     3282
## lapse_Intercept                                  -2.30 1.00     1700     1458
## threshold_Intercept                               0.57 1.00     1370     1062
## threshold_backgroundorange                        1.23 1.00     1325     1264
## threshold_targetdiff                              0.94 1.00     1510     2316
## threshold_sexmale                                 0.38 1.00     1524     2160
## threshold_backgroundorange:targetdiff             0.09 1.00     1219     1830
## threshold_backgroundorange:sexmale                0.55 1.00     1287     1931
## threshold_targetdiff:sexmale                      0.90 1.00     1476     2128
## threshold_backgroundorange:targetdiff:sexmale     0.79 1.00     1255     1897
## width_Intercept                                   1.24 1.00     1266     1366
## width_backgroundorange                            0.90 1.00     1534     1485
## width_targetdiff                                  1.70 1.00     1748     2186
## width_sexmale                                     1.04 1.00     1422     1769
## width_backgroundorange:targetdiff                 1.37 1.00     1558     1992
## width_backgroundorange:sexmale                    1.83 1.00     1567     2039
## width_targetdiff:sexmale                          1.02 1.00     1545     1931
## width_backgroundorange:targetdiff:sexmale         2.57 1.00     1576     2434
## 
## Samples were drawn using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).

The ESS and Rhat values indicate that the model converged well. Divergent transitions would indicate that an unexpected transition occurred between MCMC draws; none were reported here, and the treedepth warnings above concern sampling efficiency rather than validity.
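
If the treedepth warning were a concern, the model could be re-run allowing deeper trees; a hedged sketch (not part of the reported results):

#refit with a higher maximum treedepth (slower per iteration)
Bayes.fit.deeper <- update(Bayes.fit,
                           control = list(adapt_delta = 0.99, max_treedepth = 15))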

conditional_effects(Bayes.fit)

Conditional effects at the mean are used to investigate the effect of selected predictors while the remaining predictors are held at their mean (or reference) values.
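
For example, the effect of colour difference could be split by target type (an illustrative call, not shown in the original output):

#conditional effect of colour difference, separately for same and different targets
conditional_effects(Bayes.fit, effects = "Colour.difference:target")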

Posterior predictive checks (PPCs)

pp_check(Bayes.fit,
         type = "rootogram", nsamples=2000, prob = 0.9, size = 1)

pp_check(Bayes.fit,
         type = "bars", nsamples=200, prob = 0.9, size = 1)

These checks show that the predictions are consistent with the observed data.
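
A grouped version of the same check could compare predictions across backgrounds (a sketch using pp_check's grouped bar plot):

#observed vs. predicted success counts, split by background colour
pp_check(Bayes.fit, type = "bars_grouped", group = "background",
         nsamples = 200, prob = 0.9)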