£400 Registration Fee
Course Description
This three-day workshop provides a comprehensive introduction to Bayesian data analysis for empirical researchers. The course teaches both theoretical foundations and practical implementation of Bayesian statistical inference using Stan via the brms package in R. Day 1 establishes conceptual foundations through Bayesian reasoning principles, Bayes’ rule in simple examples, and a complete analytical treatment of the Bernoulli model covering likelihood functions, prior distributions, posterior distributions, credible intervals, posterior predictive distributions, and Bayes factors. Day 2 introduces Markov Chain Monte Carlo (MCMC) methods and focuses on Bayesian linear regression using brms: fitting models, interpreting MCMC output, specifying priors, conducting prior and posterior predictive checks, and performing model comparison using WAIC, LOO, and Bayes factors. Day 3 extends the Bayesian framework beyond normal linear models to robust regression with Student-t distributions, distributional regression for heteroskedasticity, Bayesian generalized linear models including logistic regression, and Bayesian mixed effects models for hierarchical data. Through hands-on coding with real datasets, participants gain both conceptual understanding and practical skills for applying Bayesian methods to their research using modern computational tools.
What You’ll Learn
During the course we will cover the following:
- Understand how Bayesian inference differs conceptually and practically from classical frequentist approaches to statistics.
- Apply Bayes’ rule to calculate the probability of causes from observed effects and use this as a foundation for statistical inference.
- Perform complete Bayesian inference in the Bernoulli model: specifying likelihood functions, choosing prior distributions (beta distributions), and computing posterior distributions analytically.
- Calculate point estimates using maximum a posteriori (MAP) estimation and interval estimates using credible intervals and highest posterior density (HPD) intervals.
- Compute posterior predictive distributions for forecasting future observations.
- Compare models using marginal likelihoods and Bayes factors, understanding when analytical approaches are possible.
- Understand Markov Chain Monte Carlo (MCMC) methods as numerical solutions for Bayesian inference in complex models.
- Use the brms package in R to fit Bayesian models through an intuitive interface to Stan.
- Fit Bayesian linear regression models and compare results with classical lm output to understand similarities and differences.
- Interpret MCMC output including trace plots, effective sample size, and Rhat convergence diagnostics.
- Work with posterior distributions over regression coefficients, including visualization and summarization.
- Handle categorical predictors and understand varying intercept and varying slope formulations in linear models.
- Specify informative prior distributions based on domain knowledge and evaluate default priors in brms.
- Conduct prior sensitivity analysis to understand how prior specifications affect posterior inference.
- Perform prior predictive checks (simulating data before seeing actual data) and posterior predictive checks (evaluating model fit).
- Compare models using WAIC (Widely Applicable Information Criterion), LOO (Leave-One-Out cross-validation), and Bayes factors.
- Extend normal linear models to robust regression using Student-t distributions for handling outliers.
- Implement distributional regression (modeling sigma as a function of predictors) to handle heteroskedasticity.
- Fit Bayesian generalized linear models including logistic regression for binary outcomes.
- Understand the logit link function and interpret coefficients in logistic regression.
- Work with other GLM families including Poisson regression and ordinal regression.
- Apply Bayesian mixed effects models to grouped and hierarchical data structures.
- Implement varying intercept and varying slope models for correlated observations.
- Understand practical advantages of Bayesian approaches for mixed models, including handling convergence issues common in frequentist approaches.
- Diagnose MCMC problems and apply solutions when models fail to converge.
- Report Bayesian analyses clearly and appropriately in research publications.
- Leverage the brms package for flexible, powerful statistical modeling within familiar R workflows.
Course Format
Interactive Learning Format
Each day features a balanced combination of lectures and hands-on practical exercises and, time permitting, includes discussion of participants’ own data.
Global Accessibility
All live sessions are recorded and made available on the same day, ensuring accessibility for participants across different time zones.
Collaborative Discussions
Open discussion sessions provide an opportunity for participants to explore specific research questions and engage with instructors and peers.
Comprehensive Course Materials
All code, datasets, and presentation slides used during the course will be shared with participants by the instructor.
Personalized Data Engagement
Participants are encouraged to bring their own data for discussion and practical application during the course.
Post-Course Support
Participants will receive continued support via email for 30 days following the course, along with on-demand access to session recordings for the same period.
Who Should Attend / Intended Audiences
This course is designed for empirical researchers who want to learn Bayesian data analysis from first principles and apply it using modern computational tools in R. It is aimed at researchers who are comfortable fitting linear and generalised linear models in R and who want to understand what Bayesian approaches offer beyond classical methods, including the use of prior information and the analysis of complex models where Bayesian methods provide practical advantages. Participants are expected to have a solid foundation in regression modelling and core statistical concepts such as probability distributions, likelihood, and hypothesis testing, as well as comfort with mathematical notation and abstract reasoning. The course combines rigorous conceptual foundations with hands-on implementation using brms, with no prior experience of Bayesian methods, Stan, or brms required, and focuses on developing both understanding and practical competence through substantial coding and applied examples.
Equipment and Software Requirements
A laptop or desktop computer with a functioning installation of R and RStudio is required. Both R and RStudio are free, open-source programs compatible with Windows, macOS, and Linux systems.
A working webcam is recommended to support interactive elements of the course. We encourage participants to keep their cameras on during live Zoom sessions to foster a more engaging and collaborative environment.
While not essential, using a large monitor—or ideally a dual-monitor setup—can significantly enhance your learning experience by allowing you to view course materials and work in R simultaneously.
All necessary R packages will be introduced and installed during the workshop. A comprehensive list of required packages will also be shared with participants ahead of the course to allow for optional pre-installation.
Dr. Mark Andrews
Mark is a psychologist and statistician whose work lies at the intersection of cognitive science, Bayesian data analysis, and applied statistics. His research focuses on developing and testing Bayesian models of human cognition, with a particular emphasis on language processing and memory. He also works extensively on the theory and application of Bayesian statistical methods in the social and behavioural sciences, bridging methodological advances with real-world research challenges.
Since 2015, Mark has co-led a programme of intensive workshops on Bayesian data analysis for social scientists, funded by the UK’s Economic and Social Research Council (ESRC). These workshops have trained hundreds of researchers in the practical application of Bayesian methods, particularly through R and modern statistical packages.
Education & Career
• PhD in Psychology, Cornell University, New York (Cognitive Science, Bayesian Models of Cognition)
• MA in Psychology, Cornell University, New York
• BA (Hons) in Psychology, National University of Ireland
• Senior Lecturer in Psychology, Nottingham Trent University, England
Research Focus
Mark’s work centres on:
• Bayesian models of human cognition, especially in language processing and memory
• General Bayesian data analysis methods for the social and behavioural sciences
• Comparative studies of Bayesian vs. classical approaches to inference and model comparison
• Promoting reproducibility and transparent statistical practice in psychological research
Current Projects
• Developing Bayesian cognitive models of memory and linguistic comprehension
• Exploring Bayesian approaches to regression, multilevel, and mixed-effects models in psychology and social science research
• Co-leading ESRC-funded workshops on Bayesian data analysis for applied researchers
Professional Consultancy & Teaching
Mark provides expert training and advice in Bayesian data analysis for academic and applied research projects. His teaching portfolio includes courses and workshops on:
• Bayesian linear and generalized linear models
• Multilevel and mixed-effects models
• Cognitive modelling with Bayesian methods
• Applied statistics in R for psychologists and social scientists
He is also an advocate of open science and is experienced in communicating complex statistical methods to diverse audiences.
Teaching & Skills
• Instructor in Bayesian statistics, time series modelling, and machine learning
• Strong advocate for reproducibility, open-source tools, and accessible education
• Skilled in R, Stan, JAGS, and statistical computing for large datasets
• Experienced mentor and workshop leader at all academic levels
Links
• University Profile
• Personal Page
• ResearchGate
Session 1 – 02:00:00 – Introduction to Bayesian Data Analysis
This session establishes the conceptual foundations of Bayesian inference. We begin with an overview of what Bayesian data analysis is and how it fits into statistics as practiced generally. A central theme is that Bayesian inference represents an alternative school of statistics to the classical/frequentist approach rather than being a specialized or advanced technique. We explore the fundamental differences between Bayesian and frequentist philosophies: the role of probability in representing uncertainty, the treatment of parameters as random rather than fixed, and the incorporation of prior information. The session emphasizes that these two approaches need not be viewed as mutually exclusive competitors, and that a pragmatic blend is often appropriate. We discuss when Bayesian methods offer practical advantages: handling small samples, incorporating domain knowledge, working with complex hierarchical models, and providing complete quantification of uncertainty through posterior distributions.
Break – 01:00:00
Session 2 – 02:00:00 – Bayes’ Rule and Introduction to the Bernoulli Model
This session introduces Bayes’ rule as the mathematical foundation for Bayesian inference. We begin with simple examples showing how Bayes’ rule calculates the probability of causes from observed effects. Working through discrete probability problems with small numbers of possibilities, we build intuition for how prior beliefs are updated by data to produce posterior beliefs. These simple cases provide the template for all Bayesian data analysis: likelihood times prior produces posterior (up to normalization). We then introduce the Bernoulli model as a classic statistical problem: inferring the bias of a coin from observed heads and tails (or equivalently, estimating a proportion from binary data). The likelihood function for this model is developed, showing how different observed data (different numbers of heads and tails) provide different information. Prior distributions are introduced using the beta distribution family, showing how different beta distributions represent different prior beliefs about the coin’s bias. The posterior distribution is computed analytically, demonstrating conjugacy: with a beta prior and binomial likelihood, the posterior is also beta with updated parameters.
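The conjugate update described above takes only a few lines of base R. The counts here are hypothetical (7 heads in 10 flips), chosen purely to illustrate how the beta prior's parameters absorb the data:

```r
# Illustrative (hypothetical) counts: 7 heads in 10 flips
heads <- 7
tails <- 3

# Beta(2, 2) prior: a mild preference for biases near 0.5
a_prior <- 2
b_prior <- 2

# Conjugacy: a beta prior with a Bernoulli/binomial likelihood
# yields a beta posterior, with the counts added to the parameters
a_post <- a_prior + heads
b_post <- b_prior + tails

# Posterior mean of the coin's bias
post_mean <- a_post / (a_post + b_post)

# Overlay the prior (solid) and posterior (dashed) densities
curve(dbeta(x, a_prior, b_prior), from = 0, to = 1,
      xlab = "bias", ylab = "density")
curve(dbeta(x, a_post, b_post), add = TRUE, lty = 2)
```

With these counts the posterior is Beta(9, 5), so the posterior mean shifts from the prior's 0.5 toward the observed proportion of heads.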
Break – 01:00:00
Session 3 – 02:00:00 – Inference in the Bernoulli Model
This session provides a complete treatment of Bayesian inference using the analytically tractable Bernoulli model. We cover all key concepts that generalize to complex models analyzed via MCMC in later sessions. Point estimation is addressed through maximum a posteriori (MAP) estimation, finding the parameter value with highest posterior probability. Interval estimation is covered through credible intervals (quantile-based) and highest posterior density (HPD) intervals, explaining how these differ conceptually from frequentist confidence intervals. Posterior predictive distributions are developed, showing how to forecast future observations by integrating over posterior uncertainty in the parameters. Model comparison is introduced through marginal likelihoods (the probability of the data under a model, integrating over all parameter values) and Bayes factors (the ratio of marginal likelihoods for competing models). Throughout, we use the priorexposure package to visualize likelihoods, priors, and posteriors, and to perform all calculations interactively. This session ensures participants understand the complete Bayesian inference pipeline before moving to computational methods.
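Because the Bernoulli model is analytically tractable, every quantity named above has a closed form in base R. This sketch continues the hypothetical running example (a Beta(2, 2) prior updated by 7 heads in 10 flips, giving a Beta(9, 5) posterior):

```r
# Posterior Beta(9, 5) from the hypothetical running example
a <- 9
b <- 5
n <- 10  # flips
k <- 7   # heads

# MAP estimate: the mode of a Beta(a, b) posterior (valid for a, b > 1)
map <- (a - 1) / (a + b - 2)

# 95% quantile-based credible interval
ci <- qbeta(c(0.025, 0.975), a, b)

# Marginal likelihood under the Beta(2, 2) prior: the binomial
# likelihood integrated over the prior, available via the beta function
marg_lik <- choose(n, k) * beta(2 + k, 2 + n - k) / beta(2, 2)

# Bayes factor comparing the Beta(2, 2) model against the point
# hypothesis that the coin is exactly fair (bias = 0.5)
bayes_factor <- marg_lik / dbinom(k, n, 0.5)
```

A Bayes factor above 1 favours the beta-prior model over the fair-coin hypothesis; its exact value depends on the prior chosen, which is the point of the sensitivity discussions later in the course.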
Session 4 – 02:00:00 – Introduction to MCMC and brms
This session transitions from analytical Bayesian inference to numerical methods via MCMC. We begin by explaining why analytical approaches, while pedagogically valuable, are only possible in restricted cases with conjugate priors and simple models. Markov Chain Monte Carlo methods are introduced as a general numerical solution that can be applied to virtually any Bayesian model. Rather than calculating posterior distributions analytically, MCMC generates samples from the posterior distribution, which can then be used to approximate any posterior quantity. We introduce Stan as a state-of-the-art MCMC implementation and brms as a high-level R interface to Stan that allows fitting models with familiar R formula syntax. To demonstrate the connection with Day 1, we first re-analyze the Bernoulli model using brms, showing that MCMC recovers the same posterior we calculated analytically. We then fit our first Bayesian linear regression using brm and compare results with lm, examining similarities and differences. The session covers understanding brms output, basic visualization of MCMC samples, and interpreting posterior summaries.
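A minimal sketch of this session's workflow is below. The coin data are hypothetical (matching the analytical example), `mtcars` is used as a stand-in dataset for the regression, and the `brm` calls require a working Stan installation; note that `brms` reports the Bernoulli intercept on the log-odds (logit) scale:

```r
library(brms)

# Re-analyze the coin example with MCMC: 7 ones in 10 trials
coin <- data.frame(y = c(rep(1, 7), rep(0, 3)))
fit_coin <- brm(y ~ 1, data = coin, family = bernoulli())
summary(fit_coin)  # intercept on the logit scale, under brms's default prior

# First Bayesian linear regression, with the classical fit alongside
fit_classical <- lm(mpg ~ wt, data = mtcars)
fit_bayes <- brm(mpg ~ wt, data = mtcars)

coef(fit_classical)  # classical point estimates
fixef(fit_bayes)     # posterior means and 95% credible intervals
```

With flat-ish priors and this much data, the posterior means from `fixef` typically land close to the `lm` estimates, which is exactly the comparison the session works through.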
Break – 01:00:00
Session 5 – 02:00:00 – Bayesian Linear Regression
This session provides in-depth coverage of Bayesian linear regression. We work through multiple regression models with continuous and categorical predictors, examining the posterior distribution over regression coefficients and comparing Bayesian credible intervals with frequentist confidence intervals. MCMC diagnostics are covered in detail: trace plots showing the sampling path of each chain, effective sample size measuring how many independent samples we have, and Rhat statistics indicating convergence across chains. Participants learn to recognize when MCMC has not converged and what to do about it. We explore varying intercept and varying slope formulations with categorical predictors, understanding how these relate to interaction terms. The brms functions for visualization are introduced: mcmc_plot for posterior distributions, pp_check for posterior predictive checks. The stancode function is used to examine the underlying Stan model, helping participants understand what brms is doing behind the scenes. Through multiple worked examples with real datasets, participants develop fluency in fitting and interpreting Bayesian regression models.
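The diagnostic and visualization tools named above can be sketched as follows, again using `mtcars` as a stand-in dataset (fitting requires Stan):

```r
library(brms)

# Multiple regression with a continuous (wt) and a categorical
# (cyl, treated as a factor) predictor
fit <- brm(mpg ~ wt + factor(cyl), data = mtcars)

# Convergence diagnostics: Rhat close to 1 and large effective
# sample sizes indicate the chains agree and have mixed well
summary(fit)

# Trace plots and marginal posterior densities per parameter
plot(fit)

# Posterior intervals/densities for the regression coefficients
mcmc_plot(fit, type = "areas")

# Posterior predictive check: simulated datasets vs. observed data
pp_check(fit)

# The Stan program brms generated behind the scenes
stancode(fit)
```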
Break – 01:00:00
Session 6 – 02:00:00 – Priors and Model Comparison
This session addresses two critical aspects of Bayesian analysis: prior specification and model comparison. We begin by examining default priors in brms using prior_summary, understanding that these are weakly informative priors designed to regularize without dominating the likelihood. We then cover how to specify custom priors using set_prior, including different priors for different parameters (intercepts, slopes, variance parameters). Prior predictive checks are introduced: simulating data from the prior before seeing real data to ensure priors are sensible. Prior sensitivity analysis demonstrates how to assess whether posterior inference is robust to prior specification. The session then turns to model comparison, covering multiple approaches: WAIC (Widely Applicable Information Criterion) and LOO (Leave-One-Out cross-validation) for comparing predictive accuracy, and Bayes factors for formal hypothesis comparison. We discuss when each method is appropriate and how to interpret differences between models. Practical workflows for comparing multiple models are demonstrated, including comparing models that differ in fixed effects, interaction terms, or distributional assumptions.
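A sketch of the prior-specification and model-comparison workflow follows. The prior values are illustrative, not recommendations, and `mtcars` again stands in for real data:

```r
library(brms)

# Inspect the default priors for a model before fitting anything
get_prior(mpg ~ wt, data = mtcars)

# Custom priors (illustrative values): normal on slopes,
# Student-t on the intercept
pri <- c(
  set_prior("normal(0, 10)", class = "b"),
  set_prior("student_t(3, 20, 10)", class = "Intercept")
)

# Prior predictive check: sample from the priors alone and
# inspect what kind of data the model considers plausible
fit_prior <- brm(mpg ~ wt, data = mtcars, prior = pri,
                 sample_prior = "only")
pp_check(fit_prior)

# Fit two competing models and compare predictive accuracy
fit1 <- brm(mpg ~ wt, data = mtcars, prior = pri)
fit2 <- brm(mpg ~ wt + hp, data = mtcars, prior = pri)
prior_summary(fit1)  # the priors actually used in the fit
loo(fit1, fit2)
waic(fit1, fit2)
```

Refitting with different `pri` values and comparing the resulting posteriors is one simple form of the prior sensitivity analysis discussed in the session.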
Session 7 – 02:00:00 – Beyond Normal Linear Models
This session demonstrates how Bayesian methods easily extend beyond the restrictive assumptions of normal linear models. Classical normal linear models assume normally distributed residuals with constant variance, but these assumptions often fail with real data. We show how to relax these assumptions in brms. First, robust regression using Student-t distributions is covered: replacing the normal distribution for residuals with a t-distribution allows heavier tails and reduces sensitivity to outliers. The degrees of freedom parameter can be estimated from the data. Second, distributional regression is introduced using the bf (brmsformula) syntax to model sigma (residual standard deviation) as a function of predictors, handling heteroskedasticity directly. We examine real datasets where these extensions improve model fit, using pp_check to visualize how well models capture the data-generating process. Comparing normal, t-distributed, and heteroskedastic models using WAIC demonstrates the practical value of these extensions. This session illustrates a key advantage of Bayesian methods: the ease of modifying model assumptions to match data properties.
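The two extensions above amount to one-line changes in `brms`, which is the point of the session. A hedged sketch with `mtcars` as a stand-in dataset (fitting requires Stan):

```r
library(brms)

# Robust regression: Student-t residuals; the degrees-of-freedom
# parameter (nu) is estimated from the data
fit_robust <- brm(mpg ~ wt, data = mtcars, family = student())

# Distributional regression: model the residual standard deviation
# (sigma) as a function of wt to capture heteroskedasticity
fit_hetero <- brm(bf(mpg ~ wt, sigma ~ wt), data = mtcars)

# Baseline normal, homoskedastic model for comparison
fit_normal <- brm(mpg ~ wt, data = mtcars)

# Do the relaxed assumptions improve predictive fit?
waic(fit_normal, fit_robust, fit_hetero)
pp_check(fit_hetero)
```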
Break – 01:00:00
Session 8 – 02:00:00 – Bayesian Generalised Linear Models
This session extends the Bayesian framework to generalized linear models for non-normal response variables. We begin with binary logistic regression, the most widely applicable GLM. The logit link function is explained, connecting linear predictors to probabilities. We fit logistic regression models using brm with family = bernoulli(), interpreting coefficients on the log-odds scale and transforming to probabilities for interpretation. Model comparison for logistic models is demonstrated using LOO. We then briefly cover other GLM families to show the breadth of models available in brms: Poisson regression for count data, negative binomial regression for overdispersed counts, and ordinal regression for ordered categorical outcomes. Throughout, we emphasize that the Bayesian workflow remains the same regardless of model family: specify the model, examine MCMC diagnostics, check priors, evaluate fit via posterior predictive checks, and compare models using information criteria. Real datasets from diverse research domains illustrate these models in practice.
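As a sketch of the logistic-regression workflow, the binary transmission variable (`am`) in `mtcars` serves as an illustrative outcome; `plogis` is base R's inverse-logit function:

```r
library(brms)

# Logistic regression for a binary outcome (am: manual vs. automatic)
fit_logit <- brm(am ~ wt, data = mtcars, family = bernoulli())

# Coefficients live on the log-odds scale; plogis() (the inverse
# logit) converts a linear predictor back into a probability
b <- fixef(fit_logit)[, "Estimate"]
plogis(b["Intercept"] + b["wt"] * 3)  # P(manual) at wt = 3 (3000 lb)

# Compare predictive accuracy against an intercept-only model
fit_null <- brm(am ~ 1, data = mtcars, family = bernoulli())
loo(fit_logit, fit_null)
```

Swapping `family = bernoulli()` for `poisson()`, `negbinomial()`, or `cumulative()` gives the other GLM families covered, with the rest of the workflow unchanged.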
Break – 01:00:00
Session 9 – 02:00:00 – Bayesian Mixed Effects Models
This session introduces Bayesian approaches to mixed effects (multilevel/hierarchical) models for grouped or correlated data. We begin by explaining when mixed models are needed: repeated measures, clustered data, or any situation where observations are grouped and share common features. The varying intercept model is introduced, allowing each group to have its own baseline level while estimating an overall population mean and between-group variance. We extend this to varying slope models, where the effect of a predictor can vary across groups. The brms syntax for mixed models (using the lme4-style formula with random effects specified as (1|group) for varying intercepts and (predictor|group) for varying slopes) is covered in detail. We compare brms output with lme4/lmer output to understand similarities and differences. A key advantage of Bayesian mixed models is demonstrated: they often converge where lme4 struggles, particularly with complex random effects structures, small sample sizes, or binary/count data. Multiple real datasets with hierarchical structure are analyzed, including cases with both linear and generalized linear mixed models. The session concludes with practical guidance on specifying, fitting, and interpreting Bayesian mixed models, and pointers to further learning for more complex hierarchical structures.
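The mixed-model formula syntax described above can be sketched on synthetic grouped data (constructed here purely for illustration; fitting requires Stan):

```r
library(brms)

# Synthetic hierarchical data: 5 groups with their own baselines,
# 20 observations per group
set.seed(1)
d <- data.frame(group = rep(letters[1:5], each = 20),
                x = rnorm(100))
d$y <- 2 + rep(rnorm(5, sd = 1.5), each = 20) + 0.8 * d$x + rnorm(100)

# Varying intercepts: each group gets its own baseline
fit_vi <- brm(y ~ x + (1 | group), data = d)

# Varying intercepts and slopes: the effect of x varies by group
fit_vs <- brm(y ~ x + (x | group), data = d)

# Population-level effects, group-level SDs, and their correlation
summary(fit_vs)
```

The formulas are deliberately identical to `lme4::lmer` syntax, so the same models can be fit with `lmer` for the side-by-side comparison the session works through.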
Frequently Asked Questions
Everything you need to know about the course and registration.
When will I receive instructions on how to join?
You’ll receive an email on the Friday before the course begins, with full instructions on how to join via Zoom. Please ensure you have Zoom installed in advance.
Do I need administrator rights on my computer?
I’m attending the course live — will I also get access to the session recordings?
I can’t attend every live session — can I join some sessions live and catch up on others later?
I’m in a different time zone and plan to follow the course via recordings. When will these be available?
I can’t attend live — how can I ask questions?
Will I receive a certificate?
Still have questions?
Can’t find the answer you’re looking for? Please chat to our friendly team.