Cross-Level Interactions in Multilevel Models

PSY 8XXX: Multilevel Modeling for Organizational Research — Week 6

Author

Instructor

Published

March 31, 2026

1 Introduction

One of the most compelling features of multilevel modeling in organizational research is the ability to test cross-level interactions: does a group-level (Level 2) variable modify the relationship between an individual-level (Level 1) predictor and an outcome?

This question is everywhere in I-O psychology. For example:

  • Does team climate moderate the relationship between individual autonomy and job satisfaction?
  • Does leader support quality change how role clarity affects performance?
  • Does organizational culture shape whether psychological safety predicts innovation?

Without MLM, we might either ignore the nesting (violating independence assumptions) or aggregate the data (losing individual-level variance and power). Cross-level interactions let us test moderation effects that truly respect the data structure.

In this tutorial, we’ll build cross-level interactions from the ground up: understanding the model specification, simulating realistic data in which the interaction genuinely exists, fitting the model properly, probing the conditional effects, and producing publication-ready visualizations.

2 The Cross-Level Interaction Model

2.1 Level 1 Model

At the individual level, we regress outcome Y on predictor X:

\[Y_{ij} = \beta_{0j} + \beta_{1j} X_{ij} + \epsilon_{ij}\]

where \(\epsilon_{ij} \sim N(0, \sigma^2)\).

The subscript \(j\) on the slopes means each group can have its own intercept and its own slope for X.

2.2 Level 2 Model

Now comes the interaction: we let a Level 2 variable (group-level) predict those Level 1 slopes:

\[\beta_{0j} = \gamma_{00} + \gamma_{01} W_j + u_{0j}\] \[\beta_{1j} = \gamma_{10} + \gamma_{11} W_j + u_{1j}\]

The term \(\gamma_{11}\) is the cross-level interaction: it tells us how much the slope of X on Y changes as W increases.

2.3 Combined (Single-Equation) Form

Substituting back:

\[Y_{ij} = \gamma_{00} + \gamma_{01} W_j + \gamma_{10} X_{ij} + \gamma_{11} W_j \times X_{ij} + u_{0j} + u_{1j} X_{ij} + \epsilon_{ij}\]

Notice the key term: \(\gamma_{11} W_j \times X_{ij}\). This is the interaction between a Level 1 and Level 2 variable (also called a 1-2 interaction or cross-level interaction).

Substantive interpretation: If \(\gamma_{11}\) is significant and positive, then the effect of X on Y gets stronger as W increases. If negative, the effect weakens.
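To make this concrete before we touch data, here is a minimal sketch of what \(\gamma_{11}\) does, using hypothetical values for \(\gamma_{10}\) and \(\gamma_{11}\) (chosen to match the data-generating process in the next section):

```r
# Minimal sketch: the slope of X at a given (centered) value of W is
#   gamma_10 + gamma_11 * W_c
# Hypothetical parameter values, matching the simulation in Section 3:
gamma_10 <- 0.60   # effect of X when W is at its mean
gamma_11 <- 0.35   # cross-level interaction
slope_at <- function(w_c) gamma_10 + gamma_11 * w_c

slope_at(c(-1, 0, 1))  # returns 0.25 0.60 0.95: slope at -1 SD, mean, +1 SD of W
```

A positive \(\gamma_{11}\) fans the slope out as W rises; a negative one would compress it.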

3 Simulating Data with a True Cross-Level Interaction

Let’s create a realistic organizational dataset: 500 employees nested in 50 teams. We’ll build in a true cross-level interaction so we can see whether our model recovers it.

3.1 Story

  • Individual level: Employees’ autonomy (L1 predictor) affects job satisfaction (outcome)
  • Group level: Team climate (L2 predictor) moderates this relationship
  • The mechanism: In high-climate teams, autonomy matters more for satisfaction. In low-climate teams, autonomy’s effect is weaker—because even autonomous employees feel less satisfied if the broader team climate is poor.
Code
set.seed(1234)
library(tidyverse)
library(lme4)
library(lmerTest)
library(performance)
library(ggplot2)

# Parameters
n_teams <- 50
n_per_team <- 10
n_total <- n_teams * n_per_team

# Create team-level data
teams <- tibble(
  team_id = 1:n_teams,
  team_size = sample(8:15, n_teams, replace = TRUE),  # descriptive attribute only; we simulate 10 per team
  team_climate = rnorm(n_teams, mean = 5, sd = 1.2),  # 1-7 scale
  leader_support = rnorm(n_teams, mean = 5.5, sd = 1),
  team_autonomy = rnorm(n_teams, mean = 4.8, sd = 1.1)
)

# Create employee-level data
set.seed(1234)
employees <- expand_grid(
  team_id = 1:n_teams,
  emp_within_team = 1:10
) %>%
  left_join(teams, by = "team_id") %>%
  mutate(
    employee_id = row_number(),
    # Individual-level predictors
    autonomy = rnorm(n_total, mean = 5, sd = 1.5),
    job_complexity = rnorm(n_total, mean = 4.5, sd = 1.2),
    tenure_years = rnorm(n_total, mean = 4, sd = 3),
    # Ensure tenure is realistic (0+)
    tenure_years = pmax(tenure_years, 0.5)
  )

# Now generate the outcome WITH a true cross-level interaction
employees <- employees %>%
  mutate(
    # Mean-center the L1 predictor for easier interpretation
    autonomy_c = autonomy - mean(autonomy),
    # Mean-center the L2 moderator
    team_climate_c = team_climate - mean(team_climate),
    # Generate satisfaction with interaction
    # Base effect: autonomy has a strong effect on satisfaction
    # But that effect is moderated by team climate
    job_satisfaction =
      3.5 +                                      # Grand mean
      0.6 * autonomy_c +                         # Main effect of autonomy (L1)
      0.4 * team_climate_c +                     # Main effect of team climate (L2)
      0.35 * autonomy_c * team_climate_c +       # Cross-level interaction!!!
      0.15 * job_complexity +                    # Control variable
      0.05 * tenure_years +                      # Control variable
      rnorm(n_total, 0, 1.2),                    # Individual-level error
    # Ensure scale is reasonable (1-7)
    job_satisfaction = pmin(pmax(job_satisfaction, 1), 7)
  )

head(employees, 10)
# A tibble: 10 × 13
   team_id emp_within_team team_size team_climate leader_support team_autonomy
     <int>           <int>     <int>        <dbl>          <dbl>         <dbl>
 1       1               1        11         3.26           5.35          4.89
 2       1               2        11         3.26           5.35          4.89
 3       1               3        11         3.26           5.35          4.89
 4       1               4        11         3.26           5.35          4.89
 5       1               5        11         3.26           5.35          4.89
 6       1               6        11         3.26           5.35          4.89
 7       1               7        11         3.26           5.35          4.89
 8       1               8        11         3.26           5.35          4.89
 9       1               9        11         3.26           5.35          4.89
10       1              10        11         3.26           5.35          4.89
# ℹ 7 more variables: employee_id <int>, autonomy <dbl>, job_complexity <dbl>,
#   tenure_years <dbl>, autonomy_c <dbl>, team_climate_c <dbl>,
#   job_satisfaction <dbl>

Let’s examine the structure and verify our interaction is real:

Code
# Summary statistics
employees %>%
  summarise(
    n_employees = n(),
    n_teams = n_distinct(team_id),
    mean_satisfaction = mean(job_satisfaction),
    sd_satisfaction = sd(job_satisfaction),
    mean_autonomy = mean(autonomy),
    mean_team_climate = mean(team_climate)
  )
# A tibble: 1 × 6
  n_employees n_teams mean_satisfaction sd_satisfaction mean_autonomy
        <int>   <int>             <dbl>           <dbl>         <dbl>
1         500      50              4.28            1.58          5.00
# ℹ 1 more variable: mean_team_climate <dbl>
Code
# Quick look at variation
employees %>%
  group_by(team_id) %>%
  summarise(
    team_climate = first(team_climate),
    mean_satisfaction = mean(job_satisfaction),
    n = n()
  ) %>%
  head(10)
# A tibble: 10 × 4
   team_id team_climate mean_satisfaction     n
     <int>        <dbl>             <dbl> <int>
 1       1         3.26              3.33    10
 2       2         5.69              4.88    10
 3       3         3.77              3.79    10
 4       4         4.98              3.84    10
 5       5         3.88              3.39    10
 6       6         6.32              4.37    10
 7       7         4.43              4.17    10
 8       8         4.15              3.84    10
 9       9         4.40              3.53    10
10      10         3.05              4.26    10

The data look realistic: 500 employees in 50 teams, team mean satisfaction tracks team climate, and we’ve engineered a genuine cross-level interaction into the data-generating process.

4 Fitting the Model in lme4

Now let’s fit the model that captures this structure. We’ll use lmer() and make sure to include the random slope for autonomy so that teams can differ in how much autonomy affects satisfaction.

Code
# Fit the full model with cross-level interaction
# We need a random slope because we're predicting it at L2
model_interaction <- lmer(
  job_satisfaction ~
    autonomy_c * team_climate_c +     # The cross-level interaction
    job_complexity +                   # Control
    tenure_years +                     # Control
    (1 + autonomy_c | team_id),        # Random intercept + random slope for autonomy
  data = employees,
  REML = FALSE                         # Use ML for comparison purposes
)

summary(model_interaction)
Linear mixed model fit by maximum likelihood . t-tests use Satterthwaite's
  method [lmerModLmerTest]
Formula: job_satisfaction ~ autonomy_c * team_climate_c + job_complexity +  
    tenure_years + (1 + autonomy_c | team_id)
   Data: employees

      AIC       BIC    logLik -2*log(L)  df.resid 
   1605.2    1647.3    -792.6    1585.2       490 

Scaled residuals: 
     Min       1Q   Median       3Q      Max 
-2.91467 -0.66427 -0.04686  0.68620  2.89145 

Random effects:
 Groups   Name        Variance Std.Dev. Corr  
 team_id  (Intercept) 0.02907  0.1705         
          autonomy_c  0.01593  0.1262   -0.29 
 Residual             1.33486  1.1554         
Number of obs: 500, groups:  team_id, 50

Fixed effects:
                           Estimate Std. Error        df t value Pr(>|t|)    
(Intercept)                 3.38780    0.23257 481.31573  14.567  < 2e-16 ***
autonomy_c                  0.48362    0.03906  30.11699  12.382 2.40e-13 ***
team_climate_c              0.32225    0.04458  48.86858   7.229 2.97e-09 ***
job_complexity              0.16236    0.04621 481.98305   3.514 0.000484 ***
tenure_years                0.03958    0.02086 478.20992   1.897 0.058419 .  
autonomy_c:team_climate_c   0.20607    0.03003  29.68642   6.863 1.36e-07 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Correlation of Fixed Effects:
            (Intr) atnmy_ tm_cl_ jb_cmp tnr_yr
autonomy_c   0.072                            
team_clmt_c  0.013 -0.004                     
job_cmplxty -0.891 -0.069 -0.004              
tenure_yers -0.404 -0.065 -0.028  0.026       
atnmy_c:t__  0.068 -0.060 -0.129 -0.087  0.015

Key output interpretation:

  • Fixed effects table: The interaction term autonomy_c:team_climate_c is significant, with an estimate of about 0.21. This is our cross-level interaction coefficient, \(\gamma_{11}\). (It is smaller than the generating value of 0.35, likely because we truncated satisfaction to the 1–7 range, which attenuates all the coefficients.)
  • Random effects: We have random intercepts (variance ≈ 0.03) and random slopes for autonomy (variance ≈ 0.02). These tell us how much teams vary in their baseline satisfaction and in how much autonomy affects satisfaction.
  • Residual: \(\sigma^2 \approx 1.33\), the individual-level error variance.

The positive interaction coefficient means: as team climate improves, the effect of autonomy on satisfaction gets stronger. This makes theoretical sense.
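As an aside, the variance components in the table imply that only a sliver of residual outcome variance sits between teams once the predictors are in the model. A quick hand computation of the implied intercept ICC (numbers copied from the Random effects table above):

```r
# Residual intraclass correlation implied by the reported variance components
tau_00 <- 0.02907   # team-level intercept variance
sigma2 <- 1.33486   # individual-level residual variance
icc <- tau_00 / (tau_00 + sigma2)
round(icc, 3)  # about 0.021 after accounting for the predictors
```

This is a conditional ICC; the unconditional one (from an intercept-only model) would be larger because team climate is doing explanatory work here.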

5 Probing the Interaction: Simple Slopes Analysis

A significant interaction is interesting, but we need to probe it to understand the pattern. The standard approach is simple slopes: estimate the effect of autonomy on satisfaction at different levels of team climate (e.g., at ±1 SD).

Code
# Extract fixed effects and variance components
fe <- fixef(model_interaction)
vc <- VarCorr(model_interaction)

# Grand means and SDs (uncentered scale)
autonomy_mean <- mean(employees$autonomy)
autonomy_sd <- sd(employees$autonomy)
team_climate_mean <- mean(employees$team_climate)
team_climate_sd <- sd(employees$team_climate)

# Define simple slopes at +/- 1 SD of the moderator (team climate).
# The slope of autonomy on satisfaction is:
#   autonomy effect = gamma_10 + gamma_11 * team_climate_c
# The model was fit on the centered scale, so evaluate the moderator at
# centered values: +/- 1 SD corresponds to team_climate_c = +/- team_climate_sd
low_climate_c <- -team_climate_sd
high_climate_c <- team_climate_sd
mean_climate_c <- 0

# Simple slope at low team climate
slope_low <- fe["autonomy_c"] + fe["autonomy_c:team_climate_c"] * low_climate_c
# Simple slope at mean team climate
slope_mean <- fe["autonomy_c"] + fe["autonomy_c:team_climate_c"] * mean_climate_c
# Simple slope at high team climate
slope_high <- fe["autonomy_c"] + fe["autonomy_c:team_climate_c"] * high_climate_c

# Standard errors (simplified; for publication use delta method)
# We'd use confint or bootstrap in practice
simple_slopes_df <- tibble(
  Team_Climate = c("Low (-1 SD)", "Mean", "High (+1 SD)"),
  Autonomy_Effect = c(slope_low, slope_mean, slope_high),
  Interpretation = c(
    "Weak autonomy effect in poor climate",
    "Moderate autonomy effect",
    "Strong autonomy effect in good climate"
  )
)

print(simple_slopes_df)
# A tibble: 3 × 3
  Team_Climate Autonomy_Effect Interpretation                        
  <chr>                  <dbl> <chr>                                 
1 Low (-1 SD)            0.217 Weak autonomy effect in poor climate  
2 Mean                   0.484 Moderate autonomy effect              
3 High (+1 SD)           0.751 Strong autonomy effect in good climate

What this tells us: in high-climate teams, autonomy has a much stronger effect on satisfaction (≈ 0.75) than in low-climate teams (≈ 0.22). This is our cross-level moderation in action.
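The comment in the code about the delta method can be made concrete. The variance of a simple slope \(\gamma_{10} + \gamma_{11} w\) is \(Var(\gamma_{10}) + w^2 Var(\gamma_{11}) + 2w\,Cov(\gamma_{10}, \gamma_{11})\). A sketch, with an illustrative 2×2 matrix standing in for the relevant block of vcov(model_interaction):

```r
# Delta-method SE for a simple slope gamma_10 + gamma_11 * w_c.
# V is the 2x2 covariance block for (autonomy_c, autonomy_c:team_climate_c);
# in practice, extract it from vcov(model_interaction). Values here are illustrative.
simple_slope_se <- function(V, w_c) {
  sqrt(V[1, 1] + w_c^2 * V[2, 2] + 2 * w_c * V[1, 2])
}

V_demo <- matrix(c(0.0015, -0.0001,
                  -0.0001,  0.0009), nrow = 2, byrow = TRUE)
simple_slope_se(V_demo, w_c = 0)    # SE at the mean of W: sqrt(0.0015), about 0.039
simple_slope_se(V_demo, w_c = 1.2)  # SE at +1 SD, if the SD of W is 1.2
```

Dividing each simple slope by its SE gives the t statistic at that value of W; Preacher et al. (2006) describe the same machinery, extended to Johnson–Neyman regions of significance.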

6 Plotting Cross-Level Interactions

The best way to communicate an interaction is visually. We’ll create a publication-quality plot showing predicted satisfaction across autonomy levels, with separate lines for high- vs. low-climate teams.

Code
# Create a grid of predictions
pred_grid <- expand_grid(
  autonomy_c = seq(min(employees$autonomy_c), max(employees$autonomy_c), length.out = 30),
  team_climate_c = c(-team_climate_sd, 0, team_climate_sd),
  job_complexity = mean(employees$job_complexity),
  tenure_years = mean(employees$tenure_years),
  team_id = NA  # Population-level predictions (fixed effects only)
)

# Get predictions (this will use fixed effects and set random intercept/slope to 0)
pred_grid$predicted_satisfaction <- predict(
  model_interaction,
  newdata = pred_grid,
  re.form = NA,  # Population-level (no random effects)
  allow.new.levels = TRUE
)

# Label the climate conditions for clarity
pred_grid <- pred_grid %>%
  mutate(
    Climate_Label = case_when(
      abs(team_climate_c - (-team_climate_sd)) < 0.01 ~ "Low Climate (-1 SD)",
      abs(team_climate_c - 0) < 0.01 ~ "Mean Climate",
      abs(team_climate_c - team_climate_sd) < 0.01 ~ "High Climate (+1 SD)"
    )
  )

# Plot
interaction_plot <- ggplot(pred_grid, aes(x = autonomy_c, y = predicted_satisfaction,
                                           color = Climate_Label, linetype = Climate_Label)) +
  geom_line(linewidth = 1.2) +
  geom_point(size = 3, alpha = 0.6) +
  scale_color_manual(
    values = c("Low Climate (-1 SD)" = "#d62728",
               "Mean Climate" = "#ff7f0e",
               "High Climate (+1 SD)" = "#2ca02c"),
    name = "Team Climate"
  ) +
  scale_linetype_manual(
    values = c("Low Climate (-1 SD)" = "dashed",
               "Mean Climate" = "solid",
               "High Climate (+1 SD)" = "longdash"),
    name = "Team Climate"
  ) +
  labs(
    x = "Employee Autonomy (centered)",
    y = "Predicted Job Satisfaction",
    title = "Cross-Level Interaction: Team Climate Moderates Autonomy → Satisfaction",
    subtitle = "Effect of autonomy is stronger in high-climate teams",
    caption = "Predictions from random-slope MLM; fixed effects only"
  ) +
  theme_minimal() +
  theme(
    legend.position = "right",
    plot.title = element_text(face = "bold", size = 13),
    panel.grid.minor = element_blank()
  )

print(interaction_plot)

This plot is publication-ready: it clearly shows how the autonomy → satisfaction relationship varies by team climate. The steeper slope at high climate illustrates the moderation effect.

7 Effect Sizes for Cross-Level Interactions

Reporting a raw regression coefficient is fine, but for a cross-level interaction the coefficient’s scale can be hard to interpret. Aguinis and colleagues (2013) recommend gauging effect size by comparing variance explained under different model specifications.

Code
# Fit a reduced model without the interaction (for comparison)
model_no_interaction <- lmer(
  job_satisfaction ~
    autonomy_c + team_climate_c +
    job_complexity +
    tenure_years +
    (1 + autonomy_c | team_id),
  data = employees,
  REML = FALSE
)

# Likelihood ratio test
lr_test <- anova(model_no_interaction, model_interaction)
print(lr_test)
Data: employees
Models:
model_no_interaction: job_satisfaction ~ autonomy_c + team_climate_c + job_complexity + tenure_years + (1 + autonomy_c | team_id)
model_interaction: job_satisfaction ~ autonomy_c * team_climate_c + job_complexity + tenure_years + (1 + autonomy_c | team_id)
                     npar    AIC    BIC  logLik -2*log(L)  Chisq Df Pr(>Chisq)
model_no_interaction    9 1637.8 1675.7 -809.88    1619.8                     
model_interaction      10 1605.2 1647.3 -792.59    1585.2 34.591  1  4.068e-09
                        
model_no_interaction    
model_interaction    ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Code
# For effect size, one option is the proportional increase in explained
# (fitted) variance when the interaction term is added
var_fitted_no_int <- var(fitted(model_no_interaction))
var_fitted_int <- var(fitted(model_interaction))
delta_var_explained <- (var_fitted_int - var_fitted_no_int) / var_fitted_no_int

# R-squared measures (Nakagawa & Schielzeth approach)
r2_marginal <- r2(model_interaction)$R2_marginal
r2_conditional <- r2(model_interaction)$R2_conditional

tibble(
  Metric = c("Marginal R² (fixed effects only)",
             "Conditional R² (including REs)"),
  Value = c(r2_marginal, r2_conditional),
  Interpretation = c(
    "Variance explained by fixed effects",
    "Variance explained by full model"
  )
)
# A tibble: 2 × 3
  Metric                           Value Interpretation                     
  <chr>                            <dbl> <chr>                              
1 Marginal R² (fixed effects only) 0.441 Variance explained by fixed effects
2 Conditional R² (including REs)   0.467 Variance explained by full model   

Marginal R² is the variance explained by the fixed effects alone; conditional R² adds the variance captured by the random effects. To isolate the interaction’s contribution, compare these values with those from the reduced model without the interaction: a meaningful interaction shows up as a jump in marginal R².
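For intuition, the Nakagawa & Schielzeth R² values are just ratios of variance components. A hand computation with illustrative numbers chosen to roughly reproduce the table above (with random slopes the random-effects term is actually design-dependent, so use performance::r2() in practice rather than this sketch):

```r
# Marginal and conditional R2 as variance ratios (illustrative values, not
# extracted from the fitted model)
var_fixed  <- 1.100  # variance of the fixed-effects predictions (hypothetical)
var_random <- 0.065  # total random-effects variance (hypothetical)
var_resid  <- 1.330  # residual variance

r2_marginal    <- var_fixed / (var_fixed + var_random + var_resid)
r2_conditional <- (var_fixed + var_random) / (var_fixed + var_random + var_resid)
round(c(r2_marginal, r2_conditional), 3)  # roughly matches 0.441 and 0.467 above
```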

8 Common Pitfalls in Cross-Level Interactions

8.1 Centering Before Interaction

Always center your Level 1 predictor before creating the interaction term. We did this—note how we created autonomy_c. Why?

  1. Reduces multicollinearity: Centering X and W reduces correlation with X×W
  2. Interpretation: The main effect of X now represents the effect when W is at its mean (not when W = 0, which may be out of range)
Code
# Wrong way: interact uncentered variables
wrong_int <- employees$autonomy * employees$team_climate

# Right way: center first, then interact
right_int <- employees$autonomy_c * employees$team_climate_c

# Show correlation
cor_wrong <- cor(employees$autonomy, wrong_int)
cor_right <- cor(employees$autonomy_c, right_int)

tibble(
  Approach = c("Uncentered then interact", "Center then interact"),
  Correlation_with_L1 = c(cor_wrong, cor_right),
  Interpretation = c(
    "High multicollinearity (bad)",
    "Reduced multicollinearity (better)"
  )
)
# A tibble: 2 × 3
  Approach                 Correlation_with_L1 Interpretation                   
  <chr>                                  <dbl> <chr>                            
1 Uncentered then interact              0.737  High multicollinearity (bad)     
2 Center then interact                  0.0817 Reduced multicollinearity (bette…

8.2 Sufficient Level 2 Sample Size

Cross-level interactions require sufficient L2 units. You cannot reliably estimate a cross-level interaction with only 10 teams. A rule of thumb: at least 30 L2 units, preferably 50+. We have 50 teams here—good.

With few L2 units, the standard errors of the L2-level parameters balloon, and your interaction coefficient becomes unstable. Simulation studies (Maas & Hox 2005) support this.
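A rough way to see why the L2 sample size dominates: the standard error of a Level 2 coefficient shrinks approximately with the square root of the number of teams. This is a crude approximation for intuition, not a substitute for a power analysis:

```r
# Crude intuition: SE of an L2 coefficient scales roughly as 1/sqrt(number of teams)
se_ratio <- function(J_small, J_large) sqrt(J_large / J_small)
se_ratio(10, 50)  # SEs with 10 teams are ~2.24x larger than with 50 teams
```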

Code
cat("In this tutorial:\n")
cat("- L2 units (teams): ", n_distinct(employees$team_id), "\n")
cat("- L1 units per L2: ", mean(table(employees$team_id)), "\n")
cat("- Total observations: ", nrow(employees), "\n")
cat("\nThis design has SUFFICIENT power for detecting cross-level interactions.\n")
In this tutorial:
- L2 units (teams):  50 
- L1 units per L2:  10 
- Total observations:  500 

This design has SUFFICIENT power for detecting cross-level interactions.

8.3 Interpretation When Main Effects Are Involved

When you have a significant interaction, the main effect of X is conditional: it represents the effect of X when W = 0 (or at its mean if centered). This is sometimes counterintuitive.

In our output, the autonomy main effect (≈ 0.48) is the effect at mean team climate. At high climate it’s larger; at low climate it’s smaller. Always clarify this in your paper.
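A quick numeric check of this conditionality, using the fixed-effect estimates from Section 4: shifting the zero point of W by Δ shifts the reported "main effect" of X by \(\gamma_{11} \Delta\).

```r
# How the conditional main effect of autonomy moves when W's zero point moves
g10   <- 0.484  # autonomy_c estimate (effect at mean team climate)
g11   <- 0.206  # cross-level interaction estimate
delta <- 1      # recenter team climate one unit higher
g10 + g11 * delta  # returns 0.69: autonomy's "main effect" under the new centering
```

The model itself is unchanged by recentering; only which conditional slope gets the label "main effect" changes.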

9 Try It Yourself

  1. Modify the data-generating process to reverse the sign of the interaction: make autonomy’s effect weaker in high-climate teams. Refit the model. How do your simple slopes change?

  2. Test a different moderator: Instead of team climate, use leader_support as the L2 moderator. Does it also moderate the autonomy → satisfaction relationship? (It may or may not—that’s an empirical question!)

  3. Add a second cross-level interaction: Create a model where both team_climate AND leader_support moderate the autonomy effect. (Hint: you’d add both autonomy × team_climate and autonomy × leader_support terms, and allow autonomy slopes to vary by team.)

  4. Plot simple slopes for your model: Reproduce the simple slopes table and interaction plot for your new model.

  5. Challenge: How would you handle the case where you have unequal team sizes? (Some teams have 5 employees, others have 20.) Would your conclusions change? Try sampling unequal team sizes and refitting.


References

Aguinis, H., Gottfredson, R. K., & Culpepper, S. A. (2013). Best-practice recommendations for estimating cross-level interaction effects using multilevel modeling. Journal of Management, 39(6), 1490-1528.

Maas, C. J. M., & Hox, J. J. (2005). Sufficient sample sizes for multilevel modeling. Methodology, 1(3), 86-92.

Preacher, K. J., Curran, P. J., & Bauer, D. J. (2006). Computational tools for probing interactions in multiple linear regression, multilevel modeling, and latent curve analysis. Journal of Educational and Behavioral Statistics, 31(4), 437-448.