In a previous post, we covered how to use Latent Growth Modeling in R to examine change over time. In that post, we assumed a simple linear model, which is often unrealistic. Here, I am going to show how we can relax this assumption and find the best way to model change over time.
We can use exploratory analysis and previous research to decide how to model change over time. We can also compare models that treat change over time in different ways and see which one fits our data best.
If we look again at the log income in the Understanding Society data, we get this graph (see previous post for an explanation of the data and how it is structured):
ggplot(usl, aes(wave, logincome, group = pidp)) +
  geom_line(alpha = 0.01) + # add individual line with transparency
  stat_summary( # add average line
    aes(group = 1),
    fun = mean,
    geom = "line",
    size = 1.5,
    color = "red"
  ) +
  theme_bw() + # nice theme
  labs(x = "Wave", y = "Logincome") # nice labels
Exploratory visualization of change in log income over time
The graph suggests that the average change is roughly linear, with a slight downward bend.
To look at individual-level change, I also sampled 20 individuals and plotted each person's income over time.
# sample 20 ids
people <- unique(usl$pidp) %>% sample(20)
# do separate graph for each individual
usl %>%
  filter(pidp %in% people) %>% # filter only sampled cases
  ggplot(aes(wave, logincome, group = 1)) +
  geom_line() +
  facet_wrap(~pidp) + # a graph for each individual
  theme_bw() + # nice theme
  labs(x = "Wave", y = "Logincome") # nice labels
Examples of individual income trajectories over time
At the individual level, the picture is more mixed, although linear change would not be a bad approximation for quite a few people.
Keeping this in mind, we can also decide on the best way to model change over time by comparing several models and seeing which fits the data best. As a starting point, we run the linear model, which will be our reference (see the previous post for an explanation of the model and syntax):
library(tidyverse)
library(lavaan)
# first LGM
model <- 'i =~ 1*logincome_1 + 1*logincome_2 + 1*logincome_3 +
               1*logincome_4 + 1*logincome_5 + 1*logincome_6
          s =~ 0*logincome_1 + 1*logincome_2 + 2*logincome_3 +
               3*logincome_4 + 4*logincome_5 + 5*logincome_6'
fit1 <- growth(model, data = usw)
summary(fit1, standardized = TRUE)
## lavaan 0.6-8 ended normally after 45 iterations
Based on this model, log income is, on average, around 7.063 (~ £1,168) at the start of the study and goes up by 0.041 each wave (for more on the interpretation, see the previous post).
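As a quick back-of-the-envelope check, we can reproduce this interpretation ourselves by back-transforming from the log scale (a small sketch using only the two estimates quoted above; the exact numbers will depend on your output):
# back-transform the average intercept from the log scale to pounds
exp(7.063) # roughly 1168
# implied average log income at each of the six waves under the linear model
7.063 + 0.041 * 0:5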
We can also visualize the change over time implied by our model (again, check the previous post for explanations).
# predict the two latent variables
pred_lgm <- predict(fit1)
# create long data for each individual
pred_lgm_long <- map(0:5, # loop over time
                     function(x) pred_lgm[, 1] + x * pred_lgm[, 2]) %>%
  reduce(cbind) %>% # bring together the wave predictions
  as.data.frame() %>% # make data frame
  setNames(str_c("Wave ", 1:6)) %>% # give names to variables
  mutate(id = row_number()) %>% # make unique id
  gather(-id, key = wave, value = pred) # make long format
# make graph (takes a minute to plot)
pred_lgm_long %>%
  ggplot(aes(wave, pred, group = id)) + # what variables to plot?
  geom_line(alpha = 0.01) + # add a transparent line for each person
  stat_summary( # add average line
    aes(group = 1),
    fun = mean,
    geom = "line",
    size = 1.5,
    color = "red"
  ) +
  theme_bw() + # makes graph look nicer
  labs(y = "logincome", # labels
       x = "Wave")
Predicted change in log income based on linear Latent Growth Model
There are two general ways to extend this model to include non-linear change: one is to include polynomials, the other is to estimate relative change over time. We will cover both below.
Estimating non-linear LGM using polynomials
Including polynomials to model non-linear change has the same motivation as in regression modelling. A polynomial (or interaction) allows an effect to change depending on the value of a predictor. In an LGM, this means we allow the slope to become steeper or shallower as time passes, which bends the trend upwards or downwards. If we want to allow for multiple bends, we need to include multiple polynomials. Below, we include just a squared effect, modelled as a latent variable "q" (but the model can easily be extended with cubed effects and so on).
# square LGM
model <- 'i =~ 1*logincome_1 + 1*logincome_2 + 1*logincome_3 +
               1*logincome_4 + 1*logincome_5 + 1*logincome_6
          s =~ 0*logincome_1 + 1*logincome_2 + 2*logincome_3 +
               3*logincome_4 + 4*logincome_5 + 5*logincome_6
          q =~ 0*logincome_1 + 1*logincome_2 + 4*logincome_3 +
               9*logincome_4 + 16*logincome_5 + 25*logincome_6'
fit2 <- growth(model, data = usw)
summary(fit2, standardized = TRUE)
It appears that initially log income goes up by 0.071 each wave, but as time passes this trend bends downwards (because the squared effect is negative) by 0.006 each wave. The variance of "q" (0.002) is the between-person variation in non-linear change. Substantively, it tells us whether people differ in their non-linear bends over time. If it is 0, everyone follows the same non-linear trend; if it is large, individuals differ considerably in the non-linear part of their income trajectories.
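To get a sense of what these two coefficients imply, we can compute the average change relative to wave 1 by hand, plugging them into the quadratic trajectory (a quick sketch based only on the estimates quoted above):
# implied average change in log income relative to wave 1
t <- 0:5
0.071 * t - 0.006 * t^2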
Next, we plot the estimated change from this new model, using a procedure similar to the one above. The main difference is the prediction formula: we add a new term, time squared (x^2), multiplied by each person's predicted squared effect (pred_lgm2[, 3]). We also add the average line from the linear model for comparison.
# predict scores
pred_lgm2 <- predict(fit2)
# create long data for each individual
pred_lgm2_long <- map(0:5, # loop over time
                      function(x) pred_lgm2[, 1] +
                        x * pred_lgm2[, 2] +
                        x^2 * pred_lgm2[, 3]) %>%
  reduce(cbind) %>% # bring together the wave predictions
  as.data.frame() %>% # make data frame
  setNames(str_c("Wave ", 1:6)) %>% # give names to variables
  mutate(id = row_number()) %>% # make unique id
  gather(-id, key = wave, value = pred) # make long format
# make graph
pred_lgm2_long %>%
  ggplot(aes(wave, pred, group = id)) + # what variables to plot?
  geom_line(alpha = 0.01) + # add a transparent line for each person
  stat_summary( # add average line
    aes(group = 1),
    fun = mean,
    geom = "line",
    size = 1.5,
    color = "blue"
  ) +
  stat_summary(data = pred_lgm_long, # add average from linear model
               aes(group = 1),
               fun = mean,
               geom = "line",
               size = 1.5,
               color = "red",
               alpha = 0.5
  ) +
  theme_bw() + # makes graph look nicer
  labs(y = "logincome", # labels
       x = "Wave")
Comparing estimates of change using latent growth models with linear and non-linear trajectories.
In the graph, we see that the blue line (based on the model with the squared effect) starts slightly below the red line, is slightly above it in the middle, and ends somewhat lower. That said, you need to squint to see the difference. So while the squared effect is significant, it may not be of substantive interest.
Non-linear change over time using relative change
The alternative way to model non-linear change is to estimate relative change. This is similar in spirit to including dummy variables in a regression model. All we need to do is tweak the loadings of the slope latent variable: we fix only the first and last loadings, to 0 and 1, while the remaining loadings are left free and estimated from the data. The slope is then interpreted as the total amount of change from the first wave to the last, and each estimated loading tells us what proportion of that total change has occurred by that wave.
# relative change LGM
model <- 'i =~ 1*logincome_1 + 1*logincome_2 + 1*logincome_3 +
               1*logincome_4 + 1*logincome_5 + 1*logincome_6
          s =~ 0*logincome_1 + logincome_2 + logincome_3 +
               logincome_4 + logincome_5 + 1*logincome_6'
fit3 <- growth(model, data = usw)
summary(fit3, standardized = TRUE)
## lavaan 0.6-8 ended normally after 113 iterations
So, based on these results, log income increased, on average, by 0.174 between wave 1 and wave 6. If change were genuinely linear, we would expect the loadings to increase at a constant rate of 0.2 per wave (five steps from wave 1 to wave 6, out of a total change of 1, the value of the last loading). The loading for "logincome_2" should then be 0.2, the next 0.4, and so on. That is not what we see. The most change happened between waves 3 and 4, where around 30% of the total change occurred ((0.718 - 0.410) * 100). On the other hand, very little change, less than 5%, occurred between waves 5 and 6. This indicates that change is not linear. We can visualize it using a graph.
To build the graph, we first need to extract the estimated loadings, which we can do with the parameterestimates() command:
parameterestimates(fit3)
## lhs op rhs est se z pvalue ci.lower ci.upper
## 1 i =~ logincome_1 1.000 0.000 NA NA 1.000 1.000
## 2 i =~ logincome_2 1.000 0.000 NA NA 1.000 1.000
## 3 i =~ logincome_3 1.000 0.000 NA NA 1.000 1.000
## 4 i =~ logincome_4 1.000 0.000 NA NA 1.000 1.000
## 5 i =~ logincome_5 1.000 0.000 NA NA 1.000 1.000
## 6 i =~ logincome_6 1.000 0.000 NA NA 1.000 1.000
## 7 s =~ logincome_1 0.000 0.000 NA NA 0.000 0.000
# extract just the loadings of the slope
loadings <- parameterestimates(fit3) %>% # get estimates
  filter(lhs == "s", op == "=~") %>% # filter the rows we want
  .[["est"]] # extract "est" variable
# print result
loadings
## [1] 0.0000000 0.1576471 0.4102628 0.7176219 0.9575655 1.0000000
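Because the slope represents the total change between waves 1 and 6, multiplying these loadings by 0.174 gives the expected cumulative change at each wave, while successive differences of the loadings show the share of the total change occurring in each interval (a quick check that mirrors the interpretation above):
# expected cumulative change in log income at each wave
loadings * 0.174
# share of the total change occurring between consecutive waves
diff(loadings)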
We can follow a similar approach to the one before to create the long data with predicted scores from the LGM. The only difference is that we now loop over the loadings instead of the numbers 0 to 5:
# predict scores
pred_lgm3 <- predict(fit3)
# create long data for each individual
pred_lgm3_long <- map(loadings, # loop over time
                      function(x) pred_lgm3[, 1] +
                        x * pred_lgm3[, 2]) %>%
  reduce(cbind) %>% # bring together the wave predictions
  as.data.frame() %>% # make data frame
  setNames(str_c("Wave ", 1:6)) %>% # give names to variables
  mutate(id = row_number()) %>% # make unique id
  gather(-id, key = wave, value = pred) # make long format
pred_lgm3_long %>%
  ggplot(aes(wave, pred, group = id)) + # what variables to plot?
  geom_line(alpha = 0.01) + # add a transparent line for each person
  stat_summary( # add average line
    aes(group = 1),
    fun = mean,
    geom = "line",
    size = 1.5,
    color = "green"
  ) +
  stat_summary(data = pred_lgm_long, # add average from linear model
               aes(group = 1),
               fun = mean,
               geom = "line",
               size = 1.5,
               color = "red",
               alpha = 0.5
  ) +
  stat_summary(data = pred_lgm2_long, # add average from squared model
               aes(group = 1),
               fun = mean,
               geom = "line",
               size = 1.5,
               color = "blue",
               alpha = 0.5
  ) +
  theme_bw() + # makes graph look nicer
  labs(y = "logincome", # labels
       x = "Wave")
Comparing change estimates using latent growth models with linear, non-linear and relative trajectories.
The final model shows a very similar trend to the previous two. It would appear that the trajectory here does not depart much from a linear one.
Finding the best fit
How do you decide on the best model? We can compare relative fit indices, like AIC and BIC, to help with the decision (note that the models are not nested, so we cannot use the Chi-squared difference test):
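One way to get these indices in lavaan is with the fitMeasures() command; the sketch below assumes the three fitted objects estimated above (fit1, fit2 and fit3):
# compare the relative fit of the three models
sapply(list(linear = fit1, square = fit2, relative = fit3),
       function(x) fitMeasures(x, c("aic", "bic")))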
Based on AIC and BIC, the best-fitting model is the one with the squared effect. That being said, it is always important to consider whether such effects are substantively important and whether a simple linear model would really lead to different conclusions.
Conclusions
Hopefully, this gives you an idea of how to estimate non-linear LGMs, how to visualize the estimated change, and how to interpret it. Visualizing these models is always helpful, as the interpretation can get quite tricky.