The Wide and Long Data Format for Repeated Measures Data

by Karen Grace-Martin

One issue in data analysis that feels like it should be obvious, but often isn’t, is setting up your data.

The kinds of issues involved include:

  • What is a variable?
  • What is a unit of observation?
  • Which data should go in each row of the data matrix?

Answering these practical questions is one of those skills that comes with experience, especially in complicated data sets.

Even so, it’s extremely important. If the data isn’t set up right, the software won’t be able to run any of your analyses.

And in many data situations, you will need to set up the data in different ways for different parts of the analysis. This article will outline one of the issues in data setup: using the long vs. the wide data format.

The Wide Format

In the wide format, a subject’s repeated responses will be in a single row, and each response is in a separate column.

For example, in this data set, each county was measured at four time points, once every 10 years starting in 1970. The outcome variable, Jobs, indicates the number of jobs in each county. There are three predictor variables: Land Area, Natural Amenity (4=No, 3=Yes), and College, the proportion of the county population in that year that had graduated from college.

Since land area and the presence of a natural amenity don't change from decade to decade, those predictors each need only a single column per county. But both our outcome, Jobs, and one predictor, College, take different values in each year, so each requires a separate variable (column) for every year.
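To make the layout concrete, here is a minimal sketch of a wide-format data set with this structure, written in Python's pandas library rather than SPSS or SAS. The county names and every value are hypothetical stand-ins, not the data from the example; only the arrangement of the columns matters.

```python
# A sketch of the wide format: one row per county, with the repeated
# measures (Jobs, College) spread across one column per decade.
# All names and numbers below are made up for illustration.
import pandas as pd

wide = pd.DataFrame({
    "County":      ["Adams", "Baker", "Clark"],
    "LandArea":    [1250, 980, 2100],        # time-invariant: one column total
    "Amenity":     [0, 1, 1],                # time-invariant yes/no indicator
    "Jobs1970":    [12000,  8500, 22000],    # one Jobs column per decade
    "Jobs1980":    [13500,  9000, 25000],
    "Jobs1990":    [15000,  9800, 27500],
    "Jobs2000":    [16800, 10200, 30000],
    "College1970": [0.08, 0.05, 0.11],       # one College column per decade
    "College1980": [0.11, 0.07, 0.14],
    "College1990": [0.15, 0.10, 0.18],
    "College2000": [0.19, 0.13, 0.23],
})

print(wide)
```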

The Long Format

In the long format, each row is one time point per subject. So each subject (county) will have data in multiple rows. Any variables that don’t change across time will have the same value in all the rows.

You can see the same five counties' data below in the long format. Each county has four rows of data, one for each year.

All the same information is there; we've just set up the data differently.

We no longer need four columns for either Jobs or College. Instead, all four values of Jobs for each county are stacked: they're all in the Jobs column. The same is true for the four values of College.

But to keep track of which observation occurred in which year, we need to add a variable, Year.

You'll notice that the variables that didn't change from year to year, Land Area and Natural Amenity, have the same value in each of the four rows for each county. It looks strange, but it's fine to store the data this way; as long as you analyze it with the correct procedures, the analysis will take into account that these values are repeated.
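Here is the same restructuring sketched in pandas, using the hypothetical wide data frame from above (not the article's county table). pd.wide_to_long stacks the Jobs and College columns, creates the Year variable from the column suffixes, and repeats the time-invariant columns within each county.

```python
# Reshape the hypothetical wide data frame to the long format.
# pd.wide_to_long matches column names like Jobs1970 and College1970,
# stacks them, and turns the numeric suffix into a Year column.
long = (
    pd.wide_to_long(wide, stubnames=["Jobs", "College"], i="County", j="Year")
      .reset_index()
      .sort_values(["County", "Year"])
)

# Columns are now: County, Year, LandArea, Amenity, Jobs, College,
# with LandArea and Amenity repeated across each county's four rows.
print(long)
```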

A Comparison of the Two Approaches

One reason for setting up the data in one format or the other is simply that different analyses require different setups.

For example, in all software that I know of, the wide format is required for MANOVA and repeated measures procedures.

Many data manipulations are also much, much easier when the data are in the wide format.

Likewise, mixed models and many survival analysis procedures require data to be in the long format.

Beyond software requirements, each approach has analytical implications. For example, in the wide format the unit of analysis is the subject (the county), whereas in the long format the unit of analysis is each measurement occasion for each county.

The practical difference is that when the occasion is the unit of analysis, you can use each decade's college education rate as a covariate for that same decade's Jobs value. In the wide format, where the unit of analysis is the county, there is no way to do this. You can use any of the college rates as a covariate for all years, but you can't have decade-specific covariates.
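As a sketch of what this buys you (not the article's own analysis), a mixed model fit to long-format data can use College as a time-varying covariate: each row pairs a decade's College rate with that decade's Jobs value. The example below uses Python's statsmodels with the hypothetical long data frame from above; the toy data are far too small for a meaningful fit, so treat it purely as an illustration of the setup.

```python
# Sketch only: a linear mixed model on long-format data, with College as a
# decade-specific covariate and a random intercept for each county.
# The hypothetical data above are too small for a real fit.
import statsmodels.formula.api as smf

model = smf.mixedlm(
    "Jobs ~ College + Year",      # College varies within county, by decade
    data=long,
    groups=long["County"],        # random intercept for county
)
result = model.fit()
print(result.summary())
```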

Another implication is that in the wide format, the repeated outcomes are treated as distinct, non-interchangeable variables, each with its own distribution. This makes sense in the county example, where every county was measured in the same four years. But if each county had been measured a different number of times, or in different years, this setup doesn't make much sense.

So it’s important to think about the implications before you enter data.

Luckily, converting from one to the other is generally not too difficult in most software packages. For example, you can do it with Proc Transpose in SAS or with the Restructure wizard in SPSS.
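In pandas, for example, the round trip can be sketched like this, again with the hypothetical data rather than the SAS or SPSS tools named above:

```python
# Long back to wide: move Year into the column names, then flatten them
# so each county is again a single row with Jobs1970, College1970, etc.
back_to_wide = (
    long.set_index(["County", "LandArea", "Amenity", "Year"])
        .unstack("Year")
)
back_to_wide.columns = [f"{var}{year}" for var, year in back_to_wide.columns]
back_to_wide = back_to_wide.reset_index()

print(back_to_wide)  # one row per county, repeated measures back in columns
```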

This is a good skill to practice, as it’s quite helpful to be able to switch back and forth. For example, it’s often easier to enter and manipulate data in the wide format, even if you need to analyze it in the long format.

Learn more about repeated measures analysis using mixed models in our most popular workshop, Analyzing Repeated Measures Data: GLM and Mixed Models Approaches.

"I wanted to share this recent publication with you. I would not have made it through the statistical analysis for this project without your Repeated Measures class. I am continually pulling out my class notes for other analyses. Thanks again!"

- Lisa Giencke, Plant Ecology Lead Technician, Joseph W. Jones Ecological Research Center




Comments

Ray

Hello Karen,
When you have many lines per participant, the denominator degrees of freedom will be high. Do you know if there should be an adjustment? I am running a linear mixed effects model in SPSS and my denominator df is 3500. Is this what should be reported even though I had fewer than 100 participants?


Karen Grace-Martin

Hi Ray,

Try including a random slope if you can. That will help the df.


Ray

Thanks for the response Karen. That seemed to give a more accurate representation of the degrees of freedom. Do you think there are issues if I did not include a random slope?


Karen Grace-Martin

Hi Ray,
Results are generally more accurate all around if you do include a random slope when you can.


Amanda

Hi there,

I am trying to analyse e-prime data which has 400 trials (data lines) per participant, output in long format. Could you please suggest how I might best analyse these in SPSS?

Thank you in advance for your help.

Best,
Amanda


ben

an obvious one? –> trying to figure out an intuitive way to understand partitioning variance (starting from an “empty” model): getting stuck on the partitioning part…
if you can partition the variance in your outcome variable into “sub”variances according to different levels, then why are they named with the implication that they’re random??? why not discerned variance? or sub-variance?? analyte??? sorry, i’m just learning about mixed modeling now from you and the methods consultants of ann arbor, llc,’s HLM video series.
thanks for any “random” feedback…


Karen

Hi Ben,

The idea is that only random variables have statistical distributions. So you’re measuring the variance of a distribution.

This is actually a great, great question and will require more of an answer than I can write here. I may write the answer in this month’s newsletter. Watch for it. 🙂


ben

karen! i just saw your response today (october 28th!) and i don’t see a newsletter archive (or blogs linked to by date…) that would allow me to go back and see what you elaborated on! how can i find the perspective i’ve waited so long for?! other than a few marbles, what am i missing???


Tara

Hello! I am requesting some more information regarding wide and long formats in SPSS and SAS. If you could point me in that direction, I would appreciate it!

Thank you!


Lauren

Hey Karen! Your multilevel data workshop was so helpful to me (as is The Analysis Factor in general!) I have a question: what if I have two separate within-subjects factors nested within each other? For instance, I have four time periods nested within three conditions, which are nested within participants (that is, participants did all three conditions, four times each). I’m struggling with how to shape my data with two within-subjects factors. Should I then have 12 lines of data per participant (each participant did three conditions x four time periods nested within each condition)? Or should I have four lines of data per participant (one line for each time period, and then one column for each condition)? What do you think?


Karen

Hi Lauren,

Thanks for the kind words.

You need 12 lines of data per participant. You’ll then have two variables indicating the exact trial. One will be condition and the other will be time period. It is often helpful to just have a unique identifier 1-12 for each trial. You may or may not need it in the analysis, but I tend to prefer to have it available if needed.

