Empirical Project 11 Working in R

Download the code

To download the code chunks used in this project, right-click on the download link and select ‘Save Link As…’. You’ll need to save the code download to your working directory, and open it in RStudio.

Don’t forget to also download the data into your working directory by following the steps in this project.

Getting started in R

For this project you will need the following packages: tidyverse, readxl, knitr, and psych.

If you need to install any of these packages, run the following code:

install.packages(c("tidyverse", "readxl", "knitr", "psych"))

You can import the libraries now, or when they are used in the R walk-throughs below.

library(tidyverse)
library(readxl)
library(knitr)
library(psych)

Part 11.1 Summarizing the data

Learning objectives for this part

  • construct indices to measure attitudes or opinions
  • use Cronbach’s alpha to assess indices for internal consistency
  • practise recoding and creating new variables.

We will be using data collected from an internet survey sponsored by the German government.

First, download the survey data and documentation:

  1. While contingent valuation methods can be useful, they also have shortcomings. Read Section 5 of the paper ‘Introduction to economic valuation methods’ (Pages 16–19), and explain which limitations you think apply particularly to the survey we are looking at. You may also find it useful to look at Table 2 of that paper, which compares stated-preference with revealed-preference methods.

Before comparing the two question formats (dichotomous choice (DC) and two-way payment ladder (TWPL)), we will first compare the people assigned to each format to see whether they are similar in demographic characteristics and in attitudes towards related topics (such as beliefs about climate change and the need for government intervention). If the groups are vastly dissimilar, then any observed differences in answers between the groups might be due to differences in attitudes and/or demographics rather than to the question format.

Likert scale
A numerical scale (usually ranging from 1–5 or 1–7) used to measure attitudes or opinions, with each number representing the individual’s level of agreement or disagreement with a particular statement.

Attitudes were assessed using a 1–5 Likert scale, where 1 = strongly disagree and 5 = strongly agree. The questions were not all asked in a consistent direction, so an answer of ‘strongly agree’ might indicate high climate change scepticism for one question but low scepticism for another. In order to combine these questions into an index, we need to recode (in this case, reverse-code) some of the variables.
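
Reverse-coding a 1–5 Likert item simply maps each value x to 6 − x, so 1 becomes 5 and 5 becomes 1. A minimal sketch, using a hypothetical vector x rather than one of the survey variables:

x <- c(1, 2, 3, 4, 5)
6 - x  # returns 5 4 3 2 1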

  1. Recode or create the variables as specified:

| Original value | New value |
|---------------:|----------:|
|              1 |        48 |
|              2 |        72 |
|              3 |        84 |
|              4 |       108 |
|              5 |       156 |
|              6 |       192 |
|              7 |       252 |
|              8 |       324 |
|              9 |       432 |
|             10 |       540 |
|             11 |       720 |
|             12 |       960 |
|             13 |     1,200 |
|             14 |     1,440 |

Figure 11.1 WTP survey categories (original value) and euro amounts (new value).

R walk-through 11.1 Importing data and recoding variables

Before importing data in Excel or .csv format, open it in a spreadsheet program (such as Excel) to make sure you understand its structure and to check whether the read_excel function will need any additional options to import the data correctly. In this case, the data is in a worksheet called ‘Data’, there are no missing values to worry about, and the first row contains the variable names, so we can import the data using the read_excel function without any additional options.

library(tidyverse)
library(readxl)
library(knitr)

# Set your working directory to the correct folder.
# Insert your file path for 'YOURFILEPATH'.
setwd("YOURFILEPATH")

WTP <- read_excel(
  "Project 11 datafile.xlsx", sheet = "Data")

Reverse-code variables

The first task is to recode variables related to the respondents’ views on certain aspects of government behaviour and attitudes about global warming (cog_2, cog_5, scepticism_6, and scepticism_7). This coding makes the interpretation of high and low values consistent across all questions, since the survey questions do not have this consistency.

To recode all of these variables in one go, we use the pipe operator (%>%) together with the mutate_at function, which applies the same recoding to several variables at once. (For a more detailed introduction to piping, see the University of Manchester’s Econometric Computing Learning Resource.) Note that even though the value of 3 stays the same for these variables, for the recode function to work properly we have to specify how each new value corresponds to a previous value.

WTP <- WTP %>%
  mutate_at(c("cog_2", "cog_5", 
    "scepticism_6", "scepticism_7"),
    funs(recode(., "1" = 5, "2" = 4, "3" = 3, 
      "4" = 2, "5" = 1)))

Create new variables containing WTP amounts

Although we could employ the same technique as above to recode the value for the minimum and maximum willingness to pay variables, an alternative is to use the merge function. This function allows us to combine two dataframes via values given in a particular variable.

We start by creating a new dataframe (category_amount) that has two variables: the original category value and the corresponding new euro amount. We then apply the merge function to the WTP dataframe and the new dataframe, specifying the variables that link the data in each dataframe together (by.x indicates which variable in the first dataframe, here WTP, is to be matched to by.y, the variable in the second dataframe, here category_amount). We also use the all.x = TRUE option to keep all observations, otherwise the merge function will drop any observations with missing values for the WTP_plmin and WTP_plmax variables. Finally we have to rename the column of the merged new values to something more meaningful (WTP_plmin_euro and WTP_plmax_euro respectively).

# Vector containing the Euro amounts
wtp_euro_levels <- c(48, 72, 84, 108, 156, 192, 252, 324, 
  432, 540, 720, 960, 1200, 1440)

# Create mapping dataframe
category_amount <- data.frame(original = 1:14, 
  new = wtp_euro_levels)

# Create a new variable for the minimum WTP
WTP <- merge(WTP, category_amount, 
  by.x = "WTP_plmin", by.y = "original", 
  all.x = TRUE) %>%
  rename(., "WTP_plmin_euro" = "new")

# Create a new variable for the maximum WTP
WTP <- merge(WTP, category_amount, 
  by.x = "WTP_plmax", by.y = "original", all.x = TRUE) %>%
  rename(., "WTP_plmax_euro" = "new")
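
Since the category values run from 1 to 14, a more direct alternative (an aside, not part of the original walk-through) is simple vector indexing into wtp_euro_levels; the _euro2 names below are hypothetical, chosen so as not to overwrite the merged columns:

# Sketch: map each category straight to its euro amount by indexing.
# Missing categories (NA) propagate to NA automatically.
WTP$WTP_plmin_euro2 <- wtp_euro_levels[WTP$WTP_plmin]
WTP$WTP_plmax_euro2 <- wtp_euro_levels[WTP$WTP_plmax]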

  1. Create the following indices, giving them an appropriate name in your spreadsheet (make sure to use the reverse-coded variable wherever relevant):

R walk-through 11.2 Creating indices

We can create all of the required indices in three steps using the rowMeans function. In each step we use the cbind function to join the required variables (columns) together as a matrix. As the data is stored as a single observation per row, the index value is the average of the values in each row of this matrix, which we calculate using the rowMeans function.

WTP <- WTP %>%
  # Ensure subsequent operations are applied by row
  rowwise() %>%
  mutate(., climate = rowMeans(cbind(
    scepticism_2, scepticism_6, scepticism_7))) %>%
  mutate(., gov_intervention = rowMeans(cbind(
    cog_1, cog_2, cog_3, cog_4, cog_5, cog_6))) %>%
  mutate(., pro_environment = rowMeans(cbind(
    PN_1, PN_2, PN_3, PN_4, PN_6, PN_7))) %>%
  # Return the dataframe to the original format
  ungroup()

Cronbach’s alpha
A measure used to assess the extent to which a set of items is a reliable or consistent measure of a concept. This measure ranges from 0–1, with 0 meaning that all of the items are independent of one another, and 1 meaning that all of the items are perfectly correlated with each other.

When creating indices, we may want to check whether each item used in the index measures the same underlying concept of interest (known as reliability or consistency). There are two common ways to assess reliability: look at the correlations between the items in the index, or use a summary measure called Cronbach’s alpha, which is widely used in the social sciences. We will be calculating and interpreting both of these measures.

Cronbach’s alpha is a way to summarize the correlations between many variables, and ranges from 0 to 1, with 0 meaning that all of the items are independent of one another, and 1 meaning that all of the items are perfectly correlated with each other. While higher values of this measure indicate that the items are closely related and therefore measure the same concept, if values are very close to (or equal to) 1 we might be concerned that our index contains redundant items (for example, two items that tell us the same information, so we would only want to use one of them, but not both). You can read more about this in the paper ‘Using and interpreting Cronbach’s Alpha’.

  1. Calculate correlation coefficients and interpret Cronbach’s alpha:

|                    | exaggeration | not.human.activity | no.evidence |
|--------------------|:------------:|:------------------:|:-----------:|
| exaggeration       |      1       |                    |             |
| not.human.activity |              |         1          |             |
| no.evidence        |              |                    |      1      |

Figure 11.2 Correlation table for survey items on climate change scepticism: Climate change is exaggerated (exaggeration), Human activity is not the main cause of climate change (not.human.activity), No evidence of global warming (no.evidence).

R walk-through 11.3 Calculating correlation coefficients

Calculate correlation coefficients and Cronbach’s alpha

We covered calculating correlation coefficients in R walk-through 10.1. In this case, since there are no missing values we can use the cor function without any additional options.

For the questions on climate change:

cor(cbind(WTP$scepticism_2, WTP$scepticism_6, 
  WTP$scepticism_7))
##           [,1]      [,2]      [,3]
## [1,] 1.0000000 0.3904296 0.4167478
## [2,] 0.3904296 1.0000000 0.4624211
## [3,] 0.4167478 0.4624211 1.0000000

For the questions on government behaviour:

cor(cbind(WTP$cog_1, WTP$cog_2, WTP$cog_3, 
  WTP$cog_4, WTP$cog_5, WTP$cog_6))
##           [,1]      [,2]       [,3]      [,4]       [,5]      [,6]
## [1,] 1.0000000 0.2509464 0.32358783 0.6823385 0.28925672 0.4141992
## [2,] 0.2509464 1.0000000 0.11761093 0.2771883 0.40794667 0.0828661
## [3,] 0.3235878 0.1176109 1.00000000 0.3347662 0.01818617 0.3128608
## [4,] 0.6823385 0.2771883 0.33476619 1.0000000 0.27424993 0.4597244
## [5,] 0.2892567 0.4079467 0.01818617 0.2742499 1.00000000 0.1045843
## [6,] 0.4141992 0.0828661 0.31286082 0.4597244 0.10458434 1.0000000

For the questions on personal behaviour:

cor(cbind(WTP$PN_1, WTP$PN_2, WTP$PN_3, 
  WTP$PN_4, WTP$PN_6, WTP$PN_7))
##           [,1]      [,2]      [,3]      [,4]      [,5]      [,6]
## [1,] 1.0000000 0.4824823 0.4282149 0.4226534 0.4138090 0.4584007
## [2,] 0.4824823 1.0000000 0.6315015 0.4375971 0.4994126 0.6542377
## [3,] 0.4282149 0.6315015 1.0000000 0.4596711 0.5219712 0.5894731
## [4,] 0.4226534 0.4375971 0.4596711 1.0000000 0.5668642 0.3947270
## [5,] 0.4138090 0.4994126 0.5219712 0.5668642 1.0000000 0.4551294
## [6,] 0.4584007 0.6542377 0.5894731 0.3947270 0.4551294 1.0000000

Calculate Cronbach’s alpha

It is straightforward to compute Cronbach’s alpha using the alpha function from the psych package. This function calculates several reliability statistics and stores the standardized Cronbach’s alpha in $total$std.alpha.

psych::alpha(WTP[c("scepticism_2", 
  "scepticism_6", "scepticism_7")])$total$std.alpha
## [1] 0.6876079
psych::alpha(WTP[c("cog_1", "cog_2", "cog_3", 
  "cog_4", "cog_5", "cog_6")])$total$std.alpha
## [1] 0.7102249
psych::alpha(WTP[c("PN_1", "PN_2", "PN_3", 
  "PN_4", "PN_6", "PN_7")])$total$std.alpha
## [1] 0.8543827
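
As a cross-check (an aside, not part of the original walk-through), the standardized alpha can be recovered from the average inter-item correlation r of the k items using the formula alpha = k * r / (1 + (k - 1) * r). For the climate change scepticism items:

R <- cor(WTP[c("scepticism_2", "scepticism_6", "scepticism_7")])
k <- ncol(R)
r_bar <- mean(R[lower.tri(R)])  # average inter-item correlation
k * r_bar / (1 + (k - 1) * r_bar)  # approximately 0.688, as reported above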

Now we will compare characteristics of people in the dichotomous choice (DC) group and two-way payment ladder (TWPL) group (the variable abst_format indicates which group an individual belongs to). Since the groups are of different sizes, we will use percentages instead of frequencies.

  1. For each group (DC and TWPL), create tables to summarize the distribution of the following variables (a separate table for each variable):

    • gender (sex)
    • age (age)
    • number of children (kids_nr)
    • household net income per month in euros (hhnetinc)
    • membership in environmental organization (member)
    • highest educational attainment (education).

    Using the tables you have created, and without doing formal calculations, discuss any similarities/differences in demographic characteristics between the two groups.

R walk-through 11.4 Using loops to obtain summary statistics

The two different formats (DC and TWPL) are recorded in the variable abst_format, taking the values ref and ladder respectively. We will store all the variables of interest in a list called variables, and use a ‘for’ loop to calculate summary statistics for each variable and present them in a table.

variables <- list(quo(sex), quo(age),
  quo(kids_nr), quo(hhnetinc),
  quo(member), quo(education))

for (i in seq_along(variables)){
  WTP %>%
    group_by(abst_format, !!variables[[i]]) %>%
    summarize(n = n()) %>%
    mutate(freq = n / sum(n)) %>%
    select(-n) %>%
    spread(abst_format, freq) %>%
    print()
}
## # A tibble: 2 x 3
##   sex    ladder   ref
##   <chr>   <dbl> <dbl>
## 1 female  0.518 0.523
## 2 male    0.482 0.477
## # A tibble: 6 x 3
##   age     ladder    ref
##   <chr>    <dbl>  <dbl>
## 1 18 - 24 0.0949 0.0964
## 2 25 - 29 0.0830 0.0865
## 3 30 - 39 0.178  0.172 
## 4 40 - 49 0.223  0.226 
## 5 50 - 59 0.241  0.239 
## 6 60 - 69 0.180  0.181 
## # A tibble: 5 x 3
##   kids_nr                ladder     ref
##   <chr>                   <dbl>   <dbl>
## 1 four or more children 0.00988 0.00895
## 2 no children           0.646   0.657  
## 3 one child             0.204   0.176  
## 4 three children        0.0296  0.0348 
## 5 two children          0.111   0.123  
## # A tibble: 12 x 3
##    hhnetinc                  ladder     ref
##    <chr>                     <dbl>   <dbl>
##  1 1100 bis unter 1500 Euro 0.142   0.132  
##  2 1500 bis unter 2000 Euro 0.150   0.146  
##  3 2000 bis unter 2600 Euro 0.115   0.148  
##  4 2600 bis unter 3200 Euro 0.107   0.107  
##  5 3200 bis unter 4000 Euro 0.111   0.0815 
##  6 4000 bis unter 5000 Euro 0.0514  0.0497 
##  7 500 bis unter 1100 Euro  0.134   0.142  
##  8 5000 bis unter 6000 Euro 0.0277  0.0169 
##  9 6000 bis unter 7500 Euro 0.00791 0.00398
## 10 7500 und mehr            0.00395 0.00497
## 11 bis unter 500 Euro       0.0296  0.0417 
## 12 do not want to answer    0.121   0.125  
## # A tibble: 2 x 3
##   member ladder    ref
##   <chr>   <dbl>  <dbl>
## 1 no     0.923  0.914 
## 2 yes    0.0771 0.0865
## # A tibble: 6 x 3
##   education ladder    ref
##       <dbl>  <dbl>  <dbl>
## 1         1 0.0119 0.0129
## 2         2 0.0198 0.0209
## 3         3 0.342  0.328 
## 4         4 0.263  0.269 
## 5         5 0.0692 0.0686
## 6         6 0.294  0.300

The output above gives the required tables, but is not easy to read. You may want to tidy up the results, for example by translating (from German to English) and reordering the options in the household net income variable (hhnetinc).
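
A minimal sketch of that tidying step (the English labels are my own translations of the German income bands, ordered from lowest to highest):

# Translate the income bands and store them as an ordered factor.
income_order <- c(
  "bis unter 500 Euro"       = "under 500 euros",
  "500 bis unter 1100 Euro"  = "500 to under 1,100 euros",
  "1100 bis unter 1500 Euro" = "1,100 to under 1,500 euros",
  "1500 bis unter 2000 Euro" = "1,500 to under 2,000 euros",
  "2000 bis unter 2600 Euro" = "2,000 to under 2,600 euros",
  "2600 bis unter 3200 Euro" = "2,600 to under 3,200 euros",
  "3200 bis unter 4000 Euro" = "3,200 to under 4,000 euros",
  "4000 bis unter 5000 Euro" = "4,000 to under 5,000 euros",
  "5000 bis unter 6000 Euro" = "5,000 to under 6,000 euros",
  "6000 bis unter 7500 Euro" = "6,000 to under 7,500 euros",
  "7500 und mehr"            = "7,500 euros and more",
  "do not want to answer"    = "do not want to answer")

WTP <- WTP %>%
  mutate(hhnetinc = factor(income_order[hhnetinc],
    levels = unname(income_order)))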

  1. Create a separate summary table as shown in Figure 11.3 for each of the three indices you created in Question 3. Without doing formal calculations, do the two groups of individuals look similar in the attitudes specified?

|             | Mean | Standard deviation | Min | Max |
|-------------|------|--------------------|-----|-----|
| DC format   |      |                    |     |     |
| TWPL format |      |                    |     |     |

Figure 11.3 Summary table for indices.


R walk-through 11.5 Calculating summary statistics

The summarise_at function can provide multiple statistics for a number of variables in one command. Simply provide a list of the variables you want to summarize and then use the funs() option to specify the summary statistics you need. Here, we need the mean, sd, min, and max for the variables climate, gov_intervention, and pro_environment.

WTP %>%
  group_by(abst_format) %>%
  summarise_at(c("climate", "gov_intervention", 
      "pro_environment"),
    funs(mean, sd, min, max)) %>%
  # Use gather and spread functions to reformat output 
  # for aesthetic reasons
  gather(index, value, 
    climate_mean:pro_environment_max) %>%
  spread(abst_format, value) %>%
  kable(., format = "markdown", digits = 2)
|index                 | ladder|  ref|
|:---------------------|------:|----:|
|climate_max           |   5.00| 5.00|
|climate_mean          |   2.29| 2.37|
|climate_min           |   1.00| 1.00|
|climate_sd            |   0.84| 0.85|
|gov_intervention_max  |   5.00| 5.00|
|gov_intervention_mean |   3.15| 3.19|
|gov_intervention_min  |   1.00| 1.00|
|gov_intervention_sd   |   0.70| 0.66|
|pro_environment_max   |   5.00| 5.00|
|pro_environment_mean  |   3.03| 3.01|
|pro_environment_min   |   1.00| 1.00|
|pro_environment_sd    |   0.79| 0.82|

Part 11.2 Comparing willingness to pay across methods and individual characteristics

Learning objectives for this part

  • compare survey measures of willingness to pay.

Before comparing WTP across question formats, we will summarize the distribution of WTP within each question format.

  1. For individuals who answered the TWPL question:

R walk-through 11.6 Summarizing willingness to pay variables

Create column charts for minimum and maximum WTP

Before we can plot a column chart, we need to compute frequencies (the number of observations) for each of the 14 willingness to pay amounts. We do this separately for the minimum and the maximum willingness to pay.

In each case we select the relevant variable and remove any observations with missing values using the na.omit function. We can then separate the data by WTP amount (using group_by on the WTP_plmin_euro or WTP_plmax_euro variable) and obtain a frequency count using the summarize function. We also use the factor function to set this variable’s type to factor, to get the correct horizontal axis labels in the column chart.

Once we have the frequency count stored as a dataframe, we can plot the column charts.

For the minimum willingness to pay:

df.plmin <- WTP %>%
  select(WTP_plmin_euro) %>%
  na.omit() %>%
  group_by(WTP_plmin_euro) %>%
  summarize(n = n()) %>%
  mutate(WTP_plmin_euro = factor(WTP_plmin_euro, 
    levels = wtp_euro_levels))

ggplot(df.plmin, aes(WTP_plmin_euro, n)) +
  geom_bar(stat = "identity", position = "identity") + 
  xlab("Minimum WTP (euros)") + 
  ylab("Frequency") +
  theme_bw()

Figure 11.4 Minimum WTP (euros).

For the maximum willingness to pay:

df.plmax <- WTP %>%
  select(WTP_plmax_euro) %>%
  na.omit() %>%
  group_by(WTP_plmax_euro) %>%
  summarize(n = n()) %>%
  mutate(WTP_plmax_euro = factor(WTP_plmax_euro, 
    levels = wtp_euro_levels))

ggplot(df.plmax, aes(WTP_plmax_euro, n)) +
  geom_bar(stat = "identity", position = "identity") + 
  xlab("Maximum WTP (euros)") + 
  ylab("Frequency") +
  theme_bw()

Figure 11.5 Maximum WTP (euros).

Calculate average WTP for each individual

We can use the rowMeans function to obtain the average of the minimum and maximum willingness to pay.

WTP <- WTP %>%
  rowwise() %>%
  mutate(., WTP_average = rowMeans(cbind(
    WTP_plmin_euro, WTP_plmax_euro))) %>%
  ungroup()

Calculate mean and median WTP across individuals

The mean and median of this average value can be obtained using the mean and median functions, although we have to use the na.rm = TRUE option to handle missing values correctly.

mean(WTP$WTP_average, na.rm = TRUE)
## [1] 268.5345
median(WTP$WTP_average, na.rm = TRUE)
## [1] 132

Calculate correlation coefficients

We showed how to obtain a matrix of correlation coefficients for a number of variables in R walk-through 8.8. We use the same process here, storing the coefficients in an object called M.

WTP %>%
  # Create the gender variable
  mutate(gender = 
    as.numeric(ifelse(sex == "female", 0, 1))) %>%
  select(WTP_average, education, gender,
    climate, gov_intervention, pro_environment) %>%
  cor(., use = "pairwise.complete.obs") -> M

M[, "WTP_average"]
##      WTP_average        education           gender          climate 
##       1.00000000       0.13817368       0.03694972      -0.14462072 
## gov_intervention  pro_environment 
##      -0.18845205       0.18750331

  1. For individuals who answered the DC question:

R walk-through 11.7 Summarizing Dichotomous Choice (DC) variables

Create frequency table for DC_ref_outcome

We can group by costs and DC_ref_outcome to obtain the number of observations for each combination of amount and vote response. We can also recode the voting options to ‘Yes’, ‘No’, and ‘Abstain’.

WTP_DC <- WTP %>%
  group_by(costs, DC_ref_outcome) %>%
  summarize(n = n()) %>%
  na.omit() %>%
  mutate_at("DC_ref_outcome", 
    funs(recode(., 
      "do not support referendum and no pay" = "No",
      "support referendum and pay" = "Yes",
      "would not vote" = "Abstain"))) %>%
  spread(DC_ref_outcome, n)

kable(WTP_DC, format = "markdown", digits = 2)
| costs| Abstain| No| Yes|
|-----:|-------:|--:|---:|
|    48|      12| 21|  32|
|    72|      11| 30|  40|
|    84|      12| 24|  45|
|   108|       7| 35|  31|
|   156|      13| 31|  40|
|   192|      11| 25|  25|
|   252|       9| 32|  28|
|   324|      16| 41|  27|
|   432|      11| 35|  29|
|   540|       9| 31|  22|
|   720|      12| 39|  13|
|   960|      14| 28|  15|
|  1200|      11| 42|  21|
|  1440|      19| 42|  15|

Add column showing proportion voting yes or no

We can extend the table from Question 2(a) to include the proportion voting yes or no (to obtain percentages, multiply the values by 100).

WTP_DC <- WTP_DC %>%
  mutate(total = Abstain + No + Yes, 
    prop_no = (Abstain + No) / total, 
    prop_yes = Yes / total) %>%
  # Round all numbers to 2 decimal places
  mutate_if(is.numeric, funs(round(., 2)))

kable(WTP_DC, format = "markdown", digits = 2)

Make a line chart of WTP

Using the dataframe generated for Questions 2(a) and (b) (WTP_DC), we can plot the ‘demand curve’ as a scatterplot with connected points, using the geom_point and geom_line functions in ggplot. Adding scale_x_continuous changes the default labelling on the horizontal axis to display tick marks every 100 euros, making the chart easier to read.

p <- ggplot(WTP_DC, aes(y = prop_yes, x = costs)) +
  geom_point() + 
  geom_line(size = 1) +
  ylab("% Voting 'Yes'") + 
  xlab("Amount (euros)") +
  scale_x_continuous(breaks = seq(0, 1500, 100)) +
  theme_bw()

print(p)

Figure 11.6 Demand curve (in euros), DC method.

Calculate new proportions and add them to the table and chart

It is straightforward to calculate the new proportions and add them to the existing dataframe; however, we will need to reshape the data (using gather) in order to plot multiple lines on the same chart.

WTP_DC <- WTP_DC %>%
  mutate(total_ex = No + Yes, 
    prop_no_ex = No / total_ex, 
    prop_yes_ex = Yes / total_ex) %>%
  # Round all numbers to 2 decimal places
  mutate_if(is.numeric, funs(round(., 2)))

kable(WTP_DC, format = "markdown", digits = 2)

WTP_DC %>%
  select(costs, prop_yes, prop_yes_ex) %>%
  gather(Vote, value, prop_yes:prop_yes_ex) %>%
  ggplot(., aes(y = value, x = costs, color = Vote)) + 
    geom_line(size = 1) + 
    geom_point() +
    ggtitle("'Demand curve' from DC respondents, under 
      different treatments for 'Abstain' responses.") +
    scale_color_manual(values = c("blue", "red"), 
      labels = c("counted as no", "excluded")) +
    ylab("% voting 'yes'") + 
    xlab("Costs (euros)") +
    theme_bw()

Figure 11.7 Demand curve from DC respondents, under different treatments for ‘Abstain’ responses.

  1. Compare the mean and median WTP under both question formats:

| Format | Mean | Standard deviation | Number of observations |
|--------|------|--------------------|------------------------|
| DC     |      |                    |                        |
| TWPL   |      |                    |                        |

Figure 11.8 Summary table for WTP.


R walk-through 11.8 Calculating confidence intervals for differences in means

Calculate the difference in means, standard deviations, and number of observations

We first create two vectors that will contain the WTP values for each of the two question methods. For the DC format, willingness to pay is recorded in the costs variable, so we select all observations where the DC_ref_outcome variable indicates the individual voted ‘yes’ and drop any missing observations. For the TWPL format we use the WTP_average variable that we created in R walk-through 11.6.

DC_WTP <- WTP %>% subset(
  DC_ref_outcome == "support referendum and pay") %>%
  select(costs) %>%
  filter(!is.na(costs)) %>%
  as.matrix()

# Print out the mean, sd, and count
cat(sprintf("DC Format - mean: %.1f, 
  standard deviation %.1f, count %d\n",
  mean(DC_WTP), sd(DC_WTP), length((DC_WTP))))
## DC Format - mean: 348.2, standard deviation 378.6, count 383
TWPL_WTP <- WTP %>%
  select(WTP_average) %>%
  filter(!is.na(WTP_average)) %>%
  as.matrix()

cat(sprintf("TWPL Format - mean: %.1f, 
  standard deviation %.1f, count %d\n", 
  mean(TWPL_WTP), sd(TWPL_WTP),
  length((TWPL_WTP))))
## TWPL Format - mean: 268.5, standard deviation 287.7, count 348

Calculate 95% confidence intervals

Using the t.test function to obtain 95% confidence intervals was covered in R walk-throughs 8.10 and 10.6. As we have already separated the data for the two different question formats in Question 3(a), we can obtain the confidence interval directly. Note that conf.level specifies the confidence level of the interval, so a 95% interval requires conf.level = 0.95 (which is also the default).

t.test(DC_WTP, TWPL_WTP, conf.level = 0.95)$conf.int
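
As a rough cross-check (an aside, not part of the original walk-through), the interval can be approximated by hand from the group means, standard deviations, and sample sizes, using the normal approximation:

se <- sqrt(sd(DC_WTP)^2 / length(DC_WTP) +
  sd(TWPL_WTP)^2 / length(TWPL_WTP))
(mean(DC_WTP) - mean(TWPL_WTP)) + c(-1, 1) * qnorm(0.975) * se
# roughly 31 to 128 euros, given the summary statistics reported above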

Calculate median WTP for the DC format

In R walk-through 11.6 we obtained the median WTP for the TWPL format (132 euros). We now obtain the median WTP for the DC format.

median(DC_WTP)
## [1] 192