## Content

### How biases can arise in sampling

There is more than one way to select a sample badly, and great care is needed in both the design and the practical implementation of a sampling process to avoid biases arising. A sampling plan might be well designed, but if the plan is impractical in the field, the final sample will not be representative: it may be biased.

#### Convenience samples

Some methods of obtaining participants are particularly bad; relying on volunteers is one of them. This can result in strong bias. Often people who volunteer for a sample are more likely to have a vested interest in the topic of the survey than people who don't volunteer.

Recently, internet companies have begun recruiting people to join online survey panels; these are groups of people willing to answer online surveys, usually for some pay. When these people sign up for the panel, they do not have knowledge of the topics of the surveys they might be invited to participate in. However, they have agreed to answer surveys for reward, and this makes them a particular subset of the general population, and not a random selection.

Exercise 4

You will often see invitations to participate in surveys on the television and on the internet. An example is an online poll for the magazine Cosmopolitan, which invited readers to take part in a survey with the title 'Do you abuse prescription drugs?'

1. What do you think motivates people to participate in such polls, and who would be interested in being a part of this survey?
2. What kind of biases might arise in the results of this survey?

#### Problems with the sample frame

Obtaining a suitable sample frame is important for avoiding biases in the results obtained from a sample chosen from that frame. This can be tricky, as discussed in the previous section; the population of interest might not be static in space or in time. Simply selecting a sample where it is convenient may be logistically easier, but it might not give a fair representation of the population of interest.

#### Example: 1936 US presidential election (The Literary Digest poll)

In the presidential election held in the United States in 1936, the candidates were the incumbent President Franklin Roosevelt (Democrat) and Alf Landon (Republican).

The Literary Digest was a popular and widely read weekly magazine that ran a poll to predict the winner of the presidential race, and had done so correctly from 1920 to 1932. In 1936, The Literary Digest mailed a questionnaire to 10 million people to ask about their voting intentions. This extraordinary number of people included readers of The Literary Digest, registered car owners and people listed in the phone book. In one of the largest surveys ever (although not The Literary Digest's largest), 2.4 million voters replied. The response rate — the percentage of people responding of those invited to participate — was $$\dfrac{2\ 400\ 000}{10\ 000\ 000}$$, or 24%. The Literary Digest claimed 'The country will know to within a fraction of one per cent the actual popular vote of forty million.' The prediction that The Literary Digest made based on their survey was that Franklin Roosevelt would receive only 43% of the vote; Landon was predicted to win in a landslide. As you may know, Roosevelt won — he obtained 62% of the vote of around 40 million voters.

The failure of The Literary Digest's poll was an embarrassment, and The Literary Digest subsequently went out of business; eventually its subscriber list was bought by Time magazine.

Why did The Literary Digest get the result so wrong? One problem was that the sample frame — the set of lists of names from which they recruited voters — was biased. Magazine readers, registered car owners and telephone subscribers tended to be relatively wealthy, and the wealthy at this time (during the Great Depression) tended to be Republican voters. This is an example of a biased sample frame. The large number of people responding to the survey did not guarantee that the result would be accurate.

You might also wonder why The Literary Digest failed in 1936 when its reputation had been built on successful predictions of the results of earlier presidential elections. One reason is that economic conditions in the US were in decline, and in 1936 voting patterns related more strongly to economic circumstances than they had in the past. Biases in the sample frame used mattered less in earlier elections.

#### Example: 1936 US presidential election (American Institute of Public Opinion)

In 1936, the American Institute of Public Opinion also carried out a poll asking voters about their intentions in the upcoming presidential election. The institute's founder, George Gallup, understood that a very large sample would not necessarily provide an accurate result. Gallup had worked out what kinds of personal characteristics (including state, urban/rural residence, gender, age and income) related to voting patterns, and used these in the design of his sample. He set quotas for the numbers of individuals needed for each type of respondent, so that the number surveyed would reflect the population distribution. Gallup's method of filling quotas is not a random sampling method.
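Quota sampling can be illustrated with a short sketch. The quota categories, their sizes and the stream of candidate respondents below are all invented for illustration; Gallup's actual quotas were far more detailed:

```python
from collections import Counter
import random

# Hypothetical quotas: how many respondents of each type an interviewer
# must obtain. The categories and counts are invented for illustration.
quotas = {("urban", "male"): 3, ("urban", "female"): 3,
          ("rural", "male"): 2, ("rural", "female"): 2}

def quota_sample(candidates, quotas):
    """Accept candidates as they arrive until every quota is filled.
    Note that this is NOT random sampling: whoever turns up first
    within a category is taken."""
    filled = Counter()
    sample = []
    for person in candidates:
        key = (person["residence"], person["gender"])
        if key in quotas and filled[key] < quotas[key]:
            sample.append(person)
            filled[key] += 1
        if sum(filled.values()) == sum(quotas.values()):
            break   # all quotas met
    return sample

# An invented stream of candidate respondents.
random.seed(0)
candidates = [{"residence": random.choice(["urban", "rural"]),
               "gender": random.choice(["male", "female"])}
              for _ in range(200)]

sample = quota_sample(candidates, quotas)
print(Counter((p["residence"], p["gender"]) for p in sample))
```

The key point is visible in the code: within each quota cell, whichever candidates happen to arrive first are accepted, so the selection inside each cell is not random even though the cell totals match the population distribution.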

On 31 October 1936, Gallup reported the results of his final poll in his syndicated newspaper column 'America Speaks'; he predicted that Roosevelt would win the election with 56% of the vote. This prediction was 6% lower than the actual result, but much closer than The Literary Digest's prediction and, importantly, a correct prediction of a win for Roosevelt. Gallup's result was based on the responses of around 50 000 voters.

Why did the American Institute of Public Opinion's poll do better than The Literary Digest's poll? First, they asked about voting patterns in face-to-face interviews as well as by mail; they employed hundreds of interviewers across the country with quotas to fill, and so the survey did not suffer the same biases as The Literary Digest's poll. Second, the intention was to avoid bias in the sampling of individuals to be interviewed, although details about how this was carried out are sketchy.

#### Example: 1936 US presidential election (The Literary Digest versus Gallup)

George Gallup was relatively unknown before the 1936 presidential election. He had studied the polling methods used by The Literary Digest and thought he could do better. In a bold move, Gallup predicted what The Literary Digest's poll would itself predict, before The Literary Digest announced its own result! Gallup based this claim on a survey of 3000 people selected at random from the lists (the sample frame) used by The Literary Digest, collecting the information by the same method as The Literary Digest: by mail. Gallup predicted that The Literary Digest would call the result for Landon, with Roosevelt obtaining only 44% of the vote. This was only 1% higher than the prediction The Literary Digest eventually made. Gallup got enormous publicity.

Why did this American Institute of Public Opinion survey result correspond so well to The Literary Digest's result? The population that Gallup and his colleagues wanted to survey was the population of voters who were to be surveyed by The Literary Digest. For this, they had a very well-defined sample frame: the lists of magazine readers, registered car owners and telephone subscribers. Gallup sampled randomly from these lists.

#### Problems with the response rate

When sampling human populations in particular, the willingness of invited participants to respond to the invitation and participate in the study can be an important issue. Increasingly, targeted potential respondents are unwilling to participate.

The response rate is the percentage of targeted potential respondents that actually respond. In sampling from a human population, there are many reasons why a sampled unit might not participate in the study. If the details in the sample frame are incorrect, the individual might not be able to be contacted. Contact details might be correct, but busy individuals may be hard to find. People might refuse to participate for a wide range of reasons: health, ethical issues relating to the study, lack of interest or insufficient time. This is an important issue in human surveys; it can be less problematic in surveys of other entities.

#### Example: 1936 US presidential election (response rates)

A common criticism of the 1936 Literary Digest poll is the low response rate of 24%. In Gallup's news report on 31 October 1936, where he gave the result of the final pre-election poll by the American Institute of Public Opinion, he stated that 'The number of ballots distributed in the poll was 312 551.' The final sample size for this poll is reported as 'around 50 000' (see the textbook listed in the References section).

There is little information available about the response rate for the American Institute of Public Opinion poll. Lawrence E. Benson, an associate of the American Institute of Public Opinion, reported a $$17.3\%$$ response rate for the institute's mail surveys in 1936. It seems that the response rate to the American Institute of Public Opinion election poll must have been lower than that for the Literary Digest poll. Gallup would have needed about 75 000 responses for a response rate comparable to that of The Literary Digest.
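The comparison of the two response rates can be reproduced from the figures quoted above (taking Gallup's sample size as exactly 50 000 for the purpose of the calculation):

```python
# Figures quoted above for the two 1936 polls.
digest_mailed, digest_returned = 10_000_000, 2_400_000
gallup_mailed, gallup_returned = 312_551, 50_000   # 'around 50 000'

digest_rate = digest_returned / digest_mailed * 100   # 24%
gallup_rate = gallup_returned / gallup_mailed * 100   # about 16%

# Responses Gallup would have needed to match the Digest's response rate.
needed = digest_rate / 100 * gallup_mailed            # about 75 000

print(f"Literary Digest: {digest_rate:.0f}%, Gallup: {gallup_rate:.0f}%")
print(f"Responses needed for a comparable rate: {needed:.0f}")
```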

Why is The Literary Digest's response rate so often highlighted as a concern? One reason is that, when there is a poor response rate, there is a potential for non-response bias. We discuss this next.

#### Non-response bias

People who have been randomly selected to be part of a survey but refuse the invitation to participate can be different from the people who agree to participate.

Non-response bias occurs when the people who respond to the survey are different, on average, from those who do not. More generally, the units in a sample that cannot be contacted and the units in the sample that can be contacted may differ in important ways that relate to the purpose of the survey.

Often, unfortunately, the possibility for non-response bias is ignored. The lower the response rate, the more scope there is for non-response bias. If a large proportion of a sample fails to respond, having a large sample will not help: the results should be regarded as unreliable.
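The scope for non-response bias can be quantified with a simple worst-case calculation. If a fraction r of a sample responds, and a proportion p of the respondents favour some option, then the proportion in favour among everyone sampled could lie anywhere from rp (if no non-respondent is in favour) up to rp + (1 - r) (if every non-respondent is in favour). A minimal sketch, using made-up figures:

```python
def worst_case_bounds(response_rate, respondent_pct):
    """Range of possible percentages in favour among ALL sampled units,
    given the percentage observed among respondents only.
    Arguments and results are percentages."""
    r = response_rate / 100
    p = respondent_pct / 100
    low = r * p * 100                  # no non-respondent in favour
    high = (r * p + (1 - r)) * 100     # every non-respondent in favour
    return low, high

# Made-up figures: 50% response rate, 60% of respondents in favour.
low, high = worst_case_bounds(50, 60)
print(f"True support could lie anywhere from {low:.1f}% to {high:.1f}%")
```

The lower the response rate, the wider this interval, which is why a poor response rate leaves so much room for non-response bias even when the sample is very large.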

Exercise 5

Consider carrying out a long-term study of drug and alcohol use in young men, where you select a random sample of boys aged 13 to 16 and intend to follow them up several times into adulthood.

1. How might you select the sample of boys?
2. What issues might arise in following these adolescents over time?
3. Might these issues relate to the outcome we are interested in?
4. What kind of biases might this introduce?

#### Example

There was a state election in Victoria in late 2010. The Age is one of the newspapers published in Victoria, owned by Fairfax Media Limited. On 12 November 2010, The Age reported the outcome of an online poll it conducted. There were three main parties in Victoria at the time: Labor, the Greens and the Liberal/National coalition. In the poll, the percentage of respondents indicating that they would vote for Labor was only two-thirds of the percentage saying they would vote for the Greens; that is, considerably less.

Exercise 6

Consider the 2010 The Age poll on voting intentions in the Victorian election. In the actual election, the vote for Labor was more than three times greater than that for the Greens.

1. There were over 27 000 respondents to the poll. Is that enough for it to be reliable?
2. Who was likely to respond?
3. The Age conceded that the poll was not scientific, but suggested that the results would disturb Labor supporters. Why was there a disclaimer?

#### Example: 1936 US presidential election (non-response bias)

The response rates for The Literary Digest's and the American Institute of Public Opinion's polls of the 1936 election were both relatively poor, allowing for the possibility of non-response bias. The purpose of both polls was to provide accurate estimates of the percentage of people voting for Roosevelt and for Landon. The Literary Digest's result was highly inaccurate; the American Institute of Public Opinion's poll was over 6% out. The voters responding to The Literary Digest's invitation to participate, in particular, tended to be Republican (Landon) voters; those who chose not to participate tended to be Democrat voters. Biases in the same direction are likely to have been operating in Gallup's pre-election poll, as his result underestimated the Democrat vote.

Scholars still debate the extent to which the spectacular failure of the Literary Digest poll was due to non-response bias versus a biased sample frame. It is likely that both were contributing factors.

Exercise 7

The Literary Digest's presidential poll of 1936 obtained responses from 2 400 000 voters from 10 000 000 sampled. Recall that 43% of the respondents indicated they would vote for Roosevelt.

1. If the poll was unbiased, what percentage vote for Roosevelt is predicted?
2. Consider how biased the survey could be. If all the non-respondents were to vote for Roosevelt, what is the predicted percentage vote for Roosevelt?
3. If all the non-respondents were to vote for Landon, what is the predicted percentage vote for Roosevelt?
4. If half of the non-respondents were to vote for Roosevelt, what is the predicted percentage vote for Roosevelt?
5. Roosevelt received 62% of the actual vote. What proportion of the non-respondents to the Literary Digest poll would have had to vote for Roosevelt so that his percentage vote among the 10 000 000 people sampled was 62%?

Next page - Content - Sampling from an infinite population