
Sampling Part 3: Sampling Methods


Sampling In Practice


In Part 1 and Part 2 of the series on sampling, we talked a lot about the theory. Now, it’s time to put it into practice.


Specific survey goals, the required level of accuracy, and available resources determine the survey design. Once the survey design is defined, we should choose the most appropriate data collection method, or a combination of methods. Each data collection method has its own implications for the sample design.


F2F (Face-to-Face) door-to-door interviews


This traditional form of data collection is still irreplaceable if we want to have the best possible control over screening, keep respondents engaged for longer questionnaires, and monitor their non-verbal behavior. From the sampling perspective, F2F can also have advantages over other data collection forms because it allows for territorial representativeness. In Part 1, we mentioned that the main characteristic of a representative sample is that all target population members have the same, or roughly the same, chance of being selected into the sample. We also mentioned that we rarely have at our disposal a register of all population members from which to choose. A list of residential addresses is typically better than other available registers such as telephone numbers, email addresses, etc.

On the other hand, the F2F door-to-door method is far more expensive and logistically demanding because it requires well-trained interviewers to travel around while also carrying data-collection equipment. One of the most popular ways to reduce logistic costs while maintaining sample representativeness is stratification. When creating a stratified sample, we split the targeted territory into smaller areas (e.g., municipalities) and randomly select respondents from those areas. The selection of the areas should reflect the structure of the population with regard to the most critical demographic characteristics such as age, gender, and type of settlement (e.g., urban/rural).
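As an illustration, the proportional allocation step of a stratified design can be sketched in a few lines of Python. The stratum names and population counts below are invented for illustration; in practice, they would come from census data:

```python
import random

# Hypothetical population counts per stratum (e.g., municipality x settlement
# type); real figures would come from the most recent census.
strata = {
    "city_A_urban": 120_000,
    "city_A_rural": 30_000,
    "city_B_urban": 80_000,
    "city_B_rural": 70_000,
}

def proportional_allocation(strata, sample_size):
    """Split the total sample size across strata in proportion to each
    stratum's share of the population."""
    total = sum(strata.values())
    return {name: round(sample_size * count / total)
            for name, count in strata.items()}

allocation = proportional_allocation(strata, sample_size=1000)
# Within each stratum, respondents (addresses) are then drawn at random,
# e.g. random.sample(address_list, allocation[stratum]).
```

With these invented counts, a sample of 1,000 splits into 400, 100, 267, and 233 interviews across the four strata, matching each stratum's population share.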


Central location sampling


In central location surveys, we also conduct the interviews in person (i.e., F2F), but instead of visiting our respondents in their homes, we find our sample in a public space. The organization of a central location survey goes as follows: we rent a space in the city center where there is a lot of foot traffic. Our interviewers then try to recruit respondents for the survey, typically using their charm and a carefully selected gift (called an incentive). The main advantage of this approach is that it allows us to test the broadest range of products, including foods and drinks. However, this possibility also brings some risks. Think twice before testing alcoholic beverages or food that contains allergens; you may find yourself in trouble. Whenever you are unsure, consult the legal department about the test material and the incentives. Central location survey costs are typically lower than those of door-to-door surveys.


Quota samples


The type of sampling that we use in central location surveys is called quota sampling. It means that we make a priori proportions of various groups (gender, age, product/category consumption levels, preferred brands, etc.) that we want to have in our sample. Typically, we create our quotas based on other sources, such as the national census or a survey done on a population-representative sample. The idea behind the quota sample is to mimic the structure of the representative sample.
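The quota logic itself is simple enough to sketch in code. A minimal screening routine, with made-up quota cells and targets, might look like this:

```python
# Hypothetical quota cells and targets; in practice these come from the
# national census or a population-representative survey.
quotas = {("male", "18-34"): 120, ("male", "35+"): 180,
          ("female", "18-34"): 130, ("female", "35+"): 170}

filled = {cell: 0 for cell in quotas}

def screen(gender, age_group):
    """Accept a walk-in respondent only if their quota cell is still open."""
    cell = (gender, age_group)
    if cell in quotas and filled[cell] < quotas[cell]:
        filled[cell] += 1
        return True   # interview this person
    return False      # politely decline: quota already full
```

Recruitment stops in each cell as soon as its target is reached, so the final sample mimics the pre-defined structure regardless of who happens to walk by.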


There are some important caveats to keep in mind when dealing with quota samples. First, we should never forget that they only mimic the population structure. Second, and more importantly, a quota sample is not a probabilistic type of sample: people are not randomly selected. This means that, strictly statistically speaking, we should not generalize based on quota samples. However, in practice, we all make generalizations using quota samples. So, yes: do the t-test and z-test on your quota sample, but be cautious about the conclusions you draw and the metrics you report. Skip the confidence intervals and other metrics that would be more appropriate for a random sample. Don't estimate prevalences or other absolute measures (expenditures, consumption volume, and similar).


Phone Interviewing


Phone interviewing or, in research jargon, CATI (computer-assisted telephone interviewing), can be a good solution for very short and straightforward questionnaires but is limited to verbal content (evaluating any visual stimulus is not possible). CATI is cheaper than F2F surveys (both door-to-door and central location) because there are no traveling costs, but it can still carry considerable cost since we have to pay and train the interviewers. From a sampling perspective, we can obtain representative data with CATI if the register we use as a sample frame is an exhaustive list of the population's phone numbers. In practice, it is much more common to conduct telephone surveys using quota samples.


Online Interviewing


Online interviewing, or CAWI (computer-assisted web interviewing), has many different forms. However, the common denominators are that respondents answer the questions themselves and use the Internet to provide the answers. Since we do not need interviewers and equipment, this is the cheapest form of data collection. Also, CAWI can be very quick, can test many different materials, and can be more convenient for hard-to-reach respondents. Because of that, CAWI has been taking precedence over the other forms of surveys.


However, it has some disadvantages as well. Generally, screening of the target population with CAWI is less reliable. In addition, we cannot capture non-verbal responses, and it's harder to keep respondents engaged and verify their understanding of the questions. Most importantly, CAWI is not an appropriate solution for people with reading difficulties, or those who have no access to devices or do not usually use the Internet. In the following text, we will discuss the primary forms of CAWI in more detail.


Opt-in (crowdsourcing polling)


In this form of CAWI, researchers embed a questionnaire form into a web page so that any visitor can participate in the survey. In addition, some popular social media sites (e.g., FB) can provide fairly detailed targeting. This form of surveying is the cheapest and can get you a ton of responses quickly. However, from the point of view of the generalizability of the results, opt-in CAWI is the weakest solution. As you can probably guess, the reason for that is the sample characteristics: opt-in surveys have a very low response rate and are usually severely biased towards an audience with a particular interest in the topic.


Panel data


Another common variation of CAWI is the online panel. Online panels are data collection platforms whose users regularly respond to questionnaires in return for a reward (money, gifts, humanitarian donations, etc.). Research agencies or specialized data collection agencies maintain their own panels. The quality of the data varies drastically between panels and depends primarily on the panel's size and its quality assurance procedures. Good maintenance of a panel requires fair relations with panelists and good detection of cheating. Unfortunately, some respondents are very skilled at passing the screening questions to get the incentive, becoming so-called “professional respondents.” Therefore, researchers have to formulate their screening questions carefully, and panel administrators should exclude those universal “consumers.”


Mobile App-based data collection


The mobile app-based data collection method recruits users through mobile applications. The method incentivizes users to participate by unlocking some of the app's paid features once they finish the survey: things like energy points in a game or an article on a news website. A mobile app-based approach can get you affordable results in a short time. In addition, the risk of having “professional respondents” in your sample is lower than in panels. However, coverage can be a weak point of this approach; it depends on the number and variety of apps with which the data collection provider has contracted.


Direct email


Similar to the phone interviews mentioned above, direct email can give you a very high-quality sample if the email register covers your population and you have a high response rate (>80%). Moreover, it allows respondents to answer at their convenience, and it's appropriate for longer questionnaires with pictures and audio-video materials.


Sampling bias


When we plan our sample, a significant cause of headaches is the response rate. Certain groups of people are generally more willing and likely to participate in surveys.

But the response rate depends drastically on the data collection method. For example, with F2F door-to-door surveying, we are more likely to reach the senior population than the young, and unemployed people or stay-at-home parents rather than people who work outside the home. On the other hand, various forms of web interviewing are more appropriate for the younger population, since they spend more time online. In practice, it is imperative to be aware of the biases related to the sampling method you use. Combining different data collection methods within a survey can sometimes be a good solution, but you should always double-check the differences in results between the methods and try to understand them. When reporting results, you should always be explicit about the data collection method you used. Unfortunately, some research agencies use cheaper data collection methods (e.g., CAWI) than the one they originally sold to the clients (e.g., F2F), which is simply business fraud.


Weighting


A common way of reducing the impact of unequal response rates is the mathematical procedure of weighting the results. This procedure gives more weight to the underrepresented groups' responses and less weight to the responses of groups that are overrepresented in the sample. We multiply each response by a number higher or lower than one, depending on whether that respondent belongs to an under- or over-represented group, respectively. A common practice is not to use weights lower than 0.3 or higher than 3. Weighting also sounds a bit shady, right?


In some cases, unfortunately, it is, but it doesn't have to be. It all depends on how it is performed and communicated. To do weighting properly, we have to use up-to-date data about the target population structure so that we know precisely how much our sample deviates from it. The table that contains the population norms is called the table of margins, and we typically construct it based on the most recent national census data. In most countries, that information is publicly available on the websites of the national institutes for statistics or demographics.
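To make the mechanics concrete, here is a minimal weighting sketch in Python. The margins and sample shares are invented for illustration; real margins would come from the census:

```python
# Hypothetical table of margins (population shares) vs. the shares actually
# achieved in the sample, by age group.
population_margins = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample_shares      = {"18-34": 0.20, "35-54": 0.35, "55+": 0.45}

def cell_weights(margins, shares, lo=0.3, hi=3.0):
    """Weight = population share / sample share, clipped to [lo, hi]
    following the common 0.3-3 rule of thumb."""
    return {cell: min(max(margins[cell] / shares[cell], lo), hi)
            for cell in margins}

weights = cell_weights(population_margins, sample_shares)
# Underrepresented 18-34 respondents get weight 1.5 (each answer counts
# 1.5 times); overrepresented 55+ respondents get a weight below 1.
```

Every answer from a respondent is then multiplied by the weight of their cell before aggregation, which is exactly the "number higher or lower than one" described above.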


But do we always need results that are nationally representative? If the survey aims to estimate the number of people with specific characteristics (e.g., how many people among the 18+ population drink soda at least weekly?), then the answer is yes. But most commonly, we are just interested in a specific group's opinion (e.g., what do 18+ weekly soda drinkers think about the new soda product?). In the latter case, we are not obliged to have a nationally representative sample. But we would like to have a sample that is representative of 18+ weekly soda drinkers! So, how do we achieve that in practice?


First, we strive to run a so-called general consumer survey on a nationally representative sample at least once a year. Then, based on the results of that extensive survey, we determine the structure of the users. We then aim to replicate that structure in all subsequent surveys so that the results represent the population of users. Finally, researchers have to communicate the weighting procedure to stakeholders and the audience. The best way to do that is to report both weighted and unweighted results.


Convenience samples


Keep in mind that surveys done on a representative sample are not the only ones with value. Sometimes our target audience is rare (e.g., surgeons, CEOs, lawyers, those who built a house or traveled to a specific destination), and we cannot achieve any representativeness. In that case, we should use a convenience sample. In practice, it typically means: “grab everyone you can.” A more structured way of collecting rare audiences is the snowball method. In snowball sampling, respondents recruit other participants for the study. A common restrictive rule is that one person cannot bring more than three additional people into the sample. We introduce this restriction to minimize community bias. Surveys conducted on convenience samples do not allow us to make strict conclusions about populations. However, they can still improve our knowledge about the targeted audience and help us make better decisions.
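The "no more than three referrals per person" rule is easy to enforce in fielding software. A minimal sketch, with hypothetical names, could be:

```python
# Cap on how many new respondents one person may recruit, to limit
# community bias in a snowball sample.
MAX_REFERRALS = 3

referrals = {}  # recruiter id -> list of people they brought in

def add_referral(recruiter, new_respondent):
    """Register a referral unless the recruiter has hit the cap."""
    brought = referrals.setdefault(recruiter, [])
    if len(brought) >= MAX_REFERRALS:
        return False  # cap reached; recruit through another chain instead
    brought.append(new_respondent)
    return True
```

Rejected referrals are not lost respondents: they can still enter the sample through a different recruiter, which keeps any single social circle from dominating the sample.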


Conclusion


If you take one thing from the whole post, it should be that all the details about the sample and the sampling procedure have to be explicitly presented. That includes the type of sample, sample size, data collection method, range of weights (if we did the weighting), and the response rate. In everyday practice, stakeholders often challenge us to adjust the sample design to time and budget limitations. Finding a “Solomonic solution” requires having both the bigger picture of the project goals and professional integrity.

