When planning an evaluation that features interviews, it is difficult to know in advance how many interviews should be conducted. The usual approach is to continue interviewing until you no longer hear new information (Olney & Barnes, Data Collection and Analysis, p. 23). However, there are times when a numeric guideline would be very useful. For example, such a guideline would help in budgeting the evaluation section of a proposal when that evaluation features qualitative data collection.

Guest et al. conducted a research project in which they analyzed transcripts of sixty interviews and found that 94% of the coded topics were identified within the first six interviews. Analysis of six additional interviews revealed only another 3% of all the coded topics eventually found across the sixty interviews. They concluded that data saturation (the point at which no additional data are being found) occurred by the time they had analyzed twelve interviews. They point out that similar evidence has been presented by Nielsen and Landauer, who found that technology usability studies uncover 80% of problems after six evaluations and 90% after twelve. In a later publication, Nielsen showed that usability testing with fifteen participants will uncover 100% of problems, but he recommends using a smaller number (“Why You Only Need to Test with Five Users,” Jakob Nielsen’s Alertbox, March 19, 2000).
Is twelve the magic number for interviews, then? Not necessarily. Guest et al. caution that their study involved highly structured interviews with members of a relatively homogeneous group, and the interview topics were familiar to the participants. With a heterogeneous group or a diffuse (vague) topic, more interviews will probably be needed. The bottom line is to select a purposeful group of interview candidates carefully; Olney and Barnes provide more details about purposeful sampling (Olney & Barnes, Data Collection and Analysis, p. 22).
The article by Guest et al. is not open access, but the abstract is freely available from the publisher: Guest, G., et al. “How many interviews are enough? An experiment with data saturation and variability.” Field Methods 18(1) (February 2006): 59–82.