FREDERICK CONRAD

Cognitive Interviewing

Blair, J. & Conrad, F.G. (2011). Sample size for cognitive interview pretesting. Public Opinion Quarterly, 75, 636-658.

Every cognitive interview pretest designer must decide how many interviews need to be conducted. With little theory or empirical research to guide the choice of sample size, practitioners generally rely on the examples of other studies and their own experience or preferences. We investigated pretest sample size both theoretically and empirically. Using a model of the relationship of sample size to question problem prevalence, detection power of the cognitive interview technique, and probability of observing a problem, we computed the sample size necessary, under varying conditions, to detect problems. Under a range of plausible values for the model parameters, we found that additional problems continued to be detected as sample size increased. We also report on an empirical study that simulated the number of problems detected at different sample sizes. Multiple outcome measures showed a strong positive relationship between sample size and problem detection; serious problems that were not detected in small samples were consistently observed in larger samples. We discuss the implications of these findings for practice and for additional research.
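The abstract's model relating sample size, problem prevalence, and detection power can be made concrete with a simple binomial calculation. The sketch below is a minimal illustration, assuming the common formulation in which a problem's chance of being observed in any single interview equals its prevalence times the technique's detection power; the parameter values and the 80 percent detection target are illustrative assumptions, not figures from the paper.

import math

def prob_detected(n, prevalence, detection_power):
    # Probability a problem is observed at least once in n independent interviews,
    # assuming a constant per-interview detection probability of prevalence * detection_power.
    p = prevalence * detection_power
    return 1.0 - (1.0 - p) ** n

def sample_size_needed(prevalence, detection_power, target=0.80):
    # Smallest n at which the problem is observed at least once with probability >= target.
    p = prevalence * detection_power
    if p <= 0:
        raise ValueError("per-interview detection probability must be positive")
    return math.ceil(math.log(1.0 - target) / math.log(1.0 - p))

# Illustrative case: a problem affecting 10% of respondents, detected half the time
# it occurs, needs about 32 interviews to be seen at least once with 80% probability.
print(sample_size_needed(prevalence=0.10, detection_power=0.50))

Under this formulation, rarer or harder-to-detect problems drive the required number of interviews up sharply, which is consistent with the abstract's observation that additional problems continue to be detected as sample size grows.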

Conrad, F.G. & Blair, J. (2009). Sources of Error in Cognitive Interviews. Public Opinion Quarterly, 73, 32-55.

Cognitive interviewing is used to identify problems in questionnaires under development by asking a small number of pretest participants to verbally report their thinking while answering the draft questions. Just as responses in production interviews include measurement error, so the detection of problems in cognitive interviews can include error. In the current study, we examine error in the problem detection of both cognitive interviewers evaluating their own interviews and independent judges listening to the full set of interviews. The cognitive interviewers were instructed to probe for additional information in one of two ways: the Conditional Probe group probed about what respondents had explicitly reported; the Discretionary Probe group probed when they felt it appropriate. Agreement about problems was surprisingly low overall, but differed by interviewing group. The Conditional Probe interviewers uncovered fewer potential problems but with higher inter-judge reliability than did the Discretionary Probe interviewers. These differences in reliability were related to the type of probes. When interviewers in either group probed beyond the content of respondents’ verbal reports, they were prone to believe the respondent had experienced a problem when the majority of judges did not believe this to be the case (false alarms). Despite generally poor performance at the level of individual verbal reports, judges reached relatively consistent conclusions across the interviews about which questions most needed repair. Some practical measures may improve the conclusions drawn from cognitive interviews, but the quality of the findings is limited by the content of the verbal reports.
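To make the abstract's notions of agreement and false alarms concrete, the hypothetical sketch below scores one interviewer's problem flags against the judges' majority verdicts for the same verbal reports. The simple proportion-agreement and false-alarm rates shown here are illustrative stand-ins, not the statistics the paper reports, and the example data are invented.

from typing import Sequence

def agreement_rate(interviewer_flags: Sequence[bool], judge_majority: Sequence[bool]) -> float:
    # Proportion of verbal reports on which the interviewer and the judge majority
    # reach the same problem/no-problem conclusion.
    pairs = list(zip(interviewer_flags, judge_majority))
    return sum(i == j for i, j in pairs) / len(pairs)

def false_alarm_rate(interviewer_flags: Sequence[bool], judge_majority: Sequence[bool]) -> float:
    # Share of the interviewer's problem flags that the majority of judges did not
    # confirm (false alarms, in the abstract's sense).
    flagged = [j for i, j in zip(interviewer_flags, judge_majority) if i]
    if not flagged:
        return 0.0
    return sum(1 for j in flagged if not j) / len(flagged)

# Hypothetical codings for five verbal reports from one interview.
interviewer = [True, True, False, True, False]
judges      = [True, False, False, False, False]
print(agreement_rate(interviewer, judges))    # 0.6
print(false_alarm_rate(interviewer, judges))  # 2 of 3 flags unconfirmed, about 0.67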

Conrad, F.G. & Blair, J. (2004). Aspects of data quality in cognitive interviews: The case of verbal reports. In S. Presser, J. Rothgeb, M. Couper, J. Lessler, E. Martin, J. Martin & E. Singer (Eds.), Questionnaire Development, Evaluation and Testing Methods. New York: John Wiley & Sons.

Cognitive interview techniques are constructed from a menu of laboratory procedures leading to many disparate techniques. However, one common thread across techniques is that they all produce verbal reports. It stands to reason that different versions of cognitive interviewing produce data and decisions that vary in their quality, but very little empirical evaluation has been conducted. We propose a research agenda for evaluating the quality of information produced by cognitive interview techniques. We propose that problem detection and, ultimately, problem repair are the fundamental purposes of the method. The quality of each should be assessed through standalone experiments that measure reliability and validity of potential problems and effectiveness of revisions in eliminating recurrence of problems. We then illustrate the kind of research we advocate with a case study that compares quality of the verbal reports in two cognitive interview techniques. One technique represents the practices of experienced cognitive interviewers. The other technique constrains interviewer probing to explicit indications of problems in respondents’ verbal reports. The results suggest that, in cognitive interviewing in general, verbal reports about answering survey questions are difficult to interpret consistently, raising concerns about their credibility. They further suggest that constraining probes to specific respondent indications of problems leads to fewer but more reliably identified problems.