FREDERICK CONRAD

Conversational Interviewing

Schober, M.F., Conrad, F.G., Dijkstra, W., & Ongena, Y. (2012). Disfluencies and gaze aversion in unreliable responses to survey questions. Journal of Official Statistics, 28, 555-582.

When survey respondents answer survey questions, they can also produce “paradata” (Couper 2000, 2008): behavioral evidence about their response process. The study reported here demonstrates that two kinds of respondent paradata – fluency of speech and gaze direction during answers – identify answers that are likely to be problematic, as measured by changes in answers during the interview or afterward on a post-interview questionnaire. Disfluent answers were less reliable than fluent answers both face to face and on the telephone, and disfluency was particularly diagnostic of unreliability face to face. Interviewers’ responsivity can affect both the prevalence and potential diagnosticity of paradata: both disfluent speech and gaze aversion were more frequent and more diagnostic in conversational interviews, where interviewers could provide clarification if respondents requested it or the interviewer judged it was needed, than in strictly standardized interviews, where clarification was not provided even if the respondent asked for it.

Schober, M.F., Conrad, F.G., & Fricker, S.S. (2004). Misunderstanding standardized language. Applied Cognitive Psychology, 18, 169-188.

Leaving the interpretation of words up to participants in standardized survey interviews, aptitude tests, and experiment instructions can lead to unintended interpretation; more collaborative interviewing methods can promote uniform understanding. In two laboratory studies (a factorial experiment and a more naturalistic investigation), respondents interpreted ordinary survey concepts like “household furniture” and “living in a house” quite differently than intended in strictly standardized interviews, when the interpretation was left entirely up to them. Comprehension was more accurate when interviewers responded to requests for clarification with nonstandardized paraphrased definitions, and most accurate when interviewers also provided clarification whenever they suspected respondents needed it.

Conrad, F.G. & Schober, M.F. (2000). Clarifying question meaning in a household telephone survey. Public Opinion Quarterly, 64, 1-28.

The techniques that survey interviewers use to clarify question meaning reflect tacit assumptions about communication. In this study, we contrast two interviewing techniques. In one, strictly standardized interviewing, interviewers leave the interpretation of questions up to respondents; this assumes that the same words will mean the same thing to most people on most occasions. In the other, conversational interviewing, interviewers say whatever it takes to make sure that questions are interpreted uniformly and as intended; this assumes that people may need to talk about what words mean in order to understand each other. Respondents from a national probability sample were interviewed twice. Each time they were asked the same ten factual questions from ongoing government surveys, five about their housing and five about their recent purchases. The first interview was strictly standardized; the second interview was standardized for half the respondents and conversational for the other half. Respondents whose second interview was conversational changed their answers from the first interview more often, and for reasons that conformed more closely to official definitions, than respondents whose second interview was standardized. This suggests that conversational interviewing improved comprehension, although it also led to longer interviews. We conclude that respondents in a national sample may misinterpret non-sensitive factual questions frequently enough that data quality can be compromised, and that such misunderstandings cannot easily be eliminated through pretesting and question rewording alone. More standardized comprehension may require less standardized interviewer behavior.

Schober, M.F. & Conrad, F.G. (1997). Does conversational interviewing reduce survey measurement error? Public Opinion Quarterly, 61, 576-602.

Standardized survey interviewing is widely advocated in order to reduce interviewer-related error (e.g., Fowler & Mangione, 1990). But Suchman and Jordan (1990, 1991) argue that standardized wording may decrease response accuracy because it prevents the conversational flexibility that respondents need in order to understand questions as survey designers intended. We propose that each of these competing positions (standardized versus flexible interviewing) may be correct under different circumstances. In particular, both standardized and flexible interviewing should produce high levels of accuracy when respondents have no doubts about how the concepts in a question map onto their circumstances; flexible interviewing, however, should produce higher response accuracy when respondents are unsure about these mappings. We demonstrate this in a laboratory experiment in which professional telephone interviewers, using either standardized or flexible interviewing techniques, asked respondents questions from three large government surveys. Respondents answered on the basis of fictional descriptions so that we could measure response accuracy. The two interviewing techniques led to virtually perfect accuracy when the concepts in the questions clearly mapped onto the fictional situations. When the mapping was less clear, flexible interviewing increased accuracy by almost 60%. This was true whether flexible respondents had requested help from interviewers or interviewers had intervened without being asked. But the improvement in accuracy came at a substantial cost: a large increase in interview duration. We propose that different circumstances may justify the use of either interviewing technique.