
FREDERICK CONRAD

Web Surveys

Conrad, F. G., Couper, M. P., Tourangeau, R., & Peytchev, A. (2010). Impact of progress indicators on task completion. Interacting with Computers, 22, 417–427.

A nearly ubiquitous feature of user interfaces is feedback on task completion, that is, a progress indicator such as the graphical bar that grows as more of the task is completed. The presumed benefit is that users will be more likely to complete the task if they see they are making progress, but it is also possible that feedback indicating slow progress will sometimes discourage users from completing the task. This paper describes two experiments that evaluate the impact of progress indicators on the completion of online questionnaires. In the first experiment, progress was displayed at different speeds throughout the questionnaire. When the early feedback indicated slow progress, abandonment rates were higher and users’ subjective experience was more negative than when the early feedback indicated faster progress. In the second experiment, intermittent feedback seemed to minimize the costs of discouraging feedback while preserving the benefits of encouraging feedback. Overall, the results suggest that when progress seems to outpace users’ expectations, feedback can improve their experience, though not necessarily their completion rates; when progress seems to lag behind what users expect, feedback degrades their experience and lowers completion rates.
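The first experiment’s manipulation can be illustrated with a short TypeScript sketch. The power-function mapping from actual to displayed progress is an assumption made here for illustration; it is one common way to make a bar run fast early and slow late (or the reverse), not necessarily the authors’ exact scheme.

    // Sketch: the displayed speed of a progress bar is decoupled from actual
    // progress, as in the fast-then-slow / slow-then-fast conditions. The
    // power-function mapping below is an illustrative assumption.
    type Pace = "constant" | "fastThenSlow" | "slowThenFast";

    function displayedProgress(answered: number, total: number, pace: Pace): number {
      const actual = answered / total; // true fraction complete, in [0, 1]
      switch (pace) {
        case "constant":
          return actual;
        case "fastThenSlow":
          return Math.pow(actual, 0.5); // early feedback outpaces true progress
        case "slowThenFast":
          return Math.pow(actual, 2);   // early feedback lags behind true progress
      }
    }

    function renderBar(fraction: number, width = 20): string {
      const filled = Math.round(fraction * width);
      return "[" + "#".repeat(filled) + "-".repeat(width - filled) + "] "
        + Math.round(fraction * 100) + "%";
    }

    // After 10 of 40 items, the three conditions look very different:
    for (const pace of ["constant", "fastThenSlow", "slowThenFast"] as const) {
      console.log(pace.padEnd(13), renderBar(displayedProgress(10, 40, pace)));
    }

After 10 of 40 items this prints roughly 25%, 50%, and 6% complete for the three conditions, which is the kind of early-fast versus early-slow feedback the experiment contrasted.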

Conrad, F. G., Schober, M. F., & Coiner, T. (2007). Bringing features of human dialogue to web surveys. Applied Cognitive Psychology, 21, 165–188.

When web survey respondents self-administer a questionnaire, what they are doing is in many ways similar to what goes on in human-human interviews. The studies presented here demonstrate that enabling web survey respondents to engage in the equivalent of clarification dialogue can improve their comprehension of questions, and thus the accuracy of their answers, much as it can in human-human interviews. In two laboratory experiments, web survey respondents (1) answered more accurately when they could obtain clarification, i.e., ground their understanding of survey questions, than when no clarification was available, and (2) answered particularly accurately with mixed-initiative clarification, where respondents could initiate clarification or the system could provide unsolicited clarification when respondents took too long to answer. Diagnosing the need for clarification based on respondent characteristics (in particular, age) proved more effective than relying on a generic model of all respondents’ need for clarification. Although clarification dialogue increased response times, respondents preferred being able to request clarification to not being able to do so. The current results suggest that bringing features of human dialogue to web surveys can exploit the advantages of both interviewer- and self-administration of questionnaires.
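A minimal browser-side sketch of the mixed-initiative pattern described above follows. The element handles, the 15-second idle threshold, and the form wiring are hypothetical stand-ins; what the sketch shows is the two initiative paths the abstract describes, a respondent-initiated request and a system-initiated offer triggered by a long pause.

    // Sketch: mixed-initiative clarification with hypothetical page elements.
    const IDLE_THRESHOLD_MS = 15_000; // assumed stand-in for "taking too long"

    function attachClarification(termEl: HTMLElement, definitionEl: HTMLElement): void {
      const show = () => { definitionEl.hidden = false; };

      // Respondent-initiated: clicking the question term reveals the definition.
      termEl.addEventListener("click", show);

      // System-initiated: volunteer the definition after a long pause.
      const timer = setTimeout(show, IDLE_THRESHOLD_MS);

      // Withdraw the pending offer once the respondent starts answering.
      termEl.closest("form")?.addEventListener("input", () => clearTimeout(timer), { once: true });
    }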

Conrad, F. G., Couper, M. P., Tourangeau, R., & Peytchev, A. (2006). Use and non-use of clarification features in web surveys. Journal of Official Statistics, 22, 245–269.

Survey respondents misunderstand questions often enough to compromise the quality of their answers. Web surveys promise to improve understanding by making definitions available to respondents when they need clarification. We explore web survey respondents’ use of clarification features in two experiments. The first experiment demonstrates that respondents rarely request definitions but are more likely to do so when they realize definitions could be helpful (i.e., definitions are available for technical terms) and when requests involve relatively little effort (i.e., just one click); respondents who obtained a definition requested more subsequent definitions when the initial one proved useful (i.e., included counter-intuitive or surprising information). In the second experiment, definitions available via mouse rollover were requested substantially more often than definitions available via clicking, suggesting that some respondents find even a single click more effort than they are willing to expend. We conclude with a discussion of interactive features in web surveys in general: when they are likely to be used and when they are likely to be useful.
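The second experiment’s contrast between rollover and click access can be sketched in the same style. The function and its parameters are hypothetical; the two event-wiring branches capture the difference in respondent effort the abstract describes.

    // Sketch: definitions accessible via mouse rollover versus via click.
    function defineTerm(termEl: HTMLElement, defEl: HTMLElement, mode: "rollover" | "click"): void {
      if (mode === "rollover") {
        // Lower effort: the definition appears as soon as the pointer hovers.
        termEl.addEventListener("mouseenter", () => { defEl.hidden = false; });
        termEl.addEventListener("mouseleave", () => { defEl.hidden = true; });
      } else {
        // Higher effort: the respondent must deliberately click the term.
        termEl.addEventListener("click", () => { defEl.hidden = !defEl.hidden; });
      }
    }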