With the growing ubiquity of cell phones, ecological momentary assessment (EMA) enables researchers to study momentary social, psychological, and affective responses to everyday life. Additionally, EMA enables researchers to acquire longitudinal data without the need for multiple lab visits. As the use of EMA in research increases, so too does the necessity of determining what constitutes valid or careless individual EMA responses to ensure the validity and replicability of findings. Currently, EMA studies consider only a participant's overall response rate when deciding on exclusion. Yet other features of an assessment can help determine whether a response is careless or implausible. Here, we examined over 18,000 EMA text message responses to individual affect items to derive a data-driven model of what constitutes a "careless response." Results from this study indicate that an overly fast time to complete items (≤1 s), an overly narrow within-assessment response variance (SD ≤ 5), and a high percentage of items falling at the mode (≥60%) are independent and reliable indicators of a careless response. Excluding careless responses such as these removes implausible positive correlations among psychometric antonyms (e.g., relaxed and anxious). Further, by identifying and removing careless responses, we also identify careless responders: participants who could be removed from group analyses. We use these results to develop and introduce an R package, EMAeval, so EMA researchers may similarly identify careless responses and responders either online during data collection or post hoc, after data collection has concluded.
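The flagging logic described above is simple enough to sketch directly. Below is a minimal Python illustration, assuming each assessment is a list of numeric affect ratings (e.g., on a 0-100 scale) with one completion time per item; the thresholds are those reported in the abstract, but the per-item reading of the ≤1 s rule, the function name, and the flag labels are illustrative assumptions, not the EMAeval API:

```python
from statistics import stdev, mode

# Thresholds reported in the abstract; treating <=1 s as a per-item
# completion time is an assumption made for this sketch.
MIN_SECONDS_PER_ITEM = 1.0   # overly fast completion (<=1 s)
MIN_WITHIN_SD = 5.0          # overly narrow within-assessment SD (<=5)
MODE_SHARE_CUTOFF = 0.60     # >=60% of items at the modal value

def flag_careless(ratings, seconds_per_item):
    """Return the careless-response flags raised by one assessment.

    ratings: list of numeric affect ratings (assumed 0-100 scale)
    seconds_per_item: list of completion times, one per item
    """
    flags = []
    if min(seconds_per_item) <= MIN_SECONDS_PER_ITEM:
        flags.append("too_fast")
    if stdev(ratings) <= MIN_WITHIN_SD:
        flags.append("low_variance")
    if ratings.count(mode(ratings)) / len(ratings) >= MODE_SHARE_CUTOFF:
        flags.append("modal_pileup")
    return flags

# Example: a flat, quickly completed assessment trips all three rules.
print(flag_careless([50, 50, 50, 52, 50], [0.6, 0.8, 0.7, 0.9, 0.5]))
# -> ['too_fast', 'low_variance', 'modal_pileup']
```

Because the three indicators are independent per the abstract, a researcher could reasonably act on any single flag or require some combination of them when screening responses.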
Translational Abstract: Using mobile technology to sample people's experiences as they go about their daily lives has quickly become a central method in psychology research. These methods have allowed psychologists to better understand emotions and cognitions as they are experienced in daily life. They also allow psychologists to better understand how the daily emotional experiences of individuals with psychiatric disorders differ from those of individuals without any psychopathology. While this research area has blossomed in recent years, there is still no standardized approach for determining whether an assessment of real-world emotion has been completed thoughtfully or carelessly. Ensuring the quality of real-world data is critical, as failing to do so may distort the conclusions researchers draw. Here, we examine over 18,000 assessments of emotional experience collected from cell phone surveys and develop and test metrics that researchers who rely on cell phone surveys can use to identify a response as potentially invalid, or careless. The three metrics are: (a) how quickly an assessment has been completed (where completing it too quickly is likely invalid); (b) whether the responses span a restricted or a broader range (where too restricted a range is likely invalid); and (c) whether the responses within the assessment are mostly identical (where mostly identical responses are likely invalid). We present an R package (https://github.com/manateelab/EMAeval-R-Package) for the research community to apply these criteria to their own work and enhance the validity of their experience sampling data.
Response times (RTs) to ecological momentary assessment (EMA) items often decrease after repeated EMA administration, but whether this decrease is accompanied by lower response quality requires investigation. We examined the relationship between EMA item RTs and EMA response quality. In one data set, declining response quality was operationalized as decreasing correspondence over time between subjective and objective measures of blood glucose taken at the same moment. In a second EMA data set, declining response quality was operationalized as decreasing correspondence between subjective ratings of memory test performance and objective memory test scores. We assumed that measurement error in the objective measures did not increase across time, so decreasing correspondence across days within a person could be attributed to lower response quality. RTs to EMA items decreased across study days, while no decrements in mean response quality were observed. Decreasing EMA item RTs across study days therefore did not appear problematic overall.
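One way to make this correspondence operationalization concrete is sketched below in Python, assuming a long-format table with one row per EMA prompt; the column names and toy values are invented for illustration, and the mean absolute gap is just one plausible correspondence measure, not necessarily the one used in these studies:

```python
import pandas as pd

# Illustrative long-format data: one row per EMA prompt. Column names
# and values are assumptions, not the studies' actual variables.
df = pd.DataFrame({
    "person":     ["p1"] * 6,
    "day":        [1, 1, 2, 2, 3, 3],
    "rt_seconds": [4.0, 3.8, 3.1, 2.9, 2.2, 2.0],
    "subjective": [110, 140, 120, 150, 100, 160],  # rated glucose
    "objective":  [112, 138, 130, 141, 128, 135],  # measured glucose
})

# Per person and day: mean RT, and the mean absolute gap between
# subjective ratings and objective readings (smaller gap = closer
# correspondence = higher response quality).
df["abs_gap"] = (df["subjective"] - df["objective"]).abs()
daily = df.groupby(["person", "day"]).agg(
    mean_rt=("rt_seconds", "mean"),
    mean_gap=("abs_gap", "mean"),
).reset_index()

# The question posed in the abstract: do days with faster RTs show
# larger gaps? (The toy numbers here only illustrate the computation,
# not the studies' finding, which was no quality decrement.)
print(daily)
print(daily[["mean_rt", "mean_gap"]].corr())
```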
•Conversational surveys are a new approach to designing and administering questionnaires.
•The conversational approach adds a storytelling and interactive flavour to surveys.
•Survey compilers strongly prefer a conversational survey to a traditional approach.
•Conversational surveys are a reliable alternative to traditional surveys.
•Conversational surveys lead to improved response data quality.
Conversational interfaces are currently on the rise: more and more applications rely on a chat-like interaction pattern to increase their acceptability and improve the user experience. In the area of questionnaire design and administration, too, interaction design is increasingly regarded as an important ingredient of a digital solution. For these reasons, we designed and developed a conversational survey tool that administers questionnaires in a colloquial form through a chat-like Web interface.
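As a rough illustration of the chat-like administration pattern just described, here is a minimal console sketch in Python; the wording, acknowledgement turns, and question set are invented for illustration and are not the authors' tool:

```python
# Each item is delivered as a conversational turn rather than a form
# field, with a short acknowledgement between questions to mimic the
# storytelling flavour of a chat interface.
QUESTIONS = [
    ("age", "Nice to meet you! How old are you?"),
    ("satisfaction", "On a scale from 1 to 5, how happy are you with our service?"),
]

def run_conversational_survey():
    answers = {}
    print("Bot: Hi! I'd like to ask you a couple of quick questions.")
    for key, prompt in QUESTIONS:
        print(f"Bot: {prompt}")
        answers[key] = input("You: ").strip()
        print("Bot: Thanks, got it!")  # acknowledgement turn
    return answers

if __name__ == "__main__":
    print(run_conversational_survey())
```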
In this paper, we present the evaluation results of our approach, taking into account both the user point of view (assessing user acceptance and preferences in terms of the survey compilation experience) and the survey design perspective (investigating the effectiveness of a conversational survey in comparison to a traditional questionnaire). We show that users clearly appreciate the conversational form and prefer it over a traditional approach, and that, from a data collection point of view, the conversational method shows the same reliability as, and higher response quality than, a traditional questionnaire.
Probes are follow-ups to survey questions used to gain insights into respondents’ understanding of and responses to those questions. They are usually administered as open-ended questions, primarily in the context of questionnaire pretesting. Due to the decreased cost of collecting open-ended responses in web surveys, researchers have argued for embedding more open-ended probes in large-scale web surveys. However, there are concerns that this may cause reactivity and affect survey data. This study presents a randomized experiment in which identical survey questions were run with and without open-ended probes. Embedding open-ended probes resulted in higher levels of survey breakoff, as well as increased backtracking and answer changes to previous questions. In most cases, open-ended probes had no impact on the cognitive processing of and response to survey questions. Implications for embedding open-ended probes in web surveys are discussed.
Recent years have seen a growing number of studies investigating the accuracy of nonprobability online panels; however, response quality in nonprobability online panels has not yet received much attention. To fill this gap, we investigate response quality in a comprehensive study of seven nonprobability online panels and three probability-based online panels with identical fieldwork periods and questionnaires in Germany. Three response quality indicators typically associated with survey satisficing are assessed: straight-lining in grid questions, item nonresponse, and midpoint selection in visual design experiments. Our results show that there is significantly more straight-lining in the nonprobability online panels than in the probability-based online panels. However, contrary to our expectations, there is no generalizable difference between nonprobability online panels and probability-based online panels with respect to item nonresponse. Finally, neither respondents in nonprobability online panels nor respondents in probability-based online panels are significantly affected by the visual design of the midpoint of the answer scale.
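To make the three satisficing indicators concrete, here is a small Python sketch for a single respondent's grid question; the data layout, function name, and skip encoding are assumptions for illustration, not the study's code:

```python
def satisficing_indicators(grid_answers, scale_midpoint):
    """Compute three satisficing indicators for one respondent's grid
    question. grid_answers holds one value per grid item, with None
    marking a skipped item."""
    answered = [a for a in grid_answers if a is not None]
    return {
        # Straight-lining: every answered grid item got the same value.
        "straight_lining": len(set(answered)) == 1 and len(answered) > 1,
        # Item nonresponse: share of grid items left unanswered.
        "item_nonresponse": grid_answers.count(None) / len(grid_answers),
        # Midpoint selection: share of answers at the scale midpoint.
        "midpoint_share": (answered.count(scale_midpoint) / len(answered)
                           if answered else 0.0),
    }

# A 7-item grid on a 1-5 scale: straight-lined except for one skip.
print(satisficing_indicators([3, 3, 3, None, 3, 3, 3], scale_midpoint=3))
# -> {'straight_lining': True, 'item_nonresponse': 0.142..., 'midpoint_share': 1.0}
```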
•Many respondents are unwilling to use a smartphone to answer a web survey.
•A selection bias could lead to overestimating the network size on smartphones.
•Smartphone usage did not have a negative effect on the network size.
•Smartphones can effectively be used for network research in tech-savvy populations.
•An open question can be used to identify respondents who show satisficing behavior.
The increasing use of smartphones around the world provides new opportunities for network data collection using smartphone surveys. We investigated experimentally whether the use of smartphones and of a recall aid affects the number of reported names in a network name generator question. In a German online access panel (N = 3891), respondents were randomly assigned to answer the survey on their PC or on their smartphone and were randomly assigned to receive an open-ended recall aid question before the name generator question or after. Results showed that respondents on PCs and smartphones reported the same number of network contacts. This suggests that smartphone surveys have no negative effect on the network sizes in ego-centered network studies. However, requiring people to answer on smartphones resulted in a selection bias due to non-compliance, which may have led to an overrepresentation of persons with larger network sizes. The recall aid question did not lead to more reported names, but it proved to be an indicator of respondents’ motivation and response quality. In sum, the study suggests that smartphones can effectively be used for network research in tech-savvy populations or when respondents can choose to complete the survey on another device.
Previous research reveals that the visual design of open-ended questions should match the response task so that respondents can infer the expected response format. Based on a web survey including specific probes in a list-style open-ended question format, we experimentally tested the effects of varying numbers of answer boxes on several indicators of response quality. Our results showed that using multiple small answer boxes instead of one large box had a positive impact on the number and variety of themes mentioned, as well as on the conciseness of responses to specific probes. We found no effect on the relevance of themes and the risk of item non-response. Based on our findings, we recommend using multiple small answer boxes instead of one large box to convey the expected response format and improve response quality in specific probes. This study makes a valuable contribution to the field of web probing, extends the concept of response quality in list-style open-ended questions, and provides a deeper understanding of how visual design features affect cognitive response processes in web surveys.
Survey researchers often assume that “professional” respondents, those who complete a large number of surveys in opt-in online panels, are more likely than others to provide low-quality responses because their primary motivation is to earn rewards with minimal effort. However, there is little empirical evidence for this assumption. It could also be that professional respondents are willing to expend effort in order to be compensated. We investigated this issue using data from four independent surveys of opt-in panelists, with about 2,400 respondents in each survey. We classified panelists into three groups (“professional,” “average,” and “novice”) according to the number of surveys they had previously completed and the number of panels they belonged to. We then compared the groups with respect to their demographic characteristics, reasons for joining a panel, and response behaviors. Professional respondents were the oldest group and, as expected, the most likely to report “for money” as the main reason for joining a panel. However, novices were actually the most likely to provide low-quality responses. Professional respondents appear to take the task of completing surveys more seriously than previously thought.
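A toy sketch of this kind of experience-based classification follows; the abstract names the two classification variables but not the thresholds, so the cutoffs below are entirely hypothetical:

```python
def classify_panelist(surveys_completed, panel_memberships):
    """Bucket an opt-in panelist by prior survey experience.

    The study classifies on these same two variables; the cutoffs here
    are invented for illustration, as the abstract reports none.
    """
    if surveys_completed >= 30 or panel_memberships >= 3:
        return "professional"
    if surveys_completed >= 5:
        return "average"
    return "novice"

print(classify_panelist(surveys_completed=50, panel_memberships=4))  # professional
print(classify_panelist(surveys_completed=2, panel_memberships=1))   # novice
```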