Researchers increasingly combine self-reports from mobile surveys with passive data collection using sensors and apps on smartphones. While smartphones are commonly used in some groups of individuals, smartphone penetration is significantly lower in other groups. In addition, different operating systems (OSs) limit how mobile data can be collected passively. These limitations raise concerns about coverage error in studies targeting the general population. Based on data from the Panel Study Labour Market and Social Security (PASS), an annual probability-based mixed-mode survey on the labor market and poverty in Germany, we find that smartphone ownership and ownership of smartphones with specific OSs are correlated with a number of sociodemographic and substantive variables. The use of weighting techniques based on sociodemographic information available for both owners and nonowners reduces these differences but does not eliminate them.
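The kind of weighting adjustment described above can be illustrated with a minimal post-stratification sketch. The age groups and shares below are invented for illustration and are not PASS data; the study's actual weighting procedures are more elaborate.

```python
# Minimal post-stratification sketch: reweight smartphone owners so their
# distribution over one demographic variable matches the full sample
# (owners + nonowners). All numbers are illustrative, not PASS data.

# Full-sample shares by age group
population_share = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}

# Shares among smartphone owners only (owners skew younger)
owner_share = {"18-34": 0.45, "35-54": 0.40, "55+": 0.15}

# Post-stratification weight per group: population share / owner-sample share
weights = {g: population_share[g] / owner_share[g] for g in population_share}

for group, w in sorted(weights.items()):
    print(f"{group}: weight = {w:.2f}")
```

Underrepresented groups (here, older owners) receive weights above 1. As the abstract notes, such adjustment can only correct for variables observed for both owners and nonowners, which is why differences are reduced but not eliminated.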
Recent advances in web survey methodology were motivated by the observation that respondents increasingly use mobile devices, such as smartphones and tablets, to participate in web surveys. Even though we do not doubt this general observation, we argue that the claim lacks a solid empirical basis. Most research on increasing mobile device use in web surveys covers limited periods of time and/or analyzes data from only one study or panel. There is a surprising lack of comprehensive overviews of the magnitude of mobile device use in web surveys. In the present study, we explored this research gap by analyzing data from 128 web surveys collected in four different academic studies in Germany between 2012 and 2020. Overall, we found strong empirical evidence for an increase in smartphone use, a stagnation in tablet use, and a decrease in desktop PC use. There was no evidence that the increase in smartphone use will slow down any time soon. Thus, we recommend that survey researchers prepare for a device change that may enable new applications in web surveys.
Forms and surveys often require address information, including state. State data entry fields in online forms typically use a dropdown where the user selects one state from the list. A review of online forms shows a variety of state lists in use, with some including the state name fully spelled out, others using the state abbreviation, and still others using a combination of the two, like MD-Maryland. Through a series of three independent experiments, we investigate the usability of state list designs as measured by time-on-task, accuracy of answers, and user preference. Results indicate that participants have difficulty with state abbreviations alone. That design results in longer time-on-task and lower accuracy and preference, particularly for states where the user does not live. We did not find any significant difference in usability for full state names compared to the abbreviation and state name combination in a dropdown design.
Nonprobability online panels are commonly used in the social sciences as a fast and inexpensive way of collecting data in contrast to more expensive probability-based panels. Given their ubiquitous use in social science research, a great deal of research is being undertaken to assess the properties of nonprobability panels relative to probability ones. Much of this research focuses on selection bias; however, there is considerably less research assessing the comparability (or equivalence) of measurements collected from respondents in nonprobability and probability panels. This article contributes to addressing this research gap by testing whether measurement equivalence holds between multiple probability and nonprobability online panels in Australia and Germany. Using equivalence testing in the Confirmatory Factor Analysis framework, we assessed measurement equivalence in six multi-item scales (three in each country). We found significant measurement differences between probability and nonprobability panels and within them, even after weighting by demographic variables. These results suggest that combining or comparing multi-item scale data from different sources should be done with caution. We conclude with a discussion of the possible causes of these findings, their implications for survey research, and some guidance for data users.
While grids or matrix questions are a widely used format in PC web surveys, there is no agreement on the format in mobile web surveys. We conducted a two-wave experiment in an opt-in panel in Russia, varying the question format (grid format and item-by-item format) and the device respondents used for survey completion (smartphone and PC). In total, 1,678 respondents completed the survey in the assigned conditions in the first wave, and 1,079 completed the second wave. Overall, we found somewhat higher measurement error in the grid format in both mobile and PC web conditions. We found almost no significant effect of the question format on test–retest correlations between the latent scores in the two waves and no differences in breakoff rates between the question formats. The multigroup comparison showed some measurement equivalence between the question formats. However, the difference varied depending on the length of a scale, with a longer scale producing some differences in the measurement equivalence between the conditions. The levels of straightlining were higher in the grid than in the item-by-item format. In addition, concurrent validity was lower in the grid format in both PC and mobile web conditions. Finally, subjective indicators of respondent burden showed that the grid format increased reported technical difficulties and decreased subjective evaluation of the survey.
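Straightlining, one of the data-quality indicators mentioned above, is commonly operationalized in its simplest form as giving the identical response to every item of a scale. A minimal sketch with invented responses (the study may use more refined indices):

```python
# Minimal sketch of a simple straightlining indicator for grid responses:
# a respondent "straightlines" if every item in the scale gets the same
# answer. Responses below are invented for illustration.

def is_straightlining(responses):
    """True if all item responses are identical (simple straightlining rule)."""
    return len(set(responses)) == 1

respondents = [
    [3, 3, 3, 3, 3],   # straightliner
    [2, 4, 3, 5, 1],   # differentiated answers
    [4, 4, 4, 3, 4],   # near-straightlining, not flagged by this strict rule
]

rate = sum(is_straightlining(r) for r in respondents) / len(respondents)
print(f"straightlining rate = {rate:.2f}")
```

Comparing this rate between the grid and item-by-item conditions is the kind of contrast the abstract reports; stricter or probabilistic variants (e.g., counting near-identical patterns) would flag the third respondent as well.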
When asking survey participants about past events, respondents might not properly recall the requested information. Surveying participants right when an event of interest occurs should reduce these ...recall errors.
Such “in-the-moment surveys” are used nowadays, but only on very specific occasions. Online panels that ask their members to share their online behaviors (metered panels) offer a new opportunity to use in-the-moment surveys whenever an online event of interest is detected.
Previous research shows that the willingness to participate in in-the-moment surveys is notably high in metered panels, but even panellists willing to participate may fail to do so if they do not see the invitation in time. Very little is known about how participants perceive the different invitation methods available.
A survey of members of a metered panel in Spain reveals that invitation methods deployed on smartphones achieve higher levels of acceptance and coverage, and are perceived as the fastest. Moreover, offering several invitation methods on different devices would maximize the opportunities to participate in time, making in-the-moment surveys more feasible.
Although there is literature on the willingness to share visual data in the context of web surveys and on actual participation when asked to do so, no research has investigated the skills of participants to create and share visual data and the availability of such data, along with the willingness to share them. Furthermore, information on the burden associated with answering conventional questions and performing visual data-related tasks is also scarce. Our paper aims to fill those gaps, considering images and videos, smartphones and PCs, and visual data created before and during the survey. Results from a survey conducted among internet users in Spain (N = 857) show that most respondents know how to perform the studied tasks on their smartphone, while a lower proportion knows how to do them on their PC. Also, respondents mainly store images of landscapes and activities on their smartphone, and their availability to create visual data during the survey is high when answering from home. Furthermore, more than half of the participants are willing to share visual data. When analyzing the three dimensions together, the highest expected participation is observed for visual data created during the survey with the smartphone, which also results in a lower perception of burden. Moreover, older and less educated respondents are less likely to capture and share visual data. Overall, asking for visual data seems feasible, especially when collected during the survey with the smartphone. However, researchers should reflect on whether the expected benefits outweigh the expected drawbacks on a case-by-case basis.
In recent years, the number of surveys, especially online surveys, has increased dramatically. Due to the absence of interviewers in this survey mode (who can motivate respondents to continue answering), some researchers and practitioners argue that online surveys should not be longer than 20 minutes. However, so far, there has been little research investigating how long respondents think that online surveys should or could be. In this study, we therefore asked respondents of two online panels in Germany (one probability-based panel and one nonprobability panel) about their opinions on the ideal and maximum lengths of surveys. We also investigated whether socio-demographic, personality-related, and survey-related variables were associated with the ideal and maximum lengths reported by respondents. Finally, we compared the stated and observed survey lengths to evaluate the extent to which respondents are able to accurately estimate survey length. Our results suggest that the ideal length of an online survey is between 10 and 15 minutes and the maximum length is between 20 and 28 minutes, depending on the measure of central tendency (mean or median) used and the panel. Moreover, we found significant effects of socio-demographics (gender, age, education, and number of persons in the household), of personality traits, and of survey-related questions (whether the respondents liked the survey, found it easy, and answered from a PC) on at least one of the dependent variables (ideal or maximum length). Finally, we found only small differences (less than two minutes) between stated and observed lengths.
Objectives
To systematically review the literature and compare response rates (RRs) of web surveys to alternative data collection methods in the context of epidemiologic and public health studies.
Methods
We reviewed the literature using PubMed, LILACS, SciELO, WebSM, and Google Scholar databases. We selected epidemiologic and public health studies that considered the general population and used two parallel data collection methods, one of which was web-based. RR differences were analyzed using a two-sample test of proportions and pooled using random effects. We investigated agreement using Bland–Altman analysis, and correlation using Pearson’s coefficient.
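The random-effects pooling of RR differences described in the methods can be sketched with a hand-coded DerSimonian–Laird estimator. The effect sizes and variances below are invented for illustration and do not correspond to the 19 reviewed studies.

```python
import math

# Illustrative random-effects pooling (DerSimonian-Laird) of response-rate
# differences (web minus alternative mode). All numbers are made up.
effects = [-0.10, -0.20, -0.05, -0.15]      # per-study RR differences
variances = [0.002, 0.003, 0.001, 0.004]    # per-study within-study variances

# Fixed-effect (inverse-variance) estimate, needed for Cochran's Q
w = [1 / v for v in variances]
fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)

# Cochran's Q and the DerSimonian-Laird between-study variance tau^2
q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
df = len(effects) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights, pooled estimate, and 95% confidence interval
w_re = [1 / (v + tau2) for v in variances]
pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
ci = (pooled - 1.96 * se, pooled + 1.96 * se)
print(f"pooled RR difference = {pooled:.3f}, "
      f"95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

Adding the between-study variance tau² to each study's variance widens the interval relative to a fixed-effect analysis, which is why, as the conclusions caution, a pooled estimate from highly heterogeneous studies should be read carefully.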
Results
We selected 19 studies (nine randomized trials). The RR of the web-based data collection was 12.9 percentage points (p.p.) lower (95% CI = −19.0, −6.8) than the alternative methods, and 15.7 p.p. lower (95% CI = −24.2, −7.3) considering only randomized trials. Monetary incentives did not reduce the RR differences. A strong positive correlation (r = 0.83) between the RRs was observed.
Conclusions
Web-based data collection presents lower RRs compared to alternative methods. However, we do not recommend interpreting this as meta-analytic evidence due to the high heterogeneity of the studies.
A major challenge in web-based cross-cultural data collection is varying response rates, which can result in low data quality and non-response bias. Country-specific factors, such as political and demographic, economic, technological, and socio-cultural conditions, may affect response rates to web surveys. This study evaluates web survey response rates using meta-analytical methods based on 110 experimental studies from seven countries. Three dependent variables, so-called effect sizes, are used: the web response rate, the response rate to the comparison survey mode, and the difference between the two response rates. The meta-analysis indicates that four country-specific factors (political and demographic, economic, technological, and socio-cultural) impact the magnitude of web survey response rates. Specifically, web surveys achieve high response rates in countries with high population growth, high internet coverage, and a high survey participation propensity. On the other hand, web surveys are at a disadvantage in countries with a high population age and high cell phone coverage. This study concludes that web surveys can be a reliable alternative to other survey modes due to their consistent response rates and are expected to be used more frequently in national and international settings.