Peer reviewed, open access
  • How to assess a survey report
    Burns, Karen E A; Kho, Michelle E

    CMAJ: Canadian Medical Association Journal, 2015-04-07, Volume 187, Issue 6
    Journal Article

    Although designing and conducting surveys may appear straightforward, there are important factors to consider when reading and reviewing survey research. Several guides exist on how to design and report surveys, but few exist to assist readers and peer reviewers in appraising survey methods.1-9 We have developed a guide to help readers and reviewers discern whether the information gathered from a survey is reliable, unbiased and drawn from a representative sample of the population. In our guide, we pose seven broad questions, with specific subquestions, to assist in assessing the quality of articles reporting on self-administered surveys (Box 1). We explain the rationale for each question and cite literature addressing its relevance in appraising the methodologic and reporting quality of survey research. Throughout the guide, we use the term "questionnaire" to refer to the instrument administered to respondents and "survey" to refer to the process of administering the questionnaire. We use "readers" to encompass both readers and peer reviewers.

    Several types of questionnaire testing can be performed, including pilot, clinical sensibility, reliability and validity testing. Readers should assess whether the investigators conducted formal testing to identify problems that may affect how respondents interpret and respond to individual questions and to the questionnaire as a whole. At a minimum, each questionnaire should have undergone pilot testing. Readers should evaluate the process used to pilot test the questionnaire (e.g., investigators sought feedback in a semi-structured format), the number and type of people involved (e.g., individuals similar to those in the sampling frame) and the features assessed (e.g., the flow, salience and acceptability of the questionnaire). Both pretesting and pilot testing minimize the chance that respondents will misinterpret questions. Whereas pretesting focuses on the wording of the questionnaire, pilot testing assesses the flow and relevance of the entire questionnaire, as well as individual questions, to identify unusual, irrelevant, poorly worded or redundant questions and responses.18 Through testing, the authors identify problems with questions and response formats so that modifications can be made to enhance questionnaire reliability, validity and responsiveness.

    Types of validity assessment include face, content, construct and criterion validity. Readers should assess whether any validity testing was conducted. Although the number of validity assessments depends on current or future use of the questionnaire, investigators should, at a minimum, have assessed the face validity of their questionnaire during clinical sensibility testing.2 In face validity, experts in the field or a sample of respondents similar to the target population determine whether the questionnaire measures what it aims to measure.20 In content validity, experts assess whether the content of the questionnaire includes all aspects considered essential to the construct or topic. Investigators evaluate construct validity when specific criteria defining the concept of interest are unknown; they verify whether key constructs were included, using content validity assessments made by experts in the field or statistical methods (e.g., factor analysis).2 In criterion validity, investigators compare responses to items against a gold standard.2
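    The reliability and validity checks described above often rest on a few standard statistics. As a rough illustration (not drawn from the article), the Python sketch below computes Cronbach's alpha for internal-consistency reliability, fits a one-factor exploratory factor analysis of the kind investigators might use when probing construct validity, and correlates a total score with a reference measure as in criterion validity testing; the simulated item data, variable names and sample sizes are all hypothetical.

        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        rng = np.random.default_rng(0)

        # Simulated responses: 200 respondents, 6 Likert items (1-5) that share
        # one underlying trait. Real analyses would use actual item responses.
        trait = rng.normal(size=(200, 1))
        items = np.clip(np.rint(3 + trait + rng.normal(scale=0.8, size=(200, 6))), 1, 5)

        def cronbach_alpha(x):
            # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
            k = x.shape[1]
            return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

        print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")

        # Construct validity probe: do all items load on a single factor?
        fa = FactorAnalysis(n_components=1, random_state=0).fit(items)
        print("One-factor loadings:", np.round(fa.components_.ravel(), 2))

        # Criterion validity probe: correlate the total score with a (simulated)
        # gold-standard measurement of the same trait.
        gold = trait.ravel() + rng.normal(scale=0.5, size=200)
        print(f"Correlation with gold standard: {np.corrcoef(items.sum(axis=1), gold)[0, 1]:.2f}")

    Readers need not rerun such analyses; the point is to recognize what reported values (e.g., an alpha near or above 0.7, items loading on a single factor, a strong correlation with a reference measure) do and do not show about a questionnaire's reliability and validity.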