Voting Advice Applications (VAAs) are Web tools that are used to inform increasing numbers of voters during elections. This increasing usage indicates that VAAs fulfill voters' needs, but what these needs are is unknown. Previous research has shown that such tools are primarily used by young males and highly educated citizens. This suggests that VAAs are generally used by citizens who are already well-informed about politics and may not need the assistance of a VAA to make voting decisions. To analyze the functions that VAAs have for their users, this study utilizes unique user data from a popular Dutch VAA to identify different user types according to their cognitive characteristics and motivations. A latent class analysis (LCA) resulted in three distinct user types that vary in efficacy, vote certainty, and interest: doubters, checkers, and seekers. Each group uses the VAA for different reasons at different points in time. Seekers' use of VAAs increases as Election Day approaches; less efficacious and less certain voters are more likely to use the tool to become informed.
Survey designers have long assumed that respondents who disagree with a negative question ("This policy is bad.": Yes or No; 2-point scale) will agree with an equivalent positive question ("This policy is good.": Yes or No; 2-point scale). However, experimental evidence has proven otherwise: Respondents are more likely to disagree with negative questions than to agree with positive ones. To explain these response effects for contrastive questions, the cognitive processes underlying question answering were examined. Using eye tracking, the authors show that the first reading of the question and the answers takes the same amount of time for contrastive questions. This suggests that the wording effect does not arise in the cognitive stages of question comprehension and attitude retrieval. Rereading a question and its answering options also takes the same amount of time, but happens more often for negative questions. This effect is likely to indicate a mapping difference: Fitting an opinion to the response options is more difficult for negative questions.
► We examine the linguistic behavior of maximizers and approximators.
► These modifiers should combine with absolute adjectives, but not with relative ones.
► Also, an adjective-modifier combination should show a stable rating across contexts.
► However, the same combination is judged differently in different contexts.
► Therefore, linguistic behavior does not allow for a classification of adjectives.
Respondents are more likely to disagree with negative survey questions ("This text is boring." Yes/No) than to agree with positive ones ("This text is interesting." Yes/No). The size of this effect, however, varies largely between word pairs. A semantic classification of adjectives into closed scale/absolute and open scale/relative types was predicted to explain this variation. To classify survey adjectives, a judgment experiment was conducted. Language users (N = 173) rated sentences in which an adjective was modified by the maximizer completely or the approximator almost: it should be possible to combine closed scale/absolute adjectives with these modifiers, in contrast to open scale/relative adjectives, for which this is not the case.
Results show that language users agree on which adjective and degree modifier combinations are acceptable and which combinations are unacceptable. Moreover, the two methods, almost and fully, show convergent validity. However, the rating of the same combination of a specific adjective and a specific degree modifier varies across contexts. This suggests that neither of the two methods allows for an unambiguous classification of adjectives. Hence, the distinction between closed scale/absolute and open scale/relative adjectives cannot explain variation in survey response effects. For semantics and pragmatics, the results indicate that context plays a crucial role in the linguistic behavior of adjectives and degree modifiers.
Answers to standardised attitude questions not only depend on the attitude respondents hold on the issue, but are also influenced by the wording of the question. A well-known example of this phenomenon is the forbid/allow asymmetry. Although the verbs 'forbid' and 'allow' are supposed to be each other's counterparts, the answers to questions worded with the verb 'forbid' turn out not to be opposite to answers to equivalent 'allow' questions: respondents are more likely to respond 'no' rather than 'yes' to 'forbid' questions. It is commonly assumed that this is caused by the extreme connotations of both verbs. In this article a meta-analysis is reported over all forbid/allow research since 1940. First, it is analysed whether the asymmetry can be generalised over questions and experiments. This turns out to be the case: the answer 'no, not forbid' is obtained more often than 'yes, allow', and the mean effect size turns out to be quite large. The large variance in asymmetry size, however, leaves room for additional explanations. The interaction between the complexity of the issue in the question and the degree of …ness of the question text turns out to be systematically related to the size of the asymmetry. This stresses the importance of attitude strength as an explanatory variable, as well as the importance of looking at the communicative task as a whole.
As the verbs forbid and allow are considered each other's counterparts, one would expect the answers to questions worded with forbid or allow to be each other's opposites. Research shows that this is not the case. Use of these verbs in surveys causes a wording effect known as the forbid/allow asymmetry. Findings do not reveal, however, where the asymmetry originates: during the stage of attitude localization or during the mapping of the attitude onto one of the response options. In this article, a correlational design was used. Two experiments were carried out focusing on the congenericity of forbid and allow, one on attitudes toward environmental issues and a replication on attitudes concerning ethnic groups. Results of both experiments show that forbid and allow questions are congeneric; that is, they measure the same attitude. Answers to forbid/allow questions reflect similar attitudes that are expressed differently on the answering scales due to the use of both verbs. In addition, the explanation of the effect focusing on the answering behavior of indifferent respondents is discussed and explored.
Voting Advice Applications are online tools that provide users with a voting advice based on their answers to a set of political attitude questions. This study investigated to what extent VAA users understand the questions that lead to the voting advice, and what search and response behaviour they exhibit in case of comprehension difficulties. Two studies were conducted to investigate these issues: a cognitive interviewing study among 60 VAA users during the Dutch municipal elections in the city of Utrecht, and a statistical analysis of all answers provided by the 357,858 users who accessed one of the 34 municipal VAAs during these same elections. The results of the two studies show a coherent picture: difficult concepts (e.g., tax names or municipal jargon), geographical locations (e.g., a reference to a specific street), and vague quantifying terms (e.g., 'more') all complicate the question. In case of comprehension difficulties, Study 1 shows that VAA users make little effort to solve their problems, for example by looking up difficult terms on the Internet. Instead, they draw inferences about what the question might mean and proceed to answer nonetheless. These are often neutral or no-opinion answers, which seems to suggest that the meanings of those options are confounded. In Study 2, however, we found that the choice for either a neutral or a no-opinion response is not accidental: semantic meaning problems often result in no-opinion answers, whereas pragmatic problems are related to neutral responses. We discuss the implications of these findings for survey theory and practice.
For decades, survey researchers have known that respondents give different answers to attitude questions worded positively (X is good. Agree-Disagree), negatively (X is bad. Agree-Disagree) or on a bipolar scale (X is bad-good). This makes survey answers hard to interpret, especially since findings on exactly how the answers are affected are conflicting. In the current paper, we present twelve studies in which the effect of question polarity was measured for a set of thirteen contrastive adjectives. In each study the same adjectives were used, so the generalizability of wording effects across studies could be examined for each word pair. Results show that for five of the word pairs an effect of question wording can be generalized. The direction of these effects is largely consistent: respondents generally give the same answers to positive and bipolar questions, but they are more likely to disagree with negative questions than to agree with positive questions or to choose the positive side of the bipolar scale. In other words, respondents express their opinions more positively when the question is worded negatively. Even though answers to the three wording alternatives sometimes differ, results also show that reliable answers can be obtained with all three wording alternatives. So, for survey practice, these results suggest that all three wording alternatives may be used for attitude measurement.