Does the War Make Russian Public Opinion Polling Worthless?

December 21, 2023
  • Danielle N. Lussier
    Associate Professor of Political Science, Grinnell College
Danielle N. Lussier responds to the broad skepticism regarding public opinion surveys in wartime Russia. She argues that scholarly findings from both Russia and other non-democratic contexts suggest that well-designed polls in Russia should not be dismissed out of hand, but merit closer scrutiny.
The atmosphere of Russia’s offensive war and internal repression raises important questions about how to accurately gauge public opinion. In fact, regardless of whether the data were gathered in a democratic or autocratic setting, the reliability and validity of any public opinion surveys need to be considered, along with the social context in which the research was conducted.

There are two areas where academic research can contribute to broader knowledge about public opinion surveys. The first involves research about opinion formation in general, particularly in non-democratic contexts. The second concerns the more technical aspects of conducting survey research in contemporary Russia. Both bodies of research suggest that the differences between open and repressive societies in how opinions are formed and publicly conveyed are not very significant.

In fact, survey research is challenging from a technical perspective independent of regime type. If we take these findings seriously, skepticism about the validity of Russian public opinion polls may be a consequence of overestimating the honesty of survey respondents in democracies and underestimating the truthfulness of respondents in more politically repressive contexts.
Mobilized men. St Petersburg, November 2023. Source: VK
It is often overlooked that opinion formation is a socio-psychological process that is only partly affected by political circumstances. Similarly, public opinion polling is a technical process that encounters obstacles whether in a democratic or authoritarian political regime. I will discuss each of these points in turn.

Individuals follow similar scripts regardless of the regime type

Survey research has expanded dramatically across the globe over the past forty years, and questions of reliability and accuracy have always followed. In an in-depth study of the sources of popular support in authoritarian regimes, political scientists Barbara Geddes and John Zaller developed a model of opinion formation that — when tested — conforms very closely to social psychology findings about opinion formation in democracies.

In a nutshell, in forming their opinions about public policies, individuals in authoritarian and democratic contexts follow similar scripts. Writing in 1989, Geddes and Zaller find that exposure to elite discourse tends to foster popular support for the values embedded in the discourse, except among well-informed groups who are predisposed to holding different values.

These authors find that support for policies advanced by an authoritarian regime is strongest among people who are both heavily exposed to government-controlled communications and lack the tools to resist their messages, such as education or alternative value systems. Support is weakest among individuals who are inattentive to politics or who follow political matters closely and are predisposed against authoritarian messages due to their personal values or earlier experiences with political opposition.

Geddes and Zaller are clear to note that their model “focuses on patterns of support for regime policies rather than on absolute levels of support,” but reinforce the point that the complex pattern of support that emerges is unlikely to be “an artifact of the untruthful answers of fearful respondents.”

Similarly, Timur Kuran’s concept of “preference falsification,” which is “the act of misrepresenting one’s wants under perceived social pressure,” encourages us to take a broader perspective on the social pressure to conform to public opinion, rather than focusing narrowly on self-censorship in politically repressive contexts.

While democracies largely restrict the menu of possible penalties for expressing undesirable preferences to social, rather than physical or material, consequences, an individual’s reputation within broader social, economic, and community circles is itself valuable and creates incentives for individuals to alter how others perceive their preferences.

It is precisely this type of social pressure that facilitates mass quiescence in autocracies when broad public support for state policies is declining. Kuran offers a simple, but clear and effective way to think about the relationship between publicly expressed opinion and private preferences, noting that, “The distribution of public preferences across individuals makes up public opinion, and that of private preferences forms private opinion. The latter distribution is hidden.” This is true of individuals in both politically open and repressive contexts.

Kuran argues that preference falsification produces two categories of effects. First, because expressed preferences have social consequences, such as inducing conformity in a population, individual choices shape social outcomes so that there is “widespread public support for policies that would be rejected in a vote taken by secret ballot.”

Second, because the social climate that emerges due to preference falsification itself can transform which views people are trying to hide, social outcomes also shape individual choices. The power of this social climate is precisely what makes the process of opinion formation similar in democratic and non-democratic contexts.
Public opinion can cease to accurately reflect private preferences in democracies and autocracies alike, because preference falsification itself relates to questions of social desirability and perceived social pressure to conform.
Palace Square, St. Petersburg. December 2023. Source: VK
While Kuran views the overall dynamic of preference falsification as something that occurs in all regime types, he notes that “widespread ignorance” about preference falsification is more common in a closed society.

Has the war in Ukraine affected the way Russians respond to surveys?

Concern about preference falsification in responses to public opinion surveys is not exclusive to non-democratic contexts. Nevertheless, analytical discourse about the reliability of public opinion surveys in Russia regularly invokes discussion about self-censorship and the extent to which respondents might feel unable to express views critical of the regime, its incumbents, or its policies in the context of political repression.

These concerns have been present in Russia for several years, and scholars of survey research have considered a range of different methodological approaches to try to ensure accurate measures of political support. A series of essays published in a special issue of Post-Soviet Affairs earlier this year focuses directly on whether the war in Ukraine has shifted how Russians are responding to surveys. At its core, most of this technical work has focused on investigating two questions.

First, has the demographic profile of respondents changed? A substantial change in the overall refusal rate for participation, or a decline in participation among a particular segment of society, could mean that a survey sample is no longer representative of the population. Methodological reports from the Levada Center, one of the most well-respected polling organizations in Russia, consistently show that there have been no meaningful changes in overall participation refusal rates since the war began. No sharp changes in the overall demographic profile of survey respondents have been recorded, although observers need to continue monitoring trends over time for gradual yet meaningful shifts.

Second, are Russians censoring themselves? Self-censorship can take many forms, from responding “don’t know” or “hard to say” to a question, to selecting a response option entirely different from one’s real opinion. Again, Levada Center analytical reports show no meaningful changes in the rate at which respondents select “don’t know” or “hard to say.”

Long before the war, observers of Russian public opinion questioned whether Russians misrepresent their views to pollsters directly, particularly regarding topics that we might perceive as contentious, such as Putin’s approval rating. Several scholars (including Timothy Frye, Scott Gehlbach, Kyle Marquardt, John Reuter, Philipp Chapkovski, Max Schaub, Noah Buckley and Katerina Tertytchnaya) have pioneered indirect questioning techniques and list experiments to develop more precise ways of measuring responses to sensitive questions.

This work generally finds that direct questions on sensitive topics likely produce some inflation of pro-regime sentiment, but that responses still tend to track the overall trend in sincere views. In short, scholars working directly with public opinion data regularly examine polls for signs of bias and find that well-designed surveys in Russia are conveying useful information, even in wartime conditions.

While these questions require our attention, they highlight a broader tendency to overlook the technical challenges involved in any reliable collection of public opinion data. Whether gathered in an open society or a place where freedom of expression is not guaranteed, the ability of polling to accurately capture public attitudes depends on several factors related to how the sample is drawn and how the questions are designed.

Building a nationally representative sample involves a process of randomization to select respondents, as well as repeated efforts to contact them for an interview. Respondents regularly refuse to participate in public opinion polls for reasons unrelated to political repression. Similarly, any number of questions are sensitive under any political conditions. In contemporary Russia, we need to understand how the context of political repression and war presents additional obstacles on top of these standard challenges.

Experts in survey research are quick to caution that not all surveys are designed in such a way as to yield valid results. Biases due to sampling or poor question design are concerns for survey researchers irrespective of regime type. But seasoned professionals in Russia continue to produce good-quality sampling and question design. We have the tools to analyze their findings, and a solid body of evidence suggests that these polls capture a fairly accurate picture of the public mood. We must continue to observe with caution and check for bias rather than assuming it.