There is something unusual about a pollster deciding not to publish a poll. After the final phase of the West Bengal Assembly elections this year, Pradeep Gupta announced that Axis My India would not release its exit poll numbers for the state. His reason was straightforward. Around 60 to 70 per cent of voters had refused to answer survey questions. Without enough responses, he said, the sample was not reliable enough to support a prediction.
In the world of election coverage, where television studios rush to declare winners within minutes of polling ending, this was a rare admission. It may also have been shaped by a more personal experience. In 2024, Gupta’s exit poll projections for the Lok Sabha elections were inaccurate, and he broke down emotionally on television while discussing the results.
Gupta describes several such experiences in his 2023 book, Who Gets Elected, which offers insight into how the polling industry works. In 2013, when Axis My India was still largely unknown, Gupta published his findings on the company website and Facebook page because major television channels were unwilling to trust a new agency. The results drew attention. The agency accurately predicted the BJP’s tally in Madhya Pradesh and came close in Delhi, forecasting 27 seats for the Aam Aadmi Party, which eventually won 28.
Bihar in 2015 offered another lesson. At the time, most political commentary favoured the BJP after Narendra Modi’s 2014 Lok Sabha victory. Axis My India projected a strong win for the Mahagathbandhan alliance led by Nitish Kumar and Lalu Prasad Yadav. Several broadcasters reportedly refused to air the forecast because they believed the numbers were unrealistic. The alliance eventually won 178 seats, closely matching the projection. The episode showed how exit poll numbers are often dismissed when they go against the dominant political mood, even when they later prove correct.
Many agencies have successfully used surveys and fieldwork to predict election outcomes. These successes have cemented the place of exit polling in Indian election coverage. But they have also exposed a deeper problem: for every poll that gets it right, another gets it badly wrong.
This year’s assembly elections showed that clearly. In West Bengal and Kerala, several exit polls predicted a change of government. In Assam and Tamil Nadu, many projected the return of incumbent governments. But even within the same states, agencies often disagreed sharply. Many forecast sweeping BJP gains in Bengal, while others suggested that Mamata Banerjee would remain in power. The result was not a clear body of evidence pointing in one direction. It was a set of competing predictions, each presented with equal confidence.
This confusion is why voters and politicians treat exit polls with both interest and scepticism. Political parties praise the numbers when they are favourable and dismiss them when they are not. Exit polls create headlines and television debates, but they also create doubt.
The problem is not limited to India. In the 2024 United States presidential election, many polls showed Kamala Harris narrowly ahead in key battleground states. Television coverage presented the race as extremely close. Harris eventually lost all seven battleground states. The gap between prediction and result was significant.
Studies on polling accuracy raise further uncomfortable questions. Research covering more than 1,400 polls across several election cycles in the United States found that polls conducted 10 weeks before voting were correct only about half the time. A coin flip, of course, also produces a given outcome about half the time.
The coin comparison is not entirely fanciful. Steven Levitt, the economist and co-author of Freakonomics, once conducted an experiment in which people facing difficult life decisions were asked to flip a coin. Many who followed the result and made a major change later reported being happier. The point was not that coins are wise. It was that uncertainty is often greater than people are willing to admit.
That observation now applies to exit polls too. If polls are only slightly better than chance in some elections, it becomes harder to ignore the question of what exactly they are measuring.
Part of the answer lies in their role rather than their accuracy. Exit polls matter not only because they attempt to predict results, but because they sustain public attention during the gap between voting and counting. Elections have become large media events, and exit polls provide another stage in the spectacle. They generate suspense, debate and political momentum. In a ratings-driven news cycle, that has clear value.
But politics has also become harder to measure. Parties now use artificial intelligence, targeted social media campaigns and detailed voter data to reach specific groups in ways that traditional polling methods struggle to track.
In deeply contested elections, voters are also becoming more cautious about sharing their preferences. Bengal this year may have been an example of that. In a polarised atmosphere, silence itself becomes a rational choice.
That may be the more lasting lesson of this election season. Opinion and exit polls still matter. They shape debate, sustain interest and give structure to the long wait before results are declared. But they are not precise instruments. They are estimates produced in a complicated democracy where voters do not always behave predictably, and sometimes choose not to speak at all.
Exit polls are unlikely to disappear. But if a coin is right half the time, and a poll only slightly more than that, the question of what exactly is being measured remains worth asking.
People, naturally, will continue staring at colourful prediction charts, much as they look for drama in IPL cricket matches.