Can artificial intelligence predict your voting choice in the upcoming election?

Artificial intelligence can respond to complex survey questions like a real human

Artificial intelligence (AI) can respond to complex survey questions just like humans, says a recent study from Brigham Young University. 

The study tested how accurately a GPT-3 language model could mimic the complex relationships among the ideas, attitudes, and sociocultural contexts of human subpopulations.

In one experiment, researchers created artificial personas with characteristics such as race, age, ideology, and religiosity, then tested whether those personas would vote the same way humans did in US presidential elections. Using the American National Election Studies (ANES) as their comparative human database, they found a high correspondence between how the AI and the humans voted. The findings suggest that AI could potentially substitute for human respondents in survey-style research.
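
As an illustration of how this kind of persona conditioning might look in code, here is a minimal sketch in Python; the prompt wording, model name, and helper functions are illustrative assumptions rather than the study's exact protocol (the original work used GPT-3).

```python
# Minimal sketch of persona-conditioned prompting (hypothetical prompt wording
# and model choice; not the study's exact setup).
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set


def build_persona_prompt(persona: dict) -> str:
    """Turn demographic attributes into a first-person backstory prompt."""
    return (
        f"I am a {persona['age']}-year-old {persona['race']} voter. "
        f"Ideologically, I consider myself {persona['ideology']}, "
        f"and religion is {persona['religiosity']} in my life. "
        "In the 2016 US presidential election, I voted for"
    )


def simulated_vote(persona: dict) -> str:
    """Ask the model to complete the backstory with a vote choice."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in model; the study used GPT-3
        messages=[{"role": "user", "content": build_persona_prompt(persona)}],
        max_tokens=5,
        temperature=0,
    )
    return response.choices[0].message.content.strip()


print(simulated_vote({
    "age": 45,
    "race": "white",
    "ideology": "conservative",
    "religiosity": "very important",
}))
```

In a setup like this, the correspondence reported in the study would come from repeating the call across many personas drawn to match a real sample and comparing the simulated votes with the recorded human votes.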

"I was absolutely surprised to see how accurately it matched up," said David Wingate, BYU computer science professor, and co-author on the study. "It's especially interesting because the model wasn't trained to do political science -- it was just trained on a hundred billion words of text downloaded from the internet. But the consistent information we got back was so connected to how people really voted."

In another experiment, they conditioned artificial personas to offer responses from a list of options in an interview-style survey, again using the ANES as their human sample. They found high similarity between nuanced patterns in human and AI responses.
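
A closed-ended item of that kind might be simulated by constraining the model to a fixed list of answer options and then comparing the distribution of AI answers with the human distribution. The sketch below shows one way to do that; the option wording, model name, and helper functions are again illustrative assumptions, not the study's instrument.

```python
# Sketch of a closed-ended survey item and a simple AI-vs-human comparison
# (option wording and metric are illustrative, not taken from the study).
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# Hypothetical fixed option list for a survey question.
OPTIONS = ["Strongly agree", "Agree", "Disagree", "Strongly disagree"]


def ask_closed_ended(persona_prompt: str, question: str) -> str:
    """Have the persona pick exactly one answer from the option list."""
    prompt = (
        f"{persona_prompt}\n\n"
        f"Interviewer: {question}\n"
        f"Please answer with exactly one of: {', '.join(OPTIONS)}.\n"
        "Respondent:"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in model; the study used GPT-3
        messages=[{"role": "user", "content": prompt}],
        max_tokens=10,
        temperature=0,
    )
    answer = resp.choices[0].message.content.strip()
    # Snap any free-text output onto the closest matching option.
    return next((o for o in OPTIONS if o.lower() in answer.lower()), answer)


def response_shares(answers: list[str]) -> dict[str, float]:
    """Share of a sample (AI personas or human respondents) choosing each option."""
    counts = Counter(answers)
    return {option: counts[option] / len(answers) for option in OPTIONS}
```

Comparing `response_shares` for a set of simulated personas against the same breakdown in a human sample such as the ANES is one simple way to check whether the nuanced response patterns line up.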

This innovation holds exciting prospects for researchers, marketers, and pollsters. Researchers envision a future where artificial intelligence is used to craft better survey questions, refining them to be more accessible and representative, and even to simulate populations that are difficult to reach. It could also be used to test surveys, slogans, and taglines as a precursor to focus groups.

"We're learning that AI can help us understand people better," said BYU political science professor Ethan Busby. "It's not replacing humans, but it is helping us more effectively study people. It's about augmenting our ability rather than replacing it. It can help us be more efficient in our work with people by allowing us to pre-test our surveys and our messaging."

And while the expansive possibilities of large language models are intriguing, the rise of artificial intelligence poses a host of questions -- how much does AI really know? Which populations will benefit from this technology and which will be negatively impacted? And how can we protect ourselves from scammers and fraudsters who will manipulate AI to create more sophisticated phishing scams?

While much of that is still to be determined, the study lays out a set of criteria that future researchers can use to determine how accurate an AI model is for different subject areas.

"We're going to see positive benefits because it's going to unlock new capabilities," said Wingate, noting that AI can help people in many different jobs be more efficient. "We're also going to see negative things happen because sometimes computer models are inaccurate and sometimes they're biased. It will continue to churn society."

Busby says surveying artificial personas shouldn't replace the need to survey real people, and that academics and other experts need to come together to define the ethical boundaries of artificial intelligence surveying in research related to social science.
