Challenges in implementing 'normative' AI guidelines, say researchers

Most guidelines are "normative" but unclear about implementation, say researchers
Reviewing global guidelines around the use of AI, researchers found that while most of the guidelines valued privacy, transparency, and accountability, very few valued truthfulness, intellectual property, or children's rights.

Further, most of these guidelines described ethical principles and values without proposing practical methods for implementing them and without pushing for legally binding regulation, a team of researchers from the Pontifical Catholic University of Rio Grande do Sul, Brazil, found.

To determine if a global consensus existed regarding the use of artificial intelligence (AI), the researchers identified and analysed 200 documents related to AI ethics and governance published between 2014 and 2022 from 37 countries and six continents and written or translated into five different languages - English, Portuguese, French, German and Spanish.

They found that the most commonly appearing principles were transparency, security, justice, privacy, and accountability, showing up in 82.5, 78, 75.5, 68.5, and 67 per cent of the documents, respectively.

On the other hand, labour rights, truthfulness, intellectual property, and children/adolescent rights appeared the least - 19.5, 8.5, 7, and 6 per cent, respectively - in these documents, they said in their study published in the journal Patterns, emphasising that these principles deserved more attention.

"Previous work predominantly centred around North American and European documents, which prompted us to actively seek and include perspectives from regions such as Asia, Latin America, Africa, and beyond," said lead author Nicholas Kluge Correa.

About 96 per cent of the AI-use guidelines were "normative", that is, describing ethical values that should be considered during AI development and use.

However, only 2 per cent of them recommended practical methods of implementing AI ethics, and only 4.5 per cent proposed legally binding forms of AI regulation, the researchers found.

"It's mostly voluntary commitments that say, 'these are some principles that we hold important', but they lack practical implementations and legal requirements," said social scientist and co-author James William Santos.

"If you're trying to build AI systems or if you're using AI systems in your enterprise, you have to respect things like privacy and user rights, but how you do that is the gray area that does not appear in these guidelines," said Santos.

Geographically, most of the guidelines came from countries in Western Europe (31.5 per cent), North America (34.5 per cent), and Asia (11.5 per cent), the team said, while less than 4.5 per cent of the documents originated in South America, Africa, and Oceania combined.

The research team said that these results suggest that many parts of the Global South are underrepresented in the global discourse on AI ethics.

In some cases, this includes countries heavily involved in AI research and development, such as China, whose output of AI-related research increased by over 120 per cent between 2016 and 2019.

"Our research demonstrates and reinforces our call for the Global South to wake up and a plea for the Global North to be ready to listen and welcome us," said co-author Camila Galvao. 
