International agreement sets guidelines for secure AI design

These nations aim to prevent the hijacking of AI technology by hackers

The United States, Britain, and 18 other nations have forged an international agreement to safeguard artificial intelligence (AI) from malicious actors. The 20-page document outlines the necessity for AI systems to be "secure by design," urging companies to develop and deploy AI solutions that prioritize the safety of customers and the wider public.

While the agreement is non-binding, it provides crucial recommendations, such as monitoring AI systems for potential abuse, safeguarding data from tampering, and thoroughly vetting software suppliers. The director of the US Cybersecurity and Infrastructure Security Agency, Jen Easterly, hailed the agreement as a significant milestone, emphasizing that it signifies a collective understanding that security must be the foremost consideration during the design phase of AI development.

Easterly stated, "This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs. It is an agreement that the most important thing that needs to be done at the design phase is security."

The list of signatory countries includes Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria, and Singapore, among others. By joining forces, these nations aim to prevent the hijacking of AI technology by hackers and ensure that AI models are released only after rigorous security testing.

However, the agreement does not address the contentious issues surrounding the appropriate applications of AI or the collection of data that feeds these systems. While various governments worldwide have launched initiatives to shape AI development, many lack enforceability. Europe, in particular, has taken the lead on AI regulation, with lawmakers actively drafting comprehensive AI rules.

In October, the Biden administration bolstered efforts to mitigate AI risks by issuing an executive order that focuses on protecting consumers, workers, and minority groups, while simultaneously strengthening national security. Despite these efforts, a divided US Congress has struggled to make substantial progress in enacting effective AI regulation.
