- OpenAI CEO Sam Altman emphasises the need for AI regulation, warning of significant harm if technology goes wrong.
- Altman calls for the formation of a licensing agency to regulate AI companies and address concerns about election integrity.
- Concerns about misinformation and societal harms prompt calls for global cooperation and incentives for AI safety compliance.
OpenAI's CEO, Sam Altman, has emphasised the need for regulating artificial intelligence (AI) during his testimony before a US Senate committee.
Altman expressed concern about the potential harm the AI industry could cause, stating, "if this technology goes wrong, it can go quite wrong." He singled out AI's potential interference with election integrity as a significant area of concern and emphasised the necessity of regulation in this domain.
To address the growing presence of AI models in the market, Altman proposed the establishment of a new agency responsible for licensing AI companies. He acknowledged the impressive capabilities of AI models like ChatGPT, which can provide human-like answers, but also highlighted the issue of inaccuracies. Altman's push for regulation aligns with his commitment to addressing the ethical questions raised by AI.
Altman expressed apprehension about AI's impact on jobs, acknowledging the need for transparency in communicating this potential effect. However, some senators argued for new laws that would make it easier for individuals to sue OpenAI, citing concerns about misinformation during elections.
The race to bring increasingly versatile AI to the market has led to investments of significant resources. Critics worry that AI technology may exacerbate societal issues such as prejudice and misinformation, with some even expressing fears about its potential to threaten humanity itself.
Senator Cory Booker acknowledged the explosive global growth of AI and highlighted the necessity of regulating it. Senator Mazie Hirono expressed concern about the danger of misinformation, particularly in the context of the upcoming 2024 election.
Altman suggested the implementation of licensing and testing requirements for AI model development. He emphasised that AI models capable of persuading or manipulating beliefs should be subject to licensing. Altman also supported the idea that companies should have the right to choose whether their data is used for AI training. However, he clarified that publicly available web material would be fair game.
Altman expressed a preference for a subscription-based model over advertising and participated in a White House meeting focused on AI. Like Altman, US lawmakers aim to promote AI's benefits and protect national security while limiting potential misuse.
To ensure safety and compliance, Altman called for global cooperation on AI and incentives for safety compliance. IBM's Chief Privacy and Trust Officer, Christina Montgomery, urged Congress to concentrate regulatory efforts on areas with the potential for significant societal harm.
Altman's testimony highlighted the urgent need for AI regulation to mitigate potential risks and ensure responsible AI development and deployment. The proposal for a licensing agency reflects OpenAI's commitment to fostering a safe and secure AI landscape.