Open Letter: OpenAI and Google DeepMind Employees Warn of AI Dangers Hidden from the Public
On Tuesday, June 4, a group of 11 current and former employees of artificial intelligence companies, including OpenAI and Google DeepMind, published an open letter about the risks of the technology and called for stronger protections for whistleblowers, as reported by The Guardian.
In the letter, the AI experts argue that the financial interests of companies developing AI hinder effective oversight of the technology. The authors highlight the dangers it poses, including the spread of misinformation, the loss of control over autonomous AI systems, and the deepening of social inequality, which they warn could ultimately lead to the "extinction of humanity." They also call for stronger regulation of the sector.
"AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their safety measures, and the risk levels of various types of potential harm. However, they currently have only weak obligations to share some of this information with governments and are not required to share any of it with civil society. We do not believe they can all be relied upon to share this information voluntarily," the open letter states.
As examples of the dangers of AI tools, the researchers pointed to image generators from companies such as OpenAI and Microsoft producing images containing election-related misinformation, despite policies prohibiting such content.
The experts also urged AI companies to support a process through which current and former employees can raise concerns about AI risks. This includes ensuring that companies do not force employees to sign non-disclosure agreements that bar them from disclosing risk-related issues.
OpenAI responded that it already has mechanisms for reporting problems within the company, such as an internal hotline, and that it does not release new technology without appropriate safeguards.
AI technologies are indeed already being widely used for misinformation and the creation of fakes. For instance, The Gaze reported that two Russian influence groups, "Storm-1679" and "Storm-1099," combine AI with more traditional malicious methods, using it to spread disinformation about the upcoming Olympic Games in Paris.