Open Letter: OpenAI and Google DeepMind Employees Warn of AI Dangers Hidden from the Public

Photo: Open Letter: OpenAI and Google DeepMind Employees Warn of AI Dangers Hidden from the Public. Source: Collage The Gaze / by Leonid Lukashenko

On Tuesday, June 4, a group of 11 current and former employees of artificial intelligence companies, including OpenAI and Google DeepMind, published an open letter warning about the risks of the technology and calling for stronger whistleblower protections, The Guardian reported.

In their statement, the AI experts argue that the financial interests of companies developing AI hinder effective oversight of the technology. The authors highlight dangers including the spread of misinformation, loss of control over autonomous AI systems, and the deepening of social inequality, which they warn could ultimately lead to the "extinction of humanity." They also called for stronger regulation of the sector.

"AI companies possess a significant amount of non-public information about the capabilities and limitations of their systems, the adequacy of their safety measures, and the risk levels of various types of potential harm. However, they currently have only weak obligations to share some of this information with governments and are not required at all to share it with civil society. We do not believe it is reliable to expect all of them to voluntarily share this information," the open letter states.

Pointing to the dangers of AI tools, the researchers cited examples of image generators from companies such as OpenAI and Microsoft producing images containing election-related misinformation, despite policies prohibiting such content.

The experts also urged AI companies to support a process that allows current and former employees to raise concerns about AI risks. This includes ensuring that companies do not require employees to sign non-disclosure agreements that bar them from disclosing AI-related issues.

OpenAI responded that it has mechanisms for reporting issues within the company, such as an internal hotline, and that it does not release new technology without appropriate safeguards.

AI technologies have indeed come into wide use for misinformation and the creation of fakes. For instance, The Gaze reported that two Russian influence groups, "Storm-1679" and "Storm-1099," combine AI with more traditional malicious methods, using it to spread misinformation about the upcoming Olympic Games in Paris.

