US Forms AI Safety Board: Tech Titans Included, But Musk and Zuckerberg Left Out

01 May 2024
5 min read

News Synopsis

The Biden administration established a new Artificial Intelligence Safety and Security Board (AISSB) this week. The board comprises prominent figures from the tech industry, including leaders from OpenAI, NVIDIA, Microsoft, Alphabet, Adobe, and AMD. Notably absent from the list are Elon Musk (Tesla, SpaceX) and Mark Zuckerberg (Meta).

Deepfakes: A Growing Threat 

The board's creation comes amid rising concerns about deepfakes, AI-manipulated images, videos, or audio recordings designed to appear genuine. Malicious actors use deepfakes to target individuals ranging from politicians and celebrities to minors. A recent report highlighted a disturbing trend: "nudification" programs and generative AI (GenAI) being used to create deepfakes for blackmail and harassment, particularly targeting women within educational institutions.

Controversial Omissions: Musk and Zuckerberg Excluded

Despite the board's importance in addressing these issues, its composition has sparked controversy over the exclusion of Musk and Zuckerberg. The Department of Homeland Security (DHS) said that social media companies were deliberately left off the board, which accounts for the two executives' absence. Many observers, however, remain skeptical of that explanation.

Scrutiny of Meta:

Meta (formerly Facebook) faces criticism for allegedly failing to curb disinformation and harmful content. That criticism includes a pending EU probe into whether the company adequately addressed Russian disinformation on its platforms, as well as concerns that it has done too little to stop ads promoting "nudification" apps. A Media Matters report exposing ads running alongside antisemitic content further prompted major advertisers to pull back from the platform.

Musk's Unpredictability and Legal Issues:

Experts speculate that Musk's unpredictable nature and ongoing legal battles with the Securities and Exchange Commission (SEC) might be factors in his exclusion.

Zuckerberg's Advocacy for Open-Source AI:

Zuckerberg's backing of open-source AI models, which are harder to monitor and restrict once released, poses its own regulatory and safety challenges and may also have weighed against his inclusion.

Industry Efforts and Ongoing Challenges 

Even without Musk and Zuckerberg, the companies represented on the AISSB have demonstrated a willingness to engage in AI safety discussions. OpenAI, for instance, emphasizes creating safe AI products, and several member companies have implemented their own safety protocols with varying degrees of success.

Examples of industry efforts:

  • OpenAI uses reinforcement learning from human feedback (RLHF) to align its models with human preferences.

  • Companies such as Adobe and Google are adopting watermarking techniques to label AI-generated media and combat deepfakes (a simplified illustration follows this list).

  • Proposals for a CSAM (Child Sexual Abuse Material) database aim to help train AI models to detect potentially harmful content.
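
Watermarking approaches differ across vendors, and the specifics of systems such as Adobe's Content Credentials or Google's SynthID are not covered here. As a rough, hypothetical illustration of the general idea only, the Python sketch below hides a short identifying bit string in an image's least-significant bits and checks for it later; the file names and the WATERMARK payload are placeholders, and it assumes a lossless format such as PNG plus the Pillow and NumPy libraries.

```python
# Toy least-significant-bit (LSB) watermark: embed a short payload in an
# image's pixel bytes and detect it later. Illustrative only; this is not
# how Adobe's or Google's production systems work.
import numpy as np
from PIL import Image

WATERMARK = "AI-GENERATED"  # hypothetical payload used for this sketch

def embed_watermark(in_path: str, out_path: str, payload: str = WATERMARK) -> None:
    """Hide the payload in the least-significant bits of the first pixel bytes."""
    pixels = np.array(Image.open(in_path).convert("RGB"), dtype=np.uint8)
    flat = pixels.flatten()  # flatten() returns a copy, safe to modify
    bits = np.unpackbits(np.frombuffer(payload.encode("utf-8"), dtype=np.uint8))
    if bits.size > flat.size:
        raise ValueError("image too small to hold the payload")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite the LSBs
    Image.fromarray(flat.reshape(pixels.shape)).save(out_path, format="PNG")

def detect_watermark(path: str, payload: str = WATERMARK) -> bool:
    """Read the LSBs back and check whether the expected payload is present."""
    pixels = np.array(Image.open(path).convert("RGB"), dtype=np.uint8)
    n_bits = len(payload.encode("utf-8")) * 8
    recovered = np.packbits(pixels.flatten()[:n_bits] & 1).tobytes()
    return recovered == payload.encode("utf-8")

if __name__ == "__main__":
    # "generated.png" is a placeholder name for an AI-generated image.
    embed_watermark("generated.png", "generated_marked.png")
    print(detect_watermark("generated_marked.png"))  # True if the mark is intact
```

Production systems favor cryptographically signed provenance metadata or learned watermarks designed to survive compression and editing, which a simple LSB scheme like this does not.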

While these steps are positive, effectively addressing deepfakes and other AI-driven threats will require continued collaboration among independent researchers, industry leaders, and government agencies.
