A Hacker Infiltrated OpenAI Last Year. Could China Do It Too?

Early last year, a hacker infiltrated the internal messaging systems of OpenAI, the maker of ChatGPT, and obtained details about the company’s AI technologies. The breach exposed internal discussions among researchers and employees but did not compromise OpenAI’s core code. Sources familiar with the incident said the hacker accessed information from an employee forum where staff discussed OpenAI’s latest technologies, but did not breach the systems where the company builds and houses its artificial intelligence.

OpenAI executives disclosed the breach during an all-hands meeting at the company’s San Francisco offices in April 2023 and informed the board of directors. They opted not to publicize the incident because no customer or partner data had been compromised, and they did not consider it a national security threat. The hacker was believed to be a private individual with no ties to a foreign government, and OpenAI did not involve law enforcement agencies such as the FBI.

The incident sparked concerns among some OpenAI employees that foreign adversaries, notably China, could steal AI technology, potentially posing national security risks. It also raised questions about the adequacy of OpenAI’s security measures and exposed internal divisions over how seriously to take AI risks. After the breach, Leopold Aschenbrenner, a technical program manager at OpenAI, sent a memo to the board criticizing the company’s safeguards against foreign espionage as inadequate. He was fired.

Aschenbrenner claimed his dismissal was politically motivated. According to The New York Times, he spoke publicly about the breach for the first time on a podcast last month, arguing that OpenAI’s security was insufficient to protect against theft by foreign actors. OpenAI spokeswoman Liz Bourgeois said Aschenbrenner’s concerns did not lead to his separation and disputed his characterization of the company’s security.

Concerns about foreign interference in American tech companies are not unfounded. Recently, Microsoft’s president, Brad Smith, testified about Chinese hackers targeting federal government networks via Microsoft systems. However, U.S. law prohibits discrimination against employees based on nationality, and companies like OpenAI and Meta aim to attract the best talent globally for AI development.

Nevertheless, researchers and executives worry that future AI advances could eventually help create bioweapons or be used to compromise government systems. In response, OpenAI has established a Safety and Security Committee, which includes Paul Nakasone, the former leader of the NSA and Cyber Command, to address the risks posed by future technologies. But as tech giants in California tighten their security, their Chinese counterparts are intensifying their own efforts. Security concerns are not speculative science fiction; they are a real global issue.

Source: The New York Times

Published On: July 11, 2024 | Categories: News
