OpenAI, the artificial intelligence (AI) company behind ChatGPT, suffered a security breach in early 2023. According to a New York Times report, a hacker gained access to the company’s internal messaging system and stole details of its AI technologies.

The hacker stole details from OpenAI’s discussion forum.
Citing two people familiar with the incident, the report said details about OpenAI’s AI systems were lifted from an online discussion forum where employees discussed the company’s latest technologies. However, the hacker was unable to break into the systems where OpenAI houses and builds its AI, the report said.

OpenAI did not disclose the breach to the public.
According to the report, OpenAI executives disclosed the breach to their employees during their all-hands meeting in April 2023. However, they did not share the news publicly because no information about customers or partners was stolen, sources told the NYT.

“Executives did not consider the incident a national security threat because they believed the hacker was a private individual with no ties to a foreign government. The company did not notify law enforcement,” the report said.

In May of this year, the Microsoft-backed company said it had disrupted five covert influence operations that were trying to use its AI models for “deceptive activity” on the internet. OpenAI said that over the preceding three months, threat actors had used its AI models to generate short comments, long articles in various languages, and social media account names and bios.

OpenAI also said it had stopped an Israeli company from interfering in India’s Lok Sabha elections, calling the operation “Zero Zeno”. The Microsoft-backed AI giant said it blocked the deceptive use of its AI by the Israeli firm STOIC within 24 hours, and that the operation had no significant impact on the election process.
