Microsoft has revealed that it is pursuing legal action against a "foreign-based threat actor group" for operating hacking-as-a-service infrastructure designed to deliberately circumvent the safety controls of its generative artificial intelligence (AI) services and produce offensive and harmful content.
The tech giant's Digital Crimes Unit (DCU) said it has observed the threat actors "develop sophisticated software that exploits exposed customer credentials scraped from public websites" and "attempt to identify and illegally access accounts with certain generative AI services and intentionally alter the capabilities of those services."
The adversaries then used these services, such as the Azure OpenAI Service, and monetized the access by selling it to other malicious actors, supplying them with detailed instructions on how to use these custom tools to generate harmful content. Microsoft said it discovered the activity in July 2024.
The Windows maker said it has since revoked the threat actor group's access, implemented new countermeasures, and strengthened its safeguards to prevent such activity from occurring in the future. It also said it obtained a court order to seize a website ("aitism[.]net") that was central to the group's criminal operation.
The popularity of AI tools such as OpenAI's ChatGPT has also drawn threat actors, who abuse them for malicious ends ranging from producing prohibited content to developing malware. Microsoft and OpenAI have repeatedly disclosed that nation-state groups from China, Iran, North Korea, and Russia are using their services for espionage, translation, and disinformation campaigns.
Court documents show that at least three unknown individuals are behind the operation, leveraging stolen Azure API keys and customer Entra ID authentication information to breach Microsoft systems and create harmful images with DALL-E in violation of its acceptable use policy. Seven other parties are believed to have used the services and tooling provided by them for similar purposes.
How the API keys were harvested is currently unknown, but Microsoft said the defendants engaged in "systematic API key theft" from multiple customers, including several U.S. companies, some of them located in Pennsylvania and New Jersey.
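The filing does not say how the keys were scraped, but credentials exposed in public code are typically found with simple pattern matching. The sketch below is purely illustrative: it assumes a 32-character lowercase hexadecimal key format (an assumption, since Azure key formats vary), and shows how a defender might scan their own text or repositories for candidate leaks.

```python
import re

# Assumed key shape for illustration only: 32 lowercase hex characters.
AZURE_KEY_PATTERN = re.compile(r"\b[0-9a-f]{32}\b")

def find_candidate_keys(text: str) -> list[str]:
    """Return substrings that look like exposed Azure-style API keys."""
    return AZURE_KEY_PATTERN.findall(text)

# Hypothetical leaked snippet, e.g. committed to a public repository by mistake.
leaked = 'AZURE_OPENAI_KEY = "0123456789abcdef0123456789abcdef"'
print(find_candidate_keys(leaked))
```

Real secret scanners (such as those GitHub runs on public pushes) use vendor-specific patterns plus validity checks against the provider, but the core idea is the same.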
"Using stolen Microsoft API keys that belonged to U.S.-based Microsoft customers, the defendants created a hacking-as-a-service scheme – accessible via infrastructure like the 'rentry.org/de3u' and 'aitism.net' domains – specifically designed to abuse Microsoft's Azure infrastructure and software," the company said in its filing.
According to a now-removed GitHub repository, de3u was described as a "DALL-E 3 frontend with reverse proxy support." The GitHub account in question was created on November 8, 2023.
It said the threat actors "took steps to cover their tracks, including attempting to delete certain Rentry.org pages, the GitHub repository for the de3u tool, and portions of the reverse proxy infrastructure" after the seizure of "aitism[.]net."
Microsoft noted that the threat actors used de3u and a bespoke reverse proxy service, called the oai reverse proxy, to make Azure OpenAI Service API calls using the stolen API keys, allowing them to generate thousands of harmful images using text prompts. It is unclear what kind of offensive images were produced.
The oai reverse proxy service, running on a server, is designed to funnel communications from de3u users' computers through a Cloudflare tunnel into the Azure OpenAI Service and route the responses back to the users' devices.
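Conceptually, the key step in any credential-injecting reverse proxy of this kind is header rewriting: the proxy discards whatever authentication the client sent and substitutes its own upstream API key (here, a stolen one) before forwarding the request. The minimal sketch below uses illustrative function and header names, not anything taken from the actual tooling; it shows why possession of a valid key is all such a proxy needs.

```python
def rewrite_headers(client_headers: dict[str, str], upstream_key: str) -> dict[str, str]:
    """Replace the client's credentials with the proxy's own upstream API key."""
    # Drop any authentication the client supplied.
    headers = {
        k: v for k, v in client_headers.items()
        if k.lower() not in ("authorization", "api-key")
    }
    # Azure OpenAI authenticates requests via an 'api-key' header.
    headers["api-key"] = upstream_key
    return headers

# Example: a client request arriving at the proxy with its own token.
incoming = {"Authorization": "Bearer user-token", "Content-Type": "application/json"}
forwarded = rewrite_headers(incoming, "<stolen-key>")
print(forwarded)
```

This is also why rotating a leaked key immediately cuts off every downstream "customer" of such a service at once.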
"The de3u software allows users to issue Microsoft API calls to generate images using the DALL-E model through a simple user interface that leverages the Azure APIs to access the Azure OpenAI Service," Redmond explained.
"Defendants' de3u application communicates with Azure computers using undocumented Microsoft network APIs to send requests designed to mimic legitimate Azure OpenAI Service API requests. These requests are authenticated using stolen API keys and other authenticating information."
It is worth noting that the use of proxy services to illegally access LLM services was highlighted by Sysdig in May 2024 in connection with an LLMjacking attack campaign that targeted AI offerings from Anthropic, AWS Bedrock, Google Cloud Vertex AI, Microsoft Azure, Mistral, and OpenAI using stolen cloud credentials, and sold the access to other actors.
"Defendants have carried on the affairs of the Azure Abuse Enterprise through a coordinated and continuous pattern of unlawful activity in pursuit of their common unlawful objectives," Microsoft said.
"Defendants' pattern of unlawful activity is not limited to attacks on Microsoft. Evidence Microsoft has uncovered to date indicates that the Azure Abuse Enterprise has been targeting and victimizing other AI service providers."