Cybersecurity researchers found malware being distributed on Hugging Face through abuse of Pickle file serialization.
Researchers discovered two malicious ML models on Hugging Face that exploit “broken” pickle files to evade detection, bypassing ...
The technique, called nullifAI, allows the models to bypass Hugging Face’s protective measures against malicious AI models ...
Barely a week after DeepSeek released its R1 “reasoning” AI model — which sent markets into a tizzy — researchers at Hugging Face are trying to replicate the model from scratch in what they’re calling ...
The popular Python Pickle serialization format, which is common for distributing AI models, offers ways for attackers to ...
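The danger the reports describe stems from how pickle deserialization works: an object's `__reduce__` method returns a callable and arguments, and `pickle.loads` invokes that callable while reconstructing the object. A minimal sketch (with a deliberately harmless callable; a real payload could call `os.system` instead):

```python
import pickle

class Payload:
    # pickle calls __reduce__ to learn how to rebuild this object.
    # The attacker controls the (callable, args) pair it returns,
    # and pickle.loads executes it during deserialization.
    def __reduce__(self):
        return (list, ("pwn",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # invokes list("pwn"), not a Payload
print(result)                # the attacker-chosen callable already ran
```

Note that unpickling never returns a `Payload` at all: the attacker-chosen callable runs and its return value is handed back, which is why loading an untrusted model file is equivalent to running untrusted code.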
The top-ranked large language models in Hugging Face’s latest rankings were all trained on Qwen’s open-source ...
Learn how to fine-tune DeepSeek R1 for reasoning tasks using LoRA, Hugging Face, and PyTorch. This guide by DataCamp takes ...
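The core idea behind the LoRA fine-tuning the guide covers can be sketched in a few lines (this is an illustrative NumPy sketch of the math, not the DataCamp guide's actual code; shapes and names are hypothetical): the pretrained weight matrix W stays frozen, and training updates only a low-rank product B @ A that is added to it.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 4  # hypothetical sizes; r << d_in

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                    # trainable, zero-initialized

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A, but the full
    # d_out x d_in update matrix is never materialized or trained.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapter starts as an exact no-op:
assert np.allclose(lora_forward(x), W @ x)
```

Because B starts at zero, fine-tuning begins from the pretrained model's exact behavior, and only the small A and B matrices (2 × r × d parameters instead of d²) need gradients, which is what makes LoRA cheap enough for consumer hardware.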
The initiative comes after R1 stunned the artificial intelligence community by matching the performance of the most capable models built by U.S. firms, despite being built at a fraction of the cost.
Hugging Face has integrated four serverless inference providers (Fal, Replicate, SambaNova, and Together AI) directly into its model pages. These providers are also integrated into ...
Dubbed “nullifAI,” the detection-evasion tactic targets pickle files in ML models, demonstrating the fast-growing cybersecurity risks presented by ...