Hugging Face attacked

Hugging Face has revealed that hackers have compromised several of its users' accounts

Martin Crowley
June 3, 2024

Hugging Face, home to one of the industry’s biggest repositories of AI models and apps created and submitted by its community, has detected “unauthorized access” to Spaces, its platform for creating, sharing, and hosting AI apps.

What happened and who has been affected?

While we don’t know how many users have been affected, Hugging Face has confirmed that hackers broke into Spaces and gained access to some users’ “secrets”: private pieces of information, such as authentication credentials and tokens, that unlock third-party accounts, developer environments, and tools.

What has Hugging Face done about it? 

Hugging Face has expressed deep regret over the situation, and has already revoked any compromised authentication tokens and notified those affected by email. They’ve also advised users to refresh their tokens and switch to “fine-grained” access tokens, which give tighter control over which resources each token can access, making them more secure.
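For users wondering what that looks like in practice, here is a minimal sketch of switching to a newly generated fine-grained token with the huggingface_hub Python library; the token string below is a placeholder, not a value from the incident, and the real one would be created in the account's token settings page with only the permissions it actually needs:

```python
from huggingface_hub import login, whoami

# Log in with a newly generated fine-grained token
# (placeholder value; generate a real one at https://huggingface.co/settings/tokens
# after revoking any older, potentially exposed tokens).
login(token="hf_xxxxxxxxxxxxxxxxxxxxxxxx")

# Confirm which account the new token authenticates as.
print(whoami())
```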

They’ve also confirmed that they’re working with “outside cyber security forensic specialists” to investigate what happened and have reported the incident to law enforcement and data protection authorities. They will also be reviewing their internal security policies and procedures to mitigate any further risk. 

“We deeply regret the disruption this incident may have caused and understand the inconvenience it may have posed to you. We pledge to use this as an opportunity to strengthen the security of our entire infrastructure.”

This comes after a series of concerns raised by cybersecurity experts about how secure the Hugging Face platform is. Back in February, cybersecurity firm JFrog found around 100 malicious AI models capable of executing malicious code on people’s machines. Cloud security firm Wiz also found a vulnerability (which has since been fixed) that allowed hackers to upload malicious code, and security start-up HiddenLayer discovered that Hugging Face could be abused to create malicious AI models.