
AI Models Cracked Open by Security Researchers

No big surprise here.

Researchers gained full read and write access to the repositories behind several large language models, including Meta's Llama, BigScience's Bloom and EleutherAI's Pythia, in a textbook example of supply chain risk.

If they had been criminal hackers or a hostile nation-state, they could have poisoned the training data, stolen the models and datasets, or done worse.

AI, apparently, is suffering from all of the same security problems that the rest of us are dealing with.

The compromise (I am not calling it an attack, because if you leave the door open, you can expect someone to walk in) is due to developers leaving API tokens hardcoded and exposed in code on GitHub and Hugging Face. A rough sketch of the problem and the fix follows.
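For context, here is a minimal sketch of the difference, assuming the huggingface_hub Python client; the hardcoded literal is the kind of credential that ends up in public repos, while reading the token from the environment keeps it out of version control. The HF_TOKEN variable name is an illustrative choice, not something mandated by the research.

```python
import os

from huggingface_hub import HfApi

# Anti-pattern: a token literal committed to a public repo is exactly
# the kind of credential that gets scraped and abused.
# api = HfApi(token="hf_xxxxxxxxxxxxxxxxxxxxxxxx")  # never hard-code this

# Safer: read the token from the environment (or a secrets manager)
# so it never lands in version control.
token = os.environ.get("HF_TOKEN")
if token is None:
    raise RuntimeError("Set HF_TOKEN in the environment; do not commit it.")

api = HfApi(token=token)
print(api.whoami()["name"])  # confirms which account the token belongs to
```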

The researchers found at least 1,500 such tokens when they went looking, giving them varying degrees of access to more than 700 other organizations, including Google, Microsoft and VMware.
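For illustration only, here is a simplified sketch of how exposed tokens can be found at scale. It assumes the "hf_" prefix that current Hugging Face user access tokens use; it is not the researchers' actual tooling, and real secret scanners also validate each hit against the API before reporting what it can access.

```python
import re
from pathlib import Path

# Flag strings that look like Hugging Face user access tokens.
# The "hf_" prefix and length are assumptions about the current format.
TOKEN_PATTERN = re.compile(r"\bhf_[A-Za-z0-9]{30,}\b")


def scan_repo(root: str) -> list[tuple[str, str]]:
    """Walk a checked-out repository and collect token-like strings."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for match in TOKEN_PATTERN.finditer(text):
            hits.append((str(path), match.group(0)))
    return hits


if __name__ == "__main__":
    for file, token in scan_repo("."):
        print(f"{file}: possible exposed token {token[:8]}...")
```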

GitHub and similar platforms are not responsible for sanitizing the data that you put there. If you put the keys to the kingdom on your front porch, assume porch pirates will steal them.

If you need help, please contact us. Credit: Dark Reading
