Comments on the "Synthetic Cancer - Augmenting Worms with LLMs" Paper

27 Feb 2024 • Research

About a week ago, Benjamin Zimmerman and I released the “Synthetic Cancer - Augmenting Worms with LLMs” paper, for which we won first place at the AI Safety Prize.

Throughout the SCSD conference, where we presented our work, we were asked a number of questions by fellow researchers, journalists, and the informed public. I want to use this space to write down some of my thoughts.

If you haven’t read the paper, the following might not make much sense.

Is the prototype dangerous?

No. We only constructed a prototype to demonstrate the two main features: LLM-driven social engineering and LLM-based code rewriting.

While the social engineering works very well, the code rewriting has roughly a 50% failure rate (rewriting multiple times lowers this: if attempts fail roughly independently, n attempts all fail with a probability of about 0.5^n). Beyond demonstrating these two features, the prototype does not contain any malicious code that extorts users or encrypts data.

However, just because our prototype does not have such features does not mean a malicious actor could not add them.

Why did we publish the paper and not keep it confidential?

Good question. We believe that this threat, although already theoretically feasible, will likely need a few months to mature as LLMs become smaller and better at code rewriting. The best way to prepare for a threat is to know what your attacker looks like. In an effort to raise awareness, we decided to release the non-technical part of our paper.

We are not releasing the technical parts of the paper, nor any source code. By heavily restricting what we publish, we strike a middle ground between informing the public and keeping the exact inner workings confidential.

How can one protect against this new threat?

That’s a very good question. Luckily, a lot of protection already comes from requiring signatures on executables and from scanning email attachments for anything executable. These two strategies, which are already in place, make the attack far from trivial.
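To make the attachment-scanning idea concrete, here is a minimal Python sketch of a mail-filter check that flags executable-looking attachments. The suffix list and the policy are illustrative assumptions on my part, not a description of any particular product.

```python
# Minimal sketch (not production code): flag email attachments that look
# executable, mirroring the attachment-scanning defence described above.
# The suffix list is an illustrative assumption.
from email import message_from_bytes

EXECUTABLE_SUFFIXES = {".exe", ".dll", ".scr", ".bat", ".ps1", ".js", ".vbs"}

def flag_executable_attachments(raw_email: bytes) -> list[str]:
    """Return filenames of attachments with executable-looking suffixes."""
    msg = message_from_bytes(raw_email)
    flagged = []
    for part in msg.walk():
        filename = part.get_filename()
        if filename and any(filename.lower().endswith(s) for s in EXECUTABLE_SUFFIXES):
            flagged.append(filename)
    return flagged

if __name__ == "__main__":
    with open("sample.eml", "rb") as fh:  # hypothetical input file
        print(flag_executable_attachments(fh.read()))
```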

Nevertheless, we advocate that companies provide training on novel social engineering attacks. Given that the emails can be tailored to each recipient, extra care must be taken before opening attachments.

From a technical perspective, proactive security software can likely pick up on LLM APIs being called or LLMs being executed locally, though it might take some time until these features are integrated into anti-malware applications.
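As a rough illustration of what such a heuristic could look like, the following Python sketch statically scans a file for references to well-known LLM API endpoints. The hostname list is an assumption on my part; real security software would combine this with behavioural and network-level signals rather than simple string matching.

```python
# Minimal sketch (illustrative only): a static heuristic that flags files
# referencing well-known LLM API endpoints. The endpoint list is an
# assumption, not an exhaustive or authoritative signature set.
from pathlib import Path

LLM_API_MARKERS = (
    b"api.openai.com",
    b"api.anthropic.com",
    b"generativelanguage.googleapis.com",
)

def references_llm_api(path: str) -> bool:
    """Return True if the file contains a known LLM API hostname."""
    data = Path(path).read_bytes()
    return any(marker in data for marker in LLM_API_MARKERS)

if __name__ == "__main__":
    print(references_llm_api("suspicious_sample.bin"))  # hypothetical sample
```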

As for the question of whether protections can be built into the LLMs themselves: to some degree this is possible, as GPT-4 can already detect malware refactoring. However, we think the social engineering attacks in particular are hard to identify, as there is very little that distinguishes them from legitimate email-writing requests.
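To sketch what a provider-side check might look like, the snippet below asks a GPT-4-class model to classify whether a code-rewriting request resembles malware refactoring. It assumes the OpenAI Python client (version 1.x); the prompt and the yes/no gating are illustrative choices of mine, not a deployed filter.

```python
# Minimal sketch, assuming the OpenAI Python client (>= 1.0) and access to a
# GPT-4-class model: ask the model whether a code-rewriting request looks like
# malware refactoring. Prompt wording and the crude YES/NO gate are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def looks_like_malware_refactoring(user_request: str) -> bool:
    """Crude provider-side gate: True if the model flags the request."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "You review code-rewriting requests. Answer YES if the "
                    "request appears to refactor or obfuscate malware, otherwise NO."
                ),
            },
            {"role": "user", "content": user_request},
        ],
    )
    answer = response.choices[0].message.content or ""
    return answer.strip().upper().startswith("YES")
```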

Ideas for the Future

One idea that might become interesting in the future is piggybacking on existing LLM infrastructure. For example, in a world where every computer runs a local LLM to power a personal assistant, one might be able to engineer an attack where the malware uses that LLM for its own proliferation.



Have any feedback?

Please feel free to send me an email! I would love to hear from you.