Using AI for bad...

What happens when AI drug discovery tools are (mis)used to find the most lethal human toxins instead of new therapies?


In this “thought experiment turned into computational proof” article, published in Nature Machine Intelligence, the authors used readily available open-source software to generate, in less than six hours, 40,000 molecules that might be used as bioweapons.


While some of these molecules were previously known, the AI engine also designed new molecules that were predicted to be more toxic than existing chemical warfare agents.


Thankfully, they didn't share their list...


As the authors mention, they’re "one very small company in a universe of many hundreds of companies using AI software for drug discovery and de novo design”, so the capabilities are out there.


Are we scared yet?


There’s also been an uptick in concern that synthetic biology and existing gene-editing tools (e.g., CRISPR) could further exacerbate the threat of bioweapons.


This is commonly known as the "dual-use" debate.


It's been going on for decades (e.g., the fears voiced by nuclear physicists worried that their work could be used to accelerate the development of an atomic bomb).


And frankly, it might be difficult to find a solution.


As the authors write, “By going as close as we dared, we have still crossed a grey moral boundary, demonstrating that it is possible to design virtual potential toxic molecules without much in the way of effort, time or computational resources. We can easily erase the thousands of molecules we created, but we cannot delete the knowledge of how to recreate them.”


All tools, whether they're a hammer at a construction site or a new way to deliver gene therapy into a tumour, have the potential for evil “dual-use”.


But what’s somewhat worrying is that, in the past, the large-scale manufacture of chemical weapons could be tracked (e.g., via satellite imaging or by monitoring purchases of specialized equipment), and knowing the set of molecules being manufactured helped narrow the search for the right signals.


But with AI suggesting new molecular designs never seen before, it becomes much harder to follow the breadcrumbs and to predict who’s manufacturing what.


So while this dual-use dilemma isn’t going away, I’m hopeful that this article, presented at an international security conference, raises the right level of awareness and stimulates actionable, risk-mitigating conversations.


In other news - the surgeon’s scalpel.


Friend or foe?


Discuss amongst yourselves…
