Last March, a group of researchers made headlines by revealing that they had developed an artificial intelligence (AI) tool that could invent potential new chemical weapons. What’s more, it could do so at incredible speed: it took the AI tool only six hours to suggest 40,000 of them.
The most worrying part of the story, however, was how easy it was to develop that AI tool. The researchers simply adapted a machine-learning model normally used to screen new medical drugs for toxicity. Rather than having the model flag dangerous compounds so they could be avoided, they paired it with a generative model and a toxicity dataset and directed it to design new toxic molecules.
The paper was not promoting an illegal use of AI (chemical weapons were banned in 1997). Instead, the authors wanted to show just how easily peaceful applications of AI can be misused by malicious actors—be they rogue states, non-state armed groups, criminal organizations, or lone wolves. Exploitation of AI by malicious actors presents serious and insufficiently understood risks to international peace and security.
Read in full here:
This thread was posted by one of our members via one of our news source trackers.