Dual Use of AI: How AI can be exploited to produce biological weapons

The development of computational technologies and the increasing use of AI tools open up entirely new opportunities for biology, microbiology, virology, pharmacy, and genetics. Researchers can now study the genomes and molecules of humans and other living organisms, most importantly microbes and viruses, detect new mutations, intervene in genetic structures, predict new diseases and infections, and develop new medicines, vaccines, and other pharmaceutical products with unprecedented speed and accuracy. These prospects ignite the hope that many still-incurable diseases will be defeated and that public health crises, such as epidemics and pandemics, will be addressed more efficiently in the near future. For example, the pharmaceutical company Novartis used AI to develop a vaccine candidate for H7N9, a strain of avian influenza, in less than three months. [1]


In parallel with these hopes, however, there is growing concern over the threats posed by the possible abuse of AI tools to develop biological and chemical weapons, which endanger global human security. For instance, Congresswoman Anna G. Eshoo recently appealed to the National Security Advisor and the Office of Science and Technology Policy to address the dual-use problem of AI in the context of biosecurity and urged the introduction of relevant codes of conduct. The congresswoman stated that “AI models released without appropriate safeguards can lead to real-world harms. These risks are particularly acute regarding biosecurity. The same AI models designed to assist in the design of new molecules for drug discovery can be easily transformed and directed to design new, lethal molecules. Open-source AI is the primary route for learning and creating new models, and the necessary datasets needed to create harmful toxins are readily available, creating significant biosecurity risks.” [2] In addition, researchers in the field of AI security are warning policymakers about the dangers of applying AI to biological warfare, a risk that grows with the rise of authoritarian tendencies worldwide. [3]


According to the World Health Organization, biological weapons are “either microorganisms like viruses, bacteria or fungi, or toxic substances produced by living organisms that are produced and released deliberately to cause disease and death in humans, animals, or plants.” [4] Together with chemical, radiological, and nuclear weapons, they are classified as weapons of mass destruction. The effects of biological weapons cannot be confined within national borders and can have dramatic consequences: mass fatalities, environmental disasters, food shortages, and the deterioration of public health. Moreover, since the power of these weapons lies in the use of microorganisms, it is extremely difficult to identify the perpetrators and hold them criminally responsible. For these reasons, biological weapons are comprehensively banned under the Biological Weapons Convention, opened for signature in 1972 under the auspices of the United Nations. [5]


Despite the Convention, which has been signed by 183 countries [6], implementation of its stipulations is far from ideal. Biological weapons, often referred to as the “poor man's atom bomb” [7], can be developed more easily, secretly, and cheaply than other weapons of mass destruction [8], and their production becomes even more efficient when AI tools are used. According to a recent article [9], researchers at Collaborations Pharmaceuticals, a company that uses machine learning to discover, design, and test chemical compounds for drug development, inverted their algorithm and directed it to generate variations of toxic chemical compounds. Before the experiment, the company had never considered the potential misuse of AI technologies for drug discovery: “the thought had never previously struck us. We were vaguely aware of security concerns around work with pathogens or toxic chemicals, but that did not relate to us; we primarily operate in a virtual setting,” they emphasized. The results were troubling: in less than six hours, the machine-learning model produced more than 40,000 new compounds, many predicted to be as toxic as known chemical warfare agents. [10] This research aptly highlights the immense scale at which artificial intelligence can be exploited to harm humanity.


There are different ways to exploit AI to produce biological weapons. For example, pharmaceutical companies increasingly use machine-learning models that minimize the toxicity of chemical compounds in order to produce safer medicines. Such a model can be inverted so that it maximizes toxicity instead. [11] Likewise, the powerful capacity of AI algorithms to synthesize and analyze DNA sequences can be used to develop new lethal viruses and toxins, or to make existing ones more infectious and resilient. [12] This abuse of AI tools can occur in two ways. First, the perpetrators can be pharmaceutical companies or biological researchers who, deliberately or mistakenly, redirect the process of scientific research toward manufacturing biological weapons. The second is adversarial attacks, which manipulate data, sensors, and AI networks to contaminate genomes, vaccines, antibiotics, the chemical compounds of medicines, or other biological materials, turning them into destructive biological weapons. [13]
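To see why this kind of inversion is so easy, it helps to notice that in a typical computational screening pipeline, "avoid toxicity" is often just a sign on one term of a scoring function. The following is a purely illustrative Python sketch, not any real drug-discovery system: all function names are hypothetical, the "candidates" are abstract random feature vectors rather than molecules, and the "models" are trivial stand-ins. It exists only to show how the entire dual-use distinction can collapse into a single weight.

```python
import random

def predicted_toxicity(candidate):
    """Stand-in for a learned toxicity model (hypothetical).
    Returns a score in [0, 1]; higher means more toxic."""
    return sum(candidate) / len(candidate)

def predicted_efficacy(candidate):
    """Stand-in for a learned efficacy/drug-likeness model (hypothetical)."""
    return 1.0 - abs(0.5 - sum(candidate) / len(candidate))

def score(candidate, toxicity_weight=-1.0):
    """Composite objective used to rank candidates.
    In legitimate drug discovery the weight is negative: toxicity is
    penalized. The dual-use concern is that flipping this single sign
    to a positive value turns the identical pipeline into a search
    for maximally toxic candidates."""
    return predicted_efficacy(candidate) + toxicity_weight * predicted_toxicity(candidate)

def screen(n_candidates=1000, n_features=8, seed=0):
    """Generate abstract candidate vectors and return the best one
    under the current objective (deterministic via the seed)."""
    rng = random.Random(seed)
    candidates = [[rng.random() for _ in range(n_features)]
                  for _ in range(n_candidates)]
    return max(candidates, key=score)

best = screen()
```

The point of the sketch is architectural, not chemical: because the generative and screening machinery is identical in both directions, safeguards cannot live in the model alone; they must cover access to the pipeline and its objectives.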


Several recommendations can help mitigate the biosecurity threats arising from the dual use of AI tools. First and foremost, considering the immense significance of the matter, public discussion of this issue is strikingly rare; it is therefore of utmost importance to raise awareness among politicians, policymakers, and society at large. Public institutions should also invest more in ensuring the ethical use of AI tools in biological, chemical, and microbiological research. In addition, international organizations and individual states should introduce a new, more rigorous research framework specifically designed to ensure the proper use of AI tools. Moreover, pharmaceutical companies and biological research centers should take special measures to encrypt the data they possess, so that adversarial attacks cannot exploit viruses, germs, and genomes to produce destructive biological weapons. Furthermore, special attention should be paid to educating young researchers and scientists not only in the ethics of scientific research in general but also in the ethics of using AI tools. Last but not least, defensive measures against bioterrorism and the dual use of AI should be based on cooperation between states and between public and private enterprises, since the danger of biological warfare is global in reach and everybody, even the perpetrators, is vulnerable to its consequences.




[1] Reinhold, Thomas, and Niklas Schörnig. Armament, Arms Control and Artificial Intelligence. Springer Nature, 2022.












[13] Reinhold, Thomas, and Niklas Schörnig. Armament, Arms Control and Artificial Intelligence. Springer Nature, 2022.

