We’re excited to share that the European Commission’s AI Office has chosen SecureBio to develop biological threat evaluations (‘evals’) to support the implementation of the EU’s landmark Artificial Intelligence Act. This is the second time a government has engaged SecureBio to build such evals, following a contract awarded by the US government’s Center for AI Standards and Innovation.
As part of a consortium led by FAR.AI, SecureBio and other leading AI research organizations won the bid for Lot 1 of the EU AI Act’s “Technical Assistance for AI Safety” tender. In line with the protections set out in the EU AI Act, the consortium will monitor whether AI could pose risks by expanding access to chemical, biological, radiological, and nuclear (CBRN) threats. Other members of the consortium include SaferAI, GovAI, Nemesys Insights, and Equistamp.
Over the next three years, SecureBio will focus on:
Delivering pre-made biological evaluations: We’ll integrate and deliver established, publicly available biological evaluations like our Virology Capabilities Test into the Commission’s assessment framework.
Developing custom evaluations: We’ll design and build new AI evaluations to address gaps in current coverage of biological threat scenarios.
Performing quality assurance and human baselining: We’ll establish rigorous quality standards for biological evaluations, including human baseline studies to calibrate AI performance against experts.
Building evaluation infrastructure: We’ll help streamline and simplify the biological evaluation process for the EU’s AI Office, enabling consistent assessment of frontier models as they emerge.
Ben Mueller, Executive Director of SecureBio, said: “AI is poised to bring about tremendous progress in the medical and life sciences. At the same time, the technology generates risks that need to be better understood. We are proud that SecureBio’s AI team has been selected by the European Commission to support its efforts to understand risks posed by advanced AI. Our staff of scientists, researchers, and software engineers has a strong track record of producing rigorous, balanced evaluations to understand the capabilities and risks of frontier models in biology and associated fields, and we are pleased to contribute to this important undertaking.”

