UK AI Liability Campaign

Hold AI developers liable for severe harm

We’re calling on the UK government to introduce legislation that makes AI companies legally responsible when their models enable critical harms — from cyber attacks on national infrastructure to biosecurity threats.

Email your MP →

AI developers face no meaningful financial or other punitive consequences for foreseeable harms to British citizens. — Open letter to the Prime Minister

In February 2026, a single operator used commercially available AI to breach nine Mexican government agencies and exfiltrate the personal data of 195 million citizens. The UK’s own AI Security Institute has found that today’s most advanced models can “discover and exploit vulnerabilities autonomously — tasks that would take human professionals days of work”. Britain depends on the same critical infrastructure these tools can now attack.

Liability is a simple, technology-neutral way to align AI developers’ incentives with public safety: those who profit from creating catastrophic risk must bear meaningful responsibility when that risk materialises.

Email your MP →