Research
At LawZero we are researching how to build non-agentic AI that learns primarily to understand the world rather than to act in it, giving truthful answers to questions based on transparent and externalized probabilistic reasoning. Such AI systems could be used to accelerate scientific discovery, provide oversight for agentic AI systems, and advance our understanding of the risks posed by AI and how to avoid them. Our approach stands apart from that of frontier AI companies, which are increasingly focused on developing agentic AI systems. As these agents grow more capable and autonomous, many experts warn that their behaviour may become less predictable, less interpretable, and potentially more dangerous.
Featured Publication
The leading AI companies are increasingly focused on building generalist AI agents — systems that can autonomously plan, act, and pursue goals across almost all tasks that humans can perform. However useful these systems might be, unchecked AI agency poses significant risks to public safety and security, ranging from misuse by malicious actors to a potentially irreversible loss of human control. We discuss how these risks arise from current AI training methods.