I have launched a new Substack, entitled Law in Crisis, which some might find interesting. As the masthead explains, it is a forum for “musings on how law can help our response to crises, and how to respond to the crisis in law itself – use of force and armed conflict, climate change and security, AI and security, constitutional and int’l human rights, and the rule of law more generally.” The plan is to publish an essay of between one and two thousand words at least once a week.
International Law
The Global Risk in Trump’s AI Action Plan
In this blog post Mike Kelly and I examine how Trump’s AI Action Plan, issued in July 2025, contributes to the risks associated with artificial general intelligence (AGI), and explain why global governance of research and development at the frontier of AGI is so important. Some are skeptical of how close we are to AGI, or of how much of a risk it really poses, but as we explain, there is a non-trivial possibility that AGI could pose an existential threat to humanity, and thus the precautionary principle requires that we take steps now to address that risk.
Can Climate Trigger Armed Conflict? A Discussion with Ayesha Malik of Dawn News
It was a pleasure to join Ayesha Malik of Dawn News in Pakistan for a podcast discussing a range of climate change issues, and in particular the ways in which both contributions to climate change, and particular responses to it, may contribute to the risks of armed conflict.
Autonomous Weapons Systems and Proportionality – New Article
I have published a new law review article, “Autonomous Weapons Systems and Proportionality: The Need for Regulation,” in the Case Western Reserve Journal of International Law, Vol. 57:1 (2025). The full text is available for download at SSRN here, and the abstract is below:
This article examines the question of whether International Humanitarian Law (IHL) requires modification to effectively govern autonomous weapons systems (AWS). While extensive scholarly discourse has focused on whether AWS can comply with existing IHL principles, and whether the development and deployment of AWS should be constrained through weapons treaties, insufficient attention has been paid to why and how IHL itself might need adaptation to regulate AWS as a method of warfare. Given that the imminent development and deployment of AWS is unlikely to be prohibited, and that AWS may not comply with IHL in certain circumstances, the question of why and how IHL needs to be adjusted is important. The analysis focuses on the principle of proportionality as a means of exploring and illustrating the issue.
The article first traces the evolution of AWS governance debates, highlighting the impasse at the Convention on Certain Conventional Weapons (CCW) regarding constraints on AWS development. It then conducts a detailed examination of the distinct elements and operation of the principle of proportionality, a principle whose implementation demands complex, contextual, and sophisticated judgment.
The article examines the evidence of certain weaknesses of AI-operated AWS, explains why the operationalization of the principle of proportionality would present challenges for such systems, and argues that AWS would likely struggle to implement the principle reliably and predictably in certain operational contexts. The final section supports and defends proposals for certain adjustments or modifications to IHL to better regulate AWS, thereby ensuring AWS operations remain constrained by the core principles of IHL. It ends by briefly exploring the mechanisms that might be available for developing such constraints.