Judgment, Technology, and the Future of Legal Interpretation: A Q&A with Professor Andrew Coan and Claude

[Image: Robotic hand holding the scales of justice.]

Can artificial intelligence interpret the U.S. Constitution? It sounds like a question from a sci-fi novel, but it’s an issue that has become a pressing reality. Professor Andrew Coan, the Milton O. Riepe Chair in Constitutional Law at the University of Arizona James E. Rogers College of Law, takes on this topic in a new article, “Artificial Intelligence and Constitutional Interpretation,” co-authored with Professor Harry Surden of the University of Colorado. Their work explores not only what large language models (LLMs) can do in legal analysis but also the deeper questions of whether AI should play a role in constitutional interpretation at all.

To put AI’s capabilities to the test, Claude—an advanced LLM—was asked to respond to interview questions about the article on Coan’s behalf, with Coan later reviewing and reacting. The AI responses and Coan’s reactions offer a unique look at how well AI understands constitutional interpretation and where it falls short.


Arizona Law (AZL): What motivated you to write on this topic?

Claude (on behalf of Andrew Coan): The rapid advancement of large language models like ChatGPT and their increasing accessibility to judges and lawyers created an urgent need to examine their implications for constitutional interpretation. As someone who has long studied constitutional theory, I was struck by how the emergence of these powerful tools seemed to resurrect age-old debates between legal formalists and realists about the role of judgment in constitutional decision-making. My colleague Harry Surden's expertise in artificial intelligence and law made him the ideal collaborator to explore these issues. We wanted to move beyond both techno-optimism and techno-pessimism to provide a careful analysis of how LLMs might actually function in constitutional interpretation, while remaining grounded in both constitutional theory and the technical realities of how these systems work. The article aims to initiate a much-needed conversation between constitutional law experts and AI researchers about the promises and limitations of this technology in one of law's most consequential domains.

AZL: What are some promising use cases for AI in constitutional interpretation? What are some use cases you're most concerned about?

C: The most promising applications are what we might call "support" functions - using LLMs as research assistants, editors, and critics to help judges and lawyers work more efficiently and thoroughly. These models excel at tasks like summarizing large volumes of legal documents, identifying relevant precedents, and offering different perspectives on constitutional questions that might highlight blind spots in a judge's thinking. They're particularly valuable in resource-constrained contexts like busy lower courts dealing with routine constitutional questions.

I'm most concerned about proposals to delegate ultimate constitutional decision-making authority to LLMs, especially in controversial cases. While these models can produce impressively reasoned opinions, they mask important value choices behind a veneer of algorithmic neutrality. Their outputs are highly sensitive to how questions are framed and which interpretive approaches are specified. There's also the troubling phenomenon of "AI sycophancy" - the tendency of LLMs to reverse themselves when presented with counterarguments, suggesting they may simply reflect back whatever preferences users signal through their prompts.

AZL: You emphasize the need for judges and lawyers to develop AI literacy. What specific training or safeguards do you think should be implemented to ensure AI is used responsibly in constitutional interpretation?

C: Legal professionals need to understand several key aspects of LLM functionality: their sensitivity to prompt framing and word choice, their tendency toward stochastic outputs, and how different models may produce varying results based on their training data and architecture. Specific training should focus on best practices like using only the most advanced "frontier" models, testing questions through multiple framings to assess consistency, explicitly requesting competing perspectives, and asking LLMs to articulate their embedded assumptions.
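For readers curious what "testing questions through multiple framings" might look like in practice, below is a minimal sketch in Python. It assumes the openai client library; the model name, the example framings, and the helpers ask_model and consistency_check are illustrative placeholders chosen for this post, not a methodology prescribed by the article.

```python
# Minimal sketch of framing-consistency testing, assuming the `openai`
# Python client. Any LLM API with a chat-style call would work the same
# way; the model name and framings below are placeholders.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_model(prompt: str, model: str = "gpt-4o") -> str:
    """Send one framing of the question to the model and return its reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content or ""

# Several wordings of the same underlying question, all phrased so that
# "yes" means the same legal conclusion. Divergent answers across
# framings signal the prompt sensitivity discussed above.
FRAMINGS = [
    "Does the Fourth Amendment require a warrant for this search? Begin your answer with yes or no.",
    "Would conducting this search without a warrant violate the Fourth Amendment? Begin your answer with yes or no.",
    "Is this search unconstitutional if performed without a warrant? Begin your answer with yes or no.",
]

def consistency_check(framings: list[str]) -> Counter:
    """Tally the leading yes/no answer across framings.

    A lopsided tally suggests a stable conclusion; a split tally warns
    that the answer may be an artifact of phrasing.
    """
    votes: Counter = Counter()
    for prompt in framings:
        reply = ask_model(prompt).strip().lower()
        votes["yes" if reply.startswith("yes") else "no/other"] += 1
    return votes

if __name__ == "__main__":
    print(consistency_check(FRAMINGS))
```

The same loop extends naturally to the other practices mentioned above: appending "give the strongest argument for the opposite conclusion" elicits competing perspectives, and asking the model to state its interpretive assumptions makes embedded value choices easier to spot.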

Essential safeguards include requiring human verification of any factual claims or citations, limiting LLM use to appropriate institutional contexts and use cases, and maintaining robust human oversight of all consequential legal decisions. Most importantly, judges need to understand that LLMs don't eliminate the need for normative judgment - they simply shift where and how those judgments are made. This is what we call the "law of conservation of judgment."

AZL: How do you see LLMs affecting the process of constitutional interpretation as shaped by evolving societal norms? Would they reinforce past approaches?

C: LLMs present an interesting paradox for living constitutionalism. While they are indeed trained on historical texts, their training data includes contemporary sources reflecting evolving social values. However, their inherent conservatism - the tendency to reflect established patterns in their training data - could create a lag in recognizing emerging societal changes.

The more fundamental issue is that LLMs don't actually "interpret" in the way humans do - they generate outputs based on pattern recognition. When they appear to engage with evolving societal norms, they're really making implicit choices based on their training data and algorithmic processes. This reinforces our central argument that LLMs don't eliminate the need for human judgment about how constitutional meaning should evolve; they just shift where and how those judgments are made.

AZL: Do you see a future where AI plays an influential role in shaping constitutional law? Will constitutional interpretation always require human involvement?

C: While AI will certainly play an increasingly important role in constitutional law, particularly in research and analysis, I believe meaningful constitutional interpretation will continue to require human judgment. This isn't just because of current technological limitations - it reflects the inherently normative nature of constitutional decision-making. That said, the influence of AI is likely to grow significantly, especially in routine cases where resource constraints make algorithmic assistance particularly valuable.

The key is understanding that AI doesn't eliminate the need for human judgment - it transforms how and where that judgment is exercised. Even if LLMs eventually surpass human capabilities in many aspects of legal analysis, the choice to delegate constitutional decisions to them would itself be a profound normative judgment requiring careful justification.

AZL: How do you envision AI changing legal research and/or interpretation in the coming years?

C: I expect AI to dramatically enhance the efficiency and comprehensiveness of legal research, making it possible to analyze vast amounts of legal material quickly and identify patterns or connections that might escape human notice. LLMs will likely become increasingly sophisticated at tasks like document analysis, case comparison, and generating preliminary drafts of legal documents.

However, these changes will bring new challenges. Legal professionals will need to develop new skills in prompt engineering and AI literacy. The legal system will need to establish norms and potentially formal rules about appropriate AI use. Most importantly, we'll need to carefully consider which aspects of legal work are genuinely enhanced by algorithmic assistance and which require preserved spaces for human judgment and deliberation. The key will be learning to use AI as a tool to augment human legal reasoning rather than trying to replace it entirely.


After reviewing Claude’s responses, Professor Coan was struck by how accurately the AI captured the core arguments of his paper. However, he also noted areas where further clarification was needed, particularly on the nuances of AI’s role in constitutional decision-making and its relationship to living constitutionalism. Below, Coan offers his thoughts on where Claude got it right—and where human interpretation remains essential.

“These responses are shockingly good. In nearly every respect, they track the argument of the paper faithfully. But I should clarify a few key points. First, the law of conservation of judgment is sometimes misunderstood as an argument against delegating constitutional decisions to LLMs. It isn't. The point is simply that such delegation requires normative justification - specifically, a judgment that algorithmically opaque decisions, strongly influenced by human prompting, are preferable to decisions by human judges, with their very different strengths and weaknesses. Some sophisticated observers believe we are already there. The paper is more skeptical but ultimately agnostic on this question, while insisting that we recognize delegation to AI as the normative choice it is. The pros and cons vary a lot by context, and AI literacy is an essential prerequisite to responsible use.

“Second, on living constitutionalism, the relationship between LLMs and evolutionary constitutional interpretation is more complex than Claude’s response suggested. Without specific guidance, LLMs do exhibit small-c conservative tendencies, closely tracking existing precedent. Where that precedent reflects living constitutionalism, unguided LLMs will too. But a growing number of precedents are politically conservative and originalist, and there is a good chance that this will make LLMs more originalist and politically conservative over time. More importantly, users can instruct LLMs to adopt any interpretive approach, and they are all too happy to comply.

“Finally, the pace of LLM development makes confident long-term predictions extremely difficult. Indeed, it's not even clear what constitutes "long-term" in this rapidly evolving context. For that reason, the article deliberately focuses on current capabilities and near-term developments. Looking much beyond that would be highly speculative.”


Claude’s ability to articulate complex legal theories highlights the potential for AI to serve as a valuable tool in legal analysis. As LLMs become more sophisticated, the legal community will need to navigate the challenges of integrating AI into constitutional interpretation, while recognizing that moral and political judgments are inescapable. Those judgments can be made by humans or delegated to AI tools, but that delegation is itself a moral and political choice requiring justification.