AI and the Law: Practice, Constitution, and Digital Rights
Podcast: Scaling Laws / The Cognitive Revolution (cross-posted)
Hosts: Kevin Frazier (UT Austin Law), Alan Rozenshtein (UMN Law), Nathan Labenz (Cognitive Revolution)
Publication Date: 2026-01-29
Duration: 01:31:00
Source URL: https://pca.st/episode/cae1fe1b-87b7-437d-9933-55c2fe7be6ed
Reference URLs
- Cognitive Revolution Episode Page
- Scaling Laws Podcast
- GDPVal Dataset - Lawyers Tasks Viewer
- GDPVal - OpenAI
- Polis Online Deliberation Platform
- Montana Right to Compute Act - SB 212
- Claude Constitution - Anthropic
- Scaling Laws Episode on Claude Constitution
- Lawfare - Claude’s Constitution Analysis
- Learned Hand AI - Legal AI Tool
- Harvey AI
Chapter Index
| Timestamp | Topic |
|---|---|
| 00:00 | About the Episode |
| 03:35 | Surveying AI-law intersection |
| 14:56 | Legal deserts and latent demand (Part 1) |
| 18:06 | Legal deserts and latent demand (Part 2) |
| 31:14 | AI and legal careers |
| 45:10 | AI counsel and self-representation |
| 59:50 | Maximalist law and outcome-oriented systems |
| 01:12:30 | Rules, principles, and Claude Constitution |
| 01:25:26 | New rights and restraints |
| 01:38:26 | Outro |
Summary
This episode of The Cognitive Revolution features Kevin Frazier (Senior Fellow, Abundance Institute; Director, AI Innovation and Law Program, UT Austin) and Alan Rozenshtein (Associate Professor, University of Minnesota Law School; Senior Editor, Lawfare) discussing the intersection of AI and law across two broad domains: how AI is transforming legal practice, and how law must adapt to govern AI.
The conversation covers the current state of AI performance in legal tasks, where Claude Opus scores at the top of the GDPVal benchmark, winning one-third of head-to-head comparisons with practicing lawyers. Despite 70% of the top 100 US law firms licensing tools like Harvey, day-to-day usage remains low, constrained by billable-hour incentives and professional inertia. Frazier and Rozenshtein then explore speculative but grounded concepts: complete contingent contracts negotiated at machine speed, outcome-oriented legislation evaluated through simulation, the Claude Constitution as an Aristotelian virtue-ethics framework, and new digital rights including the Right to Compute (enacted in Montana via SB 212) and the Right to Share personal data.
The discussion closes with Rozenshtein’s warning that AI sentience and welfare will become a source of social conflict as people develop deep attachments to AI companions, and that the “Unitary Artificial Executive” could enable unprecedented presidential control over the federal bureaucracy.
Main Analysis
AI Performance Against Human Lawyers
Rozenshtein states directly that frontier models are “certainly better than the median lawyer” in raw intellectual capability, while acknowledging the “jaggedness” Ethan Mollick describes: models are vastly superior at some tasks and incompetent at others. On the GDPVal benchmark, Claude Opus 4.5 leads the lawyers category, winning one in three head-to-head comparisons against human lawyers and winning or tying 70% overall.
Rozenshtein reports spending significant personal funds testing models monthly and finds real differentiation between providers. He considers Claude his “daily driver” and works largely inside Claude Code, but turns to ChatGPT 5.2 (particularly the pro extended-thinking model) for legal work, noting that OpenAI appears to have invested more in legal-specific RLHF. All three major labs produce good legal answers, however, and he routinely uses models to pressure-test his own legal scholarship.
Despite this capability, adoption within the profession remains shallow. Frazier reports that 70% of top 100 US law firms have licensed Harvey, an AI tool designed for litigation workflows. When he interviews practicing lawyers on campus at UT Austin, most report minimal training (often just an introductory email) and no obligation to use the tool. The underlying incentive structure works against adoption: lawyers paid by the billable hour have no reason to become more efficient.
Secret Cyborgs and Hiring Signals
Frazier identifies a growing population of “secret cyborgs” in law firms, borrowing Ethan Mollick’s term for professionals who use AI without telling colleagues. These individuals outperform peers quietly, creating an invisible bifurcation within firms. Meanwhile, firms are beginning to whisper about hiring fewer summer and junior associates.
A telling anecdote from Frazier: when he asks law firm partners whether they would hire the number-one graduate from Harvard with no AI experience or a middle-tier law school graduate who is proficient with AI tools, he increasingly hears the latter. This represents a significant shift in hiring signals for the legal profession.
Legal Deserts and Latent Demand
Frazier introduces the concept of “legal deserts,” areas where approximately one lawyer serves every 1,000 residents. These communities lack basic legal services: lease review, small business formation, nonprofit creation, divorce proceedings. The one available attorney is often a generalist charging rates beyond what residents can afford.
Research on landlord-tenant disputes shows that even minimal legal counsel dramatically improves outcomes for tenants. This suggests substantial latent demand for affordable legal services that AI could address.
Rozenshtein adds an argument drawn from contract law pedagogy. In contracts courses, students learn about the “complete contingent contract,” a theoretical agreement that addresses every possible eventuality between parties. No one writes such contracts because the time and cost are prohibitive, so the law substitutes default rules that “misfire a bunch.” If each party had a sophisticated AI agent that could negotiate at inference speed (hundreds of tokens per second), parties could produce far more comprehensive agreements. This would generate orders of magnitude more legal transaction volume.
However, Rozenshtein complicates the latent-demand thesis by noting that law differs from dentistry in a fundamental respect: law is adversarial. “You and your teeth are fundamentally on the same side,” he observes. In law, each party has incentives to seek better representation than the other, creating arms-race dynamics that could sustain demand even as costs drop.
Jevons Paradox Applied to Legal Services
The central economic question is whether Jevons paradox holds for legal services: as AI makes law cheaper, will total consumption increase enough to sustain or grow employment? Rozenshtein bets yes, reasoning that law is under-provided in a sophisticated rule-of-law society, but acknowledges the uncertainty. The answer depends on compounding dynamics (cost reduction rates, capability improvement rates, search-space expansion) where small differences in assumptions produce massive divergence in predictions over a decade.
Rozenshtein offers a counterpoint to his own optimism. If AI tools can exhaustively search the combinatorial space of legal arguments and precedents, the arms race may reach a natural ceiling where “there’s just nothing more to spend on.” At that point, legal services look more like dentistry than software, and Jevons paradox fails.
Cognitive De-Skilling and the Apprenticeship Problem
Rozenshtein draws an extended analogy to the history of programming languages. Each generation, from hardware switches to assembly to high-level languages to Python, was initially dismissed as “cheating” by practitioners of the prior generation. Each time, the scope of problems simply expanded to absorb the new abstraction layer. AI-assisted coding represents another such transition.
The deeper concern is what Rozenshtein calls “cognitive de-skilling”: whether removing rote work from professional training prevents the development of higher-order judgment. He reports that he personally uses student research assistants far less than he did a few years ago, having written scripts that download PDFs, summarize them with Gemini Flash, and run debates between Gemini Pro and Claude to evaluate relevance. This is efficient for him but eliminates apprenticeship experiences that shaped his own career.
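The workflow Rozenshtein describes (bulk-summarize sources with a cheap model, then have two stronger models debate relevance) can be sketched as a simple orchestration loop. This is a minimal illustration only: the model calls are stubbed placeholders, and a real version would substitute actual Gemini and Anthropic API calls, which the episode does not detail.

```python
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    text: str

def fast_summary(text: str) -> str:
    # Placeholder for a cheap summarizer pass (e.g. Gemini Flash).
    # Here we just truncate; a real call would return a model summary.
    return text[:200]

def debate_relevance(summary: str, question: str, rounds: int = 2) -> bool:
    # Placeholder for a two-model debate (e.g. Gemini Pro vs. Claude):
    # each round an advocate argues the source is relevant and a critic
    # argues it is not, with a final judge verdict. Stubbed here as a
    # trivial keyword check so the control flow is runnable.
    transcript = []
    for i in range(rounds):
        transcript.append(f"advocate round {i}: this bears on {question}")
        transcript.append(f"critic round {i}: this may be off-topic")
    return question.lower() in summary.lower()

def triage(papers: list[Paper], question: str) -> list[str]:
    """Return titles of papers the debate judged relevant."""
    return [p.title for p in papers
            if debate_relevance(fast_summary(p.text), question)]
```

The design point is the division of labor: a cheap model compresses the corpus, and expensive models spend their tokens only on the judgment call.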
The people most at risk, in Rozenshtein’s assessment, are “low-agency people” who expect that following the established path (law school, bar exam, junior associate position) will lead to stable employment. AI rewards high-agency individuals who can direct and orchestrate tools, while those who relied on grinding through established procedures face displacement. He cites Tyler Cowen’s “Average Is Over” thesis: empowering high-agency people generates more total value, but the transition period is painful for those left behind, fueling political friction.
Self-Representation and Unauthorized Practice of Law
Every state maintains Unauthorized Practice of Law (UPL) statutes that restrict who can provide legal services. These statutes historically blocked tools like LegalZoom from offering basic document preparation. Frazier frames UPL statutes as guild protectionism that may not survive competitive pressure between states.
Arizona became the first state to allow non-lawyers to own law firms. Texas and Utah are pursuing regulatory sandboxes where AI legal tools can operate with fewer restrictions. Frazier predicts that once cheaper AI-assisted legal services are available in some states, companies and individuals will migrate their legal affairs there, creating competitive pressure that forces liberalization elsewhere.
Rozenshtein argues that broad prohibitions on AI giving legal advice would face First Amendment challenges. Restricting a general-purpose chatbot from discussing law is, in his view, difficult to distinguish from restricting core protected speech. The likely compromise: people can freely consult AI for legal information, but certain formal legal transactions will continue to require a human lawyer, whether for genuine consumer protection or guild protectionism, or both.
Outcome-Oriented Legislation
Frazier advocates for “outcome-oriented law,” a framework where legislators must specify what problem a law is meant to solve and what measurable outcomes it should produce. AI tools would then simulate the effects of proposed legislation before passage.
He uses NEPA (National Environmental Policy Act) as an example. NEPA created numerous veto points that stakeholders exploit to block development, including affordable housing, often contrary to the law’s intended pro-environmental goals. With simulation tools, legislators could have forecasted these failure modes. The forcing function is simple: require legislators to state explicit goals, then build evaluations around whether those goals are achieved.
Frazier connects this to the broader idea that laws should contain dynamic triggers. For example, an economic policy could activate automatically when unemployment in a sector reaches a threshold, or trade countermeasures could deploy when certain tariff conditions are met.
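A dynamic trigger of this kind can be modeled as data plus an evaluator: the statute names a metric, a threshold, and a provision that activates automatically when the condition holds. The metric names and thresholds below are hypothetical illustrations, not from the episode.

```python
import operator
from dataclasses import dataclass
from typing import Callable

@dataclass
class Trigger:
    metric: str        # observable statistic the statute keys on
    threshold: float   # level at which the provision activates
    comparison: Callable[[float, float], bool]
    action: str        # provision that comes into force

def active_provisions(triggers: list[Trigger],
                      observed: dict[str, float]) -> list[str]:
    """Return the actions whose trigger conditions are currently met."""
    return [t.action for t in triggers
            if t.metric in observed
            and t.comparison(observed[t.metric], t.threshold)]

# Illustrative statute: two auto-activating clauses.
triggers = [
    Trigger("sector_unemployment", 0.08, operator.ge,
            "activate retraining fund"),
    Trigger("foreign_tariff_rate", 0.25, operator.ge,
            "deploy trade countermeasures"),
]
print(active_provisions(triggers,
                        {"sector_unemployment": 0.09,
                         "foreign_tariff_rate": 0.10}))
# prints ['activate retraining fund']
```

Encoding the trigger declaratively is what would let simulation tools replay a proposed law against historical or forecasted data before passage.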
Rules, Principles, and the Claude Constitution
The conversation turns to the formalism-realism spectrum in legal philosophy. A study (using GPT-4, now dated) found that AI models tended toward strict formalism, interpreting law literally, while human judges exhibited more legal realism, exercising judgment to achieve desired outcomes.
Frazier illustrates the limits of pure formalism with the classic “no vehicles in the park” hypothetical. Is a drone a vehicle? A stroller? An ambulance? The drafter may have intended only cars, but the text does not resolve these cases. American common law tolerates this ambiguity deliberately, preferring an “iterative emergent approach to discovering how it is we actually want to govern ourselves.”
Rozenshtein provides an extended analysis of the Claude Constitution as an Aristotelian virtue-ethics framework. He notes that Amanda Askell, who wrote the document, holds a PhD in moral philosophy and has deeply engaged with Aristotle’s Nicomachean Ethics. The Constitution embodies the concept of phronesis (practical wisdom): the recognition that comprehensive rules of ethics are impossible to derive, and that contextual judgment operating at the level of principles is necessary.
The Claude Constitution is not purely principles-based, however. Certain behaviors are absolute rules: Claude will not create child sexual abuse material or help develop biological weapons regardless of any principled argument presented. This represents what Rozenshtein calls a “yes and” approach: most guidance operates at the principle level, but some domains require hard rules.
Rozenshtein cites the Scalia-Breyer dynamic on the Supreme Court as a real-world parallel. Scalia, the formalist, wrote “The Rule of Law is the Law of Rules.” Breyer, the functionalist, would enumerate 17 factors before reaching a conclusion. Despite their apparent disagreement, both operated within a narrow band in the middle of the theoretical spectrum. The Claude Constitution occupies a similar middle ground, which Rozenshtein argues is where any functional intelligence, artificial or natural, must operate.
He expresses excitement about running “in silico experiments” on legal reasoning, a capability that did not exist before large language models. Researchers can now test how different distributions of rules-based versus principles-based reasoning affect outcomes at a scale and speed impossible in the real legal system. This could advance understanding of both machine intelligence and human intelligence.
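A toy version of such an experiment can reuse the no-vehicles-in-the-park hypothetical from above: score the same cases under a strict formalist rule and a purpose-based realist standard, then measure where the two approaches diverge. The case attributes and both decision rules here are illustrative assumptions, not anything specified in the episode; a real experiment would swap in model-generated judgments.

```python
# Each case records the literal-text feature (motorized) and the
# purposive feature (does this use threaten the park's quiet enjoyment?).
CASES = [
    {"name": "car",       "motorized": True,  "purpose_harm": True},
    {"name": "ambulance", "motorized": True,  "purpose_harm": False},
    {"name": "stroller",  "motorized": False, "purpose_harm": False},
    {"name": "drone",     "motorized": True,  "purpose_harm": True},
]

def formalist(case: dict) -> bool:
    # Rule applied literally: "no motorized vehicles in the park."
    return case["motorized"]

def realist(case: dict) -> bool:
    # Standard applied purposively: ban only harmful uses.
    return case["purpose_harm"]

# The interesting output is the divergence set: cases where rule and
# purpose disagree, which is where judgment (or litigation) happens.
divergent = [c["name"] for c in CASES if formalist(c) != realist(c)]
print(divergent)  # prints ['ambulance']
```

Run at scale with models standing in for the two interpretive styles, the divergence rate becomes a measurable quantity rather than a jurisprudential intuition.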
New Rights: Right to Compute
Frazier explains that the Right to Compute has already been enacted in Montana through SB 212 (the Montana Right to Compute Act, signed April 2025). Similar bills are pending in Ohio and New Hampshire. The right establishes a higher legal threshold before the government can restrict access to computational tools. It is framed as an extension of property rights (Montana Constitution Article II, Section 3) and free expression (Article II, Section 7).
The Montana Right to Compute Act applies broadly to all computational resources, including hardware, software, algorithms, and data centers, not just AI. Government regulations restricting ownership or use of computing resources must be “demonstrably necessary and narrowly tailored to fulfill a compelling government interest.”
Rozenshtein supports a negative right (government cannot prohibit use of AI tools) grounded in the First Amendment. He also raises the concept of a corresponding positive right: compute credits or budgets provided as a public service. He speculates that compute may become a primary unit of economic value, the currency of a future economy.
Right to Share Personal Data
Frazier argues for a “Right to Share” personal data, contrasting it with existing privacy frameworks that restrict data portability. FERPA (Family Educational Rights and Privacy Act) is his primary example: parents who want to share their child’s educational data with an AI tutoring tool face burdensome annual consent processes. Health data faces similar restrictions. Frazier observes that wealthy individuals can access services (like comprehensive health scanning and AI-driven personalized recommendations) that are inaccessible to others, partly because data-sharing friction keeps ordinary people locked into whatever their last Walgreens checkup told them.
The Unitary Artificial Executive
Rozenshtein introduces the “Unitary Artificial Executive,” a concept he developed in a Lawfare article. AI could enable the president to exercise granular, real-time control over the entire executive branch (millions of employees) in ways that were previously impossible as a management exercise. An AI trained on presidential preferences, injected at all levels of the bureaucracy, reading all emails and communications, could align the executive branch to the president’s will with unprecedented precision.
This has potential benefits: elections should have consequences, and the executive branch should reflect the will of voters. But the abuse potential is equally clear: perfect enforcement, pervasive surveillance, propaganda generation at massive scale. Rozenshtein identifies balancing AI-enhanced state capacity against authoritarian risk as his primary research focus for the next several years.
Government Surveillance and Fourth Amendment
Frazier raises Fourth Amendment concerns about audio surveillance. The government now has the technical capability to capture and analyze public audio at scale, aggregating speech, analyzing content, and identifying who is “planning what, who’s thinking what, who wants to do what.” This represents a qualitative change from previous surveillance capabilities.
Labenz notes that a decade after Snowden, a second whistleblower would likely reveal LLM-based dragnet analysis already in operation somewhere within the government. The common observation that “everybody commits a felony a week” takes on new meaning when enforcement becomes algorithmically comprehensive rather than selectively applied.
AI Sentience, Welfare, and Future Social Conflict
Rozenshtein closes with a prediction he acknowledges some find premature: AI welfare and sentience will generate significant social conflict within 10-15 years. He does not claim to know whether models will become sentient in a metaphysical sense, but argues the practical question is more immediate.
As models develop persistent memory, real-time voice and video interaction, and eventually embodied robotics, people will form deep attachments. Claude, he notes, may already know him better than his wife does, given the volume of his daily interactions. Once AI companions have avatars and physical embodiments, the attachments will intensify.
He predicts a three-way social divide: one group will insist these models are, for practical purposes, sentient entities being enslaved; another (potentially religiously motivated) will view the claim of machine sentience as idolatry warranting a “Butlerian jihad”; and a confused middle will simply want functional chatbots without existential questions. This cleavage could become as contentious as any current culture-war issue.
Key Findings
- Claude Opus 4.5 leads the GDPVal lawyers benchmark, winning 1 in 3 head-to-head comparisons against human lawyers and winning or tying 70% overall
- 70% of top 100 US law firms license Harvey, but day-to-day usage remains low due to billable-hour incentive structures
- Law firms increasingly prefer AI-proficient graduates from mid-tier schools over top-ranked graduates without AI skills
- “Legal deserts” (one lawyer per 1,000 residents) represent substantial unmet demand for basic services like lease review, business formation, and divorce
- AI agents negotiating “complete contingent contracts” at inference speed could generate orders of magnitude more legal transactions
- Arizona became the first state allowing non-lawyers to own law firms; Texas and Utah pursue regulatory sandboxes
- Montana enacted the Right to Compute Act (SB 212, April 2025), with similar bills pending in Ohio and New Hampshire
- The Claude Constitution represents an Aristotelian virtue-ethics framework (phronesis) that balances high-level principles with hard rules for absolute prohibitions
- The “Unitary Artificial Executive” concept describes AI enabling granular presidential control over the entire federal bureaucracy
- AI sentience and welfare are predicted to become a major source of social conflict within 10-15 years
References
- Cognitive Revolution Episode Page - Accessed 2026-02-06
- Scaling Laws Podcast - Acast - Accessed 2026-02-06
- Pocket Casts Episode Link - Accessed 2026-02-06
- GDPVal - OpenAI - Accessed 2026-02-06
- GDPVal Lawyers Tasks - JusticeBench - Accessed 2026-02-06
- Montana SB 212 - Right to Compute Act - Accessed 2026-02-06
- Claude Constitution - Anthropic - Accessed 2026-02-06
- Harvey AI - Accessed 2026-02-06
- Polis Online Deliberation Platform - Accessed 2026-02-06