Here are the top 5 takeaways:
Access to legal data is a fundamental justice problem. Downloading all motions, briefs, and pleadings from PACER would cost ~$2 billion. Damien argued this creates a two-tiered justice system where only well-funded parties can afford the data needed to use predictive AI tools effectively — and that the courts themselves bear responsibility for that inequity.
AI can already build a statistically optimized legal strategy. Damien demonstrated live that tools exist today to analyze thousands of judicial opinions, identify which arguments “tickle a judge’s brain,” and generate motions statistically tailored to win in front of a specific judge. This has been possible for six years — most lawyers just haven’t caught up.
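The core statistic behind "tailored to win in front of a specific judge" is just per-judge win rates by argument type. A minimal, purely hypothetical sketch (the data, field names, and functions here are invented for illustration, not taken from any real tool):

```python
from collections import defaultdict

def argument_win_rates(rulings):
    """Given (judge, argument_type, won) records mined from past opinions,
    compute the historical win rate of each argument before each judge."""
    counts = defaultdict(lambda: [0, 0])  # (judge, argument) -> [wins, total]
    for judge, argument, won in rulings:
        stats = counts[(judge, argument)]
        stats[0] += int(won)
        stats[1] += 1
    return {key: wins / total for key, (wins, total) in counts.items()}

def best_argument(rates, judge):
    """Pick the argument with the highest historical win rate for one judge."""
    candidates = {arg: r for (j, arg), r in rates.items() if j == judge}
    return max(candidates, key=candidates.get)

# Toy example: two argument styles before a hypothetical Judge Smith.
rulings = [
    ("Smith", "textualist", True),
    ("Smith", "textualist", True),
    ("Smith", "policy", False),
]
rates = argument_win_rates(rulings)
print(best_argument(rates, "Smith"))  # prints "textualist"
```

A real system would layer NLP over thousands of opinions to extract the argument labels; the ranking step itself is this simple.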
The “associate bot” pipeline is trivially buildable right now. Damien described a fully automated AI workflow: issue spot → research → draft → partner review → opposing counsel attack → judge simulation — running hundreds of iterations before a human ever sees it. His point: people dismissing AI as “not ready” don’t see the train coming.
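The pipeline Damien describes is, structurally, a simple optimization loop: draft, critique, score, keep the best version, repeat. A sketch of that loop, with every stage stubbed out (in practice each function would be an LLM or retrieval call; all names here are hypothetical):

```python
# Placeholder stages -- a real system would call a model at each step.
def issue_spot(facts):              return f"issues({facts})"
def research(issues):               return f"authorities({issues})"
def draft(issues, authorities):     return f"draft({issues},{authorities})"
def partner_review(doc):            return f"reviewed({doc})"
def opposing_attack(doc):           return f"attacked({doc})"
def judge_score(doc):               return len(doc)  # stand-in for a judge simulation

def associate_bot(facts, iterations=100):
    """Run the draft through review -> attack -> judge-simulation cycles,
    keeping the best-scoring version, before any human ever sees it."""
    issues = issue_spot(facts)
    current = draft(issues, research(issues))
    best, best_score = current, judge_score(current)
    for _ in range(iterations):
        current = opposing_attack(partner_review(current))
        score = judge_score(current)
        if score > best_score:
            best, best_score = current, score
    return best

motion = associate_bot("client facts", iterations=5)
```

The point of the sketch is that nothing in the control flow is exotic; the hard parts are the quality of the individual stages, not the orchestration.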
AI could expose and reduce judicial bias. Data already shows female litigators win ~10% less often regardless of the judge’s gender. As both sides start using predictive tools, judges will be more accountable and scrutinized — potentially forcing more consistent, data-grounded rulings. Greg noted judges could even use the same data to self-audit their own biases.
Human advocacy and persuasion still matter. Damien used the tobacco cases as a counterexample: no amount of data would have won the day — it took a skilled litigator’s ability to convince a single judge. The panel agreed that while AI levels the playing field on data, uniquely human persuasion and judgment still have a role, especially in high-stakes, novel situations.



