Key Takeaways from LFJ’s Virtual Town Hall: Spotlight on AI & Technology

By John Freund |

On Thursday, February 27th, LFJ hosted a virtual town hall on AI and legal technology. The panel discussion featured Erik Bomans (EB), CEO of Deminor Recovery Services, Stewart Ackerly (SA), Director at Statera Capital, David Harper (DH), co-founder and CEO of Legal Intelligence, and Patrick Ip (PI), co-founder of Theo AI. The panel was hosted by Ted Farrell, founder of Litigation Funding Advisers.

Below are some key takeaways from the discussion:

Everyone reads about AI every day and how it’s disrupting this industry, being used here and being used there. So what I wanted to ask you all to talk about is: what is the use case for AI, specific to the litigation finance business?

PI: There are a couple of core use cases on our end that we hear folks use it for. One is a complementary approach to underwriting – an initial gut take on what the potential case killers are. So: should I actually invest time in human underwriting to look at this case?

The second use case is a last check. Before we’re actually going in to fund, obviously cases are fluid. They’re ever-evolving. They’re changing. So between the first pass and the last check, has anything changed that would stop us from actually doing the funding? And then there’s a third, more novel approach that we’ve gotten a lot of feedback on.

There are 270,000 new lawsuits filed a day. Generally speaking, in order to understand if a lawsuit has any merit, you have to read through all the case documents. That’s very time-consuming to do. As an AI application, we can comb through all those documents, read all those emails, look through social media and digest public information to say: hey, these are the cases that are actually most relevant to your fund. Instead of looking through 50 or 100 of these, here are the top 10 most relevant ones. And we send those to clients on a weekly basis.

I don’t want you to give up your proprietary special sauce, but how are you all trying to leverage these tools to aid you and deliver the kind of returns that LPs want to see?

SA: Where we can make the most effective use of AI or other technologies – whether at the very top of the funnel and what’s coming into it, or deeper down the funnel with a case that we like – is in finding ways to leverage AI to complement our underwriting. We think about it a lot on the origination side: making us more efficient, letting us sift through a larger number of cases more quickly and as effectively as if we had bodies to look through them all, but also helping us find more cases that may be a potential fit.

In terms of the data sources that you rely on, a question we always think about, especially for early-stage cases, is: is there enough data available? For example, if there’s just a complaint on file, is that going to give AI enough to produce a meaningful result?

I think most of the people on this call would tell you duration is in a lot of ways the biggest risk that funders take. So what specific pieces of these cases is AI helping you drill down into, and how are you harnessing the leverage you can access with these tools?

DH: Around 18 months ago, at the beginning of our journey with this use case in law, we were asked by a very big and very well-respected personal injury business in the UK to help them make sense of 37,000 client files they’d settled with insurers on non-fault motor accidents.

And we ran some modeling and built some data-science assets – AI assets. Their view was: if we had more resources, we would do more of the following things, but we’re limited by the number of people we’ve got and the amount we get per file to spend on delivering that file. So we developed AI assets to investigate what insurers across different jurisdictions and different circumstances had settled those nearly 40,000 cases for.

And in partnership with them, we improved their settlement value by 8%, with a corresponding impact on their EBITDA. That’s a firm-level use case: a firm actually using AI to perform a data-science task on its own data to get better predictive analysis. Lawyers were erring on the side of caution; they would go in with a lowball offer because of the impact of getting it wrong if the case went to court after settlement. So for us, in our conversations with financiers and law firms, alignment is key. A funder wants to protect its capital and time – the longer things take, the longer your capital’s out, and the lower the potential returns.

AI can offer a lot of solutions for very specific problems and can be very useful and can reduce the cost of analyzing these cases, but predictive outcome analysis requires a lot of data. And so the problem is, where do you get the data from and how good is the data? How unstructured or structured are the data sets?

I think getting access to the data is one issue. The other one is the quality of the data, of course, that you put into the machine. If you put bad data in a machine, you might get some correlations, but what’s the relevance, right? And that’s the problem that we are facing.

So many cases are settled where you don’t know the outcome. And that’s why you still need the human component. We need doctors to train computers to analyze medical images. We need lawyers and people with litigation experience who can tell a computer whether this is a good case, whether this is a good settlement or a bad settlement. And in the end, if you don’t know the outcome because it’s confidential, someone has to make a call on that. I’m afraid that’s what we have to do, right? Even one litigation funder – or several together – is not going to have enough data on settlements of the same type of claim to build a predictive analytical model.

And so you need to get massive amounts of data where some human elements, some coding is still going to be required, manual coding. And I think that’s a process that we’re going to have to go through.

You can view the full panel discussion here.


A Framework for Measuring Tech ROI in Litigation Finance

This article was contributed by Ankita Mehta, Founder, Lexity.ai - a platform that helps litigation funds automate deal execution and prove ROI.

How do litigation funders truly quantify the return on investment from adopting new technologies? It’s the defining question for any CEO, CTO or internal champion. The potential is compelling: litigation funders using Lexity’s AI-powered workflows have reported ROI figures of up to 285%.

The challenge is that the cost of doing nothing is invisible. Manual processes, analyst burnout, and missed deals rarely appear on a balance sheet — but they quietly erode yield every quarter.

You can’t manage what you can’t measure. This article introduces a pragmatic framework for quantifying the true value of adopting technology solutions: replacing ‘low-value’ manual tasks and processes with AI and freeing up human capital to focus on ‘high-value’ activities that drive bottom-line results.

A Pragmatic Framework for Measuring AI ROI

A proper ROI calculation goes beyond simple time savings. It captures two distinct categories:

  1. Direct Cost Savings – what you save
  2. Increased Value Generation – what you gain
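As a rough sketch of how the two categories combine, assuming the standard net-benefit-over-cost definition of ROI (the dollar figures below are purely illustrative, not Lexity benchmarks):

```python
def roi_pct(direct_savings: float, value_gained: float, tech_cost: float) -> float:
    """Standard ROI: net benefit (savings + gains - cost) over cost, in percent."""
    return (direct_savings + value_gained - tech_cost) / tech_cost * 100

# Illustrative annual figures: $500k direct cost savings, $250k of increased
# value generation, against a $200k technology spend.
print(f"{roi_pct(500_000, 250_000, 200_000):.0f}%")  # 275%
```

The same two inputs map directly onto the ‘Cost’ and ‘Benefit’ sides developed below.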

The ‘Cost’ Side (What You Save)

This is the most straightforward calculation, focused on eliminating “grunt work” and mitigating errors.

Metric 1: Direct Time Savings — Eliminating Manual Bottlenecks 

Start by auditing a single, high-cost bottleneck. For many funds, this is the Preliminary Case Assessment, a process that often takes two to three days of an expert analyst's time.

The calculation here is straightforward. By multiplying the hours saved per case by the analyst's blended cost and the number of cases reviewed, a fund can reveal a significant hard-dollar saving each month.

Consider a fund reviewing 20 cases per month. If a 2-day manual assessment can be cut to 4 hours using an AI-powered workflow, the fund reallocates hundreds of analyst-hours every month. That time is now moved from low-value data entry to high-value judgment and risk analysis.
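The Metric 1 arithmetic above can be sketched as follows; the $150/hour blended cost and the 8-hour workday are illustrative assumptions, not figures from the article:

```python
def monthly_savings(hours_saved_per_case: float,
                    blended_hourly_cost: float,
                    cases_per_month: int) -> float:
    """Hard-dollar saving: reallocated analyst hours times blended cost."""
    return hours_saved_per_case * blended_hourly_cost * cases_per_month

# Example from the text: a 2-day (16-hour) assessment cut to 4 hours,
# across 20 cases per month, at an assumed $150/hour blended cost.
hours_freed = (16 - 4) * 20                 # 240 analyst-hours reallocated
print(monthly_savings(16 - 4, 150.0, 20))   # 36000.0
```

At those assumptions, the 240 reallocated hours translate to $36,000 per month of hard-dollar saving.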

Metric 2: Cost of Inconsistent Risk — Reducing Subjectivity 

This metric is more complex but just as critical. How much time is spent fixing inconsistent or error-prone reviews? More importantly, what is the financial impact of a bad deal slipping through screening, or a good deal being rejected because of a rushed, subjective review?

Lexity’s workflows standardise evaluation criteria and accelerate document/data extraction, converting subjective evaluations into consistent, auditable outputs. This reduces rework costs and helps mitigate hidden costs of human error in portfolio selection.

The ‘Benefit’ Side (What You Gain)

This is where the true strategic upside lies. It’s not just about saving time—it’s about reinvesting that time into higher-value activities that grow the fund.

Metric 3: Increased Deal Capacity — Scaling Without Headcount Growth

What if your team could analyze more deals with the same staff? Time saved from automation becomes time reallocated to new higher value opportunities, dramatically increasing the value of human contributions.

One of the funds working with Lexity has reported a 2x to 3x increase in deal review capacity without a corresponding increase in overhead.

Metric 4: Cost of Capital Drag — Reducing Duration Risk 

Every month a case extends beyond its expected closing, that capital is locked up. It is "dead" capital that could have been redeployed into new, IRR-generating opportunities.

By reducing evaluation bottlenecks and creating more accurate baseline timelines from inception, a disciplined workflow accelerates the entire pipeline. 

This figure can be quantified by considering the amount of capital locked up, the fund's cost of capital, and the length of the delay. This conceptual model turns a vague risk ("duration risk") into a hard number that a fund can actively manage and reduce.
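That quantification can be sketched directly from the three inputs named above; the 12% annual cost of capital and the dollar amounts are illustrative assumptions:

```python
def capital_drag(capital_locked: float,
                 annual_cost_of_capital: float,
                 delay_months: float) -> float:
    """Opportunity cost of capital stuck in a case past its expected close."""
    return capital_locked * annual_cost_of_capital * (delay_months / 12)

# Illustrative: $2M deployed, a 6-month overrun, 12% annual cost of capital.
print(capital_drag(2_000_000, 0.12, 6))  # 120000.0
```

A six-month overrun on a $2M position costs $120,000 under these assumptions — a concrete number a fund can track case by case.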

An ROI Model Is Useless Without Adoption

Even the most elegant ROI model is meaningless if the team won't use the solution. This is how expensive technology becomes "shelf-ware."

Successful adoption is not about the technology; it's about the process. It starts with:

  1. Establishing Clear Goals and Identifying Key Stakeholders: Set measurable goals and a baseline. Identify stakeholders, especially the teams performing the manual tasks – they will be the first to validate efficiency gains.
  2. Targeting "Grunt Work," Not "Judgment": Ask “What repetitive task steals time from real analysis?” The goal is to augment your experts, not replace them.
  3. Starting with One Problem: Don't try to "implement AI." Solve one high-value bottleneck, like Preliminary Case Assessment. Prove the value, then expand.
  4. Focusing on Process Fit: The right technology enhances your workflow; it doesn’t complicate it.

Conclusion: From Calculation to Confidence

A high ROI isn't a vague projection; it’s what happens when a disciplined process meets intelligent automation.

By starting to measure what truly matters—reallocated hours, deal capacity, and capital drag—fund managers can turn ROI from a spreadsheet abstraction into a tangible, strategic advantage.

Burford Capital’s $35 M Antitrust Funding Claim Deemed Unsecured

By John Freund |

In a recent ruling, Burford Capital suffered a significant setback when a U.S. bankruptcy court determined that its funding agreement did not confer secured status.

According to an article from JD Journal, Burford had backed antitrust claims brought by Harvest Sherwood, a food distributor that filed for bankruptcy in May 2025, via a 2022 financing agreement. The capital advance was tied to potential claims worth about US$1.1 billion in damages against meat‑industry defendants.

What mattered most for Burford’s recovery strategy was its effort to treat the agreement as a loan with first‑priority rights. The court, however, ruled the deal lacked essential elements required to create a lien, trust or other secured interest. Instead, the funding was classified as an unsecured claim, meaning Burford now joins the queue of general creditors rather than enjoying priority over secured lenders.

The decision carries major consequences. Unsecured claims typically face a much lower likelihood of full recovery, especially in estates loaded with secured debt. Here, key assets of the bankrupt estate consist of the antitrust actions themselves, and secured creditors such as JPM Chase continue to dominate the repayment waterfall. The ruling also casts a spotlight on how litigation‑funding agreements should be structured and negotiated when bankruptcy risk is present. Funders who assumed they could elevate their status via contractual design may now need to proceed with greater caution.

Manolete Partners PLC Posts Flat H1 as UK Insolvency Funding Opportunity Grows

By John Freund |

The UK‑listed litigation funder Manolete Partners PLC has released its interim financial results for the half‑year ended 30 September 2025, revealing a stable but subdued performance amid an expanding insolvency funding opportunity.

According to the company announcement, total revenue fell to £12.7 million (down 12% from £14.4 million a year earlier), while realised revenue slipped to £14.0 million (down 7% from £15.0 million). Operating profit dropped sharply to £0.1 million, compared to £0.7 million in the prior period—though excluding fair value write‑downs tied to the company’s truck‑cartel portfolio, underlying profit stood at £2.0 million.

The business completed 146 cases during the period (up 7% year‑on‑year) and signed 146 new case investments (up nearly 16%). Live cases rose to 446 from 413 a year earlier, and the total estimated settlement value of new cases signed in the period was claimed to be 31% ahead of the prior year. Cash receipts were flat at about £14.5 million, while net debt improved to £10.8 million (down from £11.9 million). The company’s cash balance nearly doubled to £1.1 million.

In its commentary, Manolete emphasises the buoyant UK insolvency backdrop — particularly the rise of Creditors’ Voluntary Liquidations and HMRC‑driven petitions — as a tailwind for growth. However, the board notes the first half was impacted by a lower‑than‑average settlement value and a “quiet summer”, though trading picked up in September and October. The firm remains confident of stronger average settlement values and a weighting of realised revenues toward the second half of the year.