Trending Now

What Lloyd v. Google Means for UK Class Actions and Litigation Funders

The Lloyd v. Google claim has given rise to some thought-provoking questions:
  • Has Google breached its duties as a data controller? If so, have class members of the ensuing collective action suffered quantifiable damages?
  • How exactly should “same interest” be determined in a case regarding the misuse of data?
  • Do individual members of a class have to demonstrate material harm in order to receive recompense?
In the following article, we explore the answers to these and other questions arising from Case UKSC 2019/0213, better known as Lloyd v. Google.

What Exactly Happened?

Richard Lloyd sought to bring a claim against tech giant Google, asking for compensation pursuant to section 13 of the Data Protection Act 1998. The accusation involved the use of cookies in a ‘Safari workaround’ that collected user data and turned it into metrics used to target advertising at users. This alleged misuse ostensibly impacted over four million iPhone users in England and Wales, whose data, it was claimed, had been unlawfully accessed by Google in breach of the DPA 1998. Lloyd sued not only on his own behalf, but on behalf of others whose data was treated similarly.

Google fought the suit, arguing that class members could not demonstrate material harm from the misuse of data. In a case like this one, ‘material harm’ could include monetary losses or mental anguish stemming from the illegal harvesting or dissemination of data. Lloyd’s claim was backed by Therium, a prominent litigation funder specializing in tech-related cases. Lloyd’s legal team argued that the ‘same interest’ requirement had been satisfied, and that awarding all class members the same sum in damages was reasonable, without a need to delve into the personal circumstances of every individual claimant.

The Decision

Initially, the High Court ruled in favor of Google. When the Court of Appeal reversed that ruling, Google appealed to the Supreme Court. In the majority decision, Lord Leggatt determined the following:
  • The determination of “damage” must involve verifiable, material damage, such as financial loss or mental anguish. The mere illegality of an action is not enough to necessitate financial recompense.
  • Damages must be demonstrated.
Why Are the Facts Here So Important?

Obviously, there is reason to be concerned when a tech company in control of an extremely large amount of user data is accused of illegally managing that data. In this instance, Google allegedly sold or used user data for commercial, money-making purposes, without the knowledge or consent of its users. One could argue that any user who utilized Google on an Apple iPhone has reason to be dismayed (indeed, a similar case settled before going to trial).

The case also illustrates the importance of opt-in versus opt-out models, as well as what can happen when the majority of class members choose to abstain from involvement in the case proceedings. Under Lord Leggatt’s ruling, an opt-out model is not feasible in any instance requiring that class members be able to show tangible losses. Ultimately, tech giants like Google are required to abide by their own user agreements. However, users must prove suffering beyond the violation of their right to privacy. Ironically, one area of doubt in such a case arises over how shares of a payout (to litigation funders, for example) can properly be calculated without the consent of all class members. Many class members in an opt-out proceeding may not know the details of the case; indeed, they may be entirely unaware of the claim, or of how any proceeds are to be divided.

What Do These Developments Mean for Litigation Funders and Potential Claimants?

The idea that a claimant must demonstrate damages in order to receive compensation is neither new nor controversial. But it does put a damper on collective actions with high class member counts, especially in cases against huge companies like Visa/Mastercard, Apple, or Google. Many would argue that it is simply not feasible to collect information about losses from millions of potential claimants. So, while this line of thinking is reasonable under English law, it may well discourage litigation funders from taking on cases requiring that all class members demonstrate individual losses. This, in turn, will make the pursuit of justice more difficult for potential members of a wronged class.

For litigation funders, the difference between one potential claimant in a case and the millions who could have been class members in Lloyd v. Google is significant. While we know that funders ultimately back cases to increase access to justice and give claimants a day in court, we also know that this relies on investors, whose motivation to invest is profit-driven. In short, litigation finance only works in the long term when it is financially advantageous to investors.

The question of privacy rights is a tricky one. Having one’s privacy violated is, as the phrase suggests, a violation. But as it typically has no financial component beyond the negative feelings associated with it, it is unlikely to serve as a demonstrable loss in a case involving user data (unless, of course, a further demonstrable loss can be proven). At the same time, the claim here was that Google misused user data, intentionally and without consent, with an eye toward financial gain. Surely it makes sense that Google should share some of that income with the users whose data was breached? Not according to the UK Supreme Court, apparently.

A Missed Opportunity

Had Lloyd v. Google succeeded in the way Lloyd intended, it could have changed the way class actions in data cases are handled by the courts.
Essentially, opt-out class actions could have flourished, as individual class members would not have been required to demonstrate financial damages. This has particular relevance to data cases, because when data companies use information in ways that are not in keeping with their own terms of service, users may not be damaged financially. But this lack of demonstrable damages does not necessarily mean a) that data companies have no moral obligation to offer users recompense, or b) that users are not deserving of a payout when they are wronged.

Had Lloyd’s legal team instead used a bifurcated approach to the proceedings, a smaller opt-in class could perhaps have enabled a stronger case through the gathering of evidence, specifically evidence of damages. Similarly, a Group Litigation Order (GLO), despite what some see as high administrative costs, would have better determined eligibility for class members. This, in turn, would have allowed for a better test of the case’s merits.

In Conclusion

Lloyd v. Google demonstrates the importance of several aspects of class action litigation, including how the choice between opt-in and opt-out affects the collection of, and the ability to bring, evidence of damages. This promises to be a factor in future tech cases, not just in the UK, but globally. Will the failure to secure damages for those whose data was misused embolden Big Tech? Will it serve as a warning? Could it discourage litigation funders from backing such cases? We’ll have to wait and see. For now, it is clear that Lloyd v. Google has left its mark on the UK legal and litigation funding worlds, and on Big Tech as a whole.

Commercial


A Framework for Measuring Tech ROI in Litigation Finance

This article was contributed by Ankita Mehta, Founder of Lexity.ai, a platform that helps litigation funds automate deal execution and prove ROI.

How do litigation funders truly quantify the return on investment from adopting new technologies? It’s the defining question for any CEO, CTO or internal champion. The potential is compelling: litigation funders using Lexity’s AI-powered workflows have reported ROI figures of up to 285%.

The challenge is that the cost of doing nothing is invisible. Manual processes, analyst burnout, and missed deals rarely appear on a balance sheet — but they quietly erode yield every quarter.

You can’t manage what you can’t measure. This article introduces a pragmatic framework for quantifying the true value of adopting technology solutions: replacing ‘low-value’ manual tasks and processes with AI and freeing up human capital to focus on ‘high-value’ activities that drive bottom-line results.

A Pragmatic Framework for Measuring AI ROI

A proper ROI calculation goes beyond simple time savings. It captures two distinct categories:

  1. Direct Cost Savings – what you save
  2. Increased Value Generation – what you gain
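As a rough illustration of how these two categories roll up into a single figure, here is a minimal sketch in Python; the variable names and the simple net-benefit-over-cost definition of ROI are assumptions for illustration, not a prescribed methodology.

```python
def roi_percent(direct_cost_savings: float,
                increased_value_generated: float,
                technology_cost: float) -> float:
    """Illustrative ROI: net benefit for the period, relative to the technology's cost."""
    net_benefit = direct_cost_savings + increased_value_generated - technology_cost
    return 100 * net_benefit / technology_cost

# Hypothetical annual figures for a mid-sized fund
print(roi_percent(direct_cost_savings=250_000,
                  increased_value_generated=400_000,
                  technology_cost=170_000))  # ~282%
```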

The ‘Cost’ Side (What You Save)

This is the most straightforward calculation, focused on eliminating “grunt work” and mitigating errors.

Metric 1: Direct Time Savings — Eliminating Manual Bottlenecks 

Start by auditing a single, high-cost bottleneck. For many funds, this is the Preliminary Case Assessment, a process that often takes two to three days of an expert analyst's time.

The calculation here is straightforward. By multiplying the hours saved per case by the analyst's blended cost and the number of cases reviewed, a fund can reveal a significant hard-dollar saving each month.

Consider a fund reviewing 20 cases per month. If a 2-day manual assessment can be cut to 4 hours using an AI-powered workflow, the fund reallocates hundreds of analyst-hours every month. That time is now moved from low-value data entry to high-value judgment and risk analysis.
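A minimal sketch of that arithmetic, using the example figures above (a 2-day assessment cut to 4 hours, 20 cases per month) and a hypothetical blended hourly cost:

```python
hours_per_manual_assessment = 16      # roughly two working days of analyst time
hours_per_ai_assisted_assessment = 4
cases_per_month = 20
blended_hourly_cost = 120             # hypothetical fully loaded analyst cost per hour

hours_saved_per_case = hours_per_manual_assessment - hours_per_ai_assisted_assessment
monthly_hours_reallocated = hours_saved_per_case * cases_per_month
monthly_hard_dollar_saving = monthly_hours_reallocated * blended_hourly_cost

print(monthly_hours_reallocated, monthly_hard_dollar_saving)  # 240 hours, 28800 dollars
```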

Metric 2: Cost of Inconsistent Risk — Reducing Subjectivity 

This metric is more complex but just as critical. How much time is spent fixing inconsistent or error-prone reviews? More importantly, what is the financial impact of a bad deal slipping through screening, or a good deal being rejected because of a rushed, subjective review?

Lexity’s workflows standardise evaluation criteria and accelerate document/data extraction, converting subjective evaluations into consistent, auditable outputs. This reduces rework costs and helps mitigate hidden costs of human error in portfolio selection.

The ‘Benefit’ Side (What You Gain)

This is where the true strategic upside lies. It’s not just about saving time—it’s about reinvesting that time into higher-value activities that grow the fund.

Metric 3: Increased Deal Capacity — Scaling Without Headcount Growth

What if your team could analyze more deals with the same staff? Time saved through automation becomes time reallocated to new, higher-value opportunities, dramatically increasing the value of human contributions.

One of the funds working with Lexity has reported a 2x to 3x increase in deal review capacity without a corresponding increase in overhead.
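One way to see where a multiple like that can come from, assuming the team's total review hours stay fixed and experts still weigh in on every case, is sketched below with illustrative numbers rather than reported figures.

```python
team_hours_per_month = 320        # e.g. two analysts dedicated to deal review
hours_per_case_manual = 16        # fully manual preliminary assessment
hours_per_case_with_ai = 8        # automated extraction plus expert review and judgment

manual_capacity = team_hours_per_month // hours_per_case_manual
assisted_capacity = team_hours_per_month // hours_per_case_with_ai

print(manual_capacity, assisted_capacity)  # 20 vs 40 cases per month: 2x capacity, same headcount
```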

Metric 4: Cost of Capital Drag — Reducing Duration Risk 

Every month a case extends beyond its expected closing, that capital is locked up. It is "dead" capital that could have been redeployed into new, IRR-generating opportunities.

By reducing evaluation bottlenecks and creating more accurate baseline timelines from inception, a disciplined workflow accelerates the entire pipeline. 

This figure can be quantified by considering the amount of capital locked up, the fund's cost of capital, and the length of the delay. This conceptual model turns a vague risk ("duration risk") into a hard number that a fund can actively manage and reduce.
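A minimal sketch of that conceptual model, assuming a simple opportunity-cost view (capital locked up, times the fund's annual cost of capital, prorated over the delay); all inputs are hypothetical.

```python
def capital_drag_cost(capital_locked_up: float,
                      annual_cost_of_capital: float,
                      delay_months: float) -> float:
    """Opportunity cost of capital stuck in a case past its expected close."""
    return capital_locked_up * annual_cost_of_capital * (delay_months / 12)

# Hypothetical: $5M deployed, 15% cost of capital, case runs six months past plan
print(capital_drag_cost(5_000_000, 0.15, 6))  # 375000.0 of potential yield foregone
```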

An ROI Model Is Useless Without Adoption

Even the most elegant ROI model is meaningless if the team won't use the solution. This is how expensive technology becomes "shelf-ware."

Successful adoption is not about the technology; it's about the process. It starts by:

  1. Establishing Clear Goals and Identifying Key Stakeholders: Set measurable goals and a baseline. Identify stakeholders, especially the teams performing the manual tasks; they will be the first to validate efficiency gains.
  2. Targeting "Grunt Work," Not "Judgment": Ask “What repetitive task steals time from real analysis?” The goal is to augment your experts, not replace them.
  3. Starting with One Problem: Don't try to "implement AI." Solve one high-value bottleneck, like Preliminary Case Assessment. Prove the value, then expand.
  4. Focusing on Process Fit: The right technology enhances your workflow; it doesn’t complicate it.

Conclusion: From Calculation to Confidence

A high ROI isn't a vague projection; it’s what happens when a disciplined process meets intelligent automation.

By starting to measure what truly matters—reallocated hours, deal capacity, and capital drag—fund managers can turn ROI from a spreadsheet abstraction into a tangible, strategic advantage.


Burford Capital’s $35 M Antitrust Funding Claim Deemed Unsecured

By John Freund

In a recent ruling, Burford Capital suffered a significant setback when a U.S. bankruptcy court determined that its funding agreement did not qualify for secured status.

According to an article from JD Journal, Burford had backed antitrust claims brought by Harvest Sherwood, a food distributor that filed for bankruptcy in May 2025, via a 2022 financing agreement. The capital advance was tied to potential claims worth about US$1.1 billion in damages against meat‑industry defendants.

What mattered most for Burford’s recovery strategy was its effort to treat the agreement as a loan with first‑priority rights. The court, however, ruled the deal lacked essential elements required to create a lien, trust or other secured interest. Instead, the funding was classified as an unsecured claim, meaning Burford now joins the queue of general creditors rather than enjoying priority over secured lenders.

The decision carries major consequences. Unsecured claims typically face a much lower likelihood of full recovery, especially in estates loaded with secured debt. Here, key assets of the bankrupt estate consist of the antitrust actions themselves, and secured creditors such as JPM Chase continue to dominate the repayment waterfall. The ruling also casts a spotlight on how litigation-funding agreements should be structured and negotiated when bankruptcy risk is present. Funders who assumed they could elevate their status via contractual design may now need to exercise greater caution and price in greater risk.

Manolete Partners PLC Posts Flat H1 as UK Insolvency Funding Opportunity Grows

By John Freund

The UK‑listed litigation funder Manolete Partners PLC has released its interim financial results for the half‑year ended 30 September 2025, revealing a stable but subdued performance amid an expanding insolvency funding opportunity.

According to the company announcement, total revenue fell to £12.7 million (down 12% from £14.4 million a year earlier), while realised revenue slipped to £14.0 million (down 7% from £15.0 million). Operating profit dropped sharply to £0.1 million, compared to £0.7 million in the prior period, though excluding fair value write-downs tied to the company's truck-cartel portfolio, underlying profit stood at £2.0 million.

The business completed 146 cases during the period (up 7% year-on-year) and signed 146 new case investments (up nearly 16%). Live cases rose to 446 from 413 a year earlier, and the total estimated settlement value of new cases signed in the period was claimed to be 31% ahead of the prior year. Cash receipts were flat at about £14.5 million, while net debt improved to £10.8 million (down from £11.9 million). The company's cash balance nearly doubled to £1.1 million.

In its commentary, Manolete emphasises the buoyant UK insolvency backdrop — particularly the rise of Creditors’ Voluntary Liquidations and HMRC‑driven petitions — as a tailwind for growth. However, the board notes the first half was impacted by a lower‑than‑average settlement value and a “quiet summer”, though trading picked up in September and October. The firm remains confident of stronger average settlement values and a weighting of realised revenues toward the second half of the year.