On December 6th, 2023, Litigation Finance Journal produced its final event of the year: Legal Tech and LitFin: How Will Tech Impact Litigation Finance Globally?
Tets Ishikawa moderated an insightful and pertinent discussion on the use of legal tech in the litigation finance industry. Panelists included Nick Rowles-Davies (NRD), Founder of Lexolent, Isabel Yang (IY), Founder of Arbilex, and Joshua Masia (JM), Co-Founder and CEO of Dealbridge.ai.
Below are some key takeaways from the event (answers have been truncated for the purpose of this article):
Legal tech is quite a broad term. What does the legal tech landscape mean to you, and how does it fit into your business?
IY: We’re in a very exciting time in legal tech. Where I sit, the underlying technology I primarily deal with is artificial intelligence (AI). The major advances in AI have come from models trained on language as the source data, and these text-based advancements hold great significance for the practice of law.
At Arbilex, we are taking advantage of large language models (LLMs) to reduce the cost of data acquisition. When we take court briefs and other unstructured data and turn them into structured data, the cost of that process has dropped dramatically because of ChatGPT and the latest LLMs.
On the flipside, because AI has become so advanced, a lot of off-the-shelf solutions have tended towards being black boxes, so explaining the model’s output has become a more challenging task. At Arbilex, we have always focused on building the most stable AI, and on how we can explain a particular prediction to our clients. We are investing more and more of our time and human capital into building that bridge between the AI and its use case.
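As a rough illustration of the unstructured-to-structured step Yang describes (not Arbilex’s actual pipeline), the sketch below prompts a chat-style LLM to pull a handful of fields out of a filing excerpt. It assumes the OpenAI Python client with an API key in the environment; the model choice, excerpt, and field names are invented for the example.

```python
# Minimal sketch: turn an unstructured filing excerpt into a structured record.
# Assumes the OpenAI Python client (openai>=1.0) and OPENAI_API_KEY set.
import json
from openai import OpenAI

client = OpenAI()

FILING_EXCERPT = """
Claimant seeks USD 4.2m in damages for breach of a 2019 supply agreement.
The tribunal is seated in London; Respondent disputes jurisdiction.
"""

prompt = (
    "Return a JSON object with the fields claim_amount, currency, "
    "dispute_type, seat, and jurisdiction_challenged, extracted from the "
    "text below.\n\n" + FILING_EXCERPT
)

response = client.chat.completions.create(
    model="gpt-4o-mini",                      # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
    response_format={"type": "json_object"},  # request parseable JSON
)

# One structured record per filing, ready for a database or scoring model.
record = json.loads(response.choices[0].message.content)
print(record)
```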
How relevant has legal tech been, and will it be, in the growth of the litigation finance sector?
JM: When we look at scaling operational processes, a lot of times we have to put our traditional computer science hat on and ask, ‘how have we historically solved these problems and what has changed in the past several years to evolve this landscape?’
A lot of the emphasis with technology has been about normalizing and standardizing how we look at these data sets. There’s a big issue with that approach and with what existing platforms have been doing: this is a very human business. Because of that, a lot of ad hoc requests get mixed in. With gen-AI, we’re getting to a point where you don’t have to over-structure your sales or diligence process. Maybe the first few dozen questions you’re asking of a given data set are the same, but eventually we want to be able to ask questions that are specific to a particular deal. Being able to call audibles and run ad-hoc analyses of data sets was really hard to do before generative AI.
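A minimal sketch of the ad-hoc pattern Masia describes (not Dealbridge.ai’s product): instead of forcing every question through a fixed schema, an analyst passes the deal documents and a one-off question straight to an LLM. The helper name, model choice, and OpenAI client are assumptions for illustration.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def ask_deal_question(documents: list[str], question: str) -> str:
    """Answer a one-off diligence question grounded only in the deal documents."""
    context = "\n\n".join(documents)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Answer using only the documents provided. "
                        "Say so if the answer is not in them."},
            {"role": "user",
             "content": f"Documents:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content


# The first few dozen questions on a deal may be standard; the point is that
# a one-off "audible" like this needs no new pipeline or schema change.
print(ask_deal_question(
    ["Claimant seeks USD 4.2m for breach of a 2019 supply agreement. "
     "Respondent disputes jurisdiction, citing an expired arbitration clause."],
    "Has the respondent challenged jurisdiction, and on what basis?",
))
```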
NRD: Legal tech is becoming increasingly relevant, and its real effect and usefulness have grown over time. It makes repetitive tasks easier and provides insights that are not always readily apparent. In terms of the specific use of AI to triage incoming matters, we sort matters into different buckets: is this something we simply aren’t going to assess, will it be sent back for further information, does it fit something we would fund under our original mandate, or does it go on the platform for others to look at and invest in that particular matter?
AI is having an increasing impact and is being used with more regularity by litigation funders, who are finding they can increase efficiency and get to a ‘yes’ much more quickly.
A lot of lawyers would say: this is fascinating, but ultimately this is a human industry. Every circumstance will be different, because outcomes come down to the behavior of the human beings involved at the time. Is there a way that AI can capture behavioral dynamics?
IY: In general, we need to have realistic expectations of AI. That starts with recognizing that what humans are uniquely good at is not necessarily what AI is good at. AI is really good at pattern-spotting. If I train the model to look for recurring features of particular cases (say, specific judges in specific jurisdictions, coming up against a specific type of argument or case), then AI has a very good ability to assign a weighting to each attribute. Humans can instinctively arrive at the same place, but we can’t really quantify the impact or magnitude of a specific attribute.
The other thing we need to be realistic about is that cases are decided not just on patterns, but on case-specific fact attributes (the credibility of a witness, the availability of key evidence). If you train AI to look for things that are specific to one case, you end up overfitting the model, meaning your AI becomes so good at looking for one specific variable that it loses its general predictive power over a large pool of cases.
What I would caution attorneys is: use AI to get a second opinion on things you believe follow a pattern. In arbitration, attorneys might use AI on tribunal matters, such as tribunal composition, where models are far better at homing in on patterns. But things like ‘do we want to produce this witness versus another witness’ are not something we should expect AI to predict.
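Yang’s two points here, attribute weighting and overfitting, can be illustrated with a toy example on synthetic data (not Arbilex’s model or data): a simple classifier assigns explicit, inspectable weights to recurring attributes, while piling on case-specific features lets it fit the training cases almost perfectly at the expense of accuracy on new ones. All feature names and numbers below are invented for the illustration.

```python
# Toy illustration on synthetic data: recurring patterns generalize,
# case-specific features encourage memorization (overfitting).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_cases = 120

# Recurring attributes a funder sees across many cases, e.g. a judge's
# historical grant rate and whether a particular argument type is raised.
judge_rate = rng.uniform(0.2, 0.8, n_cases)
argument = rng.integers(0, 2, n_cases)
X_pattern = np.column_stack([judge_rate, argument])

# Outcomes driven (noisily) by those recurring patterns.
logit = 4.0 * (judge_rate - 0.5) + 1.0 * argument - 0.5
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# "Case-specific" attributes simulated as noise: one-off facts that do not
# repeat across cases and carry no generalizable signal.
X_specific = rng.normal(size=(n_cases, 300))

for label, X in [("recurring patterns only", X_pattern),
                 ("patterns + case-specific noise", np.hstack([X_pattern, X_specific]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=1)
    model = LogisticRegression(C=1000, max_iter=10000).fit(X_tr, y_tr)
    print(f"{label:35s} train={model.score(X_tr, y_tr):.2f}  test={model.score(X_te, y_te):.2f}")

# The pattern-only model exposes explicit weights per attribute (model.coef_),
# the quantified "impact or magnitude" humans can only sense instinctively,
# while the richer model fits its training cases closely but generalizes worse.
```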
For the full panel discussion, please click here.