Following our recent game-changing work with ICLR, we thought we’d introduce one of the key members of the team who worked on the project and delve into how 67 Bricks do things just a little differently. Meet Rhys Parsons, Technical Lead.

Hi Rhys, can you start by telling us about yourself and your role at 67 Bricks?

Hello, I’m Rhys Parsons and I’ve been working for 67 Bricks for seven years. Previously I’ve worked in various fields, including Web Analytics, dealing with large databases and presenting the data in a meaningful way; and radio and telephony communications, primarily for the emergency services, trying to get fire engines to the right place as quickly as possible. At 67 Bricks my role is to provide a technical architecture for the projects I’m heading, to lead the team in implementing it, and to ensure the overall quality of the code we produce. Essentially, I am the technical expert on my projects. One of the nice things about working for 67 Bricks is the variety of projects that we get to work on, from content delivery systems to tools for standards agencies, recommendation features and similarity features — anything that helps our customers make the most of their (mostly) document-based data. Every customer has their own unique needs, and working closely with them to find solutions provides great variety!

The most recent work you’ve been a part of was for ICLR.4 – in particular the new ‘Case Genie’ functionality. Can you briefly describe what the new functionality does?

Since 1865, ICLR (the Incorporated Council of Law Reporting) have published Case Reports of the most important cases in English and Welsh law. In Common Law legal systems, where a judge’s legal judgment can be used as precedent for future judgments, having access to indexed Case Reports is essential.

ICLR made their Case Reports and Indices available online over a decade ago. ICLR.online makes it easy to see which important cases have been Affirmed or Overruled (there are, in fact, seventeen variations on these!). The contents of the Case Reports can also be searched, and this has been an important resource for judges, barristers, solicitors and magistrates.

What Case Genie adds to this is the ability to prime the standard search with a starting document. As part of preparing a case, solicitors and barristers create formal legal documents (such as skeleton arguments) that summarise the case and its arguments. Such documents can be uploaded to ICLR.4, either whole or as selected extracts, to find existing cases that are conceptually similar. These similar cases can be further expanded to include linked cases (important cases that cite, or are cited by, the similar cases). The results can then be filtered using ICLR.4’s more traditional search facilities. The aim is to help the lawyer find cases that they might not otherwise have considered.

In addition, Case Genie finds cases cited in the uploaded document and lists them with their subsequent treatment (Approved, Overruled, etc.). This can save the lawyer significant time.

Superadded to those, Case Genie facilitates finding similar paragraphs. When reading the judgment section of a Case Report, the user can click on a paragraph to find paragraphs from other judgments that are conceptually similar.

Natural language processing is a really interesting tool and one which requires careful planning to avoid bias and other issues – were there any idiosyncrasies in the legal language the tool would need to parse that had to be considered? How did you and the team deal with that?

Machine Learning (ML) broadly falls into two categories: supervised and unsupervised. Supervised learning uses example data that teach the ML algorithm how to predict a given outcome, and it has all of the pitfalls of bias: usually the data come from real-world decisions made by people, so they embody the biases of those decisions. Unsupervised learning, however, is not based on previous human decisions. The models created by unsupervised learning map connections between data, but do not try to predict a specific variable. Case Genie uses unsupervised Machine Learning to build document embeddings for each Case Report and judgment transcript. The only biases it embodies are those of the judges in their language (but not their verdicts or sentences), which are actually what we are most interested in! Language use changes over time, so two cases from the same period are more likely to have high similarity if they cover similar topics. This is beneficial, because it means more recent cases are likely to match new case material, whereas important older cases will still surface in the results when added as linked results.
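The embeddings themselves are learned by an unsupervised model, but the underlying idea of measuring conceptual similarity between documents can be sketched with plain bag-of-words vectors and cosine similarity. This is a deliberate simplification, and the snippets below are hypothetical stand-ins, not real judgment text:

```python
import math
from collections import Counter

def vectorise(text: str) -> Counter:
    # Lower-case and split into word tokens (a stand-in for real tokenisation).
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    # Dot product over shared terms, divided by the vector magnitudes.
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical snippets standing in for judgment texts.
doc_a = "the defendant appealed against the sentence imposed by the court"
doc_b = "the appellant challenged the sentence the court had imposed"
doc_c = "the contract was frustrated by supervening events"

va, vb, vc = vectorise(doc_a), vectorise(doc_b), vectorise(doc_c)
print(cosine_similarity(va, vb))  # relatively high: same topic
print(cosine_similarity(va, vc))  # relatively low: different topic
```

A learned embedding model captures conceptual similarity far better than raw word counts (it knows "appellant" and "defendant" are related), but the ranking-by-cosine step works the same way.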

Much of the work when using Natural Language Processing (NLP) is in preparing the text. The goal is to make tokens in the text meaningful. Legal cases have many citation forms (e.g. “[2021] 1 WLR 12”, “3 Law Rep 34”, “[2021] EWCA Crim 1165”), partly because they have changed over time, and partly because some cases cite foreign cases. We also need to be able to find complete case references, which generally include the case name, one or more citations and sometimes a court. The challenge lies in the vast variation across the corpus.
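To give a flavour of that kind of text preparation, here is a sketch of matching a few of those citation shapes with a regular expression. This is not ICLR’s actual grammar — a real system needs a much larger catalogue of report-series abbreviations and edge cases — just an illustration of the pattern-matching involved:

```python
import re

# A deliberately simplified pattern covering two common shapes:
#   "[year] volume? SERIES page"  e.g. "[2021] 1 WLR 12", "[2021] EWCA Crim 1165"
#   "volume SERIES page"          e.g. "3 Law Rep 34"
CITATION = re.compile(
    r"(?:\[(?P<year>\d{4})\]\s+)?"    # optional bracketed year
    r"(?:(?P<volume>\d+)\s+)?"        # optional volume number
    r"(?P<series>[A-Z][A-Za-z.]*(?:\s+[A-Z][A-Za-z.]*)*)\s+"  # report series
    r"(?P<page>\d+)"                  # page or case number
)

text = ("See Smith v Jones [2021] 1 WLR 12 and R v Brown [2021] EWCA Crim 1165; "
        "compare 3 Law Rep 34.")
for m in CITATION.finditer(text):
    print(m.group(0))
# prints: [2021] 1 WLR 12
#         [2021] EWCA Crim 1165
#         3 Law Rep 34
```

Even this toy version shows why the problem is fiddly: the series name, volume and year are all optional or variable depending on the citation style and era.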

One goal of using NLP and ML in this project is to surface surprising cases. We have worked very closely with ICLR to validate the results of Case Genie, because it requires a very high level of domain knowledge to determine whether a result is a surprising but good result, or just noise. A related challenge has been to explain what the Machine Learning aspects of the project are actually doing, without it sounding like magic! At the same time, people’s expectations of AI systems are often overblown by popular discourse and sci-fi. It occurred to me, while considering these questions, that AI might better stand for Artificial Intuition. Then, in one of those strange coincidences, I read exactly that in Douglas R. Hofstadter’s 1979 book Gödel, Escher, Bach: an Eternal Golden Braid. I think the important thing to take away is that Case Genie is driven by an intuition model; and like all intuition, it can be both brilliant and stupid. It’s a little like having a crazy genius in your computer whose results can never be quite explained. This inability to directly explain the results of an AI model is a problem the whole industry is currently struggling with.

The challenge going forward is to make the genius even more inspired!

There must be a lot of technical challenges creating a tool like this for an industry that has such a need for data privacy – how did you and the team approach that?

Privacy was highlighted as a concern very early in the project by practising barristers whom ICLR consulted. The architecture of ICLR.4 is therefore built around the need to secure sensitive legal information in ongoing cases. Uploaded document data, and any information derived from it, is never stored unencrypted. Two keys are required to decrypt the data: one is stored in the system database and is unique to each user; the other is transient, created by the user’s browser for each session. As soon as the original document has been processed, it is deleted. Derived ephemeral documents are deleted as soon as the initial processing pipeline has completed. The remaining data, required to show the user the results of the processing, is available only for the duration of the user’s session; if they close their browser and log in afresh, they will not be able to access even their own results. That’s how seriously we have taken security!
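Conceptually, the two-key scheme might look like the sketch below. The names are hypothetical and the toy XOR stream stands in for a proper authenticated cipher (such as AES-GCM), which is what a real implementation would use; the point is simply that neither key alone can recover the data:

```python
import hashlib
import secrets

def derive_key(db_key: bytes, session_key: bytes) -> bytes:
    # Combine the per-user key (stored in the database) with the transient
    # per-session key (created by the browser); neither alone is sufficient.
    return hashlib.sha256(db_key + session_key).digest()

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy counter-mode keystream for illustration only -- NOT secure.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

db_key = secrets.token_bytes(32)       # unique per user, kept server-side
session_key = secrets.token_bytes(32)  # transient, per browser session

ciphertext = xor_stream(derive_key(db_key, session_key), b"skeleton argument")

# With both keys the data round-trips; discard the session key and it is gone.
print(xor_stream(derive_key(db_key, session_key), ciphertext))
```

Because the session key lives only in the browser, closing the session effectively destroys the ability to decrypt, which matches the behaviour described above.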

The machines processing the data are also protected from unauthorised access: it is impossible to connect to them directly, and they are further secured by public/private key authentication.

Were there any other technical issues that arose while working on something so innovative? Or new things you had to create to achieve this?

One thing I wanted to achieve from the beginning was to create a framework for a system that would be adaptable and expandable. We have created a processing pipeline that is secure, can be built upon and is scalable.

Scalability is essential for a system like this because NLP can be very processor intensive. We also have some large data indices that sit in memory so that they are blisteringly fast. We adapted the architecture to fit the changing needs as the system was built. It surprised me just how much data we ended up with! The end result is that the system can easily be scaled out to support a larger number of users.

In my opinion, the tools around building scalable architectures of this sort are still quite immature. This may be because the rate of change of cloud-based offerings is so high and the features they offer are incredibly rich. The challenge, then, has been to write the configuration for maintaining such a system when the project team’s knowledge was limited. Rich Brown, our Head of Technical Delivery, was indispensable in solving these challenges.

One surprise for me was that I had to write some C++ code — something I hadn’t done for over twenty years!

At the beginning of the project, I wanted the project to be a starting point, not a final goal. We have put ICLR at the cutting edge of legal tech, and I am hoping that we have the framework to keep them there!

Finally, what advice would you give anyone embarking on a similar project?

I was in a good position on this project because I’d been working closely with ICLR for several years already. Some advice is common to all software projects, but I think it is worth highlighting for ML projects too:

  • Know your customer and what they are trying to achieve. With AI and ML, it’s easy for customers to get excited by the hype without understanding what is actually possible. Thankfully, Daniel Hoadley, who was at ICLR at the start of the project, had already experimented with the kind of Machine Learning we use.
  • Understand the data. You need to know what you’ve got to work on, that it’s reasonable and consistent, and what you need to do to process it.
  • Get feedback from domain experts. As I say, only an expert can tell you whether a result is intuitive or noise. I built a UI at the beginning of the project purely so that ICLR could evaluate the AI models I had trained.
  • Understand how the technology works. It’s not enough just to read how to get something out of a particular tool. You need to know how the whole thing fits together, even if you don’t understand all the maths!
  • Expect to iterate, refine and re-build the AI model many times.
  • Experiment! (where time permits)
  • Read about what other people are doing. A lot!