AI and the Justice System

Presenter:
Amjad Karim
Guests:
Cari Hyde-Vaamonde
Series:

AI in the Real World

Episode: 001
Welcome to AI in the Real World. In this captivating podcast, we delve into the exciting and transformative world of Artificial Intelligence and explore its practical implementations in our daily lives. Join us as we bring you thought-provoking conversations with experts, innovators, and industry pioneers who are leveraging AI to make a tangible impact, solve real-world challenges, and shape the future.
RECORDED April 27, 2023

AI and the Justice System

Excerpt:
The legal system is close to a crisis of confidence. Notice the phrase "crisis of confidence". This is not just a challenge of processing a backlog of cases but ensuring the public feels the system is doing what it is meant to do: deliver justice.
Guest Bios:

Cari Hyde-Vaamonde

Fascinated by the potential for code and AI to reform how law and justice function, Cari Hyde-Vaamonde is an experienced lawyer and court advocate. Having practised in diverse fields including technology, Cari became increasingly interested in systematic analysis. Her focus on research in the field recently culminated in a UKRI 4-year award to research the impacts of AI in justice settings at King's College London, where she is also a Visiting Lecturer. She is engaged in several interdisciplinary collaborations and presented research at the International Conference on Artificial Intelligence and Law in 2021.

Amjad (00:03):
Welcome to the Keen AI Podcast. I’m Amjad Karim. I’m interested in understanding how AI can be applied in the real world, but also its social and societal implications. Today I’m talking to Cari Hyde-Vaamonde. She’s an experienced lawyer and court advocate. She’s also an academic at King’s College London and an enrichment student at the Turing Institute. Her research focuses on the potential for code and AI to reform how law and justice are delivered. We talk about her work and how you could trial AI in a justice setting.

(00:43):
Welcome to the podcast. Let’s talk about what you do. It’s quite interesting because you’re a lawyer entering the AI research space, and it’s an area that can be quite controversial. So it’s really interesting to understand what you’re doing, why you think the work is important, and what’s relevant about it.

Cari (01:03):
Thank you very much for having me on. I’m at King’s College London, researching how algorithms and AI can be used in judicial decision making. So I’m looking at how one might replace or augment parts, or whole elements, of the judicial process. I think it’s important to build understanding in this area, because lots of people are experts in AI and lots of people are experts in law, but it’s actually quite difficult to bridge the two. There are people doing it, but it certainly has its challenges.

And I think it’s important more broadly, certainly in the justice sphere. My previous life was as an advocate; I used to be in court, addressing judges and so on, and I met a lot of people there who were going through the justice process. It seemed to me that something was going on there quite apart from the result. Somebody might win or lose a case, but their human reaction wasn’t necessarily correlated to the result. You might have somebody who lost the case but wasn’t simply distraught, because they felt something had been done: they’d been in front of a judge, they’d been heard. And I thought, okay, so this is not a straightforward thing. It’s not straightforward to just replace things with algorithms. But it’s not like we can just walk away from it either. We can’t just say it will never be appropriate, because look at the justice system: it’s really under strain in many ways, and not just here; we see this across the world.

So yeah, that’s kind of a long answer.

Amjad (03:14):
So just listening to what you said: when people go through a judicial process, or a trial, there’s the outcome, whether they’ve won or lost, which is one thing. But from what I’m picking up, the actual process of going through that procedure, whether that’s in a court or elsewhere, is itself of value to people, because they feel they’ve been heard, or they’ve had an opportunity to speak or to raise their grievances. Is that what you were referring to when you said people feel something about going through that process?

Cari (03:52):
Yeah, I think you’ve hit the nail on the head there. Some people will say no, it’s all about the result, and maybe there are people for whom that’s true: it doesn’t matter what the experience of the process was; if they’ve lost, then it’s worthless, and they have no sense that something of external value has been done. But research suggests that the actual process of justice itself adds to people’s sense that the system is listening to them, that it’s fair, and it actually translates into whether they would follow the law. People follow the law for many different reasons, and that’s supported by evidence as well. They may do it because they fear negative consequences. But one core reason is that they feel the whole system has some kind of logic and, I suppose you could say, justice, although that word needs further definition. Has my voice been heard? It also has an impact on later interactions with the justice system: whether they then want to be a witness, or interact with it, or report a crime. So it’s not just a straightforward matter of looking at one snapshot. And it may be that you personally haven’t had that interaction, but you know people who have, and that again influences it. So I suppose where I’m coming from is that it’s not just about that one individual; it has broader connotations for how our country works and how different parts of the state interact with individuals. It’s much broader than a one-off court hearing.

Amjad (05:47):
And this might sound like a bit of a stupid question, but you’ve used the phrase “justice system” quite a lot. How do we define the justice system? What do we mean when we say that? To me it means going through a court with a judge, but there’s much more than that when we say justice system.

Cari (06:06):
Yeah, that’s a really good question. I won’t go for an academic definition, but I’ll explain what its various segments could be seen as. One obvious example is the judge in the court hearing, and within that, all the people who work in the court, et cetera. But it extends to the prosecution, even the police. So it connects to all of that broader picture. And it’s not just criminal; of course it includes the civil courts and civil procedures. When I say civil, I just mean non-criminal: contract law, employment law, things like that. All of these things are interconnected, and that system obviously has a huge impact on how the country is run.

Amjad (06:56):
So you pointed out that at the moment it’s under a lot of stress and a lot of pressure. Do you want to talk about what that is, what it means, and what the consequences are? That probably then leads into where technology potentially comes into play.

Cari (07:11):
Yeah. It’s under pressure in many ways, but I think one key issue is delay. It manifests in the fact that you could be waiting a long time for a serious criminal offence, for example, to get to court. And what does that mean? Not only this somewhat vague notion that justice delayed is justice denied, but practically: witnesses’ memories will fade, people may not be available later, and for the person accused it will affect their life more generally. I don’t have the statistics to hand right now, but you might be waiting at least two years for a serious case, and that has a huge impact. And it’s backing up, so we’ve got more and more delays. Now, why is that happening? For a huge raft of reasons, which maybe are not so relevant for us to consider at this point, because we now need to look forward rather than at the past, though of course it’s still valid to look at those reasons more generally. But one major thing is Covid. That clearly has had an impact, because there was a period of time when the courts weren’t sitting.

Amjad (08:35):
Okay. Is that backlog still washing through? Do you have any anecdotal sense of how severe the delays or the pressures are becoming, in terms of the amount of work being delayed? In the UK, I mean; like you said, it’s potentially a global issue, but in the United Kingdom.

Cari (08:55):
If you follow court reporters, for example on Twitter, you get the sense that the delays are stacking up. I was at a conference recently; the Ministry of Justice and the court service are trying to work on it through these Nightingale courts, which are a bit like the Nightingale hospitals: really big efforts to try and clear things through. So it’s not that nothing is being done. But I don’t think it’s controversial to say that globally there is a huge issue in getting through these cases and ensuring that people experience justice.

Amjad (09:34):
Yeah. So I think we’ve set the groundwork: justice is really important, and people need to engage with the justice system and feel, as the terminology goes, that justice has been served, whatever exactly they mean by that. But that also touches on what you’re doing now. You’re working on AI and machine learning to help; well, you can tell us what the specific thing you’re working on is and what it’s trying to achieve. When I met you, I found it really interesting, because you normally hear about, from reading books like The Alignment Problem, which I mentioned to you before, software being used in the US, like COMPAS, et cetera. And not through any deliberate fault of the manufacturers or developers, AI seems to reflect the biases we have in society if we use those data sets to train it to make future decisions. So it’s potentially a controversial minefield. But what was interesting was that you were starting on something potentially not so controversial, and it’s a really nice place to start. So do you want to introduce what that is?

Cari (10:49):
Yeah. My perspective is that I’m actually excited by the possibilities of technology. There’s sometimes a binary reaction, certainly amongst lawyers: either blindly saying it’s all brilliant or blindly saying it’s all terrible. And I really wanted to probe the limits and see what could be done in a positive way. I was legal counsel for a technology company for a while, and I found that great: I was in constant contact with the programmers and then communicating with regulators. A really nice dynamic. But when you see things translated into these big national government projects, sometimes that fluidity isn’t there. So I was very conscious of what goes on in this process of development. I’ve focused on quite a basic, limited area to start off with: traffic, and the really basic offence of not stopping at a stop sign. How can we take the process as it is at the moment and see which elements of it could be streamlined and which could be improved? And when you ask how we’re measuring that, that’s a really important question for me. What metrics are we using?

Amjad (12:21):
Before we go there, you said “the process at the moment”. What is that process at the moment? So I haven’t stopped at a stop sign, or is it a traffic light? What is the process at the moment?

Cari (12:32):
Yeah, it’s a stop line or traffic lights; it’s much the same process. At the moment that comes under what’s called the single justice procedure. That means it goes in front of a magistrate, but instead of the classic magistrates’ court, which is normally three magistrates. A magistrate is not a legally trained person; they’re called lay magistrates because they haven’t had legal training, but they’re assisted by a clerk, who does have legal training. That’s the classic, full magistrates’ court. The single justice procedure is one justice, one magistrate, looking at the case on their own, on the papers, and deciding: okay, is this a guilty plea, and how do I then sentence them? So that’s the process at the moment. It’s kind of paper-based, although it can also be done online. And I was very interested in how I could take this, where you’ve got essentially just textual inputs, and whether you could perhaps improve the process by using different techniques to deal with that information.

Amjad (13:47):
So I’m driving my car and allegedly I haven’t stopped at a stop sign, or a red light. I’d get a notification through the post or something like that, or the legal owner of the vehicle would. And then, assuming it’s me, typically I just say yes, hands up, that was me, I’ll pay the fine. Or I can say no, I disagree, for these reasons; is that what that statement is? And then that would potentially go to the magistrate for a decision. If I say yes, it was me, then the process is over: I just pay the fine or get the points. Is that…

Cari (14:24):
So essentially, if you really do contest it, then it will go to full court. But you might say, for example: yes, it was me, I drove through the red light, but I did it for safety reasons, or because there was an ambulance. So you’re saying, I’m guilty, but there were circumstances which I think should reduce my overall liability. I’m taking those kinds of cases. But ultimately there should be a point in the process where somebody reads that statement and decides: they’ve pleaded guilty; is that a valid guilty plea? And given they’ve pleaded guilty, what should they be liable for?

Amjad (15:14):
Okay. And at the moment, what does that mean? Is it normally points on a licence and a fine?

Cari (15:20):
Yeah. So you get three points on your licence and a fine; I think it’s up to a thousand pounds.

Amjad (15:30):
Okay. And the experiments, the trials that you’re doing now, are…

Cari (15:36):
At the moment, what we’re doing is going through a kind of filtering stage. You’d look at whether the person has filled in the form correctly. If they’ve ticked that they’re not guilty, then it deflects the case out of the process and suggests it be sent to trial. There are a few checks that go on: if you’ve already got quite a lot of points on your licence, that’s another reason why you wouldn’t go through the process. So there are lots of things that filter you out. And then you’re looking at the driver’s statement. It’s knowledge-based as well, because I’ve used my legal training to assess which things are mitigating and aggravating circumstances, but I have to use NLP to assess the statement. Once it’s identified those elements, different weightings are attached to different things, and then it will suggest a level of fine based on the legal background.
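
To make the filtering stage concrete, here is a minimal sketch of the kind of triage logic Cari describes: complete form, guilty plea, and no disqualification risk before a case enters the streamlined route. The field names, return labels, and points threshold are all hypothetical, not details of her actual system.

```python
def filter_case(case: dict) -> str:
    """Route a case to 'automated', 'full_trial', or 'manual_review'."""
    required = ("name", "plea", "statement", "existing_points")
    if any(field not in case for field in required):
        return "manual_review"        # incomplete form: a person must follow up
    if case["plea"] == "not guilty":
        return "full_trial"           # contested cases are deflected to court
    if case["existing_points"] >= 9:  # hypothetical threshold: risk of a ban
        return "manual_review"        # totting-up cases need human judgment
    return "automated"                # eligible for the streamlined procedure
```

For example, a complete form with a guilty plea and three existing points would be routed to the automated procedure, while a not-guilty tick is deflected to trial regardless of the other fields.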

Amjad (16:36):
Okay. So the person gets the notice and they say, yes, it was me, or they accept it. If they say no, I’m not guilty, full stop, then it goes off to maybe a traditional court or some other process. But if they say, yes, it’s me, but they make a statement saying, oh, you know…

Cari (16:57):
Yeah

Amjad (16:58):
There was an ambulance behind me, so I needed to move out of the way, or I needed to take somebody to hospital; there are many reasons it could be. So they fill that in as a statement, the algorithm reads the statement, and it suggests an appropriate penalty. Is that the model?

Cari (17:20):
The way I’ve got it set up at the moment, it’s not necessarily suggesting; it’s setting it out as: this is the appropriate penalty. So it will take your weekly wage, apply the correct percentage to that, and give you a penalty. We might rail against it because we just don’t like the fact that we’re getting three penalty points, and that’s part of my research. I’m also running an experiment where I present the same decision as having been made by a human being: what do you think about the justice then? Because you can’t look at this in a vacuum. You can’t just say, well, I don’t like the algorithm, if actually the underlying thing you dislike is the legal position itself.
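
The penalty calculation she sketches, a percentage of weekly wage, adjusted for mitigation, might look something like this. The 50% starting point and the discount mechanism are purely illustrative, not the actual guideline figures or the weightings her system uses.

```python
def suggested_fine(weekly_income: float, band_percentage: float = 50.0,
                   mitigation_discount: float = 0.0) -> float:
    """Fine = a percentage of weekly income, reduced by any mitigation discount.

    band_percentage and mitigation_discount are invented numbers for
    illustration; real fine bands and reductions differ.
    """
    fine = weekly_income * band_percentage / 100.0
    return round(fine * (1.0 - mitigation_discount), 2)
```

So a driver earning £400 a week would be fined £200 at the baseline, or £150 if a mitigating circumstance (say, the ambulance example) attracted a 25% discount.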

Amjad (18:08):
It’s a bit like VAR, right? There are all these arguments going on: it shouldn’t be, it should be, it doesn’t seem right. You often hear that phrase: it doesn’t seem right.

Cari (18:19):
Yeah, like when it’s an offside decision, and you just think, well, okay, technically I can see where the line is drawn, but is it right, in the context of that game, to apply the letter of the law? And that’s an interesting perspective. It’s a conversation that needs to be had, and in a way that’s why it’s so interesting to look at algorithms, because it really does make us look at: what is law? Law is a set of rules. So it forces us to ask whether we are actually setting out the rules the way we should for society, if we’re going to have 100% application of those rules. Or has it been built into the rules we’ve had so far that there will always be a human interpreting the rule, and potentially lessening it in the circumstances? That’s an interesting discussion in law as well; it’s been ongoing for a long time.

Amjad (19:25):
What’s your interpretation? There are laws, there are statutes that have been passed; that’s the law. Do you think, and I get the sense that you don’t, but I’m asking anyway, that they’re not just there to be automated? That they have to be applied with wisdom and kindness by somebody skilled at interpreting the law, trying to understand what the law was trying to achieve rather than what it might literally say?

Cari (19:51):
As a good researcher, I try to stay open-minded about this, and I’m trying to find the answer through my work. I have a sense that it may well be the case that we have rules that, even though the letter of the law says one thing, are written with the expectation that a human being will be interpreting them and lessening their impact based on empathy, mercy, and various other things that might flow in. These are conversations that have perhaps been had theoretically, but now we’re at the point where we actually need to grapple with them, because if we’re going to implement the rules, we have to know which way we’re going. Do we need this kind of human intermediary? And if we need it in some circumstances but not others, we’re only going to know by testing. That’s where I’m coming from. There’s also a whole movement called rules as code, which suggests that when we legislate, we should be legislating into code at the same time. It’s more for things like social security, where, when you apply for a benefit, the legal requirements for that benefit are automatically written into the systems in which you apply.
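
A toy version of the rules-as-code idea: the eligibility conditions of a hypothetical benefit written directly as executable code, so the application system and the legislation share a single definition. The thresholds and the section reference are entirely invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    age: int
    weekly_income: float
    savings: float

def eligible_for_benefit(a: Applicant) -> bool:
    """Hypothetical s.1(1): an applicant qualifies if they are of working age,
    earn below the income threshold, and hold savings under the cap."""
    return 18 <= a.age < 66 and a.weekly_income < 120.0 and a.savings < 6000.0
```

The appeal of the approach is that when the legislation changes, the single codified rule changes with it, rather than being re-implemented separately in every system that applies it.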

Amjad (21:13):
What’s got me thinking is, I’m from a Muslim background, and in the Sunni school there are four accepted orthodox schools of jurisprudence. Maybe a hundred or two hundred years after the death of the Prophet, some scholars came and wrote down what they felt the rules of the religion were, and they took that from the interpretations of people they lived with or encountered who were recognised as followers. The question, sometimes, is: has it taken away the spirit and become a set of rules? Because everything becomes codified. In some books you’ll see them debating situations that will never occur, because they’re trying to understand the legal implications. So what you end up with is: there was a way of living and a philosophy, you’ve tried to guide people in how to implement it, but then it ossifies through the adding of all these rules. It’s similar to law; I guess it’s exactly the same problem. You’re trying to inculcate a set of behaviours and you put rules in place to do that, but then maybe you end up following the rules and forgetting the behaviours. But that’s just a general aside.

Cari (22:36):
Yeah. It’s this constant push and pull with consistency. Because you’d say, well, I want it to be consistent. There used to be a phrase when I was learning law for the first time, that equity varied with the length of the Chancellor’s foot; in other words, with a different person at the head of the legal system, it would change, and that’s considered a bad thing. Consistency is a huge issue in law, so that people can rely on it, for example if you’re starting a business, or even as an individual, in how you act in society: this idea that we need to know the rules so that we can adhere to them. But clearly there has always been this area of fuzzy interpretation, and legal scholars have long said you need that area of flexibility, because it’s so hard to have hard and fast rules. And now we’ve come to a time when we can actually implement them almost quite easily, so it’s not just a philosophical debate anymore; it’s something we need to grapple with. My response is: let’s get data on this. Let’s understand how people respond to these things. So what I’m doing at the moment is taking the process I’ve described, showing it to people, recording their responses, and seeing whether they think: yes, this is listening to me, it’s taken into account what I’ve said, and therefore I feel I’ve been through a just, reasonable process; or, what are their reservations, and why?
So that’s the stage I’m at now.

Amjad (24:37):
Have you run trials? I guess just describe where you’re at. What’s happening today?

Cari (24:41):
So at this point I’m actually launching trials in the next couple of weeks and gathering data on how people respond to this. And when I say people, I’ll be looking at a cross-section of the UK public, but also talking to people in the relevant fields: lawyers, magistrates, judges. The idea is to get the legal perspective as well as the broader public perspective, running a few focus groups and group interviews and seeing how people respond to this kind of thing.

Amjad (25:23):
Is what you’ve developed going to be used at all during the trial to make decisions? Or is it more: this is what would have happened, and how does that compare to what actually happened?

Cari (25:33):
For the trials, I’ve set it up as a kind of online experiment, so it’s not necessarily having live data flow through it. But I have got another version which can be used, which I might then show to judges who are interested in seeing how it would function.

Amjad (25:54):
I was going to ask how you’re going to measure it, and then I thought that was maybe the wrong question. We’re always trying to make something very hard and very objective, and potentially it isn’t. It maybe forces you down that whole measurement route again, which is difficult to do. But I suppose you have to have some mechanism for evaluating whether you think this is a good idea or not.

Cari (26:18):
Yeah, I agree. And I think that, first, metrics in and of themselves are not necessarily helpful. Things like accuracy: how do you measure accuracy when you don’t have a ground truth? In law, where you don’t know for certain whether someone is guilty or not guilty, you end up reinforcing stereotypes, and we’re back into those bias issues we discussed before. One of the major metrics is just throughput; people seem to be interested only in how many cases we can get through, but that in itself is not very helpful. So I’ve taken it from the perspective of: what do we want from a justice system? What do we need? I’ve used a metric of legitimacy, which is not something you can automatically test on a system; I can’t just run the system and say, okay, this has a high legitimacy factor. I have to test it with human beings. And that’s the plan, because a justice system doesn’t just need to come out with a right answer, if we would ever know it was the right answer; that’s the trouble. What we need is for people to have faith in it, essentially. Although I always think faith is almost too strong; we need to accept it, kind of tolerate it.

Amjad (27:46):
You have trust in it.

Cari (27:47):
Yeah, trust. But a lot of people say the word trust, and again, it’s a really hard thing to put a metric on. So I’m asking a series of questions which establish: do people feel they’ve been listened to? Do they feel this is making decisions in the right kind of way? And I’m really interested to see what results I get. My expectation is that there are likely to be certain scenarios where it’s absolutely fine, and other scenarios where people might have concerns. But I think we’re duty-bound to plumb that and really understand what’s going on. And it’s not really a question about the algorithm at all; it’s a question about the justice system and what we want from it. That’s why I chose a metric that doesn’t depend on whether it’s an algorithm or an AI or not; it depends on people’s response to it.

Amjad (28:50):
Is it related in some sense to things like sentencing guidelines? Those guidelines are set and given to the judge to interpret, but rather than the judge interpreting them, here we’re trying to use a machine.

Cari (29:04):
The sentencing guidelines are, in effect, a series of rules. It’s like a decision tree: you’re being guided, and that’s a relatively new development in itself. So you already have processes like that in the system, and this is going further down that route. Another example is risk factors: a police officer judging whether they’re in a scenario where further violence is likely. Originally they would just go on their feelings; then it was processed into a tick-box form, have I observed these things; then it transmutes into an online process where they’re answering dropdown questions. It’s a continuous process of further automation. The fact that it’s done digitally is not really the relevant question, although it does force the issue, because there’s less opportunity for a human being to fudge the answers or game the system.
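
The tick-box risk form she mentions can be sketched as a weighted checklist mapped to a grading. The indicators, weights, and thresholds below are all invented for illustration; real policing risk tools use different, validated factors.

```python
# Hypothetical weights for observed risk indicators.
RISK_WEIGHTS = {
    "previous_violence": 3,
    "threats_made": 2,
    "weapon_present": 3,
    "escalating_incidents": 2,
}

def risk_grade(observed: set[str]) -> str:
    """Sum the weights of the ticked indicators and map the total to a grade."""
    score = sum(RISK_WEIGHTS.get(indicator, 0) for indicator in observed)
    if score >= 5:
        return "high"
    if score >= 3:
        return "medium"
    return "standard"
```

The point of her observation is that once the form is digital, the officer can only answer the questions as asked; there is no margin to nudge a borderline case into a different grade, for better or worse.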

Amjad (30:17):
If you could imagine what it would be like in a few years, how would you see this being implemented or used? And do you have a sense of where it could be used and where it perhaps shouldn’t be used?

Cari (30:26):
Yeah, I think the nature of research is that you’re focused on this one instance, but you’re looking beyond to see where it might take you. Obviously everything we do, we want it to have broader implications, or at least to inform future decisions. I feel it will show up the complexity of the scenario: that there are ways to interrogate how people interact with an algorithm that aren’t to do with metrics like, is it in some way accurate compared to some other data we’ve got that’s not necessarily reliable? If you’re asking me where I think the future of greater involvement of AI and algorithms is taking us, I think it could be very good. I’d like to see it, or at least explore whether it could prevent us from building arguments on logical fallacies, where we’re misinterpreting the data. The classic is the prosecutor’s fallacy, where somebody is layering on facts to say, well, the shoe size is the same as the shoe print, they live in that area, they work in that area, and therefore all of these things build towards that person being the perpetrator. But each one explains away the other: that they live near and work near, well, that’s not adding to it. I’d love to see that kind of symbiosis of human judges and algorithms, meaning that decision-making was more transparent and that we could interact more easily with the justice system as members of the public. We need to ensure this ability to see justice being done.
And I think that so far has not been fully grasped, although some people in the legal community are seeing it. I think you’re really going to have a team, a partnership with the technology, rather than technology up to this point and no further.
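The prosecutor’s-fallacy point, that correlated facts shouldn’t each be counted as fresh evidence, can be sketched numerically. All probabilities below are invented for illustration:

```python
# Prosecutor's-fallacy-style error: multiplying likelihoods of correlated
# facts as if they were independent. All numbers are invented.

# P(evidence | person is a random innocent) for three facts:
p_shoe_size = 0.10   # shares the shoe size on the print
p_lives_near = 0.05  # lives in the area
p_works_near = 0.04  # works in the area

# Naive (wrong) treatment: assume the three facts are independent.
naive = p_shoe_size * p_lives_near * p_works_near  # ~0.0002

# But living and working nearby are strongly correlated: suppose that
# someone who lives in the area works there too with probability 0.8.
p_works_near_given_lives = 0.80
dependent = p_shoe_size * p_lives_near * p_works_near_given_lives  # ~0.004

# The naive figure is about 20x smaller, overstating how incriminating
# the combined evidence is against an innocent person.
print(naive, dependent)
```

This is the sense in which one fact "explains away" another: once you know the person lives nearby, learning that they also work nearby adds very little, so it shouldn’t be multiplied in as an independent coincidence.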

Amjad (32:51):
What I wanted to ask you was: we work quite a lot in deep learning, right? And with deep learning models like ChatGPT there’s a whole debate about whether they understand concepts, whether they understand relationships. And explainability, and you’ve talked about justice being seen to be done. I guess that’s partly about understanding, or feeling that you can understand, how a decision was arrived at, right? So in your mind, are the models or algorithms we might use in the criminal justice system, in the legal system, likely to be more heuristic models, so that you can explain the decisions that were made? So they’re decision trees, and you’re using natural language processing deep learning models to understand what’s being said. Do you see that it’s possible we could use a deep learning model like ChatGPT to do this, even though that would perhaps mean it’s not so transparent as to what’s going on? That’s actually a really big piece of research right now, isn’t it? Trying to help AI understand the relationship between things in the world, which is the same as the relationship between things conceptually.

Cari (34:00):
I think at this stage of development, yeah, you’re probably looking at something more deterministic. But it’s interesting. I don’t think we really fully understand what can happen as we have these further GPT models that are self-correcting, that can self-improve. It’d be interesting to see how that develops, because the difficulty at this moment in time is this explainability thing. The issue with explainability is that we know a reasonable answer could be given, but that reasonable answer may not match how the decision was actually made. Does that make sense? But that’s true of humans as well, of human judges. There’s been a long debate about whether, when human judges explain themselves and say, well, I made the decision because of this, this and this, and write a whole judgement explaining it, they’re really following that process.

Amjad (35:11):
Yeah, it’s really relevant. So why did humans develop the ability to think rationally? I always thought it helped us make better decisions and so survive, right? But the person whose research I was listening to argued that we actually developed rational thought because we live as a social animal, and we need the ability to explain to other people why we’re doing things and get them to do what we need them to do. So going back to the judges: rationalising something comes after the event, right? Not necessarily before it. So it’s interesting that you say judges may not even understand why they’ve come to the decision they have. They’re looking for reasons to explain how they made that decision, but even to them the real reason might be hidden, because that rational explanation may not be the true unconscious reason.

Cari (36:12):
That’s certainly one perspective. And obviously we don’t know for certain at this point exactly how they’re making the decisions. We have very detailed law judgements and very detailed requirements as to how you should make the decision, and when judges describe how they’ve made the decision, it follows that process. So on the face of it, it looks like judges are making rational decisions, in the main; obviously there will be exceptions. But there are so many micro-decisions being made there, like, well, I’ll take that piece of evidence, or I won’t take that bit. Unpicking all of that and being sure it’s fully reasoned in the way you might if you were doing a kind of decision tree is not so easy, because you’re already making decisions about what to remove from consideration and how you weigh various pieces of evidence. So I think this crossover between law and technology really exposes these issues. It really causes us to question whether human beings are making rational decisions. And I really like what you say about the social aspect of rationality, because there’s an element of persuasion there. It’s saying, trust me because I’ve made logical decisions, and you need to follow this because it’s in your interest because of this, this and this. It’s really exposing these questions about human beings and how their logic works. Which is why I look at it from the perspective of: is this system something I feel is legitimate? For me, legitimacy is support.
So it kind of, not sidesteps, but I’m trying not to get bogged down in that mechanism so much, because there’s so much to learn already about that area that it’s a whole sub-issue in itself. But I think it’s really interesting, and certainly when we drill down into something like this, we can expose those differences. If you don’t trust it, why don’t you trust it? Tell me why you don’t trust it, and we can understand it in more detail.

Amjad (38:28):
Who are the people that, that you engage on this?

Cari (38:30):
I’m the main researcher on it, but I talk to people in, for example, criminology here at King’s, like Professor Ben Bowling, who is a very respected criminologist in our department.

Amjad (38:45):
Because it seems it’s putting together quite a lot of different facets, right? There’s lots of…

Cari (38:49):
Yeah. I’ve had long conversations as well in preparation, to try and take a holistic view. I spoke to people from many different fields: judges, Bayesian experts, and various others. So I’ve tried to keep a broad look at this, so that I’ve got a good basis for the research.

Amjad (39:11):
Just listening to you, I know this whole conversation was meant to be about how AI is going to be used in the legal system, and it is, but it’s surprising how little of the conversation is really just about the technology. We’re not talking a great deal about what type of algorithms you’re using, or what sort of data sets; that would actually be interesting to know. But there are much wider questions that are far broader than AI, and the technology bit is a small part. In terms of volume, how do you see it increasing or reducing the load?

Cari (39:49):
Almost all of these reforms of the justice system based on digitisation and technology assume it’s always going to be faster and more cost-effective. You can understand why, because you don’t have a human being clicking through and checking every little element, so you’ve got fewer humans involved at that end of the process. But I think that’s not necessarily the case once you take into account development and the constant monitoring I think you need. I still think it could give you definite advantages. I’d hope it would speed things up, give people faster justice, and take the pressure off individuals. But I think there’s a great deal that judges and lawyers and people in that system need to engage with to understand the technology that’s being used. We’re talking about algorithms and what’s being used to make these kinds of decisions, but ultimately the technology is going to be constantly evolving, isn’t it? You can’t just take a snapshot and go, right, let’s test this one technology, okay, that’s fine, all technology’s good or bad. It’s going to be constantly evolving. We need to develop methods of ensuring that when people interact with it, they still get the same sense that the justice system is the justice system, and not something that’s just being put upon people. Ultimately we know it depends on your viewpoint: if you’re interested in human rights, or in ethics, et cetera, then obviously we want it to be just.
But even if you’re somebody who isn’t particularly of that mindset and just wants quick decisions and good, quick government, it’s also really important, because if you don’t get the cooperation of the people, then you’re not going to be able to achieve very much at all. So I’m trying to look at it from the point of view that this is something of broad importance to many people, irrespective of your viewpoint on power to the people or not.

Amjad (42:08):
It seems to me that the actual algorithms for this particular project are maybe not as valuable as the framework you develop to evaluate whether an algorithm-based tool for deciding a penalty for a particular offence is a good one, and I mean “good” as vague as it sounds, and you’ve got metrics to define that. But if you have that framework you’ve created, and it’s an accepted framework that works quite well, you could use it for different things, right? You’re looking at traffic offences, but you could look at lots of different things. You’d have a framework for deciding whether or not the use of the technology serves the purpose of justice. I’m sure people have been thinking about that, but that would be very valuable, right? That’d be really useful.

Cari (42:52):
Going to the idea of the framework rather than the specific AI or algorithm, I think that’s true. I put in a lot of work to ensure that everything I’ve said is possible and functional and can work. I think what we need is a way of having different voices going into the development process, and that’s something I’ve been working on: how to incorporate that into model building, and documenting how the process has gone so that other people can build on what I’ve been doing. And a metric that is not dependent on it being either algorithm or human, something that crosses over those two boundaries, so that we’re able to look at the justice system in all its facets. I just think it’s really restrictive to go, right, we’re going to test this algorithm, and if it’s good, it’s good, and if it’s not, it’s not, on various metrics that are questionable in various ways. Why are we even having this dichotomy? We just need to know: does the whole process work? Does it do what it needs to do? That’s more the focus. So yeah, I’d agree with you that the framework is really important here, and I’d be very happy if that was something that came out of this research.

Amjad (44:17):
You’re starting with traffic offences, but do you see a world where, for any type of trial, up to really serious offences like murder, people would plead to an algorithm, a computer that can listen? I suppose we’d always need the jury, because it’s based on being judged by your peers, right? But for the sentencing, rather than having a judge, you’d just have an algorithm, one that might be personified in some way. I was just wondering whether you thought that was a future that’s even possible, or something we should strive for? It could be dystopian. I saw Star Wars and I think something happened there exactly like this: the judge was a machine that said no, and they ended up smashing the machine or something like that. But yeah…

Cari (45:09):
Everything I’m doing is to try and bring nuance into this discussion, so I definitely wouldn’t label one particular thing as dystopian. I mean, if you’re going to be sent to prison, that’s a horrific thing to happen to a person, and it may be horrific what they’ve done as well. You could say, well, for human dignity it should always be a human that does that kind of thing. But in the pandemic there was a sentence, not in the UK, in another jurisdiction, I believe a death sentence, handed down over Zoom. So somebody condemned someone to death, and they’re only hearing of it through a computer screen. Okay, a human being is saying it on the other side, but is that qualitatively better? I’m not saying that whole thing is in any way good, but these things are happening already. So we’re already trying to probe these issues: does the fact that that person heard that sentence from a human being make it a lot better? These are questions we will continue to probe. Certainly I think we need to be listening to what people consider necessary for their dignity, and that’s the crucial message here: we need to listen. It’s not like there’s a line that’s been drawn somewhere that we can uncover and go, oh yeah, that’s where the line between humans and computers is, and we never cross it. It’s not there, is it? It’s in our societies, it’s in the way we interact and develop across the years.
So I think I’d shy away from saying absolutely no to this…

Amjad (47:07):
So where the boundary is, we don’t know; we’re going to discover it as we go along.

Cari (47:11):
It might move, but I don’t think the boundary moving is necessarily a bad thing, if human dignity, et cetera, is respected. And the only way we know that is through continually finding it out. So I’d say caution, but not the kind of caution where you go and hide away from it. You need to press on and ask people, understand from people where these boundaries are, as they might be different for different age groups, different backgrounds, and various other things. So there are lots of factors that’ll go into this.

Amjad (47:54):
You could have said, you know, I have an algorithm we’ve trained, it’s ready for dealing with serious fraud and all sorts of things, and it’s coming out next week; that would be fascinating. There is a concern, I suppose, that with the systems we put in place as a society, with laws, with corporations, with algorithms, you end up in a place where you have less and less human agency as a society to determine which way your society is going, because all the systems you’ve put in place force you down particular channels, right? You know the big arguments around capitalism and communism: the individual might make decisions that are correct for them locally, but not at a global level, right? So when you start introducing technologies like this, that’s where you’re heading. How do you think it’s going to affect the legal profession? The large language models, or for people training as lawyers today who are quite young, or thinking about training as lawyers: where do you think it will change, and how will it affect that profession itself?

Cari (49:03):
Some within the legal profession have been saying this for quite some years, like Richard Susskind for example: that there’s going to be huge change in the legal profession. And that’s pre-ChatGPT, right? But I’d say to lawyers, and to people coming into law now, to remain very flexible and be very ready to pivot and change direction as necessary. It’s certainly changed the face of being a trainee lawyer, I would have thought, because even before ChatGPT, it used to be the job of a trainee to go through masses of legal documents, process them, and look for various things. Well, we can do that now without the same amount of human hours going into it. I think there may be opportunities, though, for people to work together with these foundation models, as I said, working with the technology rather than seeing it as a challenge. But I think the place we’re at is that it’s difficult to see how you could continue as things were, even in the short term, in the next few years. I think the preserve of law to the lawyers is going to be really challenged. Law has always been quite a specialised area, and there’s the potential now for other areas, say accountancy and various others, to come into what was law’s domain, supercharged with these kinds of tools. So it’s an interesting time.
I’m not trying to hype up ChatGPT itself, because we know there are limitations: hallucinations and various other things going on that make it not ideal at this time. But clearly, what is the job of a lawyer? One of the things is to be able to read a large amount of information, distil it into a smaller amount, and convey that in a persuasive way. Well, that’s not the preserve of a lawyer anymore. Most people would be able to do that with the help of ChatGPT.

Amjad (51:44):
Okay. Thank you. I’ve really enjoyed this, I did.

 
