Picture Me Coding

Predicting the Future: Law, Software, and Attorneys Using AI

Erik Aker and Mike Mull Season 3 Episode 71

Today Mike and Erik are joined by John Benson, an attorney with a background in digital forensics who has been at the forefront of integrating LLMs into legal practice. The conversation ranges over the practice of law, digital security, and AI.

Find out more about John Benson's work here: https://john-benson.com/


Erik:
Hello, welcome to Picture Me Coding with Erik Aker and Mike Mull. Hi Mike, how you doing?

Mike:
Hey there, doing okay. Got a bit of a cold, so if I cough, I apologize to our listeners.

Erik:
Sorry to hear that, Mike. Today, I invited John Benson on the show. John's here to talk about the intersection of law and software. He's an attorney and a hacker, and he's been an early adopter of LLMs in his legal practice. Welcome, John. Thanks so much for coming on the show today.

John:
Yeah, thank you for having me. I've been really looking forward to having a chat with both of you guys.

Erik:
Now, John, would you mind telling us a little bit about your background?

John:
I have an interesting path that I've followed to get to where I am right now. I think if you look at me, I've got a number of different hats or masks that I could possibly wear at any given point. I think a lot of people look at me as an attorney in some regards because I went to law school. I have the law degree. I did 17 years working at a big law firm, one of the biggest law firms in the country. But I also kind of wear the hats of somebody that's a digital native nerd generalist, frankly.


John:
I mean, to describe me as a hacker, I think that's actually probably perfect, right? My work with technology has always been, for the most part, self-directed. I never really took formal computer classes in college or anything like that. I took what you could get when we were growing up in the 90s out in a small town school district kind of thing. But I didn't take any real formal steps to solidify certifications or anything like that until I pivoted all the way into getting my digital forensic certification. Lawyers look at me as almost a pure nerd, and nerds look at me as almost a pure lawyer.


Erik:
Oh, so you're like always on the other side of the table for everybody you talk to.

John:
No, I think that's exactly right. And that kind of never necessarily fitting in, but always kind of fitting in everywhere, and being able to fit in everywhere, is a trait that I've had going back a long time. And I think it's a trait that a lot of people in technology share, because we kind of gravitate there because of curiosity and excitement instead of necessarily just getting a job done.

Erik:
So you mentioned digital forensics. What is that in law?

John:
Ah, yeah. So digital forensics, it can be as exciting as it sounds. Really, you're talking about looking at somebody's computer, looking at somebody's phone, and figuring out what happened to it and what happened on it. So you see things on the news about, let's say, Alex Jones's text messages or something like that. Or if you've got a theft of trade secrets case, right, where somebody leaves an employer and they take a thumb drive full of stuff with them. Well, as hopefully people know, you can't do that. Generally speaking in the U.S., all that work belongs to the employer and you can't take it. So when you take it, that's theft of trade secrets. Digital forensics, as I was working in it, in that context involved a whole lot of taking somebody's computer after they'd been terminated, going through, taking a look at what they had done in the time leading up to their departure, to see if
they had plugged in a thumb drive, to see if they emailed themselves, to see if they'd been, you know, sometimes you'd have situations where somebody had gone to work with a competitor. So you'd end up finding evidence that the competitor was conspiring with them to take, like, customer lists or things like this. It's more of a twist on human investigation inside of digital forensics. The other side of the coin in digital forensics that is very big out there is incident response, which is the malware, information security kind of stuff.

Erik:
Yeah, that's what I was thinking of. Sounds like a real close kinship with hacking. Yeah.

John:
Yeah. And it's all very much part of a spectrum. I always found the human element of it to be much more exciting of a chase. The thrill of the chase in forensics and investigation is absolutely real for sure. But I've always found that chasing a human scratched more itches for me than trying to reverse engineer how something popped a shell.

Mike:
Are there companies that investigate everybody who leaves as part of like a compliance program, or is this a case of somebody has to be under suspicion before you look into them?

John:
It kind of depends. For the most part, if people were calling me, they were already at the point that they had some level of suspicion, usually some strong level of suspicion. And I think, honestly, it's one of those things where it's so common that if people look, they're probably going to find something in kind of a post-termination situation. You know, you've got your laundry list of policies that you're supposed to do with that computer. But really, that specifically is an issue that I wish companies would deal with more openly and honestly during the offboarding process, frankly. I mean, I know that it's always – those are always awkward and not fun situations. But I definitely ran into a few instances and then just talking to people out there in the world where people just – people simply flat do not understand that you just can't take a thumb drive with you whenever you go.

John:
I think, you know, before somebody walks out that door, if that HR person says, hey, did you take anything with you? And there's this kind of glint in their eye. It would always be helpful if that person looked them square in the face and said, you know that I know a guy who can tell the make, model, and serial number of every thumb drive that's been plugged into this computer, and can also tell you what you might have been listening to on Spotify while you were copying things off the drive. And you have one more chance to go ahead and put any thumb drives on the table, or go home and think about that. I think if it was laid out there just like that, there would be a lot fewer instances where I would have been engaged in those situations. And also, sometimes you get in there and you see the things happening, and because you end up learning a lot about the person when you look at their computer, you get a sense pretty quickly of, oh, okay, this guy was up to something, or perhaps, oh no, this guy seems totally innocent, this guy just truly didn't know. That kind of work comes with a lot of ethical quandaries and a lot of cognitive dissonance.
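For listeners curious what the thumb-drive claim looks like in practice, here is a minimal sketch (not from the episode) of reading the Windows USBSTOR registry key, which records the make, model, and serial of mass-storage devices that have been plugged into a machine. It assumes a Windows box, local rights, and Python's standard winreg module; a real forensic exam would work from a disk image and would also pull setupapi logs and event logs for timestamps.

```python
# Minimal sketch: list USB mass-storage devices recorded in the Windows registry.
# Assumes Windows and sufficient rights; real exams work from a forensic image, not a live box.
import winreg

USBSTOR = r"SYSTEM\CurrentControlSet\Enum\USBSTOR"

def list_usb_devices():
    devices = []
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, USBSTOR) as root:
        # Each subkey is a device class, e.g. Disk&Ven_SanDisk&Prod_Cruzer
        for i in range(winreg.QueryInfoKey(root)[0]):
            device_class = winreg.EnumKey(root, i)
            with winreg.OpenKey(root, device_class) as cls:
                # Each instance subkey carries the device's serial-number portion
                for j in range(winreg.QueryInfoKey(cls)[0]):
                    serial = winreg.EnumKey(cls, j)
                    with winreg.OpenKey(cls, serial) as inst:
                        friendly, _ = winreg.QueryValueEx(inst, "FriendlyName")
                    devices.append((device_class, serial, friendly))
    return devices

if __name__ == "__main__":
    for device_class, serial, friendly in list_usb_devices():
        print(f"{friendly}  (class={device_class}, serial={serial})")
```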


John:
As you can imagine, with me being involved in security and kind of hacking, I'm always a privacy-forward kind of person. And really, in forensics, a lot of that kind of work was, you're doing the work of the bad guy after he's broken into your computer kind of thing. The only difference is that I've got permission to do it.

Erik:
So John, I'd like to talk a little bit about your use of LLMs and your work with LLMs as an attorney. But before we do that, I feel like we have to give our listeners some sense of what the work of an attorney is traditionally like. When Mike and I and other people who are probably listening to the show think of the work of attorneys, we think of courtroom legal dramas. But most of the day-to-day work of an attorney is probably not in the courtroom. What is it like?

John:
The work of an attorney is probably very similar to that of a lot of white-collar knowledge workers out there these days, in that you show up at a lot of meetings, you do a lot of writing. The legal system and the legal industry can kind of fork all lawyers into maybe two different piles, if we're going to be a little bit too broad. You've got litigators on one side and you've got transactional people on the other. Litigators would be the people that do the fighting, that handle the courtroom kind of drama, the conflict. But then you've got the transactional folks, the ones that you call to facilitate, say, you want to go buy a company. I don't want to go buy a company.

John:
You can go hire some lawyers and they'll help you write the contracts. Contracts, yeah. All of that kind of business end of stuff. So the thing of it is, there are lawyers that you should look forward to calling because they're helping you build something. And then there's lawyers that you call because you have to. The firm I worked at for a very long time did a lot of both. Big firms especially want to diversify because, as litigation comes and goes, the transactional side can rise and fall too. But I worked primarily on that litigation side. And people, I'm sure, have a lot of perceptions of what that process looks like based on what you see in popular culture and stuff like this. And I think one of the biggest misconceptions out there is that anything in the law happens quickly.

John:
Nothing in the law happens fast. One of my favorite shows related to legal, not because it's accurate at all about the legal system, but just because of the portrayal of some of the personalities in law firms, was Boston Legal with Shatner. William Shatner, yeah. Tremendous TV show. But it always presented litigation as something that you could get through in an afternoon or something like this. That's not the case. That's just not the case. So, you know, trial is something that happens at the very end. And trial is something that is massively expensive, if you can imagine that, right? I mean, next

John:
time you see some executive hauled up there in front of Congress or something like that, you've got a bunch of people sitting next to him. Think about the meter that's running during that, right? Depending on the context, at least every head that's sitting at that table is going to be billing at $350 an hour currently. And all those guys are doing that just sitting at the table.

Erik:
Never did think about that.

John:
Yeah, right. So by the time you get in there, you're going to want to show up and it's like, oh, we're going to put on a big show and we want a whole AV team and we want four paralegals and we want all this. It's like, oh, well, hell yeah, you got to pay for all that, and you got to put everybody up in the rooms, and you got to rent all the monitors and rent all the cable. So trial is big. It's expensive. It's what you want to avoid. But the litigators are there to do that fight through the system, right, and through the whole run-up to it.

Erik:
Well, I guess I was curious, too. When I think of the work of attorneys, I think of these massive numbers of boxes of mysterious documents, and the whole thing about discovery. There's a lot of text to read and synthesize, right? I wonder if you could talk about that a little bit, too, as a part of the work of an attorney.

John:
Yeah, well, that actually was where the bulk of my work at the big firm was. I would do the digital forensics at that micro level, but then my broader role was handling e-discovery. And e-discovery is kind of forensics at a macro level, at almost an institutional level, you can think of it. So discovery: when two companies sue each other or two individuals sue each other in the States, both parties are obligated to share documents and share evidence with the people on the other side in advance, so that everybody's operating on a level playing field whenever you get into court, whenever you're trying to present all this evidence to whoever's going to make a finding of fact, right? Going to make a judgment call one way or the other. And if you go deep enough back into the common law system, it didn't used to be like that. It did end up being a little bit more like you would see on TV, where you would have surprise evidence and the shocking witness at the end. I think there's a Latin phrase for it, but it's called trial by surprise. And the U.S. court system changed that over time. So now we exchange stuff. And that's a big deal if you're a big company. It's a big deal even if you're one individual and you're just talking about text messages and email, right? It's probably a lot of stuff, right? A massive amount.

John:
You're talking about terabytes and petabytes of text, at an increasing rate every single year. Absolutely.

Mike:
I have this image in my head of people wheeling in file boxes full of paper. Is that a thing of the past that's just used for TV now, or is everything digital now?

John:
Oh, that's an interesting transition. I think it depends on the case, the attorney, and everything like that. When I started practicing in 2006 or so, we were still very much in a transition where you'd work with some senior partners that had preferences where they'd say, oh, pull all the key documents, and I want them put in chronological order and put in binders, which is a lot of work. And also something that makes a lot of sense, that you'd want to see something like that. But also, if you've got a computer screen and a database there, and you can just click and then it sorts everything, then the alternative seems a little bit antiquated. So what happened was, over time, everything has moved towards being digital first. You know, it's all living in databases. The software that runs the stuff has gotten bigger, it's gotten better, it's gotten more capable.

Erik:
Is there law-specific software that you're talking about there, John? Like specific applications that law firms run or pay for?

John:
Yeah, there are. And those pieces of software, what they do is not that much different than iTunes, frankly. So what happens is you take a raw document, let's say a Word document, and you run it through one piece of processing software. And it's going to extract all the metadata. It's going to extract all the text and put it into a singular field in a database so it can be indexed. And then it gets all put into a larger database that shows you a picture of the document on one side, kind of like the underlying MP3 in iTunes back in the day. And then it'll show you all the metadata. And then the work of the attorney, right, the work of an attorney that's working in discovery, especially if they're a young associate: they're not being wheeled boxes of documents to pore through like they were back in the 80s. They're logging into a review queue, and they're going to sit there and look at somebody else's email or somebody else's Word document and say, okay, that document is related to this case or it's not. It's related to this issue that somebody else mentioned in this case. It did not come from an attorney, so it's not privileged. And then they make those five or six clicks, and they click the Save and Next button,
and then they go on to the next document.
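To make the "iTunes for documents" comparison concrete, here is a rough sketch of that processing step: pull the text and a little metadata out of Word files and drop them into a full-text index. It assumes the python-docx package and SQLite's FTS5 module; the field names and database file are illustrative, not any particular review platform's schema, and real e-discovery tools do far more (OCR, email threading, deduplication, and so on).

```python
# Rough sketch: extract text and metadata from .docx files and index them for review.
# Assumes `pip install python-docx` and a SQLite build with FTS5 (the usual CPython build has it).
import sqlite3
from pathlib import Path
from docx import Document

def extract(path: Path) -> dict:
    doc = Document(str(path))
    props = doc.core_properties
    return {
        "path": str(path),
        "author": props.author or "",
        "created": str(props.created or ""),
        # Flatten all paragraph text into one field, ready for indexing.
        "body": "\n".join(p.text for p in doc.paragraphs),
    }

def build_index(folder: str, db_path: str = "review.db") -> None:
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE VIRTUAL TABLE IF NOT EXISTS docs USING fts5(path, author, created, body)")
    for path in Path(folder).rglob("*.docx"):
        conn.execute("INSERT INTO docs VALUES (:path, :author, :created, :body)", extract(path))
    conn.commit()
    conn.close()

# Example usage:
#   build_index("collected_documents/")
#   then query: SELECT path FROM docs WHERE docs MATCH 'thumb drive'
```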

Erik:
How many documents do you think that they're reviewing in a day? Or I don't know. That doesn't sound that efficient, I guess, is what I want to say.

John:
It's not. No, it's not. Effectively, depending on what kind of documents you're looking at, a human can really get through roughly 60 docs an hour. Oh, wow. Okay. And that's moving at a pretty decent clip. And that's making some assumptions

John:
about, you know, what's in that document. If it's a spreadsheet, or if you've got to do what's called redaction, right? If there's something in there that you're allowed to black out, either because it's privileged or because you've agreed with the other side that you can, like personally identifiable information. That kind of stuff will slow you down, because now instead of just clicking and classifying documents, you're drawing boxes on the screen like the world's worst version of Command & Conquer. And yeah, fun fact, I actually did that job after law school. I finished law school during one of the first, in the grand scheme of things, small economic blips, but a small economic blip for the local legal profession. And I ended up doing document review for, oh, goodness, probably 10 months, I think. And it's as bad as it sounds.

Erik:
It doesn't sound very fun.

John:
It's not fun and it's not – it sure as hell isn't why people go to law school. And the quality and the accuracy that comes out of it is really obviously not all that great either.

John:
The statistics around that: there are studies that go back into the 70s and some into the early 80s about accuracy and precision, which are the same statistical terms that we use today for measuring the effectiveness of LLMs or AI systems generally. But they took a look at how good paralegals are at this, or how good senior attorneys are at this. And when you were looking at humans doing the work alone in a serial fashion, even when they knew they were being looked at, really they're only, call it, maybe 40%, 50% accurate at best. Okay. That's even higher than that.

John:
Yeah, yeah, that's at best. And your presumption that it's probably there or lower is probably accurate.

Erik:
I guess I would imagine that if I'm in one of these cases and maybe it's going to go to litigation and maybe my strategy is I'm going to send over to the other attorneys, the other side, I'm going to send over just a huge flood of documents. And that makes

Erik:
their job even harder. How do you know that you found the real gold nuggets in there? It seems like it'd be so easy to overlook stuff. It's such a massive volume of information.

John:
It is. And there are a lot of different factors that roll into deciding if and when, and maybe even to a degree what you're looking at. And one of the blind spots that exists within, I think, the e-discovery industry, and I think really from almost anybody that looks at legal through a purely technological lens, we make this presumption that, well, we're doing e-discovery and therefore every piece of evidence and every piece of knowledge needs to come out of this database. and we're doing this totally blind, right? That it's a single source of everything. And in reality, you do have humans, right? You still have a human that's going to get on the phone and be like, oh, no, this is the key guy, so it's going to help you filter on that. So this kind of concept of document review and that kind of document analysis, there's a review of the documents in that kind of a serial way to say whether it's related to the case or not. And that's kind of review for obligation and to decide what goes out versus what gets retained. But then there's the other aspect of, yeah, there's this expensive nonsense that's going on over here, but I also need to go win my case. Right. So when you are trying to go in there and win your case and find those other things, it's a question of it's a different mindset that you approach and to a degree a different tool set. And in e-discovery, it's a different way that we apply the AI technology
between those two different tasks.

Erik:
Okay, I want to get to that soon here. I think Mike had a question you want to jump in with.

Mike:
Yeah, I just wanted to ask if, with all this data, modern law firms need sort of the equivalent of data engineers. You know, people building pipelines.

John:
Oh, boy. People have been trying to answer the question of what law firms need for a long time. A lot of people have come up with both things they think law firms need and things law firms actually need. Lawyers, I think, are a very unique social group in that they don't always adopt the things that are good for them in all instances very quickly. So the legal profession generally – and it depends on the sector, right? I mean, you're going to have certain areas where law firms and lawyers are highly incentivized to use technology that makes them the most efficient and the most effective, right? So think about a plaintiff's firm. Somebody that you hire because – think about a random company. Let's just say you went out and worked for Hallmark, okay? And you go work for Hallmark and you get fired. Well, Hallmark's going to have a big law firm and they've got all kinds of documents. You've got a small one-man-shop law firm, right? Or you've got me, for example, and I'm just one dude. And suddenly here come, you know, two million documents that all relate to the policies related to something or other. It's like, right now I'm at a strategic disadvantage. And historically, I've always been at a disadvantage because I just have fewer resources, both in terms of humans and in terms of technology. But that smaller guy, he's going to take that on. The bigger guys, there's almost power in the inefficiency, and there's power to be wielded through that scale. So a large firm and large institutions that are looking at this kind of thing, whether it's with malicious or specific intent or not, are still going to look at this in terms of: we still have an advantage. We want to retain that advantage by sticking to potentially inefficient means of getting this done, so that we can continue to have leverage over smaller people on the other side.

Erik:
Oh, interesting. I think there's kind of a nuts and bolts question underneath Mike's question, which is like,
how do you get all this information into your database? Like, how does it get transferred? How does it get handed over? Are there tools and technologies? Do you have to hire people? You have specialized roles just to sort of manage the massive movement of documents for every single case you take on. What's it look like? 


John:
Absolutely. The collection and preservation of data has always been a real thorn, practically and legally. Luckily, that is actually getting easier, while at the same time getting harder and more complicated. When I started, in almost any case, whether you're talking about international corruption or a dispute between two neighbors about whether the fence got put in the right spot, when you get down to an individual level, we're always going to be talking about: hey, we're going to grab your email. We're going to grab your text messages. We're going to grab your loose documents. We're going to grab a catch-all of crap that lives on the web, and then we're going to talk about paper. Where things get hard is, how do you take that email and get it from where it is today over to where I am now, and then into my magic pipeline that puts it into the database? Before, that was a lot of interfacing with client technology staff, and that was another role that I played, and one that I played fairly well, because I can speak through the lens of legal but also through the lens of technology. And there are kind of bulk approaches. You can either do something as ham-fisted as just going in and collecting PSTs from everybody and then searching everything. But that sucks. Or, depending on the case, you may be able to do something that's more targeted, yet every bit as defensible in the eyes of the court. So you and I can sit down and hop on a Zoom meeting, and I'll just sit there and you and I'll talk through your case, and I'll help you folder your email, compress all your stuff, get it all over to me, so the timelines and everything like that work.

John:
You've been supervised. I know that you didn't hide the ball and therefore that's going to be good enough. Therefore, I'm only collecting what I need. But really, the move to 365 has been the biggest change for a positive, I think, in terms of companies being able to respond to litigation. You're talking about

Erik:
Microsoft tools in particular. Is that right? Exactly. Exactly.

John:
And it's not necessarily that Purview, or whatever they're going to call the email export suite today (the name changes just constantly), is good, but at least everybody is in kind of a uniform system. So if I call up a client, nine times out of ten, they're going to tell me, oh, we're on 365, or we're in Google Workspace. And I'm no longer dealing with, well, we still run GroupWise and it's behind these different kinds of systems. But there was this vendor that came by a few years ago and he sold us this email archiving system. And it's been running. We don't really know what it is, but you can go look at that. And then you realize, oh, well, you guys set that up outside the spam gateway, so that's actually like 20 terabytes of mostly spam. Had that happen. Had that happen before. Yeah, there's a lot of complicated stuff. And so even as some technologies have made things easier, like email has gotten easier, other technologies have become more difficult, like short messaging, right?

Erik:
Teams

John:
and stuff like this. Slack. Slack. Because the volume is high. The data formats are cloud-based. They're definitely not documents, you know? And all of these database systems, and all of these models that we have for taking those kinds of native things that we interact with day to day and turning them into what gets waved around in a courtroom: things like text messages are just fundamentally different. And there's different kinds of metadata and different things that you've got to think about. Like, how do you decide? If you've got a Word document, you know where a Word document starts and ends. But if you and I are texting in a chain, how do I split that up? Do I do it on a time basis? Do I do it on an arbitrary basis? It kind of depends. You don't really necessarily know. And from a practical perspective, you get in there and you've got a couple choices.

John:
I could either put every text message I've ever had with my wife into one single PDF, and then we could pay an associate to go draw boxes on the screen. That doesn't sound very fun. Or break it up individually and do it page by page. I mean, it's just an exercise in compromise and suffering.

Erik:
That's interesting. So it's like the idea we have of these things, they don't cleanly mesh onto the metaphor of this is a document in a court case.

John:
That's right. That's right. And our methods of communication are moving so much faster than what I would almost call the waste management system of discovery, right? The software and the platforms aren't necessarily equipped to handle the new stuff that's coming in. It's finally pretty great at email, and it's pretty great at handling Word documents. We finally got there in, you know, call it 2010.

Erik:
And then the world changed.

John:
Oh, yeah. But the world had already changed, and the lawyers were like, hooray, we finally got Word documents figured out, and, you know, we've moved on. But I had instances where, you know, you've got to produce everything in PDF. Well, here's a PDF that has a 3D model in it. Oh, okay. Yeah.

Erik:
Yeah. Well, what do you do with that?

John:
So it's, yeah, it's layers.

Erik:
So you've mentioned LLMs a few times and I reached out to you because of the people I know whose work I'm a little bit familiar with. You seem like you've been working with LLMs longer than almost anybody I've talked to. When did you start using these tools and what were the early problems you started trying to apply them to, if you remember?

John:
Yeah, I've had a long path with LLMs, but also AI generally. In legal, we've been trying to leverage predictive algorithms and predictive machine learning techniques for quite some time to make us more accurate and more precise. But those are very different than large language models. And that's part of the thing. I think almost anybody now out there talking to anybody in AI, you get a lot of the more seasoned folks that have been in machine learning and took statistics, high-level statistics, before the Obama administration, and they'll talk about how they've been doing AI forever. And yeah, that's true. I've been, I guess, to a degree, doing AI forever, but really there's two kind of generations. There's that, and then there's LLMs.

John:
So LLMs kind of entered my world just like it did just everybody else whenever ChatGPT, I think 3.5, came out. And I was fairly skeptical at first. I kind of got in and I got in an argument with ChatGPT. And, you know, I found it to be – I put it through the paces that a lot of people did. And I was like, oh, wow, this is really smart. And then I'd push it on legal topics and I'd be like, oh, okay, I can see how that's a, you know, that'd pass a bar exam question. I can see that. But let me push it a little bit harder. And then as soon as I really started working inside of my circle of competence, that was when the shine kind of started to wear off. And that was at the early stages where people weren't necessarily sure what it was going to be good for or not good for. And I got kind of busy and kind of just put all that on the shelf for a little while. That was when you had people like Michael Cohen, the former attorney for the – I guess he's – I've been fairly out of the news. I'm assuming he's

Erik:
Oh, is it Trump's fixer?

John:
That was what they called him? Yes. Trump's fixer? Yes, exactly.

John:
Exactly. he decided that he was going to do a little bit of legal research and had cited, you know, some cases that didn't exist because of the yell and hallucinated. And, you know, and of course, you know, as soon as I saw these things, just like, oh, it's only a matter of time before some bozo comes along and ruins this for everybody. And lo and behold, it happened to be that bozo in particular, which was like, wow, how can we get Anthony Weiner involved in this too? So I kind of let it sit, and then I had – the following fall, I had been asked by the local bar, who I do a lot of speaking for, to give a talk on LLMs.

John:
And I was still kind of grouchy, and I was still pretty skeptical about it. And I actually said, heck yeah, this will give me an excuse to get in here. I'll play with some of this stuff. And in a thing that's fairly typical, I think, for me and a lot of people with that kind of hacker mindset and was definitely reflected in the first couple of years once I was out there in the hacker community was, oh, well, this is great. Now I'm here to show that I'm smarter than these bozos and show everybody that this stuff is nonsense and needs to go away. And I got in, and instead of working with ChatGPT, I started working with Claude. I knew nothing about Claude other than it happened to be the LLM that was in part backing Notion's AI system,
which I started using at work.

John:
I'd been impressed. And I'm like, well, this is interesting. This is at least a little bit more interesting than what I was getting out of ChatGPT six, eight months ago. So I started playing with Claude. And I started truly from scratch. I think my first thread with Claude really started with "Hi there." At that phase, really, there's a lot of stuff out there now about prompt engineering and a lot of materials out there to get you going. There were some things out there at that time. But really, for the most part, it was raw experimentation. Nobody really knew the way that things were going to work or how they were going to work. A lot of trial and error, a lot of experimentation. For example, Claude would do things like, if I tried to write a Python script with it, it would confidently come back at the time and say, oh, well, this code ought to work. I spun up a Docker VM and I ran it and it should be fine. So just go ahead and copy and paste it. And definitely at that time, and this is how raw I started, I'd go in and be like, well, let me double check: is that a thing Claude can do? And I'd check their actual documentation and be like, is this a feature that has been rolled out? And no, that at that point was still mostly science fiction.

Erik:
No, it's not, right? It's just confidently saying, I ran this code, you can trust it.

John:
Oh, yeah. No, to the point that if I went back in my search history, shit, I was trying to figure out what version of Python Claude was running, so that I could make sure I was running the same consistent environment. Which also was a good illustrator for me at the beginning of how confident it can be and, to a degree, how misleading it can be. So I continued to work with it, and I continued to be more kind of amazed over time. I'd just sit there at night chatting with it on my phone, and I got into a conversation with Claude that was interesting enough that I hit the daily limit. And it's like, well, you're done until tomorrow, or you could pay $20 and keep this going. And I sat there probably over the course of a couple nights, and I'm like, man, $20. $20, that's a lot. Hell, Netflix is nine. And I'm like, okay, well, I got this talk. I'll go ahead and throw 20 bucks at it. I'll treat it like a business expense. It'll be fine. But then I just kept going, kept going. And then I started to see what it would do, because it kept impressing me and challenging me. And I found that as I talked to it, it would engage in conversations and it was more steerable. And one of the biggest moments for me was when, just through experimentation, I was like, well, let me just take some raw output from NMAP. You guys know what NMAP is, the network port scanner?

Erik:
Maybe. Sort of. I mean, yeah, we do, but give a quick little description.

John:
Yeah, it's a basic network security tool, or network engineering tool, I guess. From your computer, it'll send out kind of like sonar. It'll reach out and figure out what ports are open on a computer on the other end. And it'll give you a little report, you know, your IP addresses and what happened. And I opened up a new Claude thread and I said, hey, Claude, I just found this on a laptop. There was a laptop sitting in the office and I found this. I'm going to paste this in. You tell me what the hell this is. And it interpreted it line by line. I was like, wow, that's really good. And this was also at a time when one of the memes everybody would do is, you know, let's have it do something in a funny way.

John:
I go like, well, why don't you narrate that log file, essentially, in the form of calling a horse race, like you're calling the Kentucky Derby.

John:
It did it. And I was like, that's hilarious, but also, oh my God, accurate. So then I pulled the usual, well, explain it like I'm a five-year-old. I'm like, oh, that actually works really well for a five-year-old. And then I said, huh. And I did something that was uncomfortable. I said, why don't you do a report on this from a digital forensics perspective and an incident response perspective and tell me what happened here and what people inside of management might need to be concerned about? And it did it instantly. And it was good.
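A minimal sketch of the kind of prompt John describes, using the Anthropic Python SDK since he was working in Claude. The model name and the exact report framing are illustrative assumptions, not what he ran; the point is just that raw tool output goes in, an audience-specific narrative comes out, and a human still has to verify it against the underlying data.

```python
# Minimal sketch: ask an LLM to turn raw nmap output into an incident-style summary.
# Assumes `pip install anthropic` and ANTHROPIC_API_KEY set; the model name is an assumption.
import anthropic

def summarize_scan(nmap_output: str) -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    prompt = (
        "I found this scan output on a laptop in the office. "
        "Explain it line by line, then write a short report from a digital forensics "
        "and incident response perspective: what happened, and what should management "
        "be concerned about?\n\n" + nmap_output
    )
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model name
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text

# Example: print(summarize_scan(open("scan.txt").read()))
# The output is a draft; an analyst still reviews it against the raw scan.
```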

Erik:
Tell me why that was uncomfortable.

John:
Well, I say it's uncomfortable. I think it's something that a lot of people walk up to the AI with, because they have this fear of, is it going to replace me,

Erik:
right?

John:
Or is it going to take my job? And you walk up with a sense, with that kind of sense that I had initially of, I'm going to walk up and I'm going to poke it and I'm going to find reasons to avoid this scary thing. Taking that raw output and turning it into human readable and human understandable prose or explanation was something that set me apart from the lawyers. It's what set me apart from the digital forensics people.


Erik:
 That was your specialty, right?

Erik:
This is a differentiator for you. Yeah,

John:
and not necessarily a specialty. It was a core to my being thing that made me unique and special and made me feel like it was what I brought to the world and it was what I brought to the situation. Wow. Okay.

Erik:
Right?

John:
Yeah, yeah. So what happens is I sit there and I watch this robot that is either free or $20 do my job. Not only just my job, but what made me great at my job, better than me. And that's a point where you do kind of sit down and say, shit. Now what? Because on one hand, you've got this sense of fear, but then you also get loaded up with this, oh my God, I might have just been the first person to figure this out. How many other people have figured this out?

Erik:
Well. And

John:
how quickly do I need to figure this out before I am made worthless? Or do I need to go and raise alpacas? You know, it's that kind of a moment.

Erik:
Can I ask you, because when you're talking about the Python script, it came back and said to you, I ran this in a Docker container and it works, and you're trying to figure out a Python version, and there's the potential for confident yet inaccurate answers. So it did this particular job really well, but you were able to review it. If it can do your job without you, and you're wondering, wow, did this thing just replace me, it's hard for me to imagine that you wouldn't need someone to review that output to make sure that it's correct, to make sure that it's appropriate.

John:
I think that's absolutely the case. And that's still going to be the case, I think, going forward for a lot of different things. You're going to want the human in the loop. You're going to want those kind of human connections. You're going to want those kinds of organic things, those organic components added to things over time, but there's a lot of stuff that we can take care of that's kind of first level review, right? Think about that, think about back to e-discovery and that kind of document by document kind of thing, you know, you're going to want a human to sit there and craft the argument that you're going to file as the complaint, right? Or your big briefs. But is there a reason that you need to be, because we didn't talk about how much we pay those people to review documents at 60 documents per hour, right? I mean, if you're paying an associate at a law firm, you're paying potentially the same as you would for them sitting at trial, looking pretty, right? So, you know, $350 an hour. You can do the math on that. Everybody's better served by delegating that right now to something that is faster, more accurate, everything like that, so we can focus on a lot of those other kind of human element kinds of deals. So when I came out of that, that was a little bit of my conclusion was, yeah, this thing can do this better than I can, this core thing. But what can I take away from this is, well, I can now take this and I'm going to A, make sure that my next step in the path is being able to drive this robot better than anybody else.

John:
I'll be first there. I want to replace somebody else before somebody replaces me.
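As a rough illustration of what that first-level review pass could look like (not any specific vendor's workflow or John's own setup), here is a sketch that sends each document to an LLM, asks for a responsive or not-responsive call plus a confidence and a one-line reason, and routes the low-confidence calls back to a human reviewer. The OpenAI client, model name, and threshold are all assumptions for the sake of the example.

```python
# Rough sketch: first-pass responsiveness review with an LLM; humans handle the close calls.
# Assumes `pip install openai` and OPENAI_API_KEY set; model name is illustrative.
import json
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are doing first-level document review in a trade-secrets case. "
    "Classify the document as RESPONSIVE or NOT_RESPONSIVE to the issues described, "
    "give a confidence from 0 to 1, and one sentence of reasoning. "
    'Reply as JSON: {"label": ..., "confidence": ..., "reason": ...}'
)

def review(document_text: str, issues: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Issues: {issues}\n\nDocument:\n{document_text}"},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

# Anything below a chosen confidence threshold (say 0.8) goes back into the human
# review queue; the point is triage, not removing the attorney from the loop.
```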

Erik:
I was thinking you started all this work because you were going to give a talk to your local bar association, you said. I'm curious what

Erik:
that talk, how it was received. So what was the message you gave to that room and what was the feedback you got from the attorneys in that room? Were they as scared as you were? Did you tell them this thing's coming for a chunk of your job? You better be ready. What did you tell them?

John:
Well, seeing that moment and then continuing to work on that talk, the talk ended up being the big capstone. So once I saw that, I approached the talk from the aspect of ethics, legal ethics, and AI. You can imagine there's not a shortage of content discussing AI and legal, right? But there's a real shortage of stuff that is hands-on, that's really informed by knowledge and really deep research and work. And I took that long period and went back into the principles. And what I ended up presenting was kind of a model for evaluating each LLM technology and how effective it can be, whether it can be relied on, when it can be relied on, based on, in addition to things like statistics, what people actually see. Because we get so wrapped up in verifiability that we can often lose sense of the fact that you can just look at something and tell if it's true or not, accurate or not.

Mike:
Eric and I have been having this discussion with some friends of ours about sort of the point at which human judgment becomes important in certain processes.

Mike:
I think we have sort of varying takes on it, but it seems like it's really important in the legal profession. So I was looking in the news yesterday. I saw this company that is trying to automate the whole process of granting patents or at least doing the research for patents. And my sense of that is, okay, that's acceptable because the consequences of making a mistake are relatively low risk. But it seems like there's inevitably going to be this push toward, you know, let's let the AIs adjudicate cases or let's, you know, for small claim stuff, let's just have the AI do it. Do you have any thoughts on like where we should stop letting the computers make decisions?

John:
I don't know that there is a good stopping point. I think more these days in terms of trends and waveforms than points and definitions for lots of different reasons. But the way you put it is very, very apt. The more that you are delegating a decision and the more you're delegating something that's going to have an impact on somebody else, the more oversight and the more human involvement needs to be there.

John:
There's a sense, I think, especially from people with a technical background, that the law is an imperfect system. Take the whole jury system, right? I mean, we find facts and we define truth, to a degree, through the lens of 12 random people that couldn't figure out how to get out of jury duty. And that method of finding truth and fact, for people that live their lives in a world that's defined at the lowest level by zero or one, is kind of hard to swallow. And it's also one where it can be tempting to say, we do want to delegate things like the finding of facts to something like an algorithm.

John:
even without discussing things like bias and all of those other things,

John:
I think it's important that we always maintain that human element, especially when it is having a human impact. There's an element, even if you take a look at, viscerally, red light cameras. Red light cameras have been around and have had different legal treatments depending on where you are, because there's no person there, you know? I'm getting this ticket and I'm supposed to go into court and I'm supposed to face my accuser, and my accuser is this camera that just took a picture. In lots of jurisdictions, people just said, constitutionally, that's not how we roll. So even if we correct for all of the issues of bias and stuff like that, it's something that makes me fundamentally uncomfortable, and I think it should make a lot of people uncomfortable as well. One of the challenges when you're talking about AI broadly is that the way that you apply the technology is so different, and it can mean so many different things. One of the places that I had the privilege of sitting in the past year was at a thing called the Sedona Conference, which is kind of a think tank for legal technology and has been around for a long time. A lot of judges, a lot of people from industry, and a lot of thought leaders sitting around having good discussions at a place where you're not there

John:
to puff up your ego and go have drinks on the vendor at the end. But you've got people in that room, and it was part of the AI working group. And I watched, and since I had a new job, I didn't have any other email to have to pay attention to, so I was actually fully engaged with the entire room the whole time. And you'd have people talk about the issue of bias in AI and bias in algorithms, which is a big, big, big deal. And you would have people talking back and forth, people issuing a counterpoint, and they would appear to be having a good conversation. But then if you listened closely, you'd realize that one person was talking about AI classification in terms of, like, facial recognition, and the other person was talking about AI in terms of doing document review, right? So the discussion of bias in terms of, is this camera system going to flag somebody as being suspicious, right? That kind of a decision is going to be more impacted by something involving bias historically in that model than, say, me taking ChatGPT and saying, hey, ChatGPT, is this email related to the sale of this house or not? Right?

Erik:
Yeah, that second one sounds easier for me to accept, but I still can't get over this. Maybe it's because I'm reacting to some of the marketing of these tools, or some of the enthusiasm I see where people say, they're just going to get better and better. And they either explicitly

Erik:
say or they're implying eventually they're going to be so good, they'll take our jobs. And that means they're going to work independently. But even in like your discovery example, or I mean, I can make a dumber example. It's like, I have no idea about contract law. I'm not an attorney. I could go and ask LLM to write a contract for me. That's going to be tested someday if I ever have to go and have a court case around it. And at that point, it's almost kind of a worthless document,

Erik:
I don't know the value of that thing unless I have an expert to review the output of the tool.

Erik:
And even with discovery, it seems like you still need someone to review and go, yes, yes, you did discover something that is definitely linked to the case. So you have to have the expert involved. That's my concern with even these sort of more minimal examples. I didn't mean to detour from your point about bias though, John.

John:
No, that's a good transition too, because, especially for those of us in areas where we have specific expertise, you know, lawyers in particular are obviously going to feel squirrely about somebody getting legal advice from ChatGPT,


John:
A therapist is going to get squirrely about somebody talking to ChatGPT about mental health issues. And there are issues that people get worried about there related to accuracy. But there's also issues in there related to, you know, those bigger concerns about taking my job and stuff like that. So it's all kind of wrapped up in there together. The interesting thing is that as these things do get better and better, and as those of us who are professionals do continue to rely on these more and more, it becomes harder to tell people to treat everything that comes out of an LLM as being inherently untrustworthy. The difference here I make is this: I sit here getting ready to launch the e-discovery and AI consulting company. And I'm using ChatGPT to draft all my operating agreements. It's going to handle all of that kind of back office stuff. And I'm very confident in the output for a couple different reasons. One reason is that I'm confident in the techniques that I use, so that I know that when ChatGPT gives me something, what it gives me is informed by good primary sources and is likely to be very much mostly accurate. But I also have that other kind of gloss of, I'm also an attorney, so I can proofread this to make sure that it's not grossly inaccurate. Then we have issues of, if you are a novice, right, or let's say you're somewhere in the middle and you're trying to decide, and you don't know whether or not this is good or it is not. One of the questions in my mind as we evaluate the ethics of this decision is, well, is that information better than the alternative of having no information at all, or what you had to turn to before? So in the legal context, or let's talk about maybe financial advice. Okay. So let's say you're going to go out and buy a car. Are you more likely to get good advice on how to buy a car and how to avoid getting swindled by asking ChatGPT or by calling Joe's Hothouse of Car Loans? Because Joe over at Hothouse of Car Loans, I mean, he's a human. He'll give you something. He'll give you something that's going to land you in bankruptcy. But ChatGPT is going to give you something that's probably pretty good. So it opens up resources to people that were under-resourced before, that gives them information that is extremely close to professional grade and that they can rely on. Now, the other area there that I think is also very touchy, where you've got professionals with very big, serious, and real concerns, is mental health. And in terms of what the LLM gives you in all of it, I think that whether you are a well-resourced person or a person that is turning to ChatGPT for advice, guidance, input on stuff where you might consult a professional, it's going to at least prepare you better for a conversation with that eventual professional. And frankly, it will give you a different perspective, and perhaps a perspective that you hadn't considered. And there are instances where that need of an independent perspective is actually fairly important. Oh, that's interesting. One of the things I've gone through recently, which is a funny experience, is that the reactions have been all over the map. I was diagnosed with ADHD for the first time at 44 years old. So I've got to where I am today while swimming uphill against being neurodivergent and not knowing it. You end up seeing these kinds of things on, like, TikTok, right? And I would see these things on TikTok going back.
It's just like, oh, why is this thing talking to me about ADHD?


Erik:
The algorithm is digesting you.

John:
Well, you know, that's exactly right. But then you also know that, you know, that's what everybody talks about. So

John:
you sure as hell can't walk into your doctor's office, or at least in my mind at that time, for a very long time. It's like, well, I can't walk into my doctor's office and say, can we have a conversation about ADHD because I saw videos on TikTok?

Erik:
There's a lot of credibility in that question.

John:
Exactly. And you feel, and I felt at least, like if I lead off with that, I'm going to get shown the door as being either an idiot or, who knows, even worse: oh, well, this is now drug-seeking behavior, so we're going to put down a mark in his file. It turns out I do have ADHD. It's confirmed by multiple psychiatrists at this point, and my GP. And then as I started telling people about that, either way, though, it would have gotten you in and having that conversation. And I think even if somebody is there right now and they're like, oh, well, TikTok says I might have ADHD, but I don't want to go talk to somebody, I would say that you should go talk to somebody. And if you go to a therapist or your doctor and they say, well, just because you heard about it on TikTok, these things that resonate with you are then not true, so get out, then that's not a doctor and that's not a therapist you should be seeing.

Mike:
So I have a question going back to the LLMs. I've heard this phrase that the LLMs can't tell you things that nobody knows, but they can tell you things that you don't know. So if you're like analyzing something in a legal context and the LLM points at something that you might not have pursued otherwise, does the output of the LLM then become part of the legal record?

John:
No. No, it doesn't. That issue of work product, of who owns the output of that in terms of both ownership but then also who can claim privilege on it: let's say you and I are going to have a meeting, right? I'm your lawyer, and we're going to prep for the same meeting. You and I hop into Claude, and we happen to type in the exact same queries. My conversations, if somebody looked at my computer, well, that's all work product because I'm a lawyer. Well, you might just be out there as Joe Blow. That's not privileged, and the other side can see all of that. There are tricky questions in there, for sure, for sure. But in terms of just the idea, I mean, no. The attorney can take that, and it's that perspective, I think, that attorneys are eventually going to really like with all of this. Because you don't have to just get the perspective of the individual LLM, right? When you know how to drive the LLM, and this is some of the reason why Erik, I think, wanted to talk about the LLMs specifically, you can tune that thing so that in a single thread, you can get multiple perspectives, independent perspectives that are useful. And you can learn tons just by engaging on any given subject and going at it with the specific purpose of gaining additional perspective.

Erik:
So I think that gets to really the last question that I wanted to try to approach. John, this is the hardest one. If you put your crystal ball on the table, what do you think the job of the future attorney is going to look like? Are they going to be using these tools in the future? What's that day-to-day work going to be like that's different from how it is now?

John:
Boy, that is simultaneously a harder question to answer for the legal sector specifically, but also easier to answer than for the rest of the world, I think. The impact of LLMs, and LLMs' ability to act as that kind of co-pilot and act as that accelerator, can level the playing field. Remember when we were talking about the solo plaintiff's attorney taking on the big corporation? It used to be one of the reasons that you would join a big firm was because you needed the resources around you to accomplish the work that you wanted to do, right? You want to go fight corruption, then you've got to go work for a firm that's got enough resources to go fight corruption. And often that could even come with other trade-offs. Well, if I've got to work for a company that does that kind of work, then they also represent the petrochemical industry, which is kind of gross. But I'm really here for the corruption. The technology now truly brings back the potential for people to be

John:
independent counselors to people in a way that it hadn't been before. I think the solos, the solo practitioners, the small firm guys are the ones that are presently the biggest winners in this. I look at big law firms and I look at big industries and big business models. A lot of those big firms are built around models of the billable hour. And that's, I think, maybe one thing that is different about software development than legal is that we still cling so much to that billable hour output. Right now, you want to do big document review. You can scale it at 60 documents an hour per person, and then you've got to go find people. With a language model, I can take a million documents and I can have that GPT review a half million documents over the weekend. And the cost differences are so bananas. I mean, you're talking about 90% reductions in cost to the actual client in terms of document review when you're doing it with an LLM versus having an associate do it at a big firm.
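John's cost point is easy to sanity-check against the figures he cites earlier in the conversation (60 documents per hour, $350 per hour). The LLM-side per-document price below is purely an assumed illustrative number chosen to match his roughly 90% figure, not a quoted rate.

```python
# Back-of-the-envelope comparison using the figures from the conversation (60 docs/hr, $350/hr).
# The per-document LLM cost is an assumed illustrative number, not a real quote.
docs = 1_000_000
human_docs_per_hour = 60
human_dollars_per_hour = 350
llm_cost_per_doc = 0.58  # assumption for illustration

human_cost = docs / human_docs_per_hour * human_dollars_per_hour
llm_cost = docs * llm_cost_per_doc

print(f"Human review:   ${human_cost:,.0f}")                # about $5,833,333
print(f"LLM first pass: ${llm_cost:,.0f}")                  # about $580,000 under the assumed price
print(f"Reduction: {1 - llm_cost / human_cost:.0%}")        # about 90% with these assumptions
```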

Erik:
So you're saying that legal services are going to be a lot less expensive very soon. That's what you're saying.

John:
I think, well, yes and no. Yes, I think that's true. I think that's very true for companies. And I think that's a good thing. It's a good thing for everybody, right? I mean, we're talking about job replacement and kind of role replacement here, and we're talking about big firms and big associates and stuff like this. But also remember what that work looks like. There are a lot of people with a lot of knowledge and a lot of things that they can bring to the world that are currently spending their time reading other people's email. It would be good if we could let those people go do something else, because I think even at a big law firm, those associates will go find something else to do. You know, discovery in legal, as one of my friends put it, is like a gas, right? I mean, it will expand to fill the container that it sits in
regardless.

John:
And that associate's going to find something to do. And he doesn't want to do that job anyway, right?

John:
So legal will be well served, and it's going to be fundamentally changed, especially at the bigger-scale level, by making this good technology available to level the playing field between litigants.

Erik:
I think it's something that we are anticipating. We're hearing a lot about changes in software, law, health, and it's hard not to wonder where things are going to go.

John:
Yeah.

Erik:
Well, John, thank you so much for coming on the show and talking to Mike and me today. It really has been great listening to your experience and listening to you describe how you're using these tools. I think I've learned a lot about law and potential changes to the legal profession in years to come. Really appreciate you coming on and talking to us.

John:
Yeah, absolutely. Thank you very much for having me.

Erik:
So this has been Picture Me Coding with Erik Aker and Mike Mull. We'll see you again next week.

Mike:
See you next time. Bye

