Better Call GPT: Comparing large language models against lawyers [pdf] (arxiv.org)
389 points by vinnyglennon 9 months ago | 264 comments



I run a startup that does legal contract generation (contracts written by lawyers turned into templates), and I have done some work on GPT analysis of contracts so that laypersons can interact with and ask questions about the contract they are getting.

In terms of contract review, what I've found is that GPT is better at analyzing a document than generating one, which is what this paper supports. However, I have used several startups' options for AI document review and they all fall apart with any sort of prodding for specific answers. This paper's task looks like it just required locating the relevant section, not having the back-and-forth conversation about the contract that a lawyer and client would have.

There is also no legal liability for GPT for giving the wrong answer. So it works well for someone smart who is doing their own research, just as a smart person could previously use Google to do their own research.

My feeling on contract generation is that in the majority of cases, people would be better served if there were simply better boilerplate contracts available. Lawyers hoard their contracts, and it was very difficult in our journey to find lawyers who would be willing to write contracts we would turn into templates, because they are essentially putting themselves and their professional community out of income streams in the future. But people don't need a unique contract generated on the fly by GPT every time when a template of a well-written, well-reviewed contract does just fine. It cost hundreds of millions to train GPT-4. If $10M were spent simply building a repository of well-reviewed contracts, it would be more useful than spending the equivalent money training a GPT to generate them.

People ask a pretty wide range of questions about what they want to do with their documents, and GPT didn't do a great job with them, so for the near future it looks like lawyers still have a job.


> Lawyers hoard their contracts, and it was very difficult in our journey to find lawyers who would be willing to write contracts we would turn into templates, because they are essentially putting themselves and their professional community out of income streams in the future.

I notice the same thing in other professions, especially those that require a huge upfront investment in education.

For example (at least where I live), there was a time about 20 years ago when architects also didn't want to produce designs that would then be sold to multiple people for cheap. The thinking was that this reduces the market for architectural work. But of course it is easy to see that most people do not really need a unique design.

So the problem solved itself, because the market does not really care: the moment somebody compiles a small library of usable designs and a workable business model, as an architect you can either cooperate to salvage what you can or lose out.

I believe the same is coming for lawyers. Lawyers will live through a harsh period while their easiest and most lucrative work gets automated; the market for their services is going to shrink, and whatever work is left for them will be of the more complex kind that the automation can't handle.


I think you greatly underestimate this group's ability to retain its monopoly position. A huge chunk of politicians are lawyers, and most legal jurisdictions have hard requirements around which work must be performed by a lawyer. These tools may make their practices more efficient internally, but that doesn't mean the value is being passed on to the consumer of the service in any way. They're a cartel, and one with very close relationships with country leadership. I don't see this golden goose going away any time soon.


I think what you are missing is businesses doing what businesses have always been doing: finding a niche for themselves to make a good profit.

When you can hire fewer lawyers and get more work done, more cheaply, and at the same (or better) quality, you are going to upend the market for lawyering services.

And this does not require replacing lawyers. It is enough to equip a lawyer with a set of tools that help them quickly do the typical tasks they are doing.

I work a lot with lawyers, and a lot of what they do is going to be stupidly easy to optimise with AI tools.


Please elaborate with some examples of what legal work you think will be optimized with AI tools.


Lots of corporate and government law work.

It’s just like programmers and artists. As the tools improve, you’ll need fewer, smarter humans.


Sometimes it feels like people look at GPT and think "This thing does words! Law is words! I should start a company!" but they actually haven't worked in legal tech at all and don't know anything about the vertical.


A friend of mine is a highly ranked lawyer, a past general consul of multiple large enterprises. I sent him this paper, he played with ChatGPT-3.5 (not even GPT-4) and contract creation, he said it was 99% fine and then told me he's glad he is slowly retiring from law and is not envious of any up-and-coming lawyers entering the profession. One voice from the vertical.


General Consul? Is he a Roman general?


That’s why he exited the market - it’s tough out there for Roman consuls and other Latin-based professions generally.


People have started companies and succeeded for dumber reasons. Generalized "businessing" skills and placing yourself somewhere in the space where money is made count for much more than actually knowing anything about the specific product beforehand.


That's a very good point.

I would add that sometimes being a newcomer is a benefit. Many times a particular industry is stuck in groupthink, with a shared understanding of what is and isn't possible. It sometimes takes a person with a fresh perspective to upend it. See Elon upending multiple industries by doing essentially exactly that.


It's a logical reaction, at least superficially, to the touted capabilities of Gen AI and LLMs. But once you start trying to use the tech for actual legal applications, it doesn't do anything useful. It would be great if some mundane legal tasks could be automated away--for example negotiation of confidentiality agreements. One would think that if LLMs are capable of replacing lawyers, they could do something along those lines. But I have not seen any evidence that they can do so effectively, and I have been actively looking into it.

One of the top comments on this thread says that LLMs are going to be better at summarizing contracts than generating them. I've heard this in legal tech product demos as well. I can see some utility in that--for example, automatically generating abstracts of key terms (like term, expiration, etc.) for high-level visibility. That said, I've been told by legal tech providers that LLMs don't do a great job with some basic things like total contract value.

I question how the document summarizing capabilities of LLMs will impact the way lawyers serve business organizations. Smart businesspeople already know how to read contracts. They don't need lawyers to identify / highlight basic terms. They come to lawyers for advice on close calls--situations where the contract is unclear or contradictory, or where there is a need for guidance on applying the contract in a real-world scenario and assessing risk.

Overall I'm less enthusiastic about the potential for LLMs in the legal space than I was six months ago. But I continue to keep an eye on developments and experiment with new tools. I'd love to get some feedback from others on this board who are knowledgeable.

As a side note, I'm curious if anyone knows about the impact of context window on contract interpretation; a lot of contracts are quite long and have sections, separated by a lot of text, that nonetheless interact with each other for purposes of a correct interpretation.


I think one of the biggest problems with LLMs is the accountability problem. When a lawyer tells you something, their reputation and career are on the line. There's a large incentive to get things right. LLMs will happily spread believable bullshit.


In fairness some lawyers will too, haha. I take your point, though. Good lawyers care about their reputation and strive to protect it.


Lawyers are legally liable to their clients for their advice, it's a lot more than just reputation and career.



Yeah, lawyers will go away as soon as doctors do.

AI has outperformed radiologists for a while now, and I don't care how much better AI performs, radiologists aren't going away.

Radiologists essentially get to decide the rules of their field.

Why would they vote to kick themselves out of the highest paying job in the world?

Like the medical industry, the legal industry is designed around costing you as much as possible - not really anything related to your benefits.

I just can't see disruption here. The industry will fight it tooth and nail at every chance.


I'm a doctor and I've been following this saga for a while. What you wrote does not match my experience. Firstly, you overestimate our political reach by a large margin. If you can have acceptable service for massively cheaper, it will happen regardless of any lobbying. What _is_ true is that AI systems outperform docs in very select, often trivial situations (e.g. a routine chest X-ray). I don't believe this will shrink the market for radiologists significantly. Secondly, the non-trivial work is exceedingly difficult to automate, because those cases currently have a prevalence-to-complexity ratio that makes them impossible to train for. The US is making great strides towards that kind of thing, though. But it's not available yet.


If you're counting mammograms as routine chest X-rays, AI has been able to outperform radiologists and do it substantially more cheaply since 2019.

Yet this is not an option for patients, and likely won't be in the next 10 years.


> Yet this is not an option for patients, and likely won't be in the next 10 years.

Happening in Austria since 2018-ish: https://contextflow.com/


The radiologist's fee isn't even the majority of the cost of a mammogram in the US. There's a lot more to it that has nothing to do with any radiologist protectionism. Concluding from AI outperforming on mammograms and a few other basic routine modalities that AI "outperforms" radiologists in general is like claiming programmers are obsolete because Makefiles can run build commands better. You don't even need AI for most automation. Somehow you still have a job.

As for chest X-rays (https://pubs.rsna.org/doi/10.1148/radiol.231236), AI has not even demonstrated superiority outside of "routine" cases.

Maybe revisit this in a decade. But your entire argument was that we still have radiologists because of protectionism (and the lawyer case isn't any more load-bearing). That seems a bit uninformed and premature.


> AI has outperformed radiologists for a while now

Refer me to the evidence where AI is outperforming radiologists in the entire body of cross-sectional imaging. Are you seriously claiming that there is AI technology today that can take a brain MRI or CT A/P and produce a full findings and impression without human intervention? You have a reference for that?


Do you work in the medical industry? (To be frank, it sounds like you're spewing BS about a topic you have no intimate knowledge of; it seems rather naive, actually.)


What is a realistic compromise?


“easiest and most lucrative work”

I think this overlooks a big part of how the legal market works. Our easiest work is only lucrative because we use it to train new lawyers, who bill at a lower rate. To the extent the easy stuff gets automated, 1) it's going to be impossible to find work as a junior associate, and 2) senior attorneys will do the same stuff they did last year. If there's a decrease in prices for a while, great, but a generation from now it's going to be a lot harder to find someone knowledgeable, because the training pathway will have been destroyed.


Lawyers are uniquely well equipped to legislate their continued employment into existence.


Kinda? Lawyers are myopic, vain, and we don't really do much to innovate.

We wanted to make sure there would be no cross-pollination between legal advisory services and other professional services in most jurisdictions, but the only thing that division did was significantly restrict our ability to widen our service offerings to provide more value.

The end result is that we protected our little nest egg while our share of the professional services pie has been getting eaten by consulting and multi-service accounting firms for the past 20 years.


Lawyers, yes. Junior associates, not so much.


> I notice the same thing in other professions, especially those that require a huge upfront investment in education.

Doctors, for instance. You hear no end of stories about how incredibly high-pressure medicine is, with insane hours and stress, but will they increase university placements so they can actually hire enough trained staff to meet the workload? Absolutely fkn not, that would impact salaries.


University placements aren’t the problem. Medical residencies are funded through Medicare and have been numerically capped. You could graduate a million MDs a year and if none of them have a training pipeline, we still lose.


So, you will get the template for free. And then a lawyer has to put it on their letterhead, and they will charge you exactly what they charge right now, because that will be made a requirement.


No. As a business owner you will hire a couple of lawyers and give them a bunch of programs to automate searching through texts, answering questions, and writing legalese based on a human description of what the text is supposed to do. In my experience, these three make up the great majority of the work. The two people you hire will now perform like ten people without the tools. Then you will use part of that saved money to reduce prices, and if you are business-savvy, you will use the rest to research the automation further.

Then another business that wants to compete with you will no longer have a choice; they will have to do this or more to stay in business at all.


Lawyers seem to be the prime group to prevent this outcome using some kind of regulation. Many politicians are lawyers.


They were lawyers, they aren't still practicing attorneys representing clients.


They moved into the much more lucrative career of representing corporations.


I have no problem with your logic so long as it is applied uniformly to all of society. By that I mean society must abolish copyright, patents and all forms of intellectual property. Otherwise I'll be forced to defend lawyers in this case. Why can people hoard "IP" but lawyers can't hoard contracts?


I don't think that is the point he is making.

More that lawyers will hoard contracts but there will be financial incentives to defect early (the idea being to sell your contracts while the price is high).

Then that lawyer's contracts will become the mass-produced "good enough" boilerplate and be used widely, driving down the value of hoarding contracts at all.


IIRC about 40% of us politicians are lawyers; unfortunately, I'm sure they will find a way to gatekeep these revenue streams for their peers.


I’m assuming by the use of “us” and “they” you meant US - not that you are a politician.


I recently used Willful to create a will and was pretty disappointed with the result. The template was extremely rigid on matters that I thought should have been no-brainers to express (if X has happened, do Y, otherwise Z) and didn't allow for any kind of property division other than percentages of the total. It was also very concerned with several matters that I don't feel that strongly about, like the fate of my pets.

I was still able to rewrite the result into something that more suited me, but for a service with a $150 price tag I kind of hoped it would do more.


Our philosophy at GetDynasty is that the contract (in our case estate planning documents) itself is a commodity which is why we give it away for free. Charging $150 for a template doesn't make sense.

Our solution, as you point out, is more rigid than having a lawyer write it, but for the majority of people having something that is accessible and free is worth it, and layering services on top makes the most sense. It is easier to have a well-written contract whose features or sections you can "turn on and off" than to have GPT try to write a custom contract for you.


I applaud the efforts to give away free documents like this. That is actually what happens when you have a lawyer do it: you pay pretty much nothing for the actual contract clauses to be written (they start with a basic form and form language for all of the custom clauses you may want), but you pay a lot for them to be customized to fit your exact desires and to ensure that your custom choices all work.

The idea of "legalzoom-style" businesses has always seemed like a bamboozle to me. You pay hundreds of dollars for essentially the form documents to fill in, and you don't get any of the flexibility that an actual lawyer gives you.

As another example, Northwest Registered Agent gives you your corporate form docs for free with their registered agent services.


Interestingly, this is almost exactly how I draft a contract as an attorney. Westlaw has tons of what you might call "templates", with extensive information about when and why a client might need a certain part of the template.

The difference is that when Westlaw presents me with a decision point and I choose the wrong option for my client, my client sues my insurer and is made whole. (My premiums increase accordingly).

If you make the wrong choice in your choose-your-own-legal-adventure, you lose.

(For some contracts, this is probably the right approach).


>>didn't allow for any kind of property division other than percentages of the total.

Knowing someone who works in Trusts & Estates, I can say that is terrible. I've often heard complaints about drafting by percentages of anything but straight financial assets with an easily determined value, because otherwise it requires appraisals. Yes, there are mechanisms to work it out in the end, but it is definitely better to be able to say $X to Alice, $Y to Bob, and the remainder to Claire.

You have to think of not only what you want, but how the executors will need to handle it. We all love complex formulae, but we should use our ability to handle complexity to simplify things for the heirs - it's a real gift in a bad time.


Heh, okay, I guess what I wanted was going to end up as the worst of both: fixed amounts off the top to some particular people/causes, and then the remainder divided into shares for my kids.

I guess there's an understanding that being an executor is a best-effort role, but maybe you could specifically codify that +/-5% on the individual shares is fine, just to take off some of the burden of needing it to be perfect, particularly if there are payouts occurring at different times and therefore some NPV stuff going on.


> like the fate of my pets

Pet trusts [1]! My lawyer literally used their existence, which I find adorable, to motivate me to read my paperwork.

[1] https://www.aspca.org/pet-care/pet-planning/pet-trust-primer


Which is mostly what I feel also happens with LLMs producing code. Useful to start with, but not more than that. We programmers still have a job. For the moment.


Producing code is like producing syntactically correct algebra. It has very little value on its own.

I've been trying to pair on system design with ChatGPT, and it feels just like talking with a person who's confident and regurgitates trivia but doesn't really understand. No sense of self-contradiction, doubt, or curiosity.

I’m very, very impressed with the language abilities and the regurgitation can be handy, but is there a single novel discovery by LLMs? Even a (semantic) simplification of a complicated theory would be valuable.


You said: "However, I have used several startups' options for AI document review and they all fall apart with any sort of prodding for specific answers."

I think you will find that this is because they "outsource" the AI contract document review "final check" to real lawyers based in Utah... so it's actually a person, not really a wholly-AI-based solution (which is what the company I am thinking of suggests in their marketing material).


> I think you will find that this is because they "outsource" the AI contract document review "final check" to real lawyers based in Utah... so it's actually a person, not really a wholly-AI-based solution (which is what the company I am thinking of suggests in their marketing material).

Which company is that? I don't see any point in obfuscating the name on a forum like this.


Fake it till you make it ;-) The company I suspect is "faking it" is https://www.lawgeex.com/ ... as an outsourced legal contract review company, their "valuation" and trajectory would be significantly lower than as an AI contract review company.


> There is also no legal liability for GPT for giving the wrong answer

It was my understanding that there is also no legal liability for a lawyer for giving the wrong answer. In extreme cases there might be ethical issues that result in sanctions by the bar, but in most cases the only consequences would be reputational.

Are there circumstances where you can hold an attorney legally liable for a badly written contract?


I believe all practicing attorneys carry malpractice insurance as well as E&O (errors and omissions) insurance. I think one of those would "cover" the attorney in your example, but obviously insurance doesn't prevent poor Google reviews, nor would it protect the attorney from anything done in bad-faith (ethical violations), or anything else that could land an attorney before the state bar association for a disciplinary hearing.


> I believe all practicing attorneys carry malpractice insurance as well as E&O (errors and omissions) insurance.

Nit: Malpractice insurance is (a species of) E&O insurance.


> It was my understanding that there is also no legal liability for a lawyer for giving the wrong answer.

There is plenty of legal, ethical, and professional liability for a lawyer giving the wrong answer. We don't often see the outcome of these things because, like everything in the courts, they take a long time to get resolved, and also some answers are not wrong, just "less right" or "not really that wrong."


I think the reason you rarely see the outcomes is because those disputes are typically resolved through mediation and/or binding arbitration, not in the courts.

Look at your most recent engagement letter with an attorney. I’d bet that you agreed to arbitrate all fee disputes, and depending on your state you might have also agreed to arbitrate malpractice claims.


Yes. If the drafted language falls below reasonable care and damages the client, absolutely you can be sued for malpractice.

Wrong Word in Contract Leads to $2M Malpractice Suit[1].

[1]https://lawyersinsurer.com/legal-malpractice/legal-malpracti...


I mean, sure, if the attorney is operating below the usual standards of care -- it's exceptionally uncommon in the corporate world, but not unheard of. In the case of AI assistance, you run into situations where a company offering AI legal advice direct to end-users is either operating as an attorney without licensing, or, if an attorney is on the nameplate, they're violating basic legal professional responsibilities by not reviewing the output of the AI (if you do legal process outsourcing -- LPO -- there's a US-based attorney somewhere in the loop who's taking responsibility for the output).

About the only case where this works in practice is someone going pro se and using their own toolset to gin up a legal AI model. There's arguably a case for acting as an accelerator for attorneys, but the problem is that if you've got an AI doing, say, doc review, you still need lawyers to review not just the output for correctness, but also go through the source docs to make sure nothing was missed, so you're not saving much in the way of bodies or time.


It's called malpractice


We're building exactly this for contract analysis: upload a contract, review the common "levers to pull", make sure there's nothing unique/exceptional, and escalate to a real lawyer if you have complex questions you don't trust with an LLM.

In our research, we found out that most everyone has the same questions: (1) "what does my contract say?", (2) "is that standard?", and (3) "is there anything I can/should negotiate here?"

Most people don't want an intense, detailed negotiation over a lease, or a SaaS agreement, or an employment contract... they just want a normal contract that says normal things, and maybe it would be nice if 1 or 2 of the common levers were pulled in their direction.

Between the structure of the document and the overlap in language between iterations of the same document (i.e. literal copy/pasting for 99% of the document), contracts are almost an ideal use-case for LLMs! (The exception is directionality - LLMs are great at learning correlations like "company, employee, paid biweekly," but bad at discerning that it's super weird if the _employee_ is paying the _company_)


That makes sense. How are you approaching the breakdown of what should be present and what is expected in contracts? I've seen a lot of chatbot-based apps that just don't cut it for my use case.


> Just as a smart person could previously use Google to do their own research.

Unfortunately, people stop at step #1: they use Google, and that is their research. I don't think ChatGPT is going to be treated any differently. It will be used as an oracle, whether that's wise or not. That's the price of marketing something as artificial intelligence: the general public believes it.


I'm in the energy sector and have been thinking of fine-tuning a local LLM on energy-specific legal documents, court cases, and other industry documents. Would this solve some of the problems you mention about producing specific answers? Have you tried something like that?


You're welcome to try, but we had mixed results.

Law in general is interpretation. The most "lawyerese" answer you can expect is "it depends." Technically, in the US everything is legal unless it is restricted, and then there are interpretations of what those restrictions are.

If you ask a lawyer whether you can do something novel, chances are they will give a risk assessment as opposed to a yes-or-no answer. Their answer typically depends on how well they think they can defend it in a court of law.

I have received answers from lawyers before that were essentially "Well, it's a gray area. However, if you get sued, we have high confidence that we will prevail in court."

So outside of the more obvious cases, the actual function of law is less binary and more a gradient of defensibility, weighted by the confidence of the individual lawyer.


I spent a lot of time with M&A lawyers and this is 100% true. The other answer is "that's a business decision".

So much of contract law boils down to confidence in winning a case, or it's a business issue that just looks like a legal issue because of legalese.


How do you think organizations can best use the contractual interpretations provided by LLMs? To expand on that, good lawyers don't just provide contractual interpretations, they provide advice on actions to take, putting the legal interpretation into the context of their client's business objectives and risk profile. Do you see LLMs / tools based on LLMs evolving to "contextualize" and "operationalize" legal advice?

Do you have any views on whether the context window limits the ability of LLMs to provide sound interpretations of longer contracts that have interdependent sections far apart in the document?

Has your level of optimism for the capabilities of LLMs in the legal space changed at all over the past year?

You mentioned that lawyers hoard templates. Most organizations you would have as clients (law firms or businesses) have a ton of contracts that could be used to fine-tune LLMs. There are also a ton of freely available contracts on the SEC's website. And there are companies like PLC, Matthew Bender, etc., that create form contracts and license access to them as a business. Presumably some sort of commercial arrangement could be worked out with them. I assume you are aware of all of these potential training sources, and am curious why they were unsatisfactory.

Thanks for any response you can offer.


Not OP, but someone who currently runs an AI contract review tool.

To answer some of your questions:

- Contract review works very well for high-volume, low-risk contract types. Think SLAs, SaaS agreements... these are contracts commercial legal teams need to review for compliance reasons but hate reviewing.

- It's less good for custom contracts.

- What law firms would really benefit from is natural-language search over their own contracts.

- It also works well for due diligence. Normally lawyers can't review all the contracts a company has. With a contract review tool they can extract all the key data/risks.

- The LLM doesn't need to provide advice. It can just identify whether x or y is in the contract, which improves the review process.

- Context windows keep increasing, but you don't need to send the whole contract to the LLM. You can identify the right paragraphs and send just those (see the sketch after this list).

- Things changed a lot in the past year. It used to cost us $2 to review a contract; now it's $0.20. Responses are more accurate and faster.

- I don't do contract generation but have explored it. I think the biggest benefit isn't generating the whole contract but helping the lawyer rewrite a clause for a specific need. The standard CLMs already have contract templates that can easily be filled in. However, after the template is filled in, the lawyer needs to add one or two clauses. Having a model trained on the company's documents would be enough.
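To make the context-window point concrete, here's a minimal sketch of the paragraph-selection step, assuming OpenAI's Python client; the model names and top-k cutoff are illustrative placeholders, not our actual setup:

```python
# Minimal sketch: answer a question about a long contract without sending
# the whole document. Assumes OpenAI's Python client; model names and the
# top-k cutoff are placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

def answer(contract_text: str, question: str, k: int = 5) -> str:
    # Split the contract into paragraphs and embed each one.
    paragraphs = [p.strip() for p in contract_text.split("\n\n") if p.strip()]
    para_vecs = embed(paragraphs)
    q_vec = embed([question])[0]

    # Rank paragraphs by cosine similarity to the question.
    sims = para_vecs @ q_vec / (
        np.linalg.norm(para_vecs, axis=1) * np.linalg.norm(q_vec)
    )
    top = [paragraphs[i] for i in np.argsort(sims)[-k:][::-1]]

    # Send only the most relevant paragraphs, not the whole contract.
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Answer strictly from the contract excerpts provided."},
            {"role": "user",
             "content": "Excerpts:\n" + "\n---\n".join(top)
                        + "\n\nQuestion: " + question},
        ],
    )
    return resp.choices[0].message.content
```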

Hope this helps


Thanks. Appreciate your feedback.

Do you think LLMs have meaningfully greater capabilities than existing tools (like Kira)?

I take your point on low stakes contracts vs. sophisticated work. There has been automation at the "low end" of the legal totem pole for a while. I recall even ten years ago banks were able to replace attorneys with automations for standard form contracts. Perhaps this is the next step on that front.

I agree that rewriting existing contracts is more useful than generating new ones--that is what most attorneys do. That said, I haven't been very impressed by the drafting capabilities of the LLM legal tools I have seen. They tend to replicate instructions almost word for word (plain English) rather than draw upon precedent to produce quality legal language. That might be enough if the provisions in question are term/termination, governing law, etc., but it's inadequate for more sophisticated revisions.


Didn't try Kira, but we tried zuva.ai, which is a spin-off from them. We found that the standard LLM performed at the same level for the classification we needed. We didn't try everything, though. They let you train their model on specific contracts, and we didn't do that.

For rewriting contracts, keep in mind that you don't have to use the LLM to generate the text entirely from scratch. It is helpful if you can feed all the contracts of that law firm into a vector DB and help them find the right clause from previous contracts. Then you can add the LLM to rewrite the template based on what was found in the vector DB (rough sketch below).
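A rough sketch of that flow, assuming Chroma as the vector DB and OpenAI for the rewrite step; the collection name, clause text, and model are made up for illustration:

```python
# Minimal sketch: retrieve a firm's own precedent clauses from a vector DB,
# then have an LLM adapt the closest match. Chroma and the model name are
# assumptions for illustration, not any particular product's stack.
import chromadb
from openai import OpenAI

chroma = chromadb.Client()
clauses = chroma.create_collection("precedent_clauses")

# Index clauses pulled from the firm's past contracts
# (Chroma embeds documents with its default embedding function).
clauses.add(
    ids=["nda-017", "msa-042"],
    documents=[
        "Each party shall keep the other party's Confidential Information...",
        "Either party may terminate this Agreement upon thirty (30) days' notice...",
    ],
)

def draft_clause(request: str) -> str:
    # Find the precedent clause closest to the lawyer's description.
    hit = clauses.query(query_texts=[request], n_results=1)
    precedent = hit["documents"][0][0]

    # Ask the LLM to adapt the precedent rather than invent new legalese.
    llm = OpenAI()
    resp = llm.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": f"Rewrite this precedent clause to fit the request.\n"
                       f"Precedent: {precedent}\nRequest: {request}",
        }],
    )
    return resp.choices[0].message.content

print(draft_clause("termination for convenience with 60 days' notice"))
```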

Many lawyers still just use folders to organize their files.


As I've said before, one of my biggest concerns with LLMs is that they somehow manage to concentrate their errors in precisely the places we are least likely to notice: https://news.ycombinator.com/item?id=39178183

If this is dangerous with normal English, how much more so with legal text.

If a lawyer drafts the text, there is at least one human with some sort of intentionality and some idea of what they're trying to say when they draft it. With LLMs there isn't.

(And as I say in the linked post, I don't think that is fundamental to AI. It is only fundamental to LLMs, which despite the frenzy, are not the sum totality of AI. I expect "LLMs can generate legal documents on their own!" to be one of those things the future looks back on our era and finds simply laughable.)


Hi hansonkd,

I'm working on Hotseat - a legal Q&A service where we put regulations in a hot seat and let people ask sophisticated questions. My experience aligns with your comment that vanilla GPT often performs poorly when answering questions about documents. However, if you combine focused effort on squeezing GPT's performance with product design, you can go pretty far.

I wonder if you have written about the specific failure modes you've seen in answering questions from documents? I'd love to check whether Hotseat handles them well.

If you're curious, I've written about some of the design choices we made on our way to creating a compelling product experience: https://gkk.dev/posts/the-anatomy-of-hotseats-ai/


Thanks for the response. I will check it out.

Specific failure modes can be something as simple as extracting beneficiary information from a trust document. Sometimes it works, but a lot of the time it doesn't, even with startups whose AI products are specifically for extracting information from documents. For example, the output will have an incomplete list of beneficiaries, or, if there are contingent beneficiaries, the model won't know what to do. Not even a hard question about the contingency: just making a simple list, with percentages, of the distribution if no one dies.

Further trying to get an AI to describe the contingency is a crap shoot.
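For what it's worth, the target output is easy to specify, which is what makes the failures frustrating. A minimal sketch of the extraction task, assuming OpenAI's JSON mode; the schema and field names are made up:

```python
# Minimal sketch of the beneficiary extraction these tools keep fumbling.
# Assumes OpenAI's JSON mode; the schema and field names are made up.
import json
from openai import OpenAI

client = OpenAI()

PROMPT = """Extract every beneficiary from this trust document as JSON:
{"beneficiaries": [{"name": "...", "share_percent": 0,
                    "contingent_on": null}]}
"contingent_on" is null or a short description of the contingency.
Shares must sum to 100 for the case where no one has died."""

def extract_beneficiaries(trust_text: str) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content": PROMPT + "\n\n" + trust_text}],
    )
    return json.loads(resp.choices[0].message.content)
```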

While I expect these options to get better and better, I have fun trying them out and seeing what basic thing will break. :)


Thanks for the response! I'm not familiar with Trust documents but I asked ChatGPT about them: https://chat.openai.com/share/c9d86363-b64a-4e44-9fd4-1d5b18...

If the example is representative, I see two problems: a simple extraction of information that is laid out in plain sight (the list of beneficiaries), and reasoning to interpret the section on contingent beneficiaries and connect it to facts from other parts. Is that correct?

If that's the case, then Hotseat is miles ahead when it comes to analyzing regulations (from the civil law tradition, which is different from the US), and dealing with the categories of problems you mentioned.


Your post is very interesting. Thanks for sharing.

If your focus is narrow enough, vanilla GPT can still provide good enough results. We narrow down the scope for GPT and ask it to answer binary questions. With that we get good results; a rough sketch of what I mean follows.
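Something like this; the checklist, model name, and exact prompt are placeholders rather than our actual setup:

```python
# Minimal sketch of scoping the model down to binary checks.
# The checklist and model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

CHECKS = [
    "Does the contract contain a limitation of liability clause?",
    "Does the contract allow termination for convenience?",
    "Is there an automatic renewal provision?",
]

def review(contract_text: str) -> dict:
    results = {}
    for question in CHECKS:
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system",
                 "content": "Answer with exactly 'yes' or 'no'."},
                {"role": "user",
                 "content": f"{question}\n\nContract:\n{contract_text}"},
            ],
        )
        answer = resp.choices[0].message.content.strip().lower()
        results[question] = answer.startswith("yes")
    return results
```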

Your approach is better for supporting broader questions. We support that as well and there the results aren’t as good.


Thanks for reading it! I agree that binary questions are easy enough for vanilla GPT to answer. If your problem space fits them - great. Sadly, the space I'm in doesn't have an easy mode!


"So It works well for someone smart who is doing their own research."

That's a concise explanation that also applies to GPTs and software engineering. GPT-4 boosts my productivity as a software engineer because it helps me travel the path quicker. Most generated code snippets need a lot of work, because when I prod it for specific use cases it fails. It's perfect as an assistant, though.


>There is also no legal liability for GPT for giving the wrong answer. So it works well for someone smart who is doing their own research, just as a smart person could previously use Google to do their own research.

How is that good for the end user? Malpractice claims are often all that is left for a client after an attorney messes up their case. If you use a GPT, you wouldn't have that option.


I launched a contract review tool about a year ago[1].

Legal liability is an issue in several countries, and contract generation can be too. If you are providing whatever is defined as legal services and are not a law firm, you will have issues.

[1] legalreview.ai


Thanks for the link.

> If you are providing whatever is defined as legal services and are not a law firm, you will have issues.

That is a big reason why we haven't integrated AI tools into our product yet. Currently our business essentially works as a free product that is the equivalent of a "stationery store": you are filling out a blank template, and what happens is your responsibility. This has a long history of precedent, since for decades people could buy these templates off the shelf and fill them out themselves.

Giving our users a tool to answer legal questions opens a can of worms, like you say. We decided that the stationery-store templates are a commodity and should be free (even though our competitors charge hundreds for them), so we make money providing services on top of them.


"There is also no legal liability for GPT for giving the wrong answer."

I mean, I get your point, but let's be real: I cannot count the number of times I sat in a meeting, looked back at a contract, and wished some element of it had a different structure. In law there are a lot of "wrong answers" someone could foolishly provide, but far more often it's a question of how "wrong" the answer is rather than a binary bad/good piece of advice.

I personally feel the ability to have more discussion about a clause is extremely helpful, versus getting a hopefully "right answer" from a lawyer while counting the clock/$ as you try to wrap your head around the advice you're being given. If you have deep pockets, you invite your lawyer to a lot of meetings, they have context, and away you go... but for a lot of people, you're just involving the lawyer briefly and trying to avoid billable hours. That's been me at the early stage of everything, and it's a very tricky balance.

If you're a startup trying to use GPT, I say do it, but also use a lawyer. Augmenting the lawyer with GPT to save billable hours, so you can turn up to a meeting with your lawyer and extract the most value in the shortest time, seems like the best play to me.


You can read my other reply which agrees with you that law is a spectrum rather than a binary.

> I cannot count the number of times I sat in a meeting, looked back at a contract, and wished some element of it had a different structure.

The only way to have something "bulletproof" is to have experience of the ways in which it can go wrong. It's just like writing a program: the "happy path" is rather obvious, but then you have to come up with all the different attack vectors and use cases in which the program can fail.

The same goes for lawyers. Lawyers at big firms have the experience of the firm to guide them on what to do and what they should include in a contract. A small-town family lawyer might have no experience in what you ask them to do.

Which is why I advocate for more standardized agreements as opposed to one-off generated agreements (whether by GPT or a lawyer). Think of the Y Combinator SAFE: it made a huge impact on financing because it was standardized and there were really no terms to negotiate, compared to the world before, in which the terms of convertible notes were complex and had to be scrutinized and negotiated.

> Augmenting the lawyer with GPT to save billable hours, so you can turn up to a meeting with your lawyer and extract the most value in the shortest time, seems like the best play to me.

The issue is that a lot of lawyers have a conflict of interest and a "not invented here" way of doing business. If you have a trust written by one lawyer, for instance, and bring it to another lawyer, the majority of lawyers we talked to would actually prefer to throw out the document and use their own. This method works well if you are a smart, savvy person, but the population at large has some crazy and weird ideas about how the law works, and people need to be talked out of what they want into something more sane.

Another common lawyer response besides "it depends" is "Well, you can, but why would you want to?" So many people have a skewed view of what they want, and part of a lawyer's job is interpreting what they really want and guiding them along that path.

So the hybrid method really only works if you find a lawyer who accepts whatever crazy terms you came up with and is willing to work with what you generated.


That's all very reasonable, thanks for taking the time to reply!

When I suggest going down a hybrid path, I mostly mean use GPT on your own (disclose this to your lawyer at your own risk) as a means to understand what they're proposing. I've spent so many hours asking clarifying questions about why something is done a certain way, and most of that is about understanding the language and trying to rationalize the perspective the lawyer has taken. I feel I could probably have done a lot of that on my own time, just as fast, if GPT had been around during those moments, and then, of course, confirmed at the end that my understanding aligned with the lawyer's.

I need to be upfront: I really don't know that I'm right here... it's just a hunch and a gut reaction to how I'd behave in the present moment, but I find myself using AI more and more to get up to speed on issues that are beyond my current skill level. This makes me think law is probably another area where I can augment my own disadvantages, in that I have a very limited understanding of the rules and the exceptional scenarios that might come up. I also often find myself on the edge of new issues, trying to structure solutions that don't exist or are intentionally different from present solutions... so that means a lot of explaining to the lawyer and subsequently a fair bit of back and forth on the best way to progress.

It's a fun time to be in tech, I'm hoping things like GPT turn out to be a long term efficiency driver, but I'm fearful about the future monetization path and how it'll change the way we live/work.


If you need a GPT to explain your lawyer's explanations to you, you need a new attorney.


Eh, no... I mean, maybe... I honestly feel the issue is me. I always want a lot of detail, and that can become expensive. Sometimes the detail I want is more than I needed, but I don't know that until after I've asked the question.


If your attorney is not adequately explaining things to you or providing you with resources to understand things he does not need to spend his time explaining to you, then you need a new attorney.


In other words, LLMs are great examples of the 80/20 rule.

They're going to be great for a lot of stuff. But when it comes to things like the law, the other 20% is not optional.


So the world needs 1/5 as many attorneys? Or 1/100? How will six-figure attorneys replace that income?


> If $10m was just spent building a repository of well reviewed contracts

What's your objection to Nolo Press? They seem to have already done that.


That was more directed towards people who are trying to train AIs to be a competitor to Nolo. Lots of document repositories exist, but they won't work with you if you want to sell legal contracts yourself. I have seen a lot of startups raise money to try to build an AI solution to this, but the results haven't been great so far.


> It works well for someone smart who is doing their own research. Just like if you are smart you could use google before to do your own research.

That's a trap: if you don't have prior expertise, you can't distinguish plausible-sounding fact from fiction. And if you think you are "smart", research (afaik) shows you are easier to fool, because you are more likely to think you already know.

Google finds lots of mis/disinformation. GPTs are automated traps: they generate the most statistically likely text from ... the Internet! Not exactly encouraging. (Of course, it really depends on the training data.)


I'd like to know your company, and talk to you about GPT as it applies to legal - this is the most disruptive space available to GPTs, and it's being poorly addressed.

>>>"Laywers hoard their contracts and it was very difficult in our journey to find lawyers who would be willing to write contracts we would turn into templates because they are essentially putting themselves and their professional community out of income streams in the future.

This is why Lawyers must die.

If "letter of law" is a thing - then "opinions" in legal parlance, should be BS.

We should have every SC decision eviscerated by GPTs

And any single person saying "You need a lawyer to figure this out" is a grifter and a charlatan.

--

Anyway - I'd like to know more about your startup. I'd like to know what you do for legal DMS, a la Hummingbird for biotech. There are so many real applications of GPT to legal troves, such as auto-indexing/summarizing and extracting parties/contacts/dates, that GPTs make legal searching all the more powerful, and ANY legal person in any position telling you anything bad about computers keeping them in check is a fraud.

The legal industry is the most low hanging fruit for GPTs.


Are you using GPT-3 + RAG by any chance?


This is one of the domains I'm very, very excited about LLMs helping me with. In 5-10 years (even though this research paper makes me feel it's already here), I would feel very confident chatting for a few hours with a "lawyer" LLM that has access to all my relevant tax/medical/insurance/marriage documents and would be able to give me specialized advice and information without billing me $500 an hour.

A wave of (better) legally informed common people is coming, and I couldn't be more excited!


I wouldn't blindly trust what the LLM says, but I take it that it would be mostly right, and that would at the very least give me explorable vocabulary that I can expand on my own, or keep grilling it about.

I've already used some LLMs to ask questions about licenses and the legal consequences of software-related matters, and they gave me a base without having to involve a very expensive professional for what are mostly questions about hobby things I'm doing.

If there were a significant amount of money involved in the decision, though, I would of course use the services of a professional. These are the kinds of topics where you can't be "mostly right".


I don't understand how everyone keeps making this mistake over and over. They explicitly just said "in 5-10 years".

So many people continually use arguments that revolve around "I used it once and it wasn't the best and/or messed things up", and imply that this will always be the case.

There are many solutions already for knowledge editing, there are many solutions for improving performance, and there will very likely continue to be many improvements across the board for this.

It took ~5 years from when people in the NLP literature noticed BERT and knew the powerful applications that were coming until the public at large was aware of the developments via ChatGPT. It may take another 5 before the public sees the developments happening now in the literature hit something in a company's web UI.


> It took ~5 years from when people in the NLP literature noticed BERT and knew the powerful applications that were coming until the public at large was aware of the developments via ChatGPT. It may take another 5 before the public sees the developments happening now in the literature hit something in a company's web UI.

It also may take 10, 20, 50, or 100 years. Or it may never actually happen. Or it may happen next month.

The issue with predicting technological advances is that no one knows how long it'll take to solve a problem until it's actually solved. The tech world is full of seemingly promising technologies that never actually materialized.

Which isn't to say that generative AI won't improve. It probably will. But until those improvements actually arrive, we don't know what those improvements will be, or how long it'll take. Which ultimately means that we can only judge generative AI based on what's actually available. Anything else is just guesswork.


I'm concerned that until they do improve, we're in a weird place. For example, if you were 16, would you go and invest a bunch of time and money to study law with this prospect hanging over your future? Same for radiology: would you go study that now that Geoffrey Hinton has proclaimed the death of radiologists in 3 years or whatever? Photography and filmmaking?

My concern is that we're going to get to a place where we think the machines can just take over all the important professions, but they're not quite there yet. However, people won't bother learning those professions because they look like career dead ends, and then we just end up with a skill shortage and mediocre services; when something goes wrong, you just have to trust that "the machine" was correct.

How do we avoid this? Almost like we need government-funded "career insurance" or something like that.


I'm not so sure that truth and trustworthiness are something we can just hand-wave away as things they'll sort out in a few more years. I don't think a complex concept like whether or not something is actually true can just be tacked onto models whose core function is to generate what they think the next word of a body of text is most likely to be.


On the other hand, the rate of change isn't constant, and there's no guarantee that the incredible progress of the past ~2 years in the LLM/diffusion/"AI" space will continue. As an example, take computer gaming graphics: compare the evolution between Wolfenstein 3D (1992) and Quake 3 Arena (1999), which is an absolute quantum leap. Now compare Resident Evil 7 (2017) and Alan Wake 2 (2023): it's an improvement, but nowhere near the same scale.

We've already seen a fair bit of stagnation in the past year, as ChatGPT gets progressively worse while the company focuses more on neutering results to limit its exposure to legal liability.


Yes, again, it's very strange to see a simple focus on one particular instance from one particular company taken to represent the entire idea of the technology in general.

If windows 11 is far worse in many metrics than windows XP or Linux, does that mean that technology is useless?

It's one instance of something with a very particular vision being imposed. Windows 11 being slow because it reports several GB of user data in the first few minutes of interaction with the system does not mean that all new OSes are slow. Similarly, some older tech in a web UI (ChatGPT) for generative AI producing non-physical data does not mean that all multimodal models will produce data unsupported by physics. Many works have already shown that a good portion of the problems in GPTs can be fixed with different methods, stemming from ROME, RL-SR, sheaf NNs, etc.

My point isn't even that certain capabilities may get better in the future, but rather that they already are better now, just not integrated into certain models.


>ChatGPT gets progressively worse

https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboar... In blinded human comparisons, newer models perform better than older ones.


That website doesn't load for me, but anyone who uses ChatGPT semi-regularly can see that it's getting steadily worse if you ever ask for anything that borders on the risque. It has even refused to provide me with things like bolt torque specs because of risk.


It could be a bias; that's why we do blinded comparisons, for a more accurate rating. If we have to consider my opinion, since I use it often: no, it hasn't gotten worse over time.


Well I can't load that website so I can't assess their methodology. But I am telling you it is objectively worse for me now. Many others report the same.

Edit - the website finally loaded for me, and while their methodology is listed, the actual prompts they use are not. The only example prompt is "correct grammar: I are happy", which doesn't do anything at all to assess what we're talking about: ChatGPT's inability to deal with subjects which are "risky" (where "risky" is defined as "Americans think it's icky to talk about").


There are no preselected prompts; humans ask the models (blindly) some questions in a chat and then select the best answer for them.


Worse is really subjective. More limited functionality on a specific set of topics? Sure. More difficult to trick into getting around said topic bans? Sure.

Worse overall? You can use ChatGPT 4 and 3.5 side by side and see an obvious difference.

Your specific example seems fairly reasonable. Is there liability in saying bolt x can handle torque y if that ended up not being true? I don't know. What if that bolt causes an accident and someone dies? I'm sure a lawyer could argue that case if ChatGPT gave a bad answer.


I wouldn't blindly trust what a lawyer says either, so there's no difference there.


Sure, but you have a lot less personal risk following advice from a lawyer vs. advice from an LLM.


When your GPT is wrong, you will be laughed out of the room and sanctioned.

When your attorney is wrong, you get to point at the attorney and show a good faith effort was made.

Hacks are fun, just keep in mind the domain you're operating in.


"When your attorney is wrong, you get to point at the attorney and show a good faith effort was made."

And possibly sue their insurance to correct their mistakes.


But you’ll have to find a lawyer that specializes in suing lawyers and their own malpractice plans.

Maybe that’s where legal AI will find the most demand.


Can't a tech firm running a "legal GPT" carry insurance?


No. Malpractice insurance operates at the level of the professional. There could be lawyers using a legal ChatGPT, but the professional liability would still rest with the licensed professional.


Well, I guess since it's not "practice" we gonna call it "mal-inference insurance".


More legal malpractice? No, because they aren't attorneys and you cannot rely upon them for legal advice such that they'd be liable to you for providing subpar legal advice.


Why? Because there's no word for "insurance of AI advice accuracy"? The whole point of progress is that we create something that is not a thing at the moment.


No, because, like I said, GPTs are not legally allowed to represent individuals, so they cannot obtain malpractice insurance. You can make up an entirely ancillary kind of insurance; it does not change the fact that GPTs are not legally allowed to represent clients, so they cannot be liable to clients for legal advice. Seeing as you think GPTs are so useful here, why are you asking me these questions when a GPT should be perfectly capable of providing you with the policy considerations that underlie attorney licensing procedures?


That was the point of my comment - no ability to collect the insurance.


Do they have a license to practice law?


What if your lawyer is using ChatGPT? :D

https://www.forbes.com/sites/mollybohannon/2023/06/08/lawyer...


I like the term "explorable vocabulary." I can see using LLMs to get an idea of what the relevant issues are before I approach a professional, without assuming that any particular claim in the model's responses is correct.


The only problems are that it could be convincingly wrong about anything it tells you and that it isn't liable for its mistakes.


This is an area for further development and thought...

If a LLM can pass the bar, and has a corpus of legal work instantly accessible, what prevents the deployment of the LLM (or other AI structure) to provide legitimate legal services?

If the AI is providing legal services, how do we assign responsibility for the work (to the AI, or to its owner)? How do we insure the work against errors and omissions?

More practically, if you are willing to take on the responsibility yourself, is the use of AI going to save you money?


A human that screws up either too often or too spectacularly can be disbarred, even if they passed the bar. They can also be sued. If a GPT screws up, it could in theory be disbarred. But you can't sue it for damages, and you can't tell whether the same model under a different name is the next legal GPT you consult.


Agreed - which is why this is an area that needs more thought and development


> If a LLM can pass the bar, and has a corpus of legal work instantly accessible, what prevents the deployment of the LLM (or other AI structure) to provide legitimate legal services?

The law, which you can bet will be used with full force to prevent such systems from upsetting the (obscenely profitable) status quo.


My comrades in law work too hard, to be fair


To be an attorney you also have to pass an ethics exam and a character and fitness examination.


Re your first point: it's not conscious. It has no understanding. It's perfectly possible the model could successfully answer an exam question but fail to reach the same or a similar conclusion when it has to reason its own way there based on the information provided.


Great point. LLMs will not be great at groundbreaking law... but most lawyers aren't either. That's to say, most law isn't cutting edge. The law is mostly a day-to-day administrative matter.


Careful, there are plenty of True Believers on this website who really think that these "guess the next word" machines really do have consciousness and understanding.


I incline towards your view on the subject, but if you call it guessing you open yourself up to all sorts of rebuttals.


The obvious intermediate step is that you add an actual expert into the workflow, in terms of using LLMs for this purpose.

Basically, add a "validate" step. So, you'd first chat with the LLM, create conclusions, then vet those conclusions with an expert specially trained to be skeptical of LLM-generated content.

I would be shocked if there aren't law firms already doing something exactly like this.
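
For illustration, here's a rough sketch of what such a draft-then-validate pipeline could look like; the complete() helper and the review queue are hypothetical stand-ins, not any particular firm's workflow:

  # Hedged sketch of a draft-then-validate workflow. `complete` is a
  # placeholder for whatever chat-completion call you actually use.
  def complete(prompt: str) -> str:
      raise NotImplementedError("wire up your LLM provider here")

  def draft_analysis(contract_text: str, question: str) -> str:
      # Step 1: the LLM produces a draft answer grounded in the contract.
      prompt = (
          "Answer the question using only the contract below, and "
          "quote the clauses you rely on.\n\n"
          f"CONTRACT:\n{contract_text}\n\nQUESTION: {question}"
      )
      return complete(prompt)

  def submit_for_validation(draft: str, review_queue: list) -> None:
      # Step 2: nothing reaches the client until a human expert,
      # briefed to be skeptical of LLM output, signs off on the draft.
      review_queue.append({"draft": draft, "status": "pending_review"})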


Ah, so have the lawyer do everything the GPT did so the lawyer can be sure the GPT didn't fuck up.


What if they were liable? Say the company that offers the LLM lawyer is liable. Would that make this feasible? In terms of being convincingly wrong, it's not like lawyers never make mistakes...


You'd require them to carry liability insurance (this is usually true for meat lawyers as well), which basically punts the problem up to "how good do they have to be to convince an insurer to offer them an appropriate amount of insurance at a price that leaves the service economically viable?"


Given orders of magnitude better cost efficiency, they will have plenty of funds to lure in any insurance firm in existence. And then replace insurance firms too.


"What if they were liable?"

They'd be sued out of existence.

"In terms of being convincingly wrong, it's not like lawyers never make mistakes..."

They have malpractice insurance, they can potentially defend their position if later sued, and most importantly they have the benefit of appeal to authority image/perception.


All right, what if legal GPTs had to carry malpractice insurance? Either they give good advice, or the insurance rates will drive them out of business.

I guess you'd have to have some way of knowing that the "malpractice insurance ID" that the GPT gave you at the start of the session was in fact valid, and with an insurance company that had the resources to actually cover if needed...


It's funny how any conversation ends with this question unanswered.


Weirdly, HN is full of anti-AI people who refuse to discuss the point being discussed and instead fall back on the same story about a wrong answer they once got. Then they present anecdotal evidence as truth, while there is no clear evidence on whether an AI lawyer is more or less likely to be wrong than a human. Surely an AI can remember more, and it has been shown to pass the bar examination.


"while there is no clear evidence if AI lawyer has more or less chance to be wrong than human."

In the tests they are shown to be pretty close. The point I made wasn't about more mistakes, but about other factors influencing liability and how it would be worse for AI than humans at this point.


> at this point.

This is the key point. Even if we assume the AI won't get better, the liability and insurance premiums will likely become similar in the very near future. There is a clear business opportunity in insuring AI lawyers.


I wonder whether GPTs could come up with legal loopholes, the way they are expected to come up with security vulnerabilities.


We are literally building this today!

Our core business is legal document generation (rule based logic, no AI). Since we already have the users' legal documents available to us as a result of our core business, we are perfectly positioned to build supplementary AI chat features related to legal documents.

We recently deployed a product recommendation AI to prod (partially rule based, but personalized recommendation texts generated by GPT-4). We are currently building AI chat features to help users understand different legal documents and our services. We're intending to replace the first level of customer support with this AI chat (and before you get upset, know that the first level of customer support is currently a very bad rule-based AI).

Main website in Finnish: https://aatos.app (also some services for SE and DK, plus we recently opened in the UK with just an e-sign service)


So, let's say that the chat will work as well as a real lawyer some day.

If the current price is $500 an hour for a real lawyer, and at some point your costs are just keeping services up and running, how big a cut will you take? It's enough to be only a little cheaper than a real lawyer to win customers.

There is an upcoming monopoly problem if the users get the best information from the service after they submit all their documents. And soon the normal lawyer might be competitive enough again. I fear that the future is in the parent commenter's open platform with open models, and that businesses should extract money from other use cases; for a while, you get money based on the typical "I am first, I have the user base" situation. It will be interesting to see what happens to lawyers.


> If the current price is $500 an hour for a real lawyer, and at some point your costs are just keeping services up and running, how big a cut will you take?

Zero. We're providing the AI chat for free (or free for customers who purchase something from us, or some mix of those 2 choices). Our core business is generating documents for people, and the AI chat is supplementary to the core business.

It sounds like you're approaching the topic with the mindset that lawyers might be entirely replaced by automation. That's not what we're trying to do. We can roughly divide legal work into 3 categories:

1. Difficult legal work which requires a human lawyer to spend time on a case by case basis (at least for now).

2. Cookie cutter legal work that is often done by a human in practice, but can be automated by products like ours.

3. Low value legal issues that people have and would like to resolve, but are not worth paying a lawyer for.

We're trying to supply markets 2 and 3. We're not trying to supply market 1.

For example, you might want a lawyer to explain the difference between a joint will and an individual will in a particular circumstance. But it might not be worth it to pay a lawyer to talk it through. This is exactly the type of scenario where an AI chat can resolve a legal question which might otherwise go unanswered.


> It sounds like you're approaching the topic with the mindset that lawyers might be entirely replaced by automation.

That is the cynical future, however, and based on the speed of evolution over the last year, it is not too far away. We humans are just interfaces for information and logic. If the chatbot has the same capabilities (both information and logic, plus natural language), then it will provide full automation.

The natural language aspect of AI is the revolutionary point, less so the actual information it provides. Quoting Bill Gates here: it is revolutionary the way the GUI was revolutionary. When everyone can interact with and use something, it removes all the experts you needed before as middlemen.


Here's an example of what our product recommendations look like:

Given your ownership in a company and real estate, a lasting power of attorney is a prudent step. This allows you to appoint PARTNER_NAME or another trusted individual to manage your business and property affairs in the event of incapacitation. Additionally, it can also provide tax benefits by allowing tax-free gifts to your children, helping to avoid unnecessary inheritance taxes and secure the financial future of your large family.


> Since we already have the users' legal documents available to us as a result of our core business, we are perfectly positioned to build supplementary AI chat features related to legal documents.

Uhh... What are the privacy implications here?!


If you look at the example I posted of our product recommendations, you will see that the GPT-4 generated text contains "PARTNER_NAME" instead of the actual partner name. That's because we've created an anonymized dataset from users in such a way that it's literally impossible for OpenAI to deanonymize them. Of course the same cannot be done if we want to provide a service where users can, for example, chat with their legal documents. In that case we will have to send some private details to OpenAI. Not sure how that will pan out (what details we decide to send and what we decide not to send).

In any case, all startups today are created on top of a mountain of cloud services. Any one of those services can leak private user data as a result of outsider hack or insider attack or accident. OpenAI is just one more cloud service on top of the mountain.
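
For the curious, a toy version of that placeholder substitution might look like the sketch below; it's illustrative only, since a production pipeline needs proper PII detection rather than plain string matching:

  # Toy sketch of pre-submission anonymization: swap known personal
  # details for stable placeholders before any text leaves our servers,
  # keeping the mapping local so replies can be de-anonymized.
  def anonymize(text: str, user: dict) -> tuple[str, dict]:
      mapping = {}
      for placeholder, value in (
          ("PARTNER_NAME", user.get("partner_name")),
          ("USER_NAME", user.get("name")),
      ):
          if value:
              text = text.replace(value, placeholder)
              mapping[placeholder] = value
      return text, mapping  # mapping never leaves your infrastructure

  def deanonymize(text: str, mapping: dict) -> str:
      for placeholder, value in mapping.items():
          text = text.replace(placeholder, value)
      return text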


Or an LLM that helps you spend less. Imagine an LLM that goes over all your spending, knows all the current laws, benefits, organizations, and promotional campaigns, and suggests (or even executes) things like changing electricity provider, changing insurance provider, or buying in bulk from a different shop the stuff you currently get for 4x the price at your local store.


I love this idea. It would be incredibly useful!

I feel LLMs are great at suggestions that you follow up on yourself (if only for sanity checking - but that's nothing you wouldn't do with a human too).


That would not be in the interest of anyone with any real power, so you're going to see tanks on the streets before it happens.


And not just legal either.

I uploaded all of my bloodwork tests and my 23andMe data to ChatGPT and it was better at analyzing them than my doctor was.


Yup. Because the doctor doesn't have time and doesn't give a fuck.

LLMs don't have to compete against the cutting edge of human professional knowledge. They only have to compete against the disinterested, arrogant, greedy, and overworked professionals that are actually available to people in practice. No wonder they're winning.


This is a really interesting use case for me. I've been envisioning a specially trained LLM that can give useful advice or insights that your average PCP might gloss over or not have the time to investigate.

Did you do anything special to achieve this? What were the results like?


I think a lot of startups are working on exactly what you are describing, and honestly, I wouldn't hold my breath. Everyone is still bound by token limits and the two best approaches for getting around them are RAG and Knowledge-Graphs, both of which could get you close to what you describe but not close enough to be useful (IMO).
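
For context, the RAG approach mentioned here boils down to retrieving only the chunks relevant to a question so the prompt fits within the token limit. A bare-bones sketch, with naive word overlap standing in for the embedding similarity a real system would use:

  # Bare-bones RAG sketch: chunk the document, score chunks against
  # the question, and prompt with only the top matches.
  def chunk(text: str, size: int = 500) -> list[str]:
      words = text.split()
      return [" ".join(words[i:i + size])
              for i in range(0, len(words), size)]

  def top_chunks(question: str, chunks: list[str], k: int = 3) -> list[str]:
      q = set(question.lower().split())
      return sorted(chunks,
                    key=lambda c: -len(q & set(c.lower().split())))[:k]

  def build_prompt(question: str, document: str) -> str:
      context = "\n---\n".join(top_chunks(question, chunk(document)))
      return f"Using only these excerpts:\n{context}\n\nQuestion: {question}"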


This does not make sense to me. ChatGPT is completely nerfed to the point where it's either been conditioned or trained to provide absolutely zero concrete responses to anything. All it does is provide the most baseline, generic possible response followed by some throwaway recommendation to seek the advice of an actual expert.


The way to get around this is to have it "quote", or at least try to quote, from input documents. Which is why RAG became so popular. Sure, it won't write you a contract, but it will read one back to you if you've provided one in your prompt.

In my experience, this does not get you close to what the top-level comment is describing. But it gets around the "nerfing" you describe.
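
As a hedged example, a quoting instruction along these lines tends to help, since fabricated clauses become easy to spot once you check the quotes against the source (the variable values are placeholders):

  # Illustrative quoting instruction; contract_text and question are
  # placeholders to be filled in with the user's actual document.
  contract_text = "..."  # the full contract, pasted into the prompt
  question = "Can either party terminate without notice?"

  prompt = (
      "Only make claims you can support with a verbatim quote from the "
      "contract below. After each claim, include the quote and its "
      "section number. If the contract does not address the question, "
      "say so instead of guessing.\n\n"
      f"CONTRACT:\n{contract_text}\n\nQUESTION: {question}"
  )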


It's very easy to get ChatGPT to provide legal advice based on information fed in the prompt. OpenAI is not censoring legal advice anywhere near as hard as they are censoring politics or naughty talk.


That's because it's just advice, not legal advice. Legal advice is something you get from an attorney who represents you. It creates a liability relationship between you and the attorney for the legal advice they provide.


Sure, we can call it "advice" instead of "legal advice", or we can even call it other names, like "potato", if that's what you want. My point is that the potato is not censored.


Feel free to miss the point as much as you want. You can call it a baked potato then!


I will reserve judgment on the possibilities of LLMs as applied to the legal field until they are tested on something other than document/contract review. Contract review, in the large-business law case, is often outsourced to hundreds of recent graduates and acts more like proofreading, with minimal application of actual lawyering skills, to increase a corporation's bottom line.

The more common lawyering, for individual purchasers of legal services, is going to be family law matters, criminal law matters, and small claims court matters. I cannot see a time in the near future where an LLM can handle the fact-specific and circumstantial analysis required for felony criminal litigation, and I see nothing that would imply LLMs can even approach the individualized, case-specific and convoluted family dynamics required for custody cases or contested divorces.

I'm not unwilling to accept LLMs as a tool an attorney can use, but outside of rote legal proofreading I don't think the technology is at all ready for adoption in actual practice.


"and I see nothing that would imply LLMs can even approach the individualized, case specific and convoluted family dynamics required for custody cases or contested divorces."

Humans are pretty bad at this. Based on the results, it seems the judges' personal views and emotions are a large part of these cases. I'm not sure what they would look like without emotion, personal views, and the case law built off of those.


The worse judges are at being perfectly removed arbiters of justice, the more room there is for lawyers to exploit things like emotions and human connections with those judges, and thus the worse LLMs will be at that part of the job. A charismatic lawyer backed by an LLM will be much better than an LLM alone.

At least until the LLMs surpass humans at being charismatic, but that would seem to be its own nightmare scenario.


> At least until the LLMs surpass humans at being charismatic

Look into "virtual influencers". Sounds like you should find it interesting.


> judges' personal views and emotions are a large part of these cases

That's a completely separate question. We're talking about automating lawyers, not judges. (to be a good lawyer in such a situation, you would need to model the judge's emotions and use them to your advantage. Probably AIs can do this eventually but it's not easy or likely to happen soon)


Well, judges are a subset of lawyers. And interactions with judges are a large part of being a successful lawyer, as you point out.


I lead data and AI at a tech-enabled commercial insurance brokerage. We have been leveraging GPT-4 to build a deep contract analysis tool specifically for insurance requirements. My teams at Google also built several LLM solutions to support Google's legal team, from patent classification to discovery support.

Language models are great at digesting legalese and analyzing what's going on. However, many legal applications revolve around pretty important decisions that you don't want to get wrong ("am I contractually covered by my current insurance program?"). Because of that, we've built LLM products in the legal space with the following principles in mind:

- Human-in-the-loop tooling -- The product should be built around an expert using it whenever possible, so decision support as opposed to automation. You still see massive time savings with that in place

- Transparency / citations -- With a human-in-the-loop tool, you need mechanisms to build trust. Whether that's highlighting clauses in the document that the LLM referred to or explaining why a part of the analysis wasn't provided, citing your work is important

- Tuned for precision instead of recall -- False positives (and hallucinations) are especially bad in many of these legal use cases, so tuning the model or prompts for precision helps with mitigation.


Do you have any pointers on how to get GPT-4 to do citations? Is it a prompt like "quote back the passage you are citing" so you can locate and highlight the original?

When you say “tuned for precision” is this your prompt engineering or are you actually fine-tuning GPT-4?

Appreciate the insights.


For RAG applications, starting simply with a reference to the most relevant chunk(s) is helpful in building transparency. A lot of our contract review task is a data extraction one (e.g. extract this type of insurance language and compare against your policy). As such, it's much easier to pinpoint the exact text from the source doc for citation. In our applications, currently, we are doing citations as a postprocessing task as opposed to part of the prompt itself. We're finding that feeding too many instructions to the LLM results in worse responses.

We're not fine-tuning today. Instead, "tuning for precision" is done through prompt chains. A simple example would be returning "I don't know" early on if the document isn't a contract or doesn't have clear insurance requirements in it. We've had success with various guardrail prompts.
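
To make that concrete, a guardrail chain of that shape might look roughly like the following sketch; the prompts and the complete() helper are illustrative, not production code:

  # Precision-oriented prompt chain: cheap guardrail checks run first
  # and bail out early, so the extraction step only ever sees documents
  # it is likely to get right.
  def complete(prompt: str) -> str:
      raise NotImplementedError("your chat-completion call here")

  def review(document: str, question: str) -> str:
      is_contract = complete(
          f"Answer YES or NO only. Is this document a contract?\n\n{document}"
      )
      if "YES" not in is_contract.upper():
          return "I don't know"  # refuse early rather than hallucinate
      has_requirements = complete(
          "Answer YES or NO only. Does this contract contain insurance "
          f"requirements?\n\n{document}"
      )
      if "YES" not in has_requirements.upper():
          return "I don't know"
      return complete(f"{question}\n\nCONTRACT:\n{document}")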

The citation work we did at Google used model internals to highlight text (path integrated gradients). It's also easier to finetune for precision when you have control over the model itself.


While this paper is clearly not without merit, it reads more like an excuse to make bombastic statements about a whole profession or "industry" (perhaps to raise the authors' visibility and sell something later on?). The worst part is that they referenced a single preprint as "previous art", and that document is not about contract review but about legal reasoning in general. (A part of LegalBench is of course "interpretation", and that is built on existing contract review benchmarks, but they could've found more relevant papers.)

Automating legal document review has been a very active field in NLP for twenty years or so (including in QA tasks) and has become far more active since 2017. Kira (and Luminance etc., none of which is LLM-based) are already used quite widely in legal departments and firms around the world, so lawyers do have practical experience with their limitations. But Kira & co. do not measure the performance of the latest and greatest models, and they do not publish transparent benchmarks, so the benchmark results in this paper are a welcome addition in terms of using LLMs.

Still, considering the limited scope of reviewing 10 (!) documents against a single review playbook, they should not have written about "implications for the industry". That is pretentious, and it reveals more about the authors' lack of knowledge of that very industry than about the future of legal services.

If you're interested in the capabilities and limitations, I suggest these informative, but still light reads as well: https://kirasystems.com/science/ https://zuva.ai/blog/ https://www.atticusprojectai.org/cuad


Any particular papers you would recommend? The links are to blogs with lots of papers.


The third one (CUAD) is a single paper, not a blog like the others. I think this paper is still the best in terms of being done by NLP experts who understand the possibilities, and not being just smoke and mirrors. But there are so many papers published in this area nowadays that I might not even notice a new one. The CUAD paper was still based on BERT, so pretraining was needed - that takes a bit more expertise than just prompting GPT-4-32k like in this paper, or feeding prompts back to GPT-4 for another round of refining, or doing RAG. For honest research purposes, "contract review" is not really a good area to approach: the subject field is not standardised, there is no good benchmark yet, and your paper can easily fall into the bad company of snake oil sellers cashing in on average people's visceral hatred of all professions.


I wonder what far-reaching legal argument a bunch of lawyers are going to come up with in order to cripple the tech that threatens their industry?


I think the other way around is going to happen: being a lawyer will now be a lot more expensive, requiring servers doing AI inference, developers, and third-party services.


I wouldn't be so sure. I've worked in the MSP space and law is the most tech averse industry I've ever come into contact with.


I have two attorney brothers.

January 2023 my self-proclaimed smartypants lawyerbro tried to bully me into accepting that ChatGPT wasn't anything special.

I still maintain that he is incorrect.


Lawyers control government, at least in the U.S. Expect laws banning or severely restricting the use of AI in the legal field soon. I expect arguments will range from the dangers of ineffective counsel to "but think of the children" - whatever helps them protect their monopoly.


That’s some cartel level action.


I think it’s more accurate to think of lawyers as a guild. Likewise doctors, accountants, plumbers, and electricians.


A guild that has the inside track on changing the rules for itself.


> whatever helps them protect their monopoly

Ah yes, the story of bad people not wanting their livelihoods taken from them by good tech giants. Seriously, is there no room for empathy in all of this? If you went through law school and likely got yourself into debt in the process, then you're not protecting any monopoly but your means to exist. There are people like that out there, you know.


> Seriously, is there no room for empathy in all of this?

Are you joking?

Do you not empathize with the far, far larger number of people who can't afford adequate legal representation and have no legal recourse?

There are people like that out there, you know!!!!


There seems to be an assumption baked in here that somehow GPT will be "adequate legal representation" and I'm not sure how to get there. An "adequate" source of revenue for OpenAI, I guess.


I gave a specific example of whom I empathize with, and no, I'm not joking. You, on the other hand, point to a different group and say "look, look, there is this group, don't you like them as well?", which is close to the "but what of the children in Africa" argument/deflection.


In general, we should not stall technological progress just to protect jobs. They will find other jobs. This is the way throughout human history.


I'm not advocating anything of this sort. I only reject the typical framing of "bad guys" on one side.


Good point. I expected arguments would range from the dangers of ineffective counsel to "but think of the children". You are correct that I did not anticipate hearing "but think of the poor lawyers".

That's because even lawyers understand that that's an ineffective argument. Lawyers running government have allowed buggy repairmen, secretaries, and telephone operators to be automated out of jobs in the past, and now we're seeing cashiers, call center support staff, and writers have their numbers reduced due to automation.

If lawyers use "but think of the poor lawyers" reasoning to suddenly take a stand and pass laws further guaranteeing their legal monopoly, they would rightfully be called out as selfish hypocrites. I think lawyers know they're better off with the "ineffective counsel", "but think of the children", and similar types of arguments.


Not a reach, it's called "unlicensed practice of law" and it's a crime.


Copyright appears to be the primary attack vector.


Are the arguments submitted to a court (and made publicly accessible) subject to copyright?

I kind of assumed they were in the same space as government documents.


A legal LLM would be significantly crippled without the knowledge stored in the universe of non-legal documents.


You're probably right, but the law and reality often seem to be orthogonal.


Given that the recent legal cases where lawyers used ChatGPT to do research and help write their briefs did not go very well, I'm not sold on all the optimism that's here in the comments.


The technology is fine; the education and literacy about its flaws and limitations among typical nontechnical users is another story.


They were all idiots too cheap to pay for GPT-4. Got caught by hallucinations.


Those were rookie-level mistakes though. Not checking that a *case* exists? Building a small pipeline of generation->validation isn't trivial, but it isn't impossible. The cases you describe seem to me like very lazy associates matched with a very poor understanding of what LLMs do.


Theoretically, the language in laws should be structured similarly to code. It has some logical structure, and thus should lend itself to LLMs more easily than other 'natural language'.

So despite the early news about lawyers submitting 'fake' cases, it is only a matter of time before the legal profession is up-ended. There are a ton of paralegals and others doing 'grunt' work in firms that an LLM can do. These are considered white collar, and will be gone.

It will progress in a similar fashion to coding.

It will be like having a junior partner that you have to double-check, or that can do some boilerplate for you.

You can't trust it completely, but you don't trust your junior devs completely either, do you? It gets you 80% of the way there.


> It will progress in a similar fashion to coding.

I kind of agree with this, but this is why I am confused that I only ever see people (at least on HN) talk about AI up-ending the legal profession and putting droves of lawyers out of work--I never see the same talk about the coding industry being transformed in this way.


I've heard it discussed. A lot more a few months ago when GPT first blew up.

Maybe HN is full of coders who still think themselves 'special' and irreplaceable.

Or maybe, the law profession has a lot more boilerplate than the coding profession?

So legal profession has more that can be replaced?

Coders will be replaced, but maybe not at same rate as paralegals.


Interesting problem space-- I have a culture jamming legal theory this might work for:

What if you had a $99/mo pro-se legal service that does two things: 1) teaches you how to move all of your assets into secure vehicles, and 2) lets you conduct your own defense pro se -- but the point is not to win, it's just to jam the system. If you signal to the opposing party that you're legally bankrupt and then just file motion after motion and make it as excruciating as possible for them, they might just say never mind when they realize it's going to take them 5 years to get through the appeals process.

It's true lawyers don't want to give up their legal documents for a template service -- but honestly, just going to the courthouse and ingesting tons of filings might do the trick. With that strategy in mind you don't really need GREAT documents or legal theory anyway, just docs that comply with court filing requirements. "Yeah, we're def gonna need to depose your housekeeper's daughters at $400/h, and if you have a problem with that I would be happy to have a hearing about it." If enough people did this it would basically bring the legal system to a standstill and give power back to the people.

RIP Aaron Swartz who died fighting for these issues :'(


You would be a vexatious litigant.

https://www.courts.ca.gov/12272.htm


That's different. I'm talking about using it as a defensive mechanism against wealthy individuals and corporations who bully (relatively) poor people knowing they can't afford years of litigation. In theory if you had an AI system like what I'm talking about it could be used the other way though. Honestly if every individual had the ability to go after corporations in the same manner it would even the playing field. Still wouldn't necessarily be vexatious.


What you're describing in 2) sounds a lot like paper terrorism[0]

[0]https://en.wikipedia.org/wiki/Paper_terrorism


So what do you call it when the wealthy and corporations exploit their opponent's inability to afford legal representation? A normal Tuesday in Amerika. Yes, the banality of Tuesday terrorism.


> So what do you call it when the wealthy and corporations exploit their opponent's inability to afford legal representation?

It's called a Strategic Lawsuit Against Public Participation.

It's also illegal in most states.

> A normal Tuesday in Amerika.

We already understood your derogatory outlook without it needing to be literal.

Anyways, that's why the founders gave us the First, Fourth, and Fifth amendments. It turns out, they recognized, the "law" is literally the worst mechanism for discovering truth and managing outcomes.

> Yes, the banality of Tuesday terrorism.

It's terrorism because it has no logical conclusion nor any ability to positively benefit anyone's life, in particular, the person who would wield it. Your equivocation ignores this.


  In all criminal prosecutions, the accused shall enjoy the right ... to have the Assistance of Counsel for his defense -- 6th amendment to US Constitution
When an LLM is more competent than an average human counsel, does this amendment still require assistance of a human counsel?


All the governments in the world would do everything in their power to get people to accept this suggestion as truth and use LLMs instead of human lawyers, especially in criminal defense. Now, why is that? Maybe it's not the technical knowledge that is the most important feature of a lawyer?


Yea, it's their ability to manipulate language and people to bend the letter of the law to suit their specific cases, regardless of any long-term potential societal harm.


All governments of the world, at least in the West, appear to mostly consist of attorneys. I doubt they'd let it happen. It'd be bad for their guild.


You are mixing up important things, and that's probably why you don't see a bigger threat here than the livelihood of a couple of lawyers.

People who attended law school / have a law degree are not necessarily practising law as lawyers/attorneys (or whatever they are called in the given country) while serving as members of the government. But of course, once a marine, always a marine...

On the other hand, the parent was trying to refer to criminal defense (Amendment 6 of US Constitution). That's not something active politicians can participate in (conflict of interest, etc.).

If one has the means to pay for their criminal defense (or family law, anything that truly matters for the average citizen), they tend to be willing to pay a lot and expect some quality of service in exchange. They will certainly not choose ChatGPT, just because that provides the answer faster for long documents.

But a large number of people don't have this kind of money. As long as governments don't have a proper budget for legal aid, they will be inclined to cover the vanishing legal aid budget with false claims that they fulfill "appointment of counsel" with LLM services - while they don't really care about the outcome of the case. (And of course, successful politicians have rarely worked as pro bono counsels before their current job, not that it has any more relevance here.)

Non-Western governments are also interested in having this replacement service - the cost of lawyers is also money there. It's much better for all governments to have chatbots they have reliably trained than less controllable humans.

That's a hypothetical threat for now, but I hope you understand why there is something more behind the Sixth Amendment than "guildthink" and protecting the livelihood of lawyers at all costs. There are lots of countries (and even US states, like Utah) where you can review contracts without being a practising lawyer, like the LPOs mentioned in the paper. So far, that has failed to make the world a better place.

But including LLMs as "counsel" under the Sixth Amendment would mean a very different situation.


They serve different uses.

A lawyer can handle the trial for you and things like that. The LLM can help you with issues of fact, etc. And it could even make stronger privacy guarantees than a lawyer if set up right. (But I doubt that will ever happen.)


Ironically you might be better asking GPT this question.


Yes, because LLMs are not free and still require an expert to verify the output.


That's for a court to decide. It's certainly reasonable to guess that that court case is coming. The only question is how soon.


Talk about a conflict of interest. A company that pushes LLMs for legal work says they work better.

This isn't worth the PDF it wasn't printed on.


I disagree. The company is mentioned multiple times, so the conflict of interest is clearly visible. We also don't complain about Google et al. publishing papers about some of their internal systems and why they help, I hope?


Google isn't trying to sell you their internal systems. This is a bullshit AI hype bubble advert masquerading as an academic paper. Bet you 10 bucks it doesn't get through peer review (or isn't even submitted for peer review).


As some internal systems will be made available as products, they do sell them (maybe).


>Google isn't trying to sell you their internal systems.

They sometimes are.

If you have an issue with the methodology of the paper then all well and good, but "conflict of interest" is pretty weak.

Yes, Google and Microsoft et al. regularly publish papers describing SotA performance of systems they sometimes use internally and even sell. I didn't have to think much before WaveNet came to mind.

Besides, the best performing models here are all OpenAI's.


Agreed. That's most AI research though. They are a mechanism whose entire value proposition lies in laundering externalities.

Not that this isn't exactly what all the big "tech innovation" of the last decade was either. It's depressing and everyone involved should be ashamed of themselves.


I wonder if this could help regular (non-lawyer) people understand legal documents they run into in everyday life. Things like software license agreements, terms of service, privacy policies, release of liability forms, deposit agreements, apartment leases, and rental car agreements.

Many people don't even try to read these because they're too long and you wouldn't necessarily understand what it means even if you did read it.

What if, before you signed something, you could have an LLM review it, summarize the key points, and flag anything unusual about it compared to similar kinds of documents? That seems better than not reading it at all.


We're building exactly this today, for common business contracts.

We're not building for consumers today, because I think it's vanishingly unlikely that you'll, like, pick a different car rental company once you read their contract :) but leases, employment contracts, options agreements, SaaS agreements... all common, all boilerplate with 5-10 areas to focus on, all ready for LLMs!


We have also been building this [1] but struggled to monetize even with hundreds of users and thousands of contracts reviewed. We have been live for about a year.

If you want to share experiences, feel free to reach out.

[1] legalreview.ai


Honestly, that type of consumer use case might actually be relevant once LLMs can do this sort of thing. Certainly, nobody is going to contact their attorney before renting a car, but if this could be integrated into a travel site or something…

You never know how consumer behavior may change when something that was either impossible or impractical becomes very easy.


It feels to me like the law is already a staggering heap of complexity. Isn't using technology going to just enable more of the same, making the situation worse?


On the contrary, it may help to highlight incongruities in the legal domain and provide lawyers with compelling groundwork to make relevant claims.


Or you could take the view that, in fact, this is one of the things LLMs are very good at, ie making sense of complexity.


But the lawyers reading the law won't be the only ones using LLMs. LLMs will also be used to write laws. Then lawmakers will use them to "check" that their 20,000-page law supposedly works. No human can understand the scope of today's laws; how much less will we understand when not even LLMs can understand the laws created 20 years from now?

I'd love to hear that LLMs can be used to trim and simplify complexity, but I don't believe it. They generate content, not reduce it.


Lawyers have been resisting technological advances for years. No industry rejects technological tools as vehemently as the legal industry, arguing that everything has to be judged by people. Even laws that are available online do not even link to the paragraphs that are referenced. All in all, progress is institutionally suppressed here in order to preserve jobs.


> Even laws that are available online do not even link to the paragraphs that are referenced.

It's not lawyers' job to publish the laws online. Lawyers are the ones who would benefit from more easily searchable online laws, as they are the ones whose job is actually to read the laws. That is why there are various commercial tools that provide this functionality, that lawyers pay for. You need to ask your government why public online legal databases are so poor, not your lawyer.


Right. 70% of governmental staff are people with a law education, so I mean "lawyer" in a broader sense.


I also built an AI contract review tool and talked to >100 lawyers. What I found is that lawyers want technological advances, but only if they work 100% of the time.

I also helped lawyers looking for a CLM, and they rejected anything that caused any inconvenience.


Comparing current-generation LLMs, which aren't designed for accuracy, against lawyers has some pretty obvious problems. The fact is that only one of the two is set to make non-linear improvements, and that is the only point that matters in this debate.


It's bizarre how easily we got to the Goodhart's Law event horizon in our comparisons of complex fields to AI models

But this is what happens when industries spend a decade brain-draining academia with offers researchers would be insane to refuse


I would not want a transformer-generated contract, but I would be delighted if a transformer-generated contract were used as input by an actual lawyer and it saved me money.

In my experience current practice (unchanged for the decades I've been using lawyers) is that associates start with an existing contract that's pretty similar to what's needed and just update it as necessary.

Also in my experience, a contract of any length ends up with overlooked bugs (changed sections II(a) and IV(c) for the new terms but forgot to update IX(h)) and I doubt this would be any better with a machine-generated first draft.


Two reasons why it's a bad idea:

1. ChatGPT can't be held responsible; it has no body. It's like summoning the genie from the lamp, and it's about as sneaky, with its hard-to-detect errors.

2. ChatGPT is not autonomous, not even a little; these models can't recover from error on their own. And their input buffers are limited in size and don't work all that well when stretched to the maximum.

Especially the autonomy part is hard. Very hard in general. LLMs need to become agents to collect experience and learn; just training on human text is not good enough, since it's our experience, not theirs. They need to make their own mistakes to learn how to correct themselves.


Where can someone upload a contract and ask the AI questions about it in a secure and private way? It's my understanding that most people and organizations aren't able to do this because it isn't private.


We do this today (securely upload a file and ask questions or summarize it), and part of why we're having early success is that we promise not to train on customer data and we don't run directly on top of OpenAI.

https://casemark.ai


You can try us [1]. During the upload process you can enable data anonymization. It's not perfect though.

We use OpenAI, but they only get segments of a contract, not the full one, and can't connect them.

You get the review via email, and afterwards you can delete the document and keep the review.

[1] legalreview.ai


ChatGPT Enterprise? Or over the API. They state that data from those offerings is not used for training. I'm not a lawyer, but afaik illegally retrieved evidence cannot be used - the "exclusionary rule".


You can host your own LLM - something like Mixtral for example - then you have full control over the information you submit to it.
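
For example, several self-hosted inference servers (vLLM and llama.cpp's server among them) expose an OpenAI-compatible API, so the standard client can be pointed at your own hardware. A sketch, with the endpoint URL and model name as illustrative assumptions:

  # Sketch: point the standard OpenAI client at a self-hosted,
  # OpenAI-compatible server so documents never leave your own infra.
  from openai import OpenAI

  client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

  response = client.chat.completions.create(
      model="mistralai/Mixtral-8x7B-Instruct-v0.1",
      messages=[{"role": "user",
                 "content": "Summarize the termination clause: ..."}],
  )
  print(response.choices[0].message.content)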


GPTs only generate specific answers if they are trained using RLHF to prefer certain answers over others. Won't this mean that coming up with a contract that fits a particular individual's case will require that much more fine-tuning?

Also, how do you reconcile several logical arguments without a solver? Like "all tweedles must tweed" and "if X is a tweedle, it must therefore tweed unless it can meet the conditions in para 12". How can it possibly learn to solve the many such conjunctions that are a staple of legal language?



Lexis has some AI feature built into it:

https://www.lexisnexis.com/en-us/products/lexis-plus-ai.page

I haven't had a chance to test it out, as anyone should be a bit wary of adding more paid features to an already insanely expensive software product!


I must not be reading this paper correctly because it appears that they only used 10 procurement contracts to do this study.

If so, the abstract and title feels misleading.

I’d be more interested in a study done on thousands of contracts of different types. I also have my doubts it would perform well on novel clauses or contracts.


Your reading is correct. Their primary aim was to compare some real lawyers' work against their wonder-prompts, to show off an upcoming service and how much faster it is compared to humans. But they didn't have the funds to ask lawyers to review 15 or 50 or 1000 contracts, just 10 - hence the cheating in the methods and in the conclusions.

This is not about drawing up a benchmark for an industry, let alone validating any useful method in general.

Ten docs reviewed based on a single review 'playbook' (I think it's not in the paper, but probably max 20 questions per contract?) and compared across 3 different providers/roles + LLMs...


For the auditory learners, here is this paper in a summarized audiobook format: https://player.oration.app/1960399e-ccb0-44f6-81f0-870ef7600...


> Cost wise, LLMs operate at a fraction of the price, offering a staggering 99.97 percent reduction in cost over traditional methods.

This seems like a weirdly arbitrary and forced statistic, when "100% reduction" would be just as valid a statement.


100% reduction would mean "literally free", which it isn't.


But LLMs are literally free. This paper just chose to focus on the "prominent" ones.


This is apples and bowling balls. You probably could also replace the CEO and entire executive team with LLMs. And cheaper! Much cheaper!

But, if the stochastic analysis was ... wrong ... who would be left to correct it?


There's a conflict of interest in the paper, as it is mainly a PR piece from Onit:

"Onit Announces Generative AI-Powered Virtual Legal Operations Assistant for In-house Counsel"


Does LPO (Legal Process Outsourcing) mean a paralegal?


They could be paralegals as well, but LPOs are usually just human employees of legal service providers that are not licensed lawyers/law firms. It's jurisdiction-specific who can do this besides lawyers. In the UK and Australia (and probably NZ as well) they can practically be anyone; contract review is not a reserved legal activity like it is in most US states.


We may finally again get affordable access to the rule of law in the United States.


I like that they included the prompt in the paper.


Just as a reminder, the Chief Justice of the United States released his "year in review" addressing the acceptance of LLM/GPT systems within legal practice, noting that it will likely enable access to the law by more and more commoners. The entire six pages are worth the time to read [1].

[1] [PDF] https://www.supremecourt.gov/publicinfo/year-end/2023year-en...


Nice title!

