Respected Contributor
Posts: 3,031
Registered: ‎10-22-2018

Two New York lawyers are in hot water after using ChatGPT to search for precedents supporting a client's case against Avianca Airlines. They were excited to discover some relevant new cases to include in their brief.

 

But the judge became suspicious and said the cases sounded like legal gibberish. Research could not uncover any actual evidence the cases existed.

 

Yes, ChatGPT created legal precedents that would give the lawyers exactly what they wanted.

 

The lawyers apologized, but the judge has not yet decided if fines would be appropriate. 

Valued Contributor
Posts: 819
Registered: ‎02-28-2017

This shows the dangers of AI, and in particular ChatGPT. Don't those lawyers read? Information is out there showing how bad this site is. I would caution anyone who toys with the idea of chatting it up to have second thoughts.

Respected Contributor
Posts: 2,337
Registered: ‎08-19-2011

I didn't know there were two; I just read about one in the NYT, and my jaw dropped. The AI cited cases that it made up, and the lawyer never checked. Not one. As a former academic, I find this mind-boggling. It is well known that supposed research papers written by ChatGPT contain totally fictitious citations, which are of course easy to disprove because they cannot be found. There is lazy, and then there is stupid lazy. It sounds like the courtroom was simultaneously horrified, amused, and astonished by this bonehead. Talk about phoning it in....

Trusted Contributor
Posts: 1,943
Registered: ‎07-03-2014

has anyone read what sam altman, the man behind chatgpt, said about it? he said he himself is worried about the things it can do, and he named them. it's scary. 

Honored Contributor
Posts: 13,510
Registered: ‎05-23-2010

Re: Oh That Naughty ChatGPT

[ Edited ]

@freakygirl wrote:

has anyone read what sam altman, the man behind chatgpt, said about it? he said he himself is worried about the things it can do, and he named them. it's scary. 


@freakygirl  Besides Sam Altman, the man known as the godfather of AI, Geoffrey Hinton, has resigned from Google, which paid him 44 million dollars to bring him aboard, in order to speak freely and inform the public about the possible dangers and the need for regulation. Hinton has worked on AI for 50 years and led the way in the research into the neural networks behind its design.

 

"Hinton is best known for his work on a technique called backpropagation, which he proposed (with a pair of colleagues) in the 1980s. In a nutshell, this is the algorithm that allows machines to learn. It underpins almost all neural networks today, from computer vision systems to large language models." Quoted from MIT Technology Review.https://www.technologyreview.com/2023/05/02/1072528/geoffrey-hinton-google-why-scared-ai/



I'll be posting about him soon. Two areas he is most troubled about are military applications and robotics. Even if the U.S. and some other countries come together to regulate, how will unfriendly foreign nations, or those with malicious intent, be kept from misusing it? To his credit, some years ago Hinton refused to work in any capacity on AI for military use and went to Google with his concerns. Google took his fears into consideration and stopped any work in the area of military applications. 

Honored Contributor
Posts: 13,510
Registered: ‎05-23-2010

Re: Oh That Naughty ChatGPT

[ Edited ]

@PickyPicky3  AI is known to confabulate. It's so common that it's been given a name: when AI does this, it is said to be 'hallucinating'. Reports about AI hallucinations are reaching both the industry, which is working to stop them, and the public. OpenAI, the company behind ChatGPT, acknowledges the issue. The lawyers could easily have checked the case law before using the cases, but they failed to do so. This is a cautionary tale for lawyers using ChatGPT. In this instance, one of the lawyers was not versed in how AI works or aware that it might fabricate cases, and the second lawyer simply trusted the first lawyer's inclusion of the fabricated cases generated by ChatGPT. 

Respected Contributor
Posts: 3,031
Registered: ‎10-22-2018

I personally believe if the lawyers knew how to use ChatGPT, it is highly likely they knew what it could do. 

 

Should the judge fine the lawyers for laziness and stupidity?

 

Does the lawyers' client have a case for malpractice?

Respected Contributor
Posts: 2,337
Registered: ‎08-19-2011

@PickyPicky3 wrote:

I personally believe if the lawyers knew how to use ChatGPT, it is highly likely they knew what it could do. 

 

Should the judge fine the lawyers for laziness and stupidity?

 

Does the lawyers' client have a case for malpractice?


He claims he learned about it from his "college-aged children," never used it before, and thought it was a database. This is a quote from yesterday's NYT article:

 

"Mr. Schwartz, who has practiced law in New York for 30 years, said in a declaration filed with the judge this week that he had learned about ChatGPT from his college-aged children and from articles, but that he had never used it professionally.

He told Judge Castel on Thursday that he had believed ChatGPT had greater reach than standard databases."

 

More Better Call Saul than Perry Mason.

Frequent Contributor
Posts: 122
Registered: ‎08-18-2011

   This issue is just one of many problems with AI, and it is happening while AI is still in its infancy. AI is growing much faster than people think. Another critical issue we will be confronted with is the loss of middle-class jobs, jobs that pay between $50k and $250k per year. The middle class plays a critical role in our society and is a key source of economic growth for our economy.

 

   It supports both lower-income and wealthy people. Lower-income people have no money to pay taxes, and the wealthy don't pay taxes, so it is the middle class who keeps government going. If it is reduced or at risk, how will government be funded? Also, credit and credit cards are middle-class tools. Lower-income people can't borrow money, and the wealthy are looking for interest payments from loaning their money to the middle class. So if the middle class is eliminated, society as we know it collapses.

 

   The AI middle-class job elimination process has already begun, and we need to set up a program right away to help those who are being displaced by AI, including the 7,000+ people from IBM and the 4,000+ from Meta. It is now estimated that 141,000 people are unemployed due to AI. Already Google has had 2 employees commit suicide. These IT workers may not be able to find jobs in their fields and will need financial, psychological, and possibly retraining assistance. Finding jobs for them is critical for all of us.