ChatGPT goes to court


I attended a show-cause hearing for two attorneys and their firm who submitted nonexistent citations and then entirely fictitious cases manufactured by ChatGPT to federal court, and then tried to blame the machine. “This case is Schadenfreude for any lawyer,” said the attorneys’ attorney, misusing a word as ChatGPT might. “There but for the grace of God go I…. Lawyers have always had difficulty with new technology.”

The judge, P. Kevin Castel, would have none of it. At the end of the two-hour hearing in which he meticulously and patiently questioned each of the attorneys, he said it is “not fair to pick apart people’s words,” but he noted that the actions of the lawyers were “repeatedly described as a mistake.” The mistake might have been the first submission with its nonexistent citations. But “that is the beginning of the narrative, not the end,” as again and again the attorneys failed to do their work, to follow through once the fiction was called to their attention by opposing counsel and the court, to even Google the cases ChatGPT manufactured to verify their existence, let alone to read what “gibberish” — in the judge’s description — ChatGPT fabricated. And ultimately, they failed to fully take responsibility for their own actions.

Over and over again, Steven Schwartz, the attorney who used ChatGPT to do his work, testified to the court that “I just never could imagine that ChatGPT would fabricate cases…. It never occurred to me that it would be making up cases.” He thought it was a search engine — a “super search engine.” And search engines can be trusted, yes? Technology can’t be wrong, right?

Now it’s true that one may fault some large language models’ creators for giving people the impression that generative AI is credible when we know it is not — and especially Microsoft for later connecting ChatGPT with its search engine, Bing, no doubt misleading more people. But Judge Castel’s point stands: It was the lawyers’ responsibility — to themselves, their client, the court, and truth itself — to check the machine’s work. This is not a tale of technology’s failures but of humans’, as most are.

Technology got blamed for much this day. The lawyers faulted their legal search engine, Fastcase, for not giving this personal-injury firm, accustomed to state courts, access to federal cases (a billing screwup). They blamed Microsoft Word for their cut-and-paste of a bollixed notarization. In a lovely Gutenberg-era moment, Judge Castel questioned them about the odd mix of fonts — Times Roman and something sans serif — in the fake cases, and the lawyer blamed that, too, on computer cut-and-paste. The lawyers’ lawyer said that with ChatGPT, Schwartz “was playing with live ammo. He didn’t know because technology lied to him.” When Schwartz went back to ChatGPT to “find” the cases, “it doubled down. It kept lying to him.” It made them up out of digital ether. “The world now knows about the dangers of ChatGPT,” the lawyers’ lawyer said. “The court has done its job warning the public of these risks.” The judge interrupted: “I did not set out to do that.” For the issue here is not the machine, it is the men who used it.

The courtroom was jammed, sending some to an overflow courtroom to listen. There were some reporters there, whose presence the lawyers noted as they lamented their public humiliation. The room was also filled with young, dark-suited law students and legal interns. I hope they listened well to the judge (and I hope the journalists did, too) about the real obligations of truth.

ChatGPT is designed to tell you what you want it to say. It is a personal propaganda machine that strings together words to satisfy the ear, with no expectation that it is right. Kevin Roose of The New York Times asked Bing’s chatbot (built on the same technology as ChatGPT) to reveal a dark soul, and he was then shocked and disturbed when it did just what he had requested. Same for attorney Schwartz. In his questioning of the lawyer, the judge noted this important nuance: Schwartz did not ask ChatGPT for explanation and case law regarding the somewhat arcane — especially to a personal-injury lawyer usually practicing in state courts — issues of bankruptcy, statutes of limitation, and international treaties in this case of an airline passenger’s knee and an errant snack cart. “You were not asking ChatGPT for an objective analysis,” the judge said. Instead, Schwartz admitted, he asked ChatGPT to give him cases that would bolster his argument. Then, when opposing counsel and the judge doubted that the cases existed, he went back to ChatGPT and it produced the cases for him, gibberish and all. And in a flash of apparent incredulity, when he asked ChatGPT “are the other cases you provided fake?”, it responded as he doubtless hoped: “No, the other cases I provided are real.” It assured him they could be found in reputable legal databases such as LexisNexis and Westlaw, which Schwartz did not consult. The machine did as it was told; the lawyer did not. “It followed your command,” noted the judge. “ChatGPT was not supplementing your research. It was your research.”

Schwartz gave a choked-up apology to the court and his colleagues and his opponents, though as the judge pointedly remarked, he left out of that litany his own ill-served client. Schwartz took responsibility for using the machine to do his work but did not take responsibility for the work he did not do to verify the meaningless strings of words it spat out.

I have some empathy for Schwartz and his colleagues, for they will likely be a long-time punchline in jokes about the firm of Nebbish, Nebbish, & Luddite and the perils of technological progress. All the firm’s associates are now taking continuing legal education courses in the proper use of artificial intelligence (of which there are already plenty). Schwartz has the ill luck of being the hapless pioneer who came upon this new tool when it was three months in the world, and was merely the first to find a new way to screw up. His lawyers argued to the judge that he and his colleagues should not be sanctioned because they did not act in bad faith. The judge has taken the case under advisement, but I suspect he might not agree, given their failure to follow through when their work was doubted.

I also have some anthropomorphic sympathy for ChatGPT, as it is a wronged party in this case: wronged by the lawyers and their blame, wronged by the media and their misrepresentations, wronged by the companies — Microsoft especially — that are trying to tell users just what Schwartz wrongly assumed: that ChatGPT is a search engine that can supply facts. It can’t. It supplies credible-sounding — but not credible — language. That is what it is designed to do. That is what it does, quite amazingly. Its misuse is not its fault.

I have come to believe that journalists should stay away from ChatGPT et al. for creating that commodity we call content. Yes, AI has long been used to produce stories from structured and limited data: sports games and financial results. That works well, for in those cases stories are just another form of data visualization. Generative AI is something else again. It picks any word in the language to place after another word based not on facts but on probability. I have said that I do see uses for this technology in journalism: expanding literacy, for example, and helping people who are intimidated by writing and illustration to tell their own stories rather than having them extracted and exploited by journalists. We should study and test this technology in our field. We should learn from experience what it can and cannot do, rather than misrepresenting its capabilities or perils in our reporting. But we must not have it do our work for us.
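To make that concrete, here is a minimal sketch in Python of what “picking the next word by probability, not fact” amounts to. The prompt, the candidate words, and their probabilities are all invented for illustration; a real model weighs tens of thousands of tokens at every step, but the loop is the same, and nothing in it checks whether the finished sentence is true.

```python
import random

# Hypothetical prompt and next-word candidates, with made-up probabilities.
# A real language model computes such a distribution over its whole
# vocabulary at each step; the principle is identical.
prompt = "The court held that the"
next_word_probs = {
    "airline": 0.40,
    "plaintiff": 0.30,
    "statute": 0.20,
    "treaty": 0.10,
}

def pick_next_word(probs: dict[str, float]) -> str:
    """Sample one candidate word in proportion to its probability."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Every run extends the prompt with a fluent-sounding word. No step
# consults a source of facts or case law; plausibility is the only test.
print(prompt, pick_next_word(next_word_probs))
```

Run it a few times and you get different, equally fluent continuations. That, in miniature, is how a machine can hand a lawyer citations that read perfectly and exist nowhere.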

Besides, the world already has more than enough content. The last thing we need is a machine that spits out yet more. What the world needs from journalism is research, reporting, service, solutions, accountability, empathy, context, history, humanity. I dare tell my journalism students who are learning to write stories that writing stories is not their job; it is merely a useful skill. Their job as journalists is to serve communities and that begins with listening and speaking with people, not machines.


Image: Lady Justice casts off her scale for the machine, by DreamStudio

