Here’s a thing: some years ago, a guy filed a lawsuit against Avianca Airlines. His name and the reason for the action don’t really matter (Roberto Mata, struck in the knee by a service cart, if you must know).
There’s been a cartload of back-and-forth on this case, giving
job security to the folks at the courthouse who have to keep track of such
things, but in the most recent filing, Mata’s attorney, Steven A. Schwartz
(of Levidow, Levidow & Oberman, if you're asking) argued against Avianca's motion for dismissal. And in the course
of his 10-page document, he cited several similar cases as precedent.
The law loves precedent.
A tiny problem, tho: no one who went looking for those cases could
find them.
Turns out Schwartz had enlisted the help of ChatGPT, the AI tool
OpenAI released last November, for his legal research. Cheaper than
paralegals, I expect. ChatGPT wrote up the argument and cited the cases;
Schwartz looked it over, logged his billable hours, and submitted the filing.
Well, he did some form of due diligence: he asked ChatGPT, “Are
these cases real?”
And ChatGPT nodded, “Yeah, bro—totes real.”
So this is where we are: AI has learned how to lie. It’s at the
stage of a seven-year-old when you ask if they’ve cleaned up their room and
they tell you they have, even as you look past their earnest face and see a
rubbish tip.
I do not know what to make of this, but I’m betting Schwartz
doesn’t deduct any hours from his monthly bill to Mata.