Network Today
    Lawyers who used ChatGPT included fake legal research fabricated by AI chatbot

June 9, 2023 · 5 Mins Read · News

    Two apologetic lawyers responding to an angry judge in Manhattan federal court blamed ChatGPT Thursday for tricking them into including fictitious legal research in a court filing.

    Attorneys Steven A. Schwartz and Peter LoDuca are facing possible punishment over a filing in a lawsuit against an airline that included references to past court cases that Schwartz thought were real, but were actually invented by the artificial intelligence-powered chatbot.

    Schwartz explained that he used the groundbreaking program as he hunted for legal precedents supporting a client’s case against the Colombian airline Avianca for an injury incurred on a 2019 flight.


    The chatbot, which has fascinated the world with its production of essay-like answers to prompts from users, suggested several cases involving aviation mishaps that Schwartz hadn’t been able to find through usual methods used at his law firm.

The problem was that several of those cases weren’t real or involved airlines that didn’t exist.

    Schwartz told U.S. District Judge P. Kevin Castel he was “operating under a misconception … that this website was obtaining these cases from some source I did not have access to.”

    He said he “failed miserably” at doing follow-up research to ensure the citations were correct.

    “I did not comprehend that ChatGPT could fabricate cases,” Schwartz said.

    The ChatGPT app is displayed on an iPhone in New York, on May 18, 2023. A Manhattan federal judge is deciding whether to sanction two lawyers who blamed ChatGPT for tricking them into including fake legal research. (AP Photo/Richard Drew, File)

    Microsoft has invested some $1 billion in OpenAI, the company behind ChatGPT.

Its success, demonstrating how artificial intelligence could change the way humans work and learn, has prompted fears among some. Hundreds of industry leaders signed a letter in May warning that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

    Judge Castel seemed both baffled and disturbed at the unusual occurrence and disappointed the lawyers did not act quickly to correct the bogus legal citations when they were first alerted to the problem by Avianca’s lawyers and the court. Avianca pointed out the bogus case law in a March filing.

    The judge confronted Schwartz with one legal case invented by the computer program. It was initially described as a wrongful death case brought by a woman against an airline only to morph into a legal claim about a man who missed a flight to New York and was forced to incur additional expenses.

    “Can we agree that’s legal gibberish?” Castel asked.


    Schwartz said he erroneously thought that the confusing presentation resulted from excerpts being drawn from different parts of the case.

    When Castel finished his questioning, he asked Schwartz if he had anything else to say.

    “I would like to sincerely apologize,” Schwartz said.

    He added that he had suffered personally and professionally as a result of the blunder and felt “embarrassed, humiliated and extremely remorseful.”

    He said that he and the firm where he worked — Levidow, Levidow & Oberman — had put safeguards in place to ensure nothing similar happens again.

    LoDuca, another lawyer who worked on the case, said he trusted Schwartz and didn’t adequately review what he had compiled.

After the judge read aloud portions of one cited case to show how easy it was to discern that it was “gibberish,” LoDuca said: “It never dawned on me that this was a bogus case.”

    He said the outcome “pains me to no end.”

    Ronald Minkoff, an attorney for the law firm, told the judge that the submission “resulted from carelessness, not bad faith” and should not result in sanctions.

    He said lawyers have historically had a hard time with technology, particularly new technology, “and it’s not getting easier.”

    “Mr. Schwartz, someone who barely does federal research, chose to use this new technology. He thought he was dealing with a standard search engine,” Minkoff said. “What he was doing was playing with live ammo.”

    Daniel Shin, an adjunct professor and assistant director of research at the Center for Legal and Court Technology at William & Mary Law School, said he introduced the Avianca case during a conference last week that attracted dozens of participants in person and online from state and federal courts in the U.S., including Manhattan federal court.

    He said the subject drew shock and befuddlement at the conference.

    “We’re talking about the Southern District of New York, the federal district that handles big cases, 9/11 to all the big financial crimes,” Shin said. “This was the first documented instance of potential professional misconduct by an attorney using generative AI.”

He said the case demonstrated that the lawyers might not have understood how ChatGPT works: it tends to hallucinate, describing fictional things in a manner that sounds realistic but is not.

    “It highlights the dangers of using promising AI technologies without knowing the risks,” Shin said.

    The judge said he’ll rule on sanctions at a later date.
