A lawyer in New York has found himself in trouble with a judge after he submitted legal research that had been created by the artificial intelligence (AI) chatbot ChatGPT.

During a case against an airline being sued over an alleged personal injury, attorneys for the plaintiff filed a brief containing several cases to be used as legal precedent. Unfortunately, as later admitted in an affidavit, the following cases were "found to be nonexistent" by the court:

- Varghese v. China Southern Airlines Co Ltd, 925 F.3d 1339 (11th Cir. 2019)
- Shaboon v. Egyptair, 2013 IL App (1st) 111279-U (Ill. App. Ct. 2013)
- Petersen v. Iran Air, 905 F. Supp. 2d 121 (D.D.C. 2012)
- Martinez v. Delta Airlines, Inc, 2019 WL 4639462 (Tex. App. Sept. 25, 2019)
- Estate of Durden v. KLM Royal Dutch Airlines, 2017 WL 2418825 (Ga. Ct. App. June 5, 2017)
- Miller v. United Airlines, Inc, 174 F.3d 366 (2d Cir. 1999)

The "research" was compiled by lawyer Steven A. Schwartz, an attorney with over 30 years of experience, according to the BBC. Schwartz said in the affidavit that he had not used ChatGPT for legal research before and was "unaware of the possibility that its content could be false".

Screenshots in the affidavit show the attorney asking the chatbot "is varghese a real case", to which the chatbot responded "yes". When asked for sources, it told the lawyer that the case could be found "on legal research databases such as Westlaw and LexisNexis". When asked "are the other cases you provided fake" it replied "No", adding that they could be found on the same databases.

As fun as chatbots may be, or as advanced as they may seem, they are still prone to "hallucination" – perfectly coherent-sounding answers that don't in any way relate to the real world.


Without heavy fact-checking, it's not really a tool you should use when trying to research a legal case that relies on real-world precedent rather than the hallucinations of a spicy autocomplete.

The lawyer wrote that he "greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein" and vows to "never do so in the future without absolute verification of its authenticity".

Both Schwartz and attorney Peter LoDuca, who was not aware that ChatGPT had been used while researching the case, are facing a hearing on June 8 about the incident.