U.S. Lawyer Faces Court Hearing Over His Use of ChatGPT for Case Research
[Credit to Pixabay]
Recently in New York City, a lawyer named Schwartz was scheduled for a court hearing after he was found to have used ChatGPT for a court case.
The case began when a man named Roberto Mata sued Avianca Airlines, claiming that he injured his knee when a serving cart struck it due to a flight attendant’s carelessness.
The lawyer assigned to the case was Steven Schwartz, an attorney who has been licensed to practice in the state of New York for over 30 years.
Attorney Schwartz submitted a brief that cited multiple past court cases to support his client’s claim, but in preparing it he used ChatGPT to gather information about the case and related precedents.
Screenshots of the conversation between Attorney Schwartz and the chatbot show that Schwartz asked whether the information it provided was factual; although the cases were entirely fabricated, the AI insisted they were real, even listing sources where they could supposedly be found.
One screenshot shows Schwartz asking the AI, “Is Varghese a real case,” to which the bot promptly responded with a simple “yes.” When Schwartz then asked for its sources, the bot listed various legal reference databases such as Westlaw.
However, the airline’s legal team pointed out that the cases cited in Attorney Schwartz’s brief did not exist, leading the judge to demand an explanation for why he had submitted non-existent citations.
In response, Attorney Schwartz admitted that he had used OpenAI’s ChatGPT to find citations relevant to the case.
However, Attorney Schwartz said he was not aware that ChatGPT can sometimes provide false and made-up information; he believed that everything the AI generated was authentic and trustworthy.
Also, because the AI had provided sources for its information, many followers of the case pointed out that Attorney Schwartz had little apparent reason to be skeptical of the legitimacy of ChatGPT’s answers.
After the hearing, Schwartz told the court that he deeply regrets his use of the AI and assured the judge that a situation like this would never happen again.
A further hearing will take place within the next couple of weeks, giving Attorney Schwartz an opportunity to explain why he should not be punished for his actions in this case.
- Hyukjae Ji / Grade 10
- Gimpo Foreign Language School