The New York State Bar Association Task Force on Artificial Intelligence recently issued a report to the NYSBA House of Delegates regarding artificial intelligence. The report provides recommendations and guidelines regarding the legal, social, and ethical implications of artificial intelligence and generative AI ("AI") on the legal profession.
The report is a thorough read on the history and evolution of AI, from the coining of the term "artificial intelligence" in 1956, through the first "AI Winter" in the 1970s (which ultimately lasted until the mid-1990s), to AI judges and robo-courts.
Next, there is a focus on the benefits of AI in all aspects of life, without shying away from the potential risks. Such benefits include providing more individuals with access to our court system: "Legal representation in a civil matter is beyond the reach of 92% of the 50 million Americans below 125% of the poverty line." However, AI can potentially lead to a "two-tiered legal system," in which individuals in underserved communities could be forced to rely on inferior AI-powered technology. Further concern was highlighted in a January 2024 Stanford University study, which found that popular AI chatbots were inaccurate in the majority of cases when answering legal questions. That study further found that LLMs (large language models) provided incorrect results 75% of the time when responding to questions about a court's core ruling. Unless we have been living under a rock, we have all read or heard about attorneys submitting briefs with non-existent (hallucinated) citations obtained from AI.
The report identified various Rules of Professional Conduct that are impacted by an attorney's use of AI.
- RPC Rule 1.1 requires that a lawyer provide competent representation to a client. Comment 8 to RPC Rule 1.1 asserts that part of this competency pertains to lawyers keeping abreast of "the benefits and risks associated with technology" they use to provide services.
- RPC Rule 5.3 – attorneys have a supervisory obligation regarding nonlawyer work. The American Bar Association amended Model Rule 5.3 to clarify that the term “non-lawyers” also refers to artificial intelligence technologies.
- RPC Rule 5.5 – a lawyer shall not aid a nonlawyer in the unauthorized practice of law. As such, a human lawyer must still be involved when an AI program is utilized.
- RPC Rule 1.6 – a "lawyer shall not reveal information relating to the representation of a client unless the client gives informed consent." Confidentiality becomes a concern when a lawyer enters information into AI engines (e.g., chatbots) and those entries are then used as training data for the AI.
- RPC Rule 3.3(a)(1) – prohibits lawyers from making false statements of fact or law to a court and requires correction of any false statements previously made during the case.
The Task Force 1) recommends that the NYSBA adopt the AI guidelines outlined in its report; 2) recommends that the NYSBA educate judges, lawyers, law students, and regulators to understand AI in order to apply existing laws to regulate it; 3) urges legislatures and regulators to identify risks associated with this technology that are not addressed by existing laws; and 4) identifies the need to examine the function of the law as a governance tool.
The report concludes that there are no "conclusions." There is a need for ongoing monitoring, continued refinement of these initial guidelines, and auditing of the efficacy of the proposed rules and regulations.