The State of New York is considering legislation that could limit what AI can do in law, medicine, and other ‘professional’ fields. But, depending on one’s reading of the text, one possible outcome looks like protectionism that could harm some areas of legal tech and reduce access to justice. AL explores.
The new bill (S7263), which is before the New York Senate, is labelled as legislation that ‘Imposes liability for damages caused by a chatbot impersonating certain licensed professionals’.
NY State Senator Kristen Gonzalez, who is leading the bill, said in a press release earlier this month: ‘We have documented instances of AI chatbots providing false medical license numbers to users and documented cases of real harm from AI chatbots providing misinformation. This is what my common-sense legislation addresses—protecting users from misinformation, scams, and fraud. This legislation does not prohibit a user from asking a chatbot questions or receiving general information and advice, as long as the chatbot is not presenting that information as a licensed professional. It would also not prohibit licensed professionals from consulting AI technology in the course of their work. This bill creates protections for consumers and holds AI companies liable when their products harm New Yorkers.’
However, the rest of the bill’s text seems, to this site at least, to be much broader than that.
The bill could have been called something like ‘Considering sensible safeguards to make sure the public can use AI tools for legal and medical needs’. But it isn’t.
The bill continues: ‘This bill would prohibit a chatbot to give substantive responses; information, or advice or take any action which, if taken by a natural person, would constitute unauthorized practice or unauthorized use of a professional title as a crime in relation to professions who licensure is governed by the education law or the judiciary law.’
So, this does not just cover the more extreme end of things, such as direct legal advice, i.e. ‘You should do X!’, but also covers responses and ‘information’ that may simply be helpful and provide useful grounding in the issues before a person or a business makes a decision.
It adds (their capitals) – ‘(II) WOULD VIOLATE THE PROVISIONS OF ARTICLE FIFTEEN OF THE JUDICIARY LAW PROHIBITING THE PRACTICE OR APPEARANCE AS AN ATTORNEY-AT-LAW WITHOUT BEING ADMITTED AND REGISTERED UNDER SUCH ARTICLE.
‘(B) A PROPRIETOR MAY NOT WAIVE OR DISCLAIM THIS LIABILITY MERELY BY NOTIFYING CONSUMERS THAT THEY ARE INTERACTING WITH A NON-HUMAN CHATBOT SYSTEM.’
Lawyers can use AI tools, including legal AI tools, and then give advice that taps those tools. But as soon as a regular member of the public or a business uses such a tool directly, without a regulated lawyer as intermediary, then – at least as far as AL can see from the wider text – this would be forbidden.
And even if the AI tool explained what it was, how it was limited, and that you really should use a real lawyer – which most LLMs do today – that, it would appear, would not be sufficient.
It also appears that the bill, which was introduced in 2025, has now advanced to a third reading – see https://www.nysenate.gov/legislation/bills/2025/S7263.
Is this a big deal?
Well, yes, it could be. Much depends on how things are amended, or not. Plus, it comes just as the White House has published a set of goals for its AI strategy, although they remain fairly vague.
Also, as noted, recent comments from the politicians behind the bill seem to make it much narrower, i.e. covering only the literal impersonation of a professional.
If passed ‘as is’, and the broader interpretation of the text were used, this would appear to prevent anyone who is not a regulated lawyer from using legal AI, or general AI, tools to find anything useful related to a legal matter, from information to actual advice.
Although, as noted, Senator Kristen Gonzalez’s additional comment earlier this March seems to say something different: ‘This legislation does not prohibit a user from asking a chatbot questions or receiving general information and advice, as long as the chatbot is not presenting that information as a licensed professional.’
If we go with the wider interpretation, then the first and most apparent impact would be on access to justice: the bill would presumably make tools that give legal support directly to people or businesses almost impossible to operate.
But there is another, less clear, area that could impact the field of commercial law and legal AI: the strategy of ‘self-serve’.
The bill doesn’t really get into the weeds of how it applies in all commercial settings. But what happens with, for example, a legal AI tool that allows staff inside a company to download and sign documents they need for their work? E.g. an AI-driven contract management system that allows the sales team to find the right NDA for a customer, check what provisions it should include, and then sign it.
Now, is that ‘legal work’? If so, then presumably they would have to stop the AI system from doing this? The same could impact compliance teams, risk teams, HR teams – in fact, anyone who used a legal AI tool where the responses were not directly intermediated by a living, human lawyer.
If that were the outcome, then inhouse AI tools would become far less useful, as the inhouse lawyer would always have to be part of any action connected to legal matters. Self-serve would be over, and ‘legal-adjacent’ roles, e.g. compliance, would have to be staffed with regulated lawyers, as only they could tap AI tools where a matter involved a legal aspect – which in areas such as compliance or HR would be almost always.
How NewMod, AI-first law firms and ALSPs would be affected is also not clear. But again, if we take the above at face value: unless a lawyer was involved directly in interpreting any output from an AI system, would an automated, or semi-automated, legal workflow be allowed under this bill?
And the idea of having an AI tool feed legal-specific results directly into a company’s data record would presumably also not be allowed, as again it would demand that a lawyer is not just in the loop, but overseeing the delivery of any information.
Conclusion
As noted, this bill seems to have started from a negative approach to AI. In turn it could limit access to justice, restrict inhouse legal AI tool use, and perhaps cause issues for ALSPs, AI-first providers and other new model legal businesses.
However, the more recent comment from the Senator seems to dial this all back to just covering actual ‘impersonation’ of a professional.
Overall, the outcome here is uncertain. If the goal is just a very narrow bill that stops actual impersonation, then that’s a good thing; if the bill spreads out into stopping AI from handling legal questions in general, then that looks more like protectionism.
More on the Bill here.