
PromptArmor is a startup out of California that tests genAI vendors for security risks such as ‘indirect prompt injection’, and 26 risk vectors overall. They’re focusing on legal, health, and other key sectors, and Artificial Lawyer caught up with them to learn more.
San Francisco-based co-founder Shankar Krishnan told AL: ‘If a law firm is evaluating an AI vendor, they would send it to us. We would give them back a detailed risk report on the AI components of that vendor. We do that by testing those vendors for risks such as ‘indirect prompt injection’ which is a new security risk for LLM applications, specifically.
‘We check for 26 risk vectors, all mapped to leading security frameworks like the ‘OWASP LLM top 10’, ‘mitre Atlas’, ‘NIST AI RMF’, and others. Our speciality is our technology which productizes the scan aspect of testing these LLM applications.’
And before you ask, this site has never heard of NIST AI RMF either…..but prompt injection is a term that’s done the legal tech rounds before.
So, to give you a sense of what they’re all about, here is a statement from the UK-based Alan Turing Institute from last year: ‘Prompt injection is one of the most urgent issues facing state-of-the-art generative AI models.
‘The UK’s National Cyber Security Centre has flagged it as a critical risk, while the US National Institute for Standards and Technology has described it as ‘generative AI’s greatest security flaw’.
‘Simply defined, prompt injection occurs ‘when an attacker manipulates a large language model (LLM) through crafted inputs, causing the LLM to unknowingly execute the attacker’s intentions’, as the Open Worldwide Application Security Project puts it. This can lead to the manipulation of the system’s decision-making, the distribution of disinformation to the user, the disclosure of sensitive information, the orchestration of intricate phishing attacks and the execution of malicious code.
‘Indirect prompt injection is the insertion of malicious information into the data sources of a GenAI system by hiding instructions in the data it accesses, such as incoming emails or saved documents. Unlike direct prompt injection, it does not require direct access to the GenAI system, instead presenting a risk across the range of data sources that a GenAI system uses to provide context.’
So, there you go, serious stuff, and just one aspect of what PromptArmor covers. Of course, do such vulnerabilities crop up in legal tech products? Is this likely? The short answer is: we don’t know. Why don’t we know? The answer is because as far as this site is aware no-one has ever published a public list of such incidents which include well-known legal AI tools.
The company, which is backed by Y Combinator, among others, added that in terms of what they offer, they provide ‘continuous monitoring’ and that there are already some law firms that ‘send us their entire repository of vendors’.
‘We monitor those vendors for new AI features they are adding, or if they have introduced AI for the first time,’ Krishnan said.
And, in an example that may pique many firms’ interest, they added that: ‘We also scan for if there are privacy policy changes or terms changes (e.g. they are now training on your data), model changes (e.g. they have switched from Anthropic to OpenAI) and other relevant news.’
OK, all well and good. But, as noted above, AL has to ask: do law firms really need this? Is this something law firms need to think about? Krishnan underlined that they do.
‘Law firms need this because their innovation teams are bringing in AI vendors. Security teams don’t have the AI expertise to evaluate these vendors for novel AI security risk, so a lot of them get stuck in the PoC phase, or take longer to review.
‘Security teams also have a myriad of other things to do. We help security teams assess AI vendors faster so innovation teams can bring them in, faster, creating a win-win. The gap between innovation and security at law firms has been well documented, and we think of ourselves as bridging this gap.’
Well, there you go. But, now the cost. Krishnan noted that: ‘Cost is tiered, based on the number of vendors, and we discount based on if they want assessments, continuous monitoring, or both.’
As mentioned, is this a big deal for law firms? It’s hard to estimate. Do well-known products out there have potential security gaps made possible by this new approach to AI? The short answer is: we don’t really know. Do products and companies that people don’t know well suffer the same or worse risks? Again, we don’t know.
So, given that most law firms are quite rightly risk-averse, then perhaps – rather as with genAI performance standards – we need to have an open chat about these issues as well. Maybe it’s something that can be quickly understood and resolved. But, now is probably a good time to get clarity on it as more and more products enter law firms’ tech stacks.
You can find more information about PromptArmor here.
—
What are your views and experiences with this field? Have you stopped using a genAI vendor because of the above risks? Is this even something you’ve tested for?