Aracor Taps GPT-OSS for Security-Focused Dealmakers

Aracor AI, an AI-native deal analysis platform for in-house counsel and VCs, is leveraging OpenAI’s new open-source LLM, GPT-OSS 120B. The goal is to provide deal teams with an open-weight model for ‘fast, secure document review’ (see below for the AL in-depth interview with CEO Katya Fisher).

As the company explained, one of the key benefits of an open-source model like this is that it is – at least in theory – a more secure way to run an LLM. It can also be customised more easily and, again in theory, allows you to improve accuracy for the specific use cases you have designed for.

It allows:

  • ‘Security-first deployment: Processing occurs entirely within Aracor’s US-hosted environment with end-to-end encryption, zero data retention, and no third-party sharing. This safeguards confidentiality and compliance across the deal lifecycle.
  • User-selectable models: GPT-OSS 120B now sits alongside other best-in-class models in Aracor. Deal professionals can select the model that aligns with internal policies or client requirements.
  • ‘Clause-level accuracy: With Deal Verifier™, users can compare every term-sheet provision to final deal documents in a single click. The tool instantly catches deletions, edits, or omissions before closing – empowering in-house counsel, deal leads, and operators alike.
  • And, GPT-OSS 120B is available today to all Aracor AI customers at no additional cost.’

The company continued: ‘Open-source weights allow teams to audit benchmarks, validate outputs, and, where needed, self-host the model. This addresses the increasing regulatory and client pressure for traceability in AI-assisted legal and transactional work. The open-weight approach combines the speed of generative AI with the governance required for high-stakes M&A and venture deals.’

Artificial Lawyer asked CEO Fisher some more.

First, why now?

OpenAI has recently released two open-weight models – gpt-oss-120B and gpt-oss-20B – its first freely downloadable models since GPT-2, complete with chain-of-thought support, Apache 2.0 licensing, and local deployment capability. This release responds to growing competition from open-source rivals (e.g. DeepSeek, LLaMA) and geopolitical initiatives like the ATOM Project aimed at restoring U.S. leadership in open-source AI. Highly regulated industries – like legal – have particularly strong motivation to run AI tools on-premise for confidentiality and control, making this a perfect moment for legal-tech players to embrace OSS models.

At Aracor we are bullish on open-source models and the future of security and privacy in GenAI. As an LLM-agnostic company, alongside our default model we offer access to open-source LLMs such as Llama, Mistral, and DeepSeek; however, in our experience none of them has compared in quality to GPT-OSS 120B for providing an end-to-end solution without rigorous fine-tuning.
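For readers wondering what ‘local deployment’ means in practice, here is a minimal sketch of one common way to run the openly published weights on your own hardware, via the Hugging Face transformers library. It is a generic illustration under stated assumptions (the public openai/gpt-oss-120b model ID and enough GPU memory; the smaller 20B variant is the usual choice for local experiments), not a description of Aracor’s actual stack.

```python
# Minimal local-inference sketch using Hugging Face transformers.
# Assumes the openly published weights at "openai/gpt-oss-120b" and
# substantial GPU memory; "openai/gpt-oss-20b" is a common stand-in
# for local experimentation. Illustrative only, not Aracor's setup.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-120b",  # swap for "openai/gpt-oss-20b" on modest hardware
    torch_dtype="auto",
    device_map="auto",            # shard across whatever local GPUs are available
)

messages = [
    {"role": "user", "content": "Summarise this change-of-control clause in plain English: ..."},
]

# Generation runs entirely on local hardware; no document text leaves the machine.
result = generator(messages, max_new_tokens=256)
print(result[0]["generated_text"][-1])
```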

Do clients really want that much security? 

Absolutely. Legal teams deal with extremely sensitive, confidential client data, bound by strict professional and ethical obligations. Many prefer models that never leave their controlled environments, ensuring data privacy and compliance.

Open‑source or open‑weight models allow firms to run everything locally and even embed PII safeguards (e.g., frameworks like LegalGuardian) so that sensitive information is masked before any processing or external exposure.

Everyone is accustomed to working in the cloud these days, but it is important to keep in mind that GenAI can now extract, structure, and infer meaning from unstructured data at a scale never before possible. In the past, storing sensitive documents in the cloud seemed safe because data was less consolidated and effectively locked in raw form. With modern AI those files can be rapidly indexed, cross-referenced, and analyzed in ways that make even small exposures far more damaging. This amplifies the stakes for legal data, making controlled deployments not just preferable, but essential for many.
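As a rough illustration of the PII-masking step Fisher describes, here is a toy sketch that scrubs obvious identifiers from text before it ever reaches a model. Production frameworks such as the LegalGuardian approach she mentions go much further (names, entities, context-aware detection); the regex patterns and placeholder tags below are illustrative assumptions only.

```python
# Toy illustration of masking obvious PII before any text reaches an LLM.
# Real frameworks are far more thorough (e.g. NER for personal names,
# which simple regexes like these cannot catch).
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def mask_pii(text: str) -> str:
    """Replace matched identifiers with bracketed placeholders before LLM processing."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

raw = "Contact Jane Doe at jane.doe@example.com or 555-867-5309 re: SSN 123-45-6789."
print(mask_pii(raw))
# -> Contact Jane Doe at [EMAIL] or [PHONE] re: SSN [SSN].
```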

Can you use OSS like this and also connect to an external LLM? Or does that create a conflict? 

Aracor offers a hybrid environment whereby the user can select whether to use an OSS model or our default model. That way OSS models can be deployed for certain highly sensitive tasks, while other LLMs can be used for functions requiring capabilities that the OSS models do not yet fully support.

For enterprise users we can offer on-premise deployment and further customizations. It’s worth noting that just as a modern computer is not a single device but a network of interconnected systems, a modern AI-based application often relies on multiple models and even different AI technologies – such as computer vision, symbolic reasoning, and LLMs – working in parallel or assigned to specific stages of a workflow. Legal analysis is inherently complex and multi-stage, and customer requirements vary widely. It is therefore natural for Aracor to integrate and orchestrate a variety of AI tools to ensure optimal performance, accuracy, and compliance for each unique use case.
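To make the hybrid idea concrete, here is a minimal sketch of the kind of routing such a setup implies: highly sensitive tasks stay with a locally hosted open-weight model, while other work goes to a hosted default. The model identifiers, the sensitivity flag, and the task structure are hypothetical assumptions, not Aracor’s actual implementation.

```python
# Sketch of sensitivity-based model routing in a hybrid environment.
# Names and identifiers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class DealTask:
    name: str
    document: str
    highly_sensitive: bool  # e.g. set by internal policy or client requirement

def route_model(task: DealTask) -> str:
    """Pick a model identifier based on the task's sensitivity."""
    if task.highly_sensitive:
        return "local/gpt-oss-120b"   # stays inside the controlled environment
    return "hosted/default-model"     # broader capabilities, external hosting

tasks = [
    DealTask("term-sheet comparison", "…", highly_sensitive=True),
    DealTask("market-standard clause lookup", "…", highly_sensitive=False),
]
for t in tasks:
    print(f"{t.name} -> {route_model(t)}")
```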

Do you see this as a differentiator? 

In legal tech and deal tech, the real differentiator isn’t how smart the LLM is, but rather how securely it handles data. LLMs themselves are becoming commoditized, with new, more capable models appearing every few months. But for legal teams bound by confidentiality obligations, even the best answers are worthless if there’s a risk of exposure. By offering an open-weight model that can run privately (or even within an enterprise client’s own environment), we provide a path toward complete control, auditability, and compliance. Privacy and security remain the core value proposition, while model quality, though important, is now a secondary and interchangeable feature.

Thanks!

So, there you go. Will other legal tech companies, especially those working on sensitive deals, also bring in these OSS models to provide the kind of benefits above? We shall see.

Legal Innovators Conferences in New York and London – Both In November ’25 – Inhouse Day and Law Firm Day at each conference.

If you’d like to stay ahead of the legal AI curve… then come along to Legal Innovators New York, Nov 19 + 20, where the brightest minds will be sharing their insights on where we are now and where we are heading.

And also, Legal Innovators UK – Nov 4 + 5 + 6

Both events, as always, are organised by the awesome Cosmonauts team! 

Please get in contact with them if you’d like to take part.

