Partner Press Releases

Thursday 6 June 2024

Tromero Tailor: From OpenAI to Your Own Privacy Preserving Model

Tromero Stand: P103

Interest in AI has increased drastically over the past two years. Total investment in AI companies grew from $396 million in 2018 to $22.4 billion in 2023. While, admittedly, the lion's share of that investment ($14 billion) went to a certain Sam Altman-led company with an ironic name, there has been a clear uptick in AI development, and for good reason.

Pick any profession, and it's likely that someone is creating a generative AI assistant for it. From accountants, lawyers, and doctors, to architects, financial advisors, marketing professionals, software developers, cybersecurity specialists, and sales representatives—AI copilots are already available for these fields. Incorporating AI offers businesses a distinctive competitive advantage, acting as a powerful differentiator in today's market. As industries become more competitive, AI provides tools that can enhance decision-making, optimise operations, and personalise customer interactions at scale.

However, relying on OpenAI’s or Anthropic’s API to power your application not only limits your application’s IP, but also requires you to send your clients’ data to a third party. LLMs are also prone to producing "hallucinations": outputs that are convincingly wrong, which is problematic across virtually all applications. To mitigate this, developers employ techniques such as Retrieval-Augmented Generation (RAG), which improves accuracy by incorporating relevant external information into the AI's responses, thereby reducing the likelihood of such errors. Even with RAG, however, inaccuracies can persist; at this point, RAG is less a science than an application-specific art form.
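
The RAG approach described above can be pictured in a few lines. The sketch below is purely illustrative and is not Tromero code: the sample documents, the keyword-overlap retriever, and the `build_prompt` helper are all assumptions made for the example (production systems typically retrieve with vector embeddings instead).

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG).
# The documents and keyword-overlap scorer are illustrative only.

DOCUMENTS = [
    "The 2024 corporate tax filing deadline in the UK is 12 months after the accounting period ends.",
    "Invoices must be retained for six years under UK VAT rules.",
    "Employee expense claims require itemised receipts.",
]

def retrieve(query: str, documents: list[str]) -> str:
    """Return the document sharing the most words with the query."""
    query_words = set(query.lower().split())
    return max(documents, key=lambda d: len(query_words & set(d.lower().split())))

def build_prompt(query: str, documents: list[str]) -> str:
    """Prepend the retrieved context so the model grounds its answer in it."""
    context = retrieve(query, documents)
    return f"Context: {context}\n\nQuestion: {query}\nAnswer using only the context above."

prompt = build_prompt("How long must invoices be retained?", DOCUMENTS)
```

The key idea is simply that relevant external text is fetched at query time and placed in the prompt, so the model answers from supplied facts rather than from memory alone.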

The best strategy to ensure an AI model performs reliably and accurately in a specific domain involves training the model on data that is closely aligned with its intended use case. This process, known as fine-tuning, customises the model's responses to be more precise and relevant to particular tasks or industries. By doing so, the model's performance is significantly improved in narrow, specialised domains, making it a more effective tool for businesses and users, particularly when combined with techniques like RAG.
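
To make "data closely aligned with its intended use case" concrete, the sketch below formats domain examples into the JSONL chat format commonly used by fine-tuning services. The example records and the system prompt are invented for illustration; they are not drawn from any real dataset.

```python
import json

# Illustrative domain examples (invented for this sketch): each pair maps a
# user request to the exact answer the fine-tuned model should produce.
examples = [
    ("Summarise clause 4.2 of the lease.", "Clause 4.2 obliges the tenant to maintain the property."),
    ("What is the notice period?", "The notice period is three months."),
]

def to_chat_record(question: str, answer: str) -> dict:
    """One training record in the chat format used by common fine-tuning APIs."""
    return {
        "messages": [
            {"role": "system", "content": "You are a legal assistant for lease agreements."},
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]
    }

# Fine-tuning services typically ingest one JSON object per line (JSONL).
jsonl = "\n".join(json.dumps(to_chat_record(q, a)) for q, a in examples)
```

A few hundred such records drawn from a single narrow domain is often enough to shift a model's behaviour noticeably for that domain.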

While it is fun that a model like OpenAI's GPT-4 can talk like Trump, a pirate, Shakespeare, or Boris Johnson, as well as review documents and write sales emails, the majority of applications don’t need the frills: they just need an AI model that gets the job done, and done well. By fine-tuning on a narrow domain, open-source models can be made smaller, so they run faster and cheaper, and, crucially, they let companies preserve their users’ privacy and protect their data.


Fine-tuning can be a complex endeavour, which is why Tromero is announcing the launch of its newest product, Tromero Tailor, at London Tech Week. It allows businesses building on OpenAI and other leading LLMs to collect data with a two-line code change, or to add their own. Users can then automatically fine-tune and deploy, with one click, a proprietary model that matches the performance of GPT-4 at a fraction of the cost. Crucially, Tromero also offers clients the option to use their existing GCP, AWS, or Azure credits, or to host on Tromero’s network of GPUs, which are half the price of the big three cloud providers.
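
A two-line integration of this kind is easiest to picture as a drop-in wrapper around an existing client. The sketch below is hypothetical and is not Tromero's actual SDK: `LoggingClient` and the offline `OpenAIStub` are invented names showing, in general terms, how a wrapper can capture prompt/response pairs for later fine-tuning.

```python
# Hypothetical sketch only -- not Tromero's actual API. It shows how a
# drop-in wrapper can collect prompt/response pairs while delegating
# calls to the underlying model client unchanged.

class OpenAIStub:
    """Stand-in for a real LLM client, so the sketch runs offline."""
    def complete(self, prompt: str) -> str:
        return f"(model reply to: {prompt})"

class LoggingClient:
    """Wraps any client with a .complete() method and records the traffic."""
    def __init__(self, client):
        self.client = client
        self.log: list[tuple[str, str]] = []  # (prompt, response) pairs

    def complete(self, prompt: str) -> str:
        response = self.client.complete(prompt)
        self.log.append((prompt, response))  # data later used for fine-tuning
        return response

# The "two-line change": wrap the existing client, then call it as before.
client = LoggingClient(OpenAIStub())
reply = client.complete("Draft a follow-up sales email.")
```

Because the wrapper preserves the original interface, the rest of the application code needs no changes, which is what makes a two-line swap plausible.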

For more information about our new product, Tromero Tailor, or to schedule a demo, please visit or contact us at
