Recent posts by Mona

Be the first to know about top trends within the AI / ML monitoring industry through Mona's blog. Read about our company and product updates.

Posts by Itai Bar Sinai, Co-founder and CPO:

Beyond Backtests: Bridging the Gap Between Simulation and Real-Time Trading

Backtesting is the backbone of quantitative finance, enabling quants to simulate strategies and assess performance in a controlled environment. But as any seasoned quant will tell you, much like a model is only as good as its representation of the real world, a backtest is only as good as its alignment with real-time trading. The leap from simulated strategies to live markets often reveals discrepancies that can erode profits, undermine confidence, and even jeopardize entire strategies.

Why do these gaps between backtesting and real-time trading occur? And more importantly, how can they be addressed? In this blog, we explore the common pitfalls that create these discrepancies, the role of intelligent monitoring in closing the gap, and how Mona helps quants detect and address issues before they impact the bottom line.

How Granularity in Model Monitoring Saves Quants from Costly Mistakes

If you’ve ever been responsible for monitoring models at a quant hedge fund, you know how tricky it can be. One minute everything looks fine; the next, you’re in a full-blown fire drill because of a performance issue you didn’t catch in time. In other cases, you learn way too late that there was something simple you could have changed to improve your results significantly, or that some specific area was completely overlooked. But why does this happen? The problem often comes down to granularity, or rather the lack of it.

Everything You Need to Know About Model Hallucinations

Have you ever asked an AI model a simple question—only to get an answer that sounds convincing but turns out to be completely false?

If so, you’ve encountered what’s known as a model hallucination—one of the most common (and potentially dangerous) challenges in working with large language models (LLMs). As these models become more deeply integrated into customer-facing tools, internal workflows, and high-stakes decision-making processes, the risk of hallucinated outputs becomes more than just a curiosity—it becomes a business liability.

In this article, we’ll unpack what model hallucinations really are, why they happen, and—most importantly—how you can detect and reduce them. Whether you're a developer, product owner, or business leader, understanding this phenomenon is key to using LLMs safely and effectively.

Let’s get into it.

Overcome cultural shifts from data science to prompt engineering

The widespread use of large language models such as ChatGPT, LLaMa, and LaMDA has the tech world wondering whether data science and software engineering jobs will at some point be replaced by prompt engineering roles, rendering existing teams obsolete. While the complete obsolescence of data science and software engineering seems unlikely anytime soon, there’s no denying that prompt engineering is becoming an important role in its own right.

Prompt engineering blends the skills of data science, such as knowledge of LLMs and their unique quirks, with the creativity of artistic positions. Prompt engineers are tasked with devising prompts for LLMs that elicit a desired response. In doing so, they rely on some of the techniques used by data scientists, such as A/B testing and data cleaning, yet must also have a finely developed aesthetic sense for what constitutes a “good” LLM response, along with the ability to make iterative tweaks to a prompt in order to nudge a model in the right direction.

Integrating prompt engineers into an existing data science and engineering org therefore requires some distinct shifts in culture and mindset. Read on to find out how the prompt engineering role can be integrated into existing teams and how organizations can better make the shift toward a prompt engineering mindset.