Recent posts by Mona

Be the first to know about top trends in the AI/ML monitoring industry through Mona's blog. Read about our company and product updates.

How to Ensure Consistent Performance in Quant Trading Systems

The Challenges of Algorithmic Trading

Automated quantitative trading systems are sophisticated, but they face a myriad of challenges that can disrupt their performance. From issues with data and models to internal operations, there’s a lot that can go wrong. 

One of the biggest challenges is the reliance on third-party data sources. These sources can change without notice, causing disruptions that ripple through your entire system. Moreover, when you’re running different models and versions across various markets and asset classes, it becomes incredibly difficult to detect performance degradations (caused, for example, by data drift). The sheer volume and complexity of the data make identifying problems early like finding a needle in a haystack.
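As a minimal sketch of one common approach to the data-drift problem mentioned above, you can compare the distribution of a model input feature in a recent window against a reference window with a two-sample Kolmogorov-Smirnov test. The windows, threshold, and synthetic data here are all illustrative assumptions, not a prescription:

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference: np.ndarray, recent: np.ndarray, alpha: float = 0.01) -> bool:
    """Return True if the recent window's distribution differs
    significantly from the reference window."""
    statistic, p_value = ks_2samp(reference, recent)
    return p_value < alpha

# Example: a reference window from training time vs. a shifted live window.
rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time distribution
recent = rng.normal(loc=0.4, scale=1.0, size=1_000)     # live data with a shifted mean
print(detect_drift(reference, recent))  # True: the feature has drifted
```

In practice you would run a check like this per feature, per model version, and per market segment, which is exactly where the haystack grows and manual review stops scaling.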

Market behavior is inherently volatile, further complicating the process of distinguishing between real issues and noise. Unfortunately, many issues are only detected after they’ve already caused a decline in returns—by then, it’s too late. While most teams have some form of monitoring in place, it’s often standard, semi-manual, and reactive.

Efforts to deeply and automatically monitor these systems are hampered by a range of problems: big data challenges, organizational constraints, and most notably, the issue of false alarms, which can lead to alert fatigue within the team.

Case Study: Best Practices for Monitoring GPT-Based Applications
 
This is a guest post by Hyro - a Mona customer. 

What we learned at Hyro about our production GPT usage after using Mona, a free GPT monitoring platform

At Hyro, we’re building the world’s best conversational AI platform that enables businesses to handle extremely high call volumes, provide end to end resolution without a human involved deal with staff shortages in the call center, and mine analytical insights from conversational data, all at the push of a button. We’re bringing automation to customer support at a scale that’s never been seen before, and that brings with it a truly unique set of challenges. We recently partnered with Mona, an AI monitoring company, and used their free GPT monitoring platform to better understand our integration of OpenAI’s GPT into our own services. Because Hyro operates in highly-regulated spaces, including the healthcare industry, it is essential for us that we ensure control, explainability, and compliance in all our product deployments. We can’t risk LLM hallucinations, privacy leaks, and other GPT failure modes that could compromise the integrity of our applications. Additionally, we needed a way to monitor token usage and the latency of the OpenAI service in order to keep costs down and deliver the best possible experience to our customers.
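To make the token-usage and latency monitoring concrete, here is an illustrative sketch (not Hyro's or Mona's actual implementation) of wrapping a GPT call to log both alongside each request. The `call_gpt` callable stands in for whatever OpenAI client call the application makes, and the response is assumed to expose a `usage` object with token counts, as OpenAI chat completion responses do:

```python
import time
import logging

logger = logging.getLogger("gpt_monitoring")

def monitored_completion(call_gpt, **params):
    """Invoke a GPT call and log its latency and token usage."""
    start = time.monotonic()
    response = call_gpt(**params)
    latency = time.monotonic() - start
    usage = getattr(response, "usage", None)
    logger.info(
        "gpt_call model=%s latency=%.2fs prompt_tokens=%s completion_tokens=%s",
        params.get("model"),
        latency,
        getattr(usage, "prompt_tokens", "n/a"),
        getattr(usage, "completion_tokens", "n/a"),
    )
    return response
```

Records like these, aggregated per model, prompt version, and customer, are the raw material a monitoring platform uses to surface cost spikes and latency regressions.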

Everything You Need to Know About Model Hallucinations

If you’ve worked with LLMs at all, you’ve probably heard the term “model hallucinations” tossed around. So what does it mean? Is your model ingesting psychedelic substances? Or are you the one hallucinating a model that doesn’t actually exist? Luckily, the cultural parlance points to a problem that is less serious than it sounds. However, model hallucinations are something every LLM user will encounter, and they can cause problems for your AI-based systems if not properly dealt with. Read on to learn what model hallucinations are, how you can detect them, and the steps you can take to remediate them when they inevitably arise.
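One hedged detection heuristic, in the spirit of self-consistency checks such as SelfCheckGPT, is to sample the same prompt several times and flag low agreement between answers, since hallucinated details tend not to reproduce across samples. The `sample_llm` callable is a placeholder for your model call, and the token-overlap score is deliberately crude and illustrative:

```python
from itertools import combinations

def overlap(a: str, b: str) -> float:
    """Jaccard similarity over word sets -- a crude agreement measure."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

def likely_hallucination(sample_llm, prompt: str, n: int = 5, threshold: float = 0.5) -> bool:
    """Flag the response as suspect when repeated samples disagree."""
    answers = [sample_llm(prompt) for _ in range(n)]
    scores = [overlap(a, b) for a, b in combinations(answers, 2)]
    return sum(scores) / len(scores) < threshold
```

A production system would use a stronger agreement measure (e.g., an entailment model), but the shape of the check is the same: disagreement across samples is a cheap hallucination signal.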

Overcome cultural shifts from data science to prompt engineering

The widespread use of large language models such as ChatGPT, LLaMa, and LaMDA has the tech world wondering whether data science and software engineering jobs will at some point be replaced by prompt engineering roles, rendering existing teams obsolete. While the complete obsolescence of data science and software engineering seems unlikely anytime soon, there’s no denying that prompt engineering is becoming an important role in its own right.

Prompt engineering blends the skills of data science, such as knowledge of LLMs and their unique quirks, with the creativity of artistic positions. Prompt engineers are tasked with devising prompts for LLMs that elicit a desired response. In doing so, they rely on some techniques used by data scientists, such as A/B testing and data cleaning, yet they must also have a finely developed aesthetic sense for what constitutes a “good” LLM response, along with the ability to make iterative tweaks to a prompt in order to nudge a model in the right direction. Integrating prompt engineers into an existing data science and engineering org therefore requires some distinct shifts in culture and mindset. Read on to find out how the prompt engineering role can be integrated into existing teams and how organizations can better make the shift toward a prompt engineering mindset.
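To illustrate the A/B testing technique mentioned above, here is a hypothetical sketch of comparing two prompt variants: route inputs between the variants, collect a quality score per response, and compare the averages. The prompt texts and the `generate` and `score_response` callables are placeholders, not a real library API:

```python
import random
from statistics import mean

PROMPTS = {
    "A": "Summarize the following support ticket in one sentence:\n{ticket}",
    "B": "You are a support analyst. Briefly state the customer's core issue:\n{ticket}",
}

def run_ab_test(tickets, generate, score_response, seed: int = 42):
    """Assign each ticket to a prompt variant, generate a response, and score it."""
    rng = random.Random(seed)
    results = {"A": [], "B": []}
    for ticket in tickets:
        variant = rng.choice(["A", "B"])
        response = generate(PROMPTS[variant].format(ticket=ticket))
        results[variant].append(score_response(ticket, response))
    return {v: mean(scores) for v, scores in results.items() if scores}
```

The scoring step is where the prompt engineer's aesthetic judgment enters: whether `score_response` is a human rating, an automated rubric, or an LLM-as-judge is itself a design choice the team has to make.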