Recent posts by Mona

Be the first to know about top trends in the AI/ML monitoring industry through Mona's blog. Read about our company and product updates.

Posts by Itai Bar Sinai, Co-founder and CPO:

Common pitfalls to avoid when evaluating an ML monitoring solution

Machine learning operations (MLOps) is currently one of the hottest areas for startup investment: while best practices for building machine learning models are relatively well understood, a great deal of innovation is being poured into finding the best ways to operationalize them in production. Chief among the MLOps categories is ML monitoring. Making sense of the landscape of ML monitoring tools can be frustrating, time consuming, and just plain confusing. Our goal with this article is to map that landscape and, in doing so, hopefully illuminate some of the common pitfalls in choosing an appropriate monitoring solution, thereby bringing order to the chaos.

Data drift, concept drift, and how to monitor for them

Data and concept drift are frequently mentioned in the context of machine learning model monitoring, but what exactly are they and how are they detected? Furthermore, given the common misconceptions surrounding them, are data and concept drift things to be avoided at all costs, or natural and acceptable consequences of running models in production? Read on to find out. In this article we provide a granular breakdown of the types of model drift, along with methods for detecting them and best practices for dealing with them when you do.
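As a rough illustration of one common drift-detection technique (not necessarily the specific method the article describes), the sketch below compares a production sample of a single numeric feature against its training-time baseline using a two-sample Kolmogorov-Smirnov test. The function name, feature values, and significance threshold are assumptions for the example only.

```python
# Illustrative sketch: flag data drift on one numeric feature by comparing
# its production distribution to the training baseline with a KS test.
import numpy as np
from scipy.stats import ks_2samp

def feature_has_drifted(baseline: np.ndarray, production: np.ndarray,
                        alpha: float = 0.05) -> bool:
    """Return True if the production sample differs significantly
    from the baseline sample for this feature."""
    statistic, p_value = ks_2samp(baseline, production)
    return p_value < alpha

# Example with synthetic data: the production sample is shifted,
# simulating drift after deployment.
rng = np.random.default_rng(0)
baseline_scores = rng.normal(loc=0.0, scale=1.0, size=5_000)
production_scores = rng.normal(loc=0.4, scale=1.0, size=5_000)
print(feature_has_drifted(baseline_scores, production_scores))  # True
```

In practice such a check would typically run per feature and per time window, with thresholds tuned to the business context to avoid alert noise.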

The secret to successful AI monitoring: Get granular, but avoid noise

For the past four years I’ve been working with teams implementing automated workflows using ML, NLP, RPA, and many other techniques, for a myriad of business functions ranging from fraud detection and audio transcription all the way to satellite imagery classification. At various points in time, all of these teams realized that alongside the benefits of automation they had also introduced additional risk. They had lost their “eyes and ears in the field”, the natural oversight you get from having humans in the process.