In recent years, the topic of AI democratization has gained a lot of attention. But what does it really mean, and why is it important? And most importantly, how can we ensure that the democratization of AI is safe and responsible? In this blog post, we'll explore the concept of AI democratization, how it has evolved, and why its use must be closely monitored and managed.
Posts by Itai Bar Sinai, Co-founder and CPO:
Machine learning operations (MLOps) is currently one of the hottest areas for startup investment: while best practices for building machine learning models are relatively well understood, a great deal of innovation is being poured into devising ways to best operationalize them for production. Chief among the MLOps categories is ML monitoring. Making sense of the landscape of ML monitoring tools can be frustrating, time-consuming, and just plain confusing. Our goal with this article is to chart that landscape and, in doing so, illuminate some of the common pitfalls in choosing a monitoring solution, bringing order to the chaos.
Trusting artificial intelligence systems is not easy. Given the variety of edge cases on which machine learning models may fail, the lack of visibility into the processes underlying their predictions, and the difficulty of correlating their outputs to downstream business results, it’s no wonder that business leaders often look upon AI with some skepticism.
Data drift and concept drift are frequently mentioned in the context of machine learning model monitoring, but what exactly are they, and how are they detected? Furthermore, given the common misconceptions surrounding them, are they problems to be avoided at all costs, or natural and acceptable consequences of running models in production? Read on to find out: in this article we provide a granular breakdown of the types of model drift, along with methods for detecting each and best practices for dealing with drift when you find it.
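To make the detection question concrete, here is a minimal sketch of one common data-drift check: a two-sample Kolmogorov–Smirnov test comparing a feature's distribution at training time against its distribution in production. The data is synthetic and the 0.05 threshold is illustrative, not a recommendation from the article.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical feature values: the training sample vs. the same
# feature observed in production, where the mean has shifted.
train_values = rng.normal(loc=0.0, scale=1.0, size=1000)
live_values = rng.normal(loc=0.5, scale=1.0, size=1000)

# KS test: a small p-value means the two samples are unlikely
# to come from the same distribution, i.e. the feature drifted.
statistic, p_value = ks_2samp(train_values, live_values)
drifted = p_value < 0.05  # illustrative significance threshold

print(f"KS statistic={statistic:.3f}, p={p_value:.2e}, drifted={drifted}")
```

In practice a monitoring system runs checks like this per feature on a schedule, and a flagged feature is a prompt for investigation rather than an automatic alarm, since, as discussed above, some drift is a natural consequence of models operating on live data.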