
CASE STUDY

 


 

How Gong Uses Mona to Monitor and Optimize AI at Scale

 

INDUSTRY

Revenue Intelligence / Conversational AI

CHALLENGE

Manual monitoring couldn’t scale

SOLUTION

Proactive, segmented monitoring with Mona

RESULT

Gong's teams now detect issues early—before users ever notice

ABOUT GONG

 

Gong.io is the leader in Revenue Intelligence, on a mission to help companies unlock their full revenue potential by turning sales conversations into actionable insights. Its Reality Platform™ helps customer-facing teams surface insights from real-world interactions—calls, meetings, and emails—using AI models for transcription, speech recognition, and NLP.

 

Monitoring so many moving pieces—across different data sources, languages, and customer segments—required more than QA and support tickets. So Gong turned to Mona Labs for a smarter, scalable solution.

 

KEY MODELS IN USE

 

  • Language Detection
  • Transcription & Speaker Separation
  • Voice Identification
  • NLP for Topic & Intent Detection

WHO TOOK OWNERSHIP?

 

Gong’s Engineering Lead Yaniv Levi focused on keeping models reliable in production, while Research Lead Noam Lotner monitored model behavior across regions and customer segments. Together, they adopted Mona to gain full visibility and control over their AI systems.

 

THE CHALLENGE: SCALING WITHOUT LOSING CONTROL

 

As Gong grew—adding more customers, more languages, and new communication channels—its model ecosystem became more complex. Monitoring these models manually, as the team had in the past, was no longer sustainable.

The team began to face major challenges:

 

  • They couldn’t detect model issues until customers noticed them.
  • Performance would quietly degrade as new regions and data patterns emerged.
  • Releasing new models carried risk without full visibility into pre-launch behavior.

BUSINESS RISKS

 

Limited confidence when releasing new models

Hidden drift from new user behavior, languages, or data

Delayed responses due to lack of proactive alerting

No performance segmentation by customer or geography

“Before Mona, it was difficult to know what was going on with our AI models. It is a tool we must have. Without it, I don't know.”

— Yaniv Levi, Engineering Team Lead, Gong

THE TURNING POINT: A FLEXIBLE MONITORING FRAMEWORK WITH MONA LABS

 
Gong chose Mona for its flexibility and customization, and gained much more:
 
Full visibility across all production AI models

 

Gong’s team monitors every model—from transcription to NLP—in one unified platform, ensuring nothing falls through the cracks.

Real-time alerts for drift or low confidence

 

Mona automatically notifies the team when prediction quality dips, enabling fast, informed responses.

Correlated root cause analysis

 

Instead of guesswork, Gong gets clear context—what changed, where, and how it’s impacting model performance.

Custom metrics & dashboards 

 

Each team member tracks what matters most to their models, with personalized alerts, views, and benchmarks.

Shadow deployments for safe model testing

 

New models run alongside production in “stealth mode,” giving Gong insight into how they’ll perform—without user risk.
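The shadow-deployment pattern described above can be sketched in a few lines. This is a generic illustration, not Mona’s or Gong’s actual code; the model interfaces and logging sink are assumptions.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("shadow")

def handle_request(features, prod_model, shadow_model):
    """Serve the production prediction; run the candidate in 'stealth
    mode' for comparison only. Its output never reaches the user."""
    prod_out = prod_model(features)
    try:
        shadow_out = shadow_model(features)
        # Record both outputs so offline analysis can compare agreement
        # and confidence before the candidate is promoted.
        log.info("agreement=%s", prod_out == shadow_out)
    except Exception:
        # A failing shadow model must never affect the user-facing path.
        log.exception("shadow model failed")
    return prod_out
```

Only the production output is returned; the candidate is observed, compared, and, if it misbehaves, simply logged.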

Segmentation by language/customer/channel

 

Performance is tracked across key dimensions, helping Gong pinpoint where and why issues arise—before they scale.
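Segmented tracking of this kind can be illustrated with a minimal sketch: average a model’s confidence per segment and flag segments that trail the overall baseline. The record fields and tolerance value are assumptions, not Mona’s API.

```python
from collections import defaultdict
from statistics import mean

def segment_confidence(predictions, key="language"):
    """Group prediction records by a segment field and average
    the model's confidence within each segment."""
    by_segment = defaultdict(list)
    for rec in predictions:
        by_segment[rec[key]].append(rec["confidence"])
    return {seg: mean(vals) for seg, vals in by_segment.items()}

def flag_drops(per_segment, baseline, tolerance=0.05):
    """Return segments whose average confidence trails the
    overall baseline by more than the tolerance."""
    return sorted(seg for seg, avg in per_segment.items()
                  if avg < baseline - tolerance)

records = [
    {"language": "en", "confidence": 0.93},
    {"language": "en", "confidence": 0.91},
    {"language": "de", "confidence": 0.78},
]
overall = mean(r["confidence"] for r in records)
print(flag_drops(segment_confidence(records), overall))  # → ['de']
```

An aggregate metric over all three records would look healthy here; only the per-language breakdown exposes the weak segment.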

GONG'S USE CASES WITH MONA LABS

 

Speaker Identification

Mona tracks how well Gong’s AI distinguishes speakers on a call—crucial for segmenting seller vs. prospect dialogue.

NLP Classification

Gong uses Mona to monitor NLP model accuracy and ensure consistent tagging of conversation topics and intent across languages.

Shadow Deployments

Mona runs experimental models alongside production to compare performance—before customers are impacted.

THE RESULTS: MORE THAN A MONITORING TOOL

 
Since implementing Mona, Gong has seen meaningful improvements in how they manage their AI systems:
 
Issues are caught proactively—not after they impact users

 

Gong detects model problems early, preventing surprises and protecting user experience.

Model owners have full visibility into the segments they care about

 

Each team monitors performance by language, customer, or channel—no more blind spots.

New models are tested safely before deployment

 

Experimental models are validated in real-world conditions—without production risk.

The team trusts their platform’s reliability at scale

 

With Mona, Gong scales AI with trust, control, and operational peace of mind.

“It was actually very easy to implement Mona. In a matter of one or two days, we were already using Mona.”

— Yaniv Levi, Engineering Team Lead, Gong

SEE WHAT YOUR MODELS ARE TELLING YOU

 
If your ML team is scaling fast and your models touch real customers, Gong’s journey may sound familiar.
Mona helps engineering and research teams get ahead of issues—before they escalate—and build trust in every model decision.