
I Thought I Was Solving for Churn - I Was Actually Solving for AI Reliability

Why explainability, traceability, and human values must power the systems we build

Dr. Zubia Mughal

Blog Post

5-minute read

Jul 16, 2025

When I set out to predict which of our customers would cancel their service subscriptions, I focused on a few key goals: keep customer engagement high, protect revenue, and identify who was at risk of leaving. I built a model using the standard features: recency of activity, frequency of purchases, and monetary value.

And it worked… until it didn’t.

I’ll never forget how our largest account—one that contributed millions annually—started triggering unexpected alerts. Despite a dedicated white-glove support team, service handoffs slipped, and our high-touch commitments frayed. We cleansed and validated the data multiple times, yet the anomaly persisted.

That’s when I stopped treating it as noise and asked myself: “What if this is telling us something the system was never designed for?”

It wasn’t just an outlier. It was a call to build systems that are transparent, traceable, and defensible—the heart of reliable AI.

From Forecasting to Diagnosing

In that pivotal moment, I shifted my mindset. The model was no longer a simple forecasting engine; it became a diagnostic tool to surface hidden operational gaps. Instead of quickly removing outliers, we investigated them:

  1. Traceability: Every alert needed a clear link back to source data and business logic.
  2. Explainability: Any stakeholder—from the CEO to a frontline rep—had to understand why the system flagged a risk.
  3. Correctability: Flags had to invite review, not stand as final verdicts.
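
To ground those three requirements, here is a minimal sketch of an alert record that carries them explicitly. The ChurnAlert structure and its field names are illustrative assumptions for this post, not our production schema:

```python
from dataclasses import dataclass, field

@dataclass
class ChurnAlert:
    account_id: int
    risk_score: float
    source_rows: list[str] = field(default_factory=list)  # traceability: raw records behind the score
    rule: str = ""                                        # explainability: plain-language trigger
    status: str = "open"                                  # correctability: open for review, not a verdict

alert = ChurnAlert(
    account_id=101,
    risk_score=0.87,
    source_rows=["billing.invoices:2025-01", "support.tickets:4412"],
    rule="Support response time doubled over the last two billing cycles.",
)
alert.status = "dismissed"  # a human reviewer can overturn the flag
```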

By reframing our work in these terms, the churn model transformed from a black-box warning system into a strategic partner, guiding leaders toward concrete retention actions instead of flimsy predictions.

What That Moment Taught Me About Ethical and Reliable AI

That experience rewrote my definition of AI reliability. It’s not a final checkbox; it’s a design philosophy woven into every stage of development:

  • Transparency: I learned to follow every strange data pattern, even when it meant pausing project timelines.
  • Auditability: I documented every calculation, data source, and decision rule to ensure full visibility.
  • Fairness: I corrected processes and labels to ensure our models reflected genuine customer experiences.
  • Human-Centered Design: I addressed root-cause issues in workflows rather than just masking them with smarter algorithms.

When teams can trace a recommendation to a clear rationale, they act with conviction, and that’s the competitive edge AI reliability delivers.

From Research to Prescription: Designing AI That Understands Context

My research background taught me that insight without action is wasted effort. So, I built systems that start with descriptive analysis, learn through pattern recognition, and deliver prescriptive insights:

  • Descriptive: “Last login was 45 days ago.”
  • Diagnostic: “Billing dates overlapped, causing confusion.”
  • Prescriptive: “Schedule a dedicated review call and adjust service terms before the next cycle.”

For example, we created a “billing overlap” flag that pinpointed when renewal notices arrived before previous invoices were closed, revealing a gap that led to customer frustration. Another feature tracked support response-time spikes that signaled rising dissatisfaction. These fine-grained, causal signals empowered proactive interventions rather than reactive firefighting.
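
As a rough sketch of how such a flag might be computed, assume a table of billing events with a renewal notice date and the close date of the prior invoice; the column names and pandas pipeline here are hypothetical, not our production code:

```python
import pandas as pd

# Hypothetical billing events: one row per account per billing cycle.
events = pd.DataFrame({
    "account_id":               [101, 101, 202],
    "renewal_notice_date":      pd.to_datetime(["2025-01-10", "2025-02-10", "2025-01-15"]),
    "prev_invoice_closed_date": pd.to_datetime(["2025-01-20", "2025-02-05", "2025-01-12"]),
})

# Flag cycles where the renewal notice landed before the prior invoice
# was closed -- the overlap that confused customers.
events["billing_overlap"] = (
    events["renewal_notice_date"] < events["prev_invoice_closed_date"]
)

# Roll up to an account-level feature the churn model can consume.
overlap_rate = events.groupby("account_id")["billing_overlap"].mean()
print(overlap_rate)  # 101 -> 0.5, 202 -> 0.0
```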

Every feature had to pass three tests:

  1. Explainability: Could I describe it in a sentence?
  2. Traceability: Could I map it to raw data and logic?
  3. Defensibility: Could I stand by its business value under scrutiny?
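
One lightweight way to enforce those three tests, sketched here with hypothetical names rather than the tooling we actually used, is to register every feature with its one-sentence explanation, its data lineage, and its business rationale, and reject any feature missing one:

```python
from dataclasses import dataclass

@dataclass
class FeatureSpec:
    name: str
    explanation: str  # explainability: one plain-language sentence
    lineage: str      # traceability: raw data and logic it derives from
    rationale: str    # defensibility: the business value it claims

def validate(spec: FeatureSpec) -> None:
    # A feature fails the gate if any of the three answers is blank.
    for check in ("explanation", "lineage", "rationale"):
        if not getattr(spec, check).strip():
            raise ValueError(f"{spec.name}: missing {check}")

validate(FeatureSpec(
    name="billing_overlap",
    explanation="Renewal notice arrived before the prior invoice was closed.",
    lineage="billing.renewal_notices joined to billing.invoices on account_id",
    rationale="Overlapping bills confuse customers and precede cancellations.",
))
```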

That discipline turned raw data into pragmatic action plans aligned with organizational goals.

Teaching the Machine to Reflect Human Values

Data without a values framework is hollow. To ensure our AI behaved responsibly, we embedded four core principles:

  1. Agency: Empower users with choices, rather than a single “best” recommendation.
  2. Hope: Frame outcomes with realistic optimism, never empty promises.
  3. Dignity: Avoid reductive labels; respect users’ context and identity.
  4. Clarity: Make every suggestion explainable, even to non-technical stakeholders.

We then built feedback loops where subject matter experts reviewed AI outputs, scoring them on these values and providing corrections. The system learned not only what to recommend but how to recommend it—with empathy and rigor. Over time, the AI grew more attuned to organizational culture and customer needs.
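
One simple way to represent that loop, using hypothetical names rather than our actual review tooling, is to score each recommendation against the four values and route anything that falls short back for rework:

```python
from dataclasses import dataclass

VALUES = ("agency", "hope", "dignity", "clarity")

@dataclass
class SMEReview:
    recommendation_id: str
    scores: dict          # value name -> 1..5 rating from the reviewer
    correction: str = ""  # reviewer's suggested rewording, if any

def needs_revision(review: SMEReview, threshold: int = 3) -> bool:
    # Any value scored below the threshold sends the output back for rework.
    return any(review.scores[v] < threshold for v in VALUES)

review = SMEReview(
    recommendation_id="rec-042",
    scores={"agency": 4, "hope": 5, "dignity": 2, "clarity": 4},
    correction="Drop the 'at-risk customer' label; describe the behavior instead.",
)
print(needs_revision(review))  # True -- the dignity score is too low
```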

How Building Systems Became Process Transformation

I’ve been asked to build everything from dashboards to full-blown recommendation engines. And each time, I discovered that the real work lay upstream of the model:

  • Workflow redesign: Standardizing definitions across teams.
  • Process documentation: Creating clear guides so everyone speaks the same language.
  • Cross-functional alignment: Ensuring data, product, support, and sales all agree on targets and metrics.

You can’t layer explainable AI on top of opaque processes. True reliability lives in the human effort to translate complexity into shared understanding, bridging silos and making systems worthy of trust.

The Hidden Work of Feature Development: From Descriptive to Prescriptive

Early in my career, I leaned on standard features—recency, frequency, and monetary value. But I soon realized: descriptive features show what happened; prescriptive features guide next steps.

I began engineering deeper signals:

  • Behavioral flags: shifts in usage patterns that precede churn (sketched after this list).
  • Contextual clusters: grouping customers by subtle interactions, not just broad demographics.
  • Causal inference indicators: features designed to reveal root causes, like reporting lags or support bottlenecks.
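
As a minimal sketch of the first kind of signal, assuming weekly usage counts per account (the numbers and window size below are illustrative), a behavioral flag can compare recent activity against a trailing baseline:

```python
import pandas as pd

# Hypothetical weekly usage counts for one account.
usage = pd.Series(
    [52, 48, 50, 47, 46, 20, 18, 15],
    index=pd.date_range("2025-05-05", periods=8, freq="W"),
)

# Trailing 4-week average, shifted so each week is compared
# only against the weeks before it.
baseline = usage.rolling(window=4).mean().shift(1)

# Flag weeks where usage fell below half of the trailing baseline.
usage_shift_flag = (usage / baseline) < 0.5
print(usage_shift_flag.tail(3))  # the last three weeks all trigger the flag
```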

Each new flag had to be justifiable:

  • Can I explain why it matters?
  • Can I trace it back to a logical data path?
  • Can I defend its impact in a board-level discussion?

This rigorous approach turned feature engineering into an ethical act, ensuring every data point served a clear business purpose and complied with transparency standards.

Final Reflection: Build AI You’re Proud to Explain

Every AI system I’ve helped design—from churn prediction to strategic content recommendations—has taught me one unshakable truth:

Clarity is the currency of trust.

  • If it can’t be explained, it won’t be believed.
  • If it can’t be traced, it won’t be defended.
  • If it can’t be actioned, it won’t be used.

That’s why I build for AI reliability—not because business demands it, but because responsibility demands it.

When you craft AI with transparency at its core, you create more than just accurate models; you forge lasting confidence. You enable leaders to move swiftly, knowing every recommendation stands on a foundation they can see and validate. You empower teams to act on, not second-guess, the system’s guidance.

The most powerful AI we can build is one we can confidently present, step by step, and say, “This system makes good decisions—and here’s exactly why.”

For more stories from the cutting edge of technology and how it intersects with business strategy, delivered directly to your inbox, subscribe to Impact’s newsletter, The Edge.


Dr. Zubia Mughal

Lead Data Researcher

Dr. Zubia Mughal is the Lead Data Researcher in the Department of AI at Impact, where she designs intelligent systems that help teams make smarter decisions and work more efficiently. Her focus is on translating complex business questions into structured data models that support prediction, pattern detection, and real-time reasoning. Zubia’s work blends experimentation, engineering, and machine learning to solve performance challenges that matter.


Tags: AI, Digital Transformation
