Bar Winkler | Oct 6, 2025

How to Identify the Right AI Use Cases

A Practical Guide from Real Support Data

Every executive today is being told the same story: agentic AI is ready to transform your operations. Agents can hold context, take actions, navigate systems, and increasingly behave like digital teammates. The promise is compelling - automate entire journeys, resolve customer issues end-to-end, and finally escape the gravity of manual workflows.

But here’s the part that gets lost in the excitement: agents don’t magically know which problems are worth solving.

Agentic AI often fails not because the models are weak, but because the use cases are wrong. Teams gravitate toward the workflows they assume are worth automating, the ones that feel important or visible from the outside. Meanwhile, the real operational bottlenecks - the places where agents could deliver disproportionate value - remain untouched.

A few months ago, an executive from a large automotive services company came to us with exactly this challenge. His support organization was exploring agents with the goal of reducing cost, but early prototypes weren’t landing. “We’re trying to automate everything,” he admitted. “But nothing is moving the needle.”

The problem wasn’t the agent. The problem was where the agent was being pointed.

So we walked his team through a different approach - one grounded not in intuition, but in operational truth. What follows is the guide that emerged from that work: a practical, data-driven way to identify high-impact automation opportunities in customer support - the kind that allow agentic systems to actually demonstrate their value.


Start With Evidence, Not Assumptions

Every support team carries a mental model of its workload: a shared folklore about the issues customers raise, the conversations that consume time, the moments where human reps struggle. The reality is almost always different. If you want to deploy agents responsibly and effectively, you need a factual picture of what customers are actually asking for, not the story that has grown inside the organization.

The moment we examined the call transcripts, the support operation revealed itself in a way the team hadn’t seen before. The workload wasn’t as broad or evenly distributed as they assumed; it was concentrated, patterned, and moving along far more consistent paths than any reporting system suggested. What they believed was a wide field of disparate issues turned out to be something much more structured. That structure is what ultimately defined the automation roadmap.


Map the Support Landscape - Then Validate It With Real Data

With the evidence in hand, the next step was to map it. We created a structured view of the support domain and used the call data to populate it.

What emerged was a support operation that looked very different from what the team had imagined. Two domains - Appointment & Booking on one side, and Damage & Diagnostics on the other - were carrying most of the organization’s weight. Together, they accounted for nearly three-quarters of all support interactions. Everything else, from billing to insurance to general inquiries, was riding in the margins.

And the shape of the customer journey mattered just as much as the categories themselves. A typical call wasn’t a single question with a clean handoff. It involved multiple decisions, clarifications, and follow-ups - about four distinct topics per call, on average. More importantly, no matter where customers started, many of their paths bent toward the same outcome. Nearly 80% of calls that touched on “booking,” even in passing, eventually turned into a booking journey. Booking wasn’t just a support category; it was the gravitational center pulling most conversations toward it.
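The two numbers above - topics per call and the share of booking-adjacent calls that end in a booking journey - fall out of the transcripts directly once each call is tagged with the ordered topics it touched. A minimal sketch, assuming hypothetical pre-tagged calls where the final topic is treated as the journey's outcome (the tagging itself, whether by classifier or by hand, is out of scope here):

```python
# Hypothetical pre-tagged calls: each call is an ordered list of the topics
# it touched; the final entry is treated as the journey's outcome.
calls = [
    ["damage", "booking", "scheduling", "booking"],
    ["billing", "insurance"],
    ["booking", "pricing", "booking"],
    ["diagnostics", "booking", "booking"],
]

# Average number of distinct topics raised per call.
avg_topics = sum(len(set(c)) for c in calls) / len(calls)

# Of the calls that touch "booking" at all, how many end as a booking journey?
touching = [c for c in calls if "booking" in c]
converging = [c for c in touching if c[-1] == "booking"]
convergence_rate = len(converging) / len(touching)

print(f"avg distinct topics per call: {avg_topics:.2f}")
print(f"booking convergence rate: {convergence_rate:.0%}")
```

Running the same two measures per topic, rather than only for booking, is what surfaces the gravitational center: the topic with the highest convergence rate is the one most conversations bend toward.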

Every support organization has its own gravitational center. The only question is whether you’ve uncovered it.


Identify the High-Leverage Support Use Cases

Recognizing where conversations cluster is a start, but it’s not enough to decide what to automate. To understand where agents can have meaningful impact in reducing cost, you need to know not only what customers ask about, but where human agents spend their time.

When we layered handling time onto the conversation volumes, the leverage points became obvious. A small set of issues, the same ones that appeared frequently, were consistently consuming a disproportionate share of human effort. Booking complex services, rescheduling or confirming appointments: these were not scattered, exotic interactions - they were a steady, repeated drain on human agent time.

The combination of volume and effort creates the clearest signal an enterprise can hope for. These high-load journeys are where AI agents can demonstrate unmistakable value. They’re structured enough to automate, common enough to matter, and costly enough that improved efficiency changes not just metrics, but the lived experience of customers and staff.
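In practice, "volume times effort" reduces to a simple aggregation: total human minutes consumed per call driver. A minimal sketch, assuming hypothetical per-call records of driver and handling time (real data would come from your telephony or ticketing system):

```python
from statistics import mean

# Hypothetical per-call records: (call driver, handling time in minutes).
records = [
    ("book complex service", 14), ("book complex service", 18),
    ("reschedule appointment", 9), ("reschedule appointment", 11),
    ("confirm appointment", 6),
    ("billing dispute", 25),
    ("general inquiry", 4), ("general inquiry", 5),
]

# Group handling times by driver.
by_driver = {}
for driver, minutes in records:
    by_driver.setdefault(driver, []).append(minutes)

# Leverage = volume x average handling time, i.e. total minutes consumed.
leverage = {
    driver: len(times) * mean(times)
    for driver, times in by_driver.items()
}

# Rank drivers by total human effort, highest first.
for driver, score in sorted(leverage.items(), key=lambda kv: -kv[1]):
    print(f"{driver}: {score:.0f} total minutes")
```

Note that the ranking can differ from a pure volume ranking: a rare but long driver can outrank a frequent but quick one, which is exactly why handling time has to be layered in before choosing what to automate.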

Every organization has a handful of these high-leverage flows. They rarely match the internal mythology. But once you find them, the roadmap becomes surprisingly clear.


Prioritize the Flows With the Highest Implementability

Identifying the high-leverage flows is the moment where things finally come into focus - but it also creates a new kind of decision. Once you can clearly see the set of conversations that dominate your support workload, the question shifts from what matters to what’s actually feasible to automate first.

This is where the real-world analysis became invaluable. The ranking of call drivers didn’t just expose the issues carrying the most load; it revealed which of those issues repeated with a level of clarity and structural stability that made them suitable for agentic automation. Some flows, like booking a service or diagnosing straightforward damage, appeared hundreds of times with almost the same conversational shape. The questions were predictable. The decision logic was stable. The transitions between steps rarely deviated.

These weren’t just important flows - they were implementable ones. They had the operational maturity that allows an agent to perform reliably without getting lost in exceptions or ambiguity. In practice, this meant we could build, evaluate, and iterate with confidence because the underlying workflow behaved the same way across thousands of real interactions.

Other flows, however, told a very different story. Billing issues, certain insurance questions, and long-tail service concerns showed up in the data as structurally messy - inconsistent phrasing, context-heavy reasoning, unclear boundaries, or highly individualized decisions. Even if they mattered to the business, they weren’t the right place to begin. Their variability would have slowed progress, created friction, and undermined trust in the early agentic deployments.
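The contrast between stable and messy flows can be given a rough number. One simple proxy, sketched below on hypothetical step sequences (real sequences would be extracted from transcripts), is the share of a flow's calls that follow its single most common conversational shape - stable flows concentrate on one path, messy ones scatter:

```python
from collections import Counter

# Hypothetical observed step sequences per flow: each tuple is the ordered
# steps one call went through.
flows = {
    "book service": [
        ("greet", "identify", "pick_slot", "confirm"),
        ("greet", "identify", "pick_slot", "confirm"),
        ("greet", "identify", "pick_slot", "confirm"),
        ("greet", "pick_slot", "identify", "confirm"),
    ],
    "billing issue": [
        ("greet", "lookup", "escalate"),
        ("greet", "dispute", "lookup", "refund"),
        ("greet", "explain", "lookup"),
        ("greet", "lookup", "dispute", "escalate", "refund"),
    ],
}

def consistency(sequences):
    """Share of calls following the single most common step sequence."""
    counts = Counter(sequences)
    return counts.most_common(1)[0][1] / len(sequences)

for name, seqs in flows.items():
    print(f"{name}: {consistency(seqs):.0%} follow the modal path")
```

A flow where most calls follow one modal path is a candidate for early automation; a flow where almost every call takes a different shape belongs later in the roadmap, regardless of how much it matters to the business.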

By starting with the flows that combined high leverage with high implementability, the customer was able to generate early wins that were both meaningful and repeatable. And once those foundational workflows were automated, adjacent ones - often sharing similar vocabulary, logic, or patterns - became dramatically easier to extend into. Implementability isn’t just a way to pick a starting point; it defines the rest of the automation roadmap.


The Best Automation Strategy Starts With Operational Truth

What ultimately enabled this customer to make real progress wasn’t a bold vision for AI or a long list of potential automations. It was the decision to root the roadmap in how the support operation actually worked. Once the team saw where conversations concentrated, where handling time accumulated, and which flows had the structural consistency to support automation, the path forward became straightforward. The early deployments weren’t chosen because they were conceptually exciting - they were chosen because the data showed they would work.

That clarity accelerated everything. In this case, the first measurable gains showed up in cost reduction, but the same method can be aimed at improving NPS, reducing wait times, increasing conversion, or strengthening compliance. The specific outcome shifts with the business goal; the approach stays the same.

Operational truth doesn’t eliminate the complexity of automation, but it focuses the effort. It prevents teams from spreading attention across low-value ideas and directs agents toward the work that meaningfully shapes the customer experience. When you start there, adoption is faster, trust builds sooner, and each subsequent deployment becomes easier to justify - and easier to deliver.

Agentic AI isn’t about automating everything. It’s about automating the right things first.

Begin with what the evidence shows. Begin with the flows that actually define your operation.

That’s where meaningful impact comes from - and where automation becomes durable instead of experimental.