AI readiness

  • Writer: Gee Virdi
  • Feb 16, 2024
  • 3 min read


The key to being AI-ready is to reframe the conversation. Skip the jargon and the hype and start, as I always say, with outcomes.

Here are four areas to focus on first—so you can bring some reality to your situation:

  1. What problems are we solving? AI for AI’s sake is a waste of resources. I’ve said the same thing about plenty of hype cycles before. Define use cases that clearly tie to value—whether that’s reducing operational inefficiencies, improving customer experience, or unlocking new revenue.

  2. What’s holding us back? Most organisations are slowed down by tangled systems, siloed data, and unclear decision-making. Fixing these foundations matters far more than chasing the latest algorithm. Sure, you can run experiments and do some “innovation”—but when you try to put those experiments into production, you’ll face a much steeper climb if you haven’t tackled the root causes.

  3. Who’s accountable? AI isn’t just a tech problem—it’s a business imperative. That requires clear accountability at the top, with leaders driving the cultural change needed to make AI stick. In most conferences and panel discussions I’m part of, the conversation starts with the tech… and eventually ends up at culture—because culture is what derails almost every transformation effort.

  4. How do we measure success? If you can’t define success in business terms, your AI strategy is flawed. Every initiative should tie back to measurable outcomes, from cost savings to revenue growth. Measurement isn’t about data or technology—it’s about whether you achieved the outcomes you set out to achieve. Too many transformations get stuck on measures like data-quality metrics. Those can matter within a given use case, but treat them as back-office metrics and keep business outcomes front and centre.

Your Data & AI Readiness Checklist

You might be tempted to start an AI readiness review with a long list of data and technical components. But that’s not where you should begin.

If I start by reviewing data quality before I’ve even defined the business outcomes or use cases, what’s the point? Until you’ve done that, anything purely about data or technology should sit low in the pecking order.

To help you get a clear picture of where you stand, here’s a checklist of questions designed to make sure your data and AI strategy is built around use cases that drive value. (I’ve used the five areas in Cisco’s research and added the questions I’d ask.)

Strategy

  • Have we identified specific business use cases that AI can address?

  • Are these use cases aligned with our strategic objectives and measurable outcomes?

  • Have we determined which type of AI (e.g., machine learning, neural networks, generative AI) is best suited for each use case—or do we actually need AI at all?

Data

  • Do we have access to the data needed to support our chosen use cases?

  • Is our data structured and clean enough to enable meaningful insights?

  • How well does our data align with the requirements of the AI models (e.g., training data for machine learning)?

Infrastructure

  • Do we have the technical capabilities to support the identified use cases?

  • Can we scale our infrastructure as the use cases evolve or expand?

  • Are our systems flexible enough to integrate with AI tools designed for these specific needs?

Governance

  • Are there clear governance frameworks to ensure AI use cases are ethical, compliant, and secure?

  • How do we monitor and manage risks specific to the use cases we are deploying?

  • Do we have a clear escalation process if AI outcomes don’t align with expectations?

Talent

  • Do we have the expertise to select and implement AI models that fit our use cases?

  • Are we investing in the right mix of technical and business skills to ensure use case success?

  • How are we fostering collaboration between data, technology, and business teams?

Culture

  • Is there a shared understanding across the organisation of how these use cases will drive value?

  • Are teams empowered to experiment with AI solutions for their specific challenges?

  • How are we addressing resistance to AI adoption tied to these use cases?
