
AI in Aviation: Reality check from EASA and NIST

  • Writer: Gee Virdi


A common scenario is unfolding in boardrooms: a polished AI demonstration is presented, stakeholders express approval, a roadmap is developed, and budgets are allocated. However, challenges quickly emerge: data is inconsistent, processes lack standardisation, and user trust is limited. EASA’s AI Roadmap 2.0 and NIST’s AI Risk Management Framework (AI RMF) address these issues by focusing on the requirements for safe and responsible AI adoption, rather than promoting AI itself. Their message is clear: most AI risk—and value—comes from data governance and human factors, not just algorithms. If you’re leading or funding transformation, this helps you ask sharper questions.

1) AI is not ‘just tech’. It’s a system.

EASA and NIST both describe AI as a socio-technical system. That’s a fancy way of saying the model is only one moving part. What matters is how it behaves within your organisation: your data, your people, and your operating environment. In practice, AI performance depends on three things working together:

  • Data quality and integrity

  • Human understanding and oversight

  • Operational context

Treating AI as a standalone tool or a simple purchase overlooks what it really is: a new capability that requires real changes to processes, roles, controls, and accountability.

2) Data: the foundation we tend to overestimate

Most organisations don’t lack data. They lack ready data—consistent, governed, traceable, and usable for the decision at hand. EASA and NIST keep coming back to this point for a good reason. It’s easy to assume you’re “data ready” because information exists somewhere in the business. The usual friction points are very ordinary:

  • Fragmentation across sites or business units

  • Inconsistent definitions and formats

  • Limited traceability and ownership

I’ve seen industrial pilots where the largest performance leap didn’t come from a more sophisticated model. It came from agreeing on common definitions, improving data capture, and assigning clear ownership. A rule of thumb: AI doesn’t address data problems—it amplifies them. Strong foundations result in fewer surprises.
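To make “ready data” concrete, here is a minimal sketch of the kind of automated readiness checks a team might run before feeding operational records to a model. It assumes pandas; the field names, the ownership register, and the checks themselves are illustrative examples, not requirements from EASA or NIST.

```python
import pandas as pd

# Illustrative ownership register: every field the model relies on has an
# agreed definition and a named owner. Field names here are hypothetical.
FIELD_OWNERS = {
    "aircraft_id": "Fleet Engineering",
    "event_timestamp": "Operations Control",
    "sensor_reading": "Maintenance Data Team",
}

def readiness_report(df: pd.DataFrame) -> dict:
    """Run basic data-readiness checks: coverage, completeness, duplication."""
    report = {}

    # 1) Every governed field exists in the dataset.
    report["missing_fields"] = [f for f in FIELD_OWNERS if f not in df.columns]

    # 2) Completeness: share of null values per governed field.
    present = [f for f in FIELD_OWNERS if f in df.columns]
    report["null_rates"] = {f: float(df[f].isna().mean()) for f in present}

    # 3) Traceability: duplicate records often signal fragmented capture
    #    across sites or systems.
    report["duplicate_rows"] = int(df.duplicated().sum())

    return report

# Usage sketch with toy data.
if __name__ == "__main__":
    df = pd.DataFrame({
        "aircraft_id": ["A1", "A1", "A2", None],
        "event_timestamp": pd.to_datetime(
            ["2024-01-01", "2024-01-01", "2024-01-02", "2024-01-03"]),
        "sensor_reading": [0.7, 0.7, 1.2, 0.9],
    })
    print(readiness_report(df))
```

The point of a report like this is not the code; it is that someone owns each check, reviews the output, and fixes the capture process when the numbers slip.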

3) Human factors: from operators to decision partners

Aviation has spent decades learning a hard lesson: automation can improve safety and performance, but only when humans understand what it’s doing—and when they feel confident challenging it. EASA brings this mindset into AI. NIST and EASA flag three predictable failure modes:

  • Over-reliance on automated outputs

  • Under-utilisation due to a lack of trust

  • Misinterpretation of AI recommendations

None of this is because people are “anti-AI”. It’s because most workplaces weren’t designed around probabilistic recommendations, uncertainty bands, and model limitations. As AI is introduced, everyday roles shift:

  • Operators become supervisors of automated decisions.

  • Engineers interpret probabilistic outputs rather than deterministic ones.

  • Managers remain accountable for outcomes supported by AI.

Without training, guidance, and time to adapt, teams end up both hesitant and over-reliant: the worst of both worlds.

4) Risk management: an ongoing habit, not a project phase

NIST makes an important point here: AI risk is dynamic. The model you sign off today can behave differently in six months because the world changes, and with it your customers, operations, and data. EASA mirrors this with lifecycle oversight: in European terms, the initial assessment matters, but ongoing monitoring is where trust is earned. Operationally, that means planning for:

  • Continuous validation of AI performance

  • Monitoring for drift, bias, or unexpected behaviour

  • Clear escalation paths when anomalies occur

Treating AI as a one-off raises risk and lowers value. Treating it as a living system lets you improve it safely over time.
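What does monitoring for drift look like in practice? Below is a minimal sketch of one common technique, the population stability index (PSI), which compares the data a model was validated on against what it sees in production. The thresholds and sample data are illustrative conventions, not limits drawn from EASA or NIST.

```python
import numpy as np

def population_stability_index(reference: np.ndarray,
                               current: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a reference sample (e.g. validation data) and a current
    production sample. Commonly cited rules of thumb: <0.1 stable,
    0.1-0.25 watch, >0.25 investigate. These are conventions, not rules."""
    # Bin edges come from the reference distribution.
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_counts, _ = np.histogram(reference, bins=edges)
    cur_counts, _ = np.histogram(current, bins=edges)

    # Convert to proportions; a small epsilon avoids division by zero.
    eps = 1e-6
    ref_pct = ref_counts / max(ref_counts.sum(), 1) + eps
    cur_pct = cur_counts / max(cur_counts.sum(), 1) + eps

    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

# Usage sketch: compare recent feature values against the validation set
# and escalate when drift exceeds the agreed threshold.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    validation_sample = rng.normal(loc=0.0, scale=1.0, size=5_000)
    live_sample = rng.normal(loc=0.4, scale=1.2, size=5_000)  # drifted

    psi = population_stability_index(validation_sample, live_sample)
    if psi > 0.25:  # illustrative escalation threshold
        print(f"PSI={psi:.2f}: escalate to the review owner")
    else:
        print(f"PSI={psi:.2f}: within tolerance")
```

The check itself is simple; the hard part is the organisational wiring around it: who looks at the number, how often, and what they are empowered to do when it crosses the line.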

With these three pillars in mind, what can leaders do to get the most from AI?

You don’t need to change direction or slow down. You do need to change what you pay attention to. A few practical shifts improve outcomes quickly:

  • Treat data governance as a value lever, not a back-office function—because it determines what AI can safely do.

  • Design for people early—training, guidance, and clear decision rights—so the organisation can use AI appropriately.

  • Integrate AI management into current safety and risk routines. Assign clear responsibilities so operational teams regularly review AI performance as part of their existing workflow. Avoid creating separate, disconnected review structures.

  • Keep accountability crystal clear: when decisions are AI-supported, leaders still own the outcome.

This isn’t about dampening innovation. It’s about building AI that people can trust, auditors can follow, and operations can sustain.

Three questions

If you want a quick way to cut through the noise, start here:

  • Data: Do we have one shared view of the key data we’ll rely on (definitions, owners, quality), or multiple competing versions?

  • People: Are teams trained and encouraged to question AI outputs—especially when the recommendation doesn’t match operational reality?

  • Risk: Are we set up to monitor performance over time and respond quickly when the model drifts or behaves unexpectedly?

These questions don’t slow you down. They stop you from scaling the wrong thing.

Conclusion

EASA and NIST formalise a simple truth: robust processes, transparency, and accountability are what make AI adoption work. Before green-lighting an AI initiative, request a one-page assurance summary covering: purpose and success criteria; data sources and quality; oversight and decision rights; and monitoring for drift and incidents. Make this a standard agenda item until you’re confident in the approach.


