The Great Productivity Mirage


  • AI isn’t failing - our organisations are. The productivity drought is a leadership and structural problem, not a technological one
  • We’re performing AI, not adopting it. Pilots succeed, scaling fails, and cosmetic innovation masquerades as transformation
  • Efficiency ≠ productivity. Incremental automation delivers convenience, not the step-change gains many industries promise
  • Rigid 20th-century institutions can’t absorb 21st-century intelligence. Stagnant data estates, siloed structures and risk-averse cultures sabotage AI’s potential
  • The oasis exists - few have reached it. Outliers in healthcare like Mayo, Moderna and Kaiser prove that AI delivers only when organisations rebuild themselves around continuous learning and adaptive design

The Great Productivity Mirage

Spend ten minutes with today’s headlines and you will be assured that healthcare, pharma, biotech and MedTech stand at the dawn of an algorithmic renaissance - an AI-powered golden age promising to collapse cost curves, accelerate discovery, liberate clinicians, smooth supply chains and lift productivity to heights not seen since the invention of modern medicine. Tech CEOs describe this future with evangelical conviction. Governments publish forecasts with a confidence outpacing their comprehension of the technologies they reference. Investors declare that artificial intelligence will eclipse every previous technological revolution - from electrification to the internet - propelling the life sciences into an era of significant growth.

A future of abundance is presented as inevitable. To question this narrative is to risk sounding regressive. To express doubt feels irresponsible.

And yet, against this rising tide of triumphalism sits a stubborn, increasingly uncomfortable fact: the much-promised productivity boom is not materialising. Not in health systems straining to meet demand, where administrative drag still consumes up to half of clinicians’ time and waitlists continue to grow. Not in pharmaceutical pipelines, where development cycles have lengthened as R&D spending reaches record highs. Not in MedTech manufacturing, where efficiency gains remain incremental. Not in biotech labs, where experiments still unfold at the pace of manual workflows rather than automated discovery. Everywhere you look, productivity curves remain flat - barely flickering in response to the noise, investment and rhetoric surrounding the AI “revolution.”
 
This is not a hidden truth; it is visible. Despite years of accelerating AI adoption, expanding budgets and soaring expectations, productivity across advanced economies continues to hover near historical lows, and healthcare is no exception. The gulf between AI’s transformative promise and its measurable economic impact widens each year, creating what might be called the Great Productivity Mirage - a shimmering horizon of anticipated progress that seems to recede the closer we get to it.

This paradox is not technological, but organisational. AI is not failing. We are failing to adopt it properly. And unless healthcare and life sciences leaders confront this fact with strategic honesty, the industry will continue pouring billions into tools that produce activity without impact. AI does not generate productivity. Organisations do. AI does not transform industries. Leaders do. AI is not the protagonist of this story. We are.

 
In this Commentary

This Commentary is a call to healthcare leaders to reconsider the foundations upon which AI is being deployed. It argues that the barrier to productivity is not the algorithms but the surrounding environment: the leadership mindset, the organisational architecture, the culture of work, the data landscape, the talent pool and the willingness to embrace disruption rather than decorate the status quo.
 
The Mirage in Plain Sight

Across advanced economies, productivity growth has been slowing markedly since the mid-2000s - a trend that has persisted despite rapid advances in digital and AI technologies. In healthcare and the life sciences, decades of technological advances have done little to shift the underlying reality: performance and productivity metrics have remained largely stagnant.

Hospitals continue to buckle under administrative load; workforce shortages deepen; and clinicians often spend more time navigating digital systems than engaging with patients. Supply chains remain opaque and fragile, while clinical-trial timelines stretch ever longer. R&D spend rises faster than inflation, and manufacturing operations still depend on legacy systems that resist integration. Meanwhile, the overall cost of care marches steadily upward. Perhaps most striking is the endurance of Eroom’s Law - the paradoxical pattern in which drug discovery grows slower and more expensive despite significant technological advances, a trajectory that still defines much of today’s R&D landscape.

This should not be happening. Historically, when general-purpose technologies reach maturity, their impact is unmistakable. Electricity radically reorganised industrial production and domestic life. The internal combustion engine reshaped cities and mobility. The internet collapsed distance and transformed nearly every aspect of organisational coordination. These technologies did not nibble at the edges; they delivered abrupt, structural changes.

 
By that logic, AI should be altering the trajectory of health and life sciences productivity. The data-rich, labour-constrained, complexity-intensive nature of the sector makes it theoretically ideal for algorithmic acceleration. Yet the promised boom fails to materialise. The needle barely flickers.

It is not that organisations lack enthusiasm. Everywhere you look, AI is showcased with confidence. Press releases trumpet “AI-enabled transformation.” Board presentations glow with colourful dashboards and heatmaps. Strategy documents overflow with algorithmic ambition. Conferences are filled with case studies describing pilots that “could revolutionise” clinical pathways, drug discovery, trial recruitment or manufacturing efficiency. But speak to the people doing the work, and the illusion begins to fracture.

The AI-enabled triage system that once dazzled executives now triggers alerts for almost half of all cases because its decision rules fail to capture the complexity and contextual judgement inherent in clinical practice.

The predictive model that appeared infallible in controlled testing collapses when confronted with inconsistent, delayed, or missing patient data. The documentation automation designed to save time generates drafts that clinicians spend longer correcting than they would have spent writing themselves. The MedTech manufacturing optimiser that performed flawlessly in simulation proves brittle the moment an exception or unexpected deviation occurs. Hospital workflows splinter as clinicians move between multiple systems, attempting to reconcile conflicting outputs and unclear recommendations. The pattern repeats across organisations: AI is highly visible, yet the productivity it promised remains stubbornly out of reach.

In most cases, the technology is not the failure. The environment around it is. AI shines under controlled conditions but struggles in the complexity of real operational systems. What organisations interpret as an AI problem is nearly always an organisational one. The productivity mirage is not a technological paradox. It is a leadership and structural paradox.

 
Performing AI Instead of Adopting It

Most organisations are not implementing AI - they are performing it. They deploy AI as a theatrical signal of modernity, an emblem of innovation, a cosmetic layer added atop processes whose underlying assumptions have not been reconsidered for decades.

This performative adoption follows a familiar script. Leaders announce an AI initiative. A pilot is launched. Early results are celebrated. A success story is published. Keynotes are delivered. The pilot is slightly expanded. And then… nothing meaningful changes. The system remains structurally identical, only now adorned with a few machine-generated insights that rarely influence decisions in any significant way.
 
This cycle generates motion but not momentum. The organisation convinces itself that it is innovating, when in fact it is polishing pieces of a system that should have been redesigned. These incremental steps shave minutes off processes that need reengineering. They create pockets of efficiency without generating productivity. They allow organisations to appear modern while avoiding structural change.

In healthcare and the life sciences, this incrementalism is seductive. The sectors are risk-averse by design, bound by regulatory scrutiny, professional norms and institutional inertia. Leaders often seek the illusion of progress without confronting the complexity of change. But incrementalism is not neutral - it is a trap. It creates a false sense of advancement that prevents transformation. The result is an economy overflowing with AI activity but starved of AI impact.
 
The Leadership Gap: When 20th-Century Minds Meet 21st-Century Intelligence

A central driver of the productivity mirage is the leadership mindset that dominates healthcare and the life sciences. Many senior leaders built their careers in an era that rewarded mastery of stability, long-range planning, controlled change and carefully optimised processes. They succeeded in systems where efficiency, predictability and compliance were the keys to performance.

But AI does not behave according to these rules. It is not linear, stable, predictable or controllable in the ways earlier technologies were. AI thrives on ambiguity; it improves through experimentation; it evolves through iteration; it rewards rapid learning and punishes rigidity. It is not a tool to be installed but a capability to be cultivated. It does not fit neatly within pre-existing governance frameworks; it demands new ones.

To leaders trained to minimise variability, AI’s adaptive nature appears chaotic. To leaders comfortable with regular, fixed decision cycles, AI’s dynamic responsiveness seems reckless. To leaders schooled in long-term planning, AI’s iterative experimentation feels unstructured. The consequence is significant: leaders often misunderstand what AI requires. They treat it as a procurement decision rather than an organisational transformation. They expect plug-and-play solutions when AI demands a rethinking of workflows, culture, incentives, governance structures and talent models. They look for quick wins while ignoring the long-term capability-building necessary to unlock value.

This leadership-capability gap is one of the most significant obstacles to realising AI’s productivity potential. AI punishes the wrong kind of intelligence - the intelligence optimised for linear stability rather than exponential change.

 
The Structural Incompatibility of AI and Traditional Healthcare Organisations

Even the most visionary leaders face a second barrier: the structural design of healthcare, pharma, biotech and MedTech organisations. These institutions were built for a world defined by control, standardisation and incremental improvement. Their architecture - hierarchical, siloed, compliance-heavy, process-centric - served them well in an era where efficiency was prized above adaptability.

AI, however, requires a different organisational substrate. It requires a system capable of continuous learning, not fixed processes. It demands fluid collaboration rather than rigid silos. It relies on rapid decision cycles rather than annual planning horizons. It thrives on cross-functional problem-solving rather than vertical escalation. It depends on an environment where data flows freely, not one where they are trapped in incompatible systems. It benefits from cultures that treat mistakes as learning events rather than career-damaging missteps.

In essence, AI requires organisations capable of adaptation. But healthcare organisations have been engineered for predictability. Their structures assume that change is the exception, not the norm. Their governance models assume that the safest decision is the slowest one. Their cultures reward caution, not experimentation.

This structural misalignment explains why so many AI initiatives collapse when moved from pilot conditions into real environments. Pilots are protected from organisational reality. Scaling exposes the system’s fragility. An organisation built for stability cannot suddenly behave like a learning system because a new technology has been introduced. You cannot place a learning system inside an organisation that has forgotten how to learn.

 
Data: Healthcare’s Silent Saboteur

Nowhere is the structural challenge more visible than in the sector’s data estates. Healthcare and life sciences organisations often insist they are “data rich.” In theory, this is true. But in practice, the data are fragmented, inconsistent, incomplete, duplicated, outdated, poorly labelled, or trapped in incompatible systems that cannot communicate.

In hospitals, critical patient data are trapped in electronic health records designed for billing rather than care. In pharmaceutical R&D, historical trial data are scattered across incompatible formats or locked within proprietary vendor systems. In clinical trials, important operational data are captured inconsistently across sites. In MedTech manufacturing, ageing systems and paper-based records - often still maintained in handwritten ledgers - capture only a narrow view of what modern optimisation requires. In biotech labs, experimental data are often stored in ad hoc formats or personal devices, rendering them unusable for machine learning.

Most organisations do not possess a unified, clean, connected data infrastructure. They possess the data equivalent of industrial waste - abundant but unusable without extensive processing. And when AI systems fail, mis-predict, hallucinate or degrade, the blame is usually placed on the model rather than the environment. But intelligence, whether human or artificial, cannot thrive on contaminated inputs.

The data problem is not a technical issue. It is an organisational one. It reflects decades of underinvestment in foundational infrastructure, incompatible incentives between departments and a cultural undervaluing of data governance. AI will not fix this. The environment must.
 
The Efficiency Trap: When Convenience Masquerades as Productivity

Healthcare organisations often conflate efficiency with productivity. They celebrate time savings or task automation as evidence of breakthrough transformation. They introduce AI-enabled documentation tools, intelligent scheduling assistants, automated reminders and workflow streamliners, believing these conveniences signify strategic progress.

But efficiency reduces cost; productivity increases value. Efficiency optimises the existing system; productivity redefines it. A hospital that automates documentation but leaves its care pathways unchanged has not become more productive. A biotech lab that accelerates data cleaning but leaves its experimental design untouched has not significantly increased discovery throughput. A pharmaceutical company that uses AI to scan chemical space more quickly but retains the same decision frameworks and governance structures has not accelerated R&D.

Convenience is not transformation. Marginal gains do not accumulate into structural change. The efficiency trap convinces organisations that they are evolving when in fact they are polishing the familiar.

 
Why AI Pilots Succeed but AI at Scale Fails

The healthcare and life-sciences landscape is strewn with promising AI pilots that never progress beyond their contained proving grounds. Pilots often succeed because they operate in isolation: they are sheltered from the organisational realities that determine productivity. In these controlled environments, teams can bypass inconsistent workflows, fragmented responsibilities, conflicting incentives, regulatory drag, brittle data pipelines, legacy IT constraints, procurement bottlenecks, risk-averse governance structures, and the professional identity concerns that shape day-to-day behaviour. A pilot succeeds because it is allowed to ignore the messy context in which value must be created.

Scaling, however, removes that insulation. When an AI system is introduced into routine operations, it collides with the frictions the pilot was designed to escape. Variability in clinical practice, the politics of cross-departmental collaboration, the inertia of entrenched processes, and the anxieties of staff asked to change their habits all reassert themselves. Data quality deteriorates once curated pipelines give way to real-world inputs. Compliance questions multiply. Accountability becomes ambiguous. What once looked like a technical victory is revealed to be an organisational challenge. The algorithm did not fail. The organisation did - not because it lacked technology, but because it lacked the conditions required for technology to take root.

 
The Hard Truth: AI Will Not Rescue Rigid Organisations

Many executives take comfort in the idea that the productivity gains promised by AI are deferred - that the next generation of models, the next leap in computational power, or the next wave of breakthrough applications will deliver transformative impact. This belief is understandable, but it is wishful thinking.

More powerful AI will not save organisations whose structures, cultures, and leadership models are misaligned with what AI needs to thrive. In fact, greater model capability often exposes organisational weaknesses rather than compensating for them. As AI systems become more capable, they demand clearer decision rights, cleaner data, faster iteration cycles, cross-functional cooperation, and leaders who can tolerate ambiguity and distribute authority. Where these conditions are absent, improvement stalls.

AI is an accelerant, not a remedy. It amplifies strengths and magnifies dysfunction. It rewards organisations that are adaptable - those willing to redesign workflows, challenge inherited norms, and cultivate teams able to integrate machine intelligence into everyday practice. But it punishes rigidity. Hierarchical bottlenecks, siloed teams, slow governance, and cultures resistant to experimentation become more obstructive when AI enters the system.

The result is divergence, not uplift. A small subset of organisations use AI to compound capability and pull further ahead, while many others - despite similar access to technology - see little return. The oasis of AI-driven productivity is real, but it will not materialise for organisations that attempt to modernise by applying new tools to old logic.

 
The Outliers: What Real Success Looks Like

Across healthcare, a handful of organisations - from Mayo Clinic’s AI-enabled clinical decision support programmes to Moderna’s algorithm-driven R&D engine and Kaiser Permanente’s predictive-analytics-powered care operations - have escaped the productivity mirage. They succeeded not by installing AI, but by rebuilding themselves around AI. Their trajectories offer a blueprint for what healthcare and life sciences could become.

These organisations treat data as a strategic foundation rather than an operational by-product. Moderna, for example, built a unified data and digital backbone long before it paid off, enabling its teams to iterate vaccine candidates in days instead of months. They collapse unnecessary hierarchy to accelerate decision-making - much like the Mayo Clinic task forces that integrate clinicians, data scientists, and engineers to deploy and refine AI safely inside clinical workflows. They empower multidisciplinary teams that blend domain expertise with technical skill, and they redesign workflows around intelligence rather than habit. Kaiser Permanente’s reconfigured care pathways for sepsis and hospital-acquired deterioration, guided by real-time machine-learning alerts, illustrate what this looks like in practice.

They manage risk through rapid experimentation rather than rigid prohibition, piloting fast, learning fast, and scaling only what works. They build continuous feedback loops in which humans and machines learn from each other - radiologists refining imaging models, or pharmacologists improving compound-screening algorithms - allowing both to evolve. Their gains are structural. They compress cycle times. They open new revenue streams. They elevate customer and patient experience. They increase innovation capacity. And critically, their employees feel more capable, not displaced, because AI augments human judgment rather than replaces it. These outliers prove the oasis exists. They also show how rare it is - and how much disciplined organisational work is required to reach it.

 
Healthcare’s Path Out of the Mirage

If healthcare, pharma, biotech and MedTech are to escape the Great Productivity Mirage, they must accept a truth: technology alone does not create productivity. The barrier is not the algorithm but the conditions into which the algorithm is deployed. Escaping the mirage requires a shift in leadership logic, organisational architecture, cultural norms, data discipline and talent models. It requires leaders willing to embrace ambiguity, nurture continuous learning and redesign the foundations rather than the surface. This is not an incremental challenge. It is a generational one.
 
Takeaways

The Great Productivity Mirage does not prove that AI is overhyped or ineffective. It proves that we have misjudged what AI requires and misunderstood what transformation demands. We have sought impact without capability, intelligence without redesign, revolution without revolutionary effort. But the promise remains real. The oasis is not fictional. It is visible in the healthcare organisations that have already rebuilt themselves around intelligence. The question now is whether others will do the same. AI is not the protagonist. We are. The future of healthcare depends not on the next breakthrough in models but on the next breakthrough in leadership. The productivity revolution is waiting. It is time to stop admiring the mirage - and start building the oasis.
