The Trust Paradox

Why do we accept AI in banking but fear it in healthcare?

Dr Ahmad Moukli

11/21/2025 · 5 min read

A physician's perspective on the dangerous gap between technology and medical training

As a practicing physician who has implemented AI solutions in clinical practice, I recently conducted a workshop on AI in medicine for fifteen newly graduated GPs. What I discovered shocked me: these bright, freshly trained doctors knew little to nothing about the potential of voice recognition technology or AI applications in medicine. Here were our newest clinicians, entering a world where AI permeates every other profession, yet their medical education had left them completely unprepared for the technological reality of modern healthcare.

This revelation crystallised a troubling paradox I've long observed. Every day, millions of us freely entrust our most sensitive financial data to algorithms. We allow AI systems to monitor our spending, approve our mortgages, and manage our investments. We share our location with Google, our thoughts with Meta, our shopping habits with Amazon, and our entire financial DNA with banks.

Yet when it comes to healthcare – where AI could literally save lives – not only do we hesitate, we're not even educating our doctors about its existence. This isn't just illogical; it's a systemic failure that's costing lives daily.

The education crisis nobody's discussing

The knowledge gap I witnessed in that workshop represents a profound institutional failure. These weren't resistant older physicians struggling with new technology; they were digital natives who use AI constantly in their personal lives but had no idea how to harness it professionally. Their medical schools had taught them anatomy from centuries-old principles (still vital) and diagnostic techniques from decades past (still important), but nothing about the AI tools that could make them exponentially more effective clinicians today.

Think about the absurdity: we're training physicians as if they'll practice in 1990, not 2030. They graduate knowing how to use a stethoscope but not how voice recognition could eliminate the hours they'll waste on documentation. They can read an X-ray but don't know that AI can catch cancers they might miss. They memorise drug interactions but aren't taught about clinical decision support systems that could prevent medication errors.

This educational blindness perpetuates the very resistance we need to overcome. How can young physicians advocate for AI in healthcare when they don't even know what's possible?

The sacred and the profane: understanding our double standard

I understand the psychology behind our resistance. Money feels recoverable – dispute a charge, rebuild credit. Health feels irreversible – a missed diagnosis, a life forever changed. There's something sacred about the doctor-patient relationship that doesn't exist with your banker. We want a human managing our mortality, even as we let algorithms manage everything else.

But here's what this emotional response obscures: the choice isn't between human care and AI care. It's between human care alone and human care enhanced by AI. More troublingly, it's between transparent, regulated AI that we can monitor and improve, versus the hidden AI that already permeates healthcare through insurance algorithms, hospital systems and pharmaceutical research.

The AI that's already here but hidden

The resistance to healthcare AI rests on an illusion: that we're keeping AI out of medicine. We're not. When your insurance company denies coverage, that's often an algorithm. When your appointment gets scheduled, that's AI. When your medication was developed, AI played a role. The question isn't whether to introduce AI to healthcare but whether to do it openly, ethically and effectively.

Consider what we already accept without question in finance:

  • Banks use AI to detect fraud across millions of transactions instantaneously

  • Credit companies employ algorithms that know our purchasing patterns intimately

  • Insurance firms use machine learning to assess risk and process claims

  • Investment platforms trust AI with retirement savings and life plans


A data breach in any of these systems could destroy someone's economic life. Yet we've collectively accepted these risks for the efficiency they provide. Why? Because the benefits demonstrably outweigh the risks and because we've built guardrails.

The guardrails exist – we need to strengthen and accelerate them

The concern about "appropriate guardrails" in healthcare AI is valid but already partially addressed:

  • Healthcare has HIPAA, GDPR and other frameworks that make financial data protection look permissive in comparison

  • Medical AI undergoes rigorous clinical validation and regulatory approval

  • Professional liability and malpractice laws create accountability

  • Institutional review boards and ethics committees provide oversight


But here's where my workshop experience becomes relevant: we need authorities to speed up the regulatory framework, not slow it down. The current pace of regulation is creating a dangerous gap. Technology is advancing rapidly while regulation crawls, leaving physicians untrained and patients unprotected. We need robust guardrails established quickly so that people can have appropriate faith in the system and physicians can be trained properly.

The transformative power we're not teaching

In my own practice, I've implemented AI solutions that those newly graduated physicians had never heard of:

  • Voice recognition and clinical documentation: AI transcription captures patient consultations with remarkable accuracy, freeing physicians from the documentation burden that drives burnout. Without it, the young doctors I taught will needlessly spend up to 49% of their time on paperwork (a minimal sketch of such a pipeline follows this list).

  • Diagnostic support: AI systems are matching or exceeding human specialists in detecting cancers, retinal diseases and cardiac conditions. Yet these new physicians weren't taught how to work with such systems.

  • Evidence-based medicine at scale: With medical knowledge estimated to double every 73 days (five doublings, a roughly 32-fold increase, every year), no physician can stay current. AI ensures the latest evidence is always accessible, yet we're not teaching doctors how to use these tools.

  • Predictive analytics: Machine learning identifies at-risk patients before symptoms appear, enabling prevention (a toy example follows this list). But if doctors don't know these tools exist, how can they use them?

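To make the voice-recognition point concrete, here is a minimal sketch of a transcription pipeline built on the open-source Whisper speech-to-text model. The audio file name and the note-drafting step are illustrative assumptions, not a description of the specific system I use in my own practice.

```python
# Minimal sketch: transcribing a recorded consultation with the
# open-source Whisper model (pip install openai-whisper).
# The file name is hypothetical; a real deployment would use a
# validated, governance-approved clinical system.
import whisper

# Load a small general-purpose model.
model = whisper.load_model("base")

# Transcribe a (hypothetical) recorded consultation.
result = model.transcribe("consultation_recording.wav")

# The transcript seeds a draft note that the physician reviews
# and corrects - the human stays in the loop.
print(result["text"])
```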

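Similarly, the predictive-analytics idea can be illustrated with a toy risk-flagging model in scikit-learn. The features, numbers and threshold below are invented purely for demonstration; a real clinical model would need far richer data, clinical validation and regulatory approval.

```python
# Toy sketch: flagging at-risk patients before symptoms appear.
# All data here is fabricated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: [age, systolic BP, HbA1c],
# labelled 1 if the patient later deteriorated, 0 otherwise.
X_train = np.array([[54, 150, 7.9], [61, 160, 8.4], [42, 120, 5.2],
                    [35, 118, 5.0], [68, 155, 8.1], [29, 110, 4.9]])
y_train = np.array([1, 1, 0, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

# Score a new patient; flag for early review above an
# (arbitrary) 0.5 risk threshold.
new_patient = np.array([[58, 148, 7.6]])
risk = model.predict_proba(new_patient)[0, 1]
if risk > 0.5:
    print(f"Flag for early review (predicted risk = {risk:.2f})")
```
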
The moral dimension of our educational failure

Here's the uncomfortable truth: while we fail to educate doctors and debate privacy concerns, people suffer unnecessarily:

  • Diagnostic errors occur in approximately 4.3% of UK GP consultations each year; one study estimates this amounts to as many as 6 million cases of moderate or serious harm. Many of these would be preventable with AI assistance.

  • New physicians enter practice already burning out from documentation burdens that AI could eliminate.

  • Rural communities lack specialists that AI could virtually provide, if only their doctors knew how.

  • Treatment delays occur because overwhelmed, undertrained physicians can't process information fast enough.


When those fifteen young physicians left my workshop, they were amazed and frustrated in equal measure. Amazed at what was possible; frustrated that their education had failed to prepare them for it. How many patients will suffer because we're sending doctors into practice technologically blindfolded?


A three-part call to action

We need courage on three fronts:

  • Educational reform: Medical schools must immediately integrate AI and technology training into their curricula. We cannot continue sending physicians into 21st-century practice with 20th-century training. Every medical student should graduate understanding voice recognition, AI-assisted diagnosis and clinical decision support systems.

  • Regulatory acceleration: Authorities must speed up, not slow down, the establishment of robust regulatory frameworks. The current regulatory pace creates dangerous uncertainty that prevents adoption and leaves patients vulnerable. We need strong, clear guardrails established quickly so that both physicians and patients can have appropriate faith in the system.

  • Consistent trust: We must apply the same calculated trust to healthcare AI that we already demonstrate daily in finance. The next time you tap your phone to make a payment or check your bank balance, remember: you're trusting AI with information that could destroy your financial life. Why won't you trust it, with proper oversight, to save your actual life?


The sacred nature of healthcare doesn't mean we should exclude technology. It means we should deploy it with even greater care, transparency and ethical consideration than we do in commerce. But we can't deploy what doctors don't understand, regulate what authorities won't expedite or trust what we refuse to acknowledge.

The question isn't whether AI belongs in healthcare; it's how quickly we can educate our physicians, establish our guardrails and start saving the lives we're currently losing to our own hesitation.

