Beyond Shadow AI: A Physician’s Blueprint For Clinical Sovereignty — Dr Lim Wan Chieh

AI cannot replicate the lived, human insight of a physician. It is simply the loom; the physician remains the weaver.

I am not a cybersecurity expert, nor am I a software engineer. I am simply a geriatrician trying desperately to survive while practising high-resolution medicine.

In geriatrics, we do not deal in single diagnoses; we manage clinical mosaics. A standard consultation involves untangling decades of medical history, polypharmacy, cognitive decline, and complex family dynamics.

Synthesising this into a coherent, personalised care plan takes hours of intense cognitive labour. Yet our health care economy richly rewards the procedural, such as the surgeon’s scalpel, while steeply discounting the cognitive.

For independent specialists, practising comprehensive, high-resolution medicine under stagnant fee schedules is economically punishing. If the system cannot financially value our time, we must urgently find ways to make our time invaluable. Enter Artificial Intelligence.

The Rise Of Shadow AI

As the Cabinet prepares to review the AI Governance Bill this June, a silent crisis is unfolding. Driven by exhaustion, doctors are rapidly adopting “shadow AI”, feeding sensitive patient histories into free, public Large Language Models (LLMs) to generate clinical summaries.

The efficiency is undeniable, but exposing patient identifiers to global algorithm training data is a catastrophic privacy breach. At the other extreme, doctors paralysed by the fear of liability avoid AI entirely, missing a critical lifeline for productivity.

We need a middle path. I did not approach AI with hesitation; I embraced the potential, but I have always refused to compromise patient data.

Rather than waiting for top-down solutions, I formalised my own secure ecosystem into a standard architecture. I call this the “clinical vault,” and it relies on two practical pillars that bridge the gap between clinical necessity and data sovereignty.

It is a framework any hospital IT department can and should provision for their clinicians today.

Pillar One: The Legal Firewall

Under the PDPA (Amendment) Act 2024, transferring sensitive health data across borders via consumer AI without enterprise safeguards is a direct statutory violation. You do not need to build custom software to solve this.

By simply upgrading a clinic’s free consumer software to a paid, commercial workspace (like Google Workspace or a Microsoft 365 equivalent), we unlock enterprise-level security. However, privacy is not automatic.

Doctors must go into their administrator settings and manually execute a Business Associate Addendum (BAA) or a local privacy equivalent. Once signed, this legally transforms the tech giant from a data-miner into a data-protector. It explicitly bars your clinical data from training their public models.

While the BAA is an American legal standard, it is presently the strongest corporate mechanism we have to legally bind tech giants to data protection. But we cannot rely on foreign legal constructs forever.

When the AI Governance Bill reaches the Cabinet this June, our lawmakers and medical societies must step up. We need a localised, PDPA-compliant equivalent to the BAA.

Regulators must give independent practitioners a clear, legally sound mechanism to activate enterprise-level privacy.

Pillar Two: The De-Identified Ledger

Enterprise encryption is only half the battle; we must also respect institutional data governance. To ensure absolute clinical sovereignty, I employ a strict operational air-gap in my workflow.

The master ledger linking the pseudonym (such as Case_042) to the patient’s true identity resides strictly within the hospital’s official, managed infrastructure, like the institutional EMR.

The AI processing, however, occurs in an isolated, hospital-approved enterprise workspace. The AI synthesis engine never learns a patient’s name or IC number.

Instead, I feed the isolated AI environment a de-identified clinical mosaic. It then generates the synthesised documents, such as complex referral letters or translated care plans.
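For readers in hospital IT wondering what this dual-ledger separation looks like in practice, the pattern can be sketched in a few lines of code. This is a minimal illustration, not Dr Lim’s actual system; every name here (`Patient`, `deidentify`, `LEDGER`, `Case_042`’s details) is hypothetical.

```python
# Minimal sketch of the dual-ledger pattern: the identity ledger stays on
# managed hospital infrastructure; only de-identified text leaves for the
# isolated AI workspace. All names are illustrative, not a real product.

from dataclasses import dataclass

@dataclass
class Patient:
    name: str          # true identity: stays inside the institutional EMR
    ic_number: str     # national ID: never leaves the hospital
    history: str       # the clinical mosaic the AI is allowed to see

# Master ledger mapping pseudonym -> identity. Lives only on the
# hospital's official, managed infrastructure.
LEDGER: dict[str, Patient] = {}

def deidentify(patient: Patient, case_id: str) -> str:
    """Record the pseudonym link locally; return only identifier-free text."""
    LEDGER[case_id] = patient
    return f"{case_id}: {patient.history}"  # no name, no IC number

# Only `prompt` is ever pasted into the enterprise AI workspace.
prompt = deidentify(
    Patient(name="(EMR only)", ic_number="(EMR only)",
            history="86yo, polypharmacy, recurrent falls"),
    case_id="Case_042",
)
```

The essential point is the one-way flow: the function that talks to the AI never receives the `name` or `ic_number` fields, and the re-identification table never leaves the institution.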

Privacy purists will rightly argue that in complex geriatrics, a detailed clinical mosaic functions as a biometric fingerprint. True anonymisation is nearly impossible. I concede this, but this is the stark reality of high-resolution medicine.

In other jurisdictions, patients pay thousands for this level of personalised synthesis. Here, stagnant fee schedules might reimburse RM235 for a case that takes hours to untangle. We do not have the luxury of endless time.

By combining an enterprise walled garden with strict pseudonymisation, we achieve a robust defence-in-depth. It is a necessary, calculated trade-off to keep high-resolution medicine economically viable without feeding the global data machine.

The Friction Paradox

To sceptics who argue this dual-ledger workflow is too clunky, I offer this reality: the few minutes spent on strict data hygiene are a mere fraction of the hours lost to manual, cognitive synthesis. It is a highly worthwhile tax for clinical sovereignty.

Eventually, regulated health-tech providers will likely emerge to seamlessly handle this secure infrastructure for us. But until they do, and until the law fiercely protects practitioners in the event these third-party startups suffer massive data leaks, physicians must take control of their own clinical vaults.

We urgently need hospital administrators to step up and provision these sanctioned dual-ecosystems, rather than leaving doctors to navigate Shadow IT alone.

The Empathy Engine

If a doctor wanted to use an AI as a lazy clinical shortcut to avoid thinking, geriatrics would be the worst specialty to choose. Standard guidelines constantly collide.

An algorithm cannot adjudicate multiple competing interactions and direct them into a care framework that protects what matters most to the patient. It cannot tell when to strategically concede on starting a clinically indicated medication simply because the immediate goal is to build crucial rapport. That nuanced negotiation is the art of medicine.

But when implemented safely as an administrative clerk, an LLM becomes an empathy engine. Complex referral letters are synthesised in seconds. Furthermore, while I speak to my patients in Mandarin, Hokkien, or Cantonese to offer comfort, the LLM instantly translates my complex care plans into perfectly written formal Chinese for their families to take home.

To those who argue that utilising an LLM is a clinical crutch, the reality is the opposite. In a secure workspace, every chat log is saved. As we implement this system, these Prompt Audit Trails serve as our definitive proof of work, documenting exactly how we instructed the model. The AI does not counsel the patient; it simply formats our clinical intent.
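An audit trail of this kind amounts to little more than an append-only, timestamped log of each instruction and the draft it produced. The sketch below shows one way it could be structured; the field names and helper are hypothetical, not a description of any particular enterprise workspace.

```python
# Illustrative sketch of a prompt audit trail: an append-only record of
# exactly how the clinician instructed the model, keyed by pseudonym only.
# All names here are hypothetical.

import datetime

def log_prompt(audit_trail: list[dict], case_id: str,
               prompt: str, draft: str) -> None:
    """Append one timestamped entry documenting human instruction and review."""
    audit_trail.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "case_id": case_id,       # pseudonym only, never the patient's name
        "prompt": prompt,         # the clinician's exact instruction
        "draft_reviewed": draft,  # the output the clinician then verified
    })

trail: list[dict] = []
log_prompt(trail, "Case_042",
           "Draft a geriatric referral letter from these de-identified notes.",
           "Dear colleague, ...")
```

Each entry documents the human instruction, which is precisely what makes the trail usable as evidence of oversight rather than delegation.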

Because the Malaysian Medical Council (MMC) dictates that a physician’s duty of care is strictly non-delegable, this audit trail serves as the crucial empirical proof of human oversight.

A Call For Official Blueprints

Across the Causeway, under their Budget 2026 initiatives, the Singaporean government is actively subsidising premium AI access for citizens who complete SkillsFuture AI courses.

They understand that fluency requires experimentation. Malaysia’s independent practitioners cannot afford to be left behind.

I am sharing this framework because we must forge a way forward. Regulators, hospital administrators, and medical societies must pivot from merely policing AI to empowering doctors with step-by-step blueprints for securing these clinical vaults.

AI cannot replicate the lived, human insight of a physician. It is simply the loom; the physician remains the weaver. But if we adopt this technology safely today, we can spend less time transcribing the past, and far more time looking our patients in the eye.

Dr Lim Wan Chieh is a consultant geriatrician based at Sunway Medical Centre Ipoh.

  • This is the personal opinion of the writer or publication and does not necessarily represent the views of CodeBlue.
