KUALA LUMPUR, April 17 — Doctors remain legally responsible for medical decisions, even as artificial intelligence (AI) becomes more integrated into health care, industry experts said at a recent forum on AI in medicine.
While AI can enhance diagnostics and streamline workflows, its role should remain supportive rather than decisive, they said. Without clear AI-specific regulations, liability in medicolegal cases will likely fall on doctors rather than AI developers.
“The responsibility is on the health care worker—in this case, the doctors. We still don’t have a complete AI policy in our country yet, nor in our ministry,” said Dr Muhammad Azrul Azizi Amir Hamdan, chief growth officer of Rayatech Sdn Bhd, during a panel discussion at the Future Healthcare Asia 2025 conference on February 19.

He cited a case where an AI-generated report failed to detect lung issues, leading to a premature patient discharge. “This shows that AI reporting can have false negatives. That’s why it should act as a support tool and not make the final decision,” he said.
“If the AI makes a mistake, who goes to court? The AI developers? The AI itself, which is non-sentient? Or the doctors?” Dr Azizi asked. In the absence of AI-specific regulations, he said courts would rely on existing local and international standards such as the Personal Data Protection Act (PDPA) and the US Health Insurance Portability and Accountability Act (HIPAA).
Dr Azizi, who is also a medical doctor and developer, stressed that clinician oversight is crucial. “Every time I give a course on hospital information systems (HIS), I always say: ‘Safe login, safe logout.’ You are the doctor—you log in, you log out, and that’s your responsibility. Your account, your name, your responsibility.”
He likened AI’s role to that of a nurse assistant. “AI can suggest a course of action, but as a doctor, do you accept it outright? No, right? You are the decision-maker. Before you submit anything in the system, you can delete, modify, etc. If you don’t agree with AI, you don’t submit. If you submit, it means you accept the suggestion—just like you would from a nurse assistant.”
Military Approach: AI For Precision Medicine

Brigadier General Dr Faridzal Harrymen Mohd Din, Head of the Department of Military Medicine for the Malaysian Armed Forces, stressed the need for proper training and data integrity.
“Before implementation, we need to ensure training, education, and responsibility. If we don’t instill that in our workforce before they embark on this journey, then it’s on us,” he said.
Dr Faridzal noted that, like the Health Ministry, the Armed Forces is working toward unified health records to enable AI-driven precision medicine.
“The data we collect, we hope that one day we can use it for predictive analytics, especially for personnel who need to be deployed for active duty. Our Armed Forces population is mainly young, between 20 and 40 years old,” he said.
Dr Faridzal believes that AI could help assess long-term health risks for military personnel even after retirement. “Precision AI, especially when using our data as a base model, can hopefully help us predict or conduct some form of recertification or risk identification while they are in service, allowing us to act early,” he said.
Data Quality A Key Concern

The discussion also touched on data quality. One audience member invoked the adage “garbage in, garbage out” while questioning the reliability of AI outputs.
Dr Faridzal said the military is prioritising real data over synthetic inputs. Synthetic inputs are artificially generated data rather than actual patient records, and relying on them can affect AI accuracy and reliability.
“That’s why our journey starts with recording and obtaining true data, then using machine learning to provide better risk analysis for diseases,” Dr Faridzal said.
Dr Yap Wei Aun, Director of the Health Transformation Office at the Ministry of Health (MOH), who moderated the panel, noted that the military benefits from having an institutionalised population with longitudinal health records.
“In the military, they have an institutionalised population, which is why they have lifetime health records. So in that specific context, they don’t face the challenge of relying on fragmented or synthetic data—they can use long-term, continuous health records.
“I’m really glad you’re talking about the next step, which is linking that data to developing our own scoring system. I think that’s important.
“From the civilian sector’s perspective, things are a bit more fragmented. But if Dr Mahesh [Appannan] (director of the MOH’s digital health division) was on the panel, he’d probably remind me that we still have large, real data sets, such as the national health screening initiative, immunisation records, and initiatives like teleprimary care. The EMR [electronic medical record] system, however, is likely more fragmented than that,” Dr Yap said.
Despite AI’s growing role in diagnostics and patient management, experts agreed that ultimate responsibility must remain with human clinicians. “AI is not here to replace doctors,” Dr Azizi said. “It’s just a tool.”

