Health Systems Action

“Hey Siri, write my notes” – can SA leapfrog from paper to electronic medical records?

A number of electronic medical record systems have been adopted in South African private practice, but handwritten notes are still common, and probably the norm. The medicolegal risks created by the frequent illegibility and incompleteness of these notes are often reported by malpractice insurers, yet the problem persists.

Ambient Clinical Intelligence (ACI) is the term for Artificial Intelligence (AI)-based technology that automatically transcribes a clinical encounter (medical visit) and helps generate a comprehensive, structured medical note. It is the fruit of dramatic progress in voice recognition and Natural Language Processing, which has produced powerful Large Language Models and, just a year ago, the public launch of ChatGPT and its rivals.

This technology could be transformative in healthcare, potentially enabling a “leapfrog” to electronic medical records (EMR) without the burnout-inducing clerical burden they have created in the USA.

While accurate and comprehensive notes will provide clear evidence of the care provided, which is crucial in defending against malpractice claims, several aspects of ACI need evaluation before the support of insurers and other stakeholders can be justified and investments are made.

Does ACI work “as advertised” to accurately record, transcribe and convert what is heard in the examination room into detailed, accurate and comprehensible medical records?

Voice recognition and Natural Language Processing (NLP) technologies are now good enough for everyday use. For consumers, examples include Apple’s Siri and Amazon’s Alexa; for clinicians, Microsoft’s Nuance and other commercial products. But capturing doctor-patient conversations is a new challenge; unpredictably varied accents, dialects and languages, and the noisiness of some clinical environments add to the difficulty.

Generative AI has shown remarkable ability to make sense of “unstructured” narrative data, in documents and in conversations, including in the clinical setting. But one reason for concern is that this form of AI is known for producing “hallucinations” – overt misinformation or fabrications delivered with confidence – that might have dangerous consequences if included in clinical records.

How will AI-generated notes be integrated into medical record systems? What is the workflow of note review, editing and acceptance? Who does it, and when? How much work and time is involved? The initial phase of integrating ACI into practice will involve a learning curve, during which time the risk of error could increase. In reality, some commercial ACI systems send the AI note to external (human) reviewers and return it only the next day, creating a safety net but somewhat diminishing the value.

With accurate and comprehensive electronic documentation comes the potential to reduce diagnostic errors, prescription mistakes, and other issues that lead to malpractice claims. This may happen through improved availability of good clinical documentation, but also via electronic clinical decision support systems (CDSS) potentially built upon these records. However, such benefits are context-dependent and do not automatically or consistently materialise. How are records to be made accessible to other care team members, and to patients? CDSS alerts and reminders are frequently shown to have low utility and are then ignored or disabled, which may paradoxically increase risk.

How will patients feel about an AI “listening in” to the examination room? Patient engagement and communication could improve as doctors are able to focus more on the patient and less on note-taking. But distrust is also possible: patients may be fearful about their most private and sensitive concerns being recorded, and worried about possible misuse. Patients should be free to opt out.

ACI systems, like any digital health tool, must ensure protection of data at all times. A breach will damage trust and open the door to litigation, especially when it comes to personal health information.

As these systems are adopted, concerns may arise about the consequences of over-reliance. What clinical skills will be lost? What happens when the system goes down – or when patients do not consent to its use? What happens when the AI provides incorrect data or suggestions?

The legal and ethical landscape surrounding the use of AI in healthcare is still evolving. The assignment of liability, especially in cases where technology may contribute to an adverse event, is a complex area that has to be closely scrutinised.

Stakeholders will be interested in how well clinicians are trained to use ACI technology and how it is implemented. They will also need to monitor how ACI systems are adapted and improved over time, ensuring they continue to provide value and reduce risk. Insurers will be interested in acquiring firm evidence that ACI is effective in reducing malpractice incidents.

Last but not least, what is the cost? Advertised costs for market leaders, priced in USD, seem well ahead of what could be borne locally by medical practices. Will system benefits be large, and will some costs therefore be borne willingly by other stakeholders?

ACI is a promising tool for enhancing patient care and reducing risk, but malpractice insurers and other interested parties may need to be cautious and thorough in assessing implications for liability, privacy, and the changing dynamics in patient care and clinical decision-making.
