Artificial intelligence (“AI”) is reshaping industries worldwide, and healthcare is no exception. The South African healthcare sector faces limited resources, staff shortages, and unequal access; AI, however, presents an opportunity to enhance care delivery, improve diagnostic accuracy, and support overburdened healthcare professionals. From virtual assistants to predictive analytics, AI is redefining how medical diagnoses are made, care is provided, and clinical decisions are supported. Yet with great innovation comes great responsibility: as AI becomes increasingly embedded in clinical practice, concerns around reliability, ethics, accountability, and regulation become more pressing. When deployed with the right safeguards, AI has the potential to deliver safer, more efficient, sustainable, and accessible healthcare to communities across the country. This article explores how AI can be responsibly integrated into the South African healthcare system, focusing on its role in diagnostics, decision-making, and sustainability, while outlining the ethical, legal, and practical guardrails essential to its safe and equitable use.
AI already has various use cases in the healthcare sector, but a study published in late 2024 by Beth Israel Deaconess Medical Center in Boston, Massachusetts, offers a striking benchmark: it compared the diagnostic performance of ChatGPT-4 (“ChatGPT”) with that of practising physicians. ChatGPT achieved a 90% accuracy rate in diagnosing case reports; physicians randomly assigned to use ChatGPT alongside their traditional clinical diagnostic methods scored 76%, while physicians who did not use ChatGPT at all scored 74%. While the study highlights AI’s ability to enhance clinical decision-making, it also exposes the challenges of implementing AI in the healthcare sector. When presented with AI-generated insights, many physicians in the study were reluctant to revise their initial diagnoses, highlighting the cognitive biases and trust barriers that can arise in human-AI collaboration. The study reflects both the promise and the complexity of using AI in healthcare: successful implementation depends not only on performance, but also on trust, transparency, and clinical judgment. Although the study did not specifically consider the sustainability of incorporating AI into healthcare in a responsible manner, the World Economic Forum (“WEF”), in its article entitled ‘The energy paradox in healthcare: How to balance innovation with sustainability’, highlights that if global healthcare were a nation, it would rank as the world’s fifth-largest emitter of greenhouse gases. While the WEF recognises that training large AI models, such as ChatGPT, consumes significant amounts of electricity if not adequately managed, it notes that task-specific AI models (i.e. narrow AI), which require less computational power, could offer the best of both: enhanced patient outcomes (as indicated in the study) while minimising the adverse environmental impacts of AI.
In South Africa, AI integration must be guided by existing legal and ethical frameworks and adapted to the country’s unique healthcare context. While there are no AI-specific regulations as yet, the Health Professions Council of South Africa (“HPCSA”) provides essential ethical standards through, inter alia, its General Ethical Guidelines for Healthcare Professionals and, in particular, the General Ethical Guidelines for Good Practice in Telehealth. These guidelines underscore the importance of patient autonomy, informed consent, practitioner accountability, and confidentiality – principles that apply equally when AI tools are used. For instance, consistent with the principles of informed consent and practitioner accountability, patients should be informed when AI assists in their care, and clinicians must retain full responsibility for final decisions, even though current HPCSA guidelines do not yet explicitly address the use of AI tools in clinical care. Compliance with the Protection of Personal Information Act 4 of 2013 (“POPIA”) and the National Health Act 61 of 2003 (“NHA”) is also essential to ensure the lawful processing of health data.
The HPCSA’s telehealth guidelines do not currently make provision for, or substantively address, the use of AI in clinical diagnostics. The existing guidelines remain anchored in an outdated telemedicine framework that assumes a face-to-face consultation and a physical examination by a registered practitioner are essential to ethical care. This model does not contemplate the semi-autonomous or autonomous role that AI tools are increasingly playing in diagnostic processes. As a result, there is no regulatory clarity on whether or how AI can be lawfully and ethically integrated into clinical decision-making. This gap is compounded by the fact that South African law does not recognise AI systems as legal actors, meaning they cannot function as “servicing practitioners” in partnership with practising clinicians.
This omission is not merely theoretical: it threatens to stifle innovation and to limit the benefits that AI can offer in improving access to care and clinical accuracy. The guidelines, as they stand, may also be inconsistent with national digital health policy, which promotes technological innovation as a strategic imperative. To close this gap, the HPCSA’s ethical guidelines would need to be amended to explicitly permit the use of AI and to provide guidance on key concerns such as informed consent, accountability, liability, and transparency in AI-assisted care. These reforms are not just desirable; they are necessary. A key issue, however, is that the current common-law framework of fault-based medical negligence may be ill-equipped to address harm arising from complex AI systems where no clear human error is traceable, leaving both patients and practitioners exposed to unacceptable legal uncertainty.
Contextual relevance is equally critical. Many popular AI health models are trained on datasets from high-income countries, limiting their accuracy in South African settings with different disease patterns, resource constraints, and systemic challenges. To ensure clinical value, these tools must be validated against, or even retrained on, local data. Without localisation, there is a risk of bias or inaccuracy, particularly in a healthcare landscape already marked by inequality. This applies not only to diagnostic tools but also to AI’s growing role in public health surveillance, predictive analytics, and telemedicine. In these spaces, AI-powered chatbots and triage tools are increasingly used to guide patients, recommend next steps, and help manage care, particularly where access to clinicians is limited.
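By way of illustration only, the Python sketch below shows one way a hypothetical diagnostic tool might be validated against local data before deployment, reporting accuracy per patient subgroup so that underperformance in a particular setting can be surfaced. The model, the subgroups, and the 80% threshold are assumptions for the purpose of illustration, not a prescribed or regulatory methodology.

```python
# Illustrative sketch only: validating a hypothetical AI diagnostic tool on
# local (South African) data before clinical use. The stand-in model, the
# subgroup labels, and the 80% threshold are assumptions, not standards.
from dataclasses import dataclass

@dataclass
class PatientCase:
    features: dict          # e.g. symptoms and vitals from a local dataset
    subgroup: str           # e.g. "urban" or "rural" care setting
    true_diagnosis: str     # reference diagnosis confirmed by local clinicians

def hypothetical_model(features: dict) -> str:
    """Stand-in for an AI diagnostic tool; returns a predicted diagnosis."""
    return "TB" if features.get("chronic_cough") else "other"

def validate_locally(cases: list[PatientCase], threshold: float = 0.80) -> bool:
    """Check per-subgroup accuracy on local validation data."""
    by_group: dict[str, list[bool]] = {}
    for case in cases:
        correct = hypothetical_model(case.features) == case.true_diagnosis
        by_group.setdefault(case.subgroup, []).append(correct)
    deployable = True
    for group, results in by_group.items():
        accuracy = sum(results) / len(results)
        print(f"{group}: {accuracy:.0%} accuracy on {len(results)} cases")
        deployable = deployable and accuracy >= threshold  # flag weak subgroups
    return deployable

# Tiny hypothetical validation set; real validation would use far more cases.
cases = [
    PatientCase({"chronic_cough": True}, "rural", "TB"),
    PatientCase({"chronic_cough": False}, "rural", "other"),
    PatientCase({"chronic_cough": True}, "urban", "other"),
    PatientCase({"chronic_cough": True}, "urban", "TB"),
]
print("Deployable:", validate_locally(cases))
```

The point of the sketch is the design choice rather than the code itself: measuring accuracy per subgroup, not just overall, is one practical way to detect the localisation bias the paragraph above describes.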
Encouragingly, South Africa is not merely a passive consumer of global AI innovations; it is actively shaping its own. Local platforms like the SA Doctors App are pioneering the integration of AI-driven chatbots into telemedicine services, offering patients accessible, real-time support for triage, symptom checking, and appointment scheduling. These tools are already improving early engagement and streamlining care, particularly in rural and underserved communities. South African institutions are also developing proprietary algorithms trained on local clinical trial data, reflecting a growing commitment to building AI solutions that are not only innovative but also contextually relevant. These developments signal a shift toward a more confident, proactive approach to digital health – one that embraces AI as a tool for expanding access, enhancing quality, and reimagining the future of care.
This shift toward AI-integrated healthcare must be accompanied by clear operational, ethical, and regulatory guardrails. At a minimum, these should include –
- Human oversight and accountability: AI must support, not replace, clinical judgment;
- Local validation and transparency: AI systems must be tested on South African data and provide outputs that clinicians can understand and act on;
- Data protection compliance: Any AI application processing personal health information must comply with POPIA and the NHA, ensuring that all data-handling practices meet legal and ethical standards;
- Informed consent: Patients must be clearly informed about how their personal health data is collected, used, stored, and protected, and must provide explicit consent before any data processing occurs (a minimal illustrative sketch follows this list); and
- Equitable access and inclusivity: AI should close, not widen, healthcare gaps, particularly between public and private sectors, or urban and rural communities.
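As a purely illustrative sketch, and not legal advice or a compliance implementation, the Python snippet below shows how explicit patient consent might be checked, and an audit trail kept, before an AI tool processes personal health information, in the spirit of the POPIA and informed-consent guardrails above. The record fields, the consent flag, and the ConsentError type are hypothetical.

```python
# Illustrative sketch only: gating AI processing on explicit, recorded patient
# consent, in the spirit of POPIA/NHA obligations. Field names and the
# ConsentError type are hypothetical, not statutory terms.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_processing_audit")

class ConsentError(Exception):
    """Raised when explicit consent for AI-assisted processing is absent."""

def process_with_ai(record: dict) -> str:
    # Explicit, purpose-specific consent must exist before any processing.
    if not record.get("consent_ai_processing", False):
        raise ConsentError(f"No explicit consent for patient {record['patient_id']}")
    # Keep an auditable trail of what was processed, when, and for what purpose.
    audit_log.info(
        "AI triage for patient %s at %s (purpose: %s)",
        record["patient_id"],
        datetime.now(timezone.utc).isoformat(),
        record.get("purpose", "triage"),
    )
    return "AI triage suggestion generated"  # stand-in for the actual AI call

try:
    process_with_ai({"patient_id": "ZA-0001", "consent_ai_processing": True})
    process_with_ai({"patient_id": "ZA-0002"})  # no consent recorded: blocked
except ConsentError as err:
    print("Processing blocked:", err)
```

The design choice illustrated here is that consent is verified before, and logged alongside, every act of processing, so that accountability does not depend on after-the-fact reconstruction.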
As AI continues to evolve, its integration into South African healthcare must be grounded in the broader goals of equity, safety, sustainability, and quality care. AI holds significant promise in public health surveillance, early outbreak detection, personalised treatment planning, and chronic disease management, offering new tools to strengthen healthcare delivery. Realising this potential will require coordinated effort: policymakers must create clear regulatory pathways; healthcare institutions must invest in infrastructure, training, and oversight; and developers must prioritise ethical design and contextual relevance. The Beth Israel study affirms AI’s value in diagnosis, but also underscores the importance of trust, transparency, and human judgment in its application. With robust safeguards, local validation, and patient-centred implementation, South Africa is well positioned to harness AI not just as a technological advancement, but as a strategic tool for building a more inclusive, effective, sustainable, and resilient healthcare system.
Written by Aphindile Govuza, Director; Boitumelo Moti, Director; Janice Geel, Associate; and Malique Ukena, Candidate Attorney; Werksmans