AI Law - International Review of Artificial Intelligence Law | CC BY-NC-SA Non Commercial Licence | ISSN 3035-5451
G. Giappichelli Editore

21/04/2026 - South Africa’s Draft AI Policy: Key Actions for Organisations Before the 10 June 2026 Consultation Deadline (South Africa)

Topic: News - Administrative Law

Source: Cliffe Dekker Hofmeyr

South Africa’s Draft National Artificial Intelligence Policy was approved by Cabinet on 25 March 2026 and gazetted on 10 April 2026. The Department of Communications and Digital Technologies has opened it for public comment, with written submissions due by 16h00 on 10 June 2026 to aipolicy@dcdt.gov.za. The Draft proposes a new AI governance ecosystem comprising a National AI Commission, an AI Ethics Board, an AI Regulatory Authority, an AI Ombudsperson, a National AI Safety Institute and an AI Insurance Superfund intended to compensate individuals harmed by AI systems where liability is uncertain. It adopts a risk-based regulatory framework modelled on international instruments such as the EU AI Act, with stricter requirements expected for higher-risk AI systems deployed in sensitive sectors such as healthcare, financial services, law enforcement and critical infrastructure.

Significant gaps are identified: the Draft does not define high-, medium- or low-risk AI systems, creating uncertainty for organisations that build or use generative AI, automated decision-making tools and large language models. The roles, mandates, independence, funding and accountability mechanisms of the proposed institutions are unclear, creating risks of duplication and jurisdictional overlap with existing regulators. The AI Insurance Superfund is underdeveloped, with no detail on funding sources, qualifying harms, causation assessment or interaction with existing laws such as POPIA. The Draft seeks alignment with POPIA (including its automated decision-making provisions under section 71), promotes data protection by design, and proposes mandatory watermarking of training data, cross-border data flow protocols to protect data sovereignty, and “sufficient explainability” for high-risk systems. Organisations, particularly in financial services, healthcare, technology and digital media, are encouraged to map their current and planned AI use to identify potential high-risk exposure and to submit written comments during the consultation period.