Insights in this article provided by Zaakir Mohamed (Director, Head of Corporate Investigations and Forensics, CMS South Africa), Mawande Ntontela (Senior Associate, Corporate Investigations and Forensics, CMS South Africa), and Ayanda Luthuli (Candidate Attorney, CMS South Africa)
Since the turn of the millennium, 24 years ago, technology has advanced at an unprecedented pace. In that time, we have seen the emergence of smartphones, cloud computing, and ultra-high-speed internet, among other innovations. Each of these technologies has brought significant changes to business, society, and our personal lives.
The changes felt by businesses and other organisations have been particularly significant. Disruptive, cutting-edge tools have transformed how organisations carry out their commercial operations. By optimising certain technologies and tools, organisations can, amongst other things:
- automate certain back-office functions such as accounting, payroll and record keeping, which significantly improves efficiency;
- create secure environments for maintaining sensitive business and/or consumer information; and
- aggregate, analyse and process data to assist organisations in exploiting novel communication tools (such as social media) to increase sales and profitability.
The pace of technological change shows no signs of slowing down either. One need only look at the rapid advances in artificial intelligence (AI) over the past couple of years to see that.
While there have been significant gains in AI over the past decade or so, the explosion of generative AI into the public consciousness, coupled with significantly improved computing power and the increasing availability of data, has driven many businesses to consider how the technology could impact their operations.
While there are undoubtedly benefits for organisations willing to embrace AI, there are also risks. The rapid development and integration of AI systems in their business operations pose numerous challenges, specifically in managing and using data or personal information belonging to employees, customers, and suppliers.
Understanding AI
Broadly speaking, AI refers to systems that exhibit intelligent behaviour and are capable of rapidly analysing various activities and environments, making independent decisions, and achieving a specific objective. Conventional AI systems are characterised by their ability to perform activities typically associated with the human mind, such as the ability to perceive, learn, interact with an environment, problem-solve and, in certain instances, exhibit creativity.
All of those characteristics have significant benefits for organisations. Those benefits must, however, be balanced against the compliance obligations placed on organisations regarding the processing and management of data and personal information. These requirements present numerous legal risks and challenges, which must be considered in detail before the deployment of AI systems and solutions.
AI and South African data privacy legislation
When it comes to achieving regulatory compliance, organisations must understand the two distinct categories of AI system that bear relevance to their business operations. Generative AI models use algorithms to generate content based on the analysis of patterns in data and can learn to improve their own output. Applied AI models, by contrast, use machine learning algorithms to analyse data and make predictions and/or decisions based on the data processed.
It is important to note that in South Africa, there is no comprehensive legislative framework to regulate the integration and use of AI and machine learning technologies. There are, however, provisions in the Protection of Personal Information Act No. 4 of 2013 (“POPI”) that impose certain obligations on how organisations can utilise and deploy AI systems.
While POPI does not explicitly address the full operational parameters and capabilities of modern AI systems, it does regulate the processing of data/personal information by automated means. Section 71(1) of POPI provides that a data subject may not be subjected to a decision that has legal consequences for them, or affects them to a substantial degree, where that decision is based solely on the automated processing of personal information intended to profile them.
As an example, the use of automated decision-making tools to perform a credit assessment to establish the creditworthiness of a given data subject is generally not permitted in terms of section 71 of POPI, unless said decision:
- has been taken in connection with the conclusion or execution of a contract, and either:
  - the request of the data subject in terms of the contract has been met; or
  - appropriate measures have been taken to protect the data subject’s legitimate interests; or
- is governed by a law or code of conduct in which appropriate measures are specified for protecting the legitimate interests of data subjects.
Should organisations not implement appropriate data protection compliance measures and programmes, in conjunction with the integration and deployment of AI systems, they risk enforcement notices, sanctions and fines. Organisations should therefore consider taking the following measures:
- conducting a POPI impact assessment on the AI-related systems that they utilise, to ensure that such processing activities are carried out within the prescripts and parameters provided in POPI; or
- preparing and delivering to the Information Regulator a prior authorisation application in terms of section 57 of POPI, which generally requires responsible parties to obtain the prior authorisation of the Information Regulator if they intend to (i) process any unique identifiers of data subjects for a purpose other than the one intended at the collection of such personal information; and (ii) link that information with information processed by other responsible parties.
While the immense potential of AI systems may present meaningful opportunities for local organisations to optimise and/or diversify their commercial operations, such potential cannot be realised without implementing appropriate controls to ensure compliance with POPI and other related governing legislation that may apply to AI systems.