
Picture credit: HPE Emirates and Africa
Discussions around the responsible adoption of AI increasingly take centre stage in boardrooms across Africa. A significant number of CEOs now grapple with the ethical complexities that come with implementing this powerful technology. One pressing issue under growing debate is AI’s environmental footprint.
As the effects of climate change intensify, bringing more frequent and severe weather events across the continent, the need to prioritise environmental initiatives becomes more urgent. However, only a few African leaders feel confident that their companies will meet their net-zero commitments.
The substantial energy demands of AI workloads are a rising concern. To put it in perspective, AI models depend on extremely powerful computers that consume vast amounts of electricity. A typical AI-centric data centre uses as much electricity as 100,000 households. Even more alarming, the largest data centres under construction are expected to consume up to 20 times that amount. Since most local electricity generation still relies on greenhouse gas-emitting sources, rapid AI expansion could dramatically increase carbon emissions, especially in South Africa, where businesses remain heavily dependent on fossil fuels.
To help organisations tackle this challenge, HPE developed a systematic approach that simplifies the complexities of AI sustainability. This method breaks AI sustainability into five key areas: equipment efficiency, energy efficiency, resource efficiency, software efficiency, and data efficiency. The aim is to get more from less: maximising system performance to deliver greater output with fewer resources.
While all five areas matter, data efficiency often serves as the ideal starting point. Given the data-intensive nature of AI, optimising input datasets can significantly reduce environmental impact. Around 75% of African CEOs admit they lack confidence in their data readiness for generative AI, highlighting the importance of focusing on data efficiency early in the process.
First Steps to Data Efficiency
Map out your data strategy upfront
Know what data you need, where it originates, how frequently you’ll collect it, and how you’ll extract insights. Plan for how data moves between systems and where and how long you’ll store it. Consider consolidating or disposing of data and using low-impact storage like tape backups. Data not needed for immediate access can often be offloaded to low-energy storage media.
Clean up before you start
For traditional workloads, data efficiency meant storing only what was valuable. With AI, it’s critical to size and clean datasets properly before model training. Using off-the-shelf datasets without minimisation leads to unnecessary work, increasing energy use and reducing overall AI efficiency.
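To make dataset minimisation concrete, here is a minimal Python sketch. The record layout (a "text" field) and the length threshold are illustrative assumptions, not a prescribed HPE method; real pipelines would also handle near-duplicates and domain-specific quality rules.

```python
# Minimal sketch: deduplicate and drop near-empty records before training.
# The "text" field and the 20-character threshold are illustrative assumptions.

def minimise_dataset(records):
    """Return a smaller, cleaner dataset: no exact duplicates, no fragments."""
    seen = set()
    cleaned = []
    for record in records:
        text = record.get("text", "").strip()
        if len(text) < 20:          # drop fragments too short to train on
            continue
        key = text.lower()          # exact-match dedup, case-insensitive
        if key in seen:
            continue
        seen.add(key)
        cleaned.append(record)
    return cleaned

raw = [
    {"text": "Solar microgrids cut diesel use at rural clinics."},
    {"text": "solar microgrids cut diesel use at rural clinics."},  # duplicate
    {"text": "tbd"},                                                # fragment
]
print(len(minimise_dataset(raw)))  # → 1
```

Every record removed here is a record the training run never has to process, which is exactly where the energy saving comes from.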
Get the training dataset right
Optimising datasets before training supports AI sustainability. You can then tune the model with customer-specific data. Starting with concise, relevant data ensures efficiency throughout the process.
Process data only once
Train and tune models using processed data just once. Apply further training only to newly collected data to avoid redundancy.
Avoid data debt
AI systems demand massive datasets, including unstructured data. Managing data properly by removing outdated, inaccurate, or duplicated content reduces strain on storage and improves model performance. Just like technical debt, data debt can hinder system effectiveness.
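A simple way to keep data debt in check is a retention rule. The sketch below drops records older than a fixed window; the record shape and the 180-day cut-off are assumptions for illustration, not HPE guidance.

```python
# Illustrative sketch: trim "data debt" by dropping records past a
# retention window, before they burden storage and model training.
from datetime import datetime, timedelta

def drop_stale(records, now, max_age_days=180):
    """Keep only records updated within the last max_age_days."""
    cutoff = now - timedelta(days=max_age_days)
    return [r for r in records if r["updated"] >= cutoff]

now = datetime(2024, 6, 1)
records = [
    {"id": 1, "updated": datetime(2024, 5, 20)},  # fresh: kept
    {"id": 2, "updated": datetime(2022, 1, 5)},   # stale: dropped
]
print([r["id"] for r in drop_stale(records, now)])  # → [1]
```

In practice the same idea extends to flagging inaccurate or duplicated content, so the dataset a model sees stays lean and current.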
Location matters
Process data as close as possible to its point of origin. This approach cuts energy costs associated with data transfer and improves the timeliness of insights.
