Although over 60% of companies report using generative AI, only around 20% of them have established policies governing employees' use of AI, and most mitigate only the most obvious risks, such as inaccuracy and intellectual-property infringement, according to the 2023 McKinsey Global Survey on the State of AI.
The often-neglected areas of security with regard to AI use include: new infrastructure deployments, ML API exploitation, supply-chain attacks, and the deployment of adversarial data by threat actors to circumvent protections.
As more AI platforms are utilised within businesses, more attention should be given to developing new ethical and security guidelines and safe-usage practices to safeguard data, employees, and systems.
Generative AI can work with text, video, audio, and images, and its potential uses are virtually unlimited, which makes controlling the security around AI usage all the more critical. Beyond the well-known threats of deepfakes and biometric cloning, there are countless new AI attack opportunities, requiring staff and management in IT and cybersecurity to pay far closer attention to these new challenges.
Here are the top five areas of security requiring attention from industry security leaders when incorporating generative AI into corporate structures.
- Embracing the Potential of AI: As much as AI use presents new security threats to organisations, generative AI also offers security leaders considerable potential for improving their security operations, and that opportunity should be embraced. One of the challenges facing many corporate ICT security teams is finding enough skilled staff to cover all the requirements of a sound security set-up. This is where generative AI can help, summarising and providing insights into the petabytes of data that SOC teams manually, and painfully, query every day. Generative AI can also be leveraged to enhance and speed up investigations, triage, and threat-response times by using a Gen AI interface for analysis guidance, workflow streamlining, and automated threat documentation.
- Understanding the Models: Understanding the basics of foundation models, custom models, LLMs, and LLM applications is mission critical. Security leaders need to know the difference between open-source and closed-source LLMs in order to develop specific security measures for these systems. As other departments in the organisation adopt AI models, security leaders should proactively look for ways to use these LLM applications without compromising the confidentiality, integrity, and availability of the organisation and its resources. Different types of LLM can also create different levels of operational risk for the business. Fully leveraging the power of generative AI will likely require specialised MLOps and LLMOps approaches, such as live-model operations, monitoring systems, and real-time alerting, which few organisations have fully implemented. An in-depth awareness of these operational challenges, demanding as it is, helps security leaders facilitate risk conversations and informs security-protocol decision-making in the business.
- Speed of Change in AI Adoption: As generative AI technology evolves rapidly and business adoption increases, security leaders need to adapt and learn the full spectrum of the technology. A starting point is staying abreast of changes in the market, including where and how AI is being used offensively by threat actors. Implementing AI-powered security defences, built on the organisation's own machine learning and behavioural models that aim to predict and prevent these innovative attack techniques, can help counter them.
- Policies, Processes, and Procedures: The implementation of carefully crafted policies, processes, and procedures will contribute to a solid, ethical, efficient, and effective integration of AI throughout the entire organisation. While generative AI as a technology offers many benefits, security leaders should focus on evaluating existing processes and assessing employees' current roles and responsibilities when deciding how to derive maximum benefit.
- Internal Communications: Awareness of the risks of AI usage must be communicated across the stakeholder spectrum, and security leaders should take an active role in the organisation's risk-management practices. The organisation's Governance, Risk, and Compliance (GRC) processes must be revised to reflect any new vulnerabilities that generative AI applications may introduce. Ongoing communication should keep awareness of the changing technology landscape high and update the relevant stakeholders accordingly.
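To illustrate the SOC workflow streamlining described under Embracing the Potential of AI above, the sketch below batches alerts into a single summarisation prompt for a Gen AI interface. The `build_triage_prompt` and `call_llm` names, the prompt format, and the stubbed model call are all hypothetical placeholders, not any specific vendor's API.

```python
# Minimal sketch: turning a queue of SOC alerts into one summarisation
# prompt for a Gen AI interface. `call_llm` is a hypothetical stub --
# a real deployment would substitute its organisation-approved endpoint.

def build_triage_prompt(alerts):
    """Collapse raw alerts into a single prompt asking for a ranked summary."""
    lines = [
        f"- [{a['severity']}] {a['source']}: {a['message']}"
        for a in alerts
    ]
    return (
        "Summarise the following security alerts, group related events, "
        "and rank them by likely impact:\n" + "\n".join(lines)
    )

def call_llm(prompt):
    # Placeholder only: stands in for the model call itself.
    return f"[LLM summary of {prompt.count(chr(10))} alert lines]"

alerts = [
    {"severity": "high", "source": "EDR",
     "message": "Credential dumping detected on host FIN-07"},
    {"severity": "low", "source": "IDS",
     "message": "Port scan from 203.0.113.9"},
]
print(call_llm(build_triage_prompt(alerts)))
```

The point is not the stub but the shape of the workflow: alerts flow in, a single structured request goes to the model, and the analyst reviews a ranked summary instead of raw petabytes.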
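One concrete LLMOps control of the kind mentioned under Understanding the Models is real-time monitoring of model output before it reaches users. The sketch below screens LLM responses for strings resembling credentials or card numbers; the two regexes and the `screen_response` name are illustrative assumptions, not a production DLP rule set.

```python
import re

# Illustrative patterns only -- a production deployment would use a
# vetted DLP rule set, not these two regexes.
LEAK_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_response(text):
    """Return the labels of any leak patterns found in an LLM response."""
    return [name for name, pat in LEAK_PATTERNS.items() if pat.search(text)]

print(screen_response("Your key is sk-ABCDEF0123456789XYZ"))  # flags api_key
print(screen_response("The weather is mild today"))           # nothing flagged
```

A gate like this sits between the model and the user, so a flagged response can be blocked or logged for review before it compromises confidentiality.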
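A behavioural model of the kind mentioned under Speed of Change in AI Adoption can start very simply: baseline an activity metric per account and flag large deviations. The sketch below applies a z-score to historical daily login counts; the 3.0 threshold and the metric itself are illustrative assumptions, not a recommended detection policy.

```python
from statistics import mean, stdev

def is_anomalous(history, today, threshold=3.0):
    """Flag today's count if it deviates more than `threshold`
    standard deviations from the account's historical baseline."""
    if len(history) < 2:
        return False  # not enough data to build a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > threshold

logins = [4, 5, 6, 5, 4, 5, 6, 5]  # typical daily logins for one account
print(is_anomalous(logins, 5))    # baseline behaviour
print(is_anomalous(logins, 40))   # sudden spike worth investigating
```

Real defences would use richer features and learned models, but even a baseline this simple demonstrates the predict-and-prevent idea behind behavioural detection.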