Kaspersky has reaffirmed its commitment to responsible technology development by signing the AI Pact, an initiative of the European Commission aimed at preparing organizations for compliance with the EU’s AI Act. This landmark legal framework, introduced in 2024 and set to take full effect in 2026, represents the world’s first comprehensive legislation governing artificial intelligence. By endorsing the Pact, Kaspersky underscores its dedication to fostering safe and ethical AI deployment while addressing the associated risks within the cybersecurity domain.
Building Trustworthy AI Through the EU AI Act
The EU AI Act is designed to ensure that AI technologies adhere to rigorous safety and ethical standards while mitigating potential risks. The AI Pact facilitates this transition by encouraging organizations to proactively align their practices with key provisions of the AI Act. Through this initiative, the European Union aims to set a global benchmark for trustworthy AI governance.
Kaspersky’s Commitments Under the AI Pact
By signing the AI Pact, Kaspersky has pledged to advance three core commitments to support responsible AI usage:
- AI Governance Strategy: Establishing a robust AI governance framework to drive AI adoption within the company and ensure alignment with future compliance requirements under the AI Act.
- High-Risk System Mapping: Identifying AI systems that could fall under the Act’s high-risk category, ensuring they are effectively managed and regulated.
- AI Literacy Promotion: Enhancing awareness and education on AI among employees and other stakeholders, tailored to their roles, technical expertise, and the contexts in which AI systems are deployed.
Beyond Compliance: Ethical and Transparent AI Practices
In addition to these commitments, Kaspersky has pledged to assess foreseeable risks posed by AI systems, provide transparency in AI interactions, and inform employees about AI deployment within the workplace. These actions align with the company’s broader mission to promote ethical AI practices and build public confidence in the technology.
“As AI technologies continue to evolve at a rapid pace, it’s essential to balance innovation with risk management,” said Eugene Kaspersky, founder and CEO of Kaspersky. “We are proud to join the global movement towards responsible AI adoption. By sharing our expertise and advancing ethical practices, we aim to help organizations securely benefit from AI while maintaining transparency and trust.”
For nearly two decades, Kaspersky has leveraged AI-powered automation to bolster cybersecurity, enhancing threat detection and protecting user privacy. With the increasing prevalence of AI systems, the company remains dedicated to sharing its knowledge through initiatives like its AI Technology Research Centre.
Empowering Practitioners with AI Security Guidelines
To further support organizations in securely deploying AI, Kaspersky has developed the “Guidelines for Secure Development and Deployment of AI Systems,” unveiled at the 2024 UN Internet Governance Forum. These guidelines offer actionable recommendations to mitigate risks associated with AI systems.
Additionally, Kaspersky has established ethical principles for AI use in cybersecurity and invites other industry players to adopt these standards, reinforcing a collective commitment to responsible AI practices.