Navigating AI Security: Strategies for Harnessing AI Power While Mitigating Risks
Published At: March 8, 2025, 9:02 a.m.

Navigating the AI Security Landscape

In an era where artificial intelligence (AI) intertwines with almost every facet of business, leading experts from Gartner emphasize the need for a balanced approach that harnesses AI's transformative power while safeguarding against its inherent risks. At the recent Security and Risk Management Summit in Sydney, senior Gartner analysts underlined how the rapid evolution of AI must be met with robust security measures that guide organizations through managing shadow AI and integrating best practice security controls.

Balancing Protection and Risk

Gartner's top voices, such as Richard Addiscott and Christine Lee, set the stage by highlighting the ongoing investment in cybersecurity, noting that while 87% of tech executives plan to increase security funding, 84% are poised to invest in AI initiatives. Their message was clear: there is no perfect protection. Instead, organizations must weigh cost against risk. Notably:

  • Protection-Level Agreements (PLAs): Similar in concept to service-level agreements, PLAs encourage a balanced approach that focuses on meeting business needs rather than defaulting to overly technical solutions.
  • Outcome-Driven Metrics (ODMs): These metrics foster alignment across all stakeholders by focusing on what truly matters for the organization’s overall security outcome.

Embracing AI Responsibly

The summit tackled the dual nature of AI by urging organizations to cultivate AI literacy and drive controlled experiments to reap its benefits responsibly. Addiscott advised cultivating internal AI champions who can advocate for responsible strategies, while Lee showcased practical applications such as a company-developed internal chatbot that supports cybersecurity coaching, freeing up engineers to concentrate on more advanced tasks.

Successful AI implementation isn’t solely about enhancing productivity—it’s about asking the right question: "What good can AI do for my use case?" This question forms the backbone of developing AI applications that are both innovative and secure.

Addressing the Risks of Shadow AI

A striking statistic from the summit revealed that 98% of surveyed organizations either have adopted or plan to adopt generative AI. This surge has given rise to the phenomenon of shadow AI—unauthorized use of AI applications that might introduce vulnerabilities such as data leakage and brand reputation risks. Gartner experts recommend:

  • Utilizing discovery tools to detect unauthorized AI usage.
  • Implementing security measures, like endpoint security and role-based access controls, to monitor and manage these risks.
  • Empowering teams to understand why unsanctioned tools are being used and improving internal approval processes accordingly.
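The discovery step above can be sketched in a few lines. The following is a minimal, illustrative example, assuming a web-proxy log exported as `(user, domain)` pairs and a hypothetical watchlist of generative-AI domains; a real deployment would rely on a vendor discovery tool and a vetted domain inventory.

```python
# Sketch: flag potential shadow-AI usage in web proxy logs.
# The domain watchlist and log format are illustrative assumptions,
# not a complete inventory of generative-AI services.
from collections import Counter

GENAI_DOMAINS = {  # hypothetical watchlist
    "chat.openai.com",
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
}

def find_shadow_ai(log_rows):
    """Count requests per (user, domain) to watch-listed AI services.

    Each row is assumed to be a (user, domain) pair from a proxy log.
    """
    hits = Counter()
    for user, domain in log_rows:
        if domain.lower() in GENAI_DOMAINS:
            hits[(user, domain.lower())] += 1
    return hits

rows = [
    ("alice", "chat.openai.com"),
    ("bob", "intranet.example.com"),
    ("alice", "chat.openai.com"),
]
print(find_shadow_ai(rows))
```

Output like this gives security teams a starting point for the conversation Gartner recommends: asking *why* a team reached for an unsanctioned tool, rather than simply blocking it.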

Pete Shoard pointed out that while generative AI can boost efficiency, it also poses risks if left unchecked. The solution is a comprehensive policy that not only authorizes suitable tools for specific departments (e.g., content creation) but also enforces strict monitoring mechanisms.

Evolving Challenges: Deepfakes and AI Attacks

Deepfakes and other sophisticated attacks are rapidly emerging as critical threats. Analyst Leigh McMullen stressed that the most effective defense against these challenges is human oversight—implementing secure, verified communication channels to detect potential deepfakes. In addition, AI systems themselves may be directly targeted using techniques such as "mal-information" insertion to manipulate outputs. For instance, subtle alterations in imagery or video subtitles can drastically skew an AI’s perception, leading to erroneous outcomes that affect business-critical processes.

Organizations are also urged to incorporate early-stage collaboration between AI developers and security teams to ensure compliance with privacy and security protocols. Deploying an AI Trust, Risk, and Security Management (AI TRiSM) program can mitigate issues related to bias, fairness, and overall application integrity.

Harnessing AI Within Cybersecurity

The summit highlighted several quick wins through AI integration in cybersecurity operations, such as:

  • Security Testing: AI-driven tools can enhance the testing of in-house code and runtime systems.
  • Incident Response: AI aids in summarizing alerts, risk assessment, and formulating incident response playbooks.
  • Data Governance: AI can streamline data masking and strengthen DevSecOps practices.
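To make the data-masking point concrete, here is a minimal sketch of deterministic field masking before records are shared with an AI tool. The field names and salt are illustrative assumptions, not a recommended scheme; production masking would use a managed secret and a formal tokenization or anonymization policy.

```python
# Sketch: deterministically mask sensitive fields before a record
# is passed to an AI tool. Field names and the salt are illustrative.
import hashlib

SALT = b"example-salt"  # in practice, a managed secret, not a literal
SENSITIVE_FIELDS = {"email", "employee_id"}

def mask_record(record):
    """Replace sensitive values with salted, truncated SHA-256 hashes."""
    masked = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            digest = hashlib.sha256(SALT + str(value).encode()).hexdigest()
            masked[key] = digest[:12]  # stable token, original not recoverable here
        else:
            masked[key] = value
    return masked

print(mask_record({"email": "jane@example.com", "role": "engineer"}))
```

Because the masking is deterministic, the same input always yields the same token, so joins and deduplication still work on the masked data.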

Tom Scholtz and his peers advocated for metrics that not only gauge operational success but also inform executive decision-making. This means looking beyond input-heavy, tactical metrics to ones that align with the broader strategic goals of the organization.

Future-Proofing Cybersecurity Measures

John Watts wrapped up the summit by addressing non-technology threats, identifying user error as a significant vulnerability. He warned that overly restrictive controls could lead employees to bypass them. Instead, security policies should serve as enabling tools that encourage safe practices.

Beyond immediate challenges, the potential impact of quantum computing on current cryptographic standards looms large. Organizations are advised to take stock of their cryptographic dependencies and pivot towards quantum-safe algorithms, with a long-term goal of complete transition by 2030.
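Taking stock of cryptographic dependencies can begin with something as simple as scanning a codebase for quantum-vulnerable algorithm names. The sketch below is a hypothetical first pass, assuming a Python codebase and a small illustrative pattern list; a real migration effort would use a dedicated cryptographic bill-of-materials tool and cover configuration, certificates, and third-party libraries too.

```python
# Sketch: first-pass inventory of quantum-vulnerable crypto references
# in a source tree. The algorithm list is an illustrative starting
# point, not a complete catalogue.
import re
from pathlib import Path

QUANTUM_VULNERABLE = re.compile(r"\b(RSA|ECDSA|ECDH|DSA)\b", re.IGNORECASE)

def inventory(root):
    """Map each Python source file to the vulnerable algorithms it mentions."""
    findings = {}
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        algos = sorted({m.upper() for m in QUANTUM_VULNERABLE.findall(text)})
        if algos:
            findings[str(path)] = algos
    return findings
```

The resulting map gives a rough priority list of where quantum-safe replacements would be needed ahead of the 2030 horizon mentioned above.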

In conclusion, the careful orchestration of AI initiatives with integrated security practices not only mitigates risks but also paves the way for innovative and secure operational strategies. Gartner’s insights offer a roadmap for organizations to both embrace and control the transformative power of AI while maintaining robust security frameworks.

Original Source: Managing security in the AI age (Author: Stephen Withers)
Note: This publication was rewritten using AI. The content was based on the original source linked above.