Speed versus safety defines the new debate over AI in cybersecurity

Published 3 Sep 2025

Companies are rushing to adopt artificial intelligence (AI) for cybersecurity, but a growing divide is emerging between sectors willing to take risks and those that cannot afford mistakes.

A new Arctic Wolf study of nearly 2,000 security decision-makers reveals a fundamental tension in how organizations approach AI adoption. While 73% have integrated AI into their security programs, the technology creates as many questions as it answers.

Financial services and technology companies are the fastest adopters, with adoption rates above 80%. These sectors can recover from AI errors relatively quickly. Utilities, government agencies, and transportation companies move far more cautiously, with adoption closer to 60%.

“Disruptions can result in devastating impacts and a potential risk to human lives,” Arctic Wolf researchers noted about safety-critical sectors. A wrong AI decision at a power plant or airport carries consequences that financial firms rarely face.

This caution reflects a deeper problem with current AI tools. Matt Gorham from PwC explains the real change: “It’s not the what of CyberOps that AI is changing, but the how. It’s changing the speed at which we can do certain operations.”

Speed creates both opportunities and risks. AI can process thousands of security alerts in minutes, but human analysts must still validate the results: 67% of respondents say AI needs substantial human oversight to work properly.
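
A minimal sketch of what that oversight can look like in practice, assuming a hypothetical pipeline in which a model scores each alert and only the confident extremes bypass an analyst (the thresholds, fields, and scores here are illustrative, not drawn from the Arctic Wolf study):

```python
# Illustrative human-in-the-loop triage: the model's score decides routing,
# but anything the model is unsure about lands in front of an analyst.
# Thresholds, field names, and scores are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    source: str
    risk_score: float  # from an upstream ML model, 0.0 (benign) to 1.0 (malicious)

AUTO_CLOSE_BELOW = 0.10     # confidently benign: close automatically
AUTO_ESCALATE_ABOVE = 0.95  # confidently malicious: page the on-call analyst

def triage(alert: Alert) -> str:
    """Route an alert by model confidence; the wide middle band always
    gets human review rather than an automated verdict."""
    if alert.risk_score < AUTO_CLOSE_BELOW:
        return "auto_close"
    if alert.risk_score > AUTO_ESCALATE_ABOVE:
        return "escalate"
    return "human_review"

for alert in [Alert("a-001", "edr", 0.03),
              Alert("a-002", "ids", 0.55),
              Alert("a-003", "siem", 0.99)]:
    print(alert.alert_id, triage(alert))
```

The design choice worth noting is the wide middle band: the machine clears only the obvious cases at speed, and ambiguity defaults to a person.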

The technology reshapes security teams in unexpected ways. Traditional entry-level positions disappear as AI handles basic tasks like alert triage. Security professionals must now learn to manage AI systems rather than just security threats.

“We need to consider how many experts and which kinds of experts we need,” says Jeffrey Brown, a cybersecurity advisor at Microsoft and former state CISO. Organizations are shifting workers from handling alerts to validating AI decisions and managing automated systems.

This creates skills gaps that many companies struggle to fill. Only 22% have clear policies for using AI tools safely, according to Accenture research. Most security leaders admit they lack the expertise to build AI systems themselves and instead depend on vendors to integrate the technology.

Privacy concerns complicate adoption plans. About 33% of organizations worry about sensitive data exposure when using AI tools, especially cloud-based systems. Cost issues follow closely, with 30% struggling to justify AI investments.
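
One common mitigation, sketched below under assumed field patterns (none of this comes from the survey), is to redact obviously sensitive values from alert text before it ever reaches a cloud-hosted AI service:

```python
# Illustrative sketch: scrub likely-sensitive values before sending alert
# context to a third-party AI service. The patterns here are assumptions
# for demonstration; real deployments need a vetted DLP/redaction policy.
import re

REDACTIONS = [
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "[IP]"),         # IPv4 addresses
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),           # email addresses
    (re.compile(r"(?i)\b(password|secret|token)\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
]

def scrub(text: str) -> str:
    """Apply each redaction pattern in turn and return the sanitized text."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

raw = "Login failure for admin@corp.example from 10.2.3.4, password: hunter2"
print(scrub(raw))
# -> Login failure for [EMAIL] from [IP], password=[REDACTED]
```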

The stakes keep rising as cybercriminals adopt AI for attacks. Wolfgang Goerlich of IANS Research predicts: “The future of security operations is going to be AI versus AI. It’s going to be machine on machine, with people in the cockpit.”

This arms race pressures security teams to adopt AI quickly, even without proper governance frameworks: 99% of respondents say they will weigh AI capabilities when making security purchases this year.

The human element remains critical despite automation advances. Dan Schiappa from Arctic Wolf emphasizes that “artificial intelligence is rapidly becoming a cornerstone of modern cybersecurity, but it benefits from human expertise to be truly effective.”

Organizations face a balancing act between AI speed and human judgment. Those that move too fast risk data breaches and system failures; those that move too slowly may fall behind in defending against AI-powered attacks.

Success depends on how well companies train workers to collaborate with AI systems while maintaining the critical thinking skills that machines cannot replicate.