AI and cybersecurity – CISO warns of the ‘blight’ of losing skills to vibe coding

Where does your code come from? An AI? So it must be perfect, right? Wrong. A new report exposes the risks, and the vendor’s CISO warns of longer-term problems.
Published: November 3, 2025 at 07:11 AM
Cybersecurity continues to present a complex landscape, where every new cloud innovation seems to open doors for fresh bugs and vulnerabilities. The old adage that an organization is only as secure as its weakest link, often human, remains painfully true. New tech challenges how we behave daily and tests who or what we can really trust. But now, the risk is creeping deeper—built into the core infrastructure and even into the apps companies push out. The scary part? Sometimes even the developers themselves can’t pinpoint these weaknesses, let alone fix them.
This all hits hardest for Chief Information Security Officers (CISOs) and their teams. The culprit? Artificial Intelligence, especially the surge of AI-generated code and the rise of vibe coding, the practice of prompting an AI to produce code rapidly and shipping it with minimal human review. A recent report by Aikido, a cloud and code security specialist, sheds light on this growing threat. Their "State of AI in Security and Development" report paints a vivid picture of the tug-of-war between speed and safety as AI adoption explodes. It’s clear teams are rushing products to market, often with a “release now, patch later” mindset, which only widens the attack surface.
The study, based on interviews with 450 professionals across Europe and the US—including developers, security leaders, and application security engineers—found that 69% of organizations have uncovered vulnerabilities linked to AI-generated code. Even more alarming, 20% reported serious security incidents traced back to this cause. Given that incidents are already common—with 27% of organizations hit hard in the past year—the question looms: How many breaches remain undetected?
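To make that finding concrete, here is a hypothetical example (not drawn from the report) of one of the commonest classes of flaw flagged in AI-generated code: user input interpolated straight into a SQL query. The fix is a one-line change to a parameterized query.

```python
import sqlite3

def get_user_unsafe(conn: sqlite3.Connection, username: str):
    # A pattern AI assistants still sometimes suggest: user input formatted
    # into the SQL string. A username like "x' OR '1'='1" becomes injection.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def get_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver handles escaping, closing the hole.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```

Flaws like this are trivial to spot for a reviewer who actually looks, which is exactly why a “release now, patch later” pace is dangerous.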
The stakes couldn’t be higher. Recent headlines reveal how public-facing services and major brands have been compromised, sometimes due to fragile systems rather than direct attacks—a reality familiar to giants like Amazon and Microsoft. Automation speeds things up but often at the cost of introducing flaws that are tricky to find. And that leads us into murky waters around responsibility. Who’s to blame when AI-written code causes harm? Is it the coder who used the tool, the AI vendor who built a flawed system, or the security team that missed the exploit?
Legally, responsibility falls on senior leaders, which the report highlights by showing that 75% of CISOs have had to handle serious incidents recently—far more than the number of organizations admitting to major breaches. Despite that, many respondents are unsure who’s truly at fault. Over half blame the security team for missing exploits; nearly half blame developers for generating the risky code; fewer point fingers at the AI vendors. This tangled blame game underlines the governance challenges vibe coding introduces, creating a hall of mirrors where accountability is hard to pin down.
Trust is a big part of the problem. Are we really ready to trust AI tools for coding at this scale, especially when we know Large Language Models and chatbots can hallucinate, spread misinformation, and even breach copyright? Vibe coding is no different—it inherits these risks. Moreover, the proliferation of security tools meant to fight these issues ironically causes more headaches. The report notes that teams using many separate vendor tools face more incidents and longer fixes due to integration troubles, like duplicate alerts and inconsistent data. Integrated application and cloud security approaches, meanwhile, show lower incident rates.
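The duplicate-alert problem is easy to picture. A minimal sketch, assuming a made-up Finding record rather than any specific vendor’s schema: three scanners reporting the same CVE on the same service should collapse into a single work item, which is roughly the normalization an integrated platform performs.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    source: str    # which scanner reported it
    cve: str       # vulnerability identifier, e.g. "CVE-2025-1234"
    asset: str     # affected service or repository
    severity: str  # as rated by that scanner

def deduplicate(findings: list[Finding]) -> dict[tuple[str, str], list[Finding]]:
    """Group reports of the same (cve, asset) pair across tools, so one
    vulnerability surfaces as one work item instead of several tickets
    with inconsistent severities."""
    merged: dict[tuple[str, str], list[Finding]] = {}
    for finding in findings:
        merged.setdefault((finding.cve, finding.asset), []).append(finding)
    return merged
```

With many separate, unintegrated tools, each team sees a different subset of these groups, which is the “duplicate alerts and inconsistent data” failure mode the report describes.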
From a human perspective, it’s clear security engineers remain crucial. A quarter of CISOs warn that losing even one top security pro could trigger serious breaches, delay incident response, and slow product development. The human factor still matters deeply, despite the AI hype.
In an exclusive chat, Aikido’s CISO Mike Wilkes described the situation as “democratizing the ability to ship shitty code quickly.” He pointed out that automation and infrastructure as code didn’t improve code quality but just sped up releasing flawed software. Now, with vibe coding and low-code/no-code tools, anyone can churn out risky code at scale, much like how AI has democratized making mediocre art or music. This "democratization of mediocrity" poses real, tangible risks for the future.
Key Insights
Key facts: 69% of organizations have discovered vulnerabilities in AI-generated code, and 20% have experienced serious security incidents linked to it, based on data from 450 professionals across Europe and the US.
The stakeholders directly involved are developers, security teams, CISOs, and AI vendors; indirectly, organizations relying on these systems and their end users bear the fallout of potential breaches.
Immediate impacts include increased security incidents and fractured accountability, reminiscent of past software security crises where rapid development outpaced safeguarding efforts.
Historically, parallels can be drawn to early cloud adoption phases, where tool sprawl and integration issues caused similar challenges.
Looking ahead, optimistic scenarios involve tighter integration of security tools and improved AI governance, whereas risks include escalating vulnerabilities and erosion of critical human expertise, demanding preemptive human-centric security strategies.
From a regulatory viewpoint, recommendations include standardizing AI code auditing processes (high priority, moderate complexity), enforcing vendor transparency on AI training data (medium priority, complex implementation), and investing in upskilling security teams to handle AI-induced vulnerabilities (high priority, feasible implementation); a minimal sketch of what such an audit gate could look like follows.
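As a sketch of that first recommendation, and with the loud caveat that the deny-list patterns and policy below are illustrative assumptions rather than any published standard, an automated pre-merge audit of AI-generated changes could start as a simple CI check:

```python
import re
import sys

# Hypothetical deny-list a pre-merge audit might enforce; a real policy
# would come from an agreed organizational standard, not this sketch.
RISKY_PATTERNS = [
    (re.compile(r"\beval\("), "use of eval()"),
    (re.compile(r"subprocess\..*shell=True"), "shell=True in a subprocess call"),
    (re.compile(r"verify=False"), "TLS certificate verification disabled"),
]

def audit(path: str) -> list[str]:
    """Return a list of policy violations found in one source file."""
    violations = []
    with open(path, encoding="utf-8") as fh:
        for lineno, line in enumerate(fh, start=1):
            for pattern, reason in RISKY_PATTERNS:
                if pattern.search(line):
                    violations.append(f"{path}:{lineno}: {reason}")
    return violations

if __name__ == "__main__":
    problems = [v for path in sys.argv[1:] for v in audit(path)]
    for problem in problems:
        print(problem)
    sys.exit(1 if problems else 0)  # a non-zero exit blocks the merge
```

Such a gate would sit alongside human review and proper static analysis, not replace them; its value lies in making the audit step standard and automatic rather than optional.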