
Behind the Hype: AI’s Real Impact on Web Application Security 

As organizations begin integrating AI-generated code and AI-driven features into their applications, the shape of the attack surface is starting to shift in subtle but important ways—often before teams fully understand the implications.

To explore how this transition is unfolding in practice, we spoke with Patrick Double, Security Engineer at Bureau Veritas Cybersecurity. Patrick has spent his entire career working with web applications, long before “cybersecurity” existed as its own discipline. His early interest in cryptography and secure coding gradually led him into the security field, where he now focuses on assessing applications and helping organizations navigate the risks that accompany rapid changes in development patterns.

Patrick has watched many new technologies enter the web development world with promises of speed and productivity. Over time, each has revealed vulnerabilities that were not immediately obvious. AI, he notes, is following a similar trajectory—transformative in many ways, but not exempt from the lessons that history has already taught the industry.

What leads someone into web application security as a discipline?  

Patrick’s path into cybersecurity began long before the industry took shape. He studied computer engineering, working across both hardware and software, but it was a university cryptography class that sparked his interest in how systems validate and protect information. “It was very math heavy and fascinating to me,” he recalls. “I couldn’t get enough of it.”  

Early in his career, he took a secure coding course that exposed him to the creativity behind exploiting and defending web applications. That experience changed his direction. Web applications were already part of his work, so moving into their security felt natural. As the field matured, he found himself increasingly drawn to the challenge of identifying vulnerabilities and understanding how even small design decisions can have significant consequences.

How does experience in web application security shape one’s view of AI?  

From working with multiple waves of development tools, Patrick sees recurring patterns in how teams embrace new technologies. Developers encounter complexity or inefficiency, create new abstractions or frameworks, and quickly embrace them. Security considerations often come afterwards, once the shortcomings of the new system become clear.

“People see a new cool thing and run with it,” he explains. “But when software frameworks come out, security usually isn’t thought of until after the problems show up.”  

He approaches AI with that same cautious mindset. From his perspective, it represents another powerful tool that must be understood deeply before being trusted. He avoids adopting brand-new technologies immediately, preferring to wait until early issues have surfaced. That instinct shapes how he evaluates AI today—not as inherently dangerous or inherently safe, but as something that must prove itself through scrutiny. 

Where does AI provide real value, and where do its limitations appear?

Patrick believes AI provides meaningful help in generating the routine building blocks of applications. Most products require the same fundamental components: databases, identity management, API wrappers. Letting AI produce these pieces can accelerate development significantly. It frees developers to spend more time on the features that make their application unique.

Where expectations become unrealistic is in asking AI to replace developers entirely or trusting it to produce complex, novel logic. AI is trained on existing patterns, not on the innovations that differentiate one product from another. “AI can only write what’s already been done,” he explains. “It can’t think independently. It doesn’t understand nuance or context unless you provide it.”  

He warns that assuming AI can build entire applications independently introduces significant risk. The most meaningful parts of software—the business logic, the edge cases, the design decisions—still require human understanding.

How is AI changing the landscape of security testing for web applications?

Patrick sees AI as a promising ally in security testing, particularly because modern web applications are so large. A single site may contain thousands of pages and dozens of different features. Human testers cannot feasibly explore every path with equal depth, but AI can rapidly analyze and categorize large application surfaces.

He has already used AI to assist in early analysis. “AI doesn’t get bored,” he notes. “It can look at an entire website and tell me which sections look risky and even find some vulnerabilities.”  

This allows human testers to concentrate on the nuanced vulnerabilities AI cannot detect: subtle authorization flaws, logic errors and creative exploitation techniques. Patrick does not believe AI will replace penetration testers, but he expects it to become an increasingly important part of their toolkit. 

How are threat actors using AI, and what does this mean for defenders?

One of Patrick’s most striking observations is that attackers are adopting AI faster than many defenders. Some are creating their own language models specifically for malicious purposes. These models are trained on attacker datasets, lack ethical guardrails and are designed to probe websites without restrictions.

“Attackers are producing their own LLMs,” Patrick explains. “They strip off the guardrails and train them with attacker data.”  

These tools enable even low-skill attackers to operate at scale. AI can evaluate thousands of targets at once and identify the easiest opportunities. This fundamentally changes the speed at which vulnerabilities are found. For defenders, it means that secure design, monitoring and timely remediation are more important than ever.

Are AI-related vulnerabilities beginning to appear in real-world applications?

Patrick says it is not always obvious whether a vulnerability was created by AI or by a developer, because insecure logic often looks similar regardless of the source. However, he has noticed a pattern: applications that integrate AI into their features or workflows tend to have more weaknesses in the AI-driven portions of the codebase than in the traditionally written ones.  

“It could be hard to tell if something came out of the AI or not,” he says, “but I do see a higher chance of finding vulnerabilities when a product is leveraging AI for part of its functionality.”  

He explains that when an application includes an endpoint or feature that calls out to an AI service, that area often contains less mature logic and less validation than the rest of the system. Developers may assume the AI-produced output is safe, or they may design the integration without fully understanding the security implications of passing user-controlled data to an AI model. These areas become hotspots for testing.  
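The discipline Patrick describes can be sketched in code: treat user-controlled input to an AI endpoint like any other untrusted input, and treat the model's output as untrusted as well. The function names, length limit, and prompt below are illustrative assumptions, not part of any specific product.

```python
import html

# Illustrative limit; real limits depend on the model and the use case.
MAX_PROMPT_LEN = 2000

def build_prompt(user_input: str) -> str:
    """Validate user-controlled text before it reaches the model."""
    if not user_input or len(user_input) > MAX_PROMPT_LEN:
        raise ValueError("input missing or too long")
    # Drop non-printable characters that could smuggle hidden
    # instructions or corrupt downstream logs.
    cleaned = "".join(ch for ch in user_input if ch.isprintable())
    return f"Summarize the following user comment:\n{cleaned}"

def render_model_output(output: str) -> str:
    """Treat model output as untrusted: escape it before embedding in HTML."""
    return html.escape(output)
```

The key design point is symmetry: validation on the way in, encoding on the way out, exactly as one would handle any other external data source.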

“I expect to find more vulnerabilities in that part that’s calling out to AI,” he says, “than in the code that’s been written the traditional way.”  

For Patrick, this trend reinforces the need for careful design and review when integrating AI features. Even if AI is only one component of the workflow, it can introduce disproportionate risk if not handled with the same rigor as the rest of the application. This challenge is distinct from dependency-related risks, which arise when AI selects external libraries; in those cases, the concern is not the logic AI generates, but the sources it chooses to rely on.


Do AI-generated code suggestions introduce new dependency risks?

Web applications already depend heavily on third-party libraries, and AI-generated code follows that pattern. However, AI may select libraries based on prevalence in its training data rather than on security, maintenance history or reputation. This reduces the deliberate evaluation developers traditionally perform when selecting dependencies.

Patrick has also observed that attackers can manipulate AI models by flooding the internet with content that promotes malicious libraries. If enough blog posts, sample projects or repositories reference an attacker-controlled package, an AI model may begin recommending it. “AI will think that’s the library it should use,” he says, “because that’s what it has seen the most of.”  

He therefore treats AI suggestions as hints rather than decisions, verifying each recommendation manually.
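That manual verification can be partly systematized. The sketch below assumes package metadata has already been fetched into a dictionary (the field names and thresholds are hypothetical) and flags basic red flags such as a stale release history or a missing source repository. It is a first-pass filter, not a substitute for a real review.

```python
from datetime import datetime, timezone

def vet_dependency(meta: dict, max_age_days: int = 365) -> list[str]:
    """Return reasons to be suspicious of a suggested package.

    An empty list means the package passes these basic checks only;
    a human should still review it before adoption.
    """
    concerns = []
    last = datetime.fromisoformat(meta["last_release"]).replace(tzinfo=timezone.utc)
    if (datetime.now(timezone.utc) - last).days > max_age_days:
        concerns.append("no release in over a year")
    if meta.get("maintainers", 0) < 1:
        concerns.append("no listed maintainers")
    if not meta.get("repository_url"):
        concerns.append("no public source repository")
    return concerns
```

A check like this catches only the crudest signals; it would not, by itself, detect the training-data poisoning Patrick describes, which is why he insists on treating AI suggestions as hints rather than decisions.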

Will AI change the types of vulnerabilities seen in web applications?

Patrick believes some vulnerabilities may decline while others emerge. SQL injection, for example, may become less common because AI tends to follow secure patterns that are well documented. Developers sometimes avoid these patterns because they take more effort, but AI does not share that reluctance.
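The well-documented pattern in question is parameterized queries. A minimal sketch with Python's built-in sqlite3 module shows why the secure form resists injection: the driver binds the value as data, so hostile input can never change the shape of the SQL statement.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(name: str):
    # Parameterized query: `name` is bound as data, never spliced into
    # the SQL text, so input like "' OR '1'='1" cannot alter the query.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()
```

The insecure alternative, string concatenation, is marginally shorter to type, which is exactly the effort-saving shortcut Patrick notes that AI does not feel tempted by.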

However, he expects to see more vulnerabilities caused by misuse or misunderstanding of AI-generated components. Many weaknesses arise not from coding errors but from flawed assumptions about how the application should work. As developers rely more on AI, they may lose familiarity with underlying frameworks, increasing the likelihood of subtle logic and authorization issues.

What role could AI play in future CI/CD pipelines?

Patrick sees AI playing a useful role in diagnosing build pipeline errors. Today, developers commit code and quickly move on to the next task, only to be pulled back when the pipeline breaks. AI can help determine whether a failure stems from a brief network issue, a misconfiguration or a small oversight, and then propose a fix. This kind of assistance can reduce interruptions and keep development moving smoothly.

He explains that developers often want to stay “in the zone,” and anything that breaks that flow is frustrating. AI could absorb some of that operational noise. “Sure would be nice to have the AI go look at it and say, ‘This was a network problem, I’m just going to restart it,’ or, ‘The code’s broken and here’s the issue,’” he says.  
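The triage Patrick imagines could start as something very simple: classify a failure log as transient (safe to retry) or substantive (needs a human). The marker strings below are illustrative assumptions, and in line with his caution, the function only recommends an action rather than taking one.

```python
# Illustrative substrings that typically indicate a transient failure.
TRANSIENT_MARKERS = ("connection reset", "timed out", "temporary failure")

def triage(log: str) -> str:
    """Toy pipeline triage: recommend a retry for transient network
    errors, and flag everything else for human review."""
    text = log.lower()
    if any(marker in text for marker in TRANSIENT_MARKERS):
        return "retry"
    return "needs-review"
```

An AI-assisted version would replace the keyword match with a model's judgment, but the shape stays the same: diagnose, recommend, and leave the decision with the team.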

Patrick cautions, however, that AI should not bypass human review. Automated fixes must still be approved by the team, particularly when a change affects application logic or architecture. For him, AI is valuable as a diagnostic aid, but responsibility remains firmly with the developers.

How might AI reshape the future of web application development?  

Patrick anticipates that AI will influence development in both positive and challenging ways. It will accelerate parts of the process, especially routine components, and may streamline troubleshooting. At the same time, he expects pressure to increase for teams to use AI more aggressively, potentially at the expense of secure design.

He also foresees developers becoming further removed from the underlying systems powering their applications. “We’re going to see less understanding of how applications work,” he says. “And that’s dangerous when things break.”  

Attackers, meanwhile, may learn to identify patterns that reveal which AI model generated certain parts of an application, allowing them to predict its weaknesses. This dynamic will require defenders to remain vigilant, thoughtful and engaged in the full development lifecycle.

Building Secure, AI-Enabled Applications

AI is reshaping web development, but it cannot replace sound engineering principles. For Patrick, AI should be viewed as a powerful assistant rather than a substitute for secure design. It can accelerate mundane work, support early testing and diagnose simple issues, but it cannot understand context, innovate beyond what it has seen or ensure secure architecture on its own.

Bureau Veritas Cybersecurity helps organizations navigate this evolving landscape by assessing AI-integrated systems, testing applications and strengthening secure development practices. With the right balance of human oversight and AI-driven efficiency, teams can build applications that are both innovative and secure. 

More Information

Discover how cybersecurity experts like Patrick Double, Security Engineer at Bureau Veritas Cybersecurity, can help secure your organization with AI Security Services. Fill out the form, and we’ll contact you within one business day.


Why choose Bureau Veritas Cybersecurity

Bureau Veritas Cybersecurity is your expert partner in cybersecurity. We help organizations identify risks, strengthen defenses and comply with cybersecurity standards and regulations. Our services cover people, processes and technology, ranging from awareness training and social engineering to security advice, compliance and penetration testing.

We operate across IT, OT and IoT environments, supporting both digital systems and connected products. With over 300 cybersecurity professionals worldwide, we combine deep technical expertise with a global presence. Bureau Veritas Cybersecurity is part of the Bureau Veritas Group, a global leader in testing, inspection and certification.