AI Essential to Secure Dev, Successful DevSecOps—Yet Risks Abound
When it comes to finding security vulnerabilities in software built by in-house developers, there’s good reason to believe that development teams, thanks in part to AI tools, really are “shifting left” and catching flaws earlier in the development process.
At least, that’s one of the findings from a survey of 1,001 senior technology executives commissioned by GitLab, Inc. The Global DevSecOps Report: The State of AI in Software Development also found that while tech execs are optimistic about incorporating AI into their software development processes, concerns remain regarding AI and the security and privacy of their data.
The GitLab survey found that 83% of respondents said implementing AI in their software development processes is necessary to keep up with market trends. At the same time, 79% cited AI tools’ access to sensitive information as a significant concern. Overall, the report found that improved efficiency, faster cycle times and AI in the software development life cycle are driving DevSecOps practices, and with them, innovation.
There are also signs that, when it comes to AI and software development, organizations may be getting ahead of themselves: 90% of participants reported that they are using, or plan to use, AI in their software development efforts, yet 81% said they need more training to use AI in their development work.
Despite organizations saying they provide AI training, respondents independently turn to readily available resources: books and articles, online videos, educational courses, practice on open source projects, and peers and mentors. Among respondents whose organizations use or plan to use AI for software development, 65% said their organization has hired or will hire new talent to manage its AI implementation.
AI/ML Implementations Aren’t Expected to Go Smoothly
Security professionals surveyed expected big bumps ahead in their implementation of AI. A full 67% of security respondents said they are concerned about the impact of AI/ML capabilities on their jobs, and 28% of those said they are “very” or “extremely” concerned. A quarter of security respondents also worry that AI/ML may introduce coding errors that will make their jobs more challenging.
Senior technology executives shared similar concerns about AI/ML in their organizations. An even greater share of these executives, 39%, are concerned that AI-generated code will introduce security-related defects, and still more (48%) worry that AI-generated code may not have the same legal copyright protection as code written by human developers. Finally, most tech execs said they prioritize privacy and intellectual property protection when selecting an AI/ML tool.
“While the vast majority of DevSecOps professionals I speak to are using AI in software development or are planning to, a nearly equal proportion are concerned about AI tools having access to private information or intellectual property,” Josh Lemos, chief information security officer at GitLab, explained.
Lemos also pointed out that for intellectual property to be exposed through AI, the AI model would need to be trained on a company’s data or intellectual property.
Daniel Kennedy, information security and networking research director at 451 Research, said that if intellectual property is being added to AI/ML models during their use, organizations must pay close attention to the terms of the engagement “and not agree to things they’re uncomfortable with.” Additionally, he noted that the situation is much less clear when the concern is copyright ownership and licensing. “I have not yet seen good answers to that question in the industry, and much of it will work its way through the courts over time,” he said.
For GitLab’s part, Lemos explained that GitLab’s platform does not train models on its customers’ data. “Customers maintain control over their intellectual property, as only small code snippets before and after the code the developer is working on are sent to the model for suggestions,” Lemos said. GitLab uses a pre-trained AI model for code suggestions, and the code snippets sent to the model are discarded after the suggestion is provided. “Other AI features that summarize issues, recommend reviewers, or explain vulnerabilities do not rely on any proprietary information — they are correlation engines that recognize security anti-patterns,” he added.
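As a loose illustration of the snippet-window pattern Lemos describes, consider the hypothetical sketch below; the function, field names and size cap are illustrative assumptions, not GitLab’s actual code-suggestion API. The point is that only a bounded window of code around the developer’s cursor ever leaves the editor:

```python
# Hypothetical sketch of the snippet-window pattern described above; the
# names and the size cap are illustrative assumptions, not GitLab's API.

MAX_WINDOW = 2048  # assumed cap on how many characters leave the editor


def build_suggestion_request(file_text: str, cursor: int) -> dict:
    """Collect small windows of code before and after the cursor.

    The full file, repository and history stay local; the model sees only
    enough surrounding context to propose a completion.
    """
    prefix = file_text[max(0, cursor - MAX_WINDOW):cursor]
    suffix = file_text[cursor:cursor + MAX_WINDOW]
    return {"prefix": prefix, "suffix": suffix}
```

Per Lemos, the snippets in such a request are discarded once a suggestion is returned, so nothing is retained for training.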
Everyone Is Responsible for Application Security
The GitLab survey found fewer development and security silos and more AppSec collaboration among teams, demonstrated by fewer respondents saying their group was “completely” responsible for application security. This year, only 38% of respondents said they were fully responsible for application security in their organization, down from 48% a year ago. Further, 53% said they felt responsible for AppSec as part of a larger team, up from 44% a year ago.
Interestingly, only 38% of security respondents said they’re primarily responsible for application security this year, compared to 78% a year ago. Developers were split evenly, with 44% saying security is principally responsible for AppSec and 44% saying developers are. Finally, operations professionals were more likely than developers, and about as likely as security professionals, to say their team is mainly responsible for AppSec.
451 Research’s Kennedy said being “primarily” responsible for application security is a “nebulous concept” in an organization. “Security professionals seeking to foist full or primary responsibility for AppSec onto developers, whose primary job is judged more on other factors including feature delivery, will find that during a breach, the idea that someone else was primarily responsible for looking after this security concern won’t wash with the C-suite,” he said.
Likewise, in DevOps organizations, security teams cannot tackle AppSec challenges on their own and expect to succeed. “I see from the GitLab report that there is a lower percentage of overall respondents saying they are ‘completely’ responsible for AppSec year over year, and a higher percentage indicating they have a responsibility as part of a larger team. I think that’s a good thing,” he said.
“That trend, the shift in application security testing tool usage, does point to a greater level of developer involvement, and that’s the only way this will work. Problems need to be found pre-production, largely addressed as they are created, and the place to do that is while coding, so developers have to be empowered to own the day-to-day of correcting application security vulnerabilities as early in the development lifecycle as possible,” he added.
GitLab’s Lemos sees a lot of parallels between AppSec and other areas of enterprise tech, notably application performance management or observability. “While teams may have strength in that specialized domain, they have a shared responsibility for the code base. Second, security teams look a lot more like engineering teams now where they are integrating security tools into continuous integration systems and providing feedback in the developer’s workflow,” he said.
Additionally, AppSec teams are creating pre-built secure-by-design components for developers to use in their software. “This builds a shared responsibility model between security and engineering teams. Finally, technology teams are under intense resource constraints, where automation and AI can become a strategic resource,” Lemos concluded.
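As a concrete, hypothetical example of such a component (illustrative only, not taken from the GitLab report), an AppSec team might publish a hardened wrapper that makes the safe way to run external tools the easy way:

```python
# Hypothetical secure-by-design component an AppSec team might publish;
# the name and policy choices here are illustrative assumptions.
import subprocess


def run_tool(argv: list[str], timeout_seconds: int = 30) -> subprocess.CompletedProcess:
    """Run an external tool with safe defaults baked in."""
    if isinstance(argv, str):
        raise TypeError("Pass an argument list, not a shell command string")
    return subprocess.run(
        argv,
        shell=False,          # arguments are never parsed by a shell
        capture_output=True,  # stdout/stderr captured for later inspection
        text=True,
        timeout=timeout_seconds,  # a runaway tool cannot hang the build
        check=True,           # non-zero exit raises instead of failing silently
    )


# Usage: run_tool(["git", "status"]) rather than os.system("git status")
```

The design choice is that the secure path is also the path of least resistance: developers get one vetted entry point, and the security team can tighten the policy in a single place.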
There’s no doubt that the goal of AI and automation is to be that strategic resource for AppSec and DevSecOps teams. Still, concerns that AI may create more security problems in the code it writes than it solves in the code it tests persist, as do fears around AI coding and intellectual property protection. Whether such concerns can be adequately addressed will perhaps be revealed in surveys in the years ahead.