Minh Anh Nguyen
Since their introduction, Large Language Models (LLMs) have become popular tools for helping software developers write code. The AI tools built on top of LLMs, often called coding assistants, have shown great potential in improving developer productivity; however, questions remain about whether they can truly help create secure software. For example, tools like GitHub Copilot may access entire codebases, raising privacy concerns, especially in large organizations. Additionally, AI-generated code that is incorporated into software may contain hidden security flaws, and it is often unclear who is responsible for the resulting vulnerabilities. Moreover, the supply chain of AI coding assistants presents its own security risks: if the developers of these assistants and the LLM providers are unaware of vulnerabilities introduced during development, the tools may carry new risks into end-user environments, such as integrated development environments (IDEs) running on the host machine. Software developed or assisted by AI should therefore meet high security standards so that it does not introduce new risks. This research aims to explore both the potential benefits and the challenges that LLM-powered coding assistants bring to secure software development.