OASIcs.ICPEC.2023.2.pdf
Software security is an important topic that is gaining more and more attention due to the rising number of publicly known cybersecurity incidents. Previous research has shown that one way to address software security is by means of a serious game, the CyberSecurity Challenges, which is designed to raise software developers' awareness of secure coding guidelines. This game, which has proven very successful in industry, uses an artificial intelligence technique (the laddering technique) to implement a chatbot for human-machine interaction. Recent advances in machine learning have led to a breakthrough with the release of ChatGPT by OpenAI. This model has been trained on a large amount of data and is capable of analysing and interpreting not only natural language but also small code snippets written in different programming languages. Given the advent of ChatGPT and previous state-of-the-art research in secure software development, a natural question arises: to what extent can ChatGPT aid software developers in writing secure software? In this paper, we draw on our industry experience and on extensive previous work to analyse and reflect on how ChatGPT can be used to aid secure software development. To this end, we run a small experiment in which we present ChatGPT with five different vulnerable code snippets. Our interactions with ChatGPT allow us to draw conclusions on the advantages, disadvantages, and limitations of this new technology.
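The abstract does not reproduce the code snippets used in the experiment. As a hypothetical illustration of the kind of input such an experiment involves (not taken from the paper itself), the following C fragment contains a classic stack-based buffer overflow (CWE-121); one could submit a snippet like this to ChatGPT with a prompt such as "Does this code contain a security vulnerability, and if so, how should it be fixed?".

#include <stdio.h>
#include <string.h>

/* Hypothetical example of a vulnerable snippet, in the spirit of the
 * experiment described above; not taken from the paper itself.
 * The fixed-size buffer is filled with strcpy(), so any argument
 * longer than 15 characters overflows the stack (CWE-121). */
int main(int argc, char *argv[]) {
    char buffer[16];
    if (argc > 1) {
        strcpy(buffer, argv[1]);   /* no bounds check: stack overflow */
        printf("Hello, %s\n", buffer);
    }
    return 0;
}

A secure variant would replace the unchecked strcpy() with a bounds-checked copy, for example snprintf(buffer, sizeof buffer, "%s", argv[1]); this is the kind of remediation one would expect ChatGPT to suggest when analysing such a snippet.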