Maria Camila Santos Galeano, Tiago Espinha Gasiba, Sathwik Amburi, Maria Pinto-Albuquerque
Creative Commons Attribution 4.0 International license
With the increasing integration of large language models (LLMs) into software development and programming education, concerns have emerged about the security of AI-generated code. This study investigates the security of three open-source code generation models: Codestral, DeepSeek R1, and LLaMA 3.3 70B, using structured prompts in Python, C, and Java. Some prompts were designed to explicitly trigger known vulnerability patterns, such as unsanitized input handling or unsafe memory operations, in order to assess how each model responds to security-sensitive tasks. The findings reveal recurring issues, including command execution vulnerabilities, insecure memory handling, and insufficient input validation. In response, we propose a set of recommendations for integrating secure prompt design and code auditing practices into developer training. These guidelines aim to help future developers generate safer code and better identify flaws in GenAI-generated output. This work offers an initial analysis of the limitations of GenAI-assisted code generation and provides actionable strategies to support the more secure and responsible use of these tools in professional and educational contexts.
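To make the "command execution via unsanitized input" vulnerability class concrete, here is a minimal Python sketch (illustrative only, not code from the paper; the `ping_vulnerable` and `ping_safer` names are hypothetical) contrasting naive shell-command interpolation with a quoted variant:

```python
# Illustrative sketch only -- not code from the paper. It shows the
# command-execution pattern the study reports: unsanitized user input
# interpolated into a shell command, versus a shlex-quoted variant.
import shlex

def ping_vulnerable(host: str) -> str:
    # Naive interpolation: "example.com; rm -rf /" injects a second command.
    return f"ping -c 1 {host}"

def ping_safer(host: str) -> str:
    # shlex.quote() escapes shell metacharacters, so the whole input is
    # passed as a single (here, invalid) hostname argument instead.
    return f"ping -c 1 {shlex.quote(host)}"

malicious = "example.com; rm -rf /"
print(ping_vulnerable(malicious))  # the injected command survives intact
print(ping_safer(malicious))       # the input is quoted as one argument
```

Passing an argument list to `subprocess.run` without `shell=True` avoids shell interpretation altogether and is generally preferable when a shell string is not required.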
@InProceedings{santosgaleano_et_al:OASIcs.ICPEC.2025.9,
author = {Santos Galeano, Maria Camila and Espinha Gasiba, Tiago and Amburi, Sathwik and Pinto-Albuquerque, Maria},
title = {{Are We There Yet? On Security Vulnerabilities Produced by Open Source Generative AI Models and Its Implications for Security Education}},
booktitle = {6th International Computer Programming Education Conference (ICPEC 2025)},
pages = {9:1--9:12},
series = {Open Access Series in Informatics (OASIcs)},
ISBN = {978-3-95977-393-5},
ISSN = {2190-6807},
year = {2025},
volume = {133},
editor = {Queir\'{o}s, Ricardo and Pinto, M\'{a}rio and Portela, Filipe and Sim\~{o}es, Alberto},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICPEC.2025.9},
URN = {urn:nbn:de:0030-drops-240395},
doi = {10.4230/OASIcs.ICPEC.2025.9},
annote = {Keywords: Generative AI, Code Security, Programming Education, Prompt Engineering, Secure Coding, Static Analysis}
}