Search Results

Documents authored by Amburi, Sathwik


Document
Enabling Secure Coding: Exploring GenAI for Developer Training and Education

Authors: Sathwik Amburi, Tiago Espinha Gasiba, Ulrike Lechner, and Maria Pinto-Albuquerque

Published in: OASIcs, Volume 133, 6th International Computer Programming Education Conference (ICPEC 2025)


Abstract
The rapid adoption of GenAI for code generation presents unprecedented opportunities and significant security challenges. Raising awareness about secure coding is critical for preventing software vulnerabilities. To investigate how Generative AI can best support secure coding, we built an AI Secure Coding platform, an interactive training environment that embeds a GPT-4-based chatbot directly into a structured challenge workflow. The platform comprises a landing page, a challenges page with three AI-generated tasks, and a challenge page where participants work with code snippets. In each challenge, developers (1) identify vulnerabilities by reviewing code and adding comments, (2) ask the AI for help via a chat-based interface, (3) review and refine comments based on AI feedback, and (4) fix vulnerabilities by submitting secure patches. The study involved 18 industry developers tackling three challenges. Participants used the AI Secure Coding Platform to detect and remediate vulnerabilities and then completed a survey to capture their opinions and comfort level with an AI-assisted platform for secure coding. Results show that AI assistance can boost productivity, reduce errors, and uncover more defects when treated as a "second pair of eyes," but it can also foster over-reliance. This study introduces the AI Secure Coding platform, presents preliminary results from an initial study, and shows that embedding GenAI into a structured secure-coding workflow can both enable and challenge developers. This work also opens the door to a new research field: leveraging GenAI to enable secure software development.
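The chat-based review step (step 2 above) can be sketched in a few lines. The snippet below is a hypothetical illustration, not the platform's actual implementation: it assumes the official OpenAI Python client with an OPENAI_API_KEY in the environment, and the prompt wording and the helper name review_snippet are invented for this example.

# Hypothetical sketch of the "ask the AI for help" step; not the paper's code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def review_snippet(code: str, developer_comments: str) -> str:
    """Send a code snippet plus the developer's draft findings to a GPT-4 model
    and return its secure-coding feedback as plain text."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You are a secure-coding tutor. Point out vulnerabilities "
                        "and comment on the developer's own findings."},
            {"role": "user",
             "content": f"Code under review:\n{code}\n\nMy findings so far:\n{developer_comments}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    snippet = "query = \"SELECT * FROM users WHERE name = '\" + user_input + \"'\""
    print(review_snippet(snippet, "I think the string concatenation might be a problem."))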

Cite as

Sathwik Amburi, Tiago Espinha Gasiba, Ulrike Lechner, and Maria Pinto-Albuquerque. Enabling Secure Coding: Exploring GenAI for Developer Training and Education. In 6th International Computer Programming Education Conference (ICPEC 2025). Open Access Series in Informatics (OASIcs), Volume 133, pp. 2:1-2:15, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{amburi_et_al:OASIcs.ICPEC.2025.2,
  author =	{Amburi, Sathwik and Espinha Gasiba, Tiago and Lechner, Ulrike and Pinto-Albuquerque, Maria},
  title =	{{Enabling Secure Coding: Exploring GenAI for Developer Training and Education}},
  booktitle =	{6th International Computer Programming Education Conference (ICPEC 2025)},
  pages =	{2:1--2:15},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-393-5},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{133},
  editor =	{Queir\'{o}s, Ricardo and Pinto, M\'{a}rio and Portela, Filipe and Sim\~{o}es, Alberto},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICPEC.2025.2},
  URN =		{urn:nbn:de:0030-drops-240321},
  doi =		{10.4230/OASIcs.ICPEC.2025.2},
  annote =	{Keywords: Secure Coding, Industry, Software Development, Generative AI, Large Language Models, Teaching}
}
Document
Can Open Large Language Models Catch Vulnerabilities?

Authors: Diogo Gaspar Lopes, Tiago Espinha Gasiba, Sathwik Amburi, and Maria Pinto-Albuquerque

Published in: OASIcs, Volume 133, 6th International Computer Programming Education Conference (ICPEC 2025)


Abstract
As Large Language Models (LLMs) become increasingly integrated into secure software development workflows, a critical question remains unanswered: can these models not only detect insecure code but also reliably classify vulnerabilities according to standardized taxonomies? In this work, we conduct a systematic evaluation of three state-of-the-art LLMs - Llama3, Codestral, and Deepseek R1 - using a carefully filtered subset of the Big-Vul dataset annotated with eight representative Common Weakness Enumeration categories. Adopting a closed-world classification setup, we assess each model’s performance in both identifying the presence of vulnerabilities and mapping them to the correct CWE label. Our findings reveal a sharp contrast between high detection rates and markedly poor classification accuracy, with frequent overgeneralization and misclassification. Moreover, we analyze model-specific biases and common failure modes, shedding light on the limitations of current LLMs in performing fine-grained security reasoning. These insights are especially relevant in educational contexts, where LLMs are being adopted as learning aids despite their limitations. A nuanced understanding of their behaviour is essential to prevent the propagation of misconceptions among students. Our results expose key challenges that must be addressed before LLMs can be reliably deployed in security-sensitive environments.
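The closed-world classification setup described above can be illustrated with a short probe. The sketch below is not the authors' evaluation harness: it assumes the open models are served locally through the ollama Python package, and the eight CWE labels shown are placeholders standing in for the eight Big-Vul categories the paper actually uses.

# Hypothetical closed-world CWE classification probe; not the paper's evaluation code.
import ollama

# Placeholder label set; the study uses eight representative CWE categories from Big-Vul.
CWE_LABELS = ["CWE-20", "CWE-79", "CWE-89", "CWE-119",
              "CWE-125", "CWE-190", "CWE-416", "CWE-787"]

def classify(code: str, model: str = "llama3") -> str:
    """Ask the model whether the code is vulnerable and, if so, to pick
    exactly one label from the closed CWE set."""
    prompt = (
        "Decide whether the following function is vulnerable. "
        f"If it is, answer with exactly one label from {CWE_LABELS}; "
        "otherwise answer SAFE.\n\n" + code
    )
    reply = ollama.chat(model=model, messages=[{"role": "user", "content": prompt}])
    return reply["message"]["content"].strip()

# Detection accuracy counts any non-SAFE answer on a vulnerable sample as a hit;
# classification accuracy additionally requires the predicted CWE to match the
# ground-truth label, which is where the abstract reports the sharp drop.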

Cite as

Diogo Gaspar Lopes, Tiago Espinha Gasiba, Sathwik Amburi, and Maria Pinto-Albuquerque. Can Open Large Language Models Catch Vulnerabilities? In 6th International Computer Programming Education Conference (ICPEC 2025). Open Access Series in Informatics (OASIcs), Volume 133, pp. 4:1-4:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{gasparlopes_et_al:OASIcs.ICPEC.2025.4,
  author =	{Gaspar Lopes, Diogo and Espinha Gasiba, Tiago and Amburi, Sathwik and Pinto-Albuquerque, Maria},
  title =	{{Can Open Large Language Models Catch Vulnerabilities?}},
  booktitle =	{6th International Computer Programming Education Conference (ICPEC 2025)},
  pages =	{4:1--4:14},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-393-5},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{133},
  editor =	{Queir\'{o}s, Ricardo and Pinto, M\'{a}rio and Portela, Filipe and Sim\~{o}es, Alberto},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICPEC.2025.4},
  URN =		{urn:nbn:de:0030-drops-240340},
  doi =		{10.4230/OASIcs.ICPEC.2025.4},
  annote =	{Keywords: Large Language Models (LLMs), Secure Coding, CWE Classification, Machine Learning, Software Vulnerability Detection, Artificial Intelligence, Code Analysis, Big-Vul Dataset}
}
Document
Are We There Yet? On Security Vulnerabilities Produced by Open Source Generative AI Models and Its Implications for Security Education

Authors: Maria Camila Santos Galeano, Tiago Espinha Gasiba, Sathwik Amburi, and Maria Pinto-Albuquerque

Published in: OASIcs, Volume 133, 6th International Computer Programming Education Conference (ICPEC 2025)


Abstract
With the increasing integration of large language models (LLMs) into software development and programming education, concerns have emerged about the security of AI-generated code. This study investigates the security of three open-source code generation models, Codestral, DeepSeek R1, and LLaMA 3.3 70B, using structured prompts in Python, C, and Java. Some prompts were designed to explicitly trigger known vulnerability patterns, such as unsanitized input handling or unsafe memory operations, in order to assess how each model responds to security-sensitive tasks. The findings reveal recurring issues, including command execution vulnerabilities, insecure memory handling, and insufficient input validation. In response, we propose a set of recommendations for integrating secure prompt design and code auditing practices into developer training. These guidelines aim to help future developers generate safer code and better identify flaws in GenAI-generated output. This work offers an initial analysis of the limitations of GenAI-assisted code generation and provides actionable strategies to support the more secure and responsible use of these tools in professional and educational contexts.
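The command-execution and input-validation issues named above correspond to the classic shell-injection pattern. The example below is illustrative only, written for this listing rather than taken from the paper, and contrasts the vulnerable form with a safer variant.

# Illustrative only: the kind of command-execution flaw the study reports in
# generated code, next to a safer rewrite. Not code from the paper.
import subprocess

def ping_host_insecure(host: str) -> str:
    # Vulnerable: unsanitized input is interpolated into a shell command, so
    # host = "8.8.8.8; rm -rf ~" would inject a second command.
    return subprocess.run(f"ping -c 1 {host}", shell=True,
                          capture_output=True, text=True).stdout

def ping_host_safer(host: str) -> str:
    # Safer: validate the input and pass it as a discrete argument list,
    # avoiding shell interpretation entirely.
    if not host or not all(c.isalnum() or c in ".-" for c in host):
        raise ValueError("invalid host")
    return subprocess.run(["ping", "-c", "1", host],
                          capture_output=True, text=True).stdout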

Cite as

Maria Camila Santos Galeano, Tiago Espinha Gasiba, Sathwik Amburi, and Maria Pinto-Albuquerque. Are We There Yet? On Security Vulnerabilities Produced by Open Source Generative AI Models and Its Implications for Security Education. In 6th International Computer Programming Education Conference (ICPEC 2025). Open Access Series in Informatics (OASIcs), Volume 133, pp. 9:1-9:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{santosgaleano_et_al:OASIcs.ICPEC.2025.9,
  author =	{Santos Galeano, Maria Camila and Espinha Gasiba, Tiago and Amburi, Sathwik and Pinto-Albuquerque, Maria},
  title =	{{Are We There Yet? On Security Vulnerabilities Produced by Open Source Generative AI Models and Its Implications for Security Education}},
  booktitle =	{6th International Computer Programming Education Conference (ICPEC 2025)},
  pages =	{9:1--9:12},
  series =	{Open Access Series in Informatics (OASIcs)},
  ISBN =	{978-3-95977-393-5},
  ISSN =	{2190-6807},
  year =	{2025},
  volume =	{133},
  editor =	{Queir\'{o}s, Ricardo and Pinto, M\'{a}rio and Portela, Filipe and Sim\~{o}es, Alberto},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.ICPEC.2025.9},
  URN =		{urn:nbn:de:0030-drops-240395},
  doi =		{10.4230/OASIcs.ICPEC.2025.9},
  annote =	{Keywords: Generative AI, Code Security, Programming Education, Prompt Engineering, Secure Coding, Static Analysis}
}