Matching User Interfaces to Assess Simple Web Applications (Short Paper)

Authors: Marco Primo, José Paulo Leal



File
  • OASIcs.ICPEC.2021.7.pdf
  • Filesize: 0.49 MB
  • 6 pages

Document Identifiers
  • DOI: 10.4230/OASIcs.ICPEC.2021.7

Author Details

Marco Primo
  • Faculty of Sciences, University of Porto, Portugal
José Paulo Leal
  • Faculty of Sciences, University of Porto, Portugal
  • CRACS - INESC, Portugal

Cite As

Marco Primo and José Paulo Leal. Matching User Interfaces to Assess Simple Web Applications (Short Paper). In Second International Computer Programming Education Conference (ICPEC 2021). Open Access Series in Informatics (OASIcs), Volume 91, pp. 7:1-7:6, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021) https://doi.org/10.4230/OASIcs.ICPEC.2021.7

Abstract

This paper presents ongoing research aiming at the automatic assessment of simple web applications, like those used in introductory web technologies courses. The distinctive feature of the proposed approach is a web interface matching procedure. This matching procedure verifies whether the web interface being assessed corresponds to that of a reference application; otherwise, it provides detailed feedback on the detected differences. Since web interfaces are event-driven, this matching is instrumental in assessing functionality: after mapping the web interface elements of the two applications to each other, these can be targeted with events and the resulting property changes can be compared. This paper details the proposed matching algorithm and the current state of its implementation. It also discusses future work to embed this approach in a web environment for solving web application exercises with automatic assessment.
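
The matching and event-driven comparison described in the abstract can be pictured with a short sketch. The following TypeScript fragment is only an illustration of the general idea, not the algorithm detailed in the paper: it pairs the interactive elements of a submitted page with those of a reference page by tag name and document order (an assumed, simplified matching criterion), and then dispatches the same event on each matched pair so that a chosen property can be compared afterwards.

```typescript
// Illustrative sketch only; the matching criterion (tag name and document
// order) and the set of interactive tags are simplifying assumptions, not
// the procedure proposed in the paper.

interface Mismatch {
  index: number;   // position of the element in document order (-1 for global)
  reason: string;  // human-readable description of the difference
}

// Collect the interactive elements of a document in document order.
function interactiveElements(doc: Document): Element[] {
  return Array.from(doc.querySelectorAll("input, button, select, textarea, a"));
}

// Pair elements by position and tag name, reporting structural differences.
function matchInterfaces(submitted: Document, reference: Document): Mismatch[] {
  const sub = interactiveElements(submitted);
  const ref = interactiveElements(reference);
  const mismatches: Mismatch[] = [];

  if (sub.length !== ref.length) {
    mismatches.push({
      index: -1,
      reason: `expected ${ref.length} interactive elements, found ${sub.length}`,
    });
  }
  for (let i = 0; i < Math.min(sub.length, ref.length); i++) {
    if (sub[i].tagName !== ref[i].tagName) {
      mismatches.push({
        index: i,
        reason: `element ${i}: expected <${ref[i].tagName.toLowerCase()}>,` +
                ` found <${sub[i].tagName.toLowerCase()}>`,
      });
    }
  }
  return mismatches;
}

// Once two elements are paired, fire the same event on both and compare the
// value of a property afterwards (e.g. the "value" of an <input> after a
// click on a button that is supposed to fill it in).
function sameReaction(subEl: Element, refEl: Element,
                      eventType: string, property: string): boolean {
  subEl.dispatchEvent(new Event(eventType, { bubbles: true }));
  refEl.dispatchEvent(new Event(eventType, { bubbles: true }));
  return (subEl as any)[property] === (refEl as any)[property];
}
```

In the paper's setting, the events to dispatch and the properties to compare would presumably come from the exercise definition; here they are passed explicitly only to keep the sketch self-contained.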

Subject Classification

ACM Subject Classification
  • Information systems → Web interfaces
  • Applied computing → Computer-managed instruction
Keywords
  • automatic assessment
  • web interfaces
  • learning environments

