Live Programming Environment for Deep Learning with Instant and Editable Neural Network Visualization

Authors
Chunqi Zhao, Tsukasa Fukusato, Jun Kato, Takeo Igarashi



File
  • OASIcs.PLATEAU.2019.7.pdf (0.51 MB, 5 pages)

Document Identifiers
  • DOI: 10.4230/OASIcs.PLATEAU.2019.7

Author Details

Chunqi Zhao
  • The University of Tokyo, Japan
Tsukasa Fukusato
  • The University of Tokyo, Japan
Jun Kato
  • National Institute of Advanced Industrial Science and Technology, Tsukuba, Japan
Takeo Igarashi
  • The University of Tokyo, Japan

Cite As

Chunqi Zhao, Tsukasa Fukusato, Jun Kato, and Takeo Igarashi. Live Programming Environment for Deep Learning with Instant and Editable Neural Network Visualization. In 10th Workshop on Evaluation and Usability of Programming Languages and Tools (PLATEAU 2019). Open Access Series in Informatics (OASIcs), Volume 76, pp. 7:1-7:5, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020) https://doi.org/10.4230/OASIcs.PLATEAU.2019.7

Abstract

Artificial intelligence (AI) techniques such as deep learning have achieved significant success in a variety of application domains. Several visualization techniques have been proposed for understanding the overall behavior of the neural network defined by deep learning code. However, they show the visualization only after the code or network definition is complete, and building deep neural network models in a plain code editor remains complicated and unfriendly for beginners. In this paper, to help users better understand the behavior of their networks, we augment a code editor with an instant and editable visualization of the network model, inspired by live programming, which provides continuous feedback to the programmer.
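
To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of the instant-visualization loop: on each edit, rebuild the model from source and emit a layer graph that a front-end could redraw immediately. It assumes a PyTorch-style model definition; `build_model` and `layer_graph` are illustrative names, not an API from the paper.

```python
# A minimal sketch of the live-visualization idea (not the authors'
# implementation): after every edit, rebuild the model from source and
# emit a layer graph that a front-end could redraw instantly.
# Assumes PyTorch; build_model and layer_graph are illustrative names.
import torch.nn as nn

def build_model() -> nn.Sequential:
    # The network the programmer is editing in the code editor.
    return nn.Sequential(
        nn.Conv2d(1, 6, kernel_size=5),   # 1x28x28 -> 6x24x24
        nn.ReLU(),
        nn.MaxPool2d(2),                  # -> 6x12x12
        nn.Flatten(),                     # -> 864 features
        nn.Linear(6 * 12 * 12, 10),
    )

def layer_graph(model: nn.Module):
    # Flatten the model into (name, description) nodes; a visualizer
    # could render these as boxes connected in definition order.
    return [(name, repr(layer)) for name, layer in model.named_children()]

if __name__ == "__main__":
    # A live environment would re-run this step on every code change
    # and refresh the diagram.
    for name, desc in layer_graph(build_model()):
        print(name, desc)
```

The paper's visualization is also editable, meaning changes made in the diagram flow back into the code; that reverse mapping is beyond this one-directional sketch.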

Subject Classification

ACM Subject Classification
  • Software and its engineering → Development frameworks and environments
  • Human-centered computing → Visualization toolkits
Keywords
  • Neural network visualization
  • Live programming
  • Deep learning

