AP: Artificial Programming

Authors: Rishabh Singh and Pushmeet Kohli



File: LIPIcs.SNAPL.2017.16.pdf (0.9 MB, 12 pages)

Cite As

Rishabh Singh and Pushmeet Kohli. AP: Artificial Programming. In 2nd Summit on Advances in Programming Languages (SNAPL 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 71, pp. 16:1-16:12, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017).
https://doi.org/10.4230/LIPIcs.SNAPL.2017.16

Abstract

The ability to automatically discover a program consistent with a given user intent (specification) is the holy grail of Computer Science. While significant progress has been made on the problem of Program Synthesis, a number of challenges remain, particularly for synthesizing richer and larger programs. This is in large part due to the difficulty of searching over the space of programs. In this paper, we argue that this challenge can be tackled by learning synthesizers automatically from a large amount of training data. We present a first step in this direction by describing a novel synthesis approach based on two neural architectures that tackle two key challenges: learning to understand partial input-output specifications, and learning to search over programs. The first architecture, called the Spec Encoder, computes a continuous representation of the specification, while the second, called the Program Generator, incrementally constructs programs in a hypothesis space conditioned on the specification vector. The key idea of the approach is to train these architectures on a large set of (spec, P) pairs, where P denotes a program sampled from the DSL L and spec denotes the corresponding specification satisfied by P. We demonstrate the effectiveness of our approach on two preliminary instantiations. The first, called Neural FlashFill, corresponds to the domain of string manipulation programs similar to that of FlashFill. The second domain considers string transformation programs built by composing API functions. We show that a neural system is able to learn a large majority of programs from only a few input-output examples. We believe this new approach will not only dramatically expand the applicability and effectiveness of Program Synthesis, but also bring together the Program Synthesis and Machine Learning research disciplines.
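The training setup described above depends on generating many (spec, P) pairs by sampling programs from a DSL and executing them on inputs to obtain input-output specifications. The sketch below illustrates that data-generation step on a toy string-manipulation DSL; the operator set and function names are illustrative assumptions, not the paper's actual DSL.

```python
import random

# A toy DSL: each program is a left-to-right composition of primitive
# string functions. This operator set is hypothetical, chosen only to
# illustrate the (spec, P) sampling scheme from the abstract.
DSL = {
    "lower":   str.lower,
    "upper":   str.upper,
    "strip":   str.strip,
    "reverse": lambda s: s[::-1],
}

def sample_program(max_len=3):
    """Sample a random program P as a sequence of DSL operator names."""
    return [random.choice(list(DSL)) for _ in range(random.randint(1, max_len))]

def run_program(prog, s):
    """Execute a program on input string s by applying its operators in order."""
    for op in prog:
        s = DSL[op](s)
    return s

def make_training_pair(inputs):
    """Produce one (spec, P) pair: the spec is a list of input-output examples
    obtained by running the sampled program P on the given inputs."""
    prog = sample_program()
    spec = [(x, run_program(prog, x)) for x in inputs]
    return spec, prog

random.seed(0)
spec, prog = make_training_pair(["  Hello ", "World"])
```

In the full approach, the Spec Encoder would consume `spec` and the Program Generator would be trained to emit `prog` conditioned on that encoding; here the pair is simply materialized as data.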
Keywords
  • Neural Program Synthesis
  • Neural Programming
  • Neural FlashFill

