Hey there's DALILA: a DictionAry LearnIng LibrAry

Authors: Veronica Tozzo, Vanessa D'Amario, Annalisa Barla


Cite As

Veronica Tozzo, Vanessa D'Amario, and Annalisa Barla. Hey there's DALILA: a DictionAry LearnIng LibrAry. In 2017 Imperial College Computing Student Workshop (ICCSW 2017). Open Access Series in Informatics (OASIcs), Volume 60, pp. 6:1-6:14, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)


Dictionary learning and representation learning are machine learning methods for the decomposition, denoising, and reconstruction of data, with a wide range of applications such as text recognition, image processing, and the understanding of biological processes. In this work we present DALILA, a scientific Python library for regularised dictionary learning and regularised representation learning that allows prior knowledge, when available, to be imposed on the solution. Unlike other libraries available for this purpose, DALILA is flexible and modular, and it is designed to be easily extended for custom needs. Moreover, it is compliant with the most widespread machine learning Python library, which allows for straightforward usage and integration. Here we present the theoretical aspects and discuss the library's strengths and implementation.
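The regularised dictionary learning problem the abstract refers to is commonly stated as a penalised matrix factorisation: the data matrix is approximated by the product of a coefficient matrix and a dictionary of atoms, with convex penalties on both factors encoding prior knowledge. A standard formulation (not necessarily DALILA's exact notation) is:

```latex
\min_{\mathbf{D},\,\mathbf{C}} \;
  \frac{1}{2}\,\lVert \mathbf{X} - \mathbf{C}\mathbf{D} \rVert_F^2
  \;+\; \lambda\,\Omega_C(\mathbf{C})
  \;+\; \mu\,\Omega_D(\mathbf{D})
```

where $\mathbf{X} \in \mathbb{R}^{n \times d}$ is the data, $\mathbf{D}$ the dictionary, $\mathbf{C}$ the representation coefficients, and $\Omega_C$, $\Omega_D$ are regularisers (e.g. the $\ell_1$ norm to promote sparsity, or $\ell_2$ for stability). Since the objective is non-convex jointly but convex in each factor with the other fixed, it is typically solved by alternating minimisation, e.g. the alternating proximal gradient descent listed among the keywords below.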
  • Machine learning
  • dictionary learning
  • representation learning
  • alternating proximal gradient descent
  • parallel computing
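To make the "alternating proximal gradient descent" keyword concrete, here is a minimal self-contained sketch of the technique in pure Python. This is an illustration of the general algorithm, not DALILA's API: the function names (`dictionary_learning`, `soft_threshold`, etc.), the fixed step size, and the choice of an l1 penalty on the coefficients with unit-norm dictionary columns are all assumptions made for the example.

```python
import random


def matmul(A, B):
    """Plain-list matrix product A @ B."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]


def transpose(A):
    return [list(col) for col in zip(*A)]


def residual(X, D, C):
    """Elementwise D @ C - X."""
    DC = matmul(D, C)
    return [[dc - x for dc, x in zip(r1, r2)] for r1, r2 in zip(DC, X)]


def frob2(M):
    """Squared Frobenius norm."""
    return sum(v * v for row in M for v in row)


def soft_threshold(M, t):
    """Proximal operator of t * ||.||_1, applied entrywise."""
    return [[v - t if v > t else (v + t if v < -t else 0.0) for v in row]
            for row in M]


def normalize_columns(D):
    """Project each dictionary atom (column of D) onto the unit sphere."""
    cols = transpose(D)
    scaled = [[v / ((sum(c * c for c in col) ** 0.5) or 1.0) for v in col]
              for col in cols]
    return transpose(scaled)


def dictionary_learning(X, k, lam=0.01, step=0.05, iters=500, seed=0):
    """Alternating proximal gradient descent for X ~ D @ C
    with an l1 penalty on C and unit-norm columns of D."""
    rng = random.Random(seed)
    n, m = len(X), len(X[0])
    D = normalize_columns([[rng.uniform(-1, 1) for _ in range(k)]
                           for _ in range(n)])
    C = [[rng.uniform(-1, 1) for _ in range(m)] for _ in range(k)]
    for _ in range(iters):
        # Gradient step on C: grad = D^T (D C - X), then l1 prox.
        G = matmul(transpose(D), residual(X, D, C))
        C = [[c - step * g for c, g in zip(rc, rg)] for rc, rg in zip(C, G)]
        C = soft_threshold(C, step * lam)
        # Gradient step on D: grad = (D C - X) C^T, then renormalise atoms.
        G = matmul(residual(X, D, C), transpose(C))
        D = [[d - step * g for d, g in zip(rd, rg)] for rd, rg in zip(D, G)]
        D = normalize_columns(D)
    return D, C
```

On a small low-rank matrix the reconstruction error drops well below that of the trivial all-zero model; a production implementation would additionally use backtracking or Lipschitz-based step sizes and a convergence criterion, as discussed in the paper's references on proximal algorithms.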



