Word Vector Representation. 2016. Seminar of the PRHLT group.

In this presentation we take a journey through distributed word representations. We begin with Bengio's seminal paper [1], which proposed a neural probabilistic language model. Once word representations have been learnt, it stands to reason to move on to word compositionality. Socher et al. proposed several approaches to this problem. We will cover the following models:
- Recursive Neural Networks [3, 4]
- Matrix-Vector Recursive Neural Networks [2]
- Recursive Autoencoders [5]

For those who want to go deeper, these concepts, along with some tips and tricks, are covered in Socher et al. [6].

References:

[1] Bengio, Y., Ducharme, R., Vincent, P., and Janvin, C. (2003). A neural probabilistic language model. The Journal of Machine Learning Research, 3:1137–1155.

[2] Socher, R., Huval, B., Manning, C. D., and Ng, A. Y. (2012). Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1201–1211. Association for Computational Linguistics.

[3] Socher, R., Lin, C. C., Manning, C., and Ng, A. Y. (2011a). Parsing natural scenes and natural language with recursive neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 129–136.

[4] Socher, R., Manning, C. D., and Ng, A. Y. (2010). Learning continuous phrase representations and syntactic parsing with recursive neural networks. In Proceedings of the NIPS-2010 Deep Learning and Unsupervised Feature Learning Workshop, pages 1–9.

[5] Socher, R., Pennington, J., Huang, E. H., Ng, A. Y., and Manning, C. D. (2011b). Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 151–161. Association for Computational Linguistics.

[6] Socher, R., Bengio, Y., and Manning, C. D. (2012, July). Deep learning for NLP (without magic). In Tutorial Abstracts of ACL 2012, pages 5–5. Association for Computational Linguistics.
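The recursive models covered in the presentation share one core operation: a parent phrase vector is computed from two child vectors through a shared weight matrix and a nonlinearity, roughly p = tanh(W[a; b] + bias). A minimal sketch of that composition step, where the weights, bias, and toy word vectors are made-up illustrative values rather than trained parameters from any of the papers above:

```python
import math

def compose(a, b, W, bias):
    """Combine child vectors a and b (each length d) into a parent vector.

    W is a list of d rows, each of length 2d; bias has length d.
    Implements p = tanh(W [a; b] + bias), the basic recursive
    composition used in recursive neural networks.
    """
    x = a + b  # list concatenation plays the role of [a; b]
    return [math.tanh(sum(w * v for w, v in zip(row, x)) + c)
            for row, c in zip(W, bias)]

# Toy 2-dimensional example with hand-picked (not learnt) parameters.
W = [[1.0, 0.0, 1.0, 0.0],
     [0.0, 1.0, 0.0, 1.0]]
bias = [0.0, 0.0]

very = [0.5, -0.2]   # hypothetical word vectors
good = [0.3, 0.4]
phrase = compose(very, good, W, bias)  # a vector for "very good"
```

In the full models this step is applied bottom-up along a parse tree, so the same `compose` produces vectors for phrases of any length; the Matrix-Vector variant additionally assigns each word its own matrix, and the autoencoder variant trains `W` to reconstruct its children.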