Enriching Word Vectors with Subword Information
- 1 December 2017
- journal article
- Published by MIT Press in Transactions of the Association for Computational Linguistics
- Vol. 5, pp. 135-146
- https://doi.org/10.1162/tacl_a_00051
Abstract
Continuous word representations, trained on large unlabeled corpora, are useful for many natural language processing tasks. Popular models that learn such representations ignore the morphology of words by assigning a distinct vector to each word. This is a limitation, especially for languages with large vocabularies and many rare words. In this paper, we propose a new approach based on the skipgram model, where each word is represented as a bag of character n-grams. A vector representation is associated with each character n-gram; words are represented as the sum of these representations. Our method is fast, allowing models to be trained on large corpora quickly, and lets us compute word representations for words that did not appear in the training data. We evaluate our word representations on nine different languages, on both word similarity and analogy tasks. By comparing to recently proposed morphological word representations, we show that our vectors achieve state-of-the-art performance on these tasks.