Reasoning over Arabic WordNet Relations with Neural Tensor Network
Abstract
Arabic WordNet is an important resource for many natural language processing tasks. However, it suffers from a number of shortcomings. In this paper, we address the problem of unseen relationships between words in Arabic WordNet. More precisely, we focus on learning new relationships in Arabic WordNet automatically from the existing ones. We investigate how the Neural Tensor Network can be an advantageous technique for filling the relationship gaps between Arabic WordNet words. With minimal resources, this model delivers meaningful results. The critical component is how to represent the entities of Arabic WordNet. For that, we use AraVec, a set of pre-trained distributed word representations for the Arabic language, and we show how much these vectors help when used for initialization. We evaluated the model using a number of tests, which reveal that semantically initialized vectors provide considerably greater accuracy than randomly initialized ones.
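To make the approach concrete, the following is a minimal NumPy sketch of the Neural Tensor Network scoring function (in the style of Socher et al.'s formulation), where two entity embeddings are combined through a relation-specific tensor to score how plausible a candidate relation is. The dimensions, parameter names, and random parameters here are purely illustrative assumptions; in the paper's setting, the entity vectors would be AraVec embeddings of Arabic WordNet words.

```python
import numpy as np

def ntn_score(e1, e2, W, V, b, u):
    """Score a candidate triple (e1, R, e2) with a Neural Tensor Network.

    e1, e2 : (d,) entity embeddings (e.g. AraVec word vectors)
    W      : (d, d, k) relation-specific bilinear tensor (k slices)
    V      : (k, 2d) standard feed-forward weights
    b      : (k,) bias
    u      : (k,) output weights
    Returns a scalar; a higher score means the relation is more plausible.
    """
    # Bilinear tensor product: one scalar per slice of W -> shape (k,)
    bilinear = np.einsum('i,ijk,j->k', e1, W, e2)
    # Standard layer over the concatenated entity pair -> shape (k,)
    standard = V @ np.concatenate([e1, e2])
    return float(u @ np.tanh(bilinear + standard + b))

# Toy usage with random parameters; real embeddings would come from AraVec,
# and d, k are illustrative, not the paper's actual hyperparameters.
rng = np.random.default_rng(0)
d, k = 4, 3
e1, e2 = rng.normal(size=d), rng.normal(size=d)
W = rng.normal(size=(d, d, k))
V = rng.normal(size=(k, 2 * d))
b = rng.normal(size=k)
u = rng.normal(size=k)
score = ntn_score(e1, e2, W, V, b, u)
```

Initializing `e1` and `e2` with pre-trained AraVec vectors rather than random values is the design choice the abstract evaluates: the semantic structure already present in the embeddings gives the tensor layer a better starting point for learning unseen relations.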
Keywords
Arabic WordNet, natural language processing, neural tensor network, AraVec, word representation, word embedding