Wikipedia Navigation Vectors

2017-02-10T03:14:40Z (GMT) by Ellery Wulczyn
<p>In this project, we learned embeddings for Wikipedia articles and <a href="https://www.wikidata.org/wiki/Wikidata:Main_Page">Wikidata</a> items by applying <a href="https://en.wikipedia.org/wiki/Word2vec">Word2vec</a> models to a corpus of reading sessions.</p>
<p>Although Word2vec models were developed to learn word embeddings from a corpus of sentences, they can be applied to any kind of sequential data. The learned embeddings have the property that items with similar neighbors in the training corpus have similar representations (as measured by <a href="https://en.wikipedia.org/wiki/Cosine_similarity">cosine similarity</a>, for example). Consequently, applying Word2vec to reading sessions yields article embeddings in which articles that tend to be read in close succession have similar representations. Since people usually read sequences of semantically related articles, these embeddings also capture semantic similarity between articles.</p>
<p>There have been several approaches to learning vector representations of Wikipedia articles that capture semantic similarity by using the article text or the links between articles. An advantage of training Word2vec models on reading sessions is that they learn from the actions of millions of humans who use a diverse array of signals, including the article text, links, third-party search engines, and their existing domain knowledge, to determine what to read next in order to learn about a topic.</p>
<p>A further benefit of not relying on text or links is that we can learn representations for <a href="https://www.wikidata.org/wiki/Help:Items">Wikidata items</a> by simply mapping the article titles within each session to Wikidata items using <a href="https://www.wikidata.org/wiki/Help:Sitelinks">Wikidata sitelinks</a>. As a result, these Wikidata vectors are jointly trained over reading sessions from all Wikipedia language editions, allowing the model to learn from people across the globe. 
This approach also overcomes data sparsity issues for smaller Wikipedias, since the representation of an article in a small Wikipedia is shared with its counterparts in other, potentially much larger, editions. Finally, instead of needing to generate a separate embedding for each Wikipedia in each language, we have a single model that gives a vector representation for any article in any language, provided the article has been mapped to a Wikidata item.</p><p>For detailed documentation, see the <a href="https://meta.wikimedia.org/wiki/Research:Wikipedia_Vectors" rel="mw:ExtLink">wiki page</a>.</p>