figshare

UK Twitter word embeddings

dataset posted on 2016-10-22, 16:36, authored by Vasileios Lampos
Word embeddings trained on Twitter content geo-located in the United Kingdom

The total number of tweets used was approximately 215 million, dated from February 1, 2014 to March 31, 2016. Word2vec was applied as implemented in the gensim library (https://radimrehurek.com/gensim/).

Settings: continuous bag-of-words representation (CBOW), the entire tweet as the context window, negative sampling (5 noise words), and a dimensionality of 512.

After filtering out words with fewer than 500 occurrences, a vocabulary of 137,421 unigrams was obtained (see vocabulary.txt). The corresponding 512-dimensional embeddings are held in vectors.zip.
