Model published 2025 via Språkbanken Text
Embeddings (mappings of linguistic units, such as words, sentences, or characters, to vectors of real numbers) play a central role in modern language technology. Training embedding models is often costly, which is why pretrained embeddings are widely used. On this page we list pretrained embeddings for Swedish, as well as studies that evaluate Swedish embeddings. If you have suggestions or comments, please contact us.
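To make the notion of a word-to-vector mapping concrete, here is a minimal sketch of querying pretrained Swedish word vectors with the gensim library. It assumes you have downloaded Facebook's fastText Common Crawl + Wiki vectors (listed below); the file name cc.sv.300.bin is an assumption, so adjust it to the file you actually downloaded.

```python
# Minimal sketch: query pretrained Swedish fastText vectors with gensim.
# The file name "cc.sv.300.bin" is an assumption; use your downloaded file.
from gensim.models.fasttext import load_facebook_vectors

vectors = load_facebook_vectors("cc.sv.300.bin")

vec = vectors["katt"]                # one 300-dimensional vector of real numbers
print(vec.shape)                     # (300,)

# Nearest neighbours in the embedding space, ranked by cosine similarity.
print(vectors.most_similar("katt", topn=5))
```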
Embeddings
Facebook FastText models: Common Crawl + Wiki, Wiki, Wiki with cross-lingual alignment
NLPL repository: Word2Vec continuous skip-gram (CoNLL17 corpus); ELMo (CoNLL17 corpus); ELMo (Wiki)
NLPLAB at Linköping University: a pretrained Word2Vec model (trained on a Göteborgs-Posten corpus); a script for training Word2Vec with both CBOW and skip-gram (SGNS); a paper comparing Word2Vec and GloVe with Saldo
The National Library's (Kungliga bibliotekets) models: BERT, BERT fine-tuned for NER, and ALBERT (see the usage sketch after this list)
The Public Employment Service's (Arbetsförmedlingens) models: BERT
Polyglot: multilingual word embeddings that cover Swedish
Kyubyong Park's models: Word2Vec and FastText models trained on Wiki
Flair models. See also our
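As a hedged usage sketch for one of the models listed above, the following loads the National Library's Swedish BERT through the Hugging Face transformers library and mean-pools the token vectors into one sentence embedding per input. The model identifier KB/bert-base-swedish-cased is an assumption; check the National Library's own documentation for the exact name and download instructions.

```python
# Minimal sketch: sentence embeddings from a pretrained Swedish BERT.
# The model id "KB/bert-base-swedish-cased" is an assumption.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("KB/bert-base-swedish-cased")
model = AutoModel.from_pretrained("KB/bert-base-swedish-cased")

sentences = ["Hunden jagar katten.", "Katten jagar hunden."]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    output = model(**batch)

# Mean-pool the token vectors, ignoring padding positions.
mask = batch["attention_mask"].unsqueeze(-1)
embeddings = (output.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # e.g. (2, 768) for a base-sized model
```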