Embeddings

Words with Consistent Diachronic Usage Patterns are Learned Earlier: A Computational Analysis Using Temporally Aligned Word Embeddings.

In this study, we use temporally aligned word embeddings and a large diachronic corpus of English to quantify language change in a data-driven, scalable way that is grounded in language use.
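Once embeddings from different time slices live in a shared space, semantic change for a word is commonly quantified as the cosine distance between its vectors in two periods. A minimal sketch of that measurement, with purely illustrative toy vectors (not values from the paper's corpus):

```python
import numpy as np

def cosine_distance(u, v):
    """Cosine distance between two vectors: 0 = identical direction, 2 = opposite."""
    return 1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Hypothetical temporally aligned embeddings of one word in two decades.
vec_1900 = np.array([0.9, 0.1, 0.0])
vec_1990 = np.array([0.2, 0.8, 0.1])

drift = cosine_distance(vec_1900, vec_1990)
print(round(drift, 3))  # a larger value indicates more semantic change
```

The comparison is only meaningful because the embeddings are aligned; vectors from independently trained models are not directly comparable.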

Language in a (Search) Box: Grounding Language Learning in Real-World Human-Machine Interaction

We investigate grounded language learning through real-world data, modelling teacher-learner dynamics via the natural interactions that occur between users and search engines.

FEEL-IT: Emotion and Sentiment Classification for the Italian Language

Sentiment analysis is a common task to understand people's reactions online. Still, we often need more nuanced information: is the post negative because the user is angry or because they are sad? An abundance of approaches has been introduced for …

Fantastic Embeddings and How to Align Them: Zero-Shot Inference in a Multi-Shop Scenario

In this paper, we align product embeddings that come from different shops, using techniques from machine translation to provide an effective alignment method.
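A standard machine-translation-style alignment technique is orthogonal Procrustes: given embeddings of shared anchor items in both spaces, find the rotation mapping one space onto the other. The sketch below is a generic illustration of that technique, not the paper's exact method; the anchor-product setup and shop names are assumptions.

```python
import numpy as np

def procrustes_align(source, target):
    """Orthogonal matrix W minimising ||source @ W - target||_F.

    `source` and `target` are (n, d) matrices of embeddings for the same
    n anchor items (e.g. products sold in both shops), row-aligned.
    """
    # SVD of the cross-covariance gives the closed-form solution.
    u, _, vt = np.linalg.svd(source.T @ target)
    return u @ vt

# Toy demo: shop B's space is shop A's space under a random rotation,
# so alignment should recover the mapping almost exactly.
rng = np.random.default_rng(0)
shop_a = rng.normal(size=(50, 8))
rotation, _ = np.linalg.qr(rng.normal(size=(8, 8)))
shop_b = shop_a @ rotation

w = procrustes_align(shop_a, shop_b)
print(np.allclose(shop_a @ w, shop_b))  # True
```

With the mapping learned on anchors, embeddings of items seen only in the source shop can be projected into the target space, enabling zero-shot inference.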

Training Temporal Word Embeddings with a Compass

We introduce a novel model for word embedding alignment and test it on temporal word embeddings, obtaining state-of-the-art results.

Towards Encoding Time in Text-Based Entity Embeddings

Knowledge Graphs (KG) are widely used abstractions to represent entity-centric knowledge. Approaches to embed entities, entity types and relations represented in the graph into vector spaces - often referred to as KG embeddings - have become …