Generating Pseudo-ground Truth for Predicting New Concepts in Social Streams

📅 December 3, 2013 🕐 13:57 🏷 Blog and Research

*Update*

Several Dutch media have picked up our work:

Original press release:

Press coverage:

Slides of my talk at #ECIR2014 are now up on SlideShare.

*Original post*

Our paper “Generating Pseudo-ground Truth for Predicting New Concepts in Social Streams” with Manos Tsagkias, Lars Buitinck, and Maarten de Rijke got accepted as a full paper to ECIR 2014!

Download a pre-print: Graus, D., Tsagkias, M., Buitinck, L., & de Rijke, M., “Generating pseudo-ground truth for predicting new concepts in social streams,” in 36th European Conference on Information Retrieval (ECIR’14), 2014. [PDF, 258KB]

Abstract

The manual curation of knowledge bases is a bottleneck in fast paced domains where new concepts constantly emerge. Identification of nascent concepts is important for improving early entity linking, content interpretation, and recommendation of new content in real-time applications. We present an unsupervised method for generating pseudo-ground truth for training a named entity recognizer to specifically identify entities that will become concepts in a knowledge base in the setting of social streams. We show that our method is able to deal with missing labels, justifying the use of pseudo-ground truth generation in this task. Finally, we show how our method significantly outperforms a lexical-matching baseline, by leveraging strategies for sampling pseudo-ground truth based on entity confidence scores and textual quality of input documents.

Layman’s explanation

This blog post is intended as a high level overview of what we did. Remember my last post on entity linking? In this paper we want to do entity linking on entities that are not (yet) on Wikipedia, or:

Recognizing (finding) and classifying (determining their type: persons, locations or organizations) unknown (not in the knowledge base) entities on Twitter (this is where we want to find them)

These entities might be unknown because they are newly surfacing (e.g. a new popstar that breaks through), or because they are so-called ‘long tail’ entities (i.e. very infrequently occurring entities).

Method

To detect these entities, we generate training data to train a supervised named-entity recognizer and classifier (NERC). Training data is hard to come by: it is expensive to have people manually label Tweets, and you need plenty of labels to make it work. We automate this process by using the output of an entity linker to label Tweets. The advantage is that this is a very cheap and easy way to create a large set of training data. The disadvantage is that there may be more noise: wrong labels, or bad Tweets that do not contain enough information to learn patterns for recognizing the types of entities we are looking for.
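To make the labeling step concrete, here is a minimal sketch. It assumes a hypothetical `link_entities` function that stands in for the entity linker; the real pipeline is described in the paper.

```python
# Minimal sketch, not our actual pipeline: it assumes a hypothetical
# entity linker `link_entities(tokens)` that returns (start, end, type)
# token spans grounded in the knowledge base.

def tweet_to_training_example(tweet, link_entities):
    """Turn one automatically linked Tweet into IOB-tagged NERC data."""
    tokens = tweet.split()
    tags = ["O"] * len(tokens)
    for start, end, entity_type in link_entities(tokens):
        tags[start] = "B-" + entity_type        # e.g. B-PER, B-LOC, B-ORG
        for i in range(start + 1, end):
            tags[i] = "I-" + entity_type
    return list(zip(tokens, tags))
```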

To address this latter obstacle, we apply several methods to filter out Tweets we deem too noisy. One of these methods scores Tweets based on their noise level, using very simple features: among others, how many mentions (@’s), hashtags (#’s) and URLs a Tweet contains, but also the ratio of upper-case to lower-case letters, the average word length, the Tweet’s length, etc. Examples of this Twitter noise score are below (these are Tweets from the TREC 2011 Microblog corpus we used); a toy sketch of such a scorer follows the examples.

Top 5 quality Tweets

  1. Watching the History channel, Hitler’s Family. Hitler hid his true family heritage, while others had to measure up to Aryan purity.
  2. When you sense yourself becoming negative, stop and consider what it would mean to apply that negative energy in the opposite direction.
  3. So. After school tomorrow, french revision class. Tuesday, Drama rehearsal and then at 8, cricket training. Wednesday, Drama. Thursday … (c)
  4. These late spectacles were about as representative of the real West as porn movies are of the pizza delivery business Que LOL
  5. Sudan’s split and emergence of an independent nation has politico-strategic significance. No African watcher should ignore this.

Top 5 noisy Tweets

  1. Toni Braxton ~ He Wasnt Man Enough for Me _HASHTAG_ _HASHTAG_? _URL_ RT _Mention_
  2. tell me what u think The GetMore Girls, Part One _URL_
  3. this girl better not go off on me rt
  4. you done know its funky! — Bill Withers “Kissing My Love” _URL_ via _Mention_
  5. This is great: _URL_ via _URL_
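As promised, a toy sketch of such a noise scorer. The features mirror the ones listed above, but the weights (and the “ideal” upper-case ratio) are invented for illustration; they are not the ones we used.

```python
import re

def quality_score(tweet):
    """Toy quality score for a Tweet; higher means cleaner. The features
    mirror the blog post; the weights are made up for illustration."""
    tokens = tweet.split()
    if not tokens:
        return float("-inf")
    n_mentions = sum(t.startswith("@") for t in tokens)
    n_hashtags = sum(t.startswith("#") for t in tokens)
    n_urls = len(re.findall(r"https?://\S+", tweet))
    letters = [c for c in tweet if c.isalpha()]
    upper_ratio = sum(c.isupper() for c in letters) / max(len(letters), 1)
    avg_word_len = sum(len(t) for t in tokens) / len(tokens)
    # Reward prose-like statistics, penalise Twitter-specific noise.
    return (avg_word_len + len(tokens) / 10.0
            - 2.0 * (n_mentions + n_hashtags + n_urls)
            - 3.0 * abs(upper_ratio - 0.05))
```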

In addition, we filter Tweets based on the confidence score of the entity linker, so as not to include Tweets that contain unlikely labels.

Experimental Setup

It is difficult to measure how well we do in finding entities that do not exist on Wikipedia, since we need some sort of ground truth to determine whether we did well. As we cannot manually check for 80,000 Tweets whether the identified entities are in or out of Wikipedia, we take a slightly theoretical approach.

If I were to put it in a picture (and I did, conveniently), it’d look like this:

[Figure: overview of the experimental setup]

In brief, we take small ‘samples’ of Wikipedia: one such sample represents the “present KB”, the initial state of the KB. The samples are created by removing X% of the Wikipedia pages (from 10% to 90%, in steps of 10). We then label Tweets using the full KB (100%) to create the ground truth: this full KB represents the “future KB”. Our “present KB” then labels the Tweets it knows, and uses the Tweets it cannot link as sources for new entities. If the NERC (trained on the Tweets labeled by the present KB) then manages to identify entities in the set of “unlinkable” Tweets, we can compare its predictions to the ground truth and measure performance.
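In code, the sampling step of this setup might look roughly like this; it is a sketch, and the paper describes the full labeling and evaluation procedure.

```python
import random

def make_present_kb(full_kb, remove_fraction, seed=42):
    """Simulate the 'present KB' by removing a fraction of the pages of
    the full ('future') KB. The removed pages are the entities our NERC
    should later rediscover in the unlinkable Tweets."""
    rng = random.Random(seed)
    pages = sorted(full_kb)
    rng.shuffle(pages)
    cutoff = int(len(pages) * (1.0 - remove_fraction))
    return set(pages[:cutoff]), set(pages[cutoff:])  # (present, removed)

# One run per sample size, mirroring the 10%-90% setup:
# for frac in (i / 10 for i in range(1, 10)):
#     present_kb, removed = make_present_kb(all_pages, frac)
#     ... label Tweets with present_kb, evaluate against full-KB labels
```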

Results & Findings

We report on standard metrics, precision and recall, on two levels: entity and mention. I won’t go into detail here; I encourage you to read the results and findings in the paper.

yourHistory – Entity linking for a personalized timeline of historic events

📅 September 14, 2013 🕐 13:31 🏷 Blog and Research

Download a pre-print of Graus, D., Peetz, M-H., Odijk, D., de Rooij, O., de Rijke, M. “yourHistory — Semantic linking for a personalized timeline of historic events,” in CEUR Workshop Proceedings, 2014.

Update #1

I presented yourHistory at ICT.OPEN 2013:

The slides of my talk are up on SlideShare:


And we got nominated for the “Innovation & Entrepreneurship Award” there! (sadly, didn’t win though ;) ).


Original Post

[Image: yourHistory poster at OKCon]

For the LinkedUp Challenge Veni competition at the Open Knowledge Conference (OKCon), we (Maria-Hendrike Peetz, me, Daan Odijk, Ork de Rooij and Maarten de Rijke) created yourHistory; a Facebook app that uses entity linking for personalized historic timeline generation (using d3.js). Our app got shortlisted (top 8 out of 22 submissions) and is in the running for the first prize of 2000 euro!

Read a small abstract here:

In history we often study dates and events that have little to do with our own lives. We make history tangible by showing historic events that are personal, based on your own interests (your Facebook profile). Often, those events are small-scale and escape the history books. By linking personal historic events with global events, we aim to link your life with global history: writing your own personal history book.

Read the full story here.

And try out the app here!

It’s currently still a little rough around the edges. There’s an extensive to-do list, but if you have any feedback or remarks, don’t hesitate to leave me a message below!

How many things took place between 1900 and today? DBpedia knows

📅 June 7, 2013 🕐 16:42 🏷 Blog and Research

For a top-secret project, I am looking at retrieving all entities that represent a ‘(historic) event’ from DBpedia.

Now I could rant about how horrible it is to formulate a ‘simple’ query like this using the structured-yet-anarchistic Linked Data format, so I will: the request “give me all entities that represent ‘events’ from DBpedia” takes me 3 SPARQL queries, since different predicates represent the same thing, and I probably need a lot more to get a proper subset of the entities I’m looking for. Currently, I filter for entities that have a dbpedia-owl:date property, a dbprop:date property (yes, these predicates express the exact same property), and entities that belong to the Event class.
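For the curious, here is roughly what this looks like from Python with the SPARQLWrapper library, writing dbo:/dbp: for the dbpedia-owl/dbprop prefixes. This is a sketch rather than my actual queries; the endpoint details, the exact UNION patterns, and the date parsing are assumptions.

```python
from collections import Counter
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbp: <http://dbpedia.org/property/>
    SELECT DISTINCT ?event ?date WHERE {
      { ?event dbo:date ?date }
      UNION { ?event dbp:date ?date }
      UNION { ?event a dbo:Event ; dbo:date ?date }
    }
""")
sparql.setReturnFormat(JSON)
bindings = sparql.query().convert()["results"]["bindings"]

# Count events per year, assuming dates come back as YYYY-MM-DD strings
# (dbp:date values are messier in practice and need more cleaning).
per_year = Counter(row["date"]["value"][:4] for row in bindings)
```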

Anyway, if we count for each year how many event entities there are, we get the following graph:

[Figure: number of DBpedia ‘event’ entities per year, 1900 to today]

Which is interesting, because it shows how there are loads of events in the near past, around WWII, and around WWI. I could now say something about how interesting it is that our collective memory is focused on the near past, but then I looked at the events and saw loads of sports events, so I won’t; rather, I’ll say that back in the day we were terrible at organizing sports events. Still, the knowledge that between 1900 and today a total of 16,589 events happened seems significant to me.

Semantic Search in E-Discovery: An Interdisciplinary Approach

📅 May 23, 2013 🕐 17:18 🏷 Papers and Research
Title Semantic Search in E-Discovery: An Interdisciplinary Approach [link]
Author Graus, D.P., Ren, Z., van Dijk, D., van der Knaap, N., de Rijke, M. & Henseler, H.
Publication type Workshop Proceedings
Workshop name Workshop on Standards for Using Predictive Coding, Machine Learning, and Other Advanced Search and Review Methods in E-Discovery (DESI V Workshop)
Conference name ICAIL 2013
Conference location Rome, Italy
Abstract We propose an interdisciplinary approach to applying and evaluating semantic search in the e-discovery setting. By combining expertise from the fields of law and criminology with that of information retrieval and extraction, we move beyond “algorithm-centric” evaluation, towards evaluating the impact of semantic search in real search settings. We will approach this by collaboration in an interdisciplinary group of four PhD candidates, applying an iterative two-phase work cycle to four subprojects that run in parallel. In the first phase we work individually. We determine the use and needs of search in e-discovery (subproject 1), and simultaneously explore and develop state-of-the-art semantic search approaches (subprojects 2–4). In the second phase we collaborate, designing user experiments to evaluate how and where semantic search can support the analysts’ search process. By repeating this cycle multiple times we gain specific and in-depth knowledge and propose solutions to specific challenges in search in e-discovery.
Full paper PDF (144 KB)

We won the WoLE2013 Challenge

📅 May 14, 2013 🕐 14:07 🏷 Blog and Research

With our SemanticTED demo, we (Daan Odijk, Edgar Meij, Tom Kenter and me) won the Web of Linked Entities 2013 Workshop’s “Doing Good by Linking Entities” Developers Challenge (at WWW2013).

Read the paper of our submission here:

  • [PDF] D. Odijk, E. Meij, D. Graus, and T. Kenter, “Multilingual semantic linking for video streams: making “ideas worth sharing” more accessible,” in Proceedings of the 2nd international workshop on web of linked entities (wole 2013), 2013.
    [Bibtex]
    @inproceedings{odijk2013multilingual,
    title={Multilingual semantic linking for video streams: Making “ideas worth sharing” more accessible},
    author={Odijk, Daan and Meij, Edgar and Graus, David and Kenter, Tom},
    booktitle={Proceedings of the 2nd International Workshop on Web of Linked Entities (WoLE 2013)},
    year={2013}
    }

Now we get to share an iPad.

Hooray!


Multilingual semantic linking for video streams: making “ideas worth sharing” more accessible

📅 🕐 13:58 🏷 Papers and Research
Title Multilingual Semantic Linking for Video Streams: Making “Ideas Worth Sharing” More Accessible
Author D. Odijk, E. Meij, D. Graus, and T. Kenter
Publication type Workshop Proceedings
Workshop name The 2nd International Workshop on Web of Linked Entities (WoLE2013)
Conference name WWW 2013
Conference location Rio de Janeiro, Brazil
Abstract This paper describes our submission to the Developers Challenge at WoLE2013, “Doing Good by Linking Entities.” We present a fully automatic system which provides intelligent suggestions in the form of links to Wikipedia articles for video streams in multiple languages, based on the subtitles that accompany the visual content. The system is applied to online conference talks. In particular, we adapt a recently proposed semantic linking approach for streams of television broadcasts to facilitate generating contextual links while a TED talk is being viewed. TED is a highly popular global conference series covering many research domains; the publicly available talks have accumulated a total view count of over one billion at the time of writing. We exploit the multilinguality of Wikipedia and the TED subtitles to provide contextual suggestions in the language of the user watching a video. In this way, a vast source of educational and intellectual content is disclosed to a broad audience that might otherwise experience difficulties interpreting it.
Full paper PDF

Context-based Entity Linking

📅 February 2, 2013 🕐 15:52 🏷 Blog and Research

The goal of this post is to make the research I’m doing understandable to the general public. You know, to explain what I’m doing in a way not my peers, but my parents would understand. In part because the majority of my blog’s returning visitors are my parents, in part because lots of people think it’s a good idea for scientists to blog about their work, and in part because I like blogging. And finally, I suppose, because this research is made possible by people who pay their taxes ;-).

In this post I’ll try to explain the paper ‘Context-Based Entity Linking – University Of Amsterdam at TAC 2012’ I wrote with Edgar Meij, Tom Kenter, Marc Bron and Maarten de Rijke. It will also hopefully provide some basic understanding of machine learning.

Paper: ‘Context-Based Entity Linking – University Of Amsterdam at TAC 2012’ (131.24 KB)
Poster: Here

Entity Linking

Entity linking is the task of linking a word in a piece of text, to an ‘entity’ or ‘concept’ from a knowledge base (think: Wikipedia). Why would we want to? Because it allows us to automatically detect what is being talked about in a document, as opposed to seeing what words it is composed of. It allows us to generate extra context, it allows us to generate metadata which can improve searching and archiving. It moves one step beyond simple word-based analysis. We want that.

[Figure: entity linking example]

The Text Analysis Conference is a yearly ‘benchmark event’ where a dataset is provided: lots of documents, a knowledge base, and a list of queries (words or ‘entity mentions’ that occur in the documents). I describe the task in more detail here. We participated in this track by building on and modifying a system that was created for entity linking in Tweets.

We start by taking our query and searching the knowledge base for entities that match it. Let’s take an example:

Query: Tank
Reference document:

Chicago Bears defensive tackle Tank Johnson was sentenced to jail on Thursday for violating probation on a 2005 weapons conviction. A cook county Judge gave Johnson a prison sentence that the Chicago Tribune reported on its website to be 120 days. According to the report, Johnson also was given 84 days of home confinement and fined 2,500 dollars. Johnson, who has a history of gun violations, faced up to one year in prison. “We continue our support of Tank and he will remain a member of our football team,” the Bears said in a statement. “Tank has made many positive changes to better his life. We believe he will continue on this path at the conclusion of his sentence.” A 2004 second-round pick, Johnson pleaded guilty to a misdemeanor gun possession charge in November 2005 and was placed on 18 months probation. Johnson was arrested December 14 when police raided his home and found three handguns, three rifles and more than 500 rounds of ammunition. He pleaded guilty on January 10 to 10 misdemeanor charges stemming from the raid. Two days after his arrest, Johnson was at a Chicago nightclub when his bodyguard and housemate, Willie B. Posey, was killed by gunfire. Johnson required special permission from the court to travel out of state to play in the Super Bowl in Miami on February 4.

For the sake of simplicity, let’s assume we find two candidate entities in our knowledge base: Tank (Military vehicle) and Tank Johnson (football player). For each query-candidate pair, we calculate different statistics, or features. For example, some features of our initial ‘microblog post entity linker’ could be:

  1. How similar is the query to the title of the candidate?
  2. Does the query occur in the candidate title?
  3. Does the candidate title occur in the query?
  4. How frequently does the query occur in the text of the candidate?

For our example, this would be (feature values in parentheses):

Tank Johnson

  1. Tank and Tank Johnson share 4 letters and differ in 7 (7)
  2. Yes (1)
  3. No (0)
  4. 3 times (6)

Tank

  1. Tank and Tank share 4 letters and differ in none (0)
  2. Yes (0)
  3. Yes (1)
  4. 20 times (20)

Given these features, we generate vectors of their values like so:

| Query | Candidate    | feature 1 | feature 2 | feature 3 | feature 4 |
|-------|--------------|-----------|-----------|-----------|-----------|
| Tank  | Tank Johnson | 7         | 1         | 0         | 6         |
| Tank  | Tank         | 0         | 0         | 1         | 20        |
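As a sketch, computing these four toy features could look like the function below. Our actual system used around 75 features, and the exact definitions (and hence the example values above) are simplified here.

```python
def extract_features(query, candidate_title, candidate_text):
    """The four toy features from the running example. Illustrative only:
    our actual system computed around 75 features per query-entity pair."""
    q, t = query.lower(), candidate_title.lower()
    q_letters = sum(c.isalpha() for c in q)
    t_letters = sum(c.isalpha() for c in t)
    return [
        abs(t_letters - q_letters),          # 1. letters they differ by
        int(q in t),                         # 2. query in candidate title?
        int(t in q),                         # 3. candidate title in query?
        candidate_text.lower().count(q),     # 4. query frequency in text
    ]

# extract_features("Tank", "Tank Johnson", doc) -> [7, 1, 0, <frequency>]
```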

Given these features (of a multitude of examples, as opposed to just two), we ‘train’ a machine learning algorithm. Such an algorithm aims to learn patterns from the data it receives, allowing it to make predictions for new data. To train this algorithm, we label our examples by assigning classes to them.

In our case we have two classes: correct examples (class 1) and incorrect examples (class 0). This means that for a machine learning approach, we need ground truth to train our algorithm with: examples of query-candidate pairs for which we know which candidate is correct. Typically, we use data from previous years of the same task to train our system on. In our example case, we know the correct entity is the first, so we label it ‘correct’; the other entity is labelled ‘incorrect’.
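A minimal sketch of this training step, using scikit-learn’s logistic regression as a stand-in for whatever learner one prefers (the feature vectors are the two from the table above):

```python
from sklearn.linear_model import LogisticRegression

# Each row of X is a feature vector for one query-candidate pair; y holds
# the ground-truth class: 1 = correct link, 0 = incorrect.
X = [[7, 1, 0, 6],     # ("Tank", "Tank Johnson") -> correct
     [0, 0, 1, 20]]    # ("Tank", "Tank")         -> incorrect
y = [1, 0]

model = LogisticRegression().fit(X, y)
model.predict([[5, 1, 0, 4]])  # classify a new, unseen query-candidate pair
```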

So, that’s the general approach, but what’s new, in our approach?

Add Context

We extended our initial approach with two methods that use information from the reference document (as you might have noticed, the previous features were mostly about the query and the candidate, ignoring the reference document). In this post, I’ll talk about one of those two.

Hyperlinks = related entities

[Figure: hyperlinks in a knowledge base page; the anchor texts (in blue) point to related entities]

This approach takes advantage of the ‘structure’ of our knowledge base, in our case: hyperlinks between entities. For example: the text of Tank Johnson contains links to other entities like Chicago Bears (the football team Tank played for), Gary, Indiana (Tank’s place of birth), Excalibur nightclub (the place where Tank was arrested), etc. The page for the military vehicle Tank contains links to main gun, battlefield, Tanks in World War I, etc.

We assume there exists some semantic relationship between these entities and Tank Johnson (we don’t care about how exactly they are related), and we try to take advantage of this by seeing whether these ‘related entities’ occur in the reference document. The assumption is that if we find a lot of related entities for a candidate entity, it is likely to be the correct entity.

We generate features that describe these related entities in the reference document. For example: how many related entities do we find? What proportion do they form of the candidate’s total number of related entities? We do so by searching the reference document for surface forms of the related entities: titles of related entities, but also anchor texts (the text in blue, above), which allows us to calculate statistics that approximate the likelihood of a surface form actually linking to the entity we assume it links to. For our example, this results in the following discovered related entities:

[Figure: related entities discovered in the reference document, with supporting surface forms highlighted]

The document is clearly about Tank Johnson. However, in this example we see plenty of surface forms that support Tank (in green). Since Tank Johnson was convicted for gun possession, we find lots of references to weapons and arms, so in this case Tank (the vehicle) might look like a correct link too.
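A rough sketch of how the two example features could be computed, assuming we have precomputed each entity’s surface forms (titles and anchor texts) from the knowledge base; the names and inputs here are hypothetical.

```python
def related_entity_features(related_entities, surface_forms, document):
    """Sketch: how many of a candidate's related entities (its outgoing
    KB hyperlinks) occur in the reference document? `surface_forms` maps
    an entity to its known titles and anchor texts."""
    doc = document.lower()
    found = {entity for entity in related_entities
             if any(sf.lower() in doc for sf in surface_forms.get(entity, []))}
    total = len(related_entities)
    return {"related_found": len(found),
            "related_found_ratio": len(found) / total if total else 0.0}
```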

However, this is where the machine learning comes in. It’s not about which entity has the most related entities (even though this is an intuition behind the approach), but about patterns that emerge after having seen enough examples; patterns that might not be directly obvious to us, mere humans. Remember that in a typical approach there are lots of examples (for our submission we had, on average, around 300 entity candidates per query), and similarly lots of features (again, for our submission we calculated around 75 features per query-entity pair). Either way, to cut a long story short: our context-based approach managed to correctly decide that Tank Johnson is the correct entity in this example case!

University of Amsterdam at TAC 2012

📅 January 31, 2013 🕐 14:44 🏷 Papers and Research
Title Context-Based Entity Linking – University Of Amsterdam At TAC 2012 [link]
Author Graus, D.P., Kenter, T.M., Bron, M.M., Meij, E.J., de Rijke, M.
Publication type Conference Proceedings
Conference name Text Analysis Conference 2012
Conference location Gaithersburg, MD
Abstract This paper describes our approach to the 2012 Text Analysis Conference (TAC) Knowledge Base Population (KBP) entity linking track. For this task, we turn to a state-of-the-art system for entity linking in microblog posts. Compared to the little context microblog posts provide, the documents in the TAC KBP track provide context of greater length and of a less noisy nature. In this paper, we adapt the entity linking system for microblog posts to the KBP task by extending it with approaches that explicitly rely on the query’s context. We show that incorporating novel features that leverage the context on the entity-level can lead to improved performance in the TAC KBP task.
Export BibTex
Full paper PDF (131.42 KB)