Context-based Entity Linking

The goal of this post is to make the research I’m doing understandable to the general public. You know, to explain what I’m doing in a way that not my peers, but my parents would understand. In part because my parents make up the majority of this blog’s returning visitors, in part because lots of people think it’s a good idea for scientists to blog about their work, and in part because I like blogging. And finally, I suppose, because this research is made possible by people who pay their taxes ;-).

In this post I’ll try to explain the paper ‘Context-Based Entity Linking – University Of Amsterdam at TAC 2012’ that I wrote with Edgar Meij, Tom Kenter, Marc Bron and Maarten de Rijke. Hopefully it will also provide some basic understanding of machine learning along the way.

Paper: ‘Context-Based Entity Linking – University Of Amsterdam at TAC 2012’ (131.24 KB)
Poster: Here

Entity Linking

Entity linking is the task of linking a word in a piece of text to an ‘entity’ or ‘concept’ from a knowledge base (think: Wikipedia). Why would we want to? Because it allows us to automatically detect what is being talked about in a document, as opposed to merely seeing which words it is composed of. It allows us to generate extra context and metadata that can improve searching and archiving. It moves one step beyond simple word-based analysis. We want that.

[Figure: entity linking example]

The Text Analysis Conference is a yearly ‘benchmark event’ where a dataset is provided: lots of documents, a knowledge base, and a list of queries (words or ‘entity mentions’ that occur in the documents). I describe the task in more detail here. We participated in the entity linking track by building on and modifying a system that was originally created for entity linking in tweets.

We start by taking our query and searching the knowledge base for entities that match it. Let’s take an example:

Query: Tank
Reference document:

Chicago Bears defensive tackle Tank Johnson was sentenced to jail on Thursday for violating probation on a 2005 weapons conviction. A cook county Judge gave Johnson a prison sentence that the Chicago Tribune reported on its website to be 120 days. According to the report, Johnson also was given 84 days of home confinement and fined 2,500 dollars. Johnson, who has a history of gun violations, faced up to one year in prison. “We continue our support of Tank and he will remain a member of our football team,” the Bears said in a statement. “Tank has made many positive changes to better his life. We believe he will continue on this path at the conclusion of his sentence.” A 2004 second-round pick, Johnson pleaded guilty to a misdemeanor gun possession charge in November 2005 and was placed on 18 months probation. Johnson was arrested December 14 when police raided his home and found three handguns, three rifles and more than 500 rounds of ammunition. He pleaded guilty on January 10 to 10 misdemeanor charges stemming from the raid. Two days after his arrest, Johnson was at a Chicago nightclub when his bodyguard and housemate, Willie B. Posey, was killed by gunfire. Johnson required special permission from the court to travel out of state to play in the Super Bowl in Miami on February 4.

For the sake of simplicity, let’s assume we find two candidate entities in our knowledge base: Tank (military vehicle) and Tank Johnson (football player). For each query-candidate pair, we calculate different statistics, or features. For example, some features of our initial ‘microblog post entity linker’ could be (a small code sketch follows the list):

  1. How similar is the query to the title of the candidate?
  2. Does the query occur in the candidate title?
  3. Does the candidate title occur in the query?
  4. How frequently does the query occur in the text of the candidate?
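
To make these features concrete, here’s a minimal Python sketch of how they could be computed. The exact definitions in our system differ, so treat the edit distance and the counting below as illustrative assumptions rather than our actual implementation:

    def edit_distance(a: str, b: str) -> int:
        """Classic Levenshtein distance, via dynamic programming."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                # deletion
                                curr[j - 1] + 1,            # insertion
                                prev[j - 1] + (ca != cb)))  # substitution
            prev = curr
        return prev[-1]

    def extract_features(query: str, title: str, text: str) -> list:
        """The four toy features from the list above."""
        q, t = query.lower(), title.lower()
        return [
            edit_distance(q, t),    # 1. how similar are query and title?
            int(q in t),            # 2. does the query occur in the title?
            int(t in q),            # 3. does the title occur in the query?
            text.lower().count(q),  # 4. query frequency in the candidate text
        ]

For the example below, extract_features(‘Tank’, ‘Tank Johnson’, …) produces a vector close to the one in the table further down; the exact numbers depend on the definitions chosen (this sketch’s edit distance counts the space in ‘Tank Johnson’, for instance, where the example below counts letters only).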

For our example, this would be:

Tank Johnson

  1. Tank and Tank Johnson share 4 letters and differ in 7 (7)
  2. Yes (1)
  3. No (0)
  4. 6 times (6)

Tank

  1. Tank and Tank share 4 letters and differ in none (0)
  2. Yes (1)
  3. Yes (1)
  4. 20 times (20)

Given these features, we generate vectors of their values like so:

Query   Candidate      feature 1   feature 2   feature 3   feature 4
Tank    Tank Johnson   7           1           0           6
Tank    Tank           0           1           1           20

Given these features (of a multitude of examples, as opposed to just two), we ‘train’ a machine learning algorithm. Such an algorithm aims to learn patterns from the data it receives, allowing it to make predictions about new, unseen data. To train this algorithm, we label our examples by assigning classes to them.

In our case we have two classes: correct examples (class: 1) and incorrect examples (class: 0). This means that for a machine learning approach, we need ground truth to train our algorithm with: examples of query-candidate pairs for which we know the correct answer. Typically, we use data from previous years of the same task to train our system on. In our example case, we know the correct entity is the first (Tank Johnson), so we label it ‘correct’. The other entities are labelled ‘incorrect’.
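
As a minimal sketch of this training step, here’s what it could look like with scikit-learn. The choice of classifier and the two extra training rows are illustrative assumptions, not the actual setup we used:

    from sklearn.ensemble import RandomForestClassifier

    # Feature vectors of query-candidate pairs: the two rows from the
    # table above, plus two made-up rows. A real training set holds
    # thousands of labelled pairs from earlier editions of the task.
    X = [
        [7, 1, 0, 6],   # Tank -> Tank Johnson
        [0, 1, 1, 20],  # Tank -> Tank (military vehicle)
        [3, 1, 1, 12],  # made-up example
        [9, 0, 0, 1],   # made-up example
    ]
    y = [1, 0, 1, 0]    # class labels: 1 = correct link, 0 = incorrect

    clf = RandomForestClassifier(n_estimators=100, random_state=42)
    clf.fit(X, y)

Once trained, the classifier can assign a probability of being ‘correct’ to the feature vector of any new query-candidate pair.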

So that’s the general approach, but what’s new in our approach?

Add Context

We extended our initial approach with two methods that use information from the reference document (as you might have noticed, the previous features were mostly about the query and the candidate, ignoring the reference document). In this post, I’ll talk about one of those two.

Hyperlinks = related entities

This approach takes advantage of the ‘structure’ of our knowledge base, in our case the hyperlinks between entities. For example, the text of Tank Johnson contains links to other entities like Chicago Bears (the football team Tank played for), Gary, Indiana (Tank’s place of birth), Excalibur nightclub (the place where Tank was arrested), etc. The page for the military vehicle Tank contains links to main gun, battlefield, Tanks in World War I, etc.
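
In code, collecting these related entities could look like the sketch below, assuming each knowledge base entry stores its outgoing hyperlinks (the toy KB and the ‘outlinks’ field are illustrative):

    # Toy knowledge base: each entry stores the hyperlinks ("outlinks")
    # found in its article text, as in the examples above.
    KB = {
        "Tank Johnson": {
            "outlinks": ["Chicago Bears", "Gary, Indiana",
                         "Excalibur nightclub"],
        },
        "Tank": {
            "outlinks": ["Main gun", "Battlefield",
                         "Tanks in World War I"],
        },
    }

    def related_entities(candidate: str) -> set:
        """All entities hyperlinked from the candidate's article."""
        return set(KB[candidate]["outlinks"])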

We assume there exists some semantic relationship between these entities and the candidate they link from (we don’t care about how exactly they are related), and we try to take advantage of this by seeing whether these ‘related entities’ occur in the reference document. The assumption is that if we find many related entities of a candidate entity in the document, that candidate is likely to be the correct one.

We generate features that describe these related entities in the reference document. For example: how many related entities do we find? What proportion of the candidate’s total number of related entities does that amount to? We do so by searching the reference document for surface forms of the related entities: their titles, but also anchor texts (the text in blue, above), which allow us to calculate statistics approximating how likely a surface form is to actually link to the entity we assume it links to. For our example, this results in the following discovered related entities:
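
In terms of the earlier sketches, these context features could be computed as follows, reusing related_entities from above; the surface_forms mapping (entity to its known titles and anchor texts) is again an illustrative assumption:

    def context_features(candidate: str, document: str,
                         surface_forms: dict) -> list:
        """Features describing a candidate's related entities as
        found in the reference document."""
        related = related_entities(candidate)
        doc = document.lower()
        # A related entity counts as 'found' if any of its known
        # surface forms occurs in the reference document.
        found = {
            entity for entity in related
            if any(form.lower() in doc
                   for form in surface_forms.get(entity, [entity]))
        }
        return [
            len(found),                                     # related entities found
            len(found) / len(related) if related else 0.0,  # proportion found
        ]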

[Figure: related entities discovered in the reference document]

The document is clearly about Tank Johnson. However, in this example we see plenty of surface forms that support Tank (in green). Since Tank Johnson was convicted of gun possession, we find lots of references to weapons and arms, so the military vehicle Tank might look like a correct link too.

However, this is where the machine learning comes in. It’s not about which entity has the most related entities (even though this is an intuition behind the approach), but about patterns that emerge after having seen enough examples. Patterns that might not be directly obvious to us, mere humans. Remember that in a typical approach there are lots of examples (for our submission we had on average around 300 entity candidates per query), and similarly lots of features (again, for our submission we calculated around 75 features per query-entity pair). Either way, to cut a long story short, our context-based approach managed to correctly decide that Tank Johnson is the correct entity in this example case!
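
To close the loop on the earlier sketches: at linking time, the trained classifier scores every candidate for a query, and we keep the most confident one. A hypothetical linking function, building on extract_features and clf from above (in the full system, the context features would be appended to each vector as well):

    def link(query: str, candidates: list, clf) -> str:
        """Return the title of the candidate the classifier considers
        most likely to be the correct entity for the query."""
        vectors = [extract_features(query, c["title"], c["text"])
                   for c in candidates]
        scores = clf.predict_proba(vectors)[:, 1]  # P(class = correct)
        best_score, best = max(zip(scores, candidates),
                               key=lambda pair: pair[0])
        return best["title"]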

University of Amsterdam at TAC 2012

Title Context-Based Entity Linking – University Of Amsterdam At TAC 2012 [link]
Author Graus, D.P., Kenter, T.M., Bron, M.M., Meij, E.J., de Rijke, M.
Publication type Conference Proceedings
Conference name Text Analysis Conference 2012
Conference location Gaithersburg, MD
Abstract This paper describes our approach to the 2012 Text Analysis Conference (TAC) Knowledge Base Population (KBP) entity linking track. For this task, we turn to a state-of-the-art system for entity linking in microblog posts. Compared to the little context microblog posts provide, the documents in the TAC KBP track provide context of greater length and of a less noisy nature. In this paper, we adapt the entity linking system for microblog posts to the KBP task by extending it with approaches that explicitly rely on the query’s context. We show that incorporating novel features that leverage the context on the entity-level can lead to improved performance in the TAC KBP task.
Full paper PDF (131.42 KB)

Entity Linking – From Words to Concepts

The last couple of weeks I’ve been diving into the task of entity linking, in the context of a submission to the Text Analysis Conference Knowledge Base Population track (that’s quite a mouthful – TAC KBP from now on): a ‘contest’ in knowledge base population with a standardized task, dataset and evaluation. Before devoting my next post to our submission, let me first explain the task of entity linking in this post :-).

Knowledge Base Population

Knowledge Base Population is the information extraction task of generating knowledge bases from raw, unstructured text. A Knowledge Base is essentially a database that describes unique concepts (things) and contains information about these concepts. Wikipedia is a Knowledge Base: each article represents a unique concept, the article’s body contains information about the concept, the infobox provides ‘structured’ information, and links to other pages provide ‘semantic’ (or rather: relational) information.
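
To make that structure concrete, a single Knowledge Base entry could be represented along these lines (a toy sketch; the field names are illustrative, not the actual TAC KBP schema):

    # One knowledge base concept, mirroring what a Wikipedia article
    # provides: title, body text, infobox fields and outgoing links.
    tank_johnson = {
        "id": "E0000001",             # unique concept identifier
        "title": "Tank Johnson",
        "text": "Tank Johnson is an American football defensive tackle ...",
        "infobox": {                  # 'structured' information
            "position": "Defensive tackle",
            "drafted": "2004, round 2",
        },
        "outlinks": [                 # 'relational' information
            "Chicago Bears",
            "National Football League",
        ],
    }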

Filling a Knowledge Base from raw textual data can be broadly split up into two subtasks: entity linking (identifying unique entities in a corpus of documents to add to the Knowledge Base) and slot filling (finding relations of and between these unique entities, and adding these to the concepts in the KB).

Entity Linking

Entity linking is the task of linking a concept (thing) to a mention (a word or words) in a document. Not unlike semantic annotation, this task is essentially about defining the meaning of a word by assigning the correct concept to it. Consider these examples:

Bush went to war in Iraq
Bush won over Dukakis
A bush under a tree in the forest
A tree in the forest
A hierarchical tree

It is clear that the bushes and trees in these sentences refer to different concepts. The idea is to link each of these words to the person or thing it refers to. To complete this task, we are given:

  • KB: a Wikipedia-derived knowledge base containing concepts, people, places, etc. Each concept has a title, text (the Wikipedia page’s content), some metadata (infobox properties), etc.
  • DOC: the (context) document in which the entity we are trying to link occurs (in the case of the examples: the sentence)
  • Q: The entity-mention query in the document (the word)

The goal is to identify whether the entity referred to by Q is in the KB; if it isn’t, it should be considered a ‘new’ entity. All the new entities should be clustered: when two documents refer to the same ‘new’ entity, this must be reflected by assigning the same new ID to both mentions.

A common approach

1. Query Expansion: This means finding more surface forms that refer to entity Q. Two approaches:
I: This can be derived from the document itself, using for example ‘coreference resolution’. In this case you try to identify all strings referring to the same entity. In the Bush example, you might find “President Bush” or “George W. Bush” somewhere in the document.
II: This can also be done by using external information sources, such as looking up the word in a database of Wikipedia anchor texts, page titles, redirect strings or disambiguation pages. Using Wikipedia to expand ‘Q=cheese’ could lead to:
Title: Cheese. Redirects: Home cheesemaking, Cheeses, CHEESE, Cheeze, Chees, Chese, Coagulated milk curd. Anchors: cheese, Cheese, cheeses, Cheeses, maturation, CHEESE, 450 varieties of cheese, Queso, Soft cheese, aging process of cheese, chees, cheese factor, cheese wheel, cheis, double and triple cream cheese, formaggi, fromage, hard cheeses, kebbuck, masification, semi-hard to hard, soft cheese, soft-ripened, wheel of cheese, Fromage, cheese making, cheese-making, cheesy, coagulated, curds, grated cheese, hard, lyres, washed rind, wheels.

2. Candidate Generation: For each surface form of Q, try to find KB entries that could be referred to. Simple approaches are searching for Wikipedia titles that contain the surface form, looking through anchor texts (titles used to refer to a specific Wikipedia page from another Wikipedia page), and expanding acronyms (if you find a string containing only uppercase letters, try to find a matching word sequence).

3. Candidate Ranking: The final step is selecting the most probable candidate from the previous step. Simple approaches compare the similarity of the context document to each candidate document (Wikipedia page); more advanced approaches involve measuring semantic similarity on higher levels, e.g. by finding ‘related’ entities in the context document.

4. NIL Clustering: Whenever no candidate can be found (or only candidates with a low probability of being the right one, however measured), it can be decided that the entity referred to is not in the KB. In this case, the job is to assign a new ID to the entity, and whenever it is referred to in a later document, attach this same ID. This is a matter of (unsupervised) clustering; a toy sketch follows below. Successful approaches include simple string similarity (the same ‘new’ entities being referred to by the same word), document similarity (using simple comparisons) or more advanced clustering approaches such as LDA/HDP.
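
Here’s that toy sketch of the simplest variant: NIL clustering by exact (normalized) string match, so that identical mentions share one ‘new entity’ ID. Real systems use the document similarity or LDA/HDP-style approaches mentioned above; the class below is purely illustrative:

    import itertools

    class NilClusterer:
        """Assign stable NIL IDs: mentions with the same normalized
        surface form are clustered under one 'new entity' ID."""

        def __init__(self):
            self._ids = {}                  # normalized mention -> NIL ID
            self._counter = itertools.count(1)

        def nil_id(self, mention: str) -> str:
            key = mention.lower().strip()
            if key not in self._ids:
                self._ids[key] = "NIL{:04d}".format(next(self._counter))
            return self._ids[key]

    clusterer = NilClusterer()
    assert clusterer.nil_id("John Doe") == clusterer.nil_id("john doe")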

Read More

Now this is just a general introduction. If you are interested in the technical side of some common approaches, take a look at the TAC Proceedings, particularly the Overview of the TAC2011 Knowledge Base Population Track [PDF], and the Proceedings Papers. If you are unsure why Wikipedia is an awesome resource for entity linking (or any form of extracting structured information from unstructured text), I’d recommend reading ‘Mining Meaning from Wikipedia’ (Medelyan et al., 2009).

Next post will hopefully be about ILPS’ award-winning entity linking system, so stay tuned ;-).