NieuwsInzicht proposal for innovating the press

📅 April 8, 2015 🕐 11:43 🏷 Blog

Together with 904Labs' Wouter Weerkamp and Manos Tsagkias, I've submitted a project proposal for innovating the press, for funding by the "Stimuleringsfonds voor de Journalistiek" (the Dutch journalism fund). The idea in a nutshell is automated knowledge base construction for news(paper) archives.

For more information, see the abstract below (translated from the original Dutch), our website nieuwsinzicht.nu, and our submitted proposal at Persinnovatie.nl!


NieuwsInzicht is an automatically constructed, structured knowledge base centered on the subjects that appear in the news.

News is inherently about people, places, organizations, and products. NieuwsInzicht is an online knowledge base centered on these subjects as they appear in regional and national news. Unlike Wikipedia, this online knowledge base is populated not by users but by algorithms.

When politicians become embroiled in controversy, journalists need to dig through news and newspaper archives for background information: what is known about these people? Where have they worked? With whom have they worked? Today, journalists still depend on manually searching archives such as LexisNexis, Google News, or self-selected sources.

NieuwsInzicht scrapes content from regional and national newspapers and news sites, and uses automatic text analysis to identify the people, places, products, and organizations mentioned in it. NieuwsInzicht organizes these subjects into individual pages, with links to the sources in which they are mentioned and with analyses of the collected content. NieuwsInzicht thus offers an at-a-glance overview of which subjects have appeared in the media, what has been published about them, from which sources and when, and how different subjects relate to one another.
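The proposal does not name a specific toolkit, but as a rough sketch of the core step (recognizing people, places, and organizations in scraped news text and grouping articles per subject), something like the following would do. spaCy, its Dutch model, and the article format are illustrative assumptions, not the project's actual stack:

```python
# Sketch of NieuwsInzicht's core step: named-entity recognition over scraped
# news articles, grouped into per-entity "pages". spaCy and its Dutch model
# are an illustrative choice, not the project's actual stack.
from collections import defaultdict

import spacy

nlp = spacy.load("nl_core_news_sm")  # pretrained Dutch pipeline with NER

def build_pages(articles):
    """Map each recognized person/place/organization to the articles mentioning it."""
    wanted = {"PER", "PERSON", "LOC", "GPE", "ORG"}  # label names vary per model
    pages = defaultdict(list)
    for article in articles:
        doc = nlp(article["text"])
        for ent in doc.ents:
            if ent.label_ in wanted:
                pages[(ent.text, ent.label_)].append(article["url"])
    return pages

articles = [{"url": "https://example.org/news/1",
             "text": "Burgemeester Jan Jansen opende het nieuwe station in Utrecht."}]
for (name, label), urls in build_pages(articles).items():
    print(label, name, "->", urls)
```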


Generating Pseudo-ground Truth for Predicting New Concepts in Social Streams

📅 January 17, 2014 🕐 11:44 🏷 Papers
Title Generating Pseudo-ground Truth for Predicting New Concepts in Social Streams
Authors David Graus, Manos Tsagkias, Lars Buitinck, Maarten de Rijke
Publication type Full paper
Conference name 36th European Conference on Information Retrieval (ECIR ’14)
Conference location Amsterdam, The Netherlands
Abstract The manual curation of knowledge bases is a bottleneck in fast-paced domains where new concepts constantly emerge. Identification of nascent concepts is important for improving early entity linking, content interpretation, and recommendation of new content in real-time applications. We present an unsupervised method for generating pseudo-ground truth for training a named entity recognizer to specifically identify entities that will become concepts in a knowledge base in the setting of social streams. We show that our method is able to deal with missing labels, justifying the use of pseudo-ground truth generation in this task. Finally, we show how our method significantly outperforms a lexical-matching baseline, by leveraging strategies for sampling pseudo-ground truth based on entity confidence scores and textual quality of input documents.
Full paper PDF [256 KB]

Generating Pseudo-ground Truth for Predicting New Concepts in Social Streams

📅 December 3, 2013 🕐 13:57 🏷 Blog and Research

*Update*

Several Dutch media have picked up our work; see the original press release and the subsequent press coverage.

Slides of my talk at #ECIR2014 are now up on Slideshare.

*Original post*

Our paper “Generating Pseudo-ground Truth for Predicting New Concepts in Social Streams” with Manos Tsagkias, Lars Buitinck, and Maarten de Rijke got accepted as a full paper to ECIR 2014!

Download a pre-print: Graus, D., Tsagkias, M., Buitinck, L., & de Rijke, M., "Generating pseudo-ground truth for predicting new concepts in social streams," in 36th European Conference on Information Retrieval (ECIR '14), 2014. [PDF, 258KB]

Abstract

The manual curation of knowledge bases is a bottleneck in fast-paced domains where new concepts constantly emerge. Identification of nascent concepts is important for improving early entity linking, content interpretation, and recommendation of new content in real-time applications. We present an unsupervised method for generating pseudo-ground truth for training a named entity recognizer to specifically identify entities that will become concepts in a knowledge base in the setting of social streams. We show that our method is able to deal with missing labels, justifying the use of pseudo-ground truth generation in this task. Finally, we show how our method significantly outperforms a lexical-matching baseline, by leveraging strategies for sampling pseudo-ground truth based on entity confidence scores and textual quality of input documents.

Layman explanation

This blog post is intended as a high-level overview of what we did. Remember my last post on entity linking? In this paper we want to do entity linking for entities that are not (yet) on Wikipedia, or:

Recognizing (finding) and classifying (determining their type: person, location, or organization) unknown (not in the knowledge base) entities on Twitter (which is where we want to find them).

These entities might be unknown because they are newly surfacing (e.g., a new pop star who breaks through), or because they are so-called 'long-tail' entities (i.e., entities that occur very infrequently).

Method

To detect these entities, we generate training data for a supervised named-entity recognizer and classifier (NERC). Training data is hard to come by: it is expensive to have people manually label Tweets, and you need plenty of labels to make the approach work. We automate this process by using the output of an entity linker to label Tweets. The advantage is that this is a very cheap and easy way to create a large set of training data. The disadvantage is that it may contain more noise: wrong labels, or bad Tweets that do not contain enough information to learn patterns for recognizing the entity types we are looking for.
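To make this concrete, here is a minimal sketch of how an entity linker's output could be turned into token-level training labels. The `(surface form, type, confidence)` link format and the BIO labeling scheme are illustrative assumptions, not the paper's exact pipeline:

```python
# Sketch: converting entity-linker output into pseudo-ground-truth NER labels.
# `links` is assumed to hold (surface_form, entity_type, confidence) triples
# produced by some entity linker for this Tweet.

def bio_label(tweet_text, links):
    """Turn linked entity mentions into token-level BIO labels."""
    tokens = tweet_text.split()
    labels = ["O"] * len(tokens)
    for surface, etype, _conf in links:
        mention = surface.split()
        for i in range(len(tokens) - len(mention) + 1):
            if tokens[i:i + len(mention)] == mention:
                labels[i] = "B-" + etype
                for j in range(1, len(mention)):
                    labels[i + j] = "I-" + etype
    return list(zip(tokens, labels))

links = [("Lady Gaga", "PER", 0.93)]  # hypothetical linker output
print(bio_label("Lady Gaga announces new tour", links))
# [('Lady', 'B-PER'), ('Gaga', 'I-PER'), ('announces', 'O'), ('new', 'O'), ('tour', 'O')]
```

Every Tweet labeled this way then becomes a (cheap, possibly noisy) training example for the NERC.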

To address this latter obstacle, we apply several methods to filter Tweets, keeping only those we deem 'nice'. One of these methods scores Tweets by how noisy they are. We use very simple features to determine this 'noise level' of a Tweet: among others, how many mentions (@'s), hashtags (#'s), and URLs it contains, but also the ratio of upper-case to lower-case letters, the average word length, the Tweet's length, etc. Examples of this Twitter noise score are below (these are Tweets from the TREC 2011 Microblog corpus we used); a toy version of such a scorer follows the examples.

Top 5 quality Tweets

  1. Watching the History channel, Hitler’s Family. Hitler hid his true family heritage, while others had to measure up to Aryan purity.
  2. When you sense yourself becoming negative, stop and consider what it would mean to apply that negative energy in the opposite direction.
  3. So. After school tomorrow, french revision class. Tuesday, Drama rehearsal and then at 8, cricket training. Wednesday, Drama. Thursday … (c)
  4. These late spectacles were about as representative of the real West as porn movies are of the pizza delivery business Que LOL
  5. Sudan’s split and emergence of an independent nation has politico-strategic significance. No African watcher should ignore this.

Top 5 noisy Tweets

  1. Toni Braxton ~ He Wasnt Man Enough for Me _HASHTAG_ _HASHTAG_? _URL_ RT _Mention_
  2. tell me what u think The GetMore Girls, Part One _URL_
  3. this girl better not go off on me rt
  4. you done know its funky! — Bill Withers “Kissing My Love” _URL_ via _Mention_
  5. This is great: _URL_ via _URL_
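As promised, a toy version of the noise scorer. The features follow the description above, but the weights and the way they are combined are made up for illustration; the paper's actual scoring differs:

```python
import re

def noise_score(tweet):
    """Heuristic noise score: higher means noisier. The features follow the
    blog post; the weights are illustrative, not the paper's."""
    tokens = tweet.split()
    n_mentions = sum(t.startswith("@") or t == "_Mention_" for t in tokens)
    n_hashtags = sum(t.startswith("#") or t == "_HASHTAG_" for t in tokens)
    n_urls = len(re.findall(r"https?://\S+|_URL_", tweet))
    upper = sum(c.isupper() for c in tweet)
    lower = sum(c.islower() for c in tweet)
    case_ratio = upper / max(lower, 1)  # shouty Tweets score higher
    avg_word_len = sum(len(t) for t in tokens) / max(len(tokens), 1)
    return (n_mentions + n_hashtags + n_urls) + case_ratio - 0.2 * avg_word_len

tweets = [
    "Sudan's split and emergence of an independent nation has politico-strategic significance.",
    "This is great: _URL_ via _URL_",
]
for t in sorted(tweets, key=noise_score):  # cleanest first
    print(round(noise_score(t), 2), t)
```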

In addition, we filter Tweets based on the confidence score of the entity linker, so as not to include Tweets that contain unlikely labels.
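A sketch of that filter, reusing the `(surface form, type, confidence)` link format from above. A single fixed threshold is the simplest variant; the paper instead uses sampling strategies based on these confidence scores:

```python
def filter_confident(labeled_tweets, threshold=0.8):
    """Keep only Tweets whose entity links all meet a confidence threshold.
    The 0.8 value is illustrative, not taken from the paper."""
    return [
        (text, links)
        for text, links in labeled_tweets
        if links and all(conf >= threshold for _surface, _type, conf in links)
    ]
```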

Experimental Setup

It is difficult to measure how well we do at finding entities that do not exist on Wikipedia, since we need some sort of ground truth to determine whether we did well or not. As we cannot manually check for 80,000 Tweets whether the identified entities are in or out of Wikipedia, we take a slightly theoretical approach.

If I were to put it in a picture (and I did, conveniently), it’d look like this:

[Figure: schematic overview of the experimental setup]

In brief, we take small 'samples' of Wikipedia: one such sample represents the "present KB", the initial state of the KB. The samples are created by removing X% of the Wikipedia pages (from 10% to 90%, in steps of 10%). We then label Tweets using the full KB (100%) to create the ground truth: this full KB represents the "future KB". Our "present KB" then labels the Tweets it knows, and uses the Tweets it cannot link as sources for new entities. If the NERC (trained on the Tweets labeled by the present KB) manages to identify entities in the set of "unlinkable" Tweets, we can compare its predictions to the ground truth and measure performance.
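Put as (heavily simplified) code, the protocol looks roughly like this. Here `link(tweet, kb)` and `train_nerc` are stand-ins for the entity linker and the NERC trainer, and the bookkeeping is schematic rather than the paper's exact procedure:

```python
import random

def run_experiment(full_kb, tweets, remove_frac, link, train_nerc):
    """Schematic evaluation protocol. `link(tweet, kb)` is assumed to return
    the KB entities found in the Tweet; `train_nerc` returns a function
    mapping a Tweet to predicted entity names."""
    entities = list(full_kb)
    random.shuffle(entities)
    keep = int(len(entities) * (1 - remove_frac))
    present_kb = set(entities[:keep])              # the KB "as of today"

    # Ground truth: what the full ("future") KB can link.
    truth = {t: set(link(t, full_kb)) for t in tweets}

    # Pseudo-ground truth: what the present KB can link; the rest is unlinkable.
    train = {t: links for t in tweets if (links := set(link(t, present_kb)))}
    unlinkable = [t for t in tweets if t not in train]

    nerc = train_nerc(train)
    tp = pred_total = gold_total = 0
    for t in unlinkable:
        predicted = set(nerc(t))
        gold = truth[t] - present_kb               # entities new to the present KB
        tp += len(predicted & gold)
        pred_total += len(predicted)
        gold_total += len(gold)
    return tp / max(pred_total, 1), tp / max(gold_total, 1)  # precision, recall
```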

Results & Findings

We report on standard metrics, precision and recall, at two levels: the entity level and the mention level. I won't go into the details here; instead, I encourage you to read the results and findings in the paper.
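For intuition on the difference between the two levels (my gloss of the standard distinction, not the paper's formal definitions): at the mention level every predicted occurrence is scored individually, while at the entity level duplicate mentions collapse into a single unique entity:

```python
from collections import Counter

predicted = ["Obama", "Obama", "Merkel"]   # three predicted mentions
gold = ["Obama", "Merkel"]                 # two gold mentions

# Mention level: every occurrence counts (multiset intersection).
tp_mentions = sum((Counter(predicted) & Counter(gold)).values())
print("mention-level precision:", tp_mentions / len(predicted))      # 2/3

# Entity level: duplicates collapse to unique entities.
tp_entities = len(set(predicted) & set(gold))
print("entity-level precision:", tp_entities / len(set(predicted)))  # 1.0
```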