It’s a wrap!

📅 April 13, 2012 🕐 12:04 🏷 Blog and Thesis (MSc)

Photo by Thijs Niks

This category can soon be archived ;)! Earlier this week I handed in my final paper, and yesterday was the day of my final presentation. It was a great day and I’m really excited about embarking on my next adventure. I will soon start as a PhD candidate at the University of Amsterdam, on a very exciting project in ‘Semantic Search in e-Discovery’ at the Information and Language Processing Systems group. Naturally, this blog will keep the world informed of my work and projects ;). Exciting times!

Paper

Download my paper: Automatic Annotation of Cyttron Entries using the NCIthesaurus [PDF – 328 KB]
Download the supplementary data (graphs, tables and viz): Supplementary Data [PDF – 2.27 MB]

Demo

Similarity graph demo: check out the D3.js-powered demo of a similarity graph, comparing expert and computer-generated annotations.

Subgraph Similarity Example

📅 April 10, 2012 🕐 13:45 🏷 Blog

“(a) A sagittal reconstruction of a coronally acquired magnetic resonance imaging (MRI) scan, at the level on which the cingulate gyrus was measured. The area outlined represents the portion of the scan used to orient the operator to the landmarks of the cingulate. A box has been placed over the region of interest in one hemisphere. (b) A diagram of the cingulate gyrus divided into the rostral portion of the anterior cingulate (RAC), the caudal portion of the anterior cingulate (CAC), and the posterior cingulate (PC). Adjoining landmarks include the corpus callosum (CC), the lateral ventricle (Lat. Vent.), and the thalamus (Thal.). (c) The region of the cingulate gyrus measured in the present study, as delineated on the MRI scan of a control subject. […] ” (snippet)

» Check out the d3js demo here «

Two sets of annotations (Expert 1 & Expert 3) result in the following similarity graph:

Pathfinder finds paths!

📅 November 20, 2011 🕐 15:43 🏷 Thesis (MSc)

First results are in! My computer spent about 20 hours retrieving and storing the neighboring concepts of over 10,000 concepts, which my breadth-first search algorithm traversed to find the shortest paths between 6 nodes. But here is the result: first the ‘before’ graph, which I showed earlier, with all retrieved concepts and their parent relations. Below that is the new graph, which relates all concepts through their shortest paths (so far only the orange concepts, from the Gene Ontology).

Before

After

So?

What were two separate clusters in the first graph is now one big fat cluster… which is cool!

Less cool is the time it took… but oh well, it looks like I’m going to have to prepare some examples as proofs of concept. Nowhere near realistic real-time performance so far (though I did get a big speed increase by moving my Sesame triple store from my ancient EeePC 900 to my desktop computer… goodbye, supercomputer). The good news is that all neighboring nodes I processed so far are cached in a local SQLite database, so those 20 hours were not a waste! (The fact that my total ontology database consists of over 800,000 concepts, while these 10,000 concepts alone took 20 hours, is something I choose not to take into consideration however :p.)
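For the curious: the pathfinder itself is conceptually just a breadth-first search over a cached neighbor lookup. A minimal sketch of that idea is below; the table layout, function names and the SPARQL stub are my own illustration, not the actual thesis code.

```python
import sqlite3
from collections import deque

# Hypothetical cache layout: one row per (concept, neighbor) edge,
# filled on demand by querying the Sesame triple store.
conn = sqlite3.connect("neighbors.db")
conn.execute("CREATE TABLE IF NOT EXISTS neighbors (concept TEXT, neighbor TEXT)")

def query_triple_store(concept):
    """Placeholder for the actual SPARQL query against the triple store."""
    raise NotImplementedError("issue a SPARQL query for `concept` here")

def get_neighbors(concept):
    """Return cached neighbors; hit the triple store only on a cache miss."""
    rows = conn.execute(
        "SELECT neighbor FROM neighbors WHERE concept = ?", (concept,)
    ).fetchall()
    if rows:
        return [r[0] for r in rows]
    found = query_triple_store(concept)
    conn.executemany(
        "INSERT INTO neighbors VALUES (?, ?)", [(concept, n) for n in found]
    )
    conn.commit()
    return found

def shortest_path(start, goal):
    """Breadth-first search from start to goal over the concept graph."""
    if start == goal:
        return [start]
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        for nxt in get_neighbors(path[-1]):
            if nxt == goal:
                return path + [nxt]
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None  # no path between the two concepts
```

The cache is what makes re-runs cheap: once a concept’s neighbors are in SQLite, the triple store is never asked about it again.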

It is important to note that the meaning or interpretation of the resulting graph (and particularly of the relations between concepts) is not the primary concern here: the paths (their lengths, the directions of the edges and the nodes’ depths) will primarily be used for the ontology-based semantic similarity measure I wrote about in this post.
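That earlier post describes the actual measure; purely as an illustration of how path lengths and node depths typically combine in ontology-based similarity, here is a Wu-Palmer-style sketch (the depth and lcs helpers stand in for lookups over the ontology and are not the thesis code):

```python
def wu_palmer_similarity(c1, c2, depth, lcs):
    """Wu-Palmer-style similarity: concepts that share a deep common
    ancestor score close to 1.0; distantly related concepts score near 0.

    depth(c): assumed helper, number of edges from the ontology root to c
    lcs(a, b): assumed helper, the least common subsumer of a and b
    """
    ancestor = lcs(c1, c2)
    return 2.0 * depth(ancestor) / (depth(c1) + depth(c2))
```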

Add some #dataviz in the mix

📅 October 25, 2011 🕐 07:59 🏷 Thesis (MSc)

All this time I have been working on my ontology-powered topic identification system: combining semantic web technologies with natural language processing and text-mining techniques to extract one or more topics from a text. But I never really decided what my program should output: a single topic? A list of potential topics with their likelihood? That would mean the relationships between topics that my SPARQL-powered ‘pathfinder’ finds would disappear into an algorithm that only uses those paths to calculate semantic similarity, which would be a pity. This weekend it finally all came together:

Visualize!

I already played around with drawing graphs in Gephi, as a way to check my results other than scrolling through huge lists in my Python interpreter. But now I realize it could be the perfect ‘final product’. What I want to create is a ‘semantically augmented tag-cloud graph’. As opposed to a standard tag cloud, my augmented tag graph will be a visualization of the text composed of both interlinked and solitary concepts: concepts that can be found literally in the text as well as concepts that aren’t mentioned anywhere in it. The tag graph will benefit from the linked-data nature of ontologies to:

1. Show relationships between concepts
2. Show extra concepts which do not occur literally in the text: nodes that occur in a path between two nodes, and maybe ‘superClass’ and ‘subClass’ nodes.

In this way, it will be similar to a tag cloud in that it conveys the content of a text, but augmented in that it can also convey the meaning of the text and the relationships between tags. It will also be similar to the DBpedia RelFinder, but augmented in that it will not only show links but also contain a hierarchy from more to less important concepts. I hope that in the end it could be a viable alternative for graphical text representation.

Visualize what?

As I see it, my semantic graph-cloud should communicate at least these three things:
1. The most likely topic of the text
2. The concepts which occur literally in the text versus concepts that do not occur in the text
3. Clusters of similar concepts

My first idea is to encode these three properties with size, alpha channel and colour respectively: the bigger the node, the more likely the topic; transparent nodes are the ones that do not occur in the text; and coloured nodes group semantically similar concepts. But that’s just my initial plan; I might have to give it some more thought and experiment with it.
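To make that encoding concrete, here is a rough NetworkX/matplotlib sketch; the concepts, scores and clusters below are made up for illustration, and the real input would of course come out of the pathfinder.

```python
import networkx as nx
import matplotlib.pyplot as plt

# Toy data standing in for the pipeline output: a topic-likelihood score,
# whether the concept occurs literally in the text, and a cluster id.
concepts = {
    "mitochondrion": {"score": 0.9, "in_text": True,  "cluster": 0},
    "organelle":     {"score": 0.4, "in_text": False, "cluster": 0},
    "apoptosis":     {"score": 0.7, "in_text": True,  "cluster": 1},
    "cell death":    {"score": 0.5, "in_text": False, "cluster": 1},
}
edges = [("mitochondrion", "organelle"), ("apoptosis", "cell death")]

G = nx.Graph()
G.add_nodes_from(concepts)
G.add_edges_from(edges)

sizes   = [3000 * concepts[n]["score"] for n in G]             # size ~ topic likelihood
alphas  = [1.0 if concepts[n]["in_text"] else 0.4 for n in G]  # transparency ~ literal occurrence
palette = ["#e6550d", "#3182bd"]
colours = [palette[concepts[n]["cluster"]] for n in G]         # colour ~ semantic cluster

pos = nx.spring_layout(G, seed=42)
nx.draw_networkx_nodes(G, pos, node_size=sizes, node_color=colours, alpha=alphas)
nx.draw_networkx_edges(G, pos)
nx.draw_networkx_labels(G, pos, font_size=9)
plt.axis("off")
plt.show()
```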

Next, I should think about a method to ‘measure’ whether my semantic tag cloud conveys more information, or has any added value. In the end, I am totally unsure whether the resulting tag graphs will make sense, as that depends on multiple factors such as the quality of my keyword extraction, the quality of the vector-space-based string comparison, the quality of the ontologies, etc.

The good news is that most of the technical work is in a close-to-finished state. I have various methods of extracting keywords from large texts (which I will be validating soon), a functional method to find ‘new’ concepts by comparing a text to the descriptions of all ontology concepts, and a functional shortest-path finder to explore how two concepts are related (and produce a graph of it). It’s just a matter of putting it all together and selecting a suitable tool to draw the graphs. I don’t think I’ll use Gephi, as I want to fully integrate the graph drawing into my script, so who knows, maybe it’s back to NetworkX, igraph, or maybe Protovis?

In the meantime I’m looking at validating the data I generated with the various keyword extraction algorithms. By using human experts and cross-validating against the existing keywords of certain entries, I’ll be able to evaluate which method of keyword extraction works best for my purposes. Once that’s done, and I’ve picked my graph-drawing method, I can start showing some preliminary results!

More text-mining. Popularity contest: Drosophila melanogaster vs. C. elegans

📅 October 9, 2011 🕐 17:17 🏷 Blog


While waiting for several word-counting scripts to finish, I picked up my cancerCounter script to count something else. This time, I wanted to see which organism is more popular and more frequently mentioned in biomedical studies: the ever-present Drosophila melanogaster, a.k.a. the common fruit fly, or the aptly named Caenorhabditis elegans (one cannot deny that the 1 mm-long worm has quite the elegant wiggle). Two model organisms in biomedical research.

Both have a lot going for them:
– C. elegans was the first multicellular organism to have its entire genome sequenced (go worm!)
– The worm reproduces and mutates quickly and easily

The fruit fly, on the other hand, is quite a suitable lab rat as well:
– Drosophila breeds easily
– Does not need much space nor care
– Has to pay for invading my kitchen each year during summer

I counted the occurrences of ‘drosophila melanogaster’ or ‘d. melanogaster’, and of ‘caenorhabditis elegans’ or ‘c. elegans’, in the lowercased article bodies of my 99,000-and-something-article BioMedCentral corpus, and took a look. First comes the total number of articles published per year, together with the number of articles mentioning the fruit fly or the worm:

As we can see, worryingly, scientists hardly spend enough time doing research with worms and fruit flies. Since 2003, though, they have consistently played more with the worms than with the fruit flies. But it’s hard to see, so let’s ditch the total article counts:

When we subtract the Drosophila articles from the C. elegans articles, we can see how big a lead the worm has over the fruit fly. The red bars represent by how many articles C. elegans beats Drosophila, and the blue bars indicate by how many articles Drosophila beats C. elegans.

But absolute numbers are not what we’re looking for. As we saw in the first graph, the number of articles per year is far from evenly distributed. So let’s look at the ratio: the difference between the two organisms relative to the total number of articles that year:

This evens out some of the bigger differences in the previous graph: Drosophila had ‘only’ a +5 win over C. elegans in 2001, but relatively speaking that is a bigger victory than C. elegans’ +34 win in 2006, or even its +79 victory in 2009.
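For reference, the per-year bookkeeping behind these graphs boils down to a few lines of counting. A minimal sketch follows; the corpus layout (year/body pairs) and the helper names are my assumptions, not the actual cancerCounter code, and the ratio here is the worm-minus-fly difference relative to that year’s total output.

```python
from collections import defaultdict

FLY  = ("drosophila melanogaster", "d. melanogaster")
WORM = ("caenorhabditis elegans", "c. elegans")

def count_mentions(articles):
    """Tally, per publication year, the articles mentioning each organism.
    `articles` is assumed to yield (year, body_text) pairs."""
    counts = defaultdict(lambda: {"total": 0, "fly": 0, "worm": 0})
    for year, body in articles:
        body = body.lower()
        counts[year]["total"] += 1
        if any(term in body for term in FLY):
            counts[year]["fly"] += 1
        if any(term in body for term in WORM):
            counts[year]["worm"] += 1
    return counts

def yearly_scores(counts):
    """Absolute difference (worm minus fly) per year, and that difference
    as a fraction of the year's total article output."""
    for year, c in sorted(counts.items()):
        diff = c["worm"] - c["fly"]
        ratio = diff / c["total"] if c["total"] else 0.0
        yield year, diff, ratio
```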

Conclusion: Elegans wins.