More text-mining. Popularity contest: Drosophila melanogaster vs. C. elegans


While waiting on several word-counting scripts to finish, I picked up my cancerCounter script to count something else. This time, I wanted to see which organism is more popular and more frequently mentioned in biomedical studies: the ever-present Drosophila melanogaster, aka the common fruit fly, or the aptly named Caenorhabditis elegans (one cannot deny that the 1 mm-long worm has quite the elegant wiggle). Two model organisms in biomedical research.

Both have a lot going for them:
– Elegans was the first multicellular organism to have its entire genome sequenced (go worm!)
– The worm reproduces and mutates quickly and easily

The fruit fly, on the other hand, is quite the suitable lab rat as well:
– Drosophila breeds easily
– Does not need much space or care
– Has to pay for invading my kitchen each year during summer

I started counting the occurrence of ‘drosophila melanogaster’ or ‘d. melanogaster’ and of ‘caenorhabditis elegans’ or ‘c. elegans’ in the lowercased article bodies of my 99,000-and-something-article BioMedCentral corpus, and took a look. First comes the total number of articles published per year, together with the number of articles mentioning the fruit fly or the worm.
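For reference, a minimal sketch of the counting step, assuming each article's body text is already available as a plain string (the corpus itself is XML; a parsing sketch appears in the cancer post below). The `iter_articles` helper is a hypothetical stand-in, not the original cancerCounter script:

```python
# Tally, per year, how many articles mention each organism at least once.
from collections import defaultdict

FLY = ('drosophila melanogaster', 'd. melanogaster')
WORM = ('caenorhabditis elegans', 'c. elegans')

def iter_articles():
    """Hypothetical corpus reader: yields (year, body text) per article."""
    yield from ()  # stub; the real reader walks the BMC XML files

def mentions(body, names):
    """True if any of the organism's names occurs in the lowercased body."""
    body = body.lower()
    return any(name in body for name in names)

fly_per_year = defaultdict(int)
worm_per_year = defaultdict(int)
total_per_year = defaultdict(int)

for year, body in iter_articles():
    total_per_year[year] += 1
    if mentions(body, FLY):
        fly_per_year[year] += 1
    if mentions(body, WORM):
        worm_per_year[year] += 1
```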

As we can see, worryingly, scientists hardly spend enough time doing research with worms and fruit flies. Since 2003, though, they have consistently played more with the worms than with the fruit flies. But it’s hard to see, so let’s ditch the total articles:

When we subtract the Drosophila articles from the Elegans articles, we can see how much of a lead the worm has on the fruit fly. Red bars represent by how many articles Elegans wins over Drosophila, and blue bars indicate by how many articles Drosophila wins over Elegans.

But absolute numbers are not what we’re looking for. As we saw in the first graph, the number of articles per year is far from evenly distributed. So let’s look at the difference between the two organisms relative to each year’s total:

This evens out some of the bigger differences from the previous graph; Drosophila had ‘only’ a +5 win over Elegans in 2001, but relatively that is a bigger victory than Elegans’ +34 win in 2006, and even than its +79 victory in 2009.
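In code, the normalisation could look like this, building on the per-year tallies sketched above. The post doesn't spell out the exact denominator, so dividing by the year's total article count is my assumption:

```python
# Relative difference per year: the worm's win as a share of that year's output.
for year in sorted(total_per_year):
    diff = worm_per_year[year] - fly_per_year[year]  # positive: Elegans wins
    ratio = diff / total_per_year[year]              # assumed denominator
    print(year, diff, f'{ratio:+.2%}')
```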

Conclusion: Elegans wins.

Text-mining BioMedCentral: Cancer – a trending topic?

*Update*
I added a graph which shows the ratio of articles containing the word ‘cancer’ to the total number of articles per year. It sadly still suffers from the incomplete data of the earlier years:

*Original post*

This is my first attempt to get some data out of the BioMedCentral dataset, the freely available Open Access archive of over 40 years of biomedical research articles. I’ll use this set as a training corpus for my thesis, to extract domain-specific features to use when comparing the similarity between two documents. The dataset consists of 103,782 articles from 1969 to today.

My text-mining experiment was a very simple one: count the occurrence of the word ‘cancer’ in every article of the journal. My expectation was that the term would occur more frequently as time progresses: as a science journalist I frequently came across (obscure) biomedical research that concluded its findings by somehow linking them to cancer (promising a potential way to discover a potential cure). I always figured it had to do with funding. But I’m no expert.

Anyway, to test this I threw together a simple Python script to parse each (XML-formatted) article, extract its date and the frequency of the word ‘cancer’, and output this data to a CSV file. I then averaged the counts per article for each year and graphed the result.
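The original script isn't shown here, but a minimal reconstruction could look like this; the `corpus/*.xml` path and the `pubdate/year` element are assumptions about the corpus layout, not BMC's actual schema:

```python
# Walk the corpus, parse each XML article, grab its publication year and
# the number of times 'cancer' occurs, and write one CSV row per article.
import csv
import glob
import xml.etree.ElementTree as ET

with open('cancer_counts.csv', 'w', newline='') as out:
    writer = csv.writer(out)
    writer.writerow(['year', 'count'])
    for path in glob.glob('corpus/*.xml'):
        try:
            root = ET.parse(path).getroot()
        except ET.ParseError:
            continue  # skip malformed files
        year = root.findtext('.//pubdate/year')        # assumed element path
        body = ' '.join(root.itertext()).lower()
        # Note: substring count, so 'cancers'/'cancerous' match too.
        writer.writerow([year, body.count('cancer')])
```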

I had hoped to provide an overview of the frequency of the word across ~40 years of BMC. I wasn’t able to. The first couple of years seem very incomplete: there aren’t many articles (in the hundreds instead of the thousands, as in later years), and there are lots of “(To access the full article, please see PDF)” references (yay to Open Access). Anyway, I figured the last 10 years WERE okay, so I graphed the average occurrence of the word ‘cancer’ over those last couple of years.

Some initial thoughts:

  • The average (word count per article) might be the wrong metric here. Articles dedicated to cancer-related topics skew the average too much. I am actually looking for the papers which do not contain the word frequently.
  • A better metric could be the ratio of articles that DO contain the word (at least once). I’ll give that a shot later and update this post; a minimal sketch of that metric follows this list.
  • There does seem to be some increase in occurrence; however, I wouldn’t say it’s enough to support my observation.
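Here is what that ratio metric could look like, as a sketch over the CSV written by the script above:

```python
# Fraction of each year's articles that mention 'cancer' at least once.
import csv
from collections import defaultdict

hits = defaultdict(int)
totals = defaultdict(int)

with open('cancer_counts.csv') as f:
    for row in csv.DictReader(f):
        totals[row['year']] += 1
        if int(row['count']) > 0:
            hits[row['year']] += 1

for year in sorted(totals):
    print(year, hits[year] / totals[year])
```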

Results? Thesis #5

As promised, I have spent the last two weeks generating a lot of (but not quite 120) results. So let’s take a quick look at what I’ve done and found.

First of all, the Cyttron DB. Here I show four different methods of representing the Cyttron database: the 1st as-is (literal), the 2nd by keyword extraction (the 10 most frequently occurring words, after filtering out stopwords), the 3rd by generating synonyms with WordNet for each word in the database, and the 4th by generating synonyms with WordNet for each word of the keyword representation.

        Cy-literal  Cy-keywords  Cy-WN   Cy-key-WN
Unique  19.80       3.23         97.58   17.59
Total   30.58       3.18         248.19  25.53
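For the curious, a sketch of how the four representations can be built, assuming NLTK with its stopwords and WordNet corpora installed; `representations` and its helpers are illustrative names, not the actual thesis code:

```python
from collections import Counter
from nltk.corpus import stopwords, wordnet

STOP = set(stopwords.words('english'))

def keywords(text, n=10):
    """The n most frequent words, after stopword filtering."""
    words = [w for w in text.lower().split() if w.isalpha() and w not in STOP]
    return [w for w, _ in Counter(words).most_common(n)]

def wordnet_boost(words):
    """Expand each word with all of its WordNet synonyms."""
    boosted = set(words)
    for word in words:
        for syn in wordnet.synsets(word):
            boosted.update(l.name().replace('_', ' ') for l in syn.lemmas())
    return boosted

def representations(entry):
    """One Cyttron DB entry -> the four representations above."""
    literal = entry.lower().split()      # 1. literal
    keys = keywords(entry)               # 2. keywords
    return (literal, keys,
            wordnet_boost(literal),      # 3. WordNet-boosted text
            wordnet_boost(keys))         # 4. WordNet-boosted keywords
```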

Next up, the Wikipedia page for Alzheimer’s disease. Here I have used the literal text, the 10 most frequently occurring bigrams (two-word phrases), the 10 most frequently occurring trigrams (three-word phrases), the 10 most frequently occurring keywords (after stopword filtering), and the WordNet-boosted text (generating synonyms with WordNet for each word).

        Alz-literal  Alz-bigrams  Alz-trigrams  Alz-keywords  Alz-WN
Unique  803          8            1             5             1,385
Total   3,292        8            1             6             22,195
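The n-gram extraction is the only new ingredient compared to the previous table; with NLTK it is nearly a one-liner per n (a sketch, with `text` standing for the article body):

```python
# The k most frequent n-grams of alphabetic tokens (n=2 bigrams, n=3 trigrams).
from collections import Counter
import nltk

def top_ngrams(text, n=2, k=10):
    tokens = [w for w in nltk.word_tokenize(text.lower()) if w.isalpha()]
    return Counter(nltk.ngrams(tokens, n)).most_common(k)
```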

The other approach, using the descriptions of the ontologies’ terms, didn’t fare as well as I’d hoped. I used Python’s built-in difflib module, which at the time seemed like the right tool, but on closer inspection it did not give the results I was looking for. The next plan is a simpler approach: extracting keywords from the description texts and using those as a counting measure, in much the same way I do the literal matching.
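I can only guess at what the difflib attempt looked like, but the core of the module is character-level similarity, which would explain the disappointment: it scores surface overlap, not meaning.

```python
# difflib.SequenceMatcher compares character sequences, so related terms
# with different wordings score low; illustrative strings, not real data.
import difflib

a = 'progressive neurodegenerative disorder'
b = 'loss of neurons in the brain'
print(difflib.SequenceMatcher(None, a, b).ratio())  # low, despite related meaning
```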

All the results I generated are hard to evaluate as long as I do not have a method to measure the relations between the found labels. More labels are not necessarily better; more relevant labels are the goal. When I ‘WordNet’-boost a text (i.e. generate a bunch of synonyms for each word I find), I do get a lot more literal matches, but I will only know whether this makes determining the subject easier or harder once I have a method to relate all the found labels to each other and maybe find a cluster of terms which occur frequently.

What’s next?

I am now working on a simple breadth-first search algorithm, which takes a ‘start’ node and a ‘goal’ node and queries for direct neighbours of the start node one ‘hop’ at a time, until it reaches the goal node. It will then be possible to determine the relation between two nodes. Note that this will only work within one ontology: if the most frequent terms come from different ontologies, I am forced to use simple linguistic matching (as I am doing now) to determine ‘relatedness’. But as the ontologies each cover a distinct field, I imagine the most frequent terms will most likely come from one ontology.
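A sketch of that BFS, assuming a SPARQL endpoint and the SPARQLWrapper package; the repository URL and the `neighbours` query are my assumptions, not the actual implementation:

```python
from collections import deque
from SPARQLWrapper import SPARQLWrapper, JSON

# Assumed Sesame repository URL; adjust to the actual store.
ENDPOINT = SPARQLWrapper('http://localhost:8080/openrdf-sesame/repositories/thesis')

def neighbours(node):
    """All nodes one hop away from `node`, in either direction."""
    ENDPOINT.setQuery(
        'SELECT DISTINCT ?n WHERE { { <%s> ?p ?n } UNION { ?n ?p <%s> } }'
        % (node, node))
    ENDPOINT.setReturnFormat(JSON)
    bindings = ENDPOINT.query().convert()['results']['bindings']
    return [b['n']['value'] for b in bindings]

def bfs_path(start, goal):
    """Expand one hop at a time until `goal` is reached; return the path."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbours(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no connection found within the ontology
```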

So, after I’ve finished the BFS algorithm, I will have to determine the final keyword-extraction methods, and ways of representing the source data. My current keyword-extraction methods (word frequency and bi/trigrams) rely on a large body of reference material, the longer the DB entry the more effective these methods are (or at least, the more ‘right’ the extracted keywords are, most frequent trigrams from a 10-word entry makes no sense).

Matching terms from ontologies is much better suited to smaller texts. And because of the specificity of the ontologies’ domains, there is an automatic filter for non-relevant words: bio-ontologies contain biological terms, so matching a text against them automatically keeps only the words I’m looking for. The only problem is that you potentially miss words which are absent from the ontology, and catching those is an important part of my thesis.
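In its simplest form that matching is just a containment test per label (a sketch; `labels` would hold the term labels pulled from the bio-ontologies):

```python
def match_labels(text, labels):
    """Ontology labels that literally occur in the text (case-insensitive)."""
    body = text.lower()
    return [label for label in labels if label.lower() in body]
```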

Ideally, the final implementation will use both approaches: ontology matching to quickly find the relevant words and calculate relations, then keyword extraction to double-check that no important or relevant words have been skipped.

To generate the next batch of results, I am going to limit the size of both the reference ontologies and the source data. As WordNet-boosted literal term-matching took well over 20 hours on my laptop, I will limit the ontologies to 1 or 2 and select around 10 representative Cyttron DB entries.

I am now running my Sesame RDF store on my eee-pc (which also hosts @sem_web); it runs 24/7 and is accessible from both my computers (desktop and laptop)! Also, I am now on GitHub. There are more results there, so check me out » http://github.com/dvdgrs/thesis.

[Read all thesis-related posts here]