“The tf–idf weight (term frequency–inverse document frequency) is a weight often used in information retrieval and text mining. This weight is a statistical measure used to evaluate how important a word is to a document in a collection or corpus.”[wikipedia]
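For reference, the textbook formulation of the weight of a term t in a document d is something like:

    tfidf(t, d) = tf(t, d) * log( N / df(t) )

where tf(t, d) is how often t occurs in d, df(t) is the number of documents containing t, and N is the total number of documents in the corpus. As far as I can tell this is essentially what Gensim's TfidfModel computes by default (with a base-2 logarithm, and the resulting vectors normalized to unit length).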

It is also the weight I use to measure similarity between texts, for these two tasks of my thesis project:
– Step 3: measure the similarity of a cyttron-db entry to a concept description from an ontology. This will allow me to find concepts in the text that do not appear literally.
– Step 5: relate concepts that come from different ontologies, by measuring how similar the text surrounding one concept found in the text is to another found concept.
As mentioned before, I am using the excellent Gensim “vector space modelling for humans” package, which takes all the complicated mathematics off my hands (like the scary and intimidating formula up top!). Perfect for me, as I’m not a mathematician, nor a computational linguist, nor a statistician, but I AM a human, who wants to work with solid and proven methods for similarity measurement and feature extraction on texts. Since I am what I am, I won’t attempt to explain the inner workings of bag-of-words models, vector spaces, or TF-IDF weighting; there are much better places for that. I’ll simply show how I made Gensim work for me (assuming it does).
The first step is to create a training corpus. The training corpus defines the features of the text – the words that will be considered ‘important’ when looking at a text. The training corpus needs to be from the same domain as the target application: in my case the biomedical domain.
At first I was looking at extracting a bunch of relevant Wikipedia articles (all articles from Wikipedia’s Biology category) to use as a training corpus. But then I came across something better: the Open Access BioMed Central full-text corpus. The corpus consists of over 100,000 articles, weighing in at 8 GB of XML documents.
I wrote a simple script using lxml to parse the individual files: extracting all plain text from each article body, cleaning it, and storing it in a new text file (one article per line) for later processing. The cleaning process consists of three steps: tokenizing the article (breaking it up into words), filtering out common stopwords, and finally stemming the remaining words. I chose to include stemming in order to unify words such as ‘hippocampal’ and ‘hippocampus’ (stemming returns the ‘root’ of a word). As I stem both the training corpus and the strings that need to be compared, it is not a disaster if words get stemmed incorrectly: in the end I don’t need to make sense of the stemmed words, I only need them for counting. The plain-text file my script created is 650 MB (vs 8.8 GB for the uncompressed XML files)!
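For the curious, the extraction script is nothing fancy; a minimal sketch along these lines does the job (the file pattern and the <body> element are assumptions about the BioMed Central XML layout, and cleanDoc is the cleaning function shown a bit further down):

import glob
from lxml import etree

def extractArticles(xmlDir, outFile):
    # Parse every BMC article, keep only the plain text of the body,
    # clean it and write one article per line.
    out = open(outFile, 'w')
    for path in glob.glob(xmlDir + '/*.xml'):
        tree = etree.parse(path)
        body = tree.find('.//body')          # article body in the BMC XML (assumed)
        if body is None:
            continue
        text = ' '.join(body.itertext())     # strip all markup, keep plain text
        out.write(' '.join(cleanDoc(text)) + '\n')
    out.close()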
The cleaning of the articles is pretty straightforward, using pre-cooked NLTK modules: the WordPunct tokenizer, the set of English stopwords and NLTK’s implementation of the Porter stemmer. For the quality of the similarity measurement it is important to follow the exact same cleaning procedure for the strings I want to compare, so I use the same function for the corpus preparation as for the comparison strings:
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import WordPunctTokenizer

def cleanDoc(doc):
    stopset = set(stopwords.words('english'))
    stemmer = nltk.PorterStemmer()
    tokens = WordPunctTokenizer().tokenize(doc)
    # lowercase, drop stopwords and very short tokens, then stem what's left
    clean = [token.lower() for token in tokens
             if token.lower() not in stopset and len(token) > 2]
    final = [stemmer.stem(word) for word in clean]
    return final
Creating a training corpus in Gensim
Gensim’s documentation is very extensive, and I can recommend going through the tutorials if you want to get an idea of the possibilities. But I couldn’t find much documentation on how to do simple string-to-string comparisons, so I wrote down what I did (and errrm yes, it’s pretty much exactly the same as the string-to-index querying you can find in the Gensim tutorials :p):
1. Create a ‘dictionary’ of the training corpus’ raw text:
The dictionary maps each unique word to an id and keeps track of word frequencies; it will be used to convert texts to vector space at a later stage:
>>> dictionary = corpora.Dictionary(line.lower().split() for line in open('corpus.txt'))
>>> print dictionary
Dictionary(1049403 unique tokens)
2. Convert the training corpus to vector space:
class MyCorpus(object):
    def __iter__(self):
        for line in open('corpus.txt'):
            yield dictionary.doc2bow(line.lower().split())

>>> corpus = MyCorpus()
>>> corpora.MmCorpus.serialize('corpus.mm', corpus)  # save corpus to disk
>>> corpus = corpora.MmCorpus('corpus.mm')           # load corpus
>>> print corpus
MmCorpus(99432 documents, 1049403 features, 39172124 non-zero entries)
3. Initialize the TF-IDF model:
>>> tfidf = models.TfidfModel(corpus)
>>> print tfidf
TfidfModel(num_docs=99432, num_nnz=39172124)
Thankfully it’s possible to store the generated corpus, dictionary and tfidf to disk: parsing all these documents takes quite a while on my computer. That’s it for the preparation of the training corpus!
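For reference, saving and loading those pieces looks something like this (the file names are just examples):

>>> dictionary.save('corpus.dict')                        # store the dictionary
>>> tfidf.save('corpus.tfidf_model')                      # store the TF-IDF model
>>> # ... and in a later session:
>>> dictionary = corpora.Dictionary.load('corpus.dict')
>>> tfidf = models.TfidfModel.load('corpus.tfidf_model')
>>> corpus = corpora.MmCorpus('corpus.mm')                # the corpus serialized in step 2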
Comparing two strings
Now whenever I want to compare two strings, using features gathered from the training corpus, I need to:
- Clean both strings in the same way I cleaned the articles in the corpus (tokenization + NLTK stopword filter + stemming) » cleanDoc(string)
- Convert both strings to vector-space using the dictionary generated from the training corpus » dictionary.doc2bow(string)
- Convert both vector-space representations of the strings to TF-IDF space, using the TF-IDF model initialized earlier » tfidf[bow] (the model is applied to the bag-of-words vector from the previous step, not to the raw string)
When both strings are prepared, all that is left is to compare them, by creating an ‘index’ (from the reference string) and a ‘query’ (from the other string). Order doesn’t matter.
index = similarities.MatrixSimilarity([tfidf1], num_features=len(dictionary))
sim = index[tfidf2]
print str(round(sim*100, 2)) + '% similar'
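Putting the pieces together, this is roughly what my compareDoc function looks like (a sketch, reusing the cleanDoc, dictionary and tfidf objects defined above):

def compareDoc(doc1, doc2):
    # 1. clean both strings exactly like the training corpus
    # 2. convert them to bag-of-words vectors with the training dictionary
    vec1 = dictionary.doc2bow(cleanDoc(doc1))
    vec2 = dictionary.doc2bow(cleanDoc(doc2))
    # 3. convert both vectors to TF-IDF space
    tfidf1 = tfidf[vec1]
    tfidf2 = tfidf[vec2]
    # 4. index one, query with the other
    index = similarities.MatrixSimilarity([tfidf1], num_features=len(dictionary))
    sim = index[tfidf2]                     # array with one similarity value
    print str(round(sim[0]*100, 2)) + '% similar'

Taking sim[0] is what gives a single value rather than an array: the index contains only one document here, so index[tfidf2] returns an array of length one.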
Resulting in, for example, the comparison of the description of “Alzheimer’s disease” and “Cognitive disease” in the Human Disease (DOID) ontology:
>>> compareDoc("""A dementia that results in progressive memory loss, impaired thinking, disorientation, and changes in personality and mood starting in late middle age and leads in advanced cases to a profound decline in cognitive and physical functioning and is marked histologically by the degeneration of brain neurons especially in the cerebral cortex and by the presence of neurofibrillary tangles and plaques containing beta-amyloid. It is characterized by memory lapses, confusion, emotional instability and progressive loss of mental ability.""","""A disease of mental health that affects cognitive functions including memory processing, perception and problem solving.""") 23.29% similar
Or another example: the Wikipedia article of “Alzheimer’s disease” compared to the ontology description of “Alzheimer’s disease”:
>>> wikiGet('alzheimer')
alzheimer in wikiTxt
>>> compareDoc(wikiTxt, """A dementia that results in progressive memory loss, impaired thinking, disorientation, and changes in personality and mood starting in late middle age and leads in advanced cases to a profound decline in cognitive and physical functioning and is marked histologically by the degeneration of brain neurons especially in the cerebral cortex and by the presence of neurofibrillary tangles and plaques containing beta-amyloid. It is characterized by memory lapses, confusion, emotional instability and progressive loss of mental ability.""")
31.95% similar
Final example: the top 5 most similar ontology concepts to the Wikipedia page of “Alzheimer’s disease”:
>>> descMatch(wikiAlz)
Label: Alzheimer's disease   Similarity: 31.9990843534
Label: vascular dementia     Similarity: 28.0893445015
Label: amyloid deposition    Similarity: 25.6860613823
Label: cognitive disease     Similarity: 18.7662974
Label: dementia              Similarity: 18.0801317096
Now the second task (matching a string to all the descriptions from my ontologies) is much the same process, with the only difference that I need to use the similarities.Similarity object when creating the index of the descriptions: the MatrixSimilarity index resides fully in RAM, while the Similarity index lives on disk. A sketch of that variant follows below.
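Here is that sketch, where desc_texts is a placeholder name for a list of (label, description) pairs pulled from the ontologies and the index prefix is just an example path:

# build a disk-based index over all ontology descriptions
desc_vecs = [tfidf[dictionary.doc2bow(cleanDoc(desc))] for (label, desc) in desc_texts]
index = similarities.Similarity('/tmp/desc_index', desc_vecs, num_features=len(dictionary))

# query it with a (cleaned) string and print the top 5 matches
query = tfidf[dictionary.doc2bow(cleanDoc(wikiTxt))]
top = sorted(enumerate(index[query]), key=lambda pair: -pair[1])[:5]
for i, score in top:
    print 'Label: ' + desc_texts[i][0] + '   Similarity: ' + str(score * 100)

The Similarity object shards its index to disk under the given prefix, so it scales beyond what fits in RAM.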
I am pretty confident about these preliminary results. It all seems to work as it should, and should be much more robust than my earlier attempts at similarity measurement using difflib and some crummy homegrown keyword-extraction and comparison (which I will still use for generating synonyms, crumminess works for that).
I find your post very helpful. Thank you for taking time to write it.
I just started reading about Gensim and your post makes it even clearer! hugzzzzzzzzz
iman
You’re welcome, and thanks 🙂
awesome post…:)
thanks for the great post
I have a few things I couldn’t quite get:
1) How do I convert both strings to the vector space model? How do I use dictionary.doc2bow(string)?
2) How do I convert both vector-space representations of the strings to TF-IDF space using the TF-IDF model initialized earlier (tfidf[string])? According to your example, tfidf takes a corpus as input.
3) How does comparing the Wikipedia article of “Alzheimer’s disease” to the ontology description of “Alzheimer’s disease” work? I tried but can’t get it to work.
I appreciate your time and response
thanks
thanks for the great post. A few things are not clear to me, I hope you can help:
1. When comparing two strings, the corpus is common to both, right?
2. When comparing two strings, can you specify how to calculate the tfidf for both?
3. Please elaborate on how these commands work:
index = similarities.MatrixSimilarity([tfidf1],num_features=len(dictionary))
sim = index[tfidf2]
print str(round(sim*100,2)) + '% similar'
I tried but got a huge matrix with similarities corresponding to the length of my corpus, not a single value.
thanks
Thank you so much for the article. I’m currently doing my computer science undergraduate project and your article was very good in pointing me in the right direction. I was also messing around with difflib before 🙂
Can you please provide your email?
Thanks.
Thanks for this great post. But I want to ask about the libraries you’ve used. As I understand it you use gensim and nltk, but what else?
Thanks!
Help me find a similarity measure for two text files using gensim?
Still so relevant... thanks a truckload for this. In spite of the deep emphasis on deep learning today, you realize that when you’re short on data, this approach works like a charm.
index = similarities.MatrixSimilarity([tfidf1],num_features=len(dictionary))
What is ‘similarities’ here?