Ontology-based semantic similarity measurements: an overview

My thesis is about keyword extraction from biological notes, using semantic ‘dictionaries’ called ontologies. These ontologies are large networks in which each node stands for a concept and each connection between nodes for a relation between concepts.

To identify the subject of a text, I need to find which terms described in an ontology appear in that text. This leaves me with multiple concepts, for which I need to find a ‘common denominator’. To do so, I have to measure the similarity (or, inversely, the distance) between two concepts: if I find a number of very similar concepts in one text, I can be more confident about the subject.

Luckily, a lot of people have dealt with this ‘ontology-based semantic similarity measurement’. I gathered and studied a number of papers, and provide a quick overview of my findings below. See my literature list for a more complete overview.

DISCLAIMER: This is by no means intended to be an exhaustive overview. It’s short, and I’m sure I haven’t read every relevant paper. Due to time constraints, my priority is finding a suitable method to carry on with and checking whether the direction I’m heading in is OK (it seems that way). My overview deals with general approaches only, nothing too specific. If I’ve missed anything really obvious, I’d be grateful if you could leave a comment :).

There are two main approaches to ontology-based semantic similarity measurement: edge-based (also called the structural or hierarchical approach) and node-based (also called the information-content approach).

Edge-based

Edge-based approaches take the structure of the network as their basis, focusing on the connections between nodes and what those connections mean. In edge-based semantic similarity measurement there are three main principles, which are fortunately pretty much universally agreed upon (at least in the papers I found):

Shortest-path length between nodes
The most direct approach: the closer two nodes are in the network, the more similar they are. An important detail: path length is measured by counting only the hops over ‘is_a’ or ‘part_of’ relations. The most primitive semantic similarity measures use only path lengths; a minimal sketch of this idea follows below. This shortest-path measure can then be extended with the two principles that follow.
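As an illustration, here is a minimal sketch of a BFS-based shortest-path similarity. The graph format (a plain dict mapping each concept to its ‘is_a’/‘part_of’ neighbours) and the 1/(1+d) transform are my own assumptions, not prescribed by any of the papers:

```python
from collections import deque

def shortest_path_length(graph, start, goal):
    """Breadth-first search over 'is_a'/'part_of' links only.

    `graph` is a hypothetical dict mapping each concept to its neighbours,
    with links assumed stored in both directions so the search can travel
    up and down the hierarchy. Returns the number of hops on the shortest
    path, or None if the concepts are not connected.
    """
    if start == goal:
        return 0
    visited = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        for neighbour in graph.get(node, []):
            if neighbour == goal:
                return dist + 1
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append((neighbour, dist + 1))
    return None

def path_similarity(graph, a, b):
    """Turn distance into similarity: the shorter the path, the more
    similar the concepts (1 / (1 + d) is one common transform)."""
    d = shortest_path_length(graph, a, b)
    return 0.0 if d is None else 1.0 / (1 + d)
```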

Node ‘depth’ (aka specificity)
The deeper a node is (i.e. the farther away from the root), the more specific it is. In most papers this does not revolve around an individual node’s depth, but around the depth of the two nodes’ Least Common Subsumer (LCS). The LCS is the deepest ‘shared parent’ of two nodes, and its depth is defined as the number of nodes separating the LCS from the root concept. The deeper the LCS, the more similar the concepts. Additionally, the granularity of an individual concept has to be considered when calculating its specificity (more granular means more ‘subdivisions’, which means a more specific concept). This is usually modeled as an extra variable that influences the concept’s specificity, so a highly granular node will be less similar to a less granular node.
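A sketch of how the LCS and its depth could be computed, again over a hypothetical dict (here mapping each concept to its direct parents). The final similarity formula is the well-known Wu & Palmer measure, one depth-based option among several, and not necessarily the one the papers above prescribe:

```python
from collections import deque

def all_ancestors(parents, node):
    """Collect every ancestor reachable over 'is_a'/'part_of' parent links.
    `parents` maps a concept to a list of its direct parents (hypothetical)."""
    seen, stack = set(), [node]
    while stack:
        for parent in parents.get(stack.pop(), []):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def depth(parents, node, root):
    """Depth = the number of hops separating a node from the root."""
    queue, seen = deque([(node, 0)]), {node}
    while queue:
        current, d = queue.popleft()
        if current == root:
            return d
        for parent in parents.get(current, []):
            if parent not in seen:
                seen.add(parent)
                queue.append((parent, d + 1))
    return None  # root not reachable from this node

def lcs(parents, a, b, root):
    """Least Common Subsumer: the deepest ancestor shared by a and b."""
    shared = (all_ancestors(parents, a) | {a}) & (all_ancestors(parents, b) | {b})
    return max(shared, key=lambda n: depth(parents, n, root) or 0) if shared else None

def wu_palmer(parents, a, b, root):
    """Wu & Palmer-style similarity: the deeper the LCS, the more similar."""
    c = lcs(parents, a, b, root)
    if c is None:
        return 0.0
    da, db, dc = (depth(parents, n, root) for n in (a, b, c))
    if None in (da, db, dc):
        return 0.0
    return 2.0 * dc / (da + db) if (da + db) else 1.0
```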

Link direction
Ontologies are directed graphs: a connection between two nodes has a direction (a chair is_a furniture; it does not work the other way around). The more changes in direction the path between two nodes contains, the less similar the nodes are.
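A tiny sketch of the direction-change count, assuming a hypothetical encoding of a path as a list of ‘up’/‘down’ moves:

```python
def direction_changes(moves):
    """Count how often a path switches between going up and going down.

    `moves` is a hypothetical encoding of a path as a list of steps:
    'up' (towards a parent) or 'down' (towards a child). E.g. the path
    chair -> furniture -> table would be ['up', 'down'], giving 1 change.
    More changes in direction => less similar endpoints.
    """
    return sum(1 for prev, cur in zip(moves, moves[1:]) if prev != cur)
```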

Node-based

Node-based measures do not take the connections in the network as their main resource, but rather the information inside and surrounding the nodes. Text-mining and textual-analysis techniques can be applied here: for example, comparing both concepts’ textual data, comparing the concepts’ contexts, or comparing the similarity of the concepts’ LCS to the individual concepts. In these cases a node, or a node’s context, is frequently represented as a ‘bag of words’, disregarding any form of grammar or word order. Cosine similarity is a measure often used to compare two such texts, and a common approach to weigh the importance of words in a text is the TF-IDF (term frequency–inverse document frequency) measure.
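To make this concrete, here is a minimal pure-Python sketch of cosine similarity between two bags of words. For simplicity it uses raw term counts; with TF-IDF weighting the counts would be replaced by weighted values (the toy sentences are made up):

```python
import math
from collections import Counter

def cosine_similarity(bow_a, bow_b):
    """Cosine of the angle between two bag-of-words vectors (Counters)."""
    dot = sum(bow_a[term] * bow_b[term] for term in set(bow_a) & set(bow_b))
    norm_a = math.sqrt(sum(count * count for count in bow_a.values()))
    norm_b = math.sqrt(sum(count * count for count in bow_b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Toy example: tokenized textual data of two (made-up) concepts.
chair = Counter("a chair is a piece of seating furniture".split())
table = Counter("a table is a piece of furniture with a flat top".split())
print(cosine_similarity(chair, table))
```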

Other techniques involve counting the number of surrounding nodes (similar in spirit to checking a node’s granularity), measuring the depth of a node (the number of hops from the root node), etc. The way I see it, a node-based approach is a useful extension of an edge-based approach.

Both approaches are applied in a multitude of algorithms, some combining edge- and node-based techniques and others relying on just one of the two. For an overview of common algorithms and their use of edge- and/or node-based approaches, I highly recommend [1].

Now what?

What’s left for me is to formulate an approach: picking an edge-based similarity measurement algorithm and implementing it, and finding a node-based approach to extend it with.

For the edge-based algorithm, the bare essentials are already in place:

  • A breadth-first search algorithm to determine paths between nodes
  • A method of finding the common parent (LCS) of two nodes
  • A method of counting the depth of a node
  • A method of exploring the context of a node

Then I am looking to extend the structure-based approach with a similarity comparison of the linguistic data ‘surrounding’ each node: retrieve all surrounding nodes of the two nodes I’m comparing, throw all textual data of each node’s surroundings into a ‘bag of words’, and compare the two bags to one another. This is a node-based similarity measure (as opposed to the previously described edge-based measure). I will also look into combining this text-comparison system with keyword extraction.
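A sketch of how such a ‘context bag’ could be collected, assuming hypothetical `graph` and `texts` dicts (neighbour links and tokenized textual data per concept; both formats are my own choices):

```python
from collections import Counter, deque

def context_bag_of_words(graph, texts, node, radius=1):
    """Gather the textual data of a node and its surroundings into one bag.

    `graph` maps a concept to its neighbouring concepts and `texts` maps a
    concept to its tokenized textual data; both formats are hypothetical.
    `radius` controls how many hops of context to include.
    """
    bag, seen, queue = Counter(), {node}, deque([(node, 0)])
    while queue:
        current, dist = queue.popleft()
        bag.update(texts.get(current, []))  # add this node's words to the bag
        if dist < radius:
            for neighbour in graph.get(current, []):
                if neighbour not in seen:
                    seen.add(neighbour)
                    queue.append((neighbour, dist + 1))
    return bag
```

Two such context bags can then be compared with the cosine similarity sketch shown earlier, or handed to Gensim (below) for TF-IDF weighting first.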

For all the text-comparison tasks at hand (the BOW model, cosine similarity measures, TF-IDF), I came across an extremely useful Python framework: Gensim. It has all the features I could want to use, plus excellent documentation.

“Gensim is a Python framework designed to automatically extract semantic topics from documents, as naturally and painlessly as possible.”
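For illustration, a minimal Gensim pipeline covering exactly these steps (dictionary, BOW corpus, TF-IDF model, cosine similarity index); the toy documents are made-up stand-ins for concepts’ textual data:

```python
from gensim import corpora, models, similarities

# Tokenized textual data for a handful of (made-up) concepts.
documents = [
    "chair seating furniture four legs".split(),
    "table furniture flat top legs".split(),
    "mitochondrion organelle energy cell".split(),
]

dictionary = corpora.Dictionary(documents)               # word <-> id mapping
corpus = [dictionary.doc2bow(doc) for doc in documents]  # BOW vectors
tfidf = models.TfidfModel(corpus)                        # TF-IDF weighting
index = similarities.MatrixSimilarity(tfidf[corpus],
                                      num_features=len(dictionary))

# Cosine similarity of a query 'bag' against every document.
query = tfidf[dictionary.doc2bow("furniture with legs".split())]
print(list(index[query]))
```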

Literature

  1. Pesquita C, Faria D, Falcão AO, Lord P, Couto FM (2009). “Semantic similarity in biomedical ontologies”. PLoS Computational Biology 5(7): e1000443. doi:10.1371/journal.pcbi.1000443
  2. Al-Mubaid H, Nguyen HA (2006). “A cluster-based approach for semantic similarity in the biomedical domain”. Conf Proc IEEE Eng Med Biol Soc 1: 2713–2717.
  3. Bramantoro A, Krishnaswamy S, Indrawan M (2005). “A semantic distance measure for matching web services”. In Proceedings of the WISE 2005 Workshops, pp. 217–226.
  4. Gabrilovich E, Markovitch S (2007). “Computing semantic relatedness using Wikipedia-based explicit semantic analysis”. In Proceedings of IJCAI 2007.
  5. Rodríguez MA, Egenhofer MJ (2003). “Determining semantic similarity among entity classes from different ontologies”. IEEE Transactions on Knowledge and Data Engineering, March/April 2003, pp. 442–456.
  6. Lee WN, et al. (2008). “Comparison of ontology-based semantic-similarity measures”. AMIA Annu Symp Proc 2008: 384–388.
  7. Spasic I, Ananiadou S, McNaught J, Kumar A (2005). “Text mining and ontologies in biomedicine: making sense of raw text”. Briefings in Bioinformatics, pp. 239–251.
  8. Resnik P (1995). “Using information content to evaluate semantic similarity in a taxonomy”. In Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI’95), Vol. 1, pp. 448–453.
  9. Thiagarajan R, Manjunath G, Stumptner M (2008). “Computing semantic similarity using ontologies”. HP Labs.