That’s a tiny, full-fledged $35 (a.k.a. €45) computer, hooked up to my home internet connection 24/7, running a headless webserver (at http://pi.graus.nu; if it’s down, grausPi is off or my internet is down).
It also runs a revived @sem_web (remember him? My linked-data Twitter bot, now called @grausPi), so my Raspberry Pi introduces random linked-data-fact noise into the tweetosphere on an hourly basis. So if you too want to stay up to date with mind-bogglingly interesting facts like the one at the bottom of this post, check out @grausPi.
In the meantime, my Pi is waiting for a €5 WiFi USB dongle I ordered from DealExtreme, so it can free itself from half of the wires currently attached to it. Then I’ll think about some more projects to run on this Pi. I still have some servo motors, LEDs and LDRs (Light Dependent Resistors: simple light meters) left from Arduino days long gone, and apparently I can control the Pi’s GPIO pins through Python, as opposed to Arduino’s proprietary language, which is cool. Now for the take-home message:
Did you know the EntrezGene of Biomolecule Chicken ovalbumin upstream promoter-transcription factor is 7026?
— David’s Raspberry Pi (@grausPi), December 12, 2012
12/12/12 update: since @sem_web moved to live in my Raspberry Pi, I’ve renamed him @grausPi
The last couple of days I’ve spent working on my graduation project by working on a side project: @sem_web, a Twitter bot that queries DBPedia (Wikipedia’s ‘linked data’ equivalent) for knowledge.
@sem_web is able to recognize 249 concepts, defined by the DBPedia ontology, and sends SPARQL queries to the DBPedia endpoint to retrieve more specific information about them. Currently, this means that @sem_web can check an incoming tweet (mention) for known concepts, and then return an instance (example) of the concept, along with a property of this instance and that property’s value. An example of @sem_web’s output:
[findConcept] findConcept('video game')
[findConcept] Looking for concept: video game
[findInst] Seed: [u'http://dbpedia.org/class/yago/ComputerGame100458890',
[findInst] Has 367 instances.
[findInst] Instance: Fight Night Round 3
[findProp] Has 11 properties.
[findProp] [u'http://dbpedia.org/property/platforms', u'platforms']
[findVal] Property: platforms (has 1 values)
[findVal] Value: Xbox 360, Xbox, PSP, PS2, PS3
[findVal] Domain: [u'Thing', u'work', u'software']
[findVal] We're talking about a thing...
Fight Night Round 3 is a video game. Its platforms is Xbox 360, Xbox,
PSP, PS2, PS3.
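To give an idea of what happens under the hood, here is a minimal sketch of the kind of instance query that gets sent to the DBPedia endpoint, using only Python’s standard library. The function names and query shape are mine, illustrative rather than the bot’s actual code:

```python
import json
import urllib.parse
import urllib.request

DBPEDIA_ENDPOINT = "http://dbpedia.org/sparql"

def build_instance_query(class_uri, limit=10):
    """Build a SPARQL query for English-labeled instances (rdf:type) of a class."""
    return (
        "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> "
        "SELECT DISTINCT ?instance ?label WHERE { "
        f"?instance a <{class_uri}> ; rdfs:label ?label . "
        "FILTER (lang(?label) = 'en') } "
        f"LIMIT {limit}"
    )

def query_endpoint(sparql, endpoint=DBPEDIA_ENDPOINT):
    """Send the query to the endpoint and return the parsed JSON bindings."""
    params = urllib.parse.urlencode(
        {"query": sparql, "format": "application/sparql-results+json"}
    )
    with urllib.request.urlopen(f"{endpoint}?{params}", timeout=30) as resp:
        return json.load(resp)["results"]["bindings"]
```

Calling `build_instance_query("http://dbpedia.org/class/yago/ComputerGame100458890")` yields a query like the one behind the ‘video game’ log above; `query_endpoint` then does the actual HTTP round trip.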
This is how it works:
1. Look for words occurring in the tweet that match a given concept’s label.
2. If a concept is found: send a SPARQL query to retrieve an instance of the concept (an object with rdf:type concept).
3. If not found: send a SPARQL query to retrieve a subClass of the concept. Go to step 1 with subClass as concept.
4. If an instance is found: send SPARQL queries to retrieve a property, value and domain of the instance. The domain is used to determine whether @sem_web is talking about a human or a thing.
5. If no property with a value is found after several tries: go to step 2 to retrieve a new instance.
6. Compose a sentence (currently @sem_web has 4 different sentences) with the information (concept, instance, property, value).
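The steps above can be sketched roughly as follows, with the SPARQL lookups stubbed out as a toy dictionary; every name here is illustrative, not the bot’s real code:

```python
import random

# Toy stand-in for the SPARQL lookups: concept -> instance -> properties.
ONTOLOGY = {
    "video game": {
        "Fight Night Round 3": {"platforms": "Xbox 360, Xbox, PSP, PS2, PS3"},
    },
}

def find_concept(tweet, concepts):
    """Step 1: look for words in the tweet matching a known concept label."""
    return next((c for c in concepts if c in tweet.lower()), None)

def compose_reply(tweet):
    concept = find_concept(tweet, ONTOLOGY)
    if concept is None:
        return None
    # Step 2: retrieve an instance of the concept.
    instance, props = random.choice(list(ONTOLOGY[concept].items()))
    # Step 4: retrieve a property and its value.
    prop, value = random.choice(list(props.items()))
    # Step 6: compose a sentence from (concept, instance, property, value).
    return f"{instance} is a {concept}. Its {prop} is {value}."
```

With the toy data above, `compose_reply("Tell me about a video game")` reproduces the Fight Night Round 3 sentence from the log; in the real bot the dictionary lookups are live queries against the DBPedia endpoint.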
Besides replying, @sem_web posts a random tweet once an hour, by picking a random concept from the DBPedia ontology. Working on @sem_web allows me to get to grips with both the SPARQL query language and programming in Python (which is still something I haven’t done in a larger-than-20-lines-of-code way before).
What I’m working on next is a method to compare multiple concepts when @sem_web detects more than one in a tweet. Currently, this works by taking each concept and querying for all its superClasses. I store the path from the seed to the topClass (Entity) in a list, repeat the process for the next concept, and then compare both paths to the top to identify a common parentClass.
This is relevant for my graduation project as well, because a large part of determining the right subject for a text will be determining the ‘proximity’ or similarity of the different concepts in it. Still, that specific task of determining similarity or proximity of concepts is a much bigger thing; finding common superClasses is just a tiny step towards it. There are other interesting relationships to explore, for example partOf/sameAs relations. I’m curious to see what kind of information I will gather with this from larger texts.
An example of the concept comparison in action, from the following tweet:
Picked mendicot: @offbeattravel .. FYI, my Twitter bot
@vagabot found you by parsing (and attempting to answer)
travel questions off the Twitter firehose ..
The findCommonParent function takes two URIs and processes them, appending the superClasses found at each hop to a list, so I can track the number of ‘hops’ by the list index. As soon as the function has processed both URIs, it starts comparing the path lists to determine the first common parent.
Here you can see the first common parentClass is ‘Event’: 3 hops away from ‘ChangeOfLocation’ and 5 hops away from ‘Locomotion’. If it finds multiple superClasses, it processes multiple URIs at the same time (in one list). Anyway, this is just the basic stuff; there’s plenty more on my to-do list…
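Restricted to single superClasses, the idea behind findCommonParent can be sketched like this. The hierarchy fragment below is made up purely to mirror the hop counts of the ‘Event’ example, and the function names are mine:

```python
def path_to_top(cls, superclass_of):
    """Walk subClassOf links from cls up to the top class, collecting the path."""
    path = [cls]
    while cls in superclass_of:
        cls = superclass_of[cls]
        path.append(cls)
    return path

def find_common_parent(a, b, superclass_of):
    """Return the first class on both paths to the top, plus hop counts from a and b."""
    path_a = path_to_top(a, superclass_of)
    path_b = path_to_top(b, superclass_of)
    for hops_a, cls in enumerate(path_a):
        if cls in path_b:
            return cls, hops_a, path_b.index(cls)
    return None

# Invented hierarchy fragment (child -> superClass), not the real YAGO classes.
SUPER = {
    "ChangeOfLocation": "Change",
    "Change": "Happening",
    "Happening": "Event",
    "Locomotion": "TravelAction",
    "TravelAction": "Motion",
    "Motion": "Act",
    "Act": "Activity",
    "Activity": "Event",
    "Event": "Entity",
}
```

With this toy data, `find_common_parent("ChangeOfLocation", "Locomotion", SUPER)` yields ‘Event’ at 3 and 5 hops respectively, matching the example above.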
While the major part of the functionality I’m building for @sem_web will be directly usable for my thesis project, I haven’t been sitting still on more directly thesis-related things either. I’ve set up a local RDF store (a Sesame store) on my laptop with all the needed bio-ontologies; RDFLib’s in-memory stores were clearly not up to the large ontologies I had to load each time. This also means I have to structure my queries better, as not all information is available at any given time. I also, unfortunately, learned that one of my initial plans, finding the shortest path between two nodes in an RDF store to determine ‘proximity’, is actually quite a complicated task. Next I will focus on improving the concept comparison, taking more properties into account than only rdfs:subClassOf, and I’ll also work on extracting keywords (which I haven’t, but should have, arranged testing data for)… Till next time!
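To illustrate why shortest paths are the complicated part: in memory the algorithm itself is a textbook breadth-first search, as in this sketch over a toy adjacency map (all names assumed, nothing store-specific). The trouble is that a real triple store holds millions of edges server-side, so you can’t simply materialize the graph like this:

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Naive BFS over an undirected view of the triples.
    graph maps each node to the set of nodes it shares a triple with."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no connection between start and goal
```

Each SPARQL round trip would have to expand one frontier of this search, which is exactly where the approach stops being practical for large bio-ontologies.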
But mostly, the last few weeks I’ve been learning SPARQL, improving my Python skills, and getting a better and more concrete idea of the possible approaches for my thesis project by working on @sem_web.