Semantic tagging for crowd computing (extended abstract)
Azzurra Ragone; T. Di Noia; E. Di Sciascio
2010-01-01
Abstract
The recent surge of crowd computing initiatives on the web calls for smarter methodologies and tools to annotate, query, and explore repositories. Scalable techniques are needed that can also return approximate results with respect to a given query, as a ranked set of promising alternatives. In this paper we concentrate on the annotation and retrieval of software components, exploiting semantic tagging relying on DBpedia. We propose a new hybrid methodology to rank resources in this dataset. The inputs of our ranking system are (i) the DBpedia dataset and (ii) external information sources such as classical search engine results, social tagging systems, and Wikipedia-related information. We compare our approach with other RDF similarity measures, demonstrating the validity of our algorithm through an extensive evaluation involving real users.
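To make the hybrid idea concrete: the abstract describes combining evidence from the DBpedia graph with external sources (search engines, social tags) into a single ranking. The paper's actual formula is not given here, so the sketch below is a minimal illustrative assumption: a weighted linear combination of normalized per-source scores, with hypothetical weights and source names.

```python
# Minimal sketch of a hybrid ranking scheme in the spirit of the abstract.
# The combination rule, the weights, and the three evidence sources below
# are illustrative assumptions, not the paper's actual method.

from dataclasses import dataclass

@dataclass
class Evidence:
    """Normalized relevance signals (each in [0, 1]) for one DBpedia resource."""
    graph_score: float    # e.g. a similarity computed over the DBpedia RDF graph
    search_score: float   # e.g. derived from classical search engine results
    social_score: float   # e.g. tag co-occurrence in a social tagging system

def hybrid_score(ev: Evidence,
                 w_graph: float = 0.5,
                 w_search: float = 0.3,
                 w_social: float = 0.2) -> float:
    """Combine the signals into one ranking score (weights are assumptions)."""
    return (w_graph * ev.graph_score
            + w_search * ev.search_score
            + w_social * ev.social_score)

if __name__ == "__main__":
    # Hypothetical candidate resources for a query about software components.
    candidates = {
        "dbpedia:Software_component": Evidence(0.9, 0.6, 0.4),
        "dbpedia:Web_service": Evidence(0.5, 0.8, 0.7),
    }
    # Rank candidates by descending hybrid score, i.e. the "ranked set of
    # promising alternatives" the abstract mentions.
    for uri, ev in sorted(candidates.items(),
                          key=lambda kv: hybrid_score(kv[1]),
                          reverse=True):
        print(f"{uri}: {hybrid_score(ev):.2f}")
```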