A Scalable Benchmark for OBDA Systems: Preliminary Report

Diego Calvanese, Davide Lanti, Martin Rezk, Mindaugas Slusnys, and Guohui Xiao

Proc. of the 3rd Int. Workshop on OWL Reasoner Evaluation (ORE 2014). Volume 1207 of CEUR Workshop Proceedings, http://ceur-ws.org/. 2014.

In ontology-based data access (OBDA), the aim is to provide a high-level conceptual view over potentially very large (relational) data sources by means of a mediating ontology. The ontology is connected to the data sources through a declarative specification given in terms of mappings that relate each (class and property) symbol in the ontology to an (SQL) view over the data. Although prototype OBDA systems providing the ability to answer SPARQL queries over the ontology are available, a significant challenge remains: performance. To properly evaluate OBDA systems, benchmarks tailored towards the requirements of this setting are needed. However, OWL benchmarks, which have been developed to test the performance of generic SPARQL query engines, fail at 1) exhibiting a complex real-world ontology, 2) providing challenging real-world queries, 3) providing large amounts of real-world data and the possibility to test a system over data of increasing size, and 4) capturing important OBDA-specific measures related to the rewriting-based query answering approach of OBDA. In this work, we propose a novel benchmark for OBDA systems based on a real-world use case adopted in the EU project Optique. We validate our benchmark on the system Ontop, showing that it is more adequate than previous benchmarks that were not tailored to OBDA.
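The mapping-based rewriting that the abstract alludes to can be illustrated with a toy sketch. Here each ontology symbol is associated with an SQL view, and answering a query over an ontology symbol amounts to unfolding it into SQL over the sources. The mapping entries, table names, and the `unfold` helper are invented for illustration and are not taken from the paper or from Ontop:

```python
# Toy OBDA-style mapping: each ontology class/property symbol is
# related to an SQL view over the underlying relational data.
# (All names here are hypothetical examples.)
MAPPING = {
    ":Wellbore": "SELECT id AS x FROM wellbore",
    ":completionDate": "SELECT id AS x, completed AS y FROM wellbore",
}


def unfold(symbol: str) -> str:
    """Rewrite a single ontology symbol into the SQL view given by the mapping."""
    try:
        return MAPPING[symbol]
    except KeyError:
        raise ValueError(f"no mapping for ontology symbol {symbol}")


# A query over the ontology symbol :Wellbore is answered by
# executing the unfolded SQL directly on the data source.
sql = unfold(":Wellbore")
print(sql)
```

Real OBDA systems such as Ontop perform a considerably more involved rewriting (handling ontology axioms, joins across mappings, and SQL optimization), which is precisely why OBDA-specific performance measures are needed.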

   title = "A Scalable Benchmark for OBDA Systems: Preliminary Report",
   year = "2014",
   author = "Diego Calvanese and Davide Lanti and Martin Rezk and Mindaugas Slusnys and Guohui Xiao",
   booktitle = "Proc. of the 3rd Int. Workshop on OWL Reasoner Evaluation (ORE 2014)",
   pages = "36--43",
   volume = "1207",
   publisher = "CEUR-WS.org",
   series = "CEUR Workshop Proceedings, http://ceur-ws.org/",