Open Data

PR2 robot working in a kitchen lab with researchers

The CoAI JRC offers numerous open data and benchmarking resources. Beyond making data and publications openly accessible, we aim to develop a framework that provides episodic memory data, joint human-robot data, and robot activity data as an open knowledge base in standard data formats, fostering reuse by the research community.

The Socio-physical Model of Activities (SOMA) is an ontological model, developed from a cognitive perspective, that represents the physical and social context of everyday activities. It is currently the most comprehensive ontology for robotics and everyday activities.
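Since SOMA is published as an OWL ontology, it can be inspected with standard Semantic Web tooling. Below is a minimal Python sketch using rdflib; the local file name "SOMA.owl" is an assumption and stands in for whichever release file you have downloaded from the SOMA project.

```python
# Minimal sketch: inspecting the SOMA ontology with rdflib.
# "SOMA.owl" is an assumed local copy of the ontology release.
from rdflib import Graph
from rdflib.namespace import OWL, RDF, RDFS

g = Graph()
g.parse("SOMA.owl", format="xml")  # SOMA is distributed as RDF/XML (OWL)

# List named classes together with their direct superclasses.
for cls in g.subjects(RDF.type, OWL.Class):
    for parent in g.objects(cls, RDFS.subClassOf):
        print(f"{cls} rdfs:subClassOf {parent}")
```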

openEASE is a web-based knowledge service providing robot and human activity data. It contains semantically annotated data of manipulation actions, including the environment the agent is acting in, the objects it manipulates, the task it performs, and the behavior it generates.
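As a toy illustration of what such an annotation covers (not openEASE's actual data model or query interface), the sketch below represents one manipulation episode as a plain Python structure with the fields described above, together with the kind of question the annotations make answerable. All field names and values are invented for illustration.

```python
# Toy illustration only: a plausible shape for one semantically annotated
# manipulation episode; the schema is an assumption, not openEASE's.
episode = {
    "agent": "PR2",
    "environment": "kitchen_lab",
    "task": "PickAndPlace",
    "objects": [{"name": "cup_1", "type": "Cup"}],
    "behavior": [
        {"action": "Reaching", "start": 12.3, "end": 14.1},
        {"action": "Grasping", "start": 14.1, "end": 15.0},
        {"action": "Lifting",  "start": 15.0, "end": 16.2},
    ],
}

# Example question: when did the grasp happen, and how long did it take?
grasp = next(b for b in episode["behavior"] if b["action"] == "Grasping")
print(f"Grasp from {grasp['start']}s to {grasp['end']}s "
      f"({grasp['end'] - grasp['start']:.1f}s)")
```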

HOBBIT is a distributed benchmarking platform designed for the unified execution of benchmarks for Linked Data solutions. It follows the FAIR principles and is the first such platform able to scale to real-world scenarios for Big Linked Data solutions.

Overview paper: Michael Röder, Dennis Kuchelev, Axel-Cyrille Ngonga Ngomo, “HOBBIT: A platform for benchmarking Big Linked Data,” in Data Science v. 3(1), 2020.
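To make the idea of executing a benchmark concrete, here is a toy Python sketch that times a fixed query workload against a system under test and reports throughput and failures. HOBBIT itself orchestrates containerized benchmark and system components across a distributed platform; this illustrates only the kind of measurement involved, not HOBBIT's API.

```python
# Toy benchmark runner: send a workload to a system under test (any callable)
# and report throughput, average latency, and error count.
import time

def run_workload(system, queries):
    latencies, failures = [], 0
    for q in queries:
        start = time.perf_counter()
        try:
            system(q)
        except Exception:
            failures += 1
        latencies.append(time.perf_counter() - start)
    total = sum(latencies)
    return {
        "queries": len(queries),
        "failures": failures,
        "queries_per_second": len(queries) / total if total else float("inf"),
        "avg_latency_s": total / len(queries),
    }

# Usage with a stand-in system under test:
print(run_workload(lambda q: q.upper(), ["SELECT * WHERE { ?s ?p ?o }"] * 100))
```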

LIMES is a state-of-the-art link discovery framework developed by the DICE Data Science Group. It uses dedicated machine learning techniques to compute links time-efficiently and with high quality, serving applications built on Linked Data.

Overview paper: Ngonga Ngomo A-C, Sherif MA, Georgala K, et al., "LIMES: A Framework for Link Discovery on the Semantic Web," in Künstliche Intelligenz v. 35, 2021.
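For readers unfamiliar with link discovery, the toy sketch below shows the underlying task: proposing owl:sameAs links between resources of two datasets whose labels are sufficiently similar. LIMES solves this with optimized metrics and execution planning rather than the naive pairwise comparison used here, and the datasets and threshold below are made up for illustration.

```python
# Toy link discovery: match resources across two datasets by label similarity.
from difflib import SequenceMatcher

source = {"ex:Berlin": "Berlin", "ex:Paris": "Paris"}
target = {"dbr:Berlin": "Berlin, Germany", "dbr:Lyon": "Lyon"}

def similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

THRESHOLD = 0.5  # illustrative acceptance threshold
links = [(s, t) for s, s_label in source.items()
                for t, t_label in target.items()
                if similarity(s_label, t_label) >= THRESHOLD]

for s, t in links:
    print(f"{s} owl:sameAs {t}")
```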

GERBIL is an evaluation framework for semantic entity annotation, also provided by the DICE Data Science Group. The rationale behind the framework is to provide developers, end users, and researchers with easy-to-use interfaces that allow for the agile, fine-grained, and uniform evaluation of annotation tools on multiple datasets.

Overview paper: Usbeck R, Röder M, Ngonga Ngomo A-C, Baron C, Both A, Brümmer M, Ceccarelli D, Cornolti M, Cherix D, Eickmann B, Ferragina P, Lemke C, Moro A, Navigli R, Piccinno F, Rizzo G, Sack H, Speck R, Troncy R, Waitelonis J, Wesemann L, “GERBIL: General Entity Annotator Benchmarking Framework,” in Gangemi A, Leonardi S, Panconesi A (eds) Proceedings of the 24th International Conference on World Wide Web, WWW 2015, Florence, Italy, May 18-22, 2015, ACM.
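The toy sketch below illustrates the kind of evaluation GERBIL automates: comparing an annotator's entity links against a gold standard and computing micro-averaged precision, recall, and F1. GERBIL performs this uniformly across many datasets and annotators through its web-based interfaces; the documents and annotations here are invented for illustration.

```python
# Toy micro-averaged evaluation of entity annotations against a gold standard.
gold = {
    "doc1": {("Berlin", "dbr:Berlin"), ("Germany", "dbr:Germany")},
    "doc2": {("Obama", "dbr:Barack_Obama")},
}
predicted = {
    "doc1": {("Berlin", "dbr:Berlin")},
    "doc2": {("Obama", "dbr:Barack_Obama"), ("US", "dbr:United_States")},
}

tp = sum(len(gold[d] & predicted.get(d, set())) for d in gold)
fp = sum(len(predicted[d] - gold.get(d, set())) for d in predicted)
fn = sum(len(gold[d] - predicted.get(d, set())) for d in gold)

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(f"P={precision:.2f} R={recall:.2f} F1={f1:.2f}")
```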