ABSTRACTRON


Conceptual Abstraction in Humans and Robots



Everyday concepts, such as the ability of a glass to hold a liquid or of a plate to support a cake, are learnt effortlessly by children in early childhood through simple interactions with the physical world and everyday activities in the environment they encounter and learn to understand. Bringing this level of effortless learning to artificial systems, both physical (robots) and non-physical (software artefacts), is one of the big challenges of current efforts in Artificial Intelligence (AI).

This line of research, known as the cognitive turn in AI, is in full swing, although significant challenges arise from a lack of understanding of how to formally analyse some of the key concepts of cognition enabling such learning processes. In cognitive science, a close relationship is hypothesised between image schemas, i.e. simple yet abstract notions (such as containment and support) which are learnt in the earliest phases of human conceptual development and which bootstrap higher conceptual thinking and metaphorical thought, and affordances, i.e. the potential actions on objects in an environment (such as putting the cake on the plate).

An understanding of these notions, for artificial agents and humans alike, depends deeply on an understanding of how changes in spatial conditions impact the affordances of objects or agents in a given environment.
A focal point of this project, therefore, lies at the intersection of embodied cognition, the ontology of affordances and image schemas, and knowledge-enhanced frameworks in robotics: here, the whole pipeline from an encoding of innate knowledge of physics, through event or activity recognition and interpretation, to planning and agency comes into play. Some of the main challenges encountered here are:
  1. how to provide systematic and ontologically sound bridges from sensory data to affordances and abstract ontological analysis,
  2. how to reason logically with common-sense abstractions derived from interactions in a simple robotic world (see the illustrative formalisation after this list), and
  3. how to validate the fruitfulness of enhancing the knowledge layer for the robot's learning capabilities in a detailed experimental setup.
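To illustrate the second challenge, a containment image schema and the affordance it underwrites might be encoded along the following lines. This is only a minimal sketch, assuming an RCC-8-style topological predicate NTPP (non-tangential proper part), a function reg mapping objects to the spatial regions they occupy, and a possibility modality ◇ read as "in some reachable state"; the actual logic developed in the project may differ.

    \mathit{Contained}(o, c) \equiv \mathsf{NTPP}(\mathit{reg}(o), \mathit{reg}(c))
    \mathit{AffordsContainment}(c) \equiv \Diamond\, \exists o \, (\mathit{Movable}(o) \wedge \mathit{Contained}(o, c))

Read this way, the affordance of a cup to contain a ball is the possibility of bringing about a containment configuration, which is exactly the kind of statement the robot's reasoning layer should be able to derive from, and test against, its interactions.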

The specific core aims of ABSTRACTRON are therefore fourfold:

  1. To study in detail the ontology of agents' actions and capabilities and their relation to image schemas, with a particular emphasis on robotics in changing spatial environments.
  2. To extend and modify formal logical approaches to image schemas and affordances with robotics-specific representation capabilities.
  3. To develop a workflow and layered architecture to extract higher-level conceptual descriptions from sensory data and robotic actions that can be linked up with automatically learned as well as human-curated formalisations of image schemas (see the sketch after this list).
  4. To provide a detailed validation of the approach through a carefully designed simple robotic world in which transfer learning and acquisition of conceptual abstractions can be systematically verified.
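As an indication of how the layered architecture of aim 3 might look at its lowest rungs, the following Python sketch maps geometric object observations in a single scene onto image-schematic facts that a reasoner or a curated ontology could then consume. All names (ObjectState, annotate_image_schemas, the bounding-box helpers) are illustrative placeholders rather than the project's actual interfaces, and the geometric tests are deliberately crude stand-ins for proper spatial calculi.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class ObjectState:
        # A time-stamped object observation from perception (hypothetical schema):
        # axis-aligned bounding box given as (xmin, ymin, zmin, xmax, ymax, zmax).
        time: float
        name: str
        bbox: tuple

    def _inside(a: tuple, b: tuple) -> bool:
        # True if box a lies entirely within box b (crude stand-in for an NTPP test).
        return all(a[i] >= b[i] for i in range(3)) and all(a[i] <= b[i] for i in range(3, 6))

    def _on_top_of(a: tuple, b: tuple) -> bool:
        # True if box a rests on the upper face of box b and overlaps it horizontally.
        touches = abs(a[2] - b[5]) < 1e-3
        overlaps = a[0] < b[3] and a[3] > b[0] and a[1] < b[4] and a[4] > b[1]
        return touches and overlaps

    def annotate_image_schemas(scene: List[ObjectState]) -> List[str]:
        # Map geometric relations in one time slice to CONTAINMENT / SUPPORT facts.
        facts = []
        for a in scene:
            for b in scene:
                if a is b:
                    continue
                if _inside(a.bbox, b.bbox):
                    facts.append(f"CONTAINMENT({b.name}, {a.name}) @ t={a.time}")
                elif _on_top_of(a.bbox, b.bbox):
                    facts.append(f"SUPPORT({b.name}, {a.name}) @ t={a.time}")
        return facts

    # Example scene: a cake resting on a plate and a ball inside a cup.
    scene = [
        ObjectState(1.0, "plate", (0.00, 0.00, 0.00, 0.30, 0.30, 0.02)),
        ObjectState(1.0, "cake",  (0.05, 0.05, 0.02, 0.25, 0.25, 0.10)),
        ObjectState(1.0, "cup",   (0.50, 0.50, 0.00, 0.60, 0.60, 0.12)),
        ObjectState(1.0, "ball",  (0.52, 0.52, 0.01, 0.58, 0.58, 0.07)),
    ]
    print(annotate_image_schemas(scene))

Running the example prints SUPPORT(plate, cake) and CONTAINMENT(cup, ball), i.e. exactly the kind of symbolic output that the higher, logic-based layers of the architecture would take as input.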

The key deliverable of ABSTRACTRON will thus be a proof-of-concept theoretical framework and implementation providing an interface between robotic environments, the learnability of abstractions derived from embodied interpretations of interactions in such environments, and logical reasoning approaches for such interactions.

ABSTRACTRON Project Partners (2024–2027)

Oliver Kutz & Angelika Peer: Free University of Bozen-Bolzano, Faculty of Engineering
Justus Piater: University of Innsbruck, Department of Computer Science

External Project Partners:

Stefano Borgo: Institute of Cognitive Sciences and Technologies, Laboratory for Applied Ontology
Maria M. Hedblom: Jönköping AI Lab (JAIL), Jönköping University