Builds Better

Background Information

Knowledge Representation and Reasoning


The graph data standard RDF underpins the initiative to build a Web of data as proposed by the W3C. Its firm foundation on XML Schema Datatypes has been instrumental in the standardisation of data formats for storage and exchange.

RDF Schema and the Web Ontology Language (OWL 2) define standards for representing structured knowledge (ontologies) and allow inferences to be drawn (entailed) from the knowledge, a process known as reasoning.

For illustrative examples, see BioPortal in the bio-medical field (clinical terms, gene tracking, human diseases, drug interactions, etc.) or the Financial Industry Business Ontology (FIBO) in the commercial field.

Barriers to adoption

Ontologies that exploit reasoning are less widely used than the basic graph standards. There is no single reason for this, but I believe the range of considerations and choices is itself a barrier. For example:

Little Data vs. Small Data

As defined by the Small Data Group:

Small data connects people with timely, meaningful insights (derived from big data and/or “local” sources), organized and packaged – often visually – to be accessible, understandable, and actionable for everyday tasks.

Little Data contrasts with this in both the origin and scope of the data. For clarity, we define Little Data as:

Little data are collections of facts and rules that are highly specific to a localised domain. As such they may not be (easily) derivable from other sources and are curated manually by an individual familiar with the domain.
For example, the fact “Jane drives a Renault Clio” is highly relevant when advising on a suitable garage for Jane to get her car serviced. However, unless we have access to vehicle licensing records, insurance records, or similar, we need to rely on Jane updating this information if she buys a new car.



Inconsistency

Identifying the “correct” repairs that restore consistency when using expressive logics is non-trivial. During my PhD I developed techniques that allow reasoning in the presence of inconsistencies.

To illustrate some of these challenges, we summarise the origins of inconsistencies in the common W3C knowledge representation languages, observing that more expressive logics provide more opportunities to introduce (or expose) inconsistencies.


RDF

The semantics are formalised by RDF Interpretations and support datatype entailment and RDF entailment.

Informally, inconsistencies occur only as a result of ill-typed literals, that is, literal values that cannot be mapped into the value space of their specified datatype. For example, for the literal "a"^^xsd:integer the lexical form a cannot be mapped to an integer. Reasoners are not required to support all datatypes (only rdf:langString and xsd:string are mandatory), so inconsistencies are always specified relative to a set of recognised datatypes.

An RDF graph G is said to be D-Unsatisfiable, where D denotes some set of IRIs identifying recognised datatypes, if it contains an ill-typed literal. For example, the graph:

<example:foo> <example:bar> "a"^^xsd:integer .
is {xsd:integer}-Unsatisfiable.
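This check can be sketched in plain Python. The datatype table and helper names below are illustrative assumptions, not part of any standard API: a literal is ill-typed when its lexical form fails to parse into the datatype's value space.

```python
# Hypothetical sketch: map each recognised datatype IRI to a parser for its
# lexical space; a literal is ill-typed if parsing raises an error.
XSD = "http://www.w3.org/2001/XMLSchema#"

def parse_boolean(lex):
    if lex in ("true", "1"):
        return True
    if lex in ("false", "0"):
        return False
    raise ValueError(lex)

RECOGNISED = {
    XSD + "integer": int,          # int() rejects non-integer lexical forms
    XSD + "boolean": parse_boolean,
}

def ill_typed(lexical_form, datatype_iri):
    """True if the literal cannot be mapped into its datatype's value space."""
    parser = RECOGNISED.get(datatype_iri)
    if parser is None:
        return False  # unrecognised datatypes are ignored, not inconsistent
    try:
        parser(lexical_form)
        return False
    except ValueError:
        return True

def d_unsatisfiable(graph):
    """graph: iterable of (subject, predicate, (lexical, datatype)) triples."""
    return any(ill_typed(lex, dt) for _, _, (lex, dt) in graph)

g = [("example:foo", "example:bar", ("a", XSD + "integer"))]
print(d_unsatisfiable(g))  # True: "a" is not a valid xsd:integer
```

Note that unrecognised datatypes never trigger an inconsistency; the graph is only D-Unsatisfiable relative to the datatypes the checker actually recognises.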

Inconsistency can be introduced by incompatible typing. For example, the graph:

_:x rdf:type xsd:integer .
_:x rdf:type xsd:boolean .

is {xsd:integer, xsd:boolean}-Unsatisfiable, since there is no value in both the integer and boolean value spaces.
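Detecting this kind of clash only needs the asserted types of each node and a table of disjoint value spaces. A minimal sketch, assuming a hand-coded disjointness table (a real reasoner would derive this from the datatype map):

```python
# Hypothetical sketch: a node typed with two recognised datatypes whose
# value spaces are disjoint makes the graph D-Unsatisfiable.
from itertools import combinations

XSD = "http://www.w3.org/2001/XMLSchema#"
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

# Assumed, partial table of datatype pairs with disjoint value spaces.
DISJOINT = {frozenset({XSD + "integer", XSD + "boolean"}),
            frozenset({XSD + "boolean", XSD + "string"})}

def incompatible_typing(triples):
    """triples: iterable of (s, p, o). Returns nodes typed with disjoint datatypes."""
    types = {}
    for s, p, o in triples:
        if p == RDF_TYPE:
            types.setdefault(s, set()).add(o)
    clashing = []
    for node, datatypes in types.items():
        for a, b in combinations(sorted(datatypes), 2):
            if frozenset({a, b}) in DISJOINT:
                clashing.append(node)
    return clashing

g = [("_:x", RDF_TYPE, XSD + "integer"),
     ("_:x", RDF_TYPE, XSD + "boolean")]
print(incompatible_typing(g))  # ['_:x']
```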

As illustrated by the examples, inconsistencies generated by RDF entailment involve very small numbers of statements (typically just one or two) and are fairly straightforward to locate and fix.

RDF Schema

The semantics are formalised by RDFS Interpretations and extend the RDF semantics with RDFS entailment.

The notion of property range can introduce inconsistencies if literals do not match a constraint. For example, the graph:

<example:foo> <example:bar> "25"^^xsd:integer .
<example:bar> rdfs:range xsd:string .

is {xsd:integer,xsd:string}-Unsatisfiable, as the integer value 25 does not belong to the string value space.
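A range check of this kind can be sketched as follows. The SUBSUMED_BY table below is a hand-coded assumption standing in for proper datatype subsumption (e.g. every integer value is also a decimal value):

```python
# Hypothetical sketch: flag literal objects whose datatype's value space
# falls outside a declared rdfs:range datatype of the property.
XSD = "http://www.w3.org/2001/XMLSchema#"
RDFS_RANGE = "http://www.w3.org/2000/01/rdf-schema#range"

# Assumed, partial table: datatypes whose value spaces contain this one's.
SUBSUMED_BY = {XSD + "integer": {XSD + "integer", XSD + "decimal"},
               XSD + "string": {XSD + "string"}}

def range_violations(triples):
    """triples: (s, p, o), where a literal o is a (lexical, datatype) pair."""
    ranges = {s: o for s, p, o in triples if p == RDFS_RANGE}
    violations = []
    for s, p, o in triples:
        if isinstance(o, tuple):          # literal object
            lex, dt = o
            required = ranges.get(p)
            if required and required not in SUBSUMED_BY.get(dt, {dt}):
                violations.append((s, p, o))
    return violations

g = [("example:foo", "example:bar", ("25", XSD + "integer")),
     ("example:bar", RDFS_RANGE, XSD + "string")]
print(range_violations(g))
```

Running this reports the first triple: the integer literal clashes with the declared xsd:string range.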

Again, inconsistencies usually involve small numbers of statements, but they may be harder to track down if the reasoner supports user-defined datatypes (see XML Schema Datatypes in RDF and OWL for a discussion).


OWL 2

There are two alternative semantics defined for OWL 2: the Direct Semantics and the RDF-Based Semantics.

Informally, the Direct Semantics offer a computational advantage over the RDF-Based Semantics. The OWL 2 Structural Specification and Functional-Style Syntax describes a subset of the full OWL 2 language (called OWL 2 DL) that ensures decidability under the Direct Semantics.

Inconsistencies occurring in OWL ontologies can be difficult to pin down and there may be many choices of possible repair. For example, the ontology:

    Declaration(Class(:Man))
    Declaration(Class(:Dog))
    Declaration(Class(:Dachshund))
    Declaration(NamedIndividual(:fido))
    SubClassOf(:Dachshund :Dog)
    DisjointClasses(:Man :Dog)
    ClassAssertion(:Man :fido)
    ClassAssertion(:Dachshund :fido)
is inconsistent since the individual :fido cannot be both a :Man and a :Dog.

Even for this very simple ontology there are several possible repairs: remove one of the class assertions, remove the disjointness axiom, or remove the subclass axiom (allowing dachshunds not to be dogs).

In more complex ontologies inconsistencies may involve many different axioms.
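For ontologies restricted to subclass, disjointness, and class-assertion axioms, a clash like :fido's can be found by closing the class hierarchy and testing each individual against the disjointness axioms. A sketch under those (much weaker than OWL 2 DL) assumptions:

```python
# Hypothetical sketch: detect individuals asserted into two disjoint
# classes, after closing the subclass hierarchy.

def superclasses(cls, subclass_of):
    """Reflexive-transitive superclasses of cls."""
    seen, stack = set(), [cls]
    while stack:
        c = stack.pop()
        if c not in seen:
            seen.add(c)
            stack.extend(subclass_of.get(c, ()))
    return seen

def clashes(subclass_of, disjoint, assertions):
    """Return (individual, classA, classB) for each disjointness violation."""
    found = []
    for ind, classes in assertions.items():
        closed = set().union(*(superclasses(c, subclass_of) for c in classes))
        for a, b in disjoint:
            if a in closed and b in closed:
                found.append((ind, a, b))
    return found

subclass_of = {":Dachshund": [":Dog"]}
disjoint = [(":Man", ":Dog")]
assertions = {":fido": [":Man", ":Dachshund"]}
print(clashes(subclass_of, disjoint, assertions))  # [(':fido', ':Man', ':Dog')]
```

Each reported clash pinpoints a small set of axioms (the class assertions, the disjointness axiom, and the subclass chain) any one of which could be removed as a repair; choosing among those repairs is the hard part.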