Highly personalised apps can be created by dynamically aggregating many small but relevant data sources to answer queries and draw context-sensitive inferences. Our tools simplify the self-curation of this data, which may include a substantial proportion of personal information we refer to as Little Data*: information highly specific to an individual and difficult to derive from external sources.
Building and maintaining consistent data within an organisation is a huge challenge. The use of Enterprise Knowledge Graphs to standardise the vocabularies used within data sets, enrich the structure of the information they contain, and automate business logic is still in its infancy.
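To make the vocabulary-standardisation idea concrete, here is a minimal sketch using the open-source Python libraries rdflib and owlrl (an illustrative choice, not a description of our tools' internals; all data set and property names are invented). Two departments record email addresses under different property names; an ontology maps both onto one enterprise-wide term, and an RDFS closure materialises the standardised triples.

```python
# Sketch: aligning two departmental vocabularies under a shared
# enterprise term (all names are invented for illustration).
from rdflib import Graph, Namespace
import owlrl

EX = Namespace("http://example.org/enterprise#")

g = Graph()
g.parse(data="""
@prefix ex:   <http://example.org/enterprise#> .
@prefix hr:   <http://example.org/hr#> .
@prefix crm:  <http://example.org/crm#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

# Two departments record the same fact under different names.
hr:staff42   hr:emailAddress "alice@example.org" .
crm:contact7 crm:email       "bob@example.org" .

# The enterprise ontology maps both onto one standard property.
hr:emailAddress rdfs:subPropertyOf ex:email .
crm:email       rdfs:subPropertyOf ex:email .
""", format="turtle")

# The RDFS closure materialises ex:email triples for both records.
owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(g)

for subject, email in g.subject_objects(EX.email):
    print(subject, email)
```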
Our tools facilitate this process through rapid prototyping: exploring how information from multiple data sources can be combined to derive and display new information, and reporting any standards-conformance issues or logical inconsistencies that are exposed (see the sketch below).
* In contrast to Big Data, where powerful insights are obtained by applying machine learning to large volumes of data aggregated from many sources.
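As a sketch of that combine-and-derive step, the following fragment merges two invented sources with rdflib and runs a SPARQL query that joins information across them; the vocabulary and data are hypothetical.

```python
# Sketch: combining two small sources and querying across the
# merged graph (data and vocabulary are invented).
from rdflib import Graph

people = Graph().parse(data="""
@prefix ex: <http://example.org/> .
ex:alice ex:worksFor ex:acme .
""", format="turtle")

companies = Graph().parse(data="""
@prefix ex: <http://example.org/> .
ex:acme ex:locatedIn "Edinburgh" .
""", format="turtle")

combined = people + companies  # rdflib merges the two graphs

# A cross-source join: where does each person's employer operate?
q = """
PREFIX ex: <http://example.org/>
SELECT ?person ?city WHERE {
    ?person ex:worksFor ?org .
    ?org ex:locatedIn ?city .
}"""
for person, city in combined.query(q):
    print(person, city)
```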
The tools allow multiple data sources to be imported and combined, new terminologies (ontological structures) or rules to be added, and the resulting inferences to be computed, all without the need for complex server installation and configuration.
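The fragment below illustrates server-free inference under the same assumptions: owlrl computes an OWL-RL closure entirely in-process, deriving a new triple from an invented transitive partOf terminology.

```python
# Sketch: in-process inference with owlrl. No triple store or
# reasoning server is required; everything runs inside Python.
from rdflib import Graph
import owlrl

g = Graph().parse(data="""
@prefix ex:  <http://example.org/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

ex:partOf a owl:TransitiveProperty .

ex:cpu         ex:partOf ex:motherboard .
ex:motherboard ex:partOf ex:laptop .
""", format="turtle")

owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(g)

# The closure adds the inferred triple: ex:cpu ex:partOf ex:laptop.
q = "PREFIX ex: <http://example.org/> ASK { ex:cpu ex:partOf ex:laptop }"
print(g.query(q).askAnswer)  # True
```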
On import of each data set, vocabulary, or ontology, any issues relating to standards compliance are reported.
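One way such import-time reporting can work, sketched with the open-source pyshacl library (an assumption, not necessarily our implementation): a SHACL shape declares what a conformant record looks like, and validation returns a human-readable report of violations. Plain syntax errors surface even earlier, when the file is parsed.

```python
# Sketch: an import-time compliance check using pyshacl.
# The shape and data are invented for illustration.
from rdflib import Graph
from pyshacl import validate

data = Graph().parse(data="""
@prefix ex: <http://example.org/> .
ex:alice a ex:Person .     # missing the required ex:email
""", format="turtle")

shapes = Graph().parse(data="""
@prefix ex: <http://example.org/> .
@prefix sh: <http://www.w3.org/ns/shacl#> .
ex:PersonShape a sh:NodeShape ;
    sh:targetClass ex:Person ;
    sh:property [ sh:path ex:email ; sh:minCount 1 ] .
""", format="turtle")

conforms, _, report = validate(data, shacl_graph=shapes)
if not conforms:
    print(report)  # human-readable list of compliance violations
```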
In real-world environments, some information may be inaccurate, missing, or contradictory. Contradictory information in a knowledge base can result in logical inconsistencies that prevent any meaningful inferences from being drawn until the issue has been resolved.
Exposing these inconsistencies can be very valuable, as the offending information can then be corrected and the ontology repaired, ensuring the validity of subsequent inferences.
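The sketch below shows the simplest case: an individual asserted to belong to two disjoint classes. A SPARQL query stands in for a full OWL consistency check (a deliberate simplification), and the data is invented; once the clash is reported, the user retracts the wrong assertion and inference can resume.

```python
# Sketch: exposing and repairing a contradiction, where an
# individual is asserted to belong to two disjoint classes.
from rdflib import Graph, Namespace, RDF

g = Graph().parse(data="""
@prefix ex:  <http://example.org/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

ex:Cat owl:disjointWith ex:Dog .
ex:felix a ex:Cat .
ex:felix a ex:Dog .    # contradicts the disjointness axiom
""", format="turtle")

find_clashes = """
PREFIX owl: <http://www.w3.org/2002/07/owl#>
SELECT ?x ?c1 ?c2 WHERE {
    ?c1 owl:disjointWith ?c2 .
    ?x a ?c1 . ?x a ?c2 .
}"""

for x, c1, c2 in g.query(find_clashes):
    print(f"Inconsistency: {x} is both {c1} and {c2}")

# Repair: the user inspects the report and retracts the wrong
# assertion, restoring consistency so inference can resume.
EX = Namespace("http://example.org/")
g.remove((EX.felix, RDF.type, EX.Dog))
```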
To draw meaningful inferences, the data must be accurate. By giving the user precise control over, and visibility of, exactly what information is stored where, the tools encourage users to validate the data and, where necessary, correct it.
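One plausible mechanism for that visibility, sketched with rdflib named graphs (an illustrative choice, not a description of our internals): each imported source is kept in its own graph, so every statement can be traced to, and corrected at, its origin.

```python
# Sketch: per-source visibility using named graphs. Each import
# lives in its own graph, so conflicting values can be traced
# back to the source that contributed them. Names are invented.
from rdflib import Dataset, URIRef

ds = Dataset()

hr = ds.graph(URIRef("urn:source:hr-export"))
hr.parse(data="""
@prefix ex: <http://example.org/> .
ex:alice ex:phone "0131 496 0000" .
""", format="turtle")

crm = ds.graph(URIRef("urn:source:crm-dump"))
crm.parse(data="""
@prefix ex: <http://example.org/> .
ex:alice ex:phone "0131 496 9999" .
""", format="turtle")

# List every statement together with where it is stored, so the
# user can spot the conflict and correct it at its origin.
for s, p, o, source in ds.quads((None, None, None, None)):
    print(source, "|", s, p, o)
```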