Getting Started with Knowledge Graphs and Machine Learning: Part 2 Q&A with Sebastian Schmidt, CEO of metaphacts
JW: And step three?
SS: Then, step three is about describing the datasets or data sources and creating a first data catalog. This refers to storing relevant metadata about the included data sources. Such metadata can include provenance information, data-access requirements, owners, and information about the reliability of the data — everything we need to build trust in that data and to use as evidence later on, so we can really understand where knowledge came from.
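To make this step concrete, here is a minimal sketch of what one entry in such a first data catalog might hold. The field names, dataset name, and values are illustrative assumptions, not metaphactory's actual schema (which is based on semantic, open-standard vocabularies rather than plain Python objects):

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    """One entry in a first, lightweight data catalog."""
    name: str
    source_system: str           # provenance: where the data comes from
    owner: str                   # who is accountable for the data
    access: str                  # data-access requirements
    reliability: str             # rough trust level, e.g. "curated" vs. "raw"
    tags: list[str] = field(default_factory=list)

# Hypothetical catalog with a single example entry
catalog = [
    DatasetRecord(
        name="clinical-trials",
        source_system="internal trials database",
        owner="research-data-team",
        access="internal, role-restricted",
        reliability="curated",
        tags=["pharma", "research"],
    ),
]

def provenance(catalog: list[DatasetRecord], name: str) -> tuple[str, str]:
    """Trace a dataset back to its source and owner — the 'evidence'
    lookup that lets users see where knowledge came from."""
    rec = next(r for r in catalog if r.name == name)
    return rec.source_system, rec.owner

print(provenance(catalog, "clinical-trials"))
# → ('internal trials database', 'research-data-team')
```

The point of the sketch is the shape of the metadata, not the storage mechanism: in a knowledge graph setting these fields would typically be expressed with standard vocabularies such as DCAT and Dublin Core so the catalog itself is queryable data.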
And then the final step, step four, is about using this data and building discovery and collaboration interfaces that allow your domain experts and business users to interact with it. Here we map directly to the user journey: interacting with the data, editing and further managing it, and handling the lifecycle around it.
Here again we use our metaphactory product, with its low-code, data-driven approach to application development, to build those applications. So, as you can see, all of these steps follow a simple, agile approach and are fully supported in metaphactory.
JW: Could you explain more about what metaphactory is, and what it provides for knowledge democratization?
SS: metaphactory is the only knowledge graph platform enabling everyone in the enterprise — specifically including business users and domain experts — to participate in the knowledge generation, maintenance, and consumption process.
We build on the F.A.I.R. data principles and thereby enable true knowledge democratization. metaphactory is designed to ease onboarding into the world of knowledge management with knowledge graphs by delivering features for semantic model management and for building web applications based on a data-driven, low-code approach. The apps created with metaphactory support end users in their exploration and analytics tasks and allow them to work with the data and knowledge available in the enterprise.
With metaphactory, data can be published as open data based on open standards, and knowledge can be shared across departments, across institutions with partners and customers, or even outside of the enterprise because we are building on those open standards. And, very importantly, metaphactory can run anywhere, so it can run in the enterprise data center, in the cloud, and seamlessly integrate into existing data management infrastructure.
JW: Are there any customer successes that you can talk about?
SS: That's an important question. I'm really happy to talk about customer successes from recent years. We have supported customers using this approach across multiple industries and use cases. Pharma and life sciences is a very strong industry for us: customers use metaphactory to democratize knowledge and coordinate research activities — to better align research projects, share knowledge across their research teams, accelerate innovation, and reduce duplicate work. This is also helped by pharma and life sciences having been very early adopters of the F.A.I.R. data principles.