
Semantic Web and Semantic Technology Trends in 2018


There have been some exciting developments of late in the Semantic Web and Semantic Technology space. Semantic Technology trends in 2018 will continue to advance many of the trends discussed in 2017 and build upon a number of new changes just entering the marketplace.

This fall, in fact, the Elsevier 2017 Semantic Web Challenge focused on Knowledge Graphs. The winner was IBM’s Socrates, developed by Michael Glass, Nandana Mihindukulasooriya, Oktie Hassanzadeh, and Alfio Gliozzo of IBM Research AI. “Knowledge Graphs are currently among the most prominent implementations of Semantic Web technologies,” an Elsevier press release stated. It also noted that Socrates uses an:

“Innovative integration of additional Artificial Intelligence techniques such as Natural Language Processing (NLP) and Deep Learning over multiple web sources to find and check facts. Their knowledge graph outperformed the state of the art.”

A few other highlights include:

  • In April, World Wide Web inventor and Semantic Web visionary Tim Berners-Lee took home the 2016 Turing Award.
  • In October the EDM Council released a new version of the Financial Industry Business Ontology (FIBO), further enticing the financial community to adopt the Semantic Web standards-infused update to manage interoperability between data sources.
  • That same month, Thomson Reuters introduced a Knowledge Graph feed for customers. With the Linked Data feed, customers can incorporate Thomson Reuters’ financial content sets into their own institutional Knowledge Graphs, making it easier to support data relationship discovery and exploration across a range of business requirements, from investment research to sales intelligence.

Where will Semantic Technology trends go in 2018? DATAVERSITY® is pleased to present the insights from some prominent voices on what the future may bring.

Michael Bergman, Senior Principal at Semantic Technology Consulting Firm Cognonto Corporation

Bergman sees the Thomson Reuters announcement as one harbinger of continued growth and progress, along with the impact of Wikidata as an engine for capturing relationships among numerical, text, visual, and graphical content (now five years old and boasting more than 37 million items, 8.5 million freeform user database queries a day, and a community of more than 1,400 active editors) and the efforts of groups like LODLAM (Linked Open Data in Libraries, Archives and Museums). “I think (and hope) we will continue to see high-quality information being made available in linked data form in the coming year,” Bergman says.
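
To make those freeform queries a bit more concrete, here is a minimal sketch of the kind of question Wikidata’s public SPARQL endpoint fields every day. It is only an illustration, assuming the Python SPARQLWrapper library and Wikidata’s well-known identifiers for “instance of” (P31) and “museum” (Q33506).

from SPARQLWrapper import SPARQLWrapper, JSON

# Wikidata's public query service speaks standard SPARQL 1.1 over HTTP.
endpoint = SPARQLWrapper("https://query.wikidata.org/sparql")
endpoint.setQuery("""
    SELECT ?museum ?museumLabel WHERE {
      ?museum wdt:P31 wd:Q33506 .            # instance of: museum
      SERVICE wikibase:label {               # Wikidata's label service
        bd:serviceParam wikibase:language "en" .
      }
    }
    LIMIT 10
""")
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
for row in results["results"]["bindings"]:
    print(row["museum"]["value"], row["museumLabel"]["value"])

The same query could be posted by any SPARQL-aware client; nothing in it is specific to one vendor’s tooling.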

The growing presence of higher quality data leads to an emphasis being placed on interoperable Knowledge Graphs – “that is, ontologies capable of beginning to integrate this high quality info,” he notes. But an accepted central Knowledge Graph isn’t immediately in the offing, despite interesting efforts like NITRD’s Open Knowledge Network to create an open Knowledge Graph of all known entities and their relationships.

“Consensus on this stuff is tough, with competing philosophies and interests. Some kind of de facto standard needs to emerge, which will likely come from a market embrace, rather than some kind of standards effort,” he says. “I think there will be a lot of talk, but little progress in 2018. I hope to contribute to the dialog some with KBpedia.”


Put it all together and what have you got?

“Semantic Technologies will continue to see steady growth and adoption, but will likely never be the rallying flag on their own,” Bergman states. “I think we will continue to appreciate Semantic Technologies as an infrastructure play, in service to broader needs such as Artificial Intelligence, Machine Learning, or data interoperability. Semantic Technologies will come to assume their natural role as essential enablers, but not as keystones on their own for major economic and information change.”

Jans Aasman, CEO of Semantic Graph Database and Artificial Intelligence Innovator Franz Inc.

In line with Bergman’s insight of Semantic Technology as an essential enabler, Aasman looks forward to Semantic Technology becoming the AI Interpreter.

“As Artificial Intelligence becomes the new consumer-facing UI for many businesses, Semantic Technology will emerge as the necessary interpreter,” he says. “Conversational AIs will need a precise understanding of the communication from humans and to extract meaning from it. Artificial Intelligence in combination with Semantic Technology is ideally suited to address this challenge.”

And while a centrally accepted true information sharing hub may still be in the wings, Aasman sees that enterprises will embrace their own AI Knowledge Graphs as the standard for 360-degree views of customers and entities.

“Companies across many sectors including financial services, healthcare and retail will combine domain knowledge with customer knowledge to create Facebook-like visual customer profiles that include hard facts, interpretations and predictions that leverage machine learning, Semantic Technology and natural language processing,” he says.

These AI Knowledge Graphs will become “the intelligent foundation for business actions and [will be leveraged] as part of their intellectual property.”

Also expect to see the beginnings of smart cities with their roots in AI Knowledge Graphs. “Smart Cities will envelop data from a broad range of Internet of Things (IoT) sensors and video devices” – placed on, near or in street lights, bus stops, autonomous cars, public bikes and delivery robots, these devices will provide the opportunity to gather data about traffic, biking and walking patterns, pollution, weather, and natural disaster challenges, he says.

Dr. Daniel Herzig-Sommer, COO at Enterprise Knowledge Graph Vendor metaphacts

Herzig-Sommer sees an uptake among a greater range of industries in putting enterprise Knowledge Graphs to work. After Knowledge Graph innovators’ success in the first half of the 2010s and the remarkable achievements of early adopters starting in 2015, 2018 will be the year when more parties come onboard, he says. Venture capitalists are noticing, with increasing investment activity underway in the sector.

It’s not just those tech and tech-affiliated industries operating inside the traditional core areas of Semantic Technologies that want a bite at the apple. Engineering and manufacturing, banking and finance, agriculture and life sciences, and even retail:

“Want to apply enterprise Knowledge Graphs for use cases ranging from Knowledge Discovery and Management, analytical information needs, and Data Governance to overseeing processes and product steps, sometimes with the use of digital assistants like Amazon Alexa or Microsoft’s Cortana, which use Knowledge Graphs as their use case-specific brain,” he says.

Large corporations and mid-size companies alike, across a spectrum, will look to enterprise Knowledge Graphs, but they’re less likely to want to build them in-house using internal tools, he believes.

“They are asking for standardized products and external providers to operate their applications. And the market is ready to respond,” says Herzig-Sommer. “The technological foundation has matured, and standards like SPARQL 1.1, RDF 1.1 and related W3C recommendations are the well-established consensus among vendors, who are building interoperable and standardized tools.”

This next phase of the technology cycle, he predicts, will see the Knowledge Graph industry focus on these early-majority adopters.

“The challenge for the next three to five years for the knowledge graph industry is to cross the chasm and become entirely mainstream by serving customers in all industries. All industries face a number of challenges in their daily operations, and many of these challenges can be addressed by smart applications using Knowledge Graphs,” he says.

Andreas Blumauer, CEO and Co-Founder of Semantic Web Company

There have been some interesting concepts put into place and standards negotiated, such as those around Semantic data streams to address the problems of heterogeneous data in an age of massive data stream processing, but no mature technologies have arrived yet. Blumauer also notes that there are some obvious opportunities to benefit from Semantic Knowledge models and metadata enrichment, such as providing higher Data Quality as input for Machine Learning. “Interestingly, only a few Machine Learning companies have realized that option yet,” Blumauer says.

And when it comes to Ontology-Based Data Access (OBDA):

“Expectations are still high, but only a few vendors have generated first success stories,” he says. “In general this approach is still not widely accepted by industry due to data policies in place, and the question is whether it will enter the next phase.”

To borrow from the Gartner Hype Cycle, he points to the slope of enlightenment being the use of Enterprise Knowledge Graphs and semantically enhanced recommender systems for knowledge discovery, and to the plateau of productivity being Linked Data and Agile data integration in enterprises based on Semantic Web standards. He points to the Semantic Web Company’s PoolParty GraphSearch and PoolParty UnifiedViews as examples of each, respectively. (See DATAVERSITY’s most recent article on the release of the PoolParty 6 Semantic Middleware suite.)

Dave Raggett, W3C Data Activity Lead

The web is well known for its wealth of pages, but increasing focus needs to go to what is happening in the backend to propel open markets of data and services. Common data standards are core to the Semantic interoperability that is the key factor in opening up these markets, Raggett says. Fortunately, there exists a strong suite of enabling data interoperability standards, such as those for ontologies (OWL), query languages (SPARQL), and data models for data interchange (RDF). But Semantic interoperability challenges remain in helping producers and consumers of data be sure they’re talking about the same thing.
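
To ground that standards stack, here is a minimal sketch using the Python rdflib library: a few RDF statements describe a sensor, and a SPARQL query then discovers it by what it measures. The ex: vocabulary terms are invented for illustration; in a real deployment they would come from a shared, published ontology (typically expressed in OWL).

from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

# Invented example vocabulary; a real system would reuse a published ontology.
EX = Namespace("http://example.org/sensors#")

g = Graph()
g.bind("ex", EX)

# RDF: plain subject-predicate-object facts about one device.
g.add((EX.sensor42, RDF.type, EX.TemperatureSensor))
g.add((EX.sensor42, EX.unitOfMeasure, Literal("degreeCelsius")))
g.add((EX.TemperatureSensor, RDFS.label, Literal("Temperature sensor", lang="en")))

# SPARQL: discover devices by a semantic constraint ("what does it measure?").
query = """
    PREFIX ex: <http://example.org/sensors#>
    SELECT ?sensor ?unit WHERE {
        ?sensor a ex:TemperatureSensor ;
                ex:unitOfMeasure ?unit .
    }
"""
for sensor, unit in g.query(query):
    print(sensor, unit)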

“Semantic interoperability relates to things like how to discover services based on some semantic constraints, like what you are trying to measure,” he says. Standards for Semantic descriptions are important for providers that choose to publish catalogs describing data sets for download and local processing, as well as for network APIs for services based upon remote data sets.

Vendors typically seek to differentiate their offerings from those of their competitors, however, creating challenges for applications to adapt to service variations. But that adaptation:

“Is possible when vendors provide machine-interpretable descriptions of their services based upon open standards. Such Semantic descriptions further allow for designing compositions of services where you can be sure that they will act together as expected, along with increased resilience through the means to validate data against the descriptions,” he says. “A further point is that companies may restrict access to data services to paying customers, so we need standards for terms and conditions, and for different forms of remuneration. W3C’s work on open standards for data is essential for enabling open markets of such services.”

With Semantic interoperability as a cornerstone of Web of Things-enabled smart city initiatives, for instance, open data and Semantic interoperability standards open the door for any participant to understand and work with any data around parking, transport, energy usage, or anything else to enhance the services they want to deliver. W3C’s Web of Things defines an object-oriented model of things that decouples developers from the network APIs used to access remote sources of data. “The European Commission is putting efforts into a single digital market, so it’s working slowly towards enabling this,” says Raggett. “They can further promote open standards and the data they provide using their procurement policy.”

At the W3C, the Data Exchange Working Group is extending the Data Catalog Vocabulary (DCAT) based upon experience with different profiles of data across a broad spectrum of applications. There are huge opportunities for data vocabulary standards across many sectors, including increased productivity through digitizing industries, but the requirements vary considerably. “Some vocabularies which everyone relies on must be stable and very highly reviewed,” he says, such as units of measure that won’t keep changing. Other vocabularies will need to be more agile, such as those relating to IT capabilities in smart devices that will change as the devices evolve, he says.
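
For a sense of what a DCAT description looks like, the sketch below parses a small, invented catalog entry with the Python rdflib library. The dataset, publisher, and download URL are placeholders; the dcat: and dct: terms are the W3C Data Catalog Vocabulary and the Dublin Core terms it builds on.

from rdflib import Graph

# An invented dataset description; only the dcat:/dct: vocabulary is standard.
catalog_entry = """
@prefix dcat: <http://www.w3.org/ns/dcat#> .
@prefix dct:  <http://purl.org/dc/terms/> .
@prefix ex:   <http://example.org/catalog#> .

ex:parking-occupancy a dcat:Dataset ;
    dct:title "City parking occupancy, 5-minute intervals"@en ;
    dct:publisher ex:city-transport-office ;
    dcat:distribution [
        a dcat:Distribution ;
        dcat:downloadURL <http://example.org/data/parking-occupancy.csv> ;
        dcat:mediaType <https://www.iana.org/assignments/media-types/text/csv>
    ] .
"""

g = Graph()
g.parse(data=catalog_entry, format="turtle")
print(len(g), "triples describing the dataset")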

“Here there’s a potential benefit from the W3C’s work on Linked Data as a high-level framework for integration across systems,” he says. The Web of Things working group at the W3C also is hoping to come up with some candidate recommendations about how to describe things in terms of properties, actions and events by the end of 2018, he says.

But early in the new year there’s a plan for a W3C workshop to focus on the role of Linked Data in transparency and privacy – essentially looking at how companies can leverage interrelated data sets to know what they have to do to protect consumer data privacy online, especially as regulations like GDPR ratchet up the control individuals have over their own data.

“That’s a problem especially for weakly identifying data that can be combined to create a precise profile of a consumer,” he says. “The workshop will allow participants to discuss the potential role of data standards around that challenge.”

Dave McComb, President of Information Systems Consultancy Semantic Arts

2018 will continue to see businesses try to reach the goal of becoming data-centric organizations. But they have to consider that there may be more to it than they realize. First, it’s important to understand that being data-centric means starting with the data and only later determining the processes and processing, McComb points out. Many systems start the other way around, which in practice is what causes systems to have myriad ways of expressing, structuring and representing what is often data about the same thing in the real world.

“Our position is that an application is data-centric to the extent that its development proceeded from the data to the code. It is even more data centric to the extent that the code is replaced by data, as in model-driven development,” McComb says. “The long-term vision is to have very little application-level code,” with most enterprise applications instead built by compositions of reusable bits of functionality – that is, a Microservices Architecture, married to the model.

“An enterprise is data-centric proportionally to the number of data-centric apps it has based on the same core enterprise model,” McComb says. By basing many apps on the same model, integration – now the single biggest cost item in almost all IT shops and all large IT projects – becomes frictionless. Here’s where semantics stands up:

“The only feasible way to make progress on this that we have come across is to base the models on Semantic Technology and their implementation on graph databases.”

As organizations embrace Data Lakes in their quest to become data-centric enterprises, they must remember that those lakes are made of raw data. “No one knows what it means until it has been interpreted, which is often done by Data Scientists, who have no obvious way to share what they have learned,” he says. Conform the data lake to an ontology, however, and you provide a path that is based on shared knowledge.
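
What conforming a lake to an ontology could look like in practice is sketched below, assuming the Python rdflib library and an invented core: enterprise ontology. The point is that the Data Scientist’s interpretation of cryptic source fields gets written down once, as shared, queryable knowledge.

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

# Invented enterprise ontology namespace; a real one would be agreed on centrally.
CORE = Namespace("http://example.org/enterprise-ontology#")

# One raw record as it might sit in the lake (e.g., parsed from CSV or JSON).
raw_record = {"cust_id": "C-1027", "full_nm": "Ada Lovelace", "open_dt": "2017-11-03"}

g = Graph()
g.bind("core", CORE)

customer = URIRef("http://example.org/id/customer/" + raw_record["cust_id"])

# The mapping from source columns to ontology terms captures the interpretation.
g.add((customer, RDF.type, CORE.Customer))
g.add((customer, CORE.name, Literal(raw_record["full_nm"])))
g.add((customer, CORE.accountOpened, Literal(raw_record["open_dt"], datatype=XSD.date)))

print(g.serialize(format="turtle"))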

 

Photo Credit: agsandrew/Shutterstock.com
