
How We Will Harvest Cognition in 2017

By James Kobielus  /  December 19, 2016


As we bid farewell to 2016, the entire data industry is focused on cognitive computing as the path forward.

We’re all making predictions for how Cognitive Computing’s footprint in our lives will deepen in the coming years. One key assumption in everybody’s predictions is that somehow this resource called “cognition”—in other words, the substance of human thought, reasoning, and evidence-driven decision making—will continue to flow into every pore of the digital world. I don’t doubt that it will, but it seems to me that we’re all simply assuming it will, without directly examining the trends that will make it happen. For example, in this recent article, Mike Gualtieri of Forrester Research lists nine “AI technology building blocks”—Deep Learning, Machine Learning, Natural Language Processing, etc.—that rely on cognition. However, for no apparent reason, he fails to include cognition itself as a building block.

Clearly, the Artificial Intelligence (AI) revolution will lose steam if we don’t continue to find newer, more innovative approaches for harvesting this precious resource — consumable cognition — so that it can be infused into everything. Here are my predictions for how emerging practices of discovering, acquiring, and curating cognition will evolve in 2017 and beyond:

  • Historical cognition will become an even more central foundation of cloud-based AI apps: Deepening a trend that took root earlier in this decade, AI developers everywhere will continue to grow the historical subject-matter data sets of training and operational data upon which all of their statistical algorithms depend. In 2017 and beyond, more open data will flow into these knowledge bases, and their cognitive riches will be deepened through sophisticated metadata, taxonomies, ontologies, glossaries, semantics, and statistical algorithms.
  • Interaction cognition will enrich more smart apps: As conversational cognitive interfaces become commonplace (such as in chatbots, smart appliances, and autonomous vehicles), we’ll see more cognition sourced organically and transparently from interactions between human users and apps. In the coming year, we’ll see more cognitive apps that are built to learn adaptively and refine their real-time responses based on what they learn from interaction data.
  • Streamed cognition will distill more fresh knowledge on the fly: As streams of social, mobile, video, audio, and other media pervade every corner of the digital universe, unsupervised learning, natural-language processing, and other cognitive algorithms will distill knowledge from it all. As we approach the end of the decade, cognitive algorithms will be deployed into all these streams to power 24×7 detection of statistical patterns. These auto-detected cognitive patterns will be used for feature engineering, training of cognitive algorithms, and optimization of cognitive apps’ real-time adaptive responses.
  • Collaborative cognition will drive more development of team-based data science: We’re seeing the center of gravity in cognitive development shift toward team-based collaboration. The cognition that infuses data science initiatives will increasingly come from engagement between team members within integrated, open, cloud-based development environments. In more cognitive development environments, data science professionals will engage in notebook-oriented knowledge sharing, with auditable project logs and robust tracking and governance of data, models, and other development artifacts.
  • Democratized cognition will drive more innovation: As the cognitive application ecosystem moves toward open platforms, tools, applications, and data, we’ll also see new sources of cognitive talent enter the field. More data science professionals will engage in cognitive projects through open data-science competition communities such as Kaggle and TopCoder. In addition, innovative approaches to building cognitive applications and intelligent products will come from non-traditional sources, such as “citizen data scientists” and other knowledge workers who use self-service cognitive tools.
  • Crowdsourced cognition will vet more algorithmic patterns: Going forward, all cognitive apps will depend on legions of unseen human beings who curate the training data upon which the predictive accuracy of machine learning algorithms depends. Once this data has been tagged by humans, cognitive algorithms can work their predictive magic. In 2017 and beyond, crowdsourcing environments will continue to grow in importance as channels for manual assessment and tagging of cognitive training data.
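The “24×7 detection of statistical patterns” in streams, mentioned above, can be sketched in a few lines. This is a minimal illustration, not any specific product: it keeps a running mean and variance over a stream using Welford’s online algorithm and flags readings that break the pattern seen so far. The stream values and the 3-sigma threshold are illustrative assumptions.

```python
# Minimal sketch: always-on statistical pattern detection over a stream.
# Welford's online algorithm maintains a running mean/variance in O(1)
# memory, so the detector can run indefinitely on live data.

class StreamDetector:
    def __init__(self, threshold=3.0):
        self.n = 0            # observations seen so far
        self.mean = 0.0       # running mean
        self.m2 = 0.0         # running sum of squared deviations
        self.threshold = threshold

    def observe(self, x):
        """Update running stats; return True if x breaks the pattern."""
        is_outlier = False
        if self.n > 1:
            std = (self.m2 / (self.n - 1)) ** 0.5
            if std > 0 and abs(x - self.mean) > self.threshold * std:
                is_outlier = True
        # Welford update
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return is_outlier

detector = StreamDetector()
stream = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 25.0, 10.0]
flags = [detector.observe(x) for x in stream]
# flags[6] is True: 25.0 deviates sharply from the pattern so far
```

In a real deployment, auto-detected patterns like these would feed downstream feature engineering and model training rather than stand alone.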
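The tag-then-train loop in the last bullet can also be sketched concretely. In this hypothetical example, three crowd workers tag each training example, and a simple majority vote resolves disagreements before the curated labels reach a learning algorithm; the tags and examples are invented for illustration.

```python
# Minimal sketch of crowdsourced label curation: multiple workers tag
# each example, and a majority vote produces the curated training label.
from collections import Counter

def majority_label(worker_tags):
    """Return the most common tag among crowd workers for one example."""
    return Counter(worker_tags).most_common(1)[0][0]

# Each row: the tags that three (hypothetical) workers gave one image.
raw_tags = [
    ["cat", "cat", "dog"],
    ["dog", "dog", "dog"],
    ["cat", "bird", "cat"],
]
curated = [majority_label(tags) for tags in raw_tags]
# curated == ["cat", "dog", "cat"] -- ready to train a classifier
```

Production crowdsourcing pipelines go further (weighting workers by accuracy, adjudicating ties), but the basic flow — human tags in, vetted labels out — is the one the bullet describes.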

Perhaps cognition—not data itself—is the “new oil” as we move into 2017 and beyond. What do you think?

Have a happy holiday season and a great new year!

And just in case you haven’t seen enough of me on DATAVERSITY.net this year, here are some Cognitive Computing, Data Science, and Big Data Analytics industry predictions that I posted recently in IBM Big Data and Analytics Hub, InfoWorld, TechTarget, and KDNuggets—and yet again in KDNuggets.


About the author

James Kobielus, Wikibon, Lead Analyst Jim is Wikibon's Lead Analyst for Data Science, Deep Learning, and Application Development. Previously, Jim was IBM's data science evangelist. He managed IBM's thought leadership, social and influencer marketing programs targeted at developers of big data analytics, machine learning, and cognitive computing applications. Prior to his 5-year stint at IBM, Jim was an analyst at Forrester Research, Current Analysis, and the Burton Group. He is also a prolific blogger, a popular speaker, and a familiar face from his many appearances as an expert on theCUBE and at industry events.
