
Putting “AI-First” Into Its Proper Context

By James Kobielus  /  August 21, 2017


Pardon my cynicism, but it seems as if every solution provider now is “AI-first,” at least in the limited sense of mentioning Artificial Intelligence very early in their marketing pitch.

As an industry analyst specializing in this area, I have to be curt when someone touts “AI-first” as their solution’s “secret sauce.” If a vendor brags about their AI, but fails to include even a token mention of AI’s chief enabling technology – Machine Learning (ML) – my personal hype-sniffer starts to rattle like a Geiger counter. And if there’s no mention of Artificial Neural Networks, Natural Language Processing, supervised learning, or any of the other enabling tools for building, tuning, deploying, and managing AI applications, I tend to dismiss the pitch entirely. Under those circumstances, it’s usually an instance of a vendor appropriating a trendy label for marketing purposes.

If “AI-first” means anything, it should mean that developers’ first and foremost approach is to infuse their applications with any or all of the following capabilities:

  • Augmentation: This is an app’s ability to augment users’ organic powers of cognition, reasoning, natural language processing, predictive analysis, perception, and pattern recognition.
  • Assistivity: This refers to an app’s deployment as an AI-infused virtual assistant into commerce, mobile, messaging, social, IoT, and other applications to drive smarter decisions.
  • Anticipation: This is the mind-reading prowess we expect from an AI-powered digital assistant: the ability to accurately anticipate our desired outcomes and understand our inner desires via embedded predictive models fed from fresh data streams.
  • Adaptivity: This is an app’s AI-powered ability to adapt its Machine Learning and other statistical models to fresh data, to interactions with users, and to changing contexts in order to hone its cognitive skills to a finer degree.
  • Anthropomorphism: This is the heart of conversational interfaces, in which AI apps incorporate both natural-language understanding and generation to such a high fidelity that they can stand in for flesh-and-blood humans.
  • Acceleration: This is why Big Data, streaming, and in-memory architectures are prevalent in AI: they enable us to consider more data, more dimensions, more decision factors, and more complex scenarios in shorter timeframes than anyone could achieve unassisted.
  • Automation: This is when decision support solutions incorporate AI technologies to automate our human cognitive faculties and thereby enable more consistency, replicability, and transparency over those processes.

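To make “Adaptivity” concrete, here is a minimal, purely illustrative sketch (a toy perceptron in plain Python, not any vendor’s product) of a model that updates its parameters incrementally as fresh mini-batches of data stream in, rather than being retrained from scratch. All names, data, and the learning rule here are assumptions chosen for clarity.

```python
# Toy illustration of "Adaptivity": an online model whose weights are
# updated batch by batch as fresh data arrives. Synthetic data throughout.
import random

random.seed(0)

weights = [0.0, 0.0]
bias = 0.0
LEARNING_RATE = 0.1

def predict(x):
    """Linear threshold unit: 1 if w.x + b > 0, else 0."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score > 0 else 0

def partial_fit(batch):
    """Perceptron update on one mini-batch of (features, label) pairs."""
    global bias
    for x, y in batch:
        error = y - predict(x)          # 0 when already correct
        for i, xi in enumerate(x):
            weights[i] += LEARNING_RATE * error * xi
        bias += LEARNING_RATE * error

def make_batch(n=32):
    """Synthetic stream: the label is 1 when x0 + x1 > 0."""
    batch = []
    for _ in range(n):
        x = [random.uniform(-1, 1), random.uniform(-1, 1)]
        batch.append((x, 1 if x[0] + x[1] > 0 else 0))
    return batch

# Simulate fresh data arriving over time; the model adapts batch by batch
# instead of requiring a full retraining pass over historical data.
for _ in range(50):
    partial_fit(make_batch())

# Score unseen data drawn from the same stream.
test = make_batch(200)
accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(f"accuracy after streaming updates: {accuracy:.2f}")
```

Production-grade adaptivity swaps this toy update rule for incremental learners (e.g., scikit-learn’s `partial_fit` estimators or online gradient descent in a streaming pipeline), but the pattern is the same: the model’s state persists and is refined by each new slice of data.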
This approach expresses my long-held belief that AI is primarily a paradigm of leading-edge software-development practices. AI—as a scientific research focus—has been around for more than 60 years, and it continues to buzz in the culture through successive waves of innovation in the underlying technologies. I employed this practice-oriented framework to develop the AI maturity model I presented in this KDnuggets post from earlier in the year.

What AI has not been, from the very start, is a specific set of technologies, tools, platforms, and languages. Given this history, it’s counterproductive to scope this fast-evolving discipline down to some specific set of technological enablers, à la what you find in Gartner’s Hype Cycle or Forrester’s Pragmatic AI framework. The capabilities and technologies referenced in these frameworks are finding their way into practically every application you can name. The difficulties one encounters in scoping AI’s enabling technologies compromise the validity of any market-sizing study one might perform, such as those discussed here.

Likewise, studies (such as this) that focus on leading AI startups struggle with the issue of who, these days, isn’t in AI (or who hasn’t shifted their go-to-market message toward wanting the market to believe they are). Recall that 20+ years ago every startup that built its business model on this new phenomenon called the “World Wide Web” was considered a “web” company. This overbroad market-segmentation approach became increasingly ridiculous as some of these startups (e.g., Amazon, Uber) started to dominate existing industries (e.g., bookselling, taxicabs) and the incumbents embraced those very same disruptive technologies.

However, there is validity in an approach, such as this, that posits successive waves of “AI companies” who are introducing and infusing innovations throughout the economy. In the piece, Louis Coppey describes three waves of AI solution providers, without giving specific years when these waves started or, in theory, might someday end. As befits his status as a venture capitalist, Coppey focuses on the extent to which vendors in each of the following waves were successful in monetizing their offerings:

  • First AI wave: These are “purely research-driven companies” who, to the extent they survived, generally “were acquihired before generating revenues.”
  • Second AI wave: These are “companies building Machine Learning infrastructures” who, in many cases, were acquired for their intellectual-property assets by larger, established solution providers in cloud computing, Big Data Analytics, and so forth, but who never took their offerings the last mile to lucrative mass-market AI applications.
  • Third AI wave: These vendors, whom the VCs are avidly funding right now, are bringing “applied AI solutions” to market in specific horizontal or industry-specific solution areas.

What you notice with each of these innovation waves is that the core AI technologies, as they emerge from R&D, are being disseminated and diffused into every application domain. In the process of commercialization, it becomes less meaningful to speak of them as “AI solutions.” Even Gartner has acknowledged that fact, with its recent pronouncement that AI technologies will be in almost every new software product by 2020. This is consistent with what I’ve stated in my recent Wikibon research: every software programmer is having to master Data Science skills, and more software projects are focusing on developing AI microservices for Cloud deployment.

We should discard the increasingly quaint notion of AI-first. As AI-infused software pushes more deeply into the Internet of Things’ distributed application fabric, we should instead focus on “edge-first” strategies. This is where the innovations from these AI waves are being delivered into real-world applications.

About the author

James Kobielus, Wikibon, Lead Analyst Jim is Wikibon's Lead Analyst for Data Science, Deep Learning, and Application Development. Previously, Jim was IBM's data science evangelist. He managed IBM's thought leadership, social and influencer marketing programs targeted at developers of big data analytics, machine learning, and cognitive computing applications. Prior to his 5-year stint at IBM, Jim was an analyst at Forrester Research, Current Analysis, and the Burton Group. He is also a prolific blogger, a popular speaker, and a familiar face from his many appearances as an expert on theCUBE and at industry events.
