
Bottlenose Takes on the Data Scientist Shortage


As business leaders look to the future, many may be most concerned about getting value out of all the data they possess. They look around at a world of unstructured data, rich in insights but not suited to traditional legacy databases. They perceive that technologies such as Machine Learning and Artificial Intelligence are becoming mainstream ways of producing accurate predictions and insights from all this data. And they wonder how on earth they are going to bring on board people with the skills needed to leverage these data sources. So they begin to look at hiring a Data Scientist who can use these technologies to help unearth the hidden intelligence in the company’s data assets.

“This stuff is hard,” says Nova Spivack, CEO and co-founder of the enterprise intelligence company Bottlenose. Realizing the potential of all that data requires special individuals: Data Scientists, hard-core analysts, people with advanced computer science degrees from elite, cutting-edge university programs. Big companies in popular sectors like tech or finance – Google, Microsoft, IBM, Citigroup, American Express and the like – may not have a problem getting those folks to sign on as employees. But even large global enterprises in more traditional verticals often find themselves scraping by with small Data Science teams.

Spivack relates, for instance, that he recently visited a giant automaker in just such a position. Because the small Data Science group’s services are in such demand, business leaders can’t get their requests serviced as quickly as they need. “One part of the company is responsible for all the analytics, but [the capabilities these employees provide need] to be distributed to every person and every function,” he says.

And if even big guns like these have it rough, imagine the difficulties faced by small- and mid-sized businesses, especially those operating outside of hot markets like the start-up tech sector.

Helping Hands for the Humans

Spivack cites an IDC figure stating that by 2018, the U.S. will have only about 1.5 million managers proficient in data-driven decision making and a shortage of nearly 300,000 Data Scientists and Analysts. “They are scarce and getting scarcer,” he says. Even an uptick in students graduating with these skills won’t help anytime soon, since new graduates lack the real-world, practical Machine Learning and AI experience that is so crucial to success.

“The world can’t meet the demand for these people, not even ten years from now,” he says. “That’s a huge obstacle to real, full adoption of these technologies at all layers of the economy.”

What can the vast numbers of businesses that want in on better Business Intelligence do now, if they lack the money or the glamour credentials to attract the cream of the crop in human talent? That’s what the latest version of the company’s Cognitive Computing platform, Bottlenose Nerve Center 3.0, aims to address; according to the company, it essentially creates a new, AI-automated analyst. Companies with under $100 million in revenue are among the planned beneficiaries of its focus on enabling non-Data Scientists and non-Analysts to automate more of what human Data Scientists and Analysts do, so as to get as much insight out of data as possible.

According to Spivack, Bottlenose has been observing how Analysts and Data Scientists work, discovering that about 70% to 80% of what they do is repeatable and automatable. Typically this is boring work where Artificial Intelligence technologies can add a lot of value, he says. After six years of developing the Nerve Center platform, which streamlines the process of ingesting and enriching real-time data from more than two million sources (with the ability to add millions more, including customers’ own data), turning it into usable and actionable insights, and visualizing the information, Bottlenose has delivered a fully data-agnostic system to take on the repeatable and automatable parts of the Data Science and analytics job.

The platform is reaching toward monitoring 200 billion entities, with hundreds of different time-series metrics for each one, tracking typical behavior and behavior changes over time to understand what is normal for an entity and what counts as a departure from that state. For instance, its technology caught on to where the Brexit vote was really heading, even as polls were coming to the opposite conclusion.
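The article doesn’t spell out how Nerve Center decides what is “normal” for an entity, but the basic idea can be sketched with a rolling z-score over a single metric. Everything below – the window size, threshold, and made-up mention counts – is an illustrative assumption, not Bottlenose’s actual method.

```python
# Hypothetical illustration only: flag departures from an entity's typical
# level in one time-series metric using a simple rolling z-score.
import numpy as np
import pandas as pd

def flag_departures(metric: pd.Series, window: int = 24, threshold: float = 3.0) -> pd.Series:
    """Mark points that sit far outside the recent rolling norm."""
    rolling_mean = metric.rolling(window).mean()
    rolling_std = metric.rolling(window).std()
    z_scores = (metric - rolling_mean) / rolling_std
    return z_scores.abs() > threshold

# Made-up example: hourly mention counts for one tracked entity, with one spike.
mentions = pd.Series(np.random.poisson(50, 200).astype(float))
mentions.iloc[150] = 400.0
anomalies = flag_departures(mentions)
print(mentions[anomalies])
```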

To help understand what occurs within a time series, and to layer those relationships, Nerve Center relies on different Machine Learning technologies. For example, unsupervised Machine Learning performs a form of dimensionality reduction to create what the company calls a Google Maps for data: a topographic map of relationships between the data in a data set, based on real clusters. Supervised Machine Learning trains the system to detect certain kinds of patterns, say, the kinds of issues customers are complaining about.
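As a rough illustration of that unsupervised step, the sketch below reduces a high-dimensional data set to two “map” coordinates and groups records into clusters so related items land near each other. The choice of PCA and k-means, and the synthetic data, are assumptions for the example; the article doesn’t say which algorithms Bottlenose actually uses.

```python
# Minimal sketch, not Bottlenose's implementation: project a high-dimensional
# data set onto two "map" coordinates and group the records into clusters.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA

# Synthetic stand-in for an enriched data set with many features per record.
features, _ = make_blobs(n_samples=500, n_features=50, centers=5, random_state=0)

map_coords = PCA(n_components=2).fit_transform(features)                     # 2-D positions
cluster_ids = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(features)

for position, cluster in list(zip(map_coords, cluster_ids))[:3]:
    print(f"map position {position.round(2)} -> cluster {cluster}")
```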

“People talk about a screen issue or keyboard problems in thousands of different ways,” he says. “You can train pattern classifiers using supervised Machine Learning to understand that something is a battery issue even if they don’t use the word ‘battery.’” A single click applies deep learning to train classifiers.
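The product trains these classifiers with Deep Learning; purely to illustrate the supervised idea, the sketch below uses a simple TF-IDF plus logistic-regression pipeline on invented complaint snippets to tag an issue even when the obvious keyword is missing.

```python
# Illustrative only: invented training snippets and a simple linear model
# stand in for the Deep Learning classifiers described in the article.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

complaints = [
    "won't hold a charge past lunch",       # battery issue, word "battery" absent
    "dies after two hours unplugged",       # battery issue
    "display flickers and goes black",      # screen issue
    "cracked glass right out of the box",   # screen issue
]
issue_labels = ["battery", "battery", "screen", "screen"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(complaints, issue_labels)

# New complaints that reuse some training vocabulary but not the label words.
print(classifier.predict(["dies quickly when unplugged", "the display flickers constantly"]))
```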


Finding entities, detecting patterns in the time-series data around those entities, clustering the results, and applying higher-level Machine Learning concepts together produce a mountain of potential insight for decision makers. At the top layer of Nerve Center is the real AI haven, where intelligent autonomous agents running continuously in the Cloud are dedicated to specific tasks, whether that’s looking for breaking news about a brand or tracking risk to a stock. The agents can validate what they find and notify the decision maker – usually a line-of-business person – of the finding and the evidence behind it, so that individual can take action. Or the system can automate some type of behavior or process based on prescribed, preset responses.
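Sketched in generic terms, and not as the product’s actual code, an agent of this kind boils down to a loop that pulls fresh data about one entity, validates what it finds, and either alerts a person or triggers a preset response. The helper functions below are placeholders you would wire to real data feeds and notification channels.

```python
# Generic sketch of the agent pattern described above; the data fetching,
# validation rule, and notification target are placeholder assumptions.
import time
from dataclasses import dataclass

@dataclass
class Finding:
    entity: str
    signal: str
    evidence: list

def fetch_latest_signals(entity):
    # Placeholder: pull fresh, enriched findings about the tracked entity.
    return [Finding(entity, "unusual spike in negative mentions",
                    ["source A", "source B"])]

def validate(finding):
    # Placeholder rule: require at least two independent pieces of evidence.
    return len(finding.evidence) >= 2

def notify_decision_maker(finding):
    # Placeholder: in practice this might message a line-of-business owner.
    print(f"[ALERT] {finding.entity}: {finding.signal} | evidence: {finding.evidence}")

def run_agent(entity, poll_seconds=300, max_cycles=2):
    """Watch one entity and surface validated findings (bounded loop for the demo)."""
    for _ in range(max_cycles):
        for finding in fetch_latest_signals(entity):
            if validate(finding):
                notify_decision_maker(finding)
        time.sleep(poll_seconds)

run_agent("ExampleBrand", poll_seconds=1)
```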

“The point is that Data Scientists and Analysts waste most of their time on data preparation and cleaning, then on prospecting through data to find something interesting,” he says. “It’s hard to keep up with changing data and then have to validate what they find. That’s all before they provide any conclusion or recommendation, which is the hard part. We want to automate all that.”

That makes insights more accessible to the decision makers who need them to take action, and it frees up the Data Scientist – for companies lucky enough to have one – to do even more creative deep dives into an issue.

Nerve Center’s Evolution

Nerve Center accomplishes this via what Spivack calls a radical new interface, “completely rebuilt to be a next-generation tool.” The platform features one layer for data cleansing, enrichment, and management; a cognitive layer for Advanced Analytics called Nerve Center Compute; and Nerve Center Discover, for enabling custom dashboards and deeper analytics dives.

A full SDK, APIs, and a new query language (BNQL, a flavor of SQL meant to speed productivity for those already familiar with SQL) are part of the latest version, as are solutions that package the system’s capabilities for specific audiences and their issues. These include Consumer Insights and Audience Intelligence; Competitive and Market Intelligence; Financial Industry Intelligence (FII); and Risk and Threat Intelligence. These solutions pump certain data sources, enrichments, and intelligent agents into a visual interface that can be accessed on mobile devices or the Web. “They look like purpose-built applications to answer specific problems,” Spivack says. “It’s like you had a Data Scientist working for you 24/7 for a specific purpose.”

The vision over time is to open Nerve Center up as a platform and ecosystem, Spivack says. “Ultimately there are so many use cases that are special cases of the same problem that enterprises won’t want a different vendor for each one,” he says. Today, Spivack says, its customers are primarily big companies, but the product is gaining traction not just at the enterprise level but with departmental buyers, too. These buyers don’t have the budgets or time – and certainly not the Data Scientists – to take on multimillion-dollar, multi-year projects to get answers to their business questions, so providing them an easy-to-use and affordable alternative is important.

“Most of the real problems that have money and value are in the various vertical parts of the enterprise,” Spivack says. “How to get them this capability without having to get the CIO involved, slowing things down and taking control out of [departmental buyers’] hands? Do it as SaaS and make it affordable.”

The end game, he says, is ultimately giving humans more time to use their brains “for the creative work that I don’t think machines will be good at anytime soon. But they are great at things like recommendations and at predictions that you can make your decisions from. The point here is to make machines better at what they do.”
