In 1865, Richard Millar Devens presented the phrase “Business Intelligence” (BI) in the Cyclopædia of Commercial and Business Anecdotes. He used it to describe how Sir Henry Furnese, a banker, profited from information by gathering and acting on it before his competition. Much later, in 1958, IBM computer scientist Hans Peter Luhn wrote an article describing the potential of gathering Business Intelligence through the use of technology. Business Intelligence, as it is understood today, uses technology to gather and analyze data, translate it into useful information, and act on it “before the competition.” Essentially, the modern version of BI focuses on technology as a way to make decisions quickly and efficiently, based on the right information at the right time.
In 1968, only individuals with extremely specialized skills could translate data into usable information. At this time, data from multiple sources was normally stored in silos, and research was typically presented in fragmented, disjointed reports that were open to interpretation. Edgar Codd recognized this as a problem, and published a paper in 1970 that altered the way people thought about databases. His proposal of a “relational database model” gained tremendous popularity and was adopted worldwide.
The Decision Support System (DSS) was the first database management system to be developed, and many historians suggest the modern version of Business Intelligence evolved from it. The number of BI vendors grew in the 1980s, as business people discovered the value of Business Intelligence. An assortment of tools was developed during this time, with the goal of accessing and organizing data in simpler ways. OLAP, Executive Information Systems, and data warehouses were some of the tools developed to work with DSS.
Online analytical processing (OLAP) is a system that allows users to analyze data, from a variety of sources, while offering multiple paradigms, or perspectives. Databases configured for OLAP use a multidimensional data model, supporting complex analysis and ad hoc queries. The standard applications of OLAP include:
- business reporting for sales
- management reporting
- business process management (BPM)
- budgeting and forecasting
- financial reporting and similar areas
- new applications, such as agriculture
OLAP was quite popular because of the variety of ways it offered to assemble and organize information. As a SQL-based technology, however, it lost ground when NoSQL became popular. (At present, some companies, such as Kyvos Insights, Platfora, and AtScale, have layered OLAP onto a NoSQL base.) OLAP supports three basic operations:
- consolidation (roll-up)
- drill-down
- slicing and dicing
Consolidation involves combining data that can be stored and processed in multiple ways. For example, all branch auto sales can be totaled by the auto sales manager, as a way to anticipate sales trends. On the other hand, the drill-down technique supports navigating through, and researching, the details. People can view the auto sales by color, style, or gas consumption. Slicing and dicing lets people take out (slice) specific data on the OLAP cube, and view (dice) those slices from different perspectives (sometimes called dimensions, as in “multidimensional”).
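These three operations can be sketched with a toy fact table in plain Python. The branch, color, and style fields and the sales figures below are hypothetical, chosen only to mirror the auto-sales example above; a real OLAP engine would run the same aggregations over a multidimensional cube.

```python
from collections import defaultdict

# Hypothetical fact table: one row per auto sale.
sales = [
    {"branch": "North", "color": "red",  "style": "sedan", "units": 12},
    {"branch": "North", "color": "blue", "style": "suv",   "units": 7},
    {"branch": "South", "color": "red",  "style": "suv",   "units": 9},
    {"branch": "South", "color": "blue", "style": "sedan", "units": 5},
]

# Consolidation (roll-up): combine all branch sales into a single total.
total_units = sum(row["units"] for row in sales)

# Drill-down: break the total out along a finer dimension, e.g. color.
by_color = defaultdict(int)
for row in sales:
    by_color[row["color"]] += row["units"]

# Slice: fix one dimension (branch == "North"), then dice the slice
# by viewing it along another dimension (style).
north = [row for row in sales if row["branch"] == "North"]
north_by_style = defaultdict(int)
for row in north:
    north_by_style[row["style"]] += row["units"]

print(total_units)           # 33
print(dict(by_color))        # {'red': 21, 'blue': 12}
print(dict(north_by_style))  # {'sedan': 12, 'suv': 7}
```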
In the late 1970s, executives began using computer systems to research business information. This led to the development of software, called Executive Information Systems (EIS), to support upper management in making decisions. An EIS is designed to provide the appropriate, up-to-date information needed to “streamline” the decision-making process, and it emphasizes graphical displays and easy-to-use interfaces in presenting that information. The goal of an EIS was to turn executives into “hands-on” users who handle their own email, research, appointments, and reading of reports, rather than receiving this information through intermediaries. EIS gradually lost popularity as its practical limitations became apparent.
Data Warehouses started becoming popular in the 1980s, as businesses began using in-house Data Analysis solutions on a regular basis. (This was often done after 5 PM and on weekends, due to the limitations of computer systems at the time.) Prior to Data Warehousing, a significant amount of redundancy was needed to provide different people in the decision-making process with useful information. Data Warehousing significantly cut the amount of time needed to access data: data traditionally stored in a number of locations (often in departmental silos) could now be stored in a single location.
The use of Data Warehouses also helped in developing the use of Big Data. Suddenly, a massive amount of data in a variety of forms (email, internet, Facebook, Twitter, etc.) could be accessed from a single data store, saving time and money and unlocking business information that was previously unavailable. The potential of Data Warehouses for data-driven insights was huge: these insights increased profits, detected fraud, and minimized losses.
Business Intelligence Goes High Tech
Business Intelligence, as a technological concept, began shortly after the 1988 international conference, The Multiway Data Analysis Consortium, held in Rome. The conclusions reached at this conference jump-started efforts to simplify BI analysis while making it more user-friendly. A large number of BI businesses started up in response to the conference’s conclusions, each offering new BI tools. During this period, BI had two basic functions: producing data and reports, and organizing and visualizing them in a presentable way.
In the late 1990s and early 2000s, BI services began providing simplified tools, allowing decision makers to become more self-sufficient. The tools were easier to use, provided the functionality needed, and were very efficient. Business people could now gather data and gain insights by working directly with the data.
Business Intelligence vs Analytics
Currently, the two terms are used interchangeably. Both describe the general practice of using data in making informed, intelligent business decisions. The term Business Intelligence has evolved to represent a range of technologies supporting decision-makers within businesses. Analytics, on the other hand, has come to represent a broad range of tools for processing data, and acts as an umbrella phrase, covering data warehousing, enterprise information management, business intelligence, enterprise performance management, and governance.
Descriptive analytics describes, or summarizes, data, and is focused primarily on historical information. It describes the past, allowing for an understanding of how previous behaviors affect the present. Descriptive analytics can be used to explain how a company operates, and to describe different aspects of the business. In the best-case scenario, descriptive analytics tells a story with a relevant theme and provides useful information.
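At its simplest, descriptive analytics boils down to summarizing historical records. The sketch below, using hypothetical monthly revenue figures, computes a few common summary statistics with nothing but the Python standard library:

```python
import statistics

# Hypothetical monthly revenue (in thousands) for the past year.
monthly_revenue = [210, 198, 225, 240, 238, 260, 255, 270, 265, 280, 300, 310]

# Classic descriptive summary: what happened, in aggregate.
summary = {
    "total": sum(monthly_revenue),
    "mean": statistics.mean(monthly_revenue),
    "median": statistics.median(monthly_revenue),
    "best_month": max(monthly_revenue),
}
print(summary)
```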
Predictive analytics anticipates the future. It uses statistical data to supply companies with useful insights about upcoming changes, such as identifying sales trends and purchasing patterns, and forecasting customer behavior. Business uses normally include anticipating year-end sales growth, identifying which products customers might purchase together, and forecasting inventory totals. Credit scores offer an example of this type of analytics, with financial services using them to determine a customer’s probability of making payments on time.
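To make the idea concrete, the sketch below fits a least-squares trend line to hypothetical monthly sales and projects the next month. Real predictive analytics uses far richer statistical models, but the principle of extrapolating from historical data is the same:

```python
# Hypothetical units sold per month over one year.
months = list(range(1, 13))
units_sold = [100, 104, 109, 113, 118, 121, 127, 130, 136, 139, 145, 149]

# Ordinary least squares, computed by hand (no external libraries).
n = len(months)
mean_x = sum(months) / n
mean_y = sum(units_sold) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(months, units_sold))
         / sum((x - mean_x) ** 2 for x in months))
intercept = mean_y - slope * mean_x

def forecast(month):
    """Project units sold for a future month along the fitted trend."""
    return intercept + slope * month

print(round(forecast(13)))  # projected sales for month 13 -> 153
```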
Prescriptive analytics is a relatively new field, and is still a little hard to work with. These analytics “prescribe” several different possible actions and guide people toward a solution; they are all about providing advice. Essentially, they predict multiple futures and allow companies to assess many possible outcomes based upon their actions. In the best-case scenario, prescriptive analytics will predict what will happen, explain why it happens, and provide recommendations. Larger companies have used prescriptive analytics to successfully optimize scheduling, revenue streams, and inventory, in turn improving the customer experience.
Streaming Analytics is the real-time process of constantly calculating, monitoring, and managing data-based statistical information, and acting on it “before the competition.” This process involves knowing, and acting upon, events taking place in the marketplace at any given moment. As a new tool, it has significantly improved the flow of useful information to decision makers.
Data for Streaming Analytics can come from a variety of sources, including mobile devices (phones, tablets, laptops), the Internet of Things (IoT), market data, and transactions. It connects management to external data sources, allowing applications to combine and merge data into an application flow, or to update external databases with processed information, quickly and efficiently. Streaming Analytics supports:
- Minimizing damage caused by social media meltdowns, security breaches, airplane crashes, manufacturing defects, stock exchange meltdowns, customer churn, etc.
- Analyzing routine business operations in real time.
- Finding missed opportunities with Big Data.
- The option to create new business models, revenue streams, and product innovations.
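A minimal sketch of the core pattern: the hypothetical monitor below keeps a rolling window over a stream of readings and flags values that deviate sharply from the recent median. At a vastly larger scale, this is the kind of continuous calculation streaming platforms use to catch spikes such as fraud, security breaches, or sudden customer churn. The function name, window size, and threshold are all illustrative choices, not part of any particular product:

```python
from collections import deque
import statistics

def rolling_monitor(stream, window=5, threshold=2.0):
    """Flag readings that deviate sharply from the rolling median."""
    recent = deque(maxlen=window)  # sliding window over the stream
    alerts = []
    for value in stream:
        if len(recent) == window:
            baseline = statistics.median(recent)
            if abs(value - baseline) > threshold:
                alerts.append(value)  # act on the anomaly in real time
        recent.append(value)
    return alerts

# Hypothetical transaction volumes with one sudden spike and one drop.
readings = [10, 11, 10, 12, 11, 10, 30, 11, 10, 2, 11]
print(rolling_monitor(readings))  # [30, 2]
```

Using the median rather than the mean as the baseline keeps a single spike from distorting the window and triggering false alerts on the normal readings that follow it.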
One example of Streaming Analytics is the “WindyGrid,” developed by the city of Chicago (and built with MongoDB) to coordinate seven million data points, taken from various city departments. Chicago’s city staff can analyze data and anticipate where resources will be needed, allocating them appropriately, and providing an efficient response to problems. Staff can make quicker, more informed decisions and allocate resources more efficiently. The WindyGrid has revolutionized Chicago’s ability to understand, plan, and respond to a variety of situations in a cost-effective manner.