Cloud Computing was initially leveraged to provide and access Service Oriented Architecture, which enabled organizations to attain resources with reduced costs, decreased infrastructure, and simplified architecture. Suddenly, advanced analytics capabilities (and many others) were in the hands of laymen.
Big Data has emerged in earnest in the past couple of years, and with that emergence the Cloud became the architecture of choice. For all but the most well-financed organizations, the virtual resources of the Cloud, with their nearly infinite scalability and on-demand pay structure, are the only feasible means of accessing the massive quantities of Big Data.
However, the sheer quantities of Big Data have continued to increase in the midst of the burgeoning Internet of Things (IoT) and the continuous connectivity demands found in the wake of the consumerization of IT and the Bring Your Own Device and Choose Your Own Device phenomena. Consequently, the conventional paradigm of Cloud Computing is no longer sufficient.
There are some data-savvy, well-financed organizations that have invested in technologies designed to perform analytics at the distributed sites where data originates. Nonetheless, the typical Cloud model is to usher data from myriad disparate points into a few centralized data centers for computation before transmitting the results back to an increasingly large set of distributed consumers.
Fog Computing represents a new paradigm for Cloud Computing: performing whatever computations are necessary at the fringe of the Cloud for a host of benefits, including less bandwidth and networking strain, reduced costs, decreased latency, and greater access. Such computing extends the Cloud’s capabilities while decreasing the requirements of the organizations leveraging it.
Perhaps the most cogent example of Fog Computing involves the Industrial Internet and the copious quantities of data generated by the real-time monitoring of continuously data-generating equipment assets in any variety of industries. Fog Computing enables organizations to monitor that equipment at the source via real-time and predictive analytics—most of which will reveal that the equipment is operating as intended. Inordinate amounts of data are processed at the source without constraining an organization’s network resources by shuttling that data back and forth to a data center.
However, the moment a need for maintenance or an impending failure is detected (or presaged via predictive analytics), the comparatively small amount of data indicating that there is a problem is transmitted to centralized facilities so appropriate action can occur. Additional Fog boons include:
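The pattern just described—process everything locally, forward only the exceptions—can be sketched in a few lines. This is a minimal illustration, not a real monitoring system; the sensor names, the vibration metric, and the threshold are all hypothetical:

```python
# Hypothetical sketch of Fog-style edge filtering: readings are analyzed
# locally, and only anomalies are forwarded to the central data center.
VIBRATION_LIMIT = 0.8  # assumed threshold for "equipment operating as intended"

def process_at_edge(readings):
    """Analyze readings locally; return only those worth sending upstream."""
    to_transmit = []
    for reading in readings:
        if reading["vibration"] > VIBRATION_LIMIT:  # possible maintenance need
            to_transmit.append(reading)
        # normal readings are analyzed and discarded at the edge
    return to_transmit

readings = [
    {"sensor": "pump-1", "vibration": 0.2},
    {"sensor": "pump-2", "vibration": 0.9},  # anomalous
    {"sensor": "pump-3", "vibration": 0.3},
]
print(process_at_edge(readings))  # only pump-2 crosses the threshold
```

The point is the asymmetry: all three readings are examined at the edge, but only one small record ever traverses the network.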
- Cost: The bandwidth required for regularly transmitting decentralized data (which could originate from anywhere in the country or in the world) to centralized locations is expensive and can create bottlenecks as various enterprise use cases contend for those same resources. Fog Computing requires significantly less movement of data, which frees up the network for other uses.
- Expedience: By processing data closer to its source, Fog Computing can significantly expedite computations and processes—enabling organizations to go from chimeric ‘near real-time’ processing speeds to true real-time processing. Again, the proliferation of mobile devices and the demands projected for the IoT make time a critical component of service delivery and customer satisfaction. IoT applications such as vehicle-to-vehicle communication require as little latency as possible.
- Security and Governance: The less frequently and the less distance that data has to travel, the more secure it is. Additionally, there are strict regulatory requirements about where data is stored and accessed (which vary by industry and country) to which local Fog Computing at the extremities of the Cloud can innately conform.
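The cost benefit above is easy to quantify with a back-of-the-envelope calculation. Every figure here is an assumption chosen only to make the arithmetic concrete (fleet size, payload size, and anomaly rate will vary wildly by deployment):

```python
# Hypothetical comparison of daily data volumes transmitted under a
# centralized Cloud model versus a Fog model that filters at the edge.
sensors = 10_000            # assumed fleet size
readings_per_day = 86_400   # one reading per second, per sensor
bytes_per_reading = 100     # assumed payload size
anomaly_rate = 0.001        # assumed fraction of readings worth transmitting

centralized_bytes = sensors * readings_per_day * bytes_per_reading
fog_bytes = int(centralized_bytes * anomaly_rate)

print(f"Centralized: {centralized_bytes / 1e9:.1f} GB/day")  # 86.4 GB/day
print(f"Fog:         {fog_bytes / 1e9:.3f} GB/day")          # 0.086 GB/day
```

Even under these toy assumptions, the Fog model moves three orders of magnitude less data over the network, which is where the bandwidth and cost savings come from.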
Depending on one’s perspective, some of the advantages of Fog Computing function as disadvantages. Detriments associated with Fogs include:
- Physical locality: There are some who would argue that the whole point of utilizing the Cloud is to access data and resources from anywhere, regardless of physical location. Although Fog Computing merely functions as a more selective way of ascertaining which data becomes centralized and which stays local, some perceive that the limitations of the latter are disadvantageous in terms of access.
- Security: Security has long been regarded as the Achilles heel of the Cloud, but with a number of developments in this space within the past several years, issues of security really amount to a matter of trust. Certain organizations feel more comfortable having their data in a centralized location rather than in remote, disparate ones—although the former option can complicate Data Governance when considered on a global scale.
- Confusion: There is also the perspective that facilitating Fog Computing merely adds to the number of Cloud options (public, private, hybrids, cloudlets, etc.) and is needlessly complicating architecture that is already complex enough. Conceivably, such pundits would harbor the same opinion about the IoT in general.
Hardware and Software
The Fog Computing paradigm encapsulates several components: data, application services, storage, computing power, analytics, networking, and others. Therefore, it requires a combination of hardware and software to provision these demands between the end device and the Cloud, situating computation at the Cloud’s extremities on the basis of physical proximity. The vendor that is arguably the leader in this space is Cisco, which may well have coined the Fog term and has issued its own platform, IOx, designed to account for the IoT. The hardware components essential to Fogs include wireless routers, IP video cameras, switches, and computer chips—all of which can be endowed with various software applications to handle the computing needs for data generated near these devices. These devices function as gateways to the Cloud, since they ultimately determine which data is transmitted to centralized locations and which remains in local Fogs. Additional vendors in the Fog space include IBM, Intel, EMC, and others.
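The gateway role described above—deciding which data goes upstream and which stays local—can be sketched as a simple routing function. The `actionable` flag, the in-memory stores, and the message shapes are all hypothetical stand-ins for whatever a real gateway device would use:

```python
# Hypothetical sketch of a Fog gateway (e.g. a smart router or switch) that
# decides which data stays in the local Fog and which is sent to the Cloud.
LOCAL_STORE = []   # stands in for storage on the edge device
CLOUD_QUEUE = []   # stands in for an uplink to a centralized data center

def route(message):
    """Forward only messages flagged as actionable; keep the rest local."""
    if message.get("actionable"):
        CLOUD_QUEUE.append(message)   # small, important payloads go upstream
    else:
        LOCAL_STORE.append(message)   # bulk telemetry remains in the Fog

for msg in [{"id": 1, "actionable": False},
            {"id": 2, "actionable": True},
            {"id": 3, "actionable": False}]:
    route(msg)

print(len(LOCAL_STORE), len(CLOUD_QUEUE))  # → 2 1
```

In practice this decision logic is what the software layered onto routers, cameras, and switches implements: the hardware sits at the data source, and the software determines the split between local Fog and centralized Cloud.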
Decentralized vs. Centralized
The essence of Fog Computing is that it represents what is perhaps the triumph of a decentralized approach to computing, with its attendant advantages, over a centralized approach and the traditional concerns associated with the latter. The IoT and the increasingly mobile nature of communication, work, and personal interactions require a shift in architecture that reflects that decentralized perspective. This aspect of Fog Computing coincides nicely with distributed computing and even certain elastic computing capabilities. It makes sense that resources provisioned on demand can be provisioned most efficiently in a local fashion, which is integral to Fog Computing’s advantages—especially when those resources include the enormous data quantities of continuously streaming IoT applications and other facets of Big Data. In this respect Fog Computing is a vital supplement to, not a replacement of, Cloud Computing.
Fog Computing represents an extremely significant evolution in Cloud Computing and in computing in general. Its emergence emphasizes the ascendance of a decentralized model of computing that is more flexible and agile than the traditional centralized paradigm. Such agility and flexibility are necessary as Big Data applications take the form of the IoT, with its low-latency or no-latency requirements. Fog Computing may not prove a panacea for the unique demands of the IoT and the inexorable movement towards mobile computing. But it at least recognizes and attempts to address many of the limitations of centralized models, which only attract more and more traffic—with less and less bandwidth and networking capability—as Big Data continues to grow. It provides a viable architectural solution to these concerns, one that may even improve in the near future.