
Tech Primer: Serverless Computing Converging with FaaS

By Kong Yang  /  October 23, 2017

Traditional enterprise applications and services can be complex and monolithic, requiring costly maintenance and customization to scale. In recent years, a number of organizations have begun transitioning to service-oriented architectures and serverless computing to build their applications and help overcome these challenges. The challenge is that it's almost impossible to find a definition of serverless computing that hasn't changed overnight.

In this primer, I’ll seek to unpack the characteristics of these new technology constructs and provide a guide to navigate this rapidly transforming landscape.

Defining Serverless Computing

Serverless computing is an execution model in which the Cloud service provider dynamically manages the allocation of machine resources. It is an application architecture that depends on third-party services. Examples are Amazon Web Services’ Lambda and Microsoft Azure’s Functions. It’s sometimes labeled as Backend as a Service (BaaS) because of the dependency on third-party services, or as Functions as a Service (FaaS) because of its dependency on custom code that runs in ephemeral containers.
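To make the FaaS model concrete, here is a minimal AWS Lambda-style handler in Python. The platform allocates a container, calls the handler once per triggering event, and reclaims the container when it goes idle; the event payload and response shape below are illustrative assumptions, not a specific provider's contract.

```python
import json

def handler(event, context):
    """Entry point the platform invokes for each triggering event.

    The function is stateless: everything it needs arrives in the
    event payload, and nothing persists in the container afterward.
    """
    # Hypothetical event field; real events depend on the trigger source.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Note that the developer writes only this function; provisioning, scaling, and server management are the provider's responsibility, which is what the "serverless" label is gesturing at.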

The concept of serverless creates confusion and varying opinions for two key reasons. First, the two constructs — BaaS and FaaS — are often used to define an instance of serverless computing based on its characteristics, which can lead to numerous interpretations of whether something is BaaS, FaaS, or serverless. Second, the term serverless is a misnomer because while developers no longer need to worry about server management, servers are still required to execute the runtime code.

Unfortunately, there are no industry-standard bodies to define these classifications, leading to overlapping or slightly different interpretations.

Serverless and FaaS are converging on the definition Mike Roberts offered in 2016: applications whose server-side logic is written by the app developer but runs in stateless containers that are event-triggered, ephemeral, and fully managed by a third-party provider, all of which FaaS encompasses. The continued growth of AWS Lambda is a clear example of serverless merging with FaaS. I'll be using FaaS as shorthand for serverless going forward.

For IT professionals, it’s important to understand the subtle nuances and characteristics and focus on the core issue: what can FaaS do for your organization?

The Case for FaaS

To help illustrate the impact of FaaS, think of how the retail world and its user experience has transformed. It’s becoming more critical than ever to provide shoppers with customized and seamless consumption experiences. And FaaS is emerging as an important delivery vehicle.

In a traditional online transaction without FaaS, a typical tiered e-commerce site has three main elements: the client side, the web servers, and the database. The client side is frequently the consumer-facing website, which interacts with the web servers that handle everything from product webpages to authentication to cart management to application transaction processing. When a transaction occurs, it gets recorded to the back-end database, which serves as the system of record. Traditionally, the web application was monolithic—one contiguous apparatus with interdependent parts. Subpar performance, maintenance, or downtime in any one part affected the entire application and degraded the customer experience.

With a FaaS deployment, the same retailer can use multiple functions to create a tailored online shopping experience. In this example, the client side website coordinates with a series of functions that establish and customize the customer relationship based on triggered events. Based on the interactions customers might have taken in their browsers — whether they’ve been searching for a specific product or clicking through different product areas — the product database can narrow the options displayed to customers, including best sellers in those areas, product comparisons, and product reviews.

Specifically, a FaaS function is a single-purpose piece of code that allows organizations to integrate and deliver consistent functionality around a single action, such as resizing an image, transforming data, searching, or encoding video. Using FaaS, the search feature consists of two functions: one to index the content and another to search the content via an API. This abstracts indexing and searching away from the product database, decouples the search results from the landing-page web experience, and runs only when someone conducts a search. If the search function fails, it doesn't take down the entire application. Not only can the user continue with the e-commerce experience, but there is also no need to restart virtual machines to replace downed application servers.
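The two-function search feature described above can be sketched as a pair of single-purpose handlers. The in-memory index, the event shapes, and the product fields here are illustrative assumptions; in a real deployment the index would live in a managed store (e.g., a hosted search service) rather than in the container, since FaaS containers are ephemeral.

```python
# Stand-in for a persistent search index; a real deployment would use an
# external store because function containers hold no durable state.
INDEX = {}

def index_handler(event, context):
    """Triggered when a product is added or updated; indexes its text."""
    product = event["product"]
    for word in product["description"].lower().split():
        INDEX.setdefault(word, set()).add(product["id"])
    return {"indexed": product["id"]}

def search_handler(event, context):
    """Triggered by an API call; returns matching product IDs."""
    term = event["query"].lower()
    return {"results": sorted(INDEX.get(term, set()))}
```

Because each handler does exactly one thing, a failure in `search_handler` leaves indexing, the product pages, and checkout untouched, which is the decoupling argument made above.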

Benefits and Drawbacks

The benefits that FaaS can provide to enterprises are:

  • Reduced operating expenses: Because code runs on a third-party platform, companies can shrink their data center footprint and spend.
  • Efficient scalability: Code runs on cloud infrastructure backed by juggernauts like AWS or Azure, so elasticity is built into the platform. Plus, you only pay for what you consume.
  • Decreased time to market: Developers focus solely on coding single-purpose functions, code that can be reused. By reducing the time spent developing or upgrading the entire application stack and maintaining the underlying infrastructure, applications can reach market significantly faster.

However, there can be drawbacks to relying on a third-party platform, no matter how powerful or cost-effective. For example:

  • Relinquished control: Because the AWS Lambdas or Azure Functions of the world control their own environments, they can remove or update functionalities, they can push forced upgrades as their APIs change, and they can increase costs.
  • Vendor lock-in: It's no simple task to move functions from one vendor to another. Once your functions are built on Lambda, it will take heavy lifting to move them to Azure Functions if circumstances change, such as rising costs or new limitations AWS places on your functions.
  • Reduced monitoring and troubleshooting: The monitoring and troubleshooting capabilities provided by your vendor may not be enough to surface the single point of truth in your function.

Fitting FaaS Into Your Environment

If you’re not using FaaS in your applications, having a baseline knowledge of your existing applications’ behavior and workloads today will help to identify specific functions where FaaS could be utilized.

If FaaS already exists in your organization, you're potentially at the mercy of the vendor's platform for monitoring options. It's still critical to maintain a holistic application view: functions are simply one part of the tiered application. Thus, using FaaS makes it more important than ever to apply monitoring as a discipline across the stacks you control. For instance, with an end-to-end monitoring solution you can baseline the performance and latencies going into the FaaS platform, so that if performance degrades coming out of the platform, you can pinpoint the issue and raise it with the service provider under the SLA.
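A minimal sketch of that monitoring discipline: time every call into the FaaS platform from the stack you do control, so latency regressions can be attributed to the provider and raised against the SLA. The `invoke_fn` callable and the 500 ms threshold are illustrative assumptions, not a vendor API.

```python
import time

def timed_invoke(invoke_fn, payload, slo_ms=500.0):
    """Invoke a function endpoint and record round-trip latency.

    invoke_fn is whatever client call reaches your FaaS platform
    (an HTTP request, an SDK invoke, etc.); slo_ms is the latency
    budget you agreed with the provider.
    """
    start = time.monotonic()
    response = invoke_fn(payload)
    latency_ms = (time.monotonic() - start) * 1000.0
    # In a real deployment this record would be shipped to your
    # monitoring system rather than returned to the caller.
    return {
        "response": response,
        "latency_ms": latency_ms,
        "slo_breached": latency_ms > slo_ms,
    }
```

Keeping these measurements on your side of the boundary means you are not solely dependent on whatever telemetry the vendor chooses to expose.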

Moving forward, vendors may apply more insights and comprehensive monitoring tools, such as AWS X-Ray™, into their platforms, but even so, your business should continue to leverage tried-and-true monitoring principles to achieve an efficient workload.

In Closing

As the definition of FaaS evolves, there's little debate about the technology's promise for the future of application development: cloud and hybrid IT infrastructure and container technologies have transformed enterprise applications, bringing flexibility and scalability to organizations.

About the author

Kong Yang, Head Geek at SolarWinds. Kong has over 20 years of IT experience specializing in virtualization and cloud management. He is a VMware vExpert™, Cisco® Champion, and active contributing thought leader within the virtualization community. Yang's industry expertise includes performance tuning and troubleshooting enterprise stacks, virtualization sizing and capacity planning best practices, community engagement, and technology evangelism. Yang is passionate about understanding the behavior of the entire application ecosystem—the analytics of the interdependencies as well as qualifying and quantifying the results to support the organization's bottom line. He focuses on virtualization and cloud technologies; application performance; hybrid cloud best practices; vehicles for IT application stacks such as containers, hypervisors, and cloud-native best practices; DevOps conversations; converged infrastructure technologies; and data analytics.
