
Iterations are the Currency of Software Innovation

By Pete Johnson  /  May 22, 2018


In the modern tech economy, every business is a software business.  Why?  Because companies know that the easiest way to introduce innovation into a marketplace is with software.  Instead of waiting months to make a change to a physical piece of equipment, software can be distributed quickly and widely to smartphones or programmable manufacturing equipment. This is why companies of every size currently have openings for software engineers.

But if you’re spending all that money on developer salaries, how do you maximize the amount of innovation you get out of them?  It turns out, iterations are the currency of software innovation.

It’s All About the At-Bats

Venture capitalists are in the business of finding innovation. Most of them will tell you that for every ten companies they invest in, they are happy if one hits it big.  One of the things the public Cloud did for the VC community is let them take more swings of the bat by funding more companies at a lower capitalization, because those start-ups can begin without having to purchase hardware.  More at-bats, to continue the baseball analogy, mean more innovation.

Applying that same hit rate to software development, any given release has roughly a 10% chance of containing an innovation that sticks with its intended audience.  So is it better to have four chances at innovation a year with quarterly releases, twelve chances with monthly releases, or 52 chances with weekly releases?  The strategic answer is obvious: more releases and more iterations of software produce more chances at innovation.  Tactically, though, how do you do that?
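If each release independently carries that 10% chance of containing a sticky innovation, the odds of landing at least one hit in a year can be sketched with a few lines of arithmetic (the 10% figure is the article's working assumption, not a measured rate):

```python
def chance_of_innovation(releases_per_year: int, hit_rate: float = 0.10) -> float:
    """P(at least one successful release) = 1 - (1 - p)^n."""
    return 1 - (1 - hit_rate) ** releases_per_year

for cadence, n in [("quarterly", 4), ("monthly", 12), ("weekly", 52)]:
    print(f"{cadence}: {chance_of_innovation(n):.1%}")
# quarterly: 34.4%, monthly: 71.8%, weekly: 99.6%
```

The compounding is the whole point: moving from quarterly to weekly releases takes the odds of at least one hit per year from about a one-in-three shot to a near certainty, under this simple independence assumption.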

Maximizing Iterations: From Monoliths to Microservices

In the early 1990s, when most software ran in data centers on physical hardware, iteration speed took a back seat to risk mitigation.  Back then, physical servers had to be treated like scarce resources: they were the only way to provide a unit of compute on which to run a software stack, and replacing that unit of compute took months.

Back then, components of a monolithic application typically communicated with each other within the same memory space or over client/server connections using custom protocols.  All the pieces were moved into production together to minimize risk. The side effect was that if one component had issues, the entire application had to be backed out, which further limited iteration speed.

Now, virtual machines can be created in minutes and containers in seconds, changing the way developers think about application components.  Instead of relying on in-memory or custom protocol communication, each component can expose an HTTP-based API that acts as a contract between the components.  As long as that contract doesn't change, the components can be released independently of one another. If every component sits behind its own load balancer, it can also be scaled independently, and teams can take advantage of rolling deployments, where old instances of a component are removed from behind the load balancer as new ones are injected.
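To make the idea of an API contract concrete, here is a minimal sketch of one such component using only the Python standard library. The service name, URL path, and JSON payload are hypothetical; in practice the "contract" is whatever request and response schema the teams agree on:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class InventoryHandler(BaseHTTPRequestHandler):
    """One component's HTTP API; the path and payload shape are the contract."""

    def do_GET(self):
        if self.path == "/api/v1/stock":
            # As long as this response shape stays stable, callers are unaffected
            # by how (or how often) this component is redeployed.
            body = json.dumps({"sku": "ABC-123", "quantity": 42}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the sketch quiet

def serve(port: int = 8080) -> HTTPServer:
    """Create the component's server; callers depend only on its URL contract."""
    return HTTPServer(("0.0.0.0", port), InventoryHandler)
```

Running `serve(8080).serve_forever()` starts the component; another team's service can consume `/api/v1/stock` without knowing anything about this component's internals, release schedule, or instance count.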

These are the modern tenets of a microservices-based architecture. Microservices are more loosely coupled than their monolithic predecessors, thanks to those API contracts, enabling faster iterations.

Kubernetes is a Big Deal (and so is Serverless)

If you have hundreds, if not thousands, of containers to manage for all these microservices, you need a way to distribute them across different physical or virtual hosts. You have to figure out naming and scheduling, and you can optimize networking along the way: components that land on the same host can talk to each other without packets ever going out to the network card.
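The scheduling half of that problem can be illustrated with a deliberately naive sketch. Real schedulers such as the one in Kubernetes also weigh CPU and memory requests, affinity rules, and node health, none of which appears here:

```python
# Naive container scheduler: round-robin each service's replicas across
# hosts so that one host failure doesn't take out a whole service.
from collections import defaultdict

def schedule(replicas: dict, hosts: list) -> dict:
    """Place replicas onto hosts round-robin; returns host -> containers."""
    placement = defaultdict(list)
    slot = 0
    for service, count in replicas.items():
        for i in range(count):
            host = hosts[slot % len(hosts)]
            placement[host].append(f"{service}-{i}")
            slot += 1
    return dict(placement)

print(schedule({"cart": 3, "search": 2}, ["host-a", "host-b", "host-c"]))
# {'host-a': ['cart-0', 'search-0'], 'host-b': ['cart-1', 'search-1'],
#  'host-c': ['cart-2']}
```

Even this toy version shows why a clustering platform is worth adopting rather than building: the placement logic has to be re-run every time a host dies or a new release rolls out.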

This is why Kubernetes is such a big deal and why Google (through GKE), AWS (through EKS), and Cisco (through CCP),  among others, are so bought into the container clustering platform.  And again, it’s all in the name of iterations, so that development teams can more loosely couple their components and release them faster as a way of finding innovation.

But what’s next?  The big deal over serverless architectures is that they could be the next step in this evolution.  Instead of coupling components via API contracts, serverless functions are tied together through event gateways. Instead of having multiple instances of a component sitting behind a load balancer, functions sit on disk until an event triggers them into action.  This requires a far more stateless approach to building the logic inside the individual functions.  It’s an even looser coupling than microservices, with potentially better utilization of the underlying physical servers, since the functions stay at rest on disk unless they are needed.
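The stateless style that serverless demands can be sketched as follows. The event shape and the toy dispatch loop are hypothetical stand-ins for what a platform's event gateway would provide; the point is that the function holds no state between invocations and reads everything it needs from the event itself:

```python
def handle_order_event(event: dict) -> dict:
    """Invoked once per event; all inputs arrive in the event payload."""
    total = sum(item["price"] * item["qty"] for item in event["items"])
    return {"order_id": event["order_id"], "total": total}

def dispatch(events):
    """Toy event gateway: the function sits idle until an event arrives."""
    return [handle_order_event(e) for e in events]

results = dispatch([
    {"order_id": "o-1", "items": [{"price": 5.0, "qty": 2}]},
])
print(results)  # [{'order_id': 'o-1', 'total': 10.0}]
```

Because the handler keeps nothing in memory between calls, the platform is free to run zero, one, or a thousand copies of it, which is where the utilization win over always-on component instances comes from.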

The Bottom Line

The best way to find a good idea is to iterate through ideas quickly and discard the bad ones once you’ve tried them.  This concept is driving application architecture, container clustering platforms, and serverless approaches, in an attempt to remove as much friction from the software development and release processes as possible.  The potential innovation gains from maximizing iterations are what just about every company is chasing these days and it’s all because iterations are the currency of software innovation.

About the author

Pete Johnson is a Technical Solutions Architect at Cisco, covering cloud and serverless technologies. Prior to joining Cisco as part of the CliQr acquisition, Pete worked at Hewlett-Packard for 20 years where he served as the HP.com Chief Architect and as an inaugural member of the HP Cloud team.
