
Google’s Bet on API Metrics

By Pete Johnson

If developers are the new kingmakers, then APIs (application programming interfaces) and IDEs (integrated development environments) are the swords and shields they use to storm the castle. Both speak to developer productivity, but APIs have grown from early arguments over REST vs. SOAP formats into the de facto way in which someone with data or a transactional process makes it available to anybody else. Many attribute Amazon's long-term success to the so-called Bezos Mandate, which demanded that even internal teams expose functionality to one another through APIs. All of that is to say: APIs are the backbone of a day in the life of a developer.

That makes some of Google's choices relating to API metrics all the more interesting. Generally speaking, in order to scale a technology you have to know more about how it is being used. Traditionally this has applied to networks, storage, compute, and all sorts of other aspects of modern public and private Cloud usage. What Google is doing is applying that same metrics lens to APIs, which makes it far easier to keep an eye on those massively hybrid applications I wrote about in this same space in October.

How is Google doing this?

Apigee

In the hybrid application world growing around us, business logic and data can reside in very different places. Regardless of whether your core application logic resides in a public Cloud or on-prem, there is a rich set of information otherwise locked away in legacy data stores scattered across the enterprise.

Apigee, which IPO'd in 2015 before being acquired by Google in September of 2016, does a lot of very cool things that enable an Enterprise IT organization to expose legacy back-end systems as modern APIs, complete with RESTful interfaces, automated documentation creation, and access keys for security. One of the original API gateways, Apigee not only takes the friction out of making legacy data accessible, it can even make that data monetizable. As an example, Pitney Bowes has used Apigee to monetize its data in a way it couldn't previously.
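For a sense of what that looks like from the consumer's side, here is a minimal Python sketch of calling a legacy system through an Apigee-style facade. The host name, base path, and the apikey query parameter are all hypothetical placeholders; the real values depend entirely on how the API proxy and its security policies are configured.

```python
import requests

# Hypothetical Apigee-fronted endpoint: the host, base path, and the
# "apikey" query parameter are placeholders and depend on how the
# API proxy was configured.
BASE_URL = "https://acme-prod.apigee.net/v1/orders"
API_KEY = "REPLACE_WITH_CONSUMER_KEY"  # issued to the consuming app

def get_order(order_id: str) -> dict:
    """Fetch one order from the legacy system via its API facade."""
    resp = requests.get(
        f"{BASE_URL}/{order_id}",
        params={"apikey": API_KEY},  # checked by the gateway's key policy
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

print(get_order("12345"))
```

The point is that the consumer sees a clean REST endpoint and an access key, never the legacy system hiding behind it.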

At the heart of Apigee is its metrics collection through its Analytics Services, which an IT Ops person can use to explore how the APIs exposed through the toolset are being used by different audiences. Imagine taking a legacy Oracle database or SAP instance and not only using Apigee to expose its data through a modern API, but also being able to see who is using the data, when, and how much. This provides critical insight into how on-premises data is really being used by different constituents, insight that wouldn't be possible, or nearly as secure, if access were granted directly to those information stores.
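As a rough illustration, the sketch below queries the Apigee Edge management API's stats endpoint for per-proxy traffic counts. The organization name, environment, credentials, and time range are placeholders, and the response shape assumed in the loop is abbreviated, so treat this as a starting point rather than a drop-in script.

```python
import requests

# Placeholder organization, environment, and credentials for the
# Apigee Edge management API.
ORG, ENV = "acme", "prod"
MGMT = "https://api.enterprise.apigee.com/v1"

resp = requests.get(
    f"{MGMT}/organizations/{ORG}/environments/{ENV}/stats/apiproxy",
    params={
        "select": "sum(message_count)",  # total API calls per proxy
        "timeRange": "01/01/2018 00:00~01/31/2018 23:59",
        "timeUnit": "day",
    },
    auth=("admin@example.com", "password"),  # management credentials
    timeout=30,
)
resp.raise_for_status()

# Assumed (abbreviated) response shape: one entry per API proxy,
# each carrying per-day call counts.
for env_block in resp.json().get("environments", []):
    for dim in env_block.get("dimensions", []):
        metric = dim["metrics"][0]
        print(dim["name"], metric["values"])
```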

Service Catalog and GCP Broker

Apigee can gather metrics for on-premises data, but what about public Cloud services like sentiment analysis, message queues, or image recognition, consumed by business logic that may itself run on-premises or in the Cloud? That's where Google's Service Catalog and GCP Broker toolsets come in: they ease consumption of public Cloud services and aggregate that usage across the multiple applications an organization might be managing.

A typical piece of overhead for a developer consuming a public Cloud API of any kind is binding the business logic code to the public Cloud service through a set of identifiers and access keys, supplied via configuration files at deployment time. As Google's Martin Gannholm demonstrated at the Next conference last year, there is a better way to do this in the Kubernetes world: have a central administrator inject those bindings into an application component. The immediate improvement for the developer over the traditional binding model is that the overhead is dramatically reduced. For the IT Ops administrator, the model improves security, because access keys to the public Cloud API can be rotated without redeploying new configuration files, and it offers metrics insight when transactions against different public Cloud APIs are aggregated across many different applications.
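To make the contrast concrete, here is a hedged Python sketch of application code reading credentials that a Kubernetes service binding has injected into its pod, rather than shipping them in its own configuration files. The mount path and the privateKeyData file name are illustrative assumptions; they depend on how the binding's Secret is projected into the pod spec.

```python
import base64
import json
import os
from pathlib import Path

# Assumed mount point for the binding's Secret; set via the pod spec.
BINDING_DIR = Path(os.environ.get("BINDING_DIR", "/var/run/secrets/gcp-binding"))

def load_service_account() -> dict:
    """Read the injected service-account key from the mounted Secret.

    Some brokers hand back the key as plain JSON and others as
    base64-encoded JSON, so try the plain form first and fall back
    to decoding.
    """
    raw = (BINDING_DIR / "privateKeyData").read_text()
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return json.loads(base64.b64decode(raw))

creds = load_service_account()
print("bound to project:", creds.get("project_id"))
```

Because the credentials are read at runtime from a projected Secret, an administrator can rotate the underlying key and the application simply picks up the new value; nothing is rebuilt or redeployed.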

Imagine having a single tool an administrator could go to in order to see public Cloud Big Data usage across all applications, or to spot the spike in AI service usage that drove up a public Cloud bill in a particular month. Much as we saw with the internal data usage exposed by Apigee, these sorts of metrics can help an organization gain a deeper understanding of how public Cloud services are being consumed.

Google now has two sets of tools in place for collecting API metrics: one for on-prem legacy data that has been strategically exposed, and one for public Cloud service usage. Apigee provides a mechanism for creating new revenue streams using these metrics. The Service Catalog and GCP Broker can improve developer productivity by making service binding easier, while also showing aggregate public Cloud API usage across multiple applications. Together, the offerings give an organization a broad view of how developers are consuming APIs.
