
In part 1 of this post, we established the overall value proposition of defining reusable KPIs to assess and drive reuse within your API platform. Once an enterprise has the capability to establish baselines and monitor progress against them, the next step is to determine which metrics are worth tracking, where they break down, and how they relate to each other.

Here are eight KPIs to track API reuse:

Metric 1: Provisioned Users per API

– How it could be calculated – A weighted average of the number of credentialed users per API registered in an API exchange, with the weighting coming from the number of requests for a specific API in the last month (a minimal sketch follows this metric's list).

– How it relates to reuse – Given that each provisioned user represents a different context of use, this is a great first metric for demonstrating reusable components rather than a custom product delivered for a single use case.

– Ease of instrumentation – Instrumenting the number of provisioned users per endpoint is relatively simple given that API access is usually governed through a gateway that tracks and manages access credentials for provisioned users.

– Example usage – “Our provisioned users/API jumped by more than 20% last month as we on-boarded the new customers from the acquisition. The teams did a great job of designing for reuse in the first place and also being flexible with making changes to support a smooth integration.”

– Complementary metrics – While having a large number of users is clearly a good thing, there are several complementary metrics that can add color and help assess how valid this metric is in a given context:

  • % of credentialed users active (or inactive) – provisioned users could be overstated if some number of users haven’t made a request in a month or longer.
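
A minimal sketch of the calculation, assuming a hypothetical export from the gateway with one record per API (the field names are illustrative, not a real schema):

```python
# Provisioned users per API: average number of credentialed users,
# weighted by each API's request volume over the last month.
apis = [
    {"name": "orders-api",    "credentialed_users": 42, "requests_last_month": 120_000},
    {"name": "inventory-api", "credentialed_users": 7,  "requests_last_month": 15_000},
    {"name": "shipping-api",  "credentialed_users": 19, "requests_last_month": 60_000},
]

def weighted_users_per_api(records):
    """Request-weighted average of credentialed users per API."""
    total_weight = sum(r["requests_last_month"] for r in records)
    weighted_sum = sum(r["credentialed_users"] * r["requests_last_month"] for r in records)
    return weighted_sum / total_weight

print(f"Provisioned users per API: {weighted_users_per_api(apis):.1f}")
```

Weighting by traffic keeps low-volume, long-tail APIs from skewing the average one way or the other.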

Metric 2: Channel Types per API

– How it could be calculated – Assuming that all API calls include some form of device type in the request, a weighted average of the number of different device types could be computed, with the weighting coming from the number of requests for a specific API in the last month (sketched after this metric's list).

– How it relates to reuse – Given that each channel/device type represents a different context of use, this is an excellent metric to demonstrate reusable components rather than one-off code that is not adaptable outside of a single context.

– Example usage – “Our channel types/API ticked up last month as we launched the new watch apps. Given that overall traffic has bumped up as well, this means that our omnichannel efforts are succeeding in both reuse and creating new traffic rather than just cannibalizing or redirecting existing traffic.”

– Ease of instrumentation – Instrumenting the number of channel/device types per API requires that some work is done to capture, categorize and pass down channel/device types from an original request to any subsequently called services. This would require an organization to have some form of standards to capture, aggregate and pivot on this data.

– Complementary metrics – Supported channel/device types. If this number is anomalously high or low, it could bear further investigation, including checking whether all of the device types are in active use.
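
A minimal sketch, assuming the standards work described above is in place so that each logged request carries a normalized channel/device type (the log shape and names are illustrative):

```python
from collections import defaultdict

# One (api, channel) tuple per logged request over the last month.
requests_log = [
    ("orders-api", "ios"), ("orders-api", "web"), ("orders-api", "watch"),
    ("orders-api", "web"), ("inventory-api", "web"), ("inventory-api", "web"),
]

channels = defaultdict(set)  # api -> distinct channel types observed
volume = defaultdict(int)    # api -> request count (the weight)
for api, channel in requests_log:
    channels[api].add(channel)
    volume[api] += 1

total = sum(volume.values())
metric = sum(len(channels[api]) * volume[api] for api in channels) / total
print(f"Channel types per API (request-weighted): {metric:.2f}")
```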

Metric 3: APIs Consumed

– How it could be calculated – Assuming that all API calls include some form of originating request ID, a weighted average of the number of subsequently spawned API calls could be computed, with the weighting coming from the number of requests for a specific originating API in the last month (see the sketch after this metric's list).

– How it relates to reuse – Given that each consumed API is a component that is different from the originating composite API, there is some correlation to reuse rather than having each and every API span all the way from request to response independently. This is a reasonable proxy metric to demonstrate reusable components.

– Example usage – “Our average service spawns just over 4 other service calls. This isn’t a bad thing in itself, but I dove into the data and found one service that made more than 10 hops before returning data to the caller. Reuse and layering are good, but we probably want to do some refactoring here to get that service closer to being self-contained.”

– Ease of instrumentation – Instrumenting the number of APIs consumed per original API request would require some data science: looking at request logs and pivoting on a request ID. While this might take some work, it could be extremely useful in deriving a dependency map. It would also require the organization to have some form of standards to capture, aggregate, and pivot on this data.

– Complementary metrics – API performance metrics of all types (e.g., time to first byte, request completion time, 99th percentile completion time, etc.) would be valuable as complementary metrics here to root out unnecessary hops and dependencies.
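
A minimal sketch of the pivot, assuming a flat request log where every downstream call carries the originating request's ID (the log format is hypothetical):

```python
from collections import defaultdict

# One (request_id, originating_api, called_api) tuple per downstream call.
log = [
    ("r1", "checkout-api", "pricing-api"),
    ("r1", "checkout-api", "inventory-api"),
    ("r1", "checkout-api", "tax-api"),
    ("r2", "checkout-api", "pricing-api"),
    ("r3", "catalog-api",  "pricing-api"),
]

spawned = defaultdict(int)  # request_id -> count of spawned calls
edges = set()               # (caller, callee) pairs: a dependency map
for request_id, caller, callee in log:
    spawned[request_id] += 1
    edges.add((caller, callee))

# Averaging per request is equivalent to weighting each originating API
# by its share of requests in the sample.
print(f"APIs consumed per request: {sum(spawned.values()) / len(spawned):.2f}")
print("Dependency edges:", sorted(edges))
```

The same pivot yields the dependency map mentioned above essentially for free.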

Metric 4: API Catalog Count

– How it could be calculated – Assuming that all APIs are registered in a management portal, this raw metric should not be hard to extract programmatically at a predefined frequency for trending (a polling sketch follows this metric's list).

– How it relates to reuse – Given that each registered API is an individual component, there is some correlation to reuse as it explicitly tracks the number of reusable self-contained and functionally complete components. This is a reasonable proxy metric to demonstrate reuse.

– Example usage – “Our service catalog count has been rising at a steady rate for the last 2 quarters, but our overall traffic is somewhat flat. I know we are gearing up for the big product launch next month, but this might indicate that we could stand to improve our ‘design/build for reuse’ efforts with our dev teams.”

– Ease of instrumentation – Instrumenting the number of APIs registered within a management portal should not be hard to extract on a regular basis.

– Complementary metrics – Tracking deprecated or retired APIs over time would provide a valuable contrast to this raw number as would the number of new APIs added to the portal in any given month.
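
A minimal polling sketch; the portal endpoint and response shape are placeholders, not a real product API:

```python
import csv
import datetime
import json
import urllib.request

PORTAL_URL = "https://example.com/portal/apis"  # hypothetical endpoint

# Assumes the portal returns a JSON array with one entry per registered API.
with urllib.request.urlopen(PORTAL_URL) as response:
    catalog_count = len(json.load(response))

# Append one dated sample per run (e.g., scheduled via cron) for trending.
with open("catalog_count.csv", "a", newline="") as f:
    csv.writer(f).writerow([datetime.date.today().isoformat(), catalog_count])
```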

Metric 5: Requests/Month

– How it could be calculated – Take the total number of requests for an individual API or group of APIs and divide that number by the number of elapsed months that the platform has been running. A decision would have to be made about how to count the subsequent requests spawned by an original request (the sketch after this metric's list computes it both ways).

– How it relates to reuse – Tracking the number of times an individual or group of APIs are called over time is a good reuse metric as it demonstrates different usage instances and frequencies for each individual API or collection of APIs.

– Example usage – “Our requests/month jumped by 15% last month without any significant launch of new production services. The majority of this lift came from launching a new tablet experience that leverages existing omnichannel APIs.”

– Ease of instrumentation – Assuming that an API management tool of some nature is in use, regularly extracting and tracking the number of requests in a given time period should not be hard.

– Complementary metrics – Raw catalog count for production along with a breakdown of the requests/month for individual APIs would help to discern the nature of reuse for an individual API.
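
A minimal sketch that makes the spawned-request decision explicit by computing the metric both ways (the data shape is illustrative):

```python
# One record per request over the platform's lifetime; "spawned" marks
# calls triggered by another API rather than by an external client.
requests = [
    {"api": "orders-api",  "spawned": False},
    {"api": "orders-api",  "spawned": True},
    {"api": "pricing-api", "spawned": False},
]
months_live = 18  # elapsed months the platform has been running

originating = [r for r in requests if not r["spawned"]]
print(f"All traffic:      {len(requests) / months_live:.2f} requests/month")
print(f"Originating only: {len(originating) / months_live:.2f} requests/month")
```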

Metric 6: Time to First Call

– How it could be calculated – Subtract the date/time of a submitted feature request from the date/time of the first successfully completed production request (sketched after this metric's list). Aggregate numbers could be weighted based on the total number of subsequent historical calls.

– How it relates to reuse – Tracking the elapsed time between an idea and a proven use is a good proxy reuse metric as it demonstrates how simple or complex it is to create new reusable components within the ecosystem.

– Example usage – “Our time to first call improved by 50% last year as we rolled out more guidelines regarding enhancing existing services with forward and backward compatibility in mind, required use of API contracts and got alignment on a standard taxonomy for our domains. This means we are getting better at satisfying our business partners and clients while also making our platform more operationally sustainable.”

– Ease of instrumentation – This would be relatively difficult as it would require integration between work entry systems, deployment systems, and API management systems.

– Complementary metrics – Time to second client call, provisioned users, and channel types per API would lend color to the validity of this metric.
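
A minimal sketch of the subtraction, assuming the feature-request and first-call timestamps have already been joined from the work entry and API management systems (names and dates are made up):

```python
from datetime import datetime

# Submission time from the work entry system, keyed by the resulting API.
feature_submitted = {"orders-api": datetime(2023, 3, 1, 9, 0)}
# First successful production request, from gateway logs.
first_prod_call = {"orders-api": datetime(2023, 4, 12, 14, 30)}

for api, submitted in feature_submitted.items():
    elapsed = first_prod_call[api] - submitted
    print(f"{api}: time to first call = {elapsed.days} days")
```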

Metric 7: Time to Second Client Call

– How it could be calculated – Subtract the date/time of the first successfully completed production request from the date/time of the first successfully completed production request for a second client type (see the sketch after this metric's list). Aggregate numbers could be weighted based on the total number of subsequent historical calls.

– How it relates to reuse – Tracking the elapsed time between a service production launch and the time it has been used in an omnichannel scenario demarcates how quickly and frequently a component can be positioned for omnichannel uses.

– Example usage – “We are pretty good at designing with reuse in mind. Across our service catalog, the average service is used in a second context within 3 days of launch, which is just enough time to run and validate functional and performance tests.”

– Ease of instrumentation – This would be intermediate in complexity, given that a significant sample of data would have to be collected before the metric becomes valuable, and the instrumentation might need to account for the processes in a version control strategy.

– Complementary metrics – Usage per channel type per API would help establish the validity of this metric.
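
A minimal sketch: for each API, take the earliest successful request per client type, then the gap between the first two distinct client types (the log shape is hypothetical):

```python
from collections import defaultdict
from datetime import datetime

# One (api, client_type, timestamp) tuple per successful production request.
log = [
    ("orders-api", "web",   datetime(2023, 4, 12, 14, 30)),
    ("orders-api", "web",   datetime(2023, 4, 12, 15, 0)),
    ("orders-api", "ios",   datetime(2023, 4, 15, 9, 45)),
    ("orders-api", "watch", datetime(2023, 5, 2, 11, 0)),
]

first_seen = defaultdict(dict)  # api -> {client_type: earliest timestamp}
for api, client, ts in log:
    if client not in first_seen[api] or ts < first_seen[api][client]:
        first_seen[api][client] = ts

for api, clients in first_seen.items():
    times = sorted(clients.values())
    if len(times) >= 2:
        delta = times[1] - times[0]
        print(f"{api}: time to second client call = {delta.days} days")
```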

Metric 8: Time to MVP / Speed to Market

– How it could be calculated – Subtract the date/time of a submitted “new product definition” from the date/time of the first successfully completed production request within the context of the product launch (a value-stream comparison is sketched after this metric's list). Aggregate numbers could be weighted based on the total number of subsequent historical calls.

– How it relates to reuse – Tracking the elapsed time between an idea and a proven use is a good reuse metric as it can demonstrate how speed to market improves as reuse ramps up.

– Example usage – “Our speed to market has improved within the CPG value stream because our teams over there have embraced the reuse ethic. What can we do to help the Customer Experience team use this model for their value stream given that they’ve stayed flat on speed to market for the last 3 years?”

– Ease of instrumentation – This would be relatively difficult as it would require integration between work entry systems, deployment systems, product marketing calendars and API management systems.

– Complementary metrics – Test coverage metrics, along with APIs consumed, will help contextualize this metric, because speed to market is at the mercy of:

  • How quickly test/fix cycles can be completed
  • How often development teams build their own artifacts rather than reuse existing ones
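
A minimal sketch of the value-stream comparison from the example quote; the streams, products, and dates are made up:

```python
from datetime import date
from statistics import median

# One (value_stream, product_defined, first_production_request) per launch.
launches = [
    ("CPG",                 date(2023, 1, 10), date(2023, 2, 20)),
    ("CPG",                 date(2023, 5, 1),  date(2023, 6, 2)),
    ("Customer Experience", date(2023, 2, 1),  date(2023, 7, 15)),
]

days_by_stream = {}
for stream, defined, launched in launches:
    days_by_stream.setdefault(stream, []).append((launched - defined).days)

for stream, days in days_by_stream.items():
    print(f"{stream}: median time to MVP = {median(days)} days")
```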

Thence APIs and Reuse Go

Given the speed of advances in driving reusability in the API space, it is more critical than ever to start instrumenting and baselining your KPIs for APIs.

One reason, in particular, is to be able to baseline and contrast the productivity and quality of teams and value streams that adopt “modern practices” (like API-led connectivity and DevOps) against parts of the enterprise that are still utilizing legacy practices and bi-modal delivery for back office work. This ability is the leverage point for enterprise leadership to drive investment, debt remediation and optimization of the new approaches.

Developing and maintaining an environment of high trust within the enterprise is necessary for a healthy and vibrant technology and product development partnership; being able to harvest and showcase the strategic and tactical value being provided to the enterprise is a necessary ingredient of that trust.

MuleSoft is not only committed to blazing a trail in the creation of reusable APIs, we are also invested in developing the next generation of metrics for tracking, steering and growing the API economy within the enterprise. If you have had success with any other KPIs and metrics for monitoring and encouraging reuse, we would love to hear about them. Remember, sharing knowledge is required for the knowledge to be reused.

Learn more about how you can use approaches like API-led connectivity to further enhance API reusability.

