Tips for avoiding external and internal API performance issues
After convincing users to integrate with your system, the last thing you want is for them to suffer from API performance issues. Assure users that API performance isn't a problem.
Let's face it, software isn't perfect. No matter how reliably a product is built, or how thoroughly it is tested, it will still glitch or crash at one point or another. Defects are simply a fact of life.
Fortunately, as engineers, we have the tools to predict and compensate for these issues. Analytics, monitoring, log aggregation, alerting, load balancing, unit testing, caching ... stop me when you get the point.
When it comes to creating and consuming APIs, these strategies become significantly more important because APIs don't have the luxury of acting as isolated environments. Because APIs are a potential point of failure in every application that utilizes them, it is important to not only monitor the API performance of the components you create, but also the API performance of the components you consume.
API performance monitoring
Obviously, it's important to stay ahead of potential issues to ensure a fast and reliable product, but how exactly do we measure API performance to accomplish that?
At a high level, measuring API performance is no different than measuring any other kind of application performance. The main thing to focus on is request latency. Milliseconds can add up quickly, and if an API becomes a bottleneck in consumer applications, it will be dropped for something more performant. For both internal and external APIs, request latency is a relatively straightforward statistic that can be measured with any number of tools, both commercial and otherwise.
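At its simplest, measuring request latency means timing each call and aggregating the samples. Here is a minimal sketch in Python; `timed_call` is a hypothetical helper, and `time.sleep` stands in for a real HTTP request:

```python
import time
from statistics import mean

def timed_call(fn, samples, *args, **kwargs):
    """Run fn, record its latency in milliseconds, return its result."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    samples.append((time.perf_counter() - start) * 1000)
    return result

latencies = []
# Simulate three API requests (time.sleep stands in for a real call).
for _ in range(3):
    timed_call(time.sleep, latencies, 0.01)

print(f"avg latency: {mean(latencies):.1f} ms")
```

In practice, a monitoring platform would collect these samples for you, but the principle is the same: wrap every request in one consistent timing path so the numbers are comparable.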
Internal API performance strategies
Developing an internal API strategy is effectively the same as strategizing the development of any other product. When you have direct access to the codebase, you can apply the same suite of tools used to monitor the health of any application directly within the API architecture.
While log aggregation platforms and server monitoring tools can go a long way toward diagnosing systemic problems, many issues that greatly impact API performance can be boiled down to the efficiency of the code itself, and the services with which it integrates.
By taking advantage of popular monitoring platforms (I won't get into the pros and cons of competing services here, but tools like New Relic, Scout and Sumo Logic are all more than up to the task), you can narrow down problem areas in your infrastructure and codebase. Things like processing-heavy operations and database queries can eat up precious milliseconds, which can, in turn, negatively affect your API consumers (who already have their own latency issues).
Reducing the time these operations take is often a very project-specific solution, but techniques such as load balancing and data caching can go a long way toward limiting the number of unnecessary operations an application performs over short periods of time.
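As a concrete illustration of data caching, the sketch below memoizes an expensive lookup so repeated requests don't redo the work. The function name and the `time.sleep` stand-in for a slow database query are hypothetical:

```python
import time
from functools import lru_cache

@lru_cache(maxsize=256)
def expensive_lookup(user_id):
    """Hypothetical slow operation; sleep stands in for a DB query."""
    time.sleep(0.05)
    return {"id": user_id, "name": f"user-{user_id}"}

start = time.perf_counter()
expensive_lookup(42)            # first call pays the full cost
first_ms = (time.perf_counter() - start) * 1000

start = time.perf_counter()
expensive_lookup(42)            # second call is served from the cache
second_ms = (time.perf_counter() - start) * 1000

print(f"first: {first_ms:.1f} ms, cached: {second_ms:.1f} ms")
```

An in-process cache like this is the simplest case; shared caches such as Redis or memcached extend the same idea across servers.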
External API performance best practices
While internal API performance monitoring and testing are a pretty straightforward process, dealing with external APIs can be a little more difficult, especially when they are both mission-critical and of questionable reliability.
Third-party APIs are effectively black boxes; requests go in and responses come out with little transparency about what happens in between. Fortunately, the same platforms and tools mentioned above can often also be used to monitor the latency of requests to external APIs.
The exact implementation of monitoring will vary from platform to platform, but taking special care to consistently track API requests within your application can go a long way toward identifying problems early and often.
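One way to get that consistency is to route every third-party call through a single instrumented chokepoint. The sketch below is a hypothetical wrapper; a real implementation would forward the recorded latency to your monitoring platform rather than just logging it:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("outbound")

def call_external(name, fn, *args, **kwargs):
    """Invoke a third-party API call and log its latency, even on failure."""
    start = time.perf_counter()
    try:
        return fn(*args, **kwargs)
    finally:
        elapsed_ms = (time.perf_counter() - start) * 1000
        log.info("%s took %.1f ms", name, elapsed_ms)

# Simulated third-party call; a real one would use an HTTP client.
payload = call_external("payments.charge", lambda: {"status": "ok"})
```

Because the timing lives in `finally`, failed requests are measured too, which matters when you're trying to spot a flaky external dependency.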
While there is very little that can be done to improve the usability of an API that is down or slow, what can be done to help mitigate problems is to move API requests away from client-facing code. Properly doing this warrants an entire article of its own, but, at a high level, this process relies heavily on asynchronous processing and data management.
Being aware of the average latency of each API request is very helpful in determining how long to cache data, which data to store for the long term and even which crucial requests can be made just in time.
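One way to act on those latency numbers is a cache whose time-to-live reflects how expensive each endpoint is to call: responses from a slow external API are kept longer, so it is hit less often. This is a minimal sketch; the class and the exchange-rate example are hypothetical:

```python
import time

class TTLCache:
    """Minimal time-to-live cache keyed on monotonic expiry timestamps."""

    def __init__(self):
        self._store = {}  # key -> (expiry_timestamp, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]
        return None  # missing or expired

    def set(self, key, value, ttl_seconds):
        self._store[key] = (time.monotonic() + ttl_seconds, value)

cache = TTLCache()
# Suppose the exchange-rate API averages 800 ms per request: cache
# its responses for five minutes rather than refetching every time.
cache.set("usd_eur", 0.92, ttl_seconds=300)
print(cache.get("usd_eur"))
```

Fast, cheap endpoints can get short TTLs (or none at all), while slow or rate-limited ones get long TTLs and, where freshness allows, background refreshes.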
Ultimately, API performance monitoring is no different than any other type of application monitoring. What sets APIs apart is how much applications can come to depend on them. There are a lot of moving pieces in any application, and this dependence means that keeping an eye on how efficiently an API is performing is more important than ever.
Responding to these inefficiencies by doing things like optimizing database queries and caching the results of time-consuming operations can go a long way toward improving the overall health of any API.