How a data cache can solve your JavaScript performance problems
There are many barriers to improved web application performance, but using a data cache effectively can break down many of those performance barriers.
The shorter the distance application data has to travel to drive a web application, the better the user experience will be. Developers have a wide array of places to use a data cache, but each comes with tradeoffs in terms of performance and validity.
Data cache invalidation is one of the hardest problems in computer science, said Yoav Weiss, principal architect at Akamai Technologies, based in Cambridge, Mass. Furthermore, data cache semantics in HTTP can be difficult, which means most content on the web today is not properly cached.
"When you are making decisions about storing something in cache, you have to guess if it is something that would be useful to keep around," Weiss explained. "You cannot keep everything around, and putting resources into a data cache also means throwing something out."
A good data cache strategy has as much to do with knowing what to evict as with what to keep. The aim is to hold fresh, frequently needed resources as close to the user as possible, which means evicting the resources least likely to be requested again.
There are a wide variety of data-caching mechanisms spanning computer architecture. CPUs include L1, L2 and L3 caches that are able to deliver responses in 0.5 to 30 nanoseconds. RAM caches have about 100-nanosecond response times, while solid-state disk response times are on the order of 150 microseconds. Network response times are on the order of 150 milliseconds, or about a thousand times slower than SSDs, and a million times slower than RAM. Ideally, developers want to keep as much data as possible in RAM or local disk.
The journey of a request object
When a request object is created in a browser's rendering engine, its sole purpose is to find a resource so the engine can render the page. This could be an image, a script or any other type of external resource. The request could be created by user navigation or by explicit JavaScript APIs. Each request can differ in type, credential settings and the URL it is sent to.
The first place to look is the MemoryCache, because its data is stored in RAM as part of the browser's rendering engine. This cache is destroyed once the user navigates to a different webpage. It holds resources discovered by the preload scanner, as well as resources referenced by multiple tags pointing at the same URL. There are special rules for MemoryCache designed to reduce security vulnerabilities. For example, a script resource cannot be returned for an image request.
MemoryCache also does not support HTTP caching semantics. Even though MemoryCache provides the best performance, it can be challenging to use the same code across browsers. "In general, the specification community should do a better job to make sure the implementations between browsers match," Weiss said.
Use service workers to keep other data on hand
If a request is serviced by MemoryCache, the response will not appear in developer tools or resource timing measurements. If a request is not found in MemoryCache, it continues on to a service worker -- a JavaScript proxy for handling requests and responses built into most modern browsers. It is not yet available in Safari, but it's on the timeline.
Service workers can be unpredictable. They can generate their own responses, and their response mechanism is not baked into the browser. "There are no caching semantics baked into service workers, unless the developer adds them in," Weiss said. If a service worker is not able to create a response, it uses the fetch API to look further up the stack.
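Because service workers carry no caching semantics of their own, the developer supplies the policy. The sketch below illustrates a common cache-first policy, simplified to a plain function so it runs anywhere; the names handleFetch, cache and fetchFn are illustrative stand-ins for the Cache Storage API and fetch().

```javascript
// A minimal cache-first sketch of the logic a service worker's fetch
// handler applies. The policy is entirely developer-supplied.
async function handleFetch(request, cache, fetchFn) {
  const cached = cache.get(request.url);
  if (cached) return cached;                // answer from the worker's own cache
  const response = await fetchFn(request);  // otherwise continue up the stack
  cache.set(request.url, response);         // keep a copy for next time
  return response;
}

// In a real worker, the same policy hangs off the fetch event:
// self.addEventListener('fetch', e =>
//   e.respondWith(caches.match(e.request).then(hit => hit || fetch(e.request))));
```

Note that when the cache misses, the call to fetch() is exactly the hand-off Weiss describes: the request continues up the stack toward the HTTP cache and, eventually, the network.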
At the network layer, the application then checks the HTTP cache, which uses very strict caching semantics. HTTP cache is also persistent, which allows it to save resources to disk for later use. However, it is considerably slower than MemoryCache, which operates at RAM speeds.
If data is not found in the HTTP cache, the browser makes one last check of the Push Cache available as part of HTTP/2. But this is more complicated, since different browsers have different rules for managing Push Cache. "This cache is also under-specified, and we should do a better job of making sure the different implementations are compatible," Weiss said.
Network caches rely on less predictable networks
Once the client has made an exhaustive search of all these caches, the request is sent out over the network. "The network can be unpredictable. Data can get stuck in queues, and there can be data corruption of packets. In addition, latency can vary widely according to the type of network," Weiss said.
Data loss and latency can combine to create jitter. For example, Wi-Fi networks tend to have lower latency than 4G networks, but can suffer packet loss resulting in slower app performance. Latency can go up significantly if the client has to fetch data from the other side of the country or another continent.
This is where having a good content delivery network (CDN) strategy can improve app performance. CDNs use edge servers to reduce the distance and number of router hops required for retrieving data. There are a number of challenges with keeping the content in the edge servers fresh, particularly if the data involves retrieving database updates and querying multiple cloud applications. To address this challenge, CDNs often use reverse proxies that determine whether they can serve existing content from a location closer to the user, or whether a new request needs to be made to the origin server.
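The reverse proxy's per-request choice can be sketched as a three-way decision. This is a simplified illustration -- the field name expiresAt is hypothetical, and real proxies also weigh Vary headers, TTL overrides and origin health:

```javascript
// Sketch: the decision a CDN edge proxy makes for each incoming request.
function edgeDecision(cachedEntry, now = Date.now()) {
  if (!cachedEntry) return 'fetch-from-origin';           // nothing cached at the edge yet
  if (now < cachedEntry.expiresAt) return 'serve-cached'; // still fresh: no origin round trip
  return 'revalidate-with-origin';                        // stale: confirm with the origin
}

console.log(edgeDecision(null));                               // 'fetch-from-origin'
console.log(edgeDecision({ expiresAt: Date.now() + 60_000 })); // 'serve-cached'
```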
Identify the optimal validation strategy
HTTP/1.1 allows developers to generate specially crafted URLs -- for example, versioned URLs that change whenever the content does -- which make it easier to determine whether cached data is valid or stale. Developers can also set the freshness of cached data, which starts a timer indicating when it needs to be revalidated. A max-age directive of 3,600 indicates the cached data can be treated as fresh for the next hour.
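Both techniques are simple to express in code. In this sketch, the helper names are illustrative: a content hash in the URL forces a cache miss whenever the resource changes, and a max-age-style timer decides when cached data goes stale.

```javascript
// Versioned ("specially crafted") URL: new content produces a new hash,
// which produces a new URL, so stale cached copies are simply never asked for.
function versionedUrl(path, contentHash) {
  return `${path}?v=${contentHash}`;
}

// Freshness timer: fresh while the response's age is under max-age.
function isFresh(storedAtMs, maxAgeSeconds, now = Date.now()) {
  return now - storedAtMs < maxAgeSeconds * 1000;
}

console.log(versionedUrl('/main.js', 'a1b2c3')); // '/main.js?v=a1b2c3'
const storedAt = Date.now() - 30 * 60 * 1000;    // cached 30 minutes ago
console.log(isFresh(storedAt, 3600));            // true: still inside the 1-hour window
```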
Hard coding freshness can create problems, too. "If we ship a resource and a minute later find a bug, users will see that bug for almost another hour before the browser and cache revalidate the resource and retrieve a fixed one," Weiss explained.
Once the resource's freshness lifetime runs out, the browser does not necessarily have to redownload the resource; it just has to revalidate it. "Validators let us check the cache with a lower cost, because we don't have to redownload the payload. But it still takes a full round trip to get the response back into the cache," Weiss said.
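A typical validator is an ETag sent back via If-None-Match. The sketch below models that round trip with illustrative names (buildRevalidationRequest, applyRevalidationResponse); a 304 answer refreshes the cached entry without redownloading its body.

```javascript
// Build the conditional request: echo the stored validator back to the server.
function buildRevalidationRequest(url, cachedEntry) {
  return { url, headers: { 'If-None-Match': cachedEntry.etag } };
}

// Interpret the server's answer.
function applyRevalidationResponse(cachedEntry, response) {
  if (response.status === 304) {
    // Validator matched: keep the cached body, just restart the freshness timer.
    return { ...cachedEntry, cachedAt: Date.now() };
  }
  // Resource changed: store the new body and its new validator.
  return { body: response.body, etag: response.headers.etag, cachedAt: Date.now() };
}
```

Either way the client pays the round trip Weiss mentions, but the 304 path skips the payload transfer entirely.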
There are also privacy concerns that need to be addressed with caching. Some resources are perfectly fine to store in the HTTP cache. The Cache-Control header allows developers to specify whether data can be stored on public caches or private caches.
There are a number of myths about how Cache-Control works. For example, specifying "Cache-Control: must-revalidate" does not force revalidation until after the freshness timer expires, and specifying "Cache-Control: no-cache" will still store the response in the browser cache -- it only requires revalidation before each reuse. As a result, Weiss recommended developers read the documentation to confirm the Cache-Control mechanism produces the desired behavior.
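The directive behaviors above can be summed up in a small, simplified sketch (a minimal parser for illustration only; the full semantics live in the HTTP caching specification):

```javascript
// What common Cache-Control directives actually permit (simplified).
function cachePolicy(cacheControl) {
  const directives = new Set(
    cacheControl.split(',').map((d) => d.trim().split('=')[0])
  );
  return {
    // no-store is the directive that forbids storing the response at all
    mayStore: !directives.has('no-store'),
    // no-cache still stores, but every reuse must revalidate first
    mustRevalidateEveryUse: directives.has('no-cache'),
    // must-revalidate only kicks in once the response turns stale
    mustRevalidateWhenStale: directives.has('must-revalidate'),
  };
}

console.log(cachePolicy('no-cache'));
// stored in the browser cache, but revalidated before every reuse
```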
At the end of the day, a good caching strategy can make a tremendous difference in a web app's performance. Weiss recommended developers investigate service workers, which can enable new patterns for caching on the web and in the browser.