There are a couple of ways to cancel a fetch call. The new `AbortController` API is designed specifically for this purpose. Another, albeit less effective, way is to race a timeout promise against the request with `Promise.race`; less effective because it doesn't actually cancel the in-flight network request, it only ignores the result. The latter was the popular way to do cancellation before `AbortController` existed. There are libraries that help with this, but some of their implementations exhibit subtle memory leaks.
A naive implementation
A simple, buggy implementation might look like this:
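Here's a minimal sketch of that approach, assuming a `node-fetch`-style `fetch`; the 10-second timeout and the `fetchWithTimeout` name are just illustrative:

```js
const fetch = require('node-fetch');

// Naive timeout: race the request against a promise that rejects after 10 s.
// The timer is never cleared, so everything closed over by the executor
// stays alive until the timeout fires, even if the fetch settles early.
function fetchWithTimeout(url, options) {
  const timeout = new Promise((resolve, reject) => {
    setTimeout(() => {
      reject(new Error('Request timed out after 10 seconds'));
    }, 10 * 1000);
  });

  return Promise.race([fetch(url, options), timeout]);
}
```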
The first issue you might spot here is that the timeout is not cleared in either of the callbacks. This means that for the full 10 seconds, everything in that function that gets closed over by the `Promise` constructor's executor stays in memory until the timer fires. If there are lots of requests in a short period, and the responses are large enough, this might even crash the Node process as it hits the heap limit (under 2 GB by default, at least as of Node 11).
A naive implementation, improved
Clearing the timeout in the fetch handlers will avoid the guaranteed bloat:
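Still only a sketch under the same assumptions, but with the timer ID captured so both fetch handlers can clear it as soon as the request settles:

```js
function fetchWithTimeout(url, options) {
  let timeoutId;
  const timeout = new Promise((resolve, reject) => {
    timeoutId = setTimeout(() => {
      reject(new Error('Request timed out after 10 seconds'));
    }, 10 * 1000);
  });

  // Clearing the timer as soon as the request settles lets the closure
  // (and the response it references) be collected right away.
  const request = fetch(url, options).then(
    (response) => {
      clearTimeout(timeoutId);
      return response;
    },
    (error) => {
      clearTimeout(timeoutId);
      throw error;
    }
  );

  return Promise.race([request, timeout]);
}
```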
Better version
A way to avoid closing over the surrounding context is to separate out promise creation into its own function. But it may not be easy, or even possible, to pass the timeout ID back to the caller, which is needed to clear the timer entry. A JavaScript class (or the equivalent pre-class constructor function) can solve this problem cleanly.
First up, the class that encapsulates the timeout:
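A sketch of what such a class could look like; the `Timeout` name and the default delay are assumptions:

```js
// Owns the timer: exposes the timeout promise and a clear() method,
// so the timeout ID never has to leave the class.
class Timeout {
  constructor(ms = 10 * 1000) {
    this.id = null;
    this.promise = new Promise((resolve, reject) => {
      this.id = setTimeout(() => {
        reject(new Error(`Request timed out after ${ms} ms`));
      }, ms);
    });
  }

  clear() {
    clearTimeout(this.id);
  }
}
```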
The usage of this class would look something like this:
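Roughly like this, clearing the timer as soon as the request settles (using `Promise.prototype.finally`, available since Node 10):

```js
function fetchWithTimeout(url, options) {
  const timeout = new Timeout(10 * 1000);

  // Whatever happens to the request, release the timer entry.
  const request = fetch(url, options).finally(() => timeout.clear());

  return Promise.race([request, timeout.promise]);
}
```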
This has the benefit of hiding the timeout ID from the actual call site of `fetch`, and the code flow reads more clearly than in the earlier inline anonymous-function version.
I’ve profiled the memory retained between GCs for the first and the last implementation, and here’s a summary:
Before:
After:
The methodology used:
- On a cold start: trigger a GC and take the first snapshot
- Make 100 requests per second for 10 seconds, with 100 connections, using the wrk2 tool (an example invocation is sketched after this list)
- Take another snapshot
- Trigger a GC
- Take the third snapshot
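For reference, a wrk2 command matching those numbers might look like this; the thread count, host, and port are assumptions:

```sh
wrk -t2 -c100 -d10s -R100 http://localhost:3000/
```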
Whatever objects get allocated between snapshots 1 and 2 represent the normal allocations as requests are handled, which should ideally be reclaimed after the requests stop (the normal operation of the garbage collector). When objects are retained across multiple garbage collection runs, there is increased pressure on the garbage collector, which, incidentally, also handles memory allocation when the application needs it. If there is a hard memory cap, the program simply crashes once it’s reached. Object retention doesn’t necessarily mean there’s a memory leak, but the effect in both cases can end up being the same.