Potential technical improvements to the search page
Improving rendering time of search results:
At the moment, we generate as many HTML elements as we can until a timer reaches 1ms. Once the budget is spent, we stop to let the browser do layout and paint, and continue on the next frame. The 1ms budget was chosen arbitrarily; it seems to work well in all cases, but also feels a bit slow on more powerful machines. Moreover, since browsers now limit the precision of timers to 2ms, a measured elapsed time can be off by up to 2ms, which is more than our entire budget, so our timer is kind of busted.
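For concreteness, a sketch of what the current loop roughly looks like. This is not the actual code: `renderOne` is a stand-in for the real element builder, and the injectable `now`/`schedule` parameters are added so the loop can be exercised outside a browser.

```javascript
// Sketch of the current chunked rendering loop: build elements until a ~1ms
// budget elapses, yield so the browser can do layout and paint, then resume
// on the next frame. `now` and `schedule` default to the browser APIs.
function renderChunked(
  results,
  renderOne,                                  // builds the DOM for one result
  budgetMs = 1,
  now = () => performance.now(),
  schedule = cb => requestAnimationFrame(cb),
) {
  let i = 0;
  function step() {
    const start = now();
    // With timers rounded to 2ms, (now() - start) can be off by the whole
    // budget, which is the problem described above.
    while (i < results.length && now() - start < budgetMs) {
      renderOne(results[i++]);
    }
    if (i < results.length) schedule(step); // more to do: continue next frame
  }
  schedule(step);
}
```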
I've been thinking of ways to get around the 2ms precision issue and, more generally, to scale rendering speed to each machine's capabilities. First, let's assume that our smallest rendering unit is one marker; rendering a single marker shouldn't take much time, even on slow machines. We can also say that creating each episode wrapper "costs" one rendering unit.
We can then employ something similar to TCP congestion control to increase the number of rendering units per frame until we start dropping frames. The general idea: double the number of rendering units per frame until we hit some threshold value, then increase by a constant amount per frame (so it grows exponentially at first, then linearly). When we skip a frame, we cut the number of rendering units in half and resume the linear increase from there. I need to read more about it; as far as I can tell, TCP never actually settles on a value — additive increase and multiplicative decrease produce a sawtooth that keeps probing upwards and dropping back — so we may want some hysteresis of our own (e.g. stop increasing once we're near the last value that caused a drop).
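The window logic above can be sketched as a small pure function. The slow-start threshold of 64 units and the +1 linear increment are placeholder values, not tuned choices:

```javascript
// AIMD-style controller for the per-frame rendering budget, loosely modeled
// on TCP congestion control: exponential growth below the threshold ("slow
// start"), linear growth above it, and a halving whenever a frame is dropped.
const SLOW_START_THRESHOLD = 64; // placeholder: where doubling stops
const MIN_UNITS = 1;

function nextBudget(current, droppedFrame, threshold = SLOW_START_THRESHOLD) {
  if (droppedFrame) {
    // Multiplicative decrease: cut the window in half on a dropped frame.
    return Math.max(MIN_UNITS, Math.floor(current / 2));
  }
  if (current < threshold) {
    return current * 2; // exponential phase
  }
  return current + 1;   // linear phase
}
```

Each rAF tick would feed the dropped-frame flag from the frame-time measurement into `nextBudget` and render that many units on the next frame.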
There's also the problem of detecting dropped frames in the browser. One way would be to use requestAnimationFrame, as we do now, and measure the time between frames. Since those numbers are rounded to the nearest 2ms, and because I don't trust browsers to run my code on time, we'll need to allow for more than 16.6ms per frame; checking for a delta under ~21ms (16.6ms plus the 4ms worst-case rounding error*) should be sufficient for 60fps.
However, I'm not sure that all browsers run at 60fps; mobile browsers might not attempt to run that fast. I've definitely noticed that browsers seem to scroll faster than they call requestAnimationFrame. That could be a framerate limit, or an arbitrary decision not to call rAF too often. If there is a framerate limit lower than 60fps, we'll need to detect the browser's framerate rather than assume it's 60. A side benefit is that we'd also support browsers running at higher framerates (mine runs at 144fps). In theory, we can detect the framerate by running a few "empty" rAFs and measuring the time between them. Unfortunately, I don't know when would be a good time to run that test: on page load the browser might still be busy and the framerate lower than usual, and on first search the measurement might take too long (3 frames at 30fps is 100ms, which is noticeable).
Maybe we can do something more adaptive. We keep a "target delta time", initially set to 37ms (33.3ms + 4ms for the browser's rounding*), and measure the delta on every frame. If the measured delta + 4ms is smaller than our target, that becomes our new target. If the delta is significantly larger than our target (a drop from 60fps to 30fps would be a jump from 16.6ms to 33.3ms), we assume we dropped a frame.
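The adaptive scheme could be sketched like this. The 4ms slack comes from the rounding analysis in the footnote; the 1.5× drop factor is an assumption, picked so a 60→30fps jump registers as a drop without false positives from rounding noise:

```javascript
// Sketch of the adaptive frame monitor: keep a "target delta time", lower it
// whenever we observe a faster frame, and flag a dropped frame whenever a
// delta is much larger than the target.
const ROUNDING_SLACK_MS = 4; // worst-case error from 2ms timer rounding
const DROP_FACTOR = 1.5;     // assumption: 60fps -> 30fps doubles the delta

function createFrameMonitor(initialTargetMs = 37) { // 33.3ms + 4ms slack
  let targetMs = initialTargetMs;
  return function observe(deltaMs) {
    // A faster-than-ever frame proves the display can sustain that rate.
    if (deltaMs + ROUNDING_SLACK_MS < targetMs) {
      targetMs = deltaMs + ROUNDING_SLACK_MS;
    }
    // A delta well past the target likely means we skipped a frame.
    const dropped = deltaMs > targetMs * DROP_FACTOR;
    return { dropped, targetMs };
  };
}
```

The caller would invoke `observe` with the difference between consecutive rAF timestamps, and feed the `dropped` flag into the congestion-control logic above.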
* Each time sample is rounded to the nearest 2ms, which in the worst case can look like this:

      |----------------|
     A------------------A
       B--------------B

The range inside the pipes is the actual time between samples. The range inside the A's is what we get if the first sample was rounded down by 1ms and the second sample was rounded up by 1ms (overall 2ms longer). The range inside the B's is what we get if the first sample was rounded up by 1ms and the second sample was rounded down by 1ms (overall 2ms shorter). The total difference between the A range and the B range is 4ms.