SFEIR Share presentation: When Pre-Loading Beats Streaming - The Caching Advantage

I presented my article When Pre-Loading Beats Streaming: The Caching Advantage at the event SFEIR Share, in French.

Here is a translated version of the presentation. You can also check out the slides and the script of the talk, in French, on the page: Présentation SFEIR Share: Quand le pré-chargement l’emporte sur le streaming - l’avantage du cache

Introduction

Hello everyone. Today, I am going to present the content of my article: “When Pre-Loading Beats Streaming: The Caching Advantage”.

I will compare two different ways to deliver web pages: streaming, which is increasingly supported by web frameworks, and pre-loading, which gets less attention. I show that both optimizations can provide similar performance, and I compare the two approaches in depth, showing in which situations each is the better choice.

In fact, one goal of my article is to highlight the limitations of streaming and to defend pre-loading as a simple and efficient optimization that deserves deeper integration in web frameworks.


Simulation

To be as fair as possible in my comparison, I created and used a simulator to generate the page loading timelines of different scenarios.

The diagrams that I’ll be showing on these slides are very simplified and hand-drawn. I invite you to check the article for the simulation-generated diagrams, which offer more fine-grained details, distinguishing:

The article also links to a simulation playground which lets you try out different parameters like network bandwidth, file sizes, processing time and more.


Table Of Contents


The page to optimize

First, let’s look at the problem statement: We have a web page which contains two types of content:

- A semi-static part, shared across users and requests, which makes it cacheable.
- A dynamic part, generated per request, which cannot be shared.

We want to load this page as fast and efficiently as possible.


Let’s first look at a very naive way to deliver our page: the server returns an empty HTML document that loads a script, which, once executed on the client, loads the page’s semi-static and dynamic parts.

I’m using this example solely as a worst-case scenario. It clearly showcases the main performance problems: the latency induced by the network round trips between the client and the server, and the need to execute a script on the client before the server even starts generating the page content.
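As a rough illustration, here is a minimal sketch of what this naive shell delivery could look like. All names and URLs below are hypothetical, and the client loader is reduced to a pure function so the extra round trips are easy to see:

```typescript
// The server sends an HTML shell with no content; a client script must then
// fetch both the semi-static and dynamic parts, adding round trips.

// Shell returned by the server: no content, only a script reference.
function renderShell(): string {
  return `<!doctype html>
<html>
  <head><script src="/loader.js" defer></script></head>
  <body><div id="app"></div></body>
</html>`;
}

// What /loader.js would do on the client once it finally executes.
async function loadPage(
  fetchFn: (url: string) => Promise<string>
): Promise<string> {
  // These requests can only start after round trip 1 (the shell)
  // has completed and the script has been downloaded and executed.
  const semiStatic = await fetchFn("/semi-static.html"); // round trip 2
  const dynamic = await fetchFn("/dynamic.json");        // round trip 3
  return semiStatic + dynamic;
}
```

The point of the sketch is the sequencing: nothing useful reaches the screen before the second and third round trips complete.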


Full-Page Streaming

Let’s now see how to load this page with streaming:
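In code, full-page streaming can be modeled as a response made of HTML chunks that are flushed as soon as they are ready. The sketch below uses an async generator as a simplified stand-in for what streaming frameworks (for example React’s streaming server rendering) do internally; the markup is illustrative:

```typescript
// Hedged sketch of full-page streaming: the server flushes the semi-static
// part of the HTML immediately, then appends the dynamic part once it has
// been computed, all within a single response.

async function* streamPage(
  renderDynamic: () => Promise<string>
): AsyncGenerator<string> {
  // First chunk: semi-static content, sent before any server-side work.
  yield "<!doctype html><html><body><header>Semi-static part</header>";
  // The browser can start parsing and painting the chunk above
  // while the server computes the dynamic part below.
  yield `<main>${await renderDynamic()}</main>`;
  yield "</body></html>";
}
```

The key property is that the first chunk leaves the server before `renderDynamic` resolves, so the client starts receiving content earlier than with a buffered response.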


Split-Page with Pre-loading

Now let’s see how to load this page in an alternative way: by splitting the semi-static and dynamic parts into two different resources.
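One way to realize this split, sketched below, is to serve the semi-static part as its own document that immediately hints the browser to pre-load the dynamic resource, before any client script runs. The URLs and file names are illustrative:

```typescript
// Sketch: the semi-static part is its own cacheable document. A preload
// hint in its head lets the browser start fetching the dynamic resource
// as soon as it parses the document, in parallel with the rendering script.

function renderSemiStaticPage(dynamicUrl: string): string {
  return `<!doctype html>
<html>
  <head>
    <link rel="preload" href="${dynamicUrl}" as="fetch" crossorigin>
    <script src="/render.js" defer></script>
  </head>
  <body>
    <header>Semi-static part</header>
    <main id="dynamic"><!-- filled by /render.js --></main>
  </body>
</html>`;
}
```

Because the semi-static document never changes per request, it can be cached aggressively, while the pre-load hint keeps the dynamic fetch from waiting on script execution.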


Comparison (1)

If we compare the pre-loading and the streaming approaches, it looks like streaming is the winner here because it starts loading the page’s dynamic content earlier.

Now let’s look at what happens when we add caching.


Caching

Now let’s add two layers of cache:

- An edge cache, shared between all users.
- The browser cache, private to each user.
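In HTTP terms, the split-page approach can exploit these two layers by giving each resource its own `Cache-Control` policy. The sketch below is one plausible configuration, with illustrative values:

```typescript
// Sketch of per-resource cache policies: the semi-static document is
// cacheable both at the edge (s-maxage, for shared caches) and in the
// browser (max-age), while the dynamic resource is never cached.

function cacheHeaders(
  resource: "semi-static" | "dynamic"
): Record<string, string> {
  if (resource === "semi-static") {
    // Browsers revalidate after 60 s; the edge may keep it for an hour.
    return { "Cache-Control": "public, max-age=60, s-maxage=3600" };
  }
  // The dynamic part is request-specific: keep it out of all caches.
  return { "Cache-Control": "private, no-store" };
}
```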


Streaming with caching

Let’s look at the loading timeline for the streamed full-page version:


Pre-loading with caching

Now let’s look at the loading timeline for the split-page with pre-loading:


Comparison (2)

If we compare both approaches in the presence of the edge cache, we can see that the split-page with pre-loading version beats the streaming version in terms of First Paint and final load time. The client receives the semi-static content and the static script earlier, clearing the way for faster processing of the dynamic content.


Edge-Side Page Assembly (For Better Caching)

We saw that the streaming approach could not take full advantage of the edge cache because both semi-static and dynamic page parts are bundled as a single resource.

This problem can be solved by doing some computation at the edge. Among the possible solutions, I cite:
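Whatever the exact mechanism, the idea can be sketched as an edge function that serves the cached semi-static template and splices in the dynamic part fetched from the origin. All names and URLs below are assumptions, not a specific product’s API:

```typescript
// One possible edge-assembly sketch: the semi-static template comes from
// the edge cache (no origin round trip), only the dynamic fragment does,
// and both lookups start in parallel.

async function assembleAtEdge(
  getCached: (url: string) => Promise<string>,   // edge cache lookup
  fetchOrigin: (url: string) => Promise<string>  // origin round trip
): Promise<string> {
  const [template, dynamic] = await Promise.all([
    getCached("/semi-static-template.html"),
    fetchOrigin("/dynamic-fragment.html"),
  ]);
  // The template marks where the dynamic fragment belongs.
  return template.replace("<!--dynamic-->", dynamic);
}
```

From the client’s point of view, the response is a single streamed page again, but the semi-static part no longer has to travel from the origin.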


Streaming with Edge-side page assembly

Let’s see the timeline of the page loading when the page is assembled on the edge.


Comparison (Edge-Side Page Assembly)

Now, thanks to edge-side page assembly, the full-page streaming approach recovers its edge over the pre-loading version, because it is as efficient at loading the semi-static page part, and the dynamic page part starts loading earlier.


Returning users with a fresh cache

Streaming with Edge-side page assembly, for returning users

Now let’s examine what happens when a returning user revisits our streamed page with a fresh cache.


Pre-loading with caching, for returning users

When a user revisits our pre-loaded page with a fresh cache, things are a bit different:


Comparison (Returning users)

For returning users with a fresh cache, pre-loading gives an earlier First Paint than streaming, and it loads the page’s dynamic part just as fast.


Takeaways

What we can conclude from all of this is that both techniques, full-page streaming and split-page with pre-loading, improve page loading performance. Each approach wins over the other in different contexts:


Support in mainstream frameworks