Ben Franklin | 26 Aug 2020

If you’re investing a significant amount of money into developing a web application, you should expect it to load quickly. But what counts as quick?

Quick means different things depending on the scenario. If you have a brochureware site, you may expect it to load instantly. However, if you are using a CMS or processing a lot of data, then a longer load time is acceptable. I’m in no way saying you shouldn’t aim for the quickest of the quick, but be realistic about the parameters you are working with (and the budget you have).

There are many tools available that can measure the time it takes for a site to load, such as Google’s PageSpeed Insights and Lighthouse.

The important thing to remember is that it is never as simple as the baseline figure suggests. Many factors have a bearing on the speed of a site: the frontend framework, the servers, the user’s internet connection, caching and many others.

These tools are used to measure a myriad of different sites, so they take a somewhat blanket approach with the suggestions they give. They don’t know what the backend system is, so they treat every site the same. However, not all sites are the same.

Which brings us on to the CMS.

When using a traditional CMS, as opposed to a headless one, it is not always straightforward (or possible) to implement all the suggestions made by these tools. There is always going to be a fixed element to anything built in a proprietary system.

Here we look at some of the common things that come up in speed tests, and how realistic they are to fix when using a traditional CMS.

Imagery

Imagery plays a vital role in most websites, but it can be a big factor for poor performance. Some of the common suggestions found in these tools are:

Properly size images

This is very common and is generally due to an image having been uploaded at the incorrect size. For example, if a banner requires an image of 900px by 300px and you upload an image of 1800px by 600px, then not only does it have to be shrunk dynamically when the page is rendered, it is also likely to be around four times the file size (double the width and double the height means four times the pixels). This means it takes longer to transmit from the server, increasing the page load time. For a single image this may not have a massive impact, but if you have many images on a page the time soon adds up.

The solution is to upload images at the correct size to begin with. CMSs can help with this in two ways:

  1. Display guidance to the user showing the correct image dimensions required
  2. Automatically resize the image on upload (though this is hit and miss depending on the aspect ratio of the original image); a sketch of this approach follows below
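
As an illustration of the second option, here is a minimal sketch of a resize-on-upload step using the Node library sharp. The 900px by 300px banner size comes from the example above; the function name and wiring are hypothetical, as a real CMS would hook this into its own media pipeline.

  import sharp from "sharp";

  // Hypothetical target size for the banner slot described above.
  const BANNER_WIDTH = 900;
  const BANNER_HEIGHT = 300;

  // Resize an uploaded image buffer down to the slot's dimensions.
  // "cover" crops to fill the target box, which is why a mismatched
  // aspect ratio makes this hit and miss.
  async function resizeOnUpload(upload: Buffer): Promise<Buffer> {
    return sharp(upload)
      .resize(BANNER_WIDTH, BANNER_HEIGHT, { fit: "cover", withoutEnlargement: true })
      .toBuffer();
  }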

Efficiently encode images

Many of the images used on websites begin life in a different medium, such as print (e.g. brochures). Print-based media requires very high-quality images with a high DPI, which results in large file sizes. This level of detail is not required for the web, and the inherited large file sizes can severely hamper the performance of a website.

The solution is to compress (AKA minify) images, which means removing or grouping together parts of the image data to reduce the overall number of bits in the file. This concerns the data that encodes the image rather than its visible content. For example, a large area of red can be stored once as a logical group rather than pixel by pixel. More information on the types of image compression can be found at https://www.keycdn.com/support/what-is-image-compression

For the web, compression makes no noticeable difference to the naked eye, but it does reduce the page load time significantly. The image below shows the original version on the left and, on the right, a compressed version with an 81% smaller file size. It is hard to tell the difference.

In real terms, if the image above initially took 2 seconds to load, it would only take 0.38 seconds after compression. 

Compression can be done either before uploading to the CMS (via a tool such as https://www.jpegmini.com/) or it can be built into the CMS upload process. CMSs don’t normally come with automatic compression, so it is something that we normally develop (for example, we have written an extension for Kentico Xperience that compresses images when they are uploaded to the media library).
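
To give a feel for the second route, here is a minimal sketch of an upload hook that recompresses JPEGs, again using sharp. The quality setting and function name are illustrative rather than taken from any particular CMS or from our Kentico extension.

  import sharp from "sharp";

  // Re-encode an uploaded JPEG at a lower quality setting; mozjpeg
  // typically gives better compression at similar visual quality.
  async function compressOnUpload(upload: Buffer): Promise<Buffer> {
    return sharp(upload)
      .jpeg({ quality: 75, mozjpeg: true })
      .toBuffer();
  }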

Whichever method you choose, this speed test recommendation is something that is possible in any CMS platform.

Next-gen image formats

Related to compression, you will often see something along the lines of:

“Image formats like JPEG 2000, JPEG XR, and WebP often provide better compression than PNG or JPEG, which means faster downloads and less data consumption”

The basic premise is that the newer formats encode the image data differently, making them more efficient to load. So, a bit like compressing your images and then compressing them again. This all sounds great, but there is currently a rather large problem: browser support is limited (as can be seen at https://caniuse.com/#feat=webp and https://caniuse.com/#feat=jpeg2000). This means that in some browsers the images just won’t show at all.
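
If you need to know whether a visitor’s browser can display WebP, one common client-side trick is to ask a canvas element to encode itself as WebP and inspect what comes back. This is a sketch of that detection, not a guarantee across every browser.

  // Browsers that support WebP encode the canvas as image/webp;
  // those that don't fall back to a PNG data URL.
  function supportsWebP(): boolean {
    const canvas = document.createElement("canvas");
    canvas.width = canvas.height = 1;
    return canvas.toDataURL("image/webp").startsWith("data:image/webp");
  }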

The main workaround for this is to provide fallbacks for all images using the picture tag, for example:

<picture>
  <source srcset="image.webp" type="image/webp">
  <source srcset="image.jpg" type="image/jpeg">
  <img src="image.jpg" alt="Your image">
</picture>

The issue is that CMSs are not set up to work like this. In most situations you simply choose an image, and the CMS adds it to the page. To make this approach work would require: a) all instances of image tags to be extended to render using the picture tag, b) media libraries to allow multiple images to be uploaded for a single item, and c) images to be uploaded in different formats manually, or automatically converted on upload. This is a significant amount of work, and not something that can easily be recommended.
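
For completeness, the automatic conversion in step (c) is the most mechanical part. A sketch using sharp might look like the following, though it does nothing to solve points (a) and (b).

  import sharp from "sharp";

  // Produce a WebP sibling for an uploaded JPEG or PNG, so the
  // picture tag above has both sources to choose from.
  async function toWebP(upload: Buffer): Promise<Buffer> {
    return sharp(upload).webp({ quality: 75 }).toBuffer();
  }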

Until next-gen formats become standard across all browsers, this is a recommendation that is not a reality.

Response times

The time that various items take to respond can have a large impact on performance.

Reduce initial server response time

Before any of the frontend can be rendered, the browser must first receive a response from the host server which contains the data for the page. For example, the screenshot shows the initial waterfall timeline for the Quba website.

The first item, of type document, is the initial request and response to the server to get the page data. This blocks everything else from loading, so if it is slow it has a knock-on effect on everything else.
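
You can get a rough feel for the initial response time yourself. The sketch below times how long a server takes to start answering, assuming Node 18+ where fetch is built in; fetch resolves once the response headers arrive, which is close enough for a back-of-the-envelope check. The URL is a placeholder.

  // Rough check of how quickly a server starts responding.
  async function timeInitialResponse(url: string): Promise<void> {
    const start = performance.now();
    const response = await fetch(url); // resolves when headers arrive
    const elapsed = performance.now() - start;
    console.log(`${url} answered ${response.status} in ${elapsed.toFixed(0)} ms`);
  }

  timeInitialResponse("https://www.example.com/");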

There are many things that can cause a sluggish initial response. Some examples are as follows:

  1. Slow or poor-quality servers; the server specification is too low, or the site sits on a shared tenant that can’t cope with the amount of load.
  2. Server location; if the server is situated in the United States but you are in the UK, there will be a lag compared to a server located closer to your physical location. This is purely down to the distance the data must travel.
  3. High-volume traffic; if the site experiences spikes in traffic that it cannot cater for, the response time will increase.
  4. Code; if the site code hasn’t been thought through well enough, performance degrades. For example, an algorithm that loops through a large amount of data without being optimised can be time consuming.
  5. Out-of-date CMS; if the CMS being used has inefficient code or is not built on the latest frameworks, it will not be fully optimised.

These examples are obviously not all related to the CMS the site is built on (e.g. server location), and as such I will investigate them in more detail in a future article.

From a CMS perspective, it is important to host on a server that exceeds the required specification, and to have it configured for best performance. There are normally guides for this on the CMS vendors’ sites. It is also imperative to keep the CMS version up to date so it runs at its full potential. This has the added benefit of improving the security of the site, as you will have all available hotfixes.

The takeaway is that when a speed test tool suggests you reduce the server response time, remember that it can only “see” the frontend elements of a website. The inner workings of the site and its servers are hidden from the tool, so it can only ever give a simplistic view. Improvements can always be made, but there is no silver bullet.

Asset response time 

All external assets (i.e. files that are not part of the HTML) must be loaded into the page before it can be displayed in full. Among other things, this includes JavaScript, CSS, fonts and images. Logically, if the CSS hasn’t loaded, the page will not look correct until it does.

If the assets take a long time to load, then the site will also take a long time to load. There could be many reasons for them being slow (as above with servers), but there is a recognised way to mitigate these issues: a CDN.

Serving assets via a CDN (content delivery network) such as Azure or Cloudflare (https://www.cloudflare.com/learning/cdn/what-is-a-cdn/) allows files to be downloaded from a cached copy stored on the node nearest to the visitor. Instead of fetching the file from a different continent, it is served from whichever node is closest to your location.
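
On the site side, pointing assets at a CDN can be as simple as rewriting their URLs to the CDN’s hostname. A minimal sketch, with a placeholder hostname:

  // Placeholder CDN hostname; a real one comes from your CDN provider.
  const CDN_HOST = "https://cdn.example.com";

  // Rewrite a site-relative asset path to its CDN equivalent.
  function cdnUrl(assetPath: string): string {
    return new URL(assetPath, CDN_HOST).toString();
  }

  console.log(cdnUrl("/css/site.css")); // https://cdn.example.com/css/site.css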

The integration of a CDN with a CMS is normally seamless, and many CMSs already have built-in connectors.

JavaScript and CSS

Serving assets over a CDN is one part of the puzzle, but the content of the files themselves is important too.

Compression and minification 

Minifying and compressing JavaScript and CSS is very similar to compressing images, in that it aims to reduce the number of bits per file. Minification rewrites the code in the file, for example by removing whitespace, culling comments and renaming variables to shorter versions, whereas compression (such as gzip or Brotli) encodes the file itself so that fewer bytes travel over the wire.

Developers should do this as part of any good build, and many CMSs now have it as a built-in function too.
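
As a build-step example, here is a minimal sketch using the terser package to minify a JavaScript file. File names are illustrative, and the script assumes an ES module so top-level await works.

  import { minify } from "terser";
  import { readFile, writeFile } from "node:fs/promises";

  // Strip whitespace and comments, and shorten variable names.
  const source = await readFile("site.js", "utf8");
  const result = await minify(source, { compress: true, mangle: true });
  await writeFile("site.min.js", result.code ?? "");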

Render-blocking

Reducing the file size of an asset is one thing, but you still must wait until that asset has loaded before the page is fully complete. What if we could load part of the page to view and load the rest later? In some ways, let the page walk before it runs.

This is the concept behind render-blocking scripts. For example, a script in the <head> of the website is classed as render-blocking, as it is loaded before the rest of the page. More examples can be found at https://web.dev/render-blocking-resources/. The idea is to load only the JavaScript required for the page, in the position on the page where it is required. So, for example, if some JavaScript controls a form, load it just above the form HTML.
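
One way to approximate this in the browser is to inject a script only when the element that needs it is actually on the page. A sketch, with a hypothetical selector and file name:

  // Load a script only if the element it supports exists on this page.
  // Dynamically injected scripts do not block rendering.
  function loadScriptFor(selector: string, src: string): void {
    if (document.querySelector(selector)) {
      const script = document.createElement("script");
      script.src = src;
      document.body.appendChild(script);
    }
  }

  loadScriptFor("form#contact", "/js/form-validation.js");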

This seems logical; however, CMSs don’t make this easy. There are usually two issues when it comes to a CMS:

  1. CMSs use their own assets, which are usually included in the <head> and can’t be removed or edited.
  2. The purpose of a CMS is to allow editors to create pages as they see fit, meaning in some cases they can put anything anywhere. This makes it nigh on impossible to know which asset is required on which page, as there are endless possibilities.

There are ways of partially facilitating this with a CMS, but it will not always be possible to achieve it fully.

Summary

Page speed tests are powerful tools and can be used to make your sites quicker. They also play a big part in helping your site rank better in search engines such as Google.

However, their recommendations should be taken in the context of your overall system, particularly if you are using a CMS.

If you would like to discuss the results of a speed test, drop me an email and I'd be happy to go through it with you.