The Definitive Guide To Website Speed Optimisation
Website optimisation needn’t require enormous technical expertise, but it’ll yield measurable improvements in traffic and conversions. Whatever the purpose of your website, whether it’s the frontend of a retail giant or a niche blog, responsiveness will play a crucial role in determining how much traffic you receive. In this guide, we’ll look at how to make your site load quicker, and why you need to do so.
Before we delve into the best optimisation practices and tools, let’s first consider why speed matters. Three reasons stand out as particularly noteworthy.
Each new visitor to your site will arrive with certain expectations. You’ll need to fulfil those expectations, or risk bouncing them straight back to the search-engine results page. Among the most important of those expectations is a timely loading experience.
The longer a site takes to load, the more likely the user is to navigate away. And the difference between success and failure can be measured in milliseconds. Harry Shum, the Executive Vice President of Microsoft’s Technology and Research division and architect of the company’s search engine Bing, famously said that “a difference of 0.25 seconds, either slower or faster, is close to the magic number for competitive advantage on the Web.”
The science would appear to bear out Mr. Shum’s sentiment. The human brain holds visual information in sensory memory for only a few hundred milliseconds, so if your website loads quickly enough, it’ll appear instantaneous. Once the load time stretches beyond a second, users will start to notice the lag.
According to a study by DoubleClick (a Google subsidiary), 53% of site visits are abandoned if the content takes longer than three seconds to materialise. An earlier study by Akamai found that 47% of users expect results in less than two seconds, and 40% will abandon after three. Now, while it’s difficult to arrive at an exact figure, and there are some problems with these studies worth discussing, the conclusion is clear and inescapable: speed matters.
Most online businesses draw in a sizeable portion of their new business through Google. Consequently, the art of Search Engine Optimisation (SEO) has become massively important in determining success or failure. Google doesn’t want to provide its users with slow-loading content, and its algorithms will punish poorly-optimised sites accordingly.
It’s worth mentioning that poor performance leaves an impression that lasts long after the browsing session ends. Customers who are dissatisfied with the performance of an online shop are far less likely to return. On the other hand, by providing a consistently responsive user experience, webmasters can foster a positive impression, and thereby secure the repeat traffic required to grow an online business.
Getting information from your servers and onto your client’s device requires an array of different components. If just one of these is performing badly, then the entire user experience will suffer. If the wheels on a car are too skinny to grip the surface of the road, then there’s little point in installing a new engine. As such, optimisation attempts should focus on identifying and addressing weaknesses. Performance issues tend to come in three varieties. Let’s look at each of them.
Speed problems that occur on a user’s machine are referred to as ‘client-side’. Even if your servers can deliver the entire website to the client, there’s still a way to go: the browser software will need to display the site in a way that’s comprehensible to the user. If the site isn’t coded efficiently, the browser will not be able to do this quickly, with the result that the browsing experience is slower for the end user.
Users have an unprecedented range of browsers to choose from, and an unprecedented range of devices on which to run those browsers. Thus successful web pages will need to load as quickly on Chrome for Android as they do on Safari for iOS.
The server is the machine (or collection of machines) on which your website is stored. It’ll receive and respond to requests from client machines. The more efficiently it’s able to respond to those requests, the better. The more requests it has to deal with, the more ‘bandwidth’ (the amount of data it can deal with in a given timeframe) it’ll require.
Between the client and the server is what’s known as the pipeline. It’s where a lot of bottlenecks can form. Even if both the client machine and the server are lightning-quick, if there isn’t enough bandwidth to deliver the content, then the site will perform poorly.
Before we can improve a site’s performance, we need to figure out where we’re starting from. After all, we can’t solve a problem if we don’t know where it lies! This means clocking your page-loading times in various ways. Ideally, we need to identify whether any slowdown originates with the client, with the server, or in the pipeline between the two.
There exist several free online tools to help you analyse your website’s performance, along with plugins that make life easier for WordPress users.
PageSpeed Insights is Google’s free tool. It’s a great starting point, and will provide a score out of a hundred for optimisation alongside your speed statistics. Beneath that, you’ll get an easy-to-read list of optimisation suggestions. You’ll get suggestions for both mobile and desktop platforms, along with a little graphical preview of how the website looks.
Alternatively, you might consider a tool like KeyCDN’s website speed test. It’ll give you a choice of sixteen locations around the world from which to test the speed of your site, and thus it’s better for tracking down regional inconsistencies. You’ll also get a waterfall breakdown which outlines each component of your site, and thus makes it easier to identify problem areas.
Load testing allows us to identify bottlenecks. Again, there are several different tools available to help you load test. Load Impact will simulate up to 100,000 users in under a minute, and can emulate a range of different browsers and network conditions to see how your site would respond to real-world demand. Blazemeter, Wondernetwork and Loader perform roughly the same function, and are each worth considering.
Load testing should form a cornerstone of your evaluation, and it’s particularly important if you’re looking to scale up.
One metric worth paying attention to is your Time-to-First-Byte, or TTFB. This is the amount of time that elapses between your browser requesting information, and the first byte it receives. Since we’re only talking about one byte, this measure is unaffected by file size, and will thus help show up bottlenecks in your hosting. If your DNS provider, your web host or your CDN isn’t performing well, you’ll start to see a longer TTFB.
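If you’d like a quick TTFB reading from the command line, curl can report it directly; the URL below is a placeholder.

```
# Print the time (in seconds) from the start of the request to the first byte received
curl -o /dev/null -s -w "TTFB: %{time_starttransfer}s\n" https://www.example.com/
```

Run it a few times, and from a few locations if you can, since a single reading can be skewed by a cold DNS cache or a momentarily busy server.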
Now that we’ve looked at why performance matters and how we measure it, let’s move on to what can be done to improve it. As you implement the following strategies, remember to evaluate the effect they’ve had. This means periodically reassessing your website’s performance using the methods we’ve looked at. With that said, let’s look at the first (and arguably most effective) means of improving your website’s performance.
Images are tremendous bandwidth hogs. While a webpage might load a ten-thousand word essay in an instant, even a modestly-sized picture is a different story.
Raster-based images constitute the overwhelming majority of images on the internet. Each is built from a mosaic of coloured blocks known as pixels. A 16-bit, 1080p image contains just over two million pixels, each of which can take one of roughly 65,000 possible colours. That’s a lot of information to get across! What if you need to get it to 100,000 different users simultaneously? What if the image is just one frame of a video?
Anything that can be done to lower the file size of your images will make your website load more quickly. In many cases, this means compressing the image.
Image compression comes in two forms: lossy and lossless. The former will reduce the size of your image dramatically, but at the cost of some fidelity; the latter preserves the image exactly, but delivers more modest savings. Lossy compression is achieved in several different ways, most of them beyond the scope of this guide. For example, the number of colours in an image might be reduced to the most common ones, allowing for a lower bit-depth without appreciably changing how the image looks.
Specialist programs like Adobe’s Photoshop offer a plethora of compression options. But there are simpler solutions available. WordPress users might prefer a plugin like WP Smush, which will automatically compress each image that’s added to a library. It’ll get the job done quickly without adding to your workload.
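As a rough sketch of what lossy compression looks like outside WordPress, a command-line tool such as ImageMagick can re-encode a photograph at a lower quality setting; the filenames and quality value here are purely illustrative.

```
# Re-encode a JPEG at quality 82 and strip embedded metadata to shave off further bytes
convert photo.jpg -strip -quality 82 photo-optimised.jpg
```

Compare the output against the original at full size before deploying it; quality settings in the low 80s are usually indistinguishable from the source, but photographs vary.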
Most images used on the web are in the JPEG format, for two simple reasons. Firstly, they offer a lot of fidelity in a small file size. Secondly, they’re widely supported; even older browsers will display them without complaint. As such, you’ll probably want to use them ahead of something like PNG, whatever other technical advantages the latter might offer.
An alternative to raster-based images comes in the form of vector-based ones. Rather than using a grid of pixels, these describe lines, curves and shapes mathematically. They can’t create photo-realistic pictures in the way that a raster-based image can, but they can create simple, crisp graphics that can be resized again and again with no loss of fidelity. What’s more, simple vector images can weigh in at just a few kilobytes, and thus they make an excellent fit for some applications.
Using CSS, it’s possible to adjust the size of an image in your browser. So, if you need a 500-pixel-wide image to occupy a 400-pixel-wide space, you can add a little code to make it so. This practice is undesirable for two reasons. If you’re scaling up, it often looks dreadful – the image might appear undesirably zoomed and cropped, revealing all of those ugly pixels, or it might warp across one axis to fill the gap.
If you’re scaling down, on the other hand, CSS image resizing is wasteful. Your server will send a file to the client that’s larger than it need be. Better to upload multiple images at different resolutions and have your CSS choose one that’s appropriate to the client device.
Having said that, CSS downsizing isn’t entirely worthless; some devices equipped with ultra-sharp retina displays might benefit from it. As a general principle, however, it’s to be avoided unless you know exactly what you’re doing.
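In practice, serving an appropriately sized file is usually handled in the HTML itself rather than in CSS: the srcset attribute lets the browser pick the smallest candidate that suits the device. The filenames and widths below are placeholders.

```html
<!-- The browser chooses the candidate closest to the space the image will actually occupy -->
<img src="hero-800.jpg"
     srcset="hero-400.jpg 400w, hero-800.jpg 800w, hero-1600.jpg 1600w"
     sizes="(max-width: 600px) 100vw, 800px"
     alt="Hero image">
```

A phone on a slow connection fetches the 400-pixel version, while a high-density desktop display can still pull the 1600-pixel one, so nobody downloads more than they need.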
The internet allows us to communicate information across vast distances. If a user on one side of the world accesses a site hosted on another, each packet must navigate thousands of miles of real-world cabling and routers before arriving at its destination. The greater the distance travelled, the greater the opportunity for lag to occur.
Modern websites address this problem using something called a Content Delivery Network, or CDN. They’re something that everyone who uses the internet interacts with on a daily basis, even if they don’t know it. A CDN is made up of remote data centres at locations around the world, each of which hosts a cached version of a given site. That way, if a European user wants to access an Australian website, they’ll be served by a cache server in Europe that contains all the information they need. These remote data centres, called points of presence, allow the internet as a whole to run more quickly and efficiently, and they help to reduce congestion.
As well as speeding up a given website, Content Delivery Networks confer several other benefits. They distribute the strain of traffic across multiple servers and protect sites against malicious DDoS attacks. Even if one server is brought down, users will still be able to access the site from another PoP. For this reason, they’re popular with smaller political websites which often find themselves targeted by their opponents.
When CDNs were first introduced in the 90s, they were very expensive and restricted to large corporations. Since then, they’ve become more sophisticated, affordable, and easy to implement. To get yours running, you’ll need to modify your DNS settings so that traffic to your domain is routed through the CDN. This is easier than it might sound; modern CDN providers offer detailed instructions to get their services working. In many cases, your hosting provider might provide a CDN as standard, and so securing the benefits for your site might be as simple as checking a box.
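What that DNS change looks like varies from provider to provider, but it often amounts to pointing a hostname at the CDN with a CNAME record; the hostnames in this sketch are entirely hypothetical.

```
; Hypothetical zone-file entry: requests for static assets resolve to the CDN's edge network
static.example.com.   3600   IN   CNAME   example-account.cdnprovider.net.
```

Your CDN’s dashboard will tell you the exact target hostname to use, and whether to route your whole domain or just a subdomain for static assets.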
As useful as Content Delivery Networks are, they’re not for everyone. Typically, they’re the preserve of websites with an international appeal. If most of a website’s users are based in the same country, then there’s little point in incorporating a CDN. In fact, doing so might actually slow your site down, as each packet will need to be routed through the CDN rather than travelling directly to your users.
Every time you navigate to a new web page, your browser will request all the assets required to build the page. The server receives the request and sends them. For a one-off visit, this is unavoidable. For repeat visits, however, it’s wasteful. Why request assets that have already been downloaded?
Repeat requests of this sort place an avoidable strain on the server. Caching is a technique through which those requests can be avoided. It allows the elements that make up your website to be automatically downloaded and stored in a temporary area of the visitor’s hard drive known as a ‘cache’. That way, later visits to the website will rely on assets that have already been downloaded. This practice produces considerable performance improvements for the user, and saves oodles of bandwidth.
Again, WordPress users have access to dedicated plugins that will take care of the problem, including Frederick Townes’ W3 Total Cache. Browser caching can also be controlled from the server side. This is more difficult to implement, but the results can be worth the effort, particularly on high-traffic sites.
How long a resource should stay in the cache depends on how often it changes; for most static resources, around a week is a sensible starting point. Caching behaviour is controlled through HTTP headers, and the most important of these is Cache-Control.
Introduced in the HTTP/1.1 specification, the Cache-Control header lets you stipulate who can cache a response and for how long, and define the caching behaviour of each resource on your site.
The Cache-Control header comes with several directives. You’ll be able to mark an asset as ‘public’ or ‘private’; a ‘private’ response may only be stored by the end user’s browser, which prevents CDNs and other intermediaries from caching the page. You’ll also be able to give the asset a ‘max-age’, which specifies, in seconds from the time of the request, how long the response can be reused.
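On an Apache server, these directives can be set from the .htaccess file. The file-type patterns and lifetimes in this sketch are illustrative starting points rather than recommendations.

```apache
# Requires mod_headers
<IfModule mod_headers.c>
    # Static assets: safe for browsers and CDNs to keep for a week (604800 seconds)
    <FilesMatch "\.(css|js|png|jpe?g|gif|svg|woff2)$">
        Header set Cache-Control "public, max-age=604800"
    </FilesMatch>
    # HTML pages: cached only by the visitor's own browser, and only briefly
    <FilesMatch "\.(html)$">
        Header set Cache-Control "private, max-age=600"
    </FilesMatch>
</IfModule>
```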
The ETag header is a means of making update checks efficient. The server provides the client with a validation token; when the cached copy needs checking, the browser sends that token back, and the server compares it against the current version of the resource. If the two match, the resource is unchanged and doesn’t need to be re-downloaded.
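The exchange looks something like the following (the token value is made up); the 304 response carries headers only, and is a fraction of the size of re-sending the whole file.

```
# First visit: the server attaches a validation token to the response
HTTP/1.1 200 OK
ETag: "33a64df5"

# Repeat visit: the browser offers the token back with its request
GET /css/main.css HTTP/1.1
If-None-Match: "33a64df5"

# If the resource hasn't changed, the server replies without a body
HTTP/1.1 304 Not Modified
```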
Suppose that you’d like to update an asset that your visitors already have cached. You need a means of instructing their browsers to download the new version rather than displaying the older one. There’s only really one way of doing this, and that’s by changing the resource’s URL.
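A common way of doing this is to embed a version number or content hash in the filename, so that updating the asset automatically gives it a fresh URL; the hash below is a placeholder.

```html
<!-- The stylesheet's content hash forms part of its filename; rebuilding the file
     changes the hash, so every browser treats it as a brand-new resource -->
<link rel="stylesheet" href="/css/main.8b7e41.css">
```

Most build tools can generate these fingerprinted filenames for you, which also means the asset itself can be cached for a very long time without ever going stale.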
Caching comes with a few drawbacks that make it unsuitable for some cases. Private or time-sensitive data, for instance, shouldn’t be cached. Despite this, caching remains a useful (if not indispensable) tool for web-optimisation. To get the best from it, however, you’ll need to determine what caching strategy is appropriate for your site.
Your site will need to provide an ETag validation token to prevent the same information from being repeatedly transferred.
Every time you store the same content using a different URL, you’re instructing your user’s browsers to fetch the information all over again. Thus, you should only change a URL in situations where the content has changed.
Assets displayed to all users should be cacheable by CDNs and other intermediaries.
Some parts of your site will change frequently, while others will remain relatively static. You want the former to be retrieved from the server, and the latter from the cache. Arrange your site accordingly, separating your assets according to how often they change. You’ll also want to determine how long each resource should remain in the cache, which means conducting an audit and adjusting cache lifetimes accordingly.
Hotlinking is a practice whereby unscrupulous webmasters can assemble pages using assets stored on other people’s servers. By doing this, they’re able to cut down on their own server load at someone else’s expense. Hotlinking tends to be used for large media files that people don’t want to pay to host themselves, and thus places a sizeable strain on unprotected sites.
Happily, hotlinking is relatively easy to guard against if you’re using a Content Delivery Network like Cloudflare. Just flick the appropriate switch and the problem disappears. Moreover, these services are tuned to recognise legitimate crawlers and referring bots that shouldn’t be blocked. Alternatively, you can always issue the offender with a DMCA takedown notice.
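If you’re not behind a CDN, a similar effect can be achieved on an Apache server with a referer check in .htaccess; this sketch blocks image requests whose referer is another site, while leaving empty referers (direct visits and most legitimate bots) untouched. The domain is a placeholder.

```apache
# Requires mod_rewrite
<IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteCond %{HTTP_REFERER} !^$
    RewriteCond %{HTTP_REFERER} !^https?://(www\.)?example\.com/ [NC]
    RewriteRule \.(jpe?g|png|gif)$ - [F,L]
</IfModule>
```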
Needless to say, hotlinking is a dishonest practice and should be avoided. But even if you discount the moral case against it, hotlinking can hamper your site’s performance. If you’re linking to a slower website, then your site’s performance will suffer. Their website might suffer downtime, during which you’ll be left with a gaping hole where an asset should be. Finally, it’s possible for webmasters to replace hotlinked images with humorous and often highly offensive alternatives which can inflict massive reputational harm on your website. So, rather than linking to assets on other people’s sites, load them onto your own server.
We’ve already touched upon compression in relation to the images on your website. And just about everyone reading this will probably have compressed a collection of files into a single .zip file. But you might not know that it’s possible to compress entire web pages in the same way, through something called GZip compression. Compression of this sort allows your server to send out compressed packages containing your website’s assets, which the visitor’s browser can unpack and display.
To do this, both client and server must first exchange some key information. First, the browser tells the server that it is capable of accepting compressed content, and lists the sorts of compression it’ll accept. The server replies with the compressed data, and advises the browser which technique has been used.
All of this processing takes place ‘under the hood’, and thus the end user won’t notice it – though they might notice the improvement in performance. In some instances, a compressed file can be a tenth of the size of an uncompressed one. GZip can be implemented through a CMS, or manually. You can add code to your .htaccess file or to the top of each page.
If a file has already been efficiently compressed, compressing it again yields little or no benefit. Large media files like images, videos and music will already have been compressed in the ways we’ve described, and thus won’t benefit from GZip. As such, the technique should be reserved for HTML, CSS, JavaScript and the like.
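A minimal .htaccess sketch, assuming an Apache server with mod_deflate available, restricted to the text-based formats discussed above:

```apache
<IfModule mod_deflate.c>
    # Compress text-based responses only; images and video are already compressed
    AddOutputFilterByType DEFLATE text/html text/css text/plain application/javascript application/json
</IfModule>
```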
Compressing content on the fly saves on bandwidth while placing only a minimal load on the processing resources of the server and the client machine. In most cases, this trade-off is worthwhile – but not always.
A final minor quibble concerns backwards compatibility. HTTP compression has been around for a while, and yet it might not be compatible with very old machines and browsers. If your site holds a particular appeal to users of Windows 98, then this might be something to consider.
While it might seem obvious that broken links should be eliminated, it might not be clear how they slow your website down.
Broken links between pages of your website will annoy your users. But where CSS, JavaScript and images are embedded using faulty links, the result can be a marked slowdown. Whenever a bad link is requested from your site, the server will need to waste resources responding. Images are often the culprit here, and if they’re small enough they’re easy to miss.
Sites that use external CSS files should also be wary. A common cause of broken links is a CSS file being moved without the associated HTML links being updated. Thus, the page will request content from the wrong place, and return a 404 error when the CSS file can’t be found.
JavaScript presents a special problem in that bad links are sometimes not simply disregarded; the browser will instead attempt to interpret the 404 page as JavaScript. While this is going on, it’ll hold up all other downloads, resulting in a noticeable slowdown for your page. For this reason and others, it’s common practice to put JavaScript code near the end of your HTML; that way, problems of this sort won’t prevent the rest of your page from loading.
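In practice this means referencing scripts just before the closing body tag, or marking them with the defer attribute so the HTML keeps parsing while they download; the filename here is a placeholder.

```html
  <!-- Fetched without blocking the rest of the page; executed once the document has been parsed -->
  <script src="/js/app.js" defer></script>
</body>
</html>
```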
Naturally, manually checking every link on your site is going to prove time-consuming, particularly if the site is large, complex and frequently changes. Happily, there are free online tools that’ll do the job for you. Dead Link Checker and Broken Link Checker are two simple examples: input your site’s URL and they’ll crawl through in search of 404 errors. If you’re using WordPress, there are similar tools available in convenient plugin form.
‘Minification’ is a programming term which refers to the process of removing superfluous code. By stripping away unnecessary characters, it’s possible to drastically reduce the size of a given piece of JavaScript or CSS, while keeping its functionality intact.
Characters such as spaces, line breaks, indentation and comments have no effect on how the code runs, and can safely be eliminated.
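As a small illustration, the two functions below behave identically; the second is simply what the first looks like once a minifier has been over it.

```javascript
// Before minification: comments, whitespace and descriptive names aid readability
function calculateTotal(unitPrice, quantity) {
  // Multiply the unit price by the number of items ordered
  return unitPrice * quantity;
}

// After minification: the same behaviour in a fraction of the bytes
function calculateTotal(n,t){return n*t}
```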
It’s worth keeping a backup of your un-minified JavaScript and CSS files, particularly if you have more than one person working on them. Those comments and spaces, after all, help to keep things readable, and thereby make it easier to improve the code.
The more individual JavaScript files running on your website, the greater the potential benefit of consolidating them. The user’s browser treats each JavaScript file individually, sending out a separate request for each; by combining these files into a single one, you shrink that number of requests to just one. The same applies, to a lesser extent, to CSS files.
Again, WordPress users will be able to get the job done with the help of a single plugin, Better WordPress Minify. It’ll maintain the order of your CSS and JS files as well as their dependencies, and offers a range of customisation options. If you’d prefer something more direct, you might also input your JavaScript directly into an online tool like JavaScript Minifier.
A database that’s improperly indexed, or that’s cluttered with unused tables, is likely to slow your website down. One simple step you can take is to move your database to a separate server that’s been specially configured for the job. Splitting your hosting into a database server and a web server gives each its own resources: dedicated RAM, CPU and storage space. Provided the two have the bandwidth required to exchange data quickly, both will perform better for it.
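If you’re running WordPress, pointing the application at a separate database server is a small change in wp-config.php; the hostname and credentials below are placeholders.

```php
// wp-config.php: connect to a dedicated database server rather than localhost
define( 'DB_HOST', 'db.internal.example.com:3306' );
define( 'DB_NAME', 'example_db' );
define( 'DB_USER', 'example_user' );
define( 'DB_PASSWORD', 'change-me' );
```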
Some sites are more database-dependent than others. If you’re using WordPress or a similar CMS, and have a lot of plugins installed, then your website will likely be slower than it should be. Many plugins will automatically save statistics and user data that you might not need – and all that extra information has to be stored somewhere.
In most cases, there are settings you can tweak to minimise this problem. But to stay on top of things, you’ll want to get into the habit of periodically cleaning out your database. Back up everything before you start making changes.
For this task, the most popular WordPress plugin is WP-Optimize. It’ll trawl through your database and clear out all the junk. What’s more, it’ll offer you detailed control over how often those clean-ups should occur, and how extensive they should be.
Thus far, we’ve mostly discussed ways to get data from the server to the user’s browser in the quickest possible time. But many of the advantages of a fast, responsive webpage only accrue if the user experiences them. In order to generate a responsive user experience, it’s important to prioritise content that’s going to be immediately visible, or ‘above-the-fold’.
‘The fold’ refers to the real-life, physical fold that runs through the middle of a broadsheet newspaper. Historically, when such publications were displayed in a stack, it would be just the top half of the front page that would be visible. Consequently, this would be where editors would place their most eye-catching content, namely: the name of the paper, an arresting headline, and an image.
The same holds true for the internet age. On a website, the term ‘above the fold’ refers to the parts of a page that are visible to the user before they start to scroll down. Research suggests that most visitors to a website will leave before they start to scroll, and thus most webmasters prioritise their ‘above the fold’ content in the same way that newspaper editors once did. While there’s reason to be sceptical of the power of the ‘fold’ and the prevailing wisdom around user scrolling behaviour, the fold still matters from an optimisation perspective.
Suppose that you have a very long page that’s designed to be scrolled through. What if the content at the bottom began to load before the material at the top? The result would be the appearance of a sluggish website. This can be avoided by sensibly structuring your HTML. Google advises splitting CSS into two parts: an inline component that deals with the above-the-fold content, and a deferred component that kicks in only once the first has done its job.
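A common pattern, sketched below with placeholder styles and paths, inlines the handful of rules needed for above-the-fold content and defers the full stylesheet until after the first render.

```html
<head>
  <!-- Critical, above-the-fold rules shipped inline with the HTML -->
  <style>
    header { background: #003366; color: #fff; }
    .hero  { min-height: 60vh; }
  </style>
  <!-- Full stylesheet fetched without blocking rendering; swapped in once it arrives -->
  <link rel="stylesheet" href="/css/main.css" media="print" onload="this.media='all'">
  <noscript><link rel="stylesheet" href="/css/main.css"></noscript>
</head>
```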
A redirect is a way to divert users to a different URL from the one they expected. The most common, the 301, will pass on the clear majority of ranking power to the destination page, and thus it’s the preferred option for many webmasters.
While redirects might seem a convenient way to point users to new content, they create inherent problems with responsiveness. Each redirect will create another cycle of HTTP requests and responses. In the worst instance, this might mean several rounds of back-and-forth as new DNS lookups and handshakes are made. For example, if you’re going to redirect mobile users to a different, mobile version of your website, then you’ll add to the initial wait. As we’ve discovered, every millisecond counts on the landing page.
In some cases, a single redirect is desirable. After all, there are more screen sizes accessing online content than ever before, and it’s important to direct users to the right version of your site to deliver a consistent experience. It’s when multiple redirects are daisy-chained together that unacceptable dips in performance occur.
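Where a redirect is genuinely needed, keep it to a single hop; on Apache, a permanent redirect is one line in .htaccess (the paths here are placeholders).

```apache
# Send visitors straight to the final location with a single 301
Redirect 301 /old-page https://www.example.com/new-page
```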
External scripts pose another, related problem. Each one you add to your site will throw up additional HTTP requests every time the site loads. External scripts might take the form of pop-ups, social media widgets, and commenting systems. Embedded video from sluggish services will also inhibit your site’s performance, and so such embeds are best avoided.
An HTTP exchange can be made more efficient with the help of a pair of related techniques.
What if your browser were able to set up a connection before sending the corresponding HTTP request to the server? DNS lookups, TCP connections and TLS handshakes can all be carried out in advance using a technique called preconnect. Many modern browsers will do some of this automatically, but you’ll be able to give them a better idea of which connections to make through HTML.
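These hints take the form of link elements in the page’s head; the origins below are examples of the sort of third-party hosts you might warm up in advance.

```html
<!-- Resolve DNS, open a TCP connection and complete the TLS handshake ahead of time -->
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<!-- Lighter-weight alternative: resolve the hostname only -->
<link rel="dns-prefetch" href="https://cdn.example.com">
```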
HTTP Keep-Alive comes into action after a conversation has taken place. It allows the TCP connection to be maintained after an exchange is over, thereby keeping it ‘alive’ for the next one. This allows the client’s browser to fetch multiple files over the same connection without having to repeatedly re-connect.
You can turn keep-alive on and off using a simple HTTP header. If you see the response header ‘Connection: close’, it means that keep-alive is disabled; change it to ‘Connection: Keep-Alive’ to enable it. You can also achieve the same end using a .htaccess file, which will in most cases override the server settings.
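A minimal .htaccess sketch, assuming mod_headers is available on your Apache server:

```apache
<IfModule mod_headers.c>
    # Ask the server to keep the TCP connection open for subsequent requests
    Header set Connection keep-alive
</IfModule>
```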
In many cases, keep-alive is on by default, though some shared hosts might disable it to preserve bandwidth. It’s worth determining whether your connections are being kept alive, as it can make a significant difference to the apparent responsiveness of your site.
Almost every website has some variety of text on it. Without words, after all, it’ll be impossible to convey your message. But for a machine to display that text, it’ll need the necessary font – a file that contains every character the text is going to display. Once, a browser would refer to fonts already contained on the client machine. Today, however, webmasters can publish using any font they like, storing the file on the web server.
Custom fonts have seen an enormous increase in usage over the past few years. They can provide a means of customising your site and ensuring a consistent experience from user to user. But they will add extra HTTP requests, slowing down your website. Moreover, if you’re using a third-party font service like Google Fonts or Adobe’s Typekit, then you’ll run into problems should the service go down.
There are several ways to optimise your fonts, but two pieces of advice stand out: load only the weights and styles you actually use, and make sure your text remains visible while the font file is downloading.
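The sketch below, with placeholder paths, preloads a single font file and uses font-display: swap so that text stays readable while the font downloads.

```html
<link rel="preload" href="/fonts/opensans-regular.woff2" as="font" type="font/woff2" crossorigin>
<style>
  @font-face {
    font-family: 'Open Sans';
    src: url('/fonts/opensans-regular.woff2') format('woff2');
    /* Show fallback text immediately; swap in the web font once it's ready */
    font-display: swap;
  }
</style>
```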
New webmasters often opt to host their new sites cheaply. They suppose, not unreasonably, that it’ll take time to build up enough traffic to justify the cost of something more expensive. And thus they go for cheap hosting that’s shared among several different sites.
While shared hosts have their place, they tend to suffer from poor performance. When the system is under heavy load, all of the hosted sites will suffer. And providers tend to oversell their services, pushing the hardware beyond what’s reasonable. As a website grows, it’ll begin to feel the limitations of its hosting. It doesn’t take long before switching to a VPS begins to make sense.
A VPS, or virtual private server, is a virtual machine sold by your host. While it’ll still share a server with other websites, the resources of that server will be strictly partitioned, with system resources equally divided between them. This makes hosting more expensive, but the resultant improvements in performance and security usually justify the extra cost. A VPS also tends to be more easily upscaled, which suits fast-growing sites.
For even greater performance increases, you might consider a dedicated server (or several dedicated servers), which would see your entire site separated from everyone else. All of the server’s resources, including CPU, RAM and bandwidth, would be reserved for your site alone.
If you decide to ditch your shared hosting, then you’ll be able to choose between a managed or unmanaged alternative. Pay for a managed server, and you’ll also be paying for the expertise to keep it running properly, along with backups, disaster recovery and additional labour-saving services. The extent of management differs between operators, so be sure to establish exactly what you’re getting when researching potential hosts.
The internet is driven by technologies and techniques that are refined every day. And every increase in performance leads to a proportional increase in user expectations. If your website loads more slowly than the competition, you’ll lose traffic and revenue. Put simply, websites that fail to keep up the pace run the risk of being left behind.
The scope for potential optimisation is enormous. It really is true to say that a website is an ongoing project, and we’ve just scratched the surface of the steps you can take to improve things. Once you’ve implemented a few of the measures we’ve discussed, you’ll notice improvements in performance. Tackle the problem pro-actively, and you’ll be able to stay ahead of the competition for years to come.