Recently, our development team was tasked with improving website performance for one of our longstanding clients. Their site was experiencing numerous page load issues, including extended load times and page time outs.
Statistics suggest that if your page takes more than three seconds to load, roughly 40% of your visitors will give up and leave the site.* The numbers are even worse for mobile device users, who often have much slower connections and smaller caches. This makes website performance an integral factor in your brand's digital success.
Identifying the Issue
After our first pass at speeding up this client's site using a standard approach (checking scripts and images), we noticed only minimal performance improvement. A CDN, in this case Cloudflare, was already in place serving those assets, but the site itself was scoring in the 30s in Google's Lighthouse tool.
So, we took a look at the client's CMS. The site was running Kentico EMS 8, which is a few years old now, but we were able to rule it out as the cause of the performance problem.
Finally, we examined the web host. The site had originally been set up on Rackspace a number of years ago. Rackspace can be extremely limiting due to the hardware offered, its scaling capabilities, and its cost-prohibitive nature. It's also beginning to shift its business model, becoming a support partner for companies moving to larger cloud providers.
It was time for a change.
Our team determined that a migration from Rackspace to Microsoft's Azure was the best way to update the infrastructure supporting the client's website. By making this switch, the client would receive a more scalable and flexible architecture, allowing them to incrementally test for the performance issues affecting both anonymous users and authenticated users, who were seeing poor page load times in the backend of the CMS.
We took a copy of production and stood up services on Azure to mirror the production site. We also added a subdomain within Cloudflare to mimic the CDN for testing, so we had a true one-to-one replica of what was running at the time.
After moving the site over to Azure, scores moved up from the 30s to the 50s on comparable hardware. This was a considerable improvement, but still not acceptable to end users.
In our tests, we also noticed that IIS was receiving a large volume of requests. Standard Kentico caching was in place, as well as IIS caching, but neither was able to meet the demand from end users, causing the site to lag and even stall in some cases.
Enter Varnish and Redis
At Boston Interactive, we work with .NET technology but we also have a large practice in open-source development, specifically Drupal.
One common practice in Drupal development is defining solid cache layering, not only for static assets but for the dynamic page cache as well. In this particular case, we had to work with Kentico's cache and headers, as well as IIS and, eventually, the CDN. While brainstorming options, we started to look into Varnish, a commonly used service in the Drupal world.
What is Varnish?
Varnish is a web application accelerator that caches responses to HTTP requests and functions as an HTTP reverse proxy. Varnish is typically installed between the web server and the CDN in the stack.
In this model, the client's HTTP request never reaches the web server directly, be it IIS, Apache, or NGINX. All of that traffic is routed to Varnish, which processes it according to rules written in Varnish Configuration Language (VCL). Varnish compiles that VCL into native code, which is then executed when requests are received. The VCL itself is organized into subroutines, which are executed at different stages of a request.
For example, one subroutine is executed when we receive a request from the browser, and another when files are fetched from the backend server(s). This allows us to offload requests from the web server and direct them to the Varnish server.
In other words, Varnish builds its own page cache and serves it directly to the user if the page was fetched during a previous session; otherwise, it requests the page from the web server, builds the cache for the current session, and reuses it for future sessions and requests. According to the Varnish project, this can improve delivery speed by a factor of 300 to 1,000. By sparing IIS from assembling the same page over and over, and letting Kentico or Varnish serve it from cache instead, we are able to dramatically improve site speed.
Additionally, by leveraging Azure, we're able to configure a Varnish instance and point the app gateway to that server. This allows Varnish to communicate with the web front end via the internal subnet rather than the public-facing internet address. The result is that all page requests to the web server are accelerated through Varnish, and direct internet traffic to the web server from anonymous users is eliminated, making the site more secure.
Best of all, Varnish is free! All you need is some Linux knowledge and an understanding of VCL. VCL inherits much of its syntax from C, so if you've worked with C# (common with Kentico), it should be pretty easy to pick up:
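As a taste of the language, here is a minimal VCL sketch, not our production configuration; the backend address, the excluded path, and the TTL are placeholders for illustration only:

```vcl
vcl 4.0;

# Placeholder backend: in an Azure setup like the one described above,
# this would be the internal-subnet address of the IIS front end.
backend default {
    .host = "10.0.1.4";
    .port = "80";
}

sub vcl_recv {
    # Strip cookies on anonymous traffic so those pages are cacheable.
    # "/admin" stands in for whatever CMS paths must bypass the cache.
    if (req.url !~ "^/admin") {
        unset req.http.Cookie;
    }
}

sub vcl_backend_response {
    # If the backend did not set a cache lifetime, cache for one hour.
    if (beresp.ttl <= 0s) {
        set beresp.ttl = 1h;
    }
}
```

Subroutines like `vcl_recv` (run when a request arrives from the browser) and `vcl_backend_response` (run when a response is fetched from the backend) are the hooks mentioned earlier.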
What is Redis?
Redis is an open source, in-memory data structure store which can be used as a database, cache, and message broker. One of the reasons it's so popular and fast is that it uses pre-built data structures for common use cases. This lets a client issue a single command and have Redis do all the processing against those structures.
In the case of Kentico, we store the results of the dynamically built SQL queries that assemble specific pages. That way, we don't have to hit the SQL server to rebuild those pages or rerun their queries; Redis handles that load instead, which speeds up rendering for every dynamic page.
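The pattern at work here is cache-aside: check the cache first, and only fall back to SQL on a miss. The sketch below illustrates it in Python with an in-memory stand-in for a Redis client (in a real Kentico deployment this role is played by the actual Redis service); the function and key names are ours, not Kentico's:

```python
import time

class FakeRedis:
    """Illustrative stand-in for a Redis client, mimicking GET and SETEX."""

    def __init__(self):
        self._store = {}

    def get(self, key):
        value, expires_at = self._store.get(key, (None, 0))
        if value is not None and time.time() < expires_at:
            return value
        return None  # missing or expired

    def setex(self, key, ttl_seconds, value):
        # Store the value along with its expiry time.
        self._store[key] = (value, time.time() + ttl_seconds)

cache = FakeRedis()

def run_expensive_query(page_path):
    # Placeholder for the SQL work the CMS would do to assemble a page.
    return f"<html>rendered {page_path}</html>"

def get_page(page_path, ttl_seconds=3600):
    """Cache-aside: serve from cache if present, else query SQL and cache it."""
    key = f"page:{page_path}"
    cached = cache.get(key)
    if cached is not None:
        return cached, "cache"
    html = run_expensive_query(page_path)
    cache.setex(key, ttl_seconds, html)
    return html, "sql"

print(get_page("/find-a-community"))  # first request falls through to SQL
print(get_page("/find-a-community"))  # repeat request is served from cache
```

Only the first request for a page pays the SQL cost; every subsequent request within the TTL is served from memory.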
The other added benefit is that, by default, Kentico will not cache pages for content administrators and content editors on the backend. With Redis, the queries that build the content-editing forms and pages, such as the content tree, can be stored and reused when those forms are rendered, making the experience much faster for content editors.
Azure also offers Redis as a managed service (Azure Cache for Redis).
So... Did It Work?
After adding those services and fine-tuning our custom VCL, we managed to boost Lighthouse scores to 100 on a majority of site pages. Even under load, pages still score in the high 80s to low 90s.
[Lighthouse score screenshot: Find a Community page]
[Lighthouse score screenshot: Gables at Winchester page, under load]
Higher Cache Lifetime = Better Performance
As a final step, we increased Kentico's page cache lifetime from its default of one hour to one week; eventually, we may even increase it to one month or longer.
Varnish supports purging, or cache invalidation by tag. With it, you can define cache headers, or have Varnish look at headers already being sent, to invalidate the Varnish cache. This allows more dynamic pages, or pages that content editors change, to show up on the site without invalidating all caching site-wide. In this use case, we have Varnish watch for HTTP headers to invalidate the page cache for pages that are published or edited by a content administrator. Once a page is updated or saved in Kentico, it sends this new header, meaning the next page request from an end user will show updated content.
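A fragment like the following sketches what header-driven invalidation can look like in VCL; the header names and the internal subnet are illustrative, not the exact ones used on this project, and a complete file would also declare a backend:

```vcl
# Only trusted hosts (e.g. the CMS server) may invalidate cache entries.
acl invalidators {
    "localhost";
    "10.0.1.0"/24;    # internal subnet only
}

sub vcl_recv {
    # On publish/save, the CMS issues a BAN request naming the tag of the
    # page it just changed; matching cached objects are invalidated.
    if (req.method == "BAN") {
        if (client.ip !~ invalidators) {
            return (synth(405, "Not allowed"));
        }
        ban("obj.http.X-Cache-Tags ~ " + req.http.X-Invalidate-Tag);
        return (synth(200, "Ban added"));
    }
}
```

Because Varnish stores response headers alongside each cached object, a ban expression can match on a tag header (here `X-Cache-Tags`) that the CMS attached when the page was first cached.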
Example response header from the home page:
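The original capture isn't reproduced here, but a cache-hit response from a Varnish-fronted site generally looks like the following (values and header names are illustrative, not taken from the client's site):

```http
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Cache-Control: public, max-age=604800
Age: 3542
X-Varnish: 65545 32771
Via: 1.1 varnish (Varnish/6.0)
X-Cache: HIT
```

A nonzero `Age` and two transaction IDs in `X-Varnish` indicate the response was served from the Varnish cache rather than assembled by IIS.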
One Last Thing
We've been using Varnish and Redis for our Kentico clients that we host directly, and we've seen similar boosts in performance on all of those sites.
Feel free to contact us if you're having trouble with web hosting or running into serious performance issues. While we've mainly discussed Kentico in this article, we can just as easily configure this model for Sitecore, Magento, WordPress, and beyond.