How I Optimized Legacy Code



Recently I was responsible for tuning the performance of a website for a service that has been running for seven or eight years. The codebase is legacy and has no unifying structure; it has grown into something of an anarchy. On top of that, the site still communicates over HTTP/1.1, which keeps transfer speeds slow. Even in this situation, usability had to be improved while keeping the service running. The assignment demanded more than my experience, since I had only started as a front-end engineer about half a year earlier, and of course IE support still had to be considered. Performance, however, is often invisible and easily neglected, yet a little care can have a surprising effect. Tuning it also deepened my understanding of how the web works. In this post, I will explain how I approached improving the website.

Each topic might not be covered in detail, but I hope you get an overview of the optimization.


Firstly, I measured the current performance in order to grasp what I should tackle first. Among the various measuring tools, I chose the “Audits” panel in Chrome DevTools because it is easy to run. The initial score looked like the one below.


The measured score was so extraordinarily low that I was at a loss about where to start. Generally speaking, though, modern websites emphasize the speed of the initial display. So I set a performance budget: improve the initial display speed until the score exceeds 90.

Deciphering the Critical Rendering Path

I think the easiest way to improve the initial display speed is to shorten the critical rendering path.
What is the critical rendering path? You can find a more detailed explanation in this article.

the sequence of steps the browser goes through to convert the HTML, CSS, and JavaScript into pixels on the screen

In other words, if many CSS and JavaScript files are required to render the initial display, loading is delayed by that amount. A module bundler, as represented by Webpack, assembles multiple CSS and JavaScript files into a single file, so the critical rendering path may not matter much in a modern project, especially one built with a framework such as React or Vue. In my case, however, various scripts and CSS files were requested like patchwork across a legacy site that has existed for more than seven years.


There were 21 files requested for the initial display! The rendering path consisted mainly of loading font files, CSS files, and JavaScript files. To shorten it, I combined the dependent scripts into one. This included third-party libraries such as jQuery and Google Analytics. Although Google Analytics is essential for our service, it is not necessary for the initial display and should be executed after the page has loaded. From the scattered scripts, I eliminated jQuery because the library itself is large: I rewrote its usages in native JavaScript and merged the other scripts into a single file.
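To sketch what the jQuery removal looked like, here are the kinds of small native helpers I mean; the names and selectors are hypothetical, not the actual project code:

```javascript
// Hypothetical examples of replacing jQuery idioms with native DOM APIs.

// $(selector) → querySelectorAll, wrapped to return a real Array
function $all(selector, root = document) {
  return Array.from(root.querySelectorAll(selector));
}

// $(el).addClass(name) → the native classList API
function addClass(el, name) {
  el.classList.add(name);
  return el;
}
```

With helpers like these, most jQuery call sites can be swapped out mechanically, while `fetch` and `addEventListener` cover Ajax and event handling.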


When I measured the performance again after merging the files, it had improved slightly. The assessment of the rendering path appears in the list at the bottom of the image above as “Minimize Critical Requests Depth”. You can see that the critical rendering path has been reduced to 4 chains.

Reduce file size and lazy load

The next step was to adjust image sizes and shift their loading timing. Since the Audits panel warned which images should be shrunk, with recommended sizes, in the “Properly size images” section, I simply regenerated all of them at appropriate sizes. I then used the srcset attribute so that each device downloads an appropriately sized image.
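For reference, the srcset markup looks roughly like this; the file names, widths, and sizes condition are illustrative:

```html
<!-- The browser picks the smallest candidate that satisfies the display width. -->
<img src="photo-640.jpg"
     srcset="photo-640.jpg 640w, photo-1280.jpg 1280w, photo-1920.jpg 1920w"
     sizes="(max-width: 640px) 100vw, 50vw"
     alt="...">
```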

Unfortunately, this wasn’t enough, because loading the images still blocked rendering. My solution was to load images only after the initial content had appeared. In addition, images outside the viewport are scheduled to load only when they are scrolled into view for the first time. Reducing the number of requests also reduces the amount of data transferred. I won’t go into much detail here on how to implement lazy-loaded images, but the technique I used is the Intersection Observer API.

This is an API that monitors how much of an element’s bounding rectangle is visible in the viewport. With a polyfill, it can also be used in IE11.
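A minimal sketch of the lazy-loading approach, assuming images carry their real URL in a data-src attribute and a lazy class (both names are hypothetical):

```javascript
// Swap the deferred URL in data-src into src so the browser starts the download.
function loadDeferredImage(img) {
  if (img.dataset && img.dataset.src) {
    img.src = img.dataset.src;
    delete img.dataset.src;
  }
  return img;
}

// Observe each lazy image and load it the first time it enters the viewport.
if (typeof IntersectionObserver !== 'undefined' && typeof document !== 'undefined') {
  const observer = new IntersectionObserver((entries, obs) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        loadDeferredImage(entry.target);
        obs.unobserve(entry.target); // each image only needs to load once
      }
    }
  });
  document.querySelectorAll('img.lazy').forEach((img) => observer.observe(img));
}
```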


After the images were optimized, a certain level of performance was ensured.

Lazy Load JavaScript files

The next thing I noticed was the timing of loading the script files. I simply added the async or defer attribute to the <script> tags. Here is the difference between async and defer; this article explains it in more detail.
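Concretely, the two attributes are applied like this (the file names are illustrative):

```html
<!-- async: fetched in parallel, executed as soon as it arrives (order not guaranteed) -->
<script async src="analytics.js"></script>

<!-- defer: fetched in parallel, executed in document order after parsing finishes -->
<script defer src="main.js"></script>
```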


If you look only at the scripts, defer seems better, but the timing of the load event differed between async and defer. For example, with async, if image loading is kicked off at DOMContentLoaded, the subsequent processing is skipped and the load event fires even while the images are still downloading. With defer, on the other hand, the load event is not fired until all of those images have loaded. This may not be the specified behavior, but it is the issue I encountered.

window.addEventListener('DOMContentLoaded', (event) => {
    // NOTE: Start loading an image here (the path is illustrative).
    // With defer, the load event waits until this image has loaded;
    // with async, it can fire while the image is still downloading.
    const image = new Image();
    image.src = '/path/to/image.png';
    document.body.appendChild(image);
});

Besides, Google Analytics is loaded lazily using requestIdleCallback. This callback runs when the browser becomes idle, so it does not cause a rendering block.
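A sketch of deferring the analytics script with requestIdleCallback; the loadScript helper is my own illustrative wrapper, and a setTimeout fallback covers browsers (such as IE11) that lack requestIdleCallback:

```javascript
// Inject a script tag for the given URL into the document head.
function loadScript(src, doc) {
  const script = doc.createElement('script');
  script.src = src;
  script.async = true;
  doc.head.appendChild(script);
  return script;
}

// Run the callback when the browser is idle, or after a short timeout as a fallback.
const whenIdle = typeof requestIdleCallback === 'function'
  ? requestIdleCallback
  : (cb) => setTimeout(cb, 1);

if (typeof document !== 'undefined') {
  whenIdle(() => loadScript('https://www.google-analytics.com/analytics.js', document));
}
```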

JavaScript rendering block

Manipulating the DOM and CSS styles from JavaScript can cause reflow and rendering blocks, which degrades performance. The following operations trigger reflow.

  • getComputedStyle()
  • offset* properties
  • client* properties
  • scroll* properties

more detail

Every call degrades performance further. The following are my top tips for dealing with DOM manipulation.


Given the markup <div class="persons"> with three <p> children:

let elements = document.querySelectorAll('.persons p');
elements[0].textContent = 'person A';
elements[1].textContent = 'person B';
elements[2].textContent = 'person C';

In this case, the three p elements are replaced in sequence, so the DOM tree is updated three times. Try to change it to the following implementation.

// Clone Node
let origin = document.querySelector('.persons'), 
    clone = origin.cloneNode(true);

// Update elements at the clone node.
let elements = clone.querySelectorAll('p');
elements[0].textContent = 'person A';
elements[1].textContent = 'person B';
elements[2].textContent = 'person C';

// Replace the original node with the clone.
origin.parentNode.replaceChild(clone, origin);


while (container.firstChild) {
    container.removeChild(container.firstChild);
}

The above can be fairly fast, but a reflow occurs each time removeChild is executed. Assigning an empty string to innerHTML, as below, should be even faster.

const container = this.paginationContainer;
container.innerHTML = '';


Most node lists are implemented with node iterators and filters. For example, getting the length of the list costs O(n), and if you manipulate elements while checking the length on every iteration of a loop, it takes O(n²).

let paragraphs = document.getElementsByTagName('p');

for (let i = 0; i < paragraphs.length; i++) {
    // paragraphs.length is re-evaluated on every iteration.
}
In the above case, length is evaluated on every iteration. Try rewriting it as follows.

let paragraphs = document.getElementsByTagName('p');

for (let i = 0, paragraph; (paragraph = paragraphs[i]); i++) {
    // The loop ends when paragraphs[i] is undefined; length is never consulted.
}


You can also walk child nodes directly instead of indexing into a live collection:

let parentNode = document.getElementById('persons');

for (let child = parentNode.firstChild; child; child = child.nextSibling) {
    // Visit each child without repeatedly querying the node list.
}

Other points

visibility: hidden; and display: none; are different concepts. With visibility: hidden, the element still takes up space in the layout even though it is invisible, as if there were an empty box. In contrast, display: none removes the element from the render tree completely, so it is not included in the layout at all. I therefore changed visibility: hidden to display: none to keep the render tree as small as possible.
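The difference in a nutshell (class names are illustrative):

```css
/* Keeps the element's box in the layout, just invisible. */
.hidden-but-present { visibility: hidden; }

/* Removes the element from the render tree; it occupies no space. */
.gone { display: none; }
```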


Performance improvement is not achieved by optimizing a single aspect; the site has to be tuned as a whole. While optimizing, I kept doubting whether each change was effective; it was as if I were grasping at clouds in the air. Now that I can estimate how effective a given modification is, I can develop as usual while keeping an eye on the tuning points.

The following list is all about what I did:

  • Reduce the number of requested JavaScript files and integrate them into the main project to shorten the critical rendering path.
  • Remove the jQuery dependency.
  • Adjust image download sizes.
  • Implement lazy loading of images.
  • Move Google Analytics execution to after page load to avoid slowing the initial display.
  • Load scripts asynchronously.
  • Implement lazy loading of iframe ads to avoid rendering blocks (not covered in this post).

Final Score

