1. Reduction of HTTP requests


A complete HTTP request requires a DNS lookup, a TCP handshake, the browser making the HTTP request, the server receiving the request, the server processing the request and sending back a response, and the browser receiving the response. Let’s look at a concrete example to help understand HTTP:


This is the timing breakdown of an HTTP request for a 28.4 KB file.

 Term definitions:

  •  Queueing: Time in the request queue.

  • Stalled: The time between when the TCP connection is established and when data can actually be transmitted; this includes proxy negotiation time.

  • Proxy negotiation: Time spent negotiating a connection with the proxy server.

  • DNS Lookup: The amount of time it takes to perform a DNS lookup for each different domain on the page.

  • Initial Connection / Connecting: The time taken to establish a connection, including TCP handshaking/retrying and negotiating SSL.
  •  SSL: Time taken to complete the SSL handshake.

  • Request sent: The time taken to send the request, usually a fraction of a millisecond.

  • Waiting (TTFB): Time To First Byte, the time between sending the request and receiving the first byte of the response.

  • Content Download: The time taken to receive the response data.


As you can see from this example, actually downloading the data accounts for only 13.05 / 204.16 = 6.39% of the total time. The smaller the file, the smaller this percentage; the larger the file, the higher it is. This is why it is recommended to reduce the number of HTTP requests by combining multiple small files into one larger file.
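
These phases can also be measured in code. Here is a minimal sketch using the standard Resource Timing API to print a similar breakdown for every resource the page has loaded:

performance.getEntriesByType('resource').forEach((entry) => {
  console.log(entry.name, {
    dns: entry.domainLookupEnd - entry.domainLookupStart,
    tcp: entry.connectEnd - entry.connectStart,
    ttfb: entry.responseStart - entry.requestStart,
    download: entry.responseEnd - entry.responseStart
  })
})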

 2. Use of HTTP2


HTTP2 has several advantages over HTTP1.1:

 Fast parsing


When a server parses an HTTP1.1 request, it must keep reading in bytes until it encounters the delimiter CRLF. Parsing HTTP2 requests is less cumbersome because HTTP2 is a frame-based protocol, and each frame has a field that indicates the length of the frame.

 Multiplexing


With HTTP1.1, if you want to initiate multiple requests at the same time, you have to establish multiple TCP connections, because a single TCP connection can only handle one HTTP1.1 request at a time.


On HTTP2, multiple requests can share a single TCP connection, which is called multiplexing. Each request and its response are carried on a single stream, identified by a unique stream ID. Multiple requests and responses can be sent out of order over one TCP connection and then reassembled by stream ID when they reach their destination.
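
As a server-side illustration, here is a minimal sketch using Node's built-in http2 module (the certificate paths are placeholders, not from this article):

const http2 = require('http2')
const fs = require('fs')

// browsers only speak HTTP2 over TLS, so a certificate is required;
// key.pem and cert.pem are placeholder paths
const server = http2.createSecureServer({
  key: fs.readFileSync('key.pem'),
  cert: fs.readFileSync('cert.pem')
})

// every request arrives as a stream, all multiplexed over one TCP connection
server.on('stream', (stream, headers) => {
  stream.respond({ ':status': 200, 'content-type': 'text/plain' })
  stream.end('hello over HTTP2, path: ' + headers[':path'])
})

server.listen(8443)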

 Header compression

 HTTP2 provides header compression (HPACK).

 For example, there are two requests as follows:

:authority: unpkg.zhimg.com
:method: GET
:path: /[email protected]/dist/zap.js
:scheme: https
accept: */*
accept-encoding: gzip, deflate, br
accept-language: zh-CN,zh;q=0.9
cache-control: no-cache
pragma: no-cache
referer: https://www.zhihu.com/
sec-fetch-dest: script
sec-fetch-mode: no-cors
sec-fetch-site: cross-site
user-agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.122 Safari/537.36
:authority: zz.bdstatic.com
:method: GET
:path: /linksubmit/push.js
:scheme: https
accept: */*
accept-encoding: gzip, deflate, br
accept-language: zh-CN,zh;q=0.9
cache-control: no-cache
pragma: no-cache
referer: https://www.zhihu.com/
sec-fetch-dest: script
sec-fetch-mode: no-cors
sec-fetch-site: cross-site
user-agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.122 Safari/537.36


As you can see from the two requests above, the headers are largely identical. If the identical headers could be stored and only the differing parts sent, a lot of traffic would be saved and requests would be faster.


HTTP/2 uses “header tables” on both the client and server side to track and store previously sent key-value pairs, so that the same data is no longer sent with each request and response.


Here’s another simplified example. Suppose the client sends the following request headers in sequence:

Header1:foo
Header2:bar
Header3:bat

 When the client sends the request, it builds a table from these header values (in HPACK, indexes 1-61 are reserved for the static table, so dynamic entries start at 62):

Index  Header name  Value
62     Header1      foo
63     Header2      bar
64     Header3      bat


When the server receives the request, it builds the same table. When the client sends its next request with identical headers, it only needs to send an index block like this:

62 63 64


The server looks up the previously built table and expands these indexes back into the full headers.


 Priority


HTTP2 can set a higher priority for more urgent requests, and the server can process those requests first when it receives them.

 Flow control


Since the bandwidth of a TCP connection (determined by the network path from client to server) is fixed, when multiple requests run concurrently, one request taking more of it means the others get less. Flow control allows the traffic of each stream to be controlled precisely.

 Server push


A powerful new feature added to HTTP2 is the ability for a server to send multiple responses to a single client request. In other words, in addition to the response to the initial request, the server can additionally push resources to the client without the client having to explicitly request them.


For example, when a browser requests a website, in addition to returning an HTML page, the server can push resources in advance based on the URLs of the resources in the HTML page.
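
As a sketch, with Node's http2 module this looks roughly like the following (a continuation of the hypothetical server above, not code from this article):

server.on('stream', (stream, headers) => {
  if (headers[':path'] === '/') {
    // push style.css before the browser discovers it in the HTML
    stream.pushStream({ ':path': '/style.css' }, (err, pushStream) => {
      if (err) return
      pushStream.respond({ ':status': 200, 'content-type': 'text/css' })
      pushStream.end('body { margin: 0 }')
    })
    stream.respond({ ':status': 200, 'content-type': 'text/html' })
    stream.end('<link rel="stylesheet" href="style.css"><p>hello</p>')
  }
})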


Nowadays, many websites have started to use HTTP2, such as Zhihu. In the Protocol column of Chrome's Network panel, h2 indicates the HTTP2 protocol and http/1.1 indicates the HTTP1.1 protocol.

 3. Using server-side rendering


Client-side rendering: the browser fetches the HTML file, downloads the JavaScript as needed, runs it to generate the DOM, and then renders.


Server-side rendering: the server returns a fully rendered HTML file, and the client simply parses and displays it.

  •  Pros: fast first screen rendering, good SEO.
  •  Cons: Configuration is troublesome and increases the computational pressure on the server.


I’ll use Vue SSR as an example to briefly describe the SSR process.

 Client-side rendering process

  1.  Visit a client-rendered website.

  2. The server returns an HTML file containing the resource references (script and link tags) and an empty <div id="app"></div> .

  3. The client requests the resources over HTTP, and once the necessary resources are loaded, new Vue() is executed to instantiate and render the page.

 Server-side rendering process

  1.  Access the server-rendered website.

  2. The server determines which resource files the current route's component needs and fills the HTML with their content. If there are ajax requests, it executes them to prefetch the data, fills the HTML with it, and then returns the rendered HTML page.

  3. When the client receives this HTML page, it can start rendering it right away. Meanwhile the page loads its resources, and once all the necessary resources are loaded, new Vue() is executed to instantiate and take over the page.


As you can see from the two processes above, the difference is in the second step. A client-rendered site will return the HTML file directly, while a server-rendered site will render the page and then return this HTML file.


What are the benefits of this? It’s faster time-to-content.


Let’s say your website needs to load four files, a, b, c, and d, before it can render, and each file is 1 MB.


Do the math: the client-rendered site has to load the four files plus the HTML file to finish rendering the home page, about 4 MB in total (ignoring the HTML file's size). The server-rendered site only has to load a single rendered HTML file, which is usually no more than a few hundred KB (the HTML file of my personal blog, which uses SSR, is 400 KB). This is why server-side rendering is faster.

 4. Use of CDN for static resources


A Content Delivery Network (CDN) is a set of web servers that are distributed in several different geographic locations. We all know that latency is higher when servers are farther away from the user, and CDNs are designed to solve this problem by deploying servers in multiple locations to bring users closer to the servers, thereby reducing request times.

 CDN Principles


When a user visits a website without a CDN, the process looks like this:


  1. The browser has to resolve the domain name to an IP address, so it needs to make a request to the local DNS.

  2. The local DNS queries the root servers, the top-level domain name servers, and the authoritative name servers in turn to get the IP address of the web server.

  3. The local DNS sends the IP address back to the browser, which makes a request to the web server IP address and gets the resource.


If a user visits a site that has a CDN deployed, the process looks like this:


  1. The browser has to resolve the domain name to an IP address, so it needs to make a request to the local DNS.

  2. The local DNS queries the root servers, the top-level domain name servers, and the authoritative name servers in turn to get the IP address of the global server load balancing system (GSLB).

  3. The local DNS then sends a request to the GSLB. The main function of the GSLB is to determine the location of the user based on the IP address of the local DNS, filter out the local load balancing system (SLB) that is closer to the user, and return the IP address of the SLB to the local DNS as a result.

  4. The local DNS sends the IP address of the SLB back to the browser, which makes a request to the SLB.

  5. The SLB selects the optimal cache server based on the resource and address the browser requested, and returns its address to the browser.

  6. The browser then redirects to the cache server based on the address sent back by the SLB.

  7. If the caching server has a resource that the browser needs, it sends the resource back to the browser. If it doesn’t, it requests the resource from the source server, sends it back to the browser and caches it locally.


5. Place CSS files in the head of the document and JavaScript files at the bottom.


  • Loading CSS blocks rendering and blocks the execution of JS that follows it.

  • Loading and executing JS blocks HTML parsing and blocks CSSOM construction.


If these CSS and JS tags are placed in the head tag and take a long time to load and parse, the page stays blank. So JS files should be placed at the bottom (JS there still blocks rendering, but not the parsing of the DOM above it), so that the HTML is parsed before the JS loads and the page content is presented to the user as early as possible.

 So why do CSS files still go in the head?


Because if the HTML renders before the CSS loads, the first thing the user sees is an unstyled, “ugly” page. To avoid that, CSS files go in the head.


It is also fine to put JS files in the head, as long as you add the defer attribute to the script tag: the script then downloads asynchronously and its execution is deferred until the HTML is parsed.
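
For instance (app.js is a placeholder file name):

<script defer src="app.js"></script>

The script downloads in parallel with HTML parsing and executes only after parsing finishes, in document order.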


6. Use icon fonts (iconfont) instead of image icons


An icon font turns icons into a font, so you use them just like text: you can set properties such as font-size and color, which is very convenient. Icon fonts are also vector graphics, so they don't distort when scaled. Another advantage is that the generated file is very small.

 Compressing font files


Compress the font files using the fontmin-webpack plugin (thanks to frontend Xiaowei for this).
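
A minimal sketch of wiring the plugin into webpack; the autodetect option follows the plugin's documentation, so treat the exact options as an assumption:

const FontminPlugin = require('fontmin-webpack')

module.exports = {
  plugins: [
    // keep only the glyphs the bundle actually uses
    new FontminPlugin({ autodetect: true })
  ]
}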

 7. Make good use of caching so that the same resources are not loaded over and over again


To keep users from having to re-request files on every visit, we can control this behavior with Expires or max-age. Expires sets an absolute time before which the browser will use its cache instead of requesting the file; max-age expresses the same thing as a relative time, and it is recommended to use max-age instead of Expires.
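
As a sketch, assuming an Express server that serves built assets from a dist directory, a long max-age can be set like this:

const express = require('express')
const app = express()

// hashed static assets can be cached for up to a year;
// the browser will not re-request them until the URL changes
app.use(express.static('dist', { maxAge: '1y' }))

app.listen(3000)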


This creates a problem though, what happens when the file is updated? How do you notify the browser to re-request the file?


It is possible to get the browser to voluntarily abandon its cache and load new resources by updating the link addresses of the resources referenced in the page.


The specific approach is to tie the resource's URL to its content, so that the URL changes only when the file content changes. This gives precise cache control at the level of individual files. What can be tied to file content? A digest (hash) computed from the file: the digest corresponds one-to-one with the content, which gives us a basis for cache control at the granularity of a single file.

 8. Compress files


Compressing files reduces file download time and allows for a better user experience.


Thanks to webpack and node, it is now very easy to compress files.

 In webpack you can use the following plugins for compression:

  •  JavaScript: UglifyPlugin

  • CSS : MiniCssExtractPlugin
  •  HTML: HtmlWebpackPlugin


Actually, we can do better than that: gzip compression. The browser advertises support for it by including gzip in the Accept-Encoding request header; the server then has to respond with gzip-compressed content, so the server must support it as well.


gzip is the most popular and effective compression method. For example, the app.js file generated after building a project I developed with Vue was 1.4MB in size, and after gzip compression it was only 573KB, a reduction of almost 60%.


Attached is the webpack and node configuration for using gzip.

 Install the plugins

npm install compression-webpack-plugin --save-dev
npm install compression

 webpack configuration

const CompressionPlugin = require('compression-webpack-plugin');

module.exports = {
  plugins: [new CompressionPlugin()],
}

 node configuration

const compression = require('compression')

// compression() is Express middleware; app is assumed to be an Express app
app.use(compression())

 9. Image optimization

 (1). Lazy loading of images


Don't set the image's real path at first; only when the image enters the browser's viewport do you load the real image. This is lazy loading. For websites with a lot of images, loading them all at once badly hurts the user experience, so lazy loading is needed.


First, mark up the image like this, so it doesn't load while the page hasn't made it visible:

<img data-src="https://avatars0.githubusercontent.com/u/22117876?s=460&u=7bd8f32788df6988833da6bd155c3cfbebc68006&v=4">

 Then use JS to load the image when it becomes visible:

const img = document.querySelector('img')
img.src = img.dataset.src


This way the image is loaded only when needed; the complete code can be found in the reference.
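
As a fuller sketch, IntersectionObserver can decide when an image becomes visible (this is my assumption of an implementation, not necessarily the reference's code):

// swap data-src into src once the image enters the viewport
const observer = new IntersectionObserver((entries) => {
  entries.forEach((entry) => {
    if (entry.isIntersecting) {
      const image = entry.target
      image.src = image.dataset.src
      observer.unobserve(image) // each image only needs to load once
    }
  })
})

document.querySelectorAll('img[data-src]').forEach((image) => observer.observe(image))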

 (2). Responsive images


The advantage of responsive images is that browsers are able to automatically load the right image for the screen size.

 Implemented with <picture>

<picture>
	<source srcset="banner_w1000.jpg" media="(min-width: 801px)">
	<source srcset="banner_w800.jpg" media="(max-width: 800px)">
	<img src="banner_w800.jpg" alt="">
</picture>

 Implemented with @media

@media (min-width: 769px) {
	.bg {
		background-image: url(bg1080.jpg);
	}
}
@media (max-width: 768px) {
	.bg {
		background-image: url(bg768.jpg);
	}
}

 (3). Resize the image


For example, you have a 1920 * 1080 sized image that is shown to the user as a thumbnail and the full image is shown only when the user hovers over it. If the user never actually hovers over the thumbnail, time is wasted downloading the image.


So, we can optimize with two images: at first, only the thumbnail is loaded, and the large image is loaded only when the user hovers over it (a sketch follows). Another way to delay the large image is to set its src manually, triggering the download after all other elements have loaded.
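
A minimal sketch of the hover approach, assuming the full-size URL is kept in a hypothetical data-large attribute:

const thumb = document.querySelector('.thumb')

// load the large image only on the first hover
thumb.addEventListener('mouseenter', () => {
  thumb.src = thumb.dataset.large
}, { once: true })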

 (4). Reducing the quality of images


For example, with JPG images, the difference between 100% quality and 90% quality is usually not noticeable, especially when used as a background image. I often use PS to cut background images into JPG format and compress them to 60% quality, and you can’t tell the difference.


There are two methods of compression, either through the webpack plugin image-webpack-loader or through an online site.


Attached below is the usage of the webpack plugin image-webpack-loader .

npm i -D image-webpack-loader

 webpack configuration

{
  test: /\.(png|jpe?g|gif|svg)(\?.*)?$/,
  use:[
    {
    loader: 'url-loader',
    options: {
      limit: 10000, 
      name: utils.assetsPath('img/[name].[hash:7].[ext]')
      }
    },
 
    {
      loader: 'image-webpack-loader',
      options: {
        bypassOnDebug: true,
      }
    }
  ]
}


(5). Use CSS3 effects instead of images whenever possible


There are many images that can be drawn with CSS effects (gradients, shadows, etc.), and in those cases CSS3 is the better choice, because the code is usually a small fraction of the size of the image.


(6). Use WebP images


WebP's advantage lies in its superior image compression algorithm, which yields smaller files with image quality indistinguishable to the naked eye. It also supports both lossless and lossy compression, alpha transparency, and animation, and it converts from JPEG and PNG with excellent, stable, and consistent results.


10. Load code on demand via webpack, extract third-party library code, and reduce the redundant code generated by ES6-to-ES5 conversion.


Lazy loading, or loading on demand, is a great way to optimize a web page or application. The idea is to split your code at logical breakpoints, then load each new block only when operations in other blocks have completed or are about to need it. This speeds up the initial load of the application and reduces its overall weight, since some blocks may never be loaded at all.


Generate filenames from file content, and use dynamic import() of components to achieve on-demand loading.


This is done by including [contenthash] in the output filename option, which creates a unique hash from the file's contents. When the file's contents change, so does the [contenthash]:

output: {
    filename: '[name].[contenthash].js',
    chunkFilename: '[name].[contenthash].js',
    path: path.resolve(__dirname, '../dist'),
},
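
Combined with dynamic import(), each route component then becomes its own chunk and is fetched only when first needed. A sketch for a Vue router, where Detail.vue is a hypothetical component:

// webpack splits this component into a separate chunk named "detail"
const Detail = () => import(/* webpackChunkName: "detail" */ './views/Detail.vue')

const routes = [
  { path: '/detail', component: Detail }
]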

 Extracting third-party libraries


Third-party libraries are generally stable and don't change often, so it's better to extract them into their own chunk for long-term caching. This uses the cacheGroups option of webpack4's splitChunks:

optimization: {
    runtimeChunk: {
        name: 'manifest'
    },
    splitChunks: {
        cacheGroups: {
            vendor: {
                name: 'chunk-vendors',
                test: /[\\/]node_modules[\\/]/,
                priority: -10,
                chunks: 'initial'
            },
            common: {
                name: 'chunk-common',
                minChunks: 2,
                priority: -20,
                chunks: 'initial',
                reuseExistingChunk: true
            }
        },
    }
},

  • test: controls which modules this cache group matches. If omitted, all modules are selected. Accepts a RegExp, String, or Function.

  • priority: the extraction weight; the higher the number, the higher the priority. Since a module may match several cacheGroups, the group with the highest weight decides which one extracts it.

  • reuseExistingChunk: whether to reuse an existing chunk. If true and the current chunk contains modules that have already been extracted, no new chunk is generated.

  • minChunks (default 1): the minimum number of chunks that must share a module before it is split out.

  • chunks (default async): which chunks to consider, one of initial, async, or all.

  • name: the name of the split chunk; a string or a function (a function allows custom names based on conditions).


Reducing Redundant Code for ES6 to ES5 Conversions


Code transformed by Babel needs some helper functions to achieve the same functionality as the original. For example:

class Person {}

 is converted to:

"use strict";

function _classCallCheck(instance, Constructor) {
  if (!(instance instanceof Constructor)) {
    throw new TypeError("Cannot call a class as a function");
  }
}

var Person = function Person() {
  _classCallCheck(this, Person);
};


Here _classCallCheck is a helper function, and if classes are declared in many files, the same helper function is duplicated in each of them.


The @babel/runtime package declares all the helper functions that are needed, and @babel/plugin-transform-runtime rewrites every file that needs a helper to import it from @babel/runtime:

"use strict";

var _classCallCheck2 = require("@babel/runtime/helpers/classCallCheck");

var _classCallCheck3 = _interopRequireDefault(_classCallCheck2);

function _interopRequireDefault(obj) {
  return obj && obj.__esModule ? obj : { default: obj };
}

var Person = function Person() {
  (0, _classCallCheck3.default)(this, Person);
};


Here the helper classCallCheck is no longer compiled into the file; instead, helpers/classCallCheck from @babel/runtime is referenced directly.

 

npm i -D @babel/plugin-transform-runtime @babel/runtime

 Usage in the .babelrc file:

"plugins": [
        "@babel/plugin-transform-runtime"
]

 11. Reduce reflows and repaints

 browser rendering process

  1.  Parsing HTML to generate a DOM tree.
  2.  Parsing CSS to generate a CSSOM rule tree.

  3. Parsing JS, manipulating the DOM tree and CSSOM rule tree.

  4. Merge the DOM tree with the CSSOM rule tree to generate the rendering tree.

  5. Traverse the render tree to lay it out and compute each node's position and size.

  6. The browser sends the data of all the layers to the GPU, which composites the layers and displays them on the screen.

 


Changing the position or size of a DOM element causes the browser to regenerate the render tree; this process is called reflow (rearrangement).

 


Once the render tree has been regenerated, each of its nodes must be drawn to the screen again; this process is called repaint (redrawing). Not every change triggers a reflow; changing the font color, for example, only triggers a repaint. Remember: reflow causes repaint, but repaint does not cause reflow.


Both reflow and repaint are very expensive, because the JavaScript engine thread and the GUI rendering thread are mutually exclusive: only one of them can run at a time.

 Which operations cause a reflow?

  •  Adding or removing visible DOM elements
  •  Changing an element's position
  •  Changing an element's size
  •  Changing content
  •  Resizing the browser window

 How can I minimize reflows and repaints?


  • When modifying styles from JavaScript, don't write inline style properties one by one; switch a class instead.

  • If you want to perform a series of operations on a DOM element, take it out of the document flow and bring it back once the modifications are complete. Hiding the element (display: none) or using a document fragment (DocumentFragment) both work well here; see the sketch below.
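
A minimal sketch of the document-fragment approach: the appends below touch only the fragment, so the document reflows once instead of a hundred times.

const fragment = document.createDocumentFragment()

for (let i = 0; i < 100; i++) {
  const li = document.createElement('li')
  li.textContent = 'item ' + i
  fragment.appendChild(li) // no reflow: the fragment is not in the document
}

// a single insertion, a single reflow
document.querySelector('ul').appendChild(fragment)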

 12. Use of event delegates


Event delegation takes advantage of event bubbling: by registering just one event handler, you can manage all events of a given type. Most mouse and keyboard events bubble and are suitable for event delegation, and using it saves memory.

 

// good
document.querySelector('ul').onclick = (event) => {
  const target = event.target
  if (target.nodeName === 'LI') {
    console.log(target.innerHTML)
  }
}

// bad
document.querySelectorAll('li').forEach((e) => {
  e.onclick = function() {
    console.log(this.innerHTML)
  }
}) 

 13. Pay attention to program locality


A well-written computer program often exhibits good locality: it tends to reference data items near recently referenced items, or the recently referenced items themselves. This tendency is known as the principle of locality. Programs with good locality run faster than programs with poor locality.

 Locality usually takes two different forms:


  • Temporal locality: in a program with good temporal locality, a memory location that has been referenced once is likely to be referenced again many times in the near future.

  • Spatial locality: in a program with good spatial locality, once a memory location has been referenced, the program is likely to reference a nearby memory location in the near future.

 Example of temporal locality

function sum(arry) {
	let i, sum = 0
	let len = arry.length

	for (i = 0; i < len; i++) {
		sum += arry[i]
	}

	return sum
}


In this example, the variable sum is referenced in every iteration of the loop, so it has good temporal locality.

 Example of spatial locality

 A program with good spatial locality

 
function sum1(arry, rows, cols) {
	let i, j, sum = 0

	for (i = 0; i < rows; i++) {
		for (j = 0; j < cols; j++) {
			sum += arry[i][j]
		}
	}
	return sum
}

 A program with poor spatial locality

 
function sum2(arry, rows, cols) {
	let i, j, sum = 0

	for (j = 0; j < cols; j++) {
		for (i = 0; i < rows; i++) {
			sum += arry[i][j]
		}
	}
	return sum
}


In the two spatial locality examples above, accessing the array elements sequentially, row by row, as the first example does, is called a stride-1 reference pattern. Accessing every k-th element of the array is called a stride-k reference pattern. In general, spatial locality degrades as the stride increases.


What is the difference between these two examples? The difference is that the first example scans the array by rows, scanning each row and then scanning the next row; the second example scans the array by columns, scanning an element in one row and then immediately scanning the same column element in the next row.


Arrays are stored in memory in row-major order, so the example that scans row by row gets a stride-1 reference pattern with good spatial locality, while the other example has a stride equal to the row length and very poor spatial locality.

 Performance test

 Running environment:

  • cpu: i5-7400

  • Browser: chrome 70.0.3538.110


The two spatial locality examples above were each run ten times on a two-dimensional array of length 9000 (each subarray also of length 9000); the average times, in milliseconds, were:

Stride 1: 124 ms
Stride 9000: 2316 ms


From these test results, the stride-1 version runs an order of magnitude faster than the stride-9000 version.

  •  Programs that repeatedly reference the same variables have good temporal locality.

  • For a stride-k reference pattern, the smaller the stride, the better the spatial locality; a program that jumps around memory in large strides has poor spatial locality.


14. if-else vs. switch


As the number of conditions grows, switch becomes preferable to if-else.

if (color == 'blue') {

} else if (color == 'yellow') {

} else if (color == 'white') {

} else if (color == 'black') {

} else if (color == 'green') {

} else if (color == 'orange') {

} else if (color == 'pink') {

}

switch (color) {
    case 'blue':

        break
    case 'yellow':

        break
    case 'white':

        break
    case 'black':

        break
    case 'green':

        break
    case 'orange':

        break
    case 'pink':

        break
}


In cases like the one above, switch is better in terms of readability (the switch statement in JS is not implemented with a hash table but with sequential comparisons, so if-else and switch perform about the same).

 15. Look-up tables


When there are a lot of conditional statements, using switches and if-else is not the best choice, so try a lookup table. Lookup tables can be constructed using arrays and objects.

switch (index) {
    case '0':
        return result0
    case '1':
        return result1
    case '2':
        return result2
    case '3':
        return result3
    case '4':
        return result4
    case '5':
        return result5
    case '6':
        return result6
    case '7':
        return result7
    case '8':
        return result8
    case '9':
        return result9
    case '10':
        return result10
    case '11':
        return result11
}

 You can convert this switch statement into a lookup table

const results = [result0,result1,result2,result3,result4,result5,result6,result7,result8,result9,result10,result11]

return results[index]


If the conditional statement is not a numeric value but a string, you can build a lookup table with objects

const map = {
  red: result0,
  green: result1,
}

return map[color]

 16. Avoid page jank

 60 fps and the device refresh rate


Most devices today refresh their screens 60 times per second. So if the page has an animation or transition, or the user is scrolling, the browser needs to render each frame at the same rate as the device's refresh rate. Each frame's budget is just over 16 milliseconds (1 second / 60 = 16.66 ms). In reality, however, the browser has housekeeping work to do, so all your work needs to finish within about 10 milliseconds. If you exceed this budget, the frame rate drops and content judders on screen. This is commonly called jank, and it hurts the user experience.


Say you modify the DOM with JavaScript, triggering a style change that goes through reflow and repaint before finally being drawn to the screen. If any of these steps takes too long, the frame takes too long to render and the average frame rate drops. If a frame takes 50 ms, the frame rate is 1 s / 50 ms = 20 fps, and the page looks janky.


For long-running JavaScript, we can use a timer to split the work into chunks and defer execution.

for (let i = 0, len = arry.length; i < len; i++) {
	process(arry[i])
}


If the loop above runs too long, because process() is expensive or the array has too many elements (or both), try splitting it up:

const todo = arry.concat()
setTimeout(function processChunk() {
	// arguments.callee is deprecated, so use a named function expression
	process(todo.shift())
	if (todo.length) {
		setTimeout(processChunk, 25)
	} else {
		callback(arry)
	}
}, 25)


If you’re interested in learning more, check out High Performance JavaScript, Chapter 6 and Efficient Front End: Efficient Programming and Optimization Practices for the Web, Chapter 3.


17. Use requestAnimationFrame to implement visual changes.


As we know from point 16, most devices have a screen refresh rate of 60 times per second, which means that the average time per frame is 16.66 milliseconds. When animating in JavaScript, the best case scenario is that the code is executed at the beginning of the frame every time. The only way to ensure that JavaScript runs at the beginning of the frame is to use requestAnimationFrame .

/**
 * If run as a requestAnimationFrame callback, this
 * will be run at the start of the frame.
 */
function updateScreen(time) {
  // Make visual updates here.
}

requestAnimationFrame(updateScreen);


If you use setTimeout or setInterval for animation instead, the callback runs at some arbitrary point in the frame, possibly right at the end, which often drops frames and causes jank.

18.  Web Workers


A Web Worker runs in a separate worker thread, independent of the main thread, so it can perform tasks without interfering with the user interface. A worker can send messages to the JavaScript code that created it by posting them to the event handler that code specifies (and vice versa).


Web Workers are intended for long-running scripts that deal with pure data or have nothing to do with the browser UI.


Creating a worker is as simple as passing the URI of the script to run in the worker thread (main.js):

var myWorker = new Worker('worker.js');

first.onchange = function() {
  myWorker.postMessage([first.value, second.value]);
  console.log('Message posted to worker');
}

second.onchange = function() {
  myWorker.postMessage([first.value,second.value]);
  console.log('Message posted to worker');
}


After receiving the message in the worker, we can write an event handler code as a response (worker.js):

onmessage = function(e) {
  console.log('Message received from main script');
  var workerResult = 'Result: ' + (e.data[0] * e.data[1]);
  console.log('Posting message back to main script');
  postMessage(workerResult);
}


The onmessage handler runs as soon as a message is received, and the message content is available as the event's data property. Here we simply multiply the two numbers and use postMessage() again to send the result back to the main thread.


Back in the main thread, we use onmessage again in response to the message returned by the worker:

myWorker.onmessage = function(e) {
  result.textContent = e.data;
  console.log('Message received from worker');
}


Here we get the data of the message event and set it to the textContent of the result, so the user can see the result of the operation directly.


Within the worker, however, you can't manipulate DOM nodes directly, nor can you use the default methods and properties of the window object. You can still use many things normally found under window, including WebSockets and data storage mechanisms such as IndexedDB and the Firefox OS-specific Data Store API.

 19. Use bitwise operations


Numbers in JavaScript are stored in 64-bit format using the IEEE-754 standard. For bitwise operations, however, numbers are converted to a signed 32-bit format. Even with this conversion, bitwise operations are much faster than other math and Boolean operations.

Since the lowest bit of an even number is 0 and an odd number is 1, modulo operations can be replaced by bitwise operations.

// parity check with the modulo operator
if (value % 2) {
	// odd
} else {
	// even
}

// the same check with bitwise AND: the lowest bit of an odd number is 1
if (value & 1) {
	// odd
} else {
	// even
}

// double NOT (~~) truncates a value to an integer
~~10.12 // 10
~~10 // 10
~~'1.5' // 1
~~undefined // 0
~~null // 0

// bit flags: give each option its own bit
const a = 1
const b = 2
const c = 4
const options = a | b | c


Having defined the options this way, you can use bitwise AND to check whether a, b, or c is included in options.

 
if (b & options) {
	...
}

 20. Do not override native methods


No matter how optimized your JavaScript code is, it’s no match for native methods. This is because native methods are written in a low-level language (C/C++) and are compiled into machine code that becomes part of the browser. Try to use native methods when they are available, especially for math operations and DOM manipulation.


21. Reduce the complexity of CSS selectors


(1). Browsers read selectors following the principle of matching from right to left.

 Look at an example.

#block .text p {
	color: red;
}

  1.  Find all p elements.

  2. Check whether each element from result 1 has an ancestor with class text.

  3. Check whether each element from result 2 has an ancestor with id block.

 (2). CSS selector priority (specificity)

 From these two pieces of information, we can draw some conclusions:

  1.  The shorter the selector, the better.

  2. Try to use high-priority selectors, such as ID and class selectors.
  3.  Avoid using the wildcard *.


As a final note, from what I've found, there is usually no need to optimize CSS selectors, because the performance difference between the slowest and fastest selectors is very small.


22. Use flexbox instead of older layout models


In earlier CSS layout methods we could position elements absolutely, relatively, or with floats. Now we have a new layout model, flexbox, which has an advantage over the earlier methods: better performance.


The following screenshot shows the layout overhead of using floats on 1300 boxes:

 Then we reproduce the example with flexbox:


Now, for the same number of elements and the same visual appearance, the layout takes much less time (in this case 3.5 ms with flexbox versus 14 ms with floats).


However, flexbox compatibility is still a bit of a problem and not all browsers support it, so use it with caution.

 Compatibility across browsers:

  • Chrome 29+
  • Firefox 28+
  • Internet Explorer 11
  • Opera 17+
  • Safari 6.1+ (prefixed with -webkit-)
  • Android 4.4+
  • iOS 7.1+ (prefixed with -webkit-)


23. Animating with transform and opacity property changes


In CSS, changes to the transform and opacity properties trigger neither reflow nor repaint; they can be handled on their own by the compositor.
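
A minimal sketch with the Web Animations API, animating only transform and opacity so the work stays on the compositor (the element and values are arbitrary):

document.querySelector('.box').animate(
  [
    { transform: 'translateX(0)', opacity: 1 },
    { transform: 'translateX(100px)', opacity: 0.5 }
  ],
  { duration: 300, fill: 'forwards' }
)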

 24. Rational use of rules to avoid over-optimization

 Performance optimization falls into two main categories:

  1.  Load-time optimization
  2.  run-time optimization


Of the 23 recommendations above, the first 10 concern load-time optimization and the last 13 concern run-time optimization. It is usually unnecessary to apply all 23 rules; it is best to adjust according to the website's user base, to save energy and time.


Before solving a problem, you have to find out the problem, otherwise there is no way to start. So before doing performance optimization, it is better to investigate the loading performance and running performance of the website.

 Check loading performance

 How well a site loads depends mainly on the white screen time and the first screen time.


  • White screen time: the time from when the URL is entered, to when the page starts displaying content.
  •  First screen time: the time from typing the URL, to the page being fully rendered.


Place the following script in front of </head> to get the white screen time.

<script>
  // white screen time: from navigation start to when this inline script runs
  new Date() - performance.timing.navigationStart

  // alternatively, from navigation start to when the browser starts parsing
  performance.timing.domLoading - performance.timing.navigationStart
</script>


Execute new Date() - performance.timing.navigationStart in the window.onload event to get the first screen time.
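
Putting it together, a sketch of the first screen measurement (the logging is my addition):

window.onload = () => {
  // first screen time: navigation start until the page is fully rendered
  const firstScreen = new Date() - performance.timing.navigationStart
  console.log('first screen time: ' + firstScreen + ' ms')
}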

 Checking runtime performance


Together with Chrome's developer tools, we can see how a website performs at runtime.

