Some time ago a friend told me about a problem he ran into at his company: for some reason the back end had not implemented pagination, so it returned 20,000 records in a single response, which the front end then had to display in a select component. I immediately understood his trouble: rendering all 20,000 records into the select in a hard-coded way would certainly freeze the page. He then added that search also had to be supported, again on the front end. That got me interested, and at the time I thought of the following solutions.

  1. Lazy loading + paging (the front end maintains the data slicing and pages it in lazily)

  2. Virtual scrolling (React's antd 4.0 already supports virtual scrolling for long select lists)

Lazy loading and paging are a common way to optimize long lists, similar to a table's pagination. The idea is to load only the data the user can currently see, and to load the next page when the user scrolls to the bottom.

Virtual scrolling is another way to optimize long lists. The core idea is to render only as many items as fit in the visible area; as the user scrolls, elements are swapped in dynamically, and a top padding props up the full scrollable height. The implementation idea is also quite simple.
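The window arithmetic behind that idea can be sketched in a few lines of plain JavaScript (a minimal sketch assuming a fixed row height; `getVisibleRange`, `itemHeight`, and the `buffer` size are illustrative names, not tied to any particular library):

```javascript
// Compute which slice of a long list should actually be rendered for a given
// scroll offset. Only the visible items (plus a small buffer) exist in the DOM;
// paddingTop pushes the rendered slice down so the scrollbar length stays correct.
function getVisibleRange(scrollTop, viewportHeight, itemHeight, total, buffer = 3) {
  const start = Math.max(0, Math.floor(scrollTop / itemHeight) - buffer)
  const end = Math.min(total, Math.ceil((scrollTop + viewportHeight) / itemHeight) + buffer)
  return {
    start,                              // index of the first rendered item
    end,                                // index one past the last rendered item
    paddingTop: start * itemHeight,     // offset applied above the rendered slice
    totalHeight: total * itemHeight     // full scrollable height of the list
  }
}
```

On every scroll event the component re-renders only `list.slice(start, end)`, so the DOM node count stays constant no matter how long the list is.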

The analysis above would already solve my friend's problem, but front-end engineers should pursue more than "it works". So the author organized his thoughts and, based on the first solution, abstracted a practical problem:

 How do we render a big data list and support searching it?

I will explore the value of this problem by simulating how front-end engineers at different levels would implement it, and I hope it inspires you to think more deeply.

The author will analyze the problem above from the technical perspective of programmers with different levels of experience. Let the show begin.

Before writing any front-end code, let's do some basic preparation. First I use Node.js to build a data server that provides the basic data API; the core code is as follows.

const Koa = require('koa')
const app = new Koa()

app.use(async (ctx, next) => {
  if(ctx.url === '/api/getMock') {
    // generate a random lowercase string of length n
    function genrateRandomWords(n) {
      let words = 'abcdefghijklmnopqrstuvwxyz',
          len = words.length,
          ret = ''
      for(let i = 0; i < n; i++) {
        ret += words[Math.floor(Math.random() * len)]
      }
      return ret
    }

    let list = []
    for(let i = 0; i < 100000; i++) {
      list.push({
        name: `xu_0${i}`,
        title: genrateRandomWords(12),
        text: `${i}~~`,
        tid: `xx_${i}`
      })
    }

    ctx.body = {
      state: 200,
      data: list
    }
  }
  await next()
})

app.listen(3000)

I used koa to implement the basic mock data server above, so that we can simulate a real back-end environment for our front-end development (of course, you could also generate the 100,000 records directly on the front end). The genrateRandomWords method generates a random string of a specified length; there are many mock-data techniques like this, and interested readers can look into them. The front-end code that follows is written in React throughout (the approach is the same in Vue).

 The junior engineer's solution

A hard-coded solution: request the data from the back end directly and render it to the page, along the following lines.

  The code might look like this.

  1. Request the back-end data.
fetch(`${SERVER_URL}/api/getMock`).then(res => res.json()).then(res => {
  if(res.state) {
    data = res.data
  }
})
  2. Render the page.
{, i) => {
      return <div className={styles.item} key={item.tid}>
        <div className={styles.tit}>{item.title} <span className={styles.label}>{}</span></div>
      </div>
})}
  3. Search the data.
const handleSearch = (v) => {
    let searchData = data.filter((item, i) => {
      return item.title.indexOf(v) > -1
    })
    setList(searchData)
}

This essentially fulfils the basic requirement, but it has an obvious drawback: all the data is rendered to the page at once. With a large amount of data the page's performance drops significantly and the page lags.

 The mid-level engineer's solution

As a front-end engineer with some experience, you certainly know something about page performance, so you will be familiar with debounce and throttle functions and will have used solutions such as lazy loading and paging. Let's look at the mid-level engineer's solution.

After this round of optimization the code is basically usable. The specific implementation is introduced below.

  1. Lazy loading + paging. Lazy loading is implemented mainly by listening to window scrolling: when a placeholder element becomes visible, the next batch of data is loaded. The principle is as follows.

    Here we listen to the window's scroll event and use getBoundingClientRect on the poll (placeholder) element to get its distance from the visible viewport.

One problem to watch out for while scrolling: when the user scrolls back up, nothing needs to be done, so we add a one-way lock. The specific code is as follows.

function scrollAndLoading() {
    if(window.scrollY > prevY) {  // one-way lock: only react when scrolling down
      prevY = window.scrollY
      if(poll.current.getBoundingClientRect().top <= window.innerHeight) {
        // the placeholder has entered the viewport: load the next page
        curPage++
        setList(data.slice(0, pageSize * curPage))
      }
    }
}

useEffect(() => {
    // something code
    const getData = debounce(scrollAndLoading, 300)
    window.addEventListener('scroll', getData, false)
    return () => {
      window.removeEventListener('scroll', getData, false)
    }
  }, [])

Here prevY stores the window's scroll offset from the previous scroll event, and it is only updated when the window scrolls down, i.e. when the new scroll offset is greater than the previous one.

As for the paging logic, implementing paging in native JavaScript is also very simple. We define a few variables:

  •  curPage: the current page number
  •  pageSize: the number of items per page
  •  data: the full data set

With these in place, the basic paging function can be completed. The core front-end paging code is as follows.

let data = [];
let curPage = 1;
let pageSize = 16;
let prevY = 0;

// other code...

function scrollAndLoading() {
    if(window.scrollY > prevY) {  
      prevY = window.scrollY
      if(poll.current.getBoundingClientRect().top <= window.innerHeight) {
        curPage++
        setList(searchData.slice(0, pageSize * curPage))
      }
    }
}

  2. Debounce implementation. Since the debounce function is relatively simple, here is its code directly.
function debounce(fn, time) {
    return function(args) {
      let that = this
      clearTimeout(fn.tid)  // reset the timer so only the last call fires
      fn.tid = setTimeout(() => {, args)
      }, time)
    }
}
  3. Search implementation. The code for the search function is as follows.
const handleSearch = (v) => {
     curPage = 1;
     prevY = 0;
     searchData = data.filter((item, i) => {
       let reg = new RegExp(v, 'gi')
       return reg.test(item.title)
     })
     setList(searchData.slice(0, pageSize * curPage))
}

Search needs to work together with paging, so in order not to affect the source data we store the filtered results in the temporary array searchData.

  (Screenshot: the list after searching.)

Lazy loading stays in effect both before and after searching, so there is no need to worry about performance bottlenecks caused by the large amount of data.

 The senior engineer's solution

As battle-hardened programmers, we should consider a more elegant implementation and issues such as componentization, algorithmic optimization, and multi-threading. For our big-data rendering problem, a virtual list solves the requirement in a more elegant and concise way. Since the author already pointed out virtual lists at the beginning, they will not be described in detail here. But for an even larger data set, say 1 million records (although real development rarely encounters such a mindless scenario), how do we deal with it?

The first option is to use JavaScript time slicing to cut the 1 million records into chunks. The idea, in code, is as follows.

function multistep(steps, args, callback){
    var tasks = steps.concat();   // clone the task queue

    setTimeout(function run(){
        // take the next task off the queue and run it
        var task = tasks.shift();
        task.apply(null, args || []);
        if(tasks.length > 0){
            // more tasks remain: yield to the browser, then continue
            setTimeout(run, 25);
        } else {
            callback();
        }
    }, 25);
}
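The remaining question is how to turn a big array into tasks for multistep. One possible way (a sketch; `chunkTasks` and `handleChunk` are names made up for illustration, and the chunk size is arbitrary):

```javascript
// Split a large array into fixed-size chunks; each chunk becomes one task
// function, so the work is spread over multiple timer ticks instead of
// blocking the main thread in a single pass.
function chunkTasks(data, chunkSize, handleChunk) {
  const tasks = []
  for (let i = 0; i < data.length; i += chunkSize) {
    const chunk = data.slice(i, i + chunkSize)
    tasks.push(() => handleChunk(chunk))   // defer the work until the task runs
  }
  return tasks
}
```

The resulting array can then be passed straight to multistep, e.g. `multistep(chunkTasks(bigData, 10000, renderChunk), [], onDone)`.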

We can also use a Web Worker to move computation-heavy front-end logic, such as fuzzy search over the full data set, into a background thread. This keeps the main JS thread responsive: the worker computes in the background and, when it finishes, notifies the main thread through the worker's messaging mechanism. The search algorithm itself can also be optimized further, for example with binary search. These are all issues a senior engineer should consider, but it is important to distinguish between scenarios and find the most cost-effective solution.
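To make the search-algorithm point concrete: if matching is restricted to prefixes, the titles can be sorted once up front and then located with binary search instead of scanning everything with filter (a sketch under that assumption; `lowerBound` and `prefixSearch` are illustrative names):

```javascript
// Find the index of the first element >= prefix in a sorted string array.
// O(log n) instead of the O(n) full scan that filter performs.
function lowerBound(sorted, prefix) {
  let lo = 0, hi = sorted.length
  while (lo < hi) {
    const mid = (lo + hi) >> 1
    if (sorted[mid] < prefix) lo = mid + 1
    else hi = mid
  }
  return lo
}

// Collect all entries that start with the prefix; because the array is
// sorted, the matches form one contiguous run starting at lowerBound.
function prefixSearch(sorted, prefix) {
  const out = []
  for (let i = lowerBound(sorted, prefix); i < sorted.length && sorted[i].startsWith(prefix); i++) {
    out.push(sorted[i])
  }
  return out
}
```

The one-time sort costs O(n log n), after which every keystroke in the search box pays only O(log n + m) for m matches, which matters at the 100,000-record scale used here.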

By hbb
