How to increase your frontend app’s performance – 5 coding tips


Tomasz Świstak


In many of our frontend projects, at some point we’ve encountered the issue of the app’s performance decreasing. Things like this happen – it’s normal when the complexity of a solution increases. But still, it’s something that we developers need to take care of. In this article I’d like to show you 5 tips to help optimize your app (things I’ve done in my own projects). Some of them may seem obvious, some of them are programming basics, but I think it’s always good to refresh our memory. Each tip is backed by benchmarks that you can run on your own to check the performance yourself.


Remember one important thing: when your code doesn’t need optimization, don’t mess with it. You should always write code that is fast, but it should also be readable for other developers, because there is almost always an even faster way to achieve something. Donald Knuth wrote in “Computer Programming as an Art” the most important thing you should know about optimizing code: “The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming.”


1. Search in objects and maps instead of arrays

When we’re working with data, we often run into situations like “find an object, do something with it, find another object…”. Since the most common data structure in JavaScript is the array, it’s natural for our data to be stored in one. However, each time we want to find something in an array we use methods like find, indexOf or filter, or we iterate with a loop – so we go from the start of the structure toward the end. That’s a linear search, with O(n) complexity: in the worst case we need to perform as many comparisons as there are elements in the array. This may not be noticeable when we do it a few times on small arrays, but it will absolutely affect performance when there are many elements and many lookups.

In such a scenario, it may be a good idea to convert the array to an object or a Map and perform lookups by key. In these data structures we can access an element with O(1) complexity – roughly a single memory access, no matter the size – because both are backed by a hash table.
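As a minimal sketch (the user list and its shape are illustrative assumptions, not data from the benchmark), converting an array to a Map turns each repeated linear search into a constant-time lookup:

```javascript
// Illustrative data – in a real app this would come from an API, a store, etc.
const users = [
  { id: 1, name: "Alice" },
  { id: 2, name: "Bob" },
  { id: 3, name: "Carol" },
];

// Linear search: O(n) per lookup.
const findByIdLinear = (id) => users.find((u) => u.id === id);

// One-time O(n) conversion, then O(1) lookups by key.
const usersById = new Map(users.map((u) => [u.id, u]));
const findByIdMap = (id) => usersById.get(id);
```

The conversion itself costs one pass over the array, which pays off as soon as you perform more than a handful of lookups.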

You can run the benchmark on your own. Below you can see my results:

The difference is significant – for the map and the object I obtained millions of operations per second, while for the array the best result was slightly above 100 operations. Of course, this doesn’t take the data conversion into account, but even with the conversion it will still be a lot faster.


2. Check for null instead of catching exceptions

Sometimes people find it easier to skip null checking and just catch the exception when something doesn’t exist. Of course, that’s bad practice and no one should do it, so if you have sections like this in your code, just replace them. Naturally, you may want to see the benchmarks to be 100% sure. In my benchmark I compared three approaches: a try-catch statement, an if condition, and short-circuit evaluation.
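Here is a small sketch of the three approaches (the user objects and property names are made-up examples):

```javascript
// Illustrative objects: `address` may legitimately be null.
const incomplete = { name: "Alice", address: null };
const complete = { name: "Bob", address: { city: "Oslo" } };

// 1. try-catch: works, but relies on an exception for normal control flow.
function cityWithTryCatch(u) {
  try {
    return u.address.city;
  } catch (e) {
    return undefined;
  }
}

// 2. Explicit if condition.
function cityWithIf(u) {
  if (u && u.address) {
    return u.address.city;
  }
  return undefined;
}

// 3. Short-circuit evaluation: evaluation stops at the first falsy operand.
const cityWithShortCircuit = (u) => (u && u.address && u.address.city) || undefined;
```

In modern JavaScript the optional chaining operator (`u?.address?.city`) expresses the same check even more concisely.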

Here’s the benchmark. Below you can see my results:

I think it goes without saying that it’s always better to perform the null check. Also, as you can see, there’s almost no difference between the if condition and short-circuit evaluation, so just use whichever suits you better.


3. Minimize chained array functions

This is another obvious but potentially controversial notion. Since arrays have all those great functions like map, filter and reduce, it’s very tempting to use them – they usually make code cleaner and more readable for other developers. But when performance goes down, we may want to flatten the chain. I decided to create two cases: filter then map, and filter then reduce. I tested each of them with a function chain, forEach and a traditional for loop. For the second case I also tried doing the filtering inside the reduce. As you will see, the benefits are not always significant.
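The two cases can be sketched like this (the data and the even-number transformation are illustrative assumptions, not the benchmark’s exact code):

```javascript
const numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];

// Case 1, chained: readable, but iterates the data twice
// and allocates an intermediate array.
const chained = numbers.filter((n) => n % 2 === 0).map((n) => n * 10);

// Case 1, flattened into a single traditional for loop:
// one pass, no intermediate array, but an explicit push.
const flattened = [];
for (let i = 0; i < numbers.length; i++) {
  if (numbers[i] % 2 === 0) {
    flattened.push(numbers[i] * 10);
  }
}

// Case 2: "filter then reduce" collapsed into a single reduce
// that filters inside the accumulator function.
const sumOfEvens = numbers.reduce((acc, n) => (n % 2 === 0 ? acc + n : acc), 0);
```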

Here’s the benchmark for filter then map, and my results:

Of course, a single loop is faster, but the difference is not that big. That’s because the single-loop version still has to push each element explicitly, which map does for us internally. So in this case you may want to consider whether flattening is really necessary.

So, let’s now check filter with reduce. Here are my results:

Here the difference is more significant – flattening the two functions into one gave nearly 100% faster execution. Switching to a traditional for loop, however, gives an even bigger speed boost.


4. Prefer traditional loops over functional ones

This one may also be a bit controversial, because developers love functional loops – they look nice and make for a pleasant workflow. However, using them is not as efficient as using traditional loops. You could already see the difference for loops made in the previous benchmarks, but let’s look at a dedicated one. Along with the built-in approaches, I’ve also tested forEach from Lodash and each from jQuery. Here are the results:

Once again, the most basic for loop is much faster than any other approach – but it only works for arrays. For other iterables we should use forEach, for…of, or an iterator directly. As for for…in – better not to use it unless there’s absolutely no other way. Remember that for…in iterates over all enumerable properties of an object (and in an array, the properties are the indexes), which can lead to unpredictable results. Surprisingly, the methods from Lodash and jQuery aren’t bad at all from a performance point of view, so in some cases you don’t need to worry about using them instead of the built-in forEach (in this benchmark, the Lodash one even performed better than the built-in).
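The loop variants above, plus the for…in pitfall, can be sketched as follows (the array contents are an illustrative assumption):

```javascript
const arr = [10, 20, 30];

// Traditional for loop – the fastest option for arrays.
let sumFor = 0;
for (let i = 0; i < arr.length; i++) {
  sumFor += arr[i];
}

// for...of – works on any iterable, not just arrays.
let sumForOf = 0;
for (const value of arr) {
  sumForOf += value;
}

// for...in iterates property KEYS (index strings for an array)
// and also picks up any extra enumerable properties – the pitfall.
arr.extra = 1; // an added property, purely to demonstrate the problem
const seenKeys = [];
for (const key in arr) {
  seenKeys.push(key);
}
// seenKeys now contains "extra" alongside "0", "1", "2".
```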


5. Use native DOM functions instead of jQuery

Have you ever looked at someone’s JS code and seen that jQuery was imported just to manipulate the DOM? I’m pretty sure you have, since it’s one of the most popular JavaScript libraries. Of course, using libraries to manage the DOM is not bad in itself – we use React or Angular nowadays, and they do the same thing. However, some people are convinced that they need jQuery for the simple task of taking an element from the DOM and performing minor modifications to it.
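As a sketch, here are the native equivalents of typical jQuery selections, wrapped in small helper functions so the mapping is explicit (the wrapper names are my own, and the jQuery calls in the comments are shown only for comparison):

```javascript
// Native DOM lookups corresponding to common jQuery selections.
// These run in a browser; in other environments `document` is undefined,
// which is why the lookups live inside functions rather than at top level.

function byId(id) {
  return document.getElementById(id); // jQuery: $("#" + id)
}

function byClass(className) {
  return document.getElementsByClassName(className); // jQuery: $("." + className)
}

function bySelector(selector) {
  return document.querySelectorAll(selector); // jQuery: $(selector)
}
```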

Here’s the benchmark comparing jQuery and native DOM functions in three different use cases. Below are my results:

Again, the most basic functions – getElementById and getElementsByClassName – are the fastest way to traverse the DOM. For ids and advanced selectors, querySelector is still faster than jQuery. There’s just one case where querySelectorAll is slower than jQuery (getting elements by class name). If you want more information on how to replace jQuery, you may want to check out

Of course, remember that when you’re already using a library for DOM management, it’s highly recommended to stick with it. But the truth is that for simple cases it’s not at all necessary.


Summary

Those five tips are certainly a good start for writing faster JavaScript code. If you want to go further, here are some things to check out next:

  1. Read about optimizing JavaScript bundles with Webpack. It’s a tremendously broad topic, but doing it properly can significantly improve your application’s loading times.
  2. Learn about data structures, basic algorithms and their complexity. Many consider this to be purely theoretical knowledge, but the first tip showed how it works in practice.
  3. Browse the test cases on the jsPerf page. It’s a great place to compare different ways of achieving the same thing in JavaScript, together with the most important practical information: how big the time difference actually is.

This post was also published on ITNEXT.

If you want to read more tech tutorials and information, check out my previous articles!