Spicing up your app’s performance – a simple recipe for GC tuning | Blog | bol.com

The garbage collector is a complex piece of machinery that can be tricky to tune. Indeed, the G1 collector alone has over 20 tuning flags. Not surprisingly, many developers dread touching the GC. But if you don’t give the GC just a little bit of care, your whole application might be running suboptimally. So, what if we told you that tuning the GC doesn’t have to be hard? In fact, just by following a simple recipe, your GC and your whole application could already get a performance boost.

This blog post shows how we got two production applications to perform better by following simple tuning steps. In what follows, we show you how we gained a two times better throughput for a streaming application. We also share an example of a misconfigured high-load, low-latency REST service with an abundantly large heap. By taking some simple steps, we reduced the heap size more than ten-fold without compromising latency. Before we do so, we’ll first explain the recipe we followed that spiced up our applications’ performance.

A simple recipe for GC tuning

Let’s start with the ingredients of our recipe:

ingredients of performance recipe

Besides your application that needs spicing, you need some way to generate a production-like load on a test environment – unless you feel brave enough to make performance-impacting changes in your production environment.

To evaluate how well your app is doing, you need metrics on its key performance indicators. Which metrics these are depends on the specific goals of your application – for example, latency for a service and throughput for a streaming application. Besides these metrics, you also want information about how much memory your app consumes. We use Micrometer to capture our metrics, Prometheus to extract them, and Grafana to visualize them.

With your app metrics, your key performance indicators are covered, but in the end, it’s the GC we want to improve. Unless you’re into hardcore GC tuning, these are the three key performance indicators that determine how good of a job your GC is doing:

  • Latency – how long does a single garbage collection event pause your application?
  • Throughput – how much time does your application spend on garbage collection, and how much time can it spend on doing application work?
  • Footprint – the CPU and memory used by the GC to perform its job

This last ingredient, the GC metrics, might be a bit harder to find. Micrometer exposes them. (See for example this blog post for an overview of metrics.) Alternatively, you could obtain them from your application’s GC logs. (You can refer to this article to learn how to obtain and analyze them.)
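If you just want a quick look at these numbers without setting up Micrometer or log parsing, the JVM’s standard management beans already expose per-collector counts and cumulative pause time. The sketch below (class name `GcStats` is our own, not from the post) reads them:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

/** Prints cumulative GC collection counts and times via the standard JMX beans. */
public class GcStats {

    /** Total time (ms) spent in GC since JVM start, summed over all collectors. */
    public static long totalGcTimeMs() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            total += Math.max(0, gc.getCollectionTime()); // -1 means "not available"
        }
        return total;
    }

    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
        System.out.println("Total GC time: " + totalGcTimeMs() + " ms");
    }
}
```

Sampling `totalGcTimeMs()` at two points in time and dividing the difference by the elapsed wall-clock time gives you the GC overhead – one minus that is the throughput discussed above.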

Now that we have all the ingredients we need, it’s time for the recipe:

recipe for performance

Let’s get cooking. Fire up your performance tests and keep them running for a while to warm up your application. At this point it’s good to write down things like response times and maximum requests per second. This way, you can compare different runs with different settings later.

Next, you determine your app’s live data size (LDS). The LDS is the size of all the objects remaining after the GC collects all unreferenced objects. In other words, the LDS is the memory of the objects your app still uses. Without going into too much detail, you need to:

  • Trigger a full garbage collect, which forces the GC to collect all unused objects on the heap. You can trigger one from a profiler such as VisualVM or JDK Mission Control.
  • Read the used heap size after the full collect. Under normal circumstances you should be able to easily recognize the full collect by the big drop in memory. This is the live data size.
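The two steps above can also be approximated programmatically, which is handy in a test harness. The sketch below (class name `LiveDataSize` is ours) requests a full collect and then reads the used heap; note that `System.gc()` is only a request, so a profiler-triggered collect as described above remains the more reliable route:

```java
import java.lang.management.ManagementFactory;

/** Rough, programmatic LDS estimate: request a full GC, then read the used heap. */
public class LiveDataSize {

    public static long liveDataSizeBytes() {
        System.gc(); // a *request* for a full collect; the JVM may ignore it
        return ManagementFactory.getMemoryMXBean().getHeapMemoryUsage().getUsed();
    }

    public static void main(String[] args) {
        System.out.printf("Approximate live data size: %d MB%n",
                liveDataSizeBytes() / (1024 * 1024));
    }
}
```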

The last step is to recalculate your application’s heap. Usually, your LDS should occupy around 30% of the heap (Java Performance by Scott Oaks). It is good practice to set your minimum heap (Xms) equal to your maximum heap (Xmx). This prevents the GC from doing expensive full collects on each resize of the heap. So, in a formula: Xmx = Xms = max(LDS) / 0.3
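That formula is simple enough to encode directly. A minimal sketch (class name `recommendedHeapMb` is our own naming) that turns a measured LDS into the matching JVM flags:

```java
/** The sizing rule from the recipe: the LDS should occupy ~30% of the heap. */
public class HeapSizer {

    /** Xmx = Xms = max(LDS) / 0.3, rounded up to a whole MB. */
    public static long recommendedHeapMb(long maxLdsMb) {
        return (long) Math.ceil(maxLdsMb / 0.3);
    }

    public static void main(String[] args) {
        long ldsMb = 630; // example measurement, in MB
        long heapMb = recommendedHeapMb(ldsMb);
        // Set minimum and maximum heap to the same value, as the recipe advises.
        System.out.printf("-Xms%dm -Xmx%dm%n", heapMb, heapMb);
    }
}
```

For an LDS of 630MB this prints `-Xms2100m -Xmx2100m`, matching the calculation we make for the streaming application below.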

Spicing up a streaming application

Imagine you have an application that processes messages that are published on a queue. The application runs in the Google cloud and uses horizontal pod autoscaling to automatically scale the number of application nodes to match the queue’s workload. Everything seems to have been running fine for months already, but does it?

The Google cloud uses a pay-per-use model, so throwing in extra application nodes to boost your application’s performance comes at a price. So, we decided to try out our recipe on this application to see if there was anything to gain here. There certainly was, so read on.

Before

To establish a baseline, we ran a performance test to get insights into the application’s key performance metrics. We also downloaded the application’s GC logs to learn more about how the GC behaves. The Grafana dashboard below shows how many elements (products) each application node processes per second: max 200 in this case.

grafana graph

These are the volumes we’re used to, so all good. However, while inspecting the GC logs, we found something that surprised us.

GC log

The average pause time is 2.43 seconds. Recall that during pauses, the application is unresponsive. Long pauses don’t need to be an issue for a streaming application because it does not have to respond to clients’ requests. The surprising part is its throughput of 69%, which means that the application spends 31% of its time wiping out memory. That is 31% not being spent on domain logic. Ideally, the throughput should be at least 95%.

Determining the live data size

Let’s see if we can do better. We determine the LDS by triggering a full garbage collect while the application is under load. Our application was performing so badly that it was already performing full collects – this usually indicates that the GC is in trouble. On the bright side, we don’t have to trigger a full collect manually to determine the LDS.

We distilled that the max heap size after a full GC is roughly 630MB. Applying our rule of thumb yields a heap of 630 / 0.3 = 2100MB. That is almost twice the size of our current heap of 1135MB!


Curious about what this would do to our application, we increased the heap to 2100MB and fired up our performance tests once more. The results excited us.

GC log

After increasing the heap, the average GC pauses decreased a lot. Also, the GC’s throughput improved dramatically – 99% of the time the application is doing what it is supposed to do. And the throughput of the application, you ask? Recall that before, the application processed 200 elements per second at most. Now it peaks at 400 per second!

Grafana graph

Spicing up a high-load, low-latency REST service

Quiz question. You have a low-latency, high-load service running on 42 virtual machines, each having 2 CPU cores. One day, you migrate your application nodes to 5 beasts of physical servers, each having 32 CPU cores. Given that each virtual machine had a heap of 2GB, what size should it be for each physical server?

So, you have to divide 42 * 2 = 84GB of total memory over 5 machines. That boils down to 84 / 5 = 16.8GB per machine. To take no chances, you round this number up to 25GB. Sounds plausible, right? Well, the correct answer turns out to be less than 2GB, because that is the number we got by calculating the heap size based on the LDS. Can’t believe it? No worries, we couldn’t believe it either. Therefore, we decided to run an experiment.

Experiment setup

We have 5 application nodes, so we can run our experiment with 5 differently-sized heaps. We give node one 2GB, node two 4GB, node three 8GB, node four 12GB, and node five 25GB. (Yes, we are not brave enough to run our application with a heap under 2GB.)

As a next step, we fire up our performance tests, generating a stable, production-like load of a baffling 56K requests per second. Throughout the whole run of this experiment, we measure the number of requests each node receives to ensure that the load is equally balanced. What’s more, we measure this service’s key performance indicator – latency.

Because we got weary of downloading the GC logs after each test, we invested in Grafana dashboards to show us the GC’s pause times, throughput, and heap size after a garbage collect. This way we can easily inspect the GC’s health.


This blog is about GC tuning, so let’s start with that. The following figure shows the GC’s pause times and throughput. Recall that pause times indicate how long the GC freezes the application while sweeping out memory. Throughput then specifies the percentage of time the application is not paused by the GC.

2 Grafana graphs

As you can see, the pause frequency and pause times do not differ much. The throughput shows it best: the smaller the heap, the more the GC pauses. It also shows that even with a 2GB heap the throughput is still OK – it does not drop below 98%. (Recall that a throughput higher than 95% is considered good.)

So, increasing a 2GB heap by 23GB increases the throughput by almost 2%. That makes us wonder: how significant is that for the application’s overall performance? For the answer, we need to look at the application’s latency.

If we look at the 99-percentile latency of each node – as shown in the graph below – we see that the response times are really close.

Grafana graph

Even if we consider the 999-percentile, the response times of each node are still not very far apart, as the following graph shows.

Grafana graph

How does the drop of almost 2% in GC throughput affect our application’s overall performance? Not much. And that is great, because it means two things. First, the simple recipe for GC tuning worked again. Second, we just saved a whopping 115GB of memory!


We explained a simple recipe for GC tuning that served two applications. By increasing the heap, we gained two times better throughput for a streaming application. We reduced the memory footprint of a REST service more than ten-fold without compromising its latency. All of that we achieved by following these steps:
• Run the application under load.
• Determine the live data size (the size of the objects your application still uses).
• Size the heap such that the LDS takes 30% of the total heap size.

Hopefully, we convinced you that GC tuning doesn’t have to be daunting. So, bring your own ingredients and start cooking. We hope the result will be as spicy as ours.


Many thanks to Alexander Bolhuis, Ramin Gomari, Tomas Sirio and Deny Rubinskyi for helping us run the experiments. We couldn’t have written this blog post without you guys.
