
Lucca pushes its static content
delivery performance x10

Driving infra changes with Logmatic.io as a compass:
from Windows to NGINX

Lucca is an intelligent and modular HR SaaS provider that builds all their applications with a “best of breed” approach to the services and languages they use. Their focus on delivering a smooth experience to their users explains the constant growth they’ve been experiencing, and their position as a leader in the mid-size company market, with over 1 000 clients and 130 000 end users.

This time, Lucca wanted to push their static content delivery performance one step further. They teamed up with Logmatic.io for the challenge:

“Logmatic.io is especially useful to us when working on our infrastructure evolution. Infrastructure changes are now driven from a global point of view, rather than by code snippets. Performance gains are much more substantial.” Bruno Catteau, VP for Platform and Security @ Lucca.

Logmatic.io helps the team with:

  • Clearly identifying overall trends
  • Monitoring performance changes once a release is deployed
  • Spotting bugs and code remnants

Follow Bruno Catteau’s testimonial through the steps they took, with Logmatic.io as a compass, to smoothly improve their static content delivery performance from under 10% to more than 80% of requests delivered in under 1 ms.

I. Some background

“We started Lucca out in a Microsoft environment, but already with the idea – straight out of Lucca’s philosophy – of always using the best tool available for every single technical need,” Bruno explains. The team thus installed pfSense, an open source firewall widely used by the OVH dedicated cloud community, as early as 2014.
firewall scheme

So, “with our scalability needs in mind, we keep driving our platform’s continuous transformation. And Logmatic.io helps us monitor the impact of each change,” Bruno adds. Their first step was integrating a load balancer, and the second, implementing a highly efficient static web server.

II. Analysing static content server performance

“With our company and our systems ever growing, we ended up with our servers delivering both static and dynamic content.” To isolate static content performance, the team had two options:

  • Filtering data by the static subdomains specific to each server

    static subdomain servers

  • Filtering by extension:
    custom.uri:(*.gif OR *.png OR *.jpg OR *.jpeg OR *.bmp OR *.svg OR *.ttf OR *.eot OR *.css OR *.js OR *.htm OR *.html OR *.ico OR *.woff OR *.map)

And here is what server response times for static content looked like:

server response display

A lot of responses were taking much longer than they wished for. Clicking on the longest ones made it clear that these slow responses were all generated by servers while they were handling static content.

III. Setting up HAProxy

“We decided to set up HAProxy so that static content could be delivered by specialized servers.” This solution enabled Lucca’s team to keep working with their Windows servers.
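
For illustration, this kind of routing is typically expressed as an ACL on file extensions in HAProxy. The backend names and addresses below are hypothetical, so take this as a minimal sketch of the approach rather than Lucca’s actual configuration:

    # Hypothetical haproxy.cfg excerpt: send static assets to dedicated servers
    frontend http-in
        bind *:80
        # Same extensions as in the Logmatic.io filter above
        acl is_static path_end .gif .png .jpg .jpeg .bmp .svg .ttf .eot .css .js .htm .html .ico .woff .map
        use_backend static_servers if is_static
        default_backend app_servers

    backend static_servers
        server static-iis1 10.0.0.10:80 check

    backend app_servers
        server app-iis1 10.0.0.20:80 check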

setting up HAProxy scheme

It immediately made a lot of difference:

haproxy performance

Dedicating a specific IIS server to static content clearly enhanced performance: all the longest response times had disappeared. “Still, approximately 20% of responses were much longer than what we wished for, taking between 10 and 50 ms rather than an acceptable 10 ms,” Bruno comments.

IV. One step further with NGINX servers

So Lucca moved their specialised static servers to NGINX: “NGINX servers only need 1 GB of RAM and 2 CPUs to perform, which makes them the best building block on the market for this task”.
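
For reference, a dedicated static file server of this kind takes only a few lines of nginx.conf. The server name and root path below are hypothetical, so this is a minimal sketch of the general setup rather than Lucca’s actual configuration:

    # Hypothetical nginx.conf excerpt: a lightweight server for static assets only
    server {
        listen 80;
        server_name static.example.com;
        root /var/www/static;

        # Long-lived cache headers for the static extensions identified earlier
        location ~* \.(gif|png|jpe?g|bmp|svg|ttf|eot|css|js|html?|ico|woff|map)$ {
            expires 30d;
            add_header Cache-Control "public";
        }
    }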

NGINX server scheme

Once in production, “it was obvious when looking at our platform that the NGINX servers (in brownish/beige shades) were handling over 70% of the data flow”. The IIS servers were thus still receiving about 30% of the traffic as NGINX failover flow:

Nginx data flow
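
The exact failover mechanism is not detailed here, but one common way to achieve it is to let NGINX serve a file when it exists locally and hand the request over to an IIS backend otherwise. A hypothetical sketch (addresses and paths are illustrative):

    # Hypothetical nginx.conf excerpt: serve the file if present, otherwise fail over to IIS
    upstream iis_backend {
        server 10.0.0.20:80;
    }

    server {
        listen 80;
        root /var/www/static;

        location / {
            try_files $uri @iis;
        }

        location @iis {
            proxy_pass http://iis_backend;
        }
    }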

V. Dealing with case handling

The team knew Linux is case sensitive, so they investigated where exactly corrections needed to be made: in almost 100% of cases, the failover traffic was due to a case handling issue.

nginx metrics

After correcting case handling across all of their applications, they saw within their Logmatic.io platform that 99.6% of the traffic was now actually being delivered by NGINX. So they stopped one of their IIS servers and reduced the capacity of the second one:

iis server data flow
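
In load balancer terms, scaling the IIS side down typically amounts to removing one server from the backend and lowering the weight of the one that remains. A hypothetical haproxy.cfg sketch (names, addresses and weights are illustrative, not Lucca’s values):

    # Hypothetical haproxy.cfg excerpt after the cleanup: one IIS server removed,
    # the remaining one kept with a reduced weight behind the NGINX static server
    backend static_servers
        server nginx1 10.0.0.30:80 check weight 100
        server iis1   10.0.0.20:80 check weight 10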

VI. Checking final performance

“Checking delivery times again, we can see that thanks to gzip, static delivery is faster than ever – from 80% up to 91% of requests delivered in under 1 ms,” Bruno concludes:

delivery time nginx server
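
The gzip gain Bruno mentions comes from compression directives on the NGINX static servers. A minimal, hypothetical sketch of the kind of settings involved (values are illustrative):

    # Hypothetical gzip settings for the NGINX static servers (http or server block)
    gzip on;
    gzip_comp_level 5;
    gzip_min_length 256;
    # Compress text-based assets; image and font binary formats are already compressed
    gzip_types text/css application/javascript application/json image/svg+xml;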

Moving forward

Lucca’s next step for optimal static content management is to use a CDN to further decrease load times for their end users. They’ve already deployed Logmatic.io’s JS RUM library for that purpose.
“And then, we’re going to loosen the fixed link between our servers and clients by making our front-end servers interchangeable. But this is a whole new story…” concludes Bruno.
