Xebia is a driven IT services company operating on agile principles. It specializes in Big Data, Web, Cloud and Java architectures, as well as transitions to agile environments. Quality without compromise and customer intimacy are its long-held focus, and Xebians are at the heart of it all. People come first, and it shows, from project assignments to knowledge sharing, with emphasis on the monthly Xebia Knowledge Exchange days that ensure knowledge transfer among Xebians.
One mobility company called on Xebia to update its Point of Interest (POI) acquisition system. Its existing processing chain, hosted on internal servers, took up to 4 days to fully integrate the one million POIs it managed for its client-facing search engine. The POI processing chain went through multiple steps, such as partner API crawling, mapping, database insertion, validation, indexing… And neither the dev nor the business teams had any simple means to check POI processing progress, errors, or the reasons why some POIs were being rejected. The only option was to search through logs: unintelligible text files that lacked the needed information.
Xebia took charge of the project, proposing a whole new solution for the POI processing chain based on AWS Lambda. Both the client's need for monitoring and the use of Lambda called for a log centralization tool able to consolidate all product logs and make sense of them. Jérémy Pinsolle, who had already used Logmatic.io with another client, suggested using the tool to get quick insights into the massive amount of logs that would be generated.
Getting down to work, the Xebians had the foresight to write new “log-friendly” code carrying the information they needed for proper insights: they made sure to include processing time, error type and POI ID in each log entry, so they could later tap into that data. They explain that once the initial setup was complete, it literally took them 2 minutes to build proper graphs and show their client which steps of the processing chain were taking the longest. A great plus was the simplicity: nobody needed to write Elasticsearch queries.
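The article doesn't show the code itself, but a minimal sketch of this kind of log-friendly instrumentation in Python could look like the following. The function name, step names and log fields (`poi_id`, `step`, `status`, `error_type`, `duration_ms`) are illustrative assumptions, not Xebia's actual schema:

```python
import json
import logging
import time

logger = logging.getLogger("poi_pipeline")
logging.basicConfig(level=logging.INFO, format="%(message)s")

def process_poi(poi_id, step, handler):
    """Run one pipeline step for a POI and emit a structured JSON log entry
    carrying the processing time, error type and POI ID."""
    start = time.time()
    status, error_type = "ok", None
    try:
        handler(poi_id)
    except Exception as exc:
        status, error_type = "error", type(exc).__name__
    logger.info(json.dumps({
        "poi_id": poi_id,
        "step": step,             # e.g. "crawl", "mapping", "indexing"
        "status": status,
        "error_type": error_type,
        "duration_ms": round((time.time() - start) * 1000, 1),
    }))
    return status
```

Because each entry is a flat JSON object, a log centralization tool can aggregate on any field directly, which is what makes graphs like "slowest step" or "errors by type" a two-minute job rather than a custom query.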
“With Logmatic.io I know I get direct ROI. There is no fighting for anything, it simply works.” Jérémy Pinsolle
Logmatic.io clearly showed how many POIs had an error status, and what the problem was for each of them. With the issues clearly identified, it was then just a matter of Xebia building the appropriate fixes in code.
Analyzing a POI error has thus become much easier for the client's dev team: they can look directly at the Logmatic.io dashboards and run searches to understand what's going on. The whole process now takes a couple of minutes instead of the half day it took previously. To get the same visibility, devs formerly had to request a production log extract through a Jira ticket, wait to receive it, and then read the file in a text editor or with Linux commands such as cat, sed, awk, cut… hardly a smooth process.
“Considering we’re working with lambdas, that we have 1 million POIs and between 50 and 100 parallel processings, it’s pretty much impossible to develop without a tool,” Jérémy Pinsolle points out. The team knew that, if coded right, each processing should take approximately 5 seconds. But the default AWS log centralization service, CloudWatch Logs, is not as rich as a tool such as Logmatic.io for checking code performance: drawing response-time graphs to see at which moment processing time deteriorates, for example, is not possible with it.
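As a hypothetical illustration of this kind of performance check (not the team's actual tooling), per-POI durations extracted from the logs could be summarized against the roughly 5-second target like this:

```python
import statistics

def duration_summary(durations_ms, target_ms=5000):
    """Summarize per-POI processing durations (in ms) against a target,
    e.g. the ~5 s per-processing goal mentioned by the team."""
    return {
        "count": len(durations_ms),
        "median_ms": statistics.median(durations_ms),
        # 95th percentile: the last cut point of 20 quantiles
        "p95_ms": statistics.quantiles(durations_ms, n=20)[-1],
        "over_target": sum(1 for d in durations_ms if d > target_ms),
    }
```

Plotting such summaries over time is exactly the "response-time graph" use case: a rising p95 or a growing `over_target` count flags the moment processing time starts to deteriorate.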
“Logmatic.io was really helpful to closely monitor our new code performance, to tune it and, looking at treatment time for each part of our code, highlight optimization areas,” says Jérémy Pinsolle.
Xebians thus made smart use of Logmatic.io to check the processing performance of their code. They could assess the performance of new code, for example a new algorithm, before releasing it, simply with the graphs they built out of the logs. After their first code changes, processing time was still 100 seconds long. The graphs helped them spot database issues, and revealed that POIs were being processed twice: their logs showed 2 million processings instead of 1 million. They could thus iterate on their code by iterating on the graphs they were creating, instead of only iterating on troublesome releases. The result of the new code Xebians built? Processing the full POI set went from taking 4 days down to 7 hours!
Jérémy Pinsolle also told us how much the ability to create graphs quickly and easily helped communication between stakeholders. Such graphs make the results of backend code visible, and each one becomes a deliverable for clients, showing the work done in a high-quality, relevant way. Logmatic.io graphs thus serve as discussion material, showing clients in real time which errors occur, what was solved and why, thereby facilitating communication and understanding on all sides.
Xebia’s use of Logmatic.io is an especially good example of how to use logs to support new feature development and properly assess backend performance, in addition to speeding up complex troubleshooting across multiple data sources.