The path to Rails logs analytics

From Papertrail to Logentries and beyond. Hyper is a full-service digital agency based in Oslo, Norway. With a dedication to turning ideas into business, you can get a sense of their spirit and read their latest news on Twitter @hyperoslo. A lot of their projects involve Ruby on Rails, and we got the chance to discuss logging challenges with one of their web developers, Felipe Espinoza, along with the two Speaker Deck presentations he made about them: cleaning logs and practical logs vol. 2. Felipe explains that he was working with Rails on a BI project and really needed to analyse performance through logs. So he started to take a closer look at logging and realised there were a lot of things in their logs they were not taking advantage of by simply using Papertrail.

I. The first steps to get better Rails logs

Testing several log management tools at the same time, Felipe started cleaning his logs to make insights easier to extract – more details on this are available in his online presentation. Following his tests, Felipe chose to work with Logentries for cost reasons. Its integration with Heroku was good, and Felipe found the possibility of getting the request ID in each log line interesting. He was thus able to answer questions such as:

  • Which device is making the request?
  • Which device OS version?
  • Which API version was requested?
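Getting the request ID into every log line is something Rails supports out of the box through tagged logging. A minimal sketch of that configuration follows – the custom user-agent tag is an illustrative assumption, not a detail from Felipe's setup:

```ruby
# config/environments/production.rb
Rails.application.configure do
  # Prefix every log line with tags derived from the incoming request.
  # :request_id is built in (taken from the X-Request-Id header);
  # the lambda below is a hypothetical extra tag for the client device.
  config.log_tags = [
    :request_id,
    ->(request) { request.user_agent.to_s[0, 40] }
  ]
end
```

With this in place, every line a request produces carries the same tags, so lines belonging to one request can be grouped together in the log management tool.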

II. The next steps for improved Rails logs insights

Being able to answer those questions was a good first step. Still, it was difficult to cross-reference information across their multi-line logs and answer questions such as these:

  • For a specific endpoint, how many unique users requested it?
  • Who are the most active users? What status codes do they get back?
  • What are the endpoints requested by a specific user?
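Questions like these become straightforward once each request is a single, parseable event. As a hedged sketch – assuming logs are one JSON object per line, with illustrative `path` and `user_id` field names that are not taken from Felipe's setup – counting unique users per endpoint in Ruby could look like this:

```ruby
require "json"
require "set"

# Count how many unique users hit each endpoint, given an array of
# log lines containing one JSON event each. The "path" and "user_id"
# field names are assumptions for illustration.
def users_per_endpoint(lines)
  per_path = Hash.new { |hash, key| hash[key] = Set.new }
  lines.each do |line|
    event = JSON.parse(line)
    per_path[event["path"]] << event["user_id"]
  end
  per_path.transform_values(&:size)
end
```

The same grouping logic is what a log analytics platform performs at scale when you ask it to count distinct users per endpoint.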

So after some rework on their multi-line logs, Felipe tried again, as he had previously noticed the platform's more flexible analytics. That's when he realised: “All the questions I had before can be answered:

  • First, each log file is recognised and thus easily parsed
  • Second, I can run analytics, save them, and share them with other people
  • Third, I can build dashboards. And all the data can be sliced and diced on my dashboards.”

Felipe goes on to explain that working with APIs means monitoring response statuses, numbers of requests, endpoint identification, numbers of users, or what happens when a user is not logged in, for example. And in the end, all of this can be tracked in a single view, a dashboard:

[Dashboard: request count in production]

If he sees a spike in the number of requests, he can click on it and refresh the whole dashboard to see what else was happening at that exact moment. If he notices that a particular user had a huge amount of activity for a while and wonders why, he can simply filter the data: “That made investigation super quick,” Felipe says. In this specific user's case, he realised that even though there was a huge spike, it was only happening for one user, so nobody needed to panic, as they were able to know who it was and where it came from. And beyond troubleshooting purposes, “we use it as a microscope for the interesting cases to analyse,” Felipe adds.

You can check out Felipe Espinoza's presentation “Practical Logs Vol 2. How I keep improving with my logs” here. His presentation also includes a link to his GitHub gem, as well as explanations of good practices for Rails logs. Or follow Felipe on Twitter @fespinozacast for further updates on Rails, React/Redux and Swift!
