This is how we do it:
our top user tracking best practices

This post is part of the Real User Monitoring series on our blog. Read more on How to build your RUM strategy, Web app performance with Boomerang.js library or Website monitoring with W3C APIs if you’re interested in further Real User Monitoring specifics.

Understanding your users' behaviour is key to making a business successful. It can provide you with insights into what works and what doesn't for your product. It can tell you what your users are really interested in, enable you to identify target groups, and quantify the impact of your improvements in terms of performance.

As a front-end developer, I've been working with user tracking data for a while now at Logmatic. So in this post I'll first explain the specific challenges of scaling user tracking, then describe in detail the library and wrapper we built precisely to solve these challenges. Finally, I'll share my favourite day-to-day use cases.

I. The need for standardized user tracking

In a web application, your users' behaviour is defined by the way they interact with your interface. The most basic strategy for turning these interactions into actionable user tracking data is to send events to a server, together with some context, via an AJAX request.

So a user checking their cart would trigger an event such as the following:

```
$.post('/tracking', {
  name: 'go',
  user: 'michel92i',
  page: 'cart'
});
```

On the plus side, the implementation is trivial. But this user tracking strategy doesn't scale well: finding correlations between events defined on a case-by-case basis is hard. In order to make further user analytics possible, you need your events to meet two requirements:

  • Same set of context information
  • Standardized attributes

Context is a set of attributes (session ID, username, URL, IP, User-Agent...) that you can attach to every event. These attributes allow you to group your data and make sense of it: average number of requests per session, load time per browser... whatever suits you best =).

Having a standardized way of describing an event helps you select relevant data and ensures that you will always be able to analyse them in the same way. If all your events have an ‘action’ attribute, it’s easier to count the different actions over a period of time.
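To make this concrete, here is a minimal sketch of that counting step, with a made-up event shape in which every event shares a standardized `action` attribute:

```javascript
// Hypothetical events, all sharing a standardized 'action' attribute.
const events = [
  { action: 'click', page: 'cart' },
  { action: 'click', page: 'home' },
  { action: 'scroll', page: 'home' },
];

// Count the occurrences of each action over the collected period.
const counts = events.reduce((acc, e) => {
  acc[e.action] = (acc[e.action] || 0) + 1;
  return acc;
}, {});

console.log(counts); // { click: 2, scroll: 1 }
```

Because every event exposes the same attribute, the same three lines work no matter which feature emitted the event.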

The best-known JS library implementing these criteria is probably Google Analytics. By default, it tracks the page URL, browser, geolocation, time spent, and referrer. The API also contains a set of default 'commands' that come with predefined fields.

`ga('create', 'UA-XXXXX-Y', 'auto');` // 'create' accepts a trackingId and a cookieDomain as parameters

The main API supports page, event, social interaction, user, and exception tracking, with a set of plugins extending it for e-commerce purposes. It covers most user analytics use cases with a simple, ready-to-use standard for analysing events.
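These commands queue up until analytics.js itself loads: Google's loader snippet first defines `ga` as a simple queue function, which also makes it easy to see the command shapes offline. A simplified sketch (the tracking ID is a placeholder):

```javascript
// Simplified version of the ga() command queue from Google's loader snippet:
// commands pile up in ga.q until analytics.js arrives and replays them.
var ga = function () { (ga.q = ga.q || []).push(arguments); };

ga('create', 'UA-XXXXX-Y', 'auto');      // trackingId, cookieDomain
ga('send', 'pageview');                  // default page tracking
ga('send', 'event', 'cart', 'checkout'); // eventCategory, eventAction
```

Each queued command is just an arguments list, which is why the commands all share the same standardized field positions.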

However, using analytics.js ties you to the Google ecosystem. Events can only be accessed once aggregated, and only through Google's marketing-oriented analysis tools. Google Analytics is about understanding which customers are the most likely to generate revenue: you define goals to be reached, get information about your audience's characteristics, and understand where they came from. It is not a tool meant to help you understand how to generate value for your customers, or to help your product owners understand platform usage.

II. The way to leverage user tracking standardization

1) Js library and wrapper specifics

So how did we solve the issue of understanding what our users are really doing and experiencing on our platform? We built a library, logmatic-js, and a wrapper working with it to track user behaviour.


The logmatic-js library (available on GitHub) provides ready-to-use context attributes for IP, URL, and User-Agent on every event, and its addMeta function lets you add your own context in one line. Logmatic.js also supports automatic forwarding of errors and console.log calls, which is super useful for troubleshooting. (Obviously, once the data is collected, you still need to send it to the analytics tool of your choice to make sense of it.) For some best practices about troubleshooting JavaScript errors, see our blog post on JavaScript errors and logs.

The wrapper built around logmatic-js, track.js, is specifically targeted at tracking user behaviour; you can also find it on GitHub. It provides an easy API for describing user interactions, relying on the logging library to send the events to a server.

For user tracking purposes, it is necessary to have a simple but meaningful way to describe interactions. It should be meaningful both for humans to write proper code and for machines to create efficient analytics. In order to keep things simple, we decided to go for a Fluent API with a minimal set of features.


The user refreshes a view, which will emit:
`{ verb: 'refresh_view' }`

Now we know when a user wants new data in their web analysis. We can gradually improve the information we get by adding several small, simple functions. Each of these functions can later be used separately to categorise and filter user tracking information:

```
const event = tracking.track('refresh_view')
  .of('origin', 'click', true)
  .on('dashboard')
  .timed();
// when done
event.emit();
```

Will then emit the following JSON payload:

```
{
  verb: 'refresh_view',   // .track() - action
  object: {               // .of() - event properties
    objectType: 'origin',
    id: 'click',
    content: true
  },
  target: {               // .on() - target properties
    objectType: 'dashboard'
  },
  took: 100               // .timed()
}
```

You now get enough information to understand where your users need to update data, how long it takes them to refresh their data, and which interaction they use to trigger it. One way you could leverage this information would be to set different automatic refresh rates depending on the scenario.

The simple ObjectType – Id – Content structure is really neat, as it allows for standardized ways to filter and analyse different events (see the Use Cases section below). If I wish to filter data for my Timeline feature only, I simply use `objectType = 'TIMELINE'`. Having three functions is also ideal for progressive enhancement: you can start by logging only verbs (click, select...), then easily add objects and targets when you feel ready to use more data in your user analytics.
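Assuming events carry the payload structure shown above, the Timeline filter is just a predicate on `objectType`; a small sketch with made-up sample data:

```javascript
// Hypothetical emitted events following the verb/object structure above.
const events = [
  { verb: 'go_on', object: { objectType: 'TIMELINE', id: 'zoom' } },
  { verb: 'go_on', object: { objectType: 'DASHBOARD', id: 'open' } },
  { verb: 'refresh_view', object: { objectType: 'TIMELINE', id: 'click' } },
];

// Keep only Timeline interactions, regardless of the verb.
const timelineEvents = events.filter((e) => e.object.objectType === 'TIMELINE');

console.log(timelineEvents.length); // 2
```

The same predicate works across every verb precisely because all events share the standardized structure.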

2) Getting started

  1. Install logmatic-js:

    ```
    npm install --save tracekit@0.3.1  # optional: TraceKit provides better error handling
    npm install --save logmatic/logmatic-js#master
    ```

  2. Initialize logmatic-js. You can read the documentation (GitHub: logmatic-js) to activate log and error forwarding or to change the endpoint. Then set your custom metas: they will be attached to every single event generated, which greatly enhances your capacity to trace interactions and build rich user analytics without requiring you to add new attributes to each of your events.

    userId: '001',
    clientId: '042'

  3. Add the track.js file to your project. You can get the file on GitHub. As logmatic-js is really small (2 kB gzipped), it is fine to load it synchronously. If you are using TraceKit as well and want the best performance possible, you may use a wrapper to load your logging libraries asynchronously.

    Under the hood, the track.js file is fairly straightforward, which is made possible because logmatic-js handles a lot of the complexity. Every call to track creates a new object with a given verb. Calls to the .on, .of, and .timed functions save attributes as private properties. Once the emit function is called, the timer is stopped and the private properties are converted into a payload and a message, which are then forwarded to logmatic-js to be sent.

  4. Start logging!
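The "under the hood" description above can be sketched in a few lines. This is an illustrative reimplementation, not the actual track.js source; the logger is stubbed out so the sketch runs on its own:

```javascript
// Minimal sketch of the fluent tracker described above.
// 'logger' stands in for logmatic-js; only the shape of the API matters here.
function makeTracking(logger) {
  return {
    track(verb) {
      const payload = { verb };
      let start = null;
      const event = {
        of(objectType, id, content) { // event properties
          payload.object = { objectType, id, content };
          return event;
        },
        on(objectType, id) { // target properties
          payload.target = { objectType, id };
          return event;
        },
        timed() { // start the timer
          start = Date.now();
          return event;
        },
        emit() { // stop the timer and forward the payload
          if (start !== null) payload.took = Date.now() - start;
          logger.log(verb, payload);
          return payload;
        },
      };
      return event;
    },
  };
}

// Usage with a stub logger that just collects payloads:
const sent = [];
const tracking = makeTracking({ log: (msg, p) => sent.push(p) });
const payload = tracking
  .track('refresh_view')
  .of('origin', 'click', true)
  .on('dashboard')
  .emit();
console.log(payload.verb); // 'refresh_view'
```

Each function only mutates a private payload and returns the event itself, which is what makes the fluent chaining possible.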

III. Everyday user tracking use cases

Now comes the best part: what insights we can get from these user analytics. I personally use our logmatic.js front-end logging solution on a daily basis. Below are some of my favourite use cases:

1) Understand feature usage

In order to prioritize new features and updates, we need to know what our users are interested in. Of course, we have feedback loops to collect wishes from our customers. But this feedback often gives us what I would call a "gut feeling", with no proper way to quantify the need or the impact a feature update would have for users. Being able to see, and put numbers on, which sections and features our users interact with on a daily basis really helps us understand which parts are the most valuable to them, and which ones are underused. If a feature is underused, we dig further to understand why (Can users find it easily? Do they understand what they can do with it? Is the documentation clear? What are those who do use it doing with it?), estimate how much work an update would require, and compare that to the estimated impact for our users. We then have a rational way to update product features and prioritize backlog actions.

We thus have an overview of which pages our users are on in real-time:

// this code would usually be in your router
Tracking.track('go_on') // event type
  .of('view', dest)     // event attributes
  .emit();              // send the event

With the following visualization

[Screenshot: real-time overview of pages in use]

To get more specific insights into each page's usage, we use the 'go_on' verb with different object types. On our Add New Source page, for instance, we use `sandbox.track('go_on').of('add_source', id).emit();`:

And we can simply filter the data on the objectType to instantly get a new analysis of the subsections usage for our Add New Source page:

[Screenshot: user tracking dashboard]

2) Discover Pain points

Though we get global performance stats through logmatic-rum (available on GitHub; see also our blog post on web app performance), we sometimes need more granular information about a specific feature. The track.js file allows us to time events and to find corner cases where the user experience is suboptimal.

For example, to optimize the performance experienced by our users, we track which dashboards take the longest to load and investigate them in further detail:

```
// when loading starts
this._loadtrack = Tracking.track('load').timed();

// once loading is finished
this._loadtrack
  .of('dashboard', dashboardId, model.attributes.title)
  .emit();
```


Longest time to load on our internal dashboards

3) Target Specific User Experiences

It is important to bear in mind that different users have different settings influencing their experience of your platform. Different browsers have different rendering engines and JS runtime performance; geolocation, internet speed, and access time to our servers all impact perceived performance. It is thus critical to know whether one country is experiencing worse performance than others, so you can keep user satisfaction up.

By using the library and wrapper mentioned above, structured as multiple small functions, we already have all the metadata we need, ready to be split exactly the way we want. Here we split it by country:
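As a sketch of that split, suppose each timed 'load' event carries a hypothetical `country` meta and a `took` duration in milliseconds (the sample data below is made up); the per-country average behind a geomap is then a simple group-by:

```javascript
// Hypothetical timed 'load' events, each carrying a country meta
// and a 'took' duration in milliseconds.
const events = [
  { verb: 'load', took: 120, country: 'FR' },
  { verb: 'load', took: 340, country: 'US' },
  { verb: 'load', took: 180, country: 'FR' },
];

// Group the durations by country, then average each group.
const byCountry = {};
for (const e of events) {
  (byCountry[e.country] = byCountry[e.country] || []).push(e.took);
}
const averages = Object.fromEntries(
  Object.entries(byCountry).map(([country, times]) => [
    country,
    times.reduce((sum, t) => sum + t, 0) / times.length,
  ])
);

console.log(averages); // { FR: 150, US: 340 }
```

The same group-by works for any meta (browser, client, plan...), which is the payoff of attaching metas to every event.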


Loading time on a typical day – `user happiness over 9000`

Wrapping up

Thanks to logmatic.js user tracking, we are confident that the decisions we make for our platform are going in the right direction. We have a clearer understanding of how our platform is used, can prioritize our backlog more precisely, and can measure more specifically the impact of any change we implement.

The use of a generic front-end logging library, extended with a layer dedicated to describing interactions, minimizes costs (adding a new event is now virtually free) while keeping things simple for developers (the API is minimal), and still provides as much information as possible to create meaningful user tracking analyses.
