Better Application Performance
with MySQL Logs and Metrics

Anyone who has spent even a short time in the web industry has probably heard of MySQL. It’s the ‘M’ in LAMP, the no-brainer, go-to web application stack (at least before the introduction of Node.js). It’s in use by a huge share of websites, including web giants like WordPress and Facebook.

It’s an open-source relational database management system (RDBMS) built for client/server architectures, and it became a high-flier thanks to its legendary performance and reliability. Under the hood, MySQL is a multi-threaded C/C++ server application dating back to 1995, now under Oracle’s stewardship.

To take advantage of all its features and performance enhancements, particularly in distributed applications that require multiple clusters of MySQL servers, you need to set them up and configure them properly. If you paid top dollar and hired industry gurus to deploy them, you might feel confident that nothing will go wrong, but how do you really know that? How do you know what’s really going on?

How do you know if your servers are actually reaching or maintaining performance objectives? How do you know that connections and queries are being made and completed as expected and efficiently? It may be the case that your applications ‘seem’ to be running correctly with no noticeable problems, but perhaps there are small drips and leaks. And if not repaired, these small problems could lead to application performance degradation or worse — a major catastrophe that could require huge investments in time and effort to rectify.

Logs and Metrics To The Rescue


This is where MySQL log events and metrics come in. They can help server admins and application owners be proactive in evaluating and improving performance, meeting application objectives, and maintaining demanding service level agreements (SLAs).

MySQL has six built-in logging mechanisms:

  • Binary log
  • Relay log
  • DDL log
  • Error log
  • General query log
  • Slow query log

The binary, relay, and DDL MySQL logs are mainly used for replication and recovery tasks. The binary log holds master server changes that can be sent for replication on slave servers, which in turn store these changes in their relay logs for recovery purposes. The DDL log stores metadata changes and is also used for recovery purposes when crashes occur during metadata operations.

The error, general query, and slow query MySQL logs are very useful for administering and maintaining application and server performance.

The MySQL error log is invaluable for debugging purposes because it contains a record of any exceptions that occur. It should be the first place you check if a server fails to start or shuts down unexpectedly.
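If you aren’t sure where the error log lives on a given server, you can ask the server itself:

```sql
-- Show where the error log is written; an empty value means it goes to stderr
SHOW VARIABLES LIKE 'log_error';
```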

The general query log records client activity and can help identify what actions a client is taking on a particular server and how often. It records when clients connect and disconnect, as well as what SQL statements they execute.

The MySQL slow query log is probably the most useful log for tracking performance issues. Each time a submitted query takes longer than some threshold amount of time, the SQL statement is recorded in the log. By default, the threshold is 10 seconds; however, this can be changed by setting the long_query_time variable to suit your application’s needs.
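For example, to lower the threshold at runtime (the two-second value is just an illustration):

```sql
-- Log any statement that takes longer than 2 seconds to execute
SET GLOBAL long_query_time = 2;
```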

Typically, the MySQL slow query log is examined for SQL statements that recur often. Recurring slow statements can indicate an inefficient query or a structural design problem that needs to be improved or fixed.

The slow query log can be written to a data table, file, or both. You can specify this using the log_output variable. The example below records slow query events to a file and also sets the file destination:
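A minimal configuration sketch for this, assuming a my.cnf file (the log file path is an example):

```ini
[mysqld]
# Enable the slow query log, write events to a file, and set the destination
slow_query_log      = 1
log_output          = FILE
slow_query_log_file = /var/log/mysql/mysql-slow.log
```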


It should be noted that all of the MySQL logs (except the error log on Windows) are turned off by default. You must explicitly turn them on at runtime or in your server’s configuration file.

For example, to turn on the MySQL general query log, you can start the server with these command-line options:

mysqld --general_log=1 --general_log_file=file_name

If you forget to turn on the log or need to change the log destination file while the server is running, you can use the SET command:

SET GLOBAL general_log = 'ON';
SET GLOBAL general_log_file = '/var/log/mysql/file_name.log';

You can also set these options in your my.cnf configuration file, typically located in the /etc/mysql/ directory. For more information on how to manage these logs, see the official MySQL documentation.

Beyond MySQL Logs: Metrics

For the more adventurous admin, recent versions of MySQL (5.5.3 or later) offer Performance Schema. This feature provides detailed performance data on internal server events like how long the host system takes to respond, how long SQL statements or segments of a statement take to execute, how long disk I/O calls take, and much more.

To enable Performance Schema you need to use a distribution that does not explicitly exclude Performance Schema from the build process (all Oracle released binary distributions include it). These builds specifically modify the server source code to add instrumentation that enables data collection with minimal impact on server performance. It does not use more memory than what is allocated at server startup, and the collected data is non-persistent.

The Performance Schema tracks events, which are defined as anything the server does that takes a measurable amount of time to execute. You must enable Performance Schema in your configuration file to start event collection:
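A minimal my.cnf sketch that turns it on explicitly:

```ini
[mysqld]
# Enable Performance Schema instrumentation at startup
performance_schema = ON
```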


To enable all event collections and event timing, execute the following after the server starts:

UPDATE performance_schema.setup_instruments SET ENABLED = 'YES', TIMED = 'YES';
UPDATE performance_schema.setup_consumers SET ENABLED = 'YES';

Once the server is running, there are several ways you can retrieve information from Performance Schema. The most direct way is to execute specific queries for the information you are seeking. For example, to check for potential communication issues between a slave replication server and its master, you can execute the following:

SELECT * FROM performance_schema.replication_connection_status\G

If you don’t know what you are looking for, a convenient place to start is by examining the Performance Schema summary tables. Below are sample statements that display the summaries sorted by number of executions and by wait times:

# Sort by count
SELECT EVENT_NAME, COUNT_STAR
FROM performance_schema.events_waits_summary_global_by_event_name
ORDER BY COUNT_STAR DESC;

# Sort by wait times
SELECT EVENT_NAME, SUM_TIMER_WAIT
FROM performance_schema.events_waits_summary_global_by_event_name
ORDER BY SUM_TIMER_WAIT DESC;

After you get a feel for what Performance Schema offers, you can delve deeper by displaying current thread executions and drilling down into a particular thread’s history. For specifics on how to use Performance Schema, refer to the MySQL documentation.
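For example, you could list the server’s threads and then pull the recent wait events for one of them (the thread ID below is hypothetical):

```sql
-- Find the thread you are interested in
SELECT THREAD_ID, NAME, PROCESSLIST_USER
FROM performance_schema.threads;

-- Pull its most recent wait events (THREAD_ID 25 is just an example)
SELECT EVENT_ID, EVENT_NAME, TIMER_WAIT
FROM performance_schema.events_waits_history
WHERE THREAD_ID = 25;
```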

Less Is More?

Perhaps looking at individual threads or examining the lengths of mutex locks with Performance Schema is a bit more than you need? If you don’t require such a low-level look at your server, you may find the MySQL server status variables useful. Be warned: there are many variables to choose from, so it’s best to consider what you are trying to achieve and then search for the most relevant variables in the Status Variable Summary table. Most of the variable names are self-explanatory; however, you can find a detailed description of each variable in the section following the table. You can also display a listing of all variables by executing the following:
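The statement itself is a one-liner:

```sql
-- List every server status variable and its current value
SHOW GLOBAL STATUS;
```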


To get you started, here is a list of variables to help you monitor common key performance metrics:

  • Connection_errors_internal: Number of connections refused due to internal errors.
  • Connection_errors_max_connections: Number of connections refused because max_connections was reached.
  • Connections: Total number of connection attempts.
  • Max_used_connections: The maximum number of simultaneous connections made.
  • Max_used_connections_time: Time of the maximum number of simultaneous connections.
  • Qcache_hits: Number of query cache hits.
  • Qcache_lowmem_prunes: Number of queries removed from the query cache due to low memory.
  • Queries: Number of statements executed by the server.
  • Questions: Number of statements executed by the server (excluding statements from stored programs).
  • Slow_launch_threads: Number of threads that have taken more than some threshold value of time to launch.
  • Slow_queries: Number of queries that have taken more than some threshold value of time to execute.
  • Threads_connected: Number of currently open connections.
  • Uptime: Server uptime in seconds.

Querying these alone can provide useful information, but combining them with MySQL System variables or recording them on a periodic basis can be more revealing. For instance, Queries provides the number of statements executed, but it doesn’t really tell you about your server’s ability to process queries. To do this, you can execute the following to tell you how many queries your server is processing per second, per minute, and per hour:
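The two raw values can be fetched first, for example:

```sql
-- Retrieve the raw counters used in the rate calculations
SHOW GLOBAL STATUS LIKE 'Queries';
SHOW GLOBAL STATUS LIKE 'Uptime';
```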


SELECT <Queries_result> / <Uptime_result>; # second
SELECT (<Queries_result> * 60) / <Uptime_result>; # minute
SELECT (<Queries_result> * 3600) / <Uptime_result>; # hour

Another example is understanding how many simultaneous connections you really need. To do this, you can periodically record the difference between the max_connections system variable and the Threads_connected status variable to gain a better understanding of whether more or fewer simultaneous connections are warranted.
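A sketch of the two values you would record on each pass:

```sql
-- Configured ceiling for simultaneous connections
SHOW VARIABLES LIKE 'max_connections';

-- Connections currently open
SHOW GLOBAL STATUS LIKE 'Threads_connected';
```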

A Sea of Servers


You’re probably wondering, if not screaming at this point, how often you should be checking your MySQL logs and performance metrics, especially if you employ multiple MySQL servers for distributed applications! The best answer is that it depends.

If you are early in your application development lifecycle, you probably want to ensure everything is working correctly, so checking for MySQL errors and slow queries might be best. If your application has been in the wild for some time, you probably want to always look out for errors, failed connection attempts, low disk space, and uptime; generally, things that can indicate that some larger problem could be lurking. If you are interested in checking performance or need to establish baseline statistics, you might want to profile thread usage, memory usage, cache hits, query processing, simultaneous connections, and more.

Manually, you ask? It’s possible to do all this without automation, but it isn’t practical, nor would it yield reliable results since things can change by the time you get through compiling, sifting, and correlating hundreds or thousands of logs and metric reports. Luckily, there are ways to streamline the process of finding key information and even ways to aggregate it all in a central place.

You can use mysqldumpslow to summarize information contained in MySQL slow query logs. You can use pre-filtering and post-filtering to limit collected or queried events if using Performance Schema. And you can consolidate interactive logging statements to the system logging facility, which is syslog on Unix and the Event Log on Windows. To enable this, run the mysql client with the syslog command-line option:

mysql --syslog
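The mysqldumpslow summary mentioned above might look like this (the log path is an example):

```
# Show the top 10 slow queries, sorted by total query time
mysqldumpslow -s t -t 10 /var/log/mysql/mysql-slow.log
```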

The MySQL error log is automatically sent to the Event Log on Windows, but this feature must be enabled on Unix machines. To do this, edit your configuration file with the following:
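A my.cnf sketch, assuming MySQL 5.7, where the log_syslog variable controls this:

```ini
[mysqld]
# Send error log events to syslog
log_syslog = 1
```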


You can also turn it on at runtime by including the syslog command line argument:

mysqld --log_syslog=1

Syslog and the Event Log write to a local file by default, so we need to take a few more steps to centralise your MySQL log data. On Unix, we can configure Rsyslog to send log events to a remote machine, and on Windows we can configure a subscribing server using the Windows Remote Management service. Both of these are good solutions; however, there are other log shippers out there that are worth checking out. Some popular ones are:

It’s also possible to use a DIY solution using a simple POST (via curl) or telnet.

Centralised. Now What?

Pat yourself on the back. Having your MySQL logs and metric reports in one location is a huge step to empowering yourself and your team to monitor, assess, and improve your applications. You all share the same ground truth about what is going on and can work together to find and share unique insights to fix bugs, reduce bottlenecks, speed up queries, monitor connections, track resource usage, and more. There’s just one problem. It’s still a very manual process.

It still requires a team of people to filter the information for the task at hand and then find ways to present and share it with others. You could write custom scripts and even a custom dashboard to do the legwork of analyzing and presenting the information, but that’s no small project and requires significant time and effort. Additionally, as technological landscapes and application environments change, you might have to continually make modifications, which eventually becomes cumbersome and painful.

Don’t despair! Many others have travelled this same path and have found enlightenment in analytics service providers. There are many to choose from and they all understand the situation at hand. They provide your team with a comprehensive set of tools to quickly interface with your data and provide you with the understanding and insights you seek, whether it be debugging, performance tuning, or monitoring.

There are several top-notch analytics providers out there and it’s worth reviewing each one carefully to understand what services they offer, including interconnection technologies, dashboard platforms, levels of customer support and engagement, and customer feedback for varying use cases.
