MySQL Tuning Best Practices

Detailed tips and tricks for getting the most out of MySQL in the enterprise

by Richard Cooley

When it comes to enterprise applications, there’s no question that data volumes are growing. From analytics to simple transaction processing, databases are being pushed to support more records, run more reports, and do it all with fast response times.

For enterprises interested in or using open source solutions, MySQL is a familiar database, but the question many IT departments need answered is: Is MySQL up to handling enterprise-sized tasks? Can it be scaled to handle the workloads we have?

The simple answer is that MySQL is scalable if you look at the entire architecture and not just at the database tier. I have worked with very large media and entertainment organizations implementing open source technologies for Web content management. Setting up MySQL and improving its performance isn’t just a matter of tweaking MySQL; it requires understanding the applications that use it and reducing their overall dependency on writing to the database.

Each layer of the architecture needs to be addressed to reduce the impact on MySQL and optimize its capacity. The first step is optimization of the code along with caching strategies at the code and Web server levels.

After the code has been optimized and memory cache fully utilized, a scalable Web server and database architecture are key in supporting a high-demand Web site.

There are many options for database architecture, including vertical/horizontal partitioning, sharding, and proxy utilization. There are also options with hot/standby databases to isolate table-locking publishing events on the back-end administration interface. NorthPoint recommends an initial implementation of database replication via MySQL’s asynchronous master-slave model, as the other options add complexity and create issues when scaling MySQL.

A simple approach to increasing database capacity is to add slaves to the MySQL master-slave cluster. The key to leveraging this architecture is back at the code level: all writes are sent to the master database and all reads are sent to the slave databases, allowing true scaling of read requests. One caveat is that the asynchronous nature of the master-slave update process, necessary for performance reasons, may cause temporary delays while writes propagate to the slaves. This, too, can be handled programmatically at the application layer.
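The read/write split described above is typically enforced in the application's data-access layer. The following is a minimal sketch of that idea; the class name, connection strings, and the list of write verbs are illustrative assumptions, not part of any particular framework:

```python
import random

class ReadWriteRouter:
    """Hypothetical sketch: send writes to the master, spread reads
    across the slave pool."""

    def __init__(self, master_dsn, slave_dsns):
        self.master_dsn = master_dsn
        self.slave_dsns = list(slave_dsns)

    def route(self, sql):
        # Statements that modify data must go to the master.
        verb = sql.lstrip().split(None, 1)[0].upper()
        if verb in ("INSERT", "UPDATE", "DELETE", "REPLACE",
                    "CREATE", "ALTER", "DROP"):
            return self.master_dsn
        # Everything else is treated as a read and load-balanced.
        return random.choice(self.slave_dsns)

router = ReadWriteRouter("master:3306", ["slave1:3306", "slave2:3306"])
print(router.route("UPDATE users SET name='a' WHERE id=1"))  # master:3306
```

A real implementation would also pin a session to the master immediately after a write, so the user never reads stale data from a lagging slave.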

Some of the major name-brand companies that tout their use of MySQL include Yahoo, YouTube, AOL, Facebook, MySpace, Twitter, and WordPress. If these companies can run portions of their businesses on MySQL, so can you.

That’s not to say MySQL doesn’t encounter bottlenecks. Disk I/O, for example, is a common challenge and tends to be the precursor to other bottlenecks that occur during peak loads against the database. Some other common bottlenecks appear to be CPU-related but are generally caused by processes waiting for disk I/O to complete. CPU can also be impacted by poorly written queries that take longer than normal to compute, or by a large number of requests for a poorly written query. Examining query performance is a good practice on any site experiencing bottlenecks with MySQL.

Another common area to review is the memory dedicated to the server. Inadequate RAM is uncommon these days but is still a possible bottleneck.

Running Faster

Traditionally, IT addresses bottlenecks by first looking at slow queries. Depending on the system you are running, you can examine the slow query log to identify which queries need tuning. The simplest tool available for evaluating and tuning MySQL is the EXPLAIN command. Run EXPLAIN against poorly performing queries and evaluate the results to determine whether additional indexes or modifications to the query would improve performance. In general, there are no drawbacks or trade-offs to reviewing and correcting these poorly performing queries.
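Scanning the slow query log for the worst offenders can be scripted before reaching for EXPLAIN. This sketch assumes the default text format of MySQL's slow query log, where a `# Query_time:` comment line precedes each statement, and it simplifies to single-line statements:

```python
import re

QUERY_TIME = re.compile(r"# Query_time: ([\d.]+)")

def slow_queries(log_lines, threshold=1.0):
    """Yield (query_time, statement) pairs whose time exceeds threshold.

    Simplified sketch: assumes one-line statements and the default
    slow-log header format; real logs can contain multi-line queries.
    """
    current_time = None
    for line in log_lines:
        m = QUERY_TIME.match(line)
        if m:
            current_time = float(m.group(1))
        elif current_time is not None and not line.startswith("#"):
            if current_time >= threshold:
                yield current_time, line.strip()
            current_time = None

log = [
    "# Query_time: 2.5  Lock_time: 0.001 Rows_sent: 10  Rows_examined: 500000",
    "SELECT * FROM articles WHERE body LIKE '%mysql%';",
    "# Query_time: 0.01  Lock_time: 0.000 Rows_sent: 1  Rows_examined: 1",
    "SELECT 1;",
]
print(list(slow_queries(log, threshold=1.0)))
```

The statements this surfaces are exactly the candidates worth running EXPLAIN against.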

One of the configuration decisions when setting up MySQL is the choice between the InnoDB and MyISAM storage engines. InnoDB provides transaction support and row-level locking; the downside is slightly slower reads and writes. If your application needs foreign key constraints, you must use InnoDB. Depending on your application’s needs, this is one of the key decisions in ensuring a scalable, responsive MySQL application.

Configuring the memory allocation settings appropriately, based on data requirements, is generally the right approach to memory tuning. Memory is cheap these days, so make sure you have a sufficient amount; I generally recommend 8GB on each server.
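For InnoDB, the setting that matters most is `innodb_buffer_pool_size`. A common rule of thumb (an assumption of this sketch, not a figure from the article) is to give the buffer pool roughly 70% of RAM on a dedicated database host and far less on a shared one:

```python
def suggested_buffer_pool_bytes(total_ram_gb, dedicated=True):
    """Rule-of-thumb sizing for innodb_buffer_pool_size (hypothetical
    helper): ~70% of RAM on a dedicated DB host, ~25% on a shared one."""
    fraction = 0.70 if dedicated else 0.25
    return int(total_ram_gb * fraction * 1024 ** 3)

# For the 8GB server recommended above:
print(suggested_buffer_pool_bytes(8) // 1024 ** 2, "MB")
```

The exact percentages are a starting point; the real target is a buffer pool large enough to hold the working set of your data and indexes.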

Overall, setting up a proper replication environment that scales to the requirements of the application is necessary for scaling MySQL. Although replication is generally quite easy to set up on the database side, the application logic can be difficult, as we have started to address in this article. A key consideration for data replication is timing: replicas may be seconds to minutes behind, depending on the amount of data being inserted into the master database.
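Replication lag is worth monitoring continuously. MySQL reports it in the `Seconds_Behind_Master` column of `SHOW SLAVE STATUS`; the sketch below (the function name and threshold are illustrative assumptions) shows the check once that row has been fetched into a dict:

```python
def replication_ok(slave_status, max_lag_seconds=30):
    """Judge a slave healthy from one SHOW SLAVE STATUS row (as a dict).

    Seconds_Behind_Master is NULL (None here) when the slave's
    replication threads are not running, which we treat as unhealthy.
    """
    lag = slave_status.get("Seconds_Behind_Master")
    if lag is None:
        return False
    return int(lag) <= max_lag_seconds

print(replication_ok({"Seconds_Behind_Master": 3}))     # True
print(replication_ok({"Seconds_Behind_Master": None}))  # False
```

An application can use a check like this to drop a lagging slave out of the read pool until it catches up.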

Setting up a master-master replication environment by partitioning your data is a more complex but necessary solution for systems with a high volume of writes. It requires much more customization of both the application logic and the replication setup.

Richard Cooley is director of content solutions for NorthPoint Solutions. You can contact the author at
