
Nathan’s MySQL Crash Course: Part Two


Continuing from yesterday’s post on MySQL storage engines, GT’s lead sysadmin Nathan March shares some general MySQL tips, to be followed by Percona-specific insights tomorrow.

  • Hard disks are by far the slowest single part of any server. When a write comes in, it can be pretty time-consuming to go update all of the affected data on disk, so instead MySQL puts the new data into the buffer pool (an in-memory cache) and marks the affected pages (fixed-size blocks of data) as “dirty”. It then writes the change to the ib_logfiles immediately so that if the server crashes, it can replay the log and recover any dirty pages that hadn’t yet been written to disk.
  • Likewise, when reading data MySQL will first check the buffer pool to see if the data is available there before going to disk. The buffer pool is managed with a Least Recently Used (LRU) algorithm: it tracks when each page was last needed and evicts the pages that have gone unused the longest, keeping the most recently accessed data in memory. This helps InnoDB avoid going back to the hard disk unless it absolutely has to.
  • In order to deal with those “dirty” pages, there’s a background job in MySQL that writes the pages to disk. Once it’s done a set of them, it stamps the logfiles with a checkpoint so that it knows how far through the dirty pages it’s progressed. On a proper shutdown of MySQL, it flushes all the dirty pages to disk. This means that if you’ve got a busy MySQL server and try to shut it down, the shutdown can take anywhere from 5 to 20 minutes, during which time MySQL is unavailable!
  • When the server crashes (due to an improper shutdown, server reboot, or anything other than a safe shutdown), InnoDB goes into crash recovery mode. It opens the log files, looks for the last safe checkpoint, and then works its way forward from there, checking each logged change and updating the ibdata files as needed to get things into a consistent state. Since MySQL doesn’t know which blocks were written properly and which weren’t, it has to check everything, making for a much slower than normal startup. It’s much better to let MySQL do a safe shutdown than to forcibly kill it and suffer through crash recovery.
  • So what does the above mean? If you have a bunch of bad queries running, it’s much better to kill the queries than to restart MySQL entirely. If you restart MySQL, you’ll lose time flushing the dirty pages (or going through crash recovery if you forcibly kill it), as well as losing all of the cached data in RAM. Depending on the site and traffic levels, it can take anywhere from 5 to 30 minutes for a database server to warm up again and pull all the data it needs back into its cache. Sometimes you have no choice but to forcibly kill MySQL if it hangs, but if it’s responsive and you can access it, that’s usually not necessary.
  • If a website is down because the db servers are flooded with queries, it’s usually best to just stop Apache while you sort out MySQL. This way, you’re not combating an incoming flood of queries and you only have to deal with the existing ones. Once you’ve got MySQL cleaned up and ready to go, you can start Apache again. Again, restarting MySQL may seem like a quick fix, but it will often make the problem worse, since all the caches are empty on startup.
  • Sometimes killing queries takes time, especially for an update. MySQL may have to roll back the transaction and do a significant amount of data cleanup; give it a minute or two before you start looking for alternatives.
  • If you’re doing a shutdown or startup of MySQL and wondering what’s taking so long, take a look at the log files. They’ll reveal whether you’re flushing dirty pages, doing crash recovery, or something else. Don’t jump to forcibly shutting down MySQL if it’s taking a long time; instead, tail the log and see if it’s just busy with a certain task.
  • /var/lib/mysql/ib_logfile0 and /var/lib/mysql/ib_logfile1 are customizable in size via the innodb_log_file_size setting. A general rule of thumb is that they should be big enough to hold an hour’s worth of changes. If these files fill up, MySQL will stop handling any updates until it’s able to go through and do a checkpoint, so it’s important that they are large enough. Likewise, if you make them too large, a shutdown or crash recovery will take longer, since there’s more data to work through. To resize these files, update my.cnf, do a proper safe shutdown of MySQL, move the existing files out of the way, then start MySQL again. Startup will take a moment while it rebuilds those files. MySQL will not resize existing log files on its own; if the config doesn’t match the existing files, it will error out and fail to start.
  • InnoDB does not reclaim space when data is removed! If you have a 40GB table and you delete 38GB of data, your InnoDB data file will still be 40GB! It will reuse that space as the table grows instead of allocating more, but it does not rework the file to reclaim the space. You can reclaim it by running OPTIMIZE TABLE, but on a big table that can take a long time, since it requires copying the data to a new file.
  • By default InnoDB stores all table and index data in /var/lib/mysql/ibdata1. If you turn on innodb_file_per_table, InnoDB will create a /var/lib/mysql/dbname/tablename.ibd file per table (similar to MyISAM). You take a small performance hit for this but gain easier maintenance, since dropping a big table returns that space to the filesystem immediately. As previously stated, with this turned off, dropping a table leaves the ibdata1 file the same size.
  • Even if you have innodb_file_per_table turned on, you can’t move .ibd files around the way you can MyISAM MYI/MYD/frm files. Even with per-table files on, some table metadata is stored in the /var/lib/mysql/ibdata1 file. You can move the entire MySQL data directory, but not individual tables.
  • By default InnoDB calls fsync() after each write it does to the ib_logfiles, forcing the block onto disk right away. This ensures data integrity and prevents the loss of committed transactions in a crash. While that guarantee is essential for absolutely critical apps (banking, for example), for web applications it’s usually not a big deal if the last second or so of writes is lost in a crash. Calling fsync() after every commit is pretty costly; setting innodb_flush_log_at_trx_commit to 2 makes InnoDB write the log at each commit but fsync() it only once per second. I’ve seen this drop I/O load on a busy database server by as much as 50%!
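
The “kill the queries, not the server” advice can be carried out from the mysql client. A sketch (requires a live server; the thread id shown is purely hypothetical):

```sql
-- Fragment: run from the mysql client against a running server.
SHOW FULL PROCESSLIST;   -- find the Id of the runaway query
KILL QUERY 12345;        -- 12345 is a hypothetical thread id; ends just the
                         -- statement, leaving the client connection open
KILL 12345;              -- or terminate the whole connection
```

KILL QUERY is usually the gentler option: the application sees an error on that one statement rather than a dropped connection.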
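
The log-file resize procedure described above can be sketched roughly as follows; paths assume a default /var/lib/mysql data directory, and the 256M size is purely illustrative — pick a size that holds about an hour of your own write traffic:

```shell
# 1. In my.cnf, set the new size first, e.g.:  innodb_log_file_size = 256M
# 2. Safe shutdown so every dirty page is flushed and the logs are clean:
mysqladmin shutdown
# 3. Move the old logs aside (keep them until the server starts cleanly):
mkdir -p /root/old-iblogs
mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1 /root/old-iblogs/
# 4. Start MySQL; it recreates the log files at the configured size:
service mysql start
```

If the server fails to start afterwards, check the error log: a size mismatch between the config and leftover log files is the usual culprit.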
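
Reclaiming the space from a shrunken table, as in the 40GB example above, is a single statement (the table name here is hypothetical):

```sql
-- Rebuilds the table into a new file, reclaiming unused space.
-- It copies the data, so expect it to run for a long time on large tables.
OPTIMIZE TABLE mydb.big_table;
```
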
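
The durability trade-off in the last point is a one-line my.cnf change; a fragment, with the relaxed value discussed above:

```ini
[mysqld]
# 1 (default): write + fsync the log at every commit -- safest, slowest.
# 2: write at every commit, fsync once per second -- may lose up to ~1s of
#    committed transactions if the OS or hardware crashes.
innodb_flush_log_at_trx_commit = 2
```
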

Stay tuned for Nathan’s tips on working with Percona tomorrow.