TCMalloc and MySQL
Over the last month or so, we noticed steady week-over-week rises in overall MySQL query time. A poorly-performing database makes for a slow site and a slow site makes for a terrible experience, so we dug in to see what was going on.
The investigation is probably more interesting than the actual solution. We started with the most basic things: taking a look at long-running queries, database size, indexes. Nothing in that was especially fruitful.
We turned to some extremely helpful open source tools from Percona to figure out what was going on. I set up pt-stalk to trigger alerts and dumps of diagnostic data whenever MySQL's running thread count got inordinately high. We sent stats about the frequency of pt-stalk triggers to our Graphite instance, and I set up an alert in our Campfire that audibly pinged me on every trigger. The constant sound of the “submarine” alert in my Campfire client was driving me to Poe-esque insanity, so I was doubly motivated to fix this.
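For the curious, pt-stalk's default trigger already watches Threads_running from SHOW GLOBAL STATUS, so a minimal invocation along these lines does what we wanted (the threshold, cycles, and destination below are illustrative rather than our exact settings):

# Watch Threads_running and collect diagnostics whenever it stays above the threshold
pt-stalk --daemonize \
  --function status --variable Threads_running \
  --threshold 40 --cycles 3 \
  --dest /var/lib/pt-stalk

Each trigger leaves a timestamped set of diagnostic files in the destination directory, which is what we counted and shipped off to Graphite.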
With more investigation, we were able to rule out a number of other potential causes, including physical problems on the machine, network issues, and IO issues. Naturally, the submarine noises kept going off, and at one point I may or may not have heard Sean Connery say “Give me a ping, Vasily. One ping only, please.”
Luckily, I noticed pt-stalk data dumps and SHOW ENGINE INNODB STATUS revealing hundreds of these bad things:
--Thread 140164252591872 has waited at trx/trx0trx.c line 807 for 1.0000 seconds the semaphore:
Mutex at 0x24aa298 '&kernel_mutex', lock var 0
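Those lines show up in the SEMAPHORES section of the InnoDB status output, so if you want to check for the same symptom on your own database, something as simple as this sketch (assuming shell access to the database host) will surface them:

# Count threads currently stuck waiting on a mutex or rw-lock
mysql -e "SHOW ENGINE INNODB STATUS\G" | grep -A 1 "has waited at"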
Issues in MySQL 5.1 and 5.5 (we run 5.1 for now) with kernel_mutex around transactions have been known for some time. They have also been known to my friend and increasingly well-regarded yoga expert @jamesgolick, who last year wrote about the exact issue we were encountering.
So we took a page from Golick’s work and from our own thinking on the issue, and went forward with the switch from stock malloc to TCMalloc.
Different allocators are good at different things, and TCMalloc (available as part of gperftools) is particularly good at reducing lock contention in programs with a lot of threads. So @tmm1 dropped in a straightforward export LD_PRELOAD="/usr/lib/libtcmalloc_minimal.so", and with a database restart we were off to the races.
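If you want to try the same switch, the wiring is roughly as follows; the library path and the mysqld_safe option are illustrative and vary by distribution and MySQL build:

# Preload tcmalloc just for the MySQL process, then restart it
export LD_PRELOAD="/usr/lib/libtcmalloc_minimal.so"
mysqld_safe &

# Or, on builds whose mysqld_safe supports it, in my.cnf:
# [mysqld_safe]
# malloc-lib=/usr/lib/libtcmalloc_minimal.so

# Confirm the allocator is actually loaded into the running server
grep tcmalloc /proc/$(pidof mysqld)/maps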
Good things happened. After the switchover, those pt-stalk triggers declined to about 1/7th of their original frequency, and the previously-seen contention dropped significantly. I had expected at best a stop in rising query times with this change, but we ended up with something much better: a rather shocking 30 percent improvement in query performance across the board and a general decline in spikes.
I allowed myself a small celebration by reading The Raven in silence and enjoying a non-trivial amount of Nutella.