As I’m dealing with larger and larger sets of data, I’m hitting situations where a MySQL operation starts to take many seconds, and may lock the table it’s working with for reads and writes. That means that not only does the task waiting for the result get stuck, other parts of the system can end up wedged too.
If MySQL does stop responding, fire up the command-line client and run
SHOW PROCESSLIST;
This will output a table describing every job the database server is working on, or has in its queue. The Info column gives a summary of what the command is, and the State column describes whether it’s being processed, or if it’s waiting for another job to finish first. What you’ll usually see with these hangs is one row showing a high number of seconds in the Time column, and then a queue of other commands waiting for that one to finish. So what can you do to fix that?
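As a rough sketch, the output looks something like this (all the names and numbers here are invented for illustration):

+-------+------+-----------+------+---------+------+--------------+----------------------------+
| Id    | User | Host      | db   | Command | Time | State        | Info                       |
+-------+------+-----------+------+---------+------+--------------+----------------------------+
| 25991 | app  | localhost | mydb | Query   |  312 | Sending data | SELECT * FROM messages ... |
| 25992 | app  | localhost | mydb | Query   |  310 | Locked       | UPDATE messages SET ...    |
+-------+------+-----------+------+---------+------+--------------+----------------------------+

Here the first query has been running for 312 seconds, and the second is sitting in the Locked state waiting for it.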
The first thing is to stop the server from processing the job if it’s going to take a crazy amount of time to complete. Look at the process list, note down the Id number in the first column of the one you want to stop, for example 25991, and type
KILL 25991;
There’s no guarantee that bad things won’t happen to your data if you stop a job halfway through (InnoDB will roll the statement back, but a killed update on a MyISAM table can leave it partially changed), so use this with caution! If you want to understand why a query ended up taking a long time, that deserves a long post to itself, but the best place to start is by running DESCRIBE (or its synonym EXPLAIN) in front of your query, for example
DESCRIBE SELECT * FROM messages;
That will give you a rundown of how many rows it expects to look at, and whether it will be able to use any indexes or keys to speed up the operation.
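As a sketch of what that looks like, here’s the kind of output you’d get, assuming a hypothetical WHERE clause on a user_id column that has an index (the numbers are invented):

DESCRIBE SELECT * FROM messages WHERE user_id = 5;
+----+-------------+----------+------+---------------+---------+---------+-------+------+-------+
| id | select_type | table    | type | possible_keys | key     | key_len | ref   | rows | Extra |
+----+-------------+----------+------+---------------+---------+---------+-------+------+-------+
|  1 | SIMPLE      | messages | ref  | user_id       | user_id | 4       | const |   12 |       |
+----+-------------+----------+------+---------------+---------+---------+-------+------+-------+

The rows column is the server’s estimate of how many rows it will have to examine, and key tells you which index it picked. If key is NULL and rows is close to the total size of the table, you’re looking at a full table scan, which is usually where the time is going.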