I’ve been having some issues with a particular server lately where it keeps going down. I probably should have given it serious attention sooner, but it’s a “personal” server (runs this site, my wife’s blogs, and a few other sites I’m hosting as a courtesy to friends), and I’ve had a lot going on lately.
This morning it seemed to be worse off. MySQL just wouldn’t start. My monitoring script that fires off every 10 minutes so I don’t have to be on-call 24/7 was doing its best, but it just kept restarting MySQL in vain.
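For what it’s worth, that watchdog is nothing fancy; a minimal sketch of the idea (not my exact script, and the path and alerting here are made up) that cron fires every 10 minutes looks something like this:
#!/usr/bin/env bash
# /usr/local/bin/check-mysql.sh (hypothetical path)
# cron entry: */10 * * * * /usr/local/bin/check-mysql.sh
if ! systemctl is-active --quiet mysql; then
    logger -t mysql-watchdog "MySQL is down, attempting restart"
    systemctl restart mysql
    # a real version would also send an alert (email, push notification, etc.)
fi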
Time to look into the issue. I found that MySQL was running, consuming over 100% CPU (it’s a multi-CPU machine so the maximum percentage is over 100), but nothing was loading.
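For the record, a quick way to check that, assuming the process is named mysqld as it is on Ubuntu:
ps -C mysqld -o pid,%cpu,etime,cmd
That prints the PID, CPU percentage, and how long the process has been running.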
Running systemctl status mysql.service showed this, which kind of surprised me:
Status: "Server startup in progress"
So, something was causing MySQL to just get stuck in the startup process and never actually get up and running. I figured that usually means a corrupt database, which could be a nightmare, especially since I’d been ignoring this issue for a week or so. Having to restore from a week-or-more-old backup would be a minor inconvenience to me, but my wife writes a lot on her blog and she would not be happy to lose several days of work.
I needed to check out the MySQL log file. At 17 GB (yikes), that meant using tail to just check out the last few hundred lines.
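On Ubuntu the error log normally lives at /var/log/mysql/error.log (your path may vary depending on the log_error setting), so something like this does the trick:
sudo tail -n 300 /var/log/mysql/error.log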
Here’s something interesting:
2025-06-08T14:33:14.651332Z 1 [ERROR] [MY-011899] [InnoDB] [FATAL] Unable to read page [page id: space=0, page number=5] into the buffer pool after 100 attempts. The most probable cause of this error may be that the table has been corrupted. Or, the table was compressed with an algorithm that is not supported by this instance. If it is not a decompress failure, you can try to fix this problem by using innodb_force_recovery. Please see http://dev.mysql.com/doc/refman/8.0/en/ for more details. Aborting…
2025-06-08T14:33:14.651347Z 1 [ERROR] [MY-013183] [InnoDB] Assertion failure: buf0buf.cc:4110:ib::fatal triggered thread 140583111841344
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/8.0/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
I went straight to the last link, the MySQL docs page on Forcing InnoDB Recovery. Since this server hosts a bunch of WordPress databases that all use InnoDB tables, that would hopefully be the solution.
It’s pretty simple, once you find the right configuration file to put this line of code in:
innodb_force_recovery = 1
My server is running Ubuntu, so the file I wanted (I had to hunt around a bit) was:
/etc/mysql/mysql.conf.d/mysqld.cnf
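On Ubuntu that file already opens with a [mysqld] section, so a line tacked onto the end lands under it; after the edit, the tail of the file looks roughly like this (a sketch, since your existing settings will differ):
[mysqld]
# ... existing settings ...
innodb_force_recovery = 1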
I added that line at the end of the file, ran systemctl start mysql, and much to my surprise, after about 3 seconds, the command prompt returned with no errors. I fired up Safari and checked out my site and… well, since I’m writing this here, you can guess the rest.
Of course, is this really a solution? I was hoping so. The name of the parameter sounds like it’s, y’know, going to fix any problems it encounters. But reading the documentation further, it looks like it is really designed just to bypass certain safety mechanisms in order to allow the system to run so you can do your own troubleshooting.
Unfortunately I’m not quite sure where to begin with this troubleshooting. There are over 30 databases on this server, so I’m looking at somewhere over 500 tables, any of which could be the culprit, and the log files don’t give any indication of which table — or even which database — is the source of the problem.
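If I do dig in, one low-effort starting point would probably be to let mysqlcheck walk every database and run CHECK TABLE on each table; it won’t catch every kind of InnoDB corruption, but it would at least point a finger at specific tables:
mysqlcheck --all-databases --check -u root -p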
So, when in doubt, I like to start as simple as possible. Since innodb_force_recovery is supposed to be only a temporary setting and it limits certain functionality, I knew I would eventually have to turn it off again. Let’s just try that now and see what happens.
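In practice that just means commenting the line back out of mysqld.cnf, restarting, and watching the log to see whether that assertion failure comes right back; roughly:
sudo systemctl restart mysql
sudo tail -f /var/log/mysql/error.log   # or: sudo journalctl -u mysql -f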
I commented out the line I had just added to the config file, tried restarting MySQL, and… it worked. I’m not sure whether starting up with innodb_force_recovery actually did something that cleaned up the problem, or whether just using that setting to get past whatever was hanging things up allowed the normal startup process to do some standard cleanup, but in any case, it seems to be working fine now.
But if I get another alert that things have gone down, I’m not going to wait a week to investigate this time, no matter how much more pressing work I have going on.