Crazy WordPress tricks: Exporting ACF Flexible Content field data with MySQL

I have a client whose WordPress website is basically shutting down, and they need to keep some archives of the existing version of the site.

I’m making three different archives, for various purposes:

  1. Full backup. Of course I’m doing this: a full dump of the database using mysqldump and a full archive of the site files using tar -cvzf (see the command sketch after this list). I like to do these things at the command line, and that’s what I’m doing here, but if I don’t have terminal access, I use the Updraft Plus plugin. This backup isn’t human-readable, but it allows us to spin the site up again on a new server in the future, if needed.
  2. Crawled static snapshot. Once again I’m at the command line, this time using wget to crawl the entire site and make a static copy we can just stick on a thumb drive and the client can navigate like it’s a live site. This is the most useful for day-to-day reference for the client, or if they need to show anyone what content existed on the site at the point when it was taken down.
  3. CSV export. This is, honestly, the clunkiest element, but it’s not without its uses. Yes, exporting content directly from the database into a spreadsheet means a lot of raw HTML. But it’s handy as a reference tool if the client needs to quickly find a bit of text that might have appeared anywhere on the site.
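
For reference, here’s roughly what the first two archives look like at the command line. This is a sketch, not my exact commands: the database name, credentials, file paths, and domain are placeholders you’d swap for your own.

# 1. Full backup: dump the database and archive the site files
mysqldump -u dbuser -p clientsite_db > clientsite_db.sql
tar -cvzf clientsite_files.tar.gz /var/www/clientsite

# 2. Crawled static snapshot: mirror the live site into a browsable local copy
#    (--convert-links rewrites internal URLs so the copy works offline)
wget --mirror --convert-links --adjust-extension --page-requisites --no-parent https://www.example.com/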

The tricky thing with the CSV export is figuring out how to actually do it. That’s what this post is about.

My go-to tools for any WordPress import/export task are WP All Import and WP All Export. These are super flexible plugins that let you get really granular with selecting and organizing the data you’re working with, and they’re especially great in situations where a client needs to go through and update a bunch of SEO content all at once. They can just blast through a spreadsheet instead of tediously editing individual pages and posts one at a time.

I especially like the fact that there are add-ons that support Advanced Custom Fields. I make extensive use of ACF (and did even more so before Gutenberg matured). The old theme this client’s site runs on stores almost all of its content in ACF Flexible Content fields rather than in the standard WordPress content block.

But here’s where things get messy. The standard content block is a single field in the post’s main record in the wp_posts database table. ACF, on the other hand, stores each individual content field as a separate record in the wp_postmeta table. And a Flexible Content field isn’t just one ACF field; it’s an arbitrarily large group of fields, depending on how complicated your page content is.

So there’s a one-to-many relationship between posts and Flexible Content records in the database. WP All Export makes it easy to pull in all of the Flexible Content subfields, but… yikes… it makes each one a separate column in the resulting CSV.

I ran WP All Export on this client’s content, and the CSV contained over 4,000 columns. Not at all easy to work with. I wondered, “Is there a way to concatenate all of the Flexible Content subfields into one column?” Granted, you’re going to lose the keys, but depending on your purposes — such as mine, here — that doesn’t matter. I just need the content, I want it in a single cell for each post, and I don’t care if there’s extra clutter in there too (e.g. booleans, key IDs for attached images, etc.). As long as the full text content is there, I’m good.

I quickly realized that, even if WP All Export does have a way to do this (it seems like it doesn’t), it would probably take me less time to figure out a direct SQL query for the task than to figure out how to do it with WP All Export and custom PHP functions.

Yes, there is a way!

The key is the MySQL GROUP_CONCAT function. I can perform a subquery on the wp_postmeta table for my Flexible Content subfields and concatenate that into a single field in the query output.

Here’s my query. (Note: This is based directly on the example I found here, modified to be WordPress-specific.)

SET SESSION group_concat_max_len = 32767;
SELECT DISTINCT
  p.`ID`,
  p.`post_type`,
  p.`post_title`,
  p.`post_name`,
  p.`post_content`,
  (SELECT DISTINCT
    GROUP_CONCAT(m.`meta_value` SEPARATOR '|')
    FROM
      `wp_postmeta` m
    WHERE
      m.`post_id` = p.`ID`
      AND m.`meta_key` LIKE 'modular\_content\_%'
    GROUP BY
      m.`post_id`
  ) as `modular_content`,
  p.`post_excerpt`,
  p.`guid`,
  p.`post_status`,
  p.`post_date`
FROM
  `wp_posts` p
WHERE
  p.`post_type` IN ('page', 'post', 'tribe_events')
  AND p.`post_status` NOT IN ('inherit', 'revision', 'trash')
ORDER BY
  p.`post_type` ASC, p.`post_title` ASC;

What do you need to know about this?

First off, GROUP_CONCAT by default truncates the data at 1024 characters. The first SQL statement bumps that up to 32,767. Guess what: that’s probably still not long enough, but since 32,767 is the maximum number of characters a cell in Excel can hold, and Excel is what my client will most likely be using to view the file, I went with it. As it happens, out of more than 1700 results returned, only 6 exceeded that length. If you’re not going to be using Excel, you can make the number much larger… just be sure to check whether the spreadsheet application you’re using has its own per-cell character limit.

In my main query, I identified specific fields I wanted to include in the output, since there are some fields in the wp_posts table I don’t really care about here, such as ping_status or menu_order. But if you just want everything, you can select p.* and leave it at that.

The subquery where GROUP_CONCAT comes in has a few specifics to note: first, I chose the pipe character as my delimiter, but you could use whatever you want. Also, in the WHERE clause I’m specifying only rows that have a meta_key that starts with modular_content_. (The backslashes escape the underscores, since an unescaped underscore is a single-character wildcard in a SQL LIKE pattern.) That’s because my Flexible Content field’s key is modular_content and each of its subfields has that string prepended. (I included the trailing underscore because I don’t want modular_content itself, which just stores a number indicating how many subfields are in that particular instance.)

The main query’s WHERE clause has a couple of additional limits on the scope of the query. I only want pages and posts, plus The Events Calendar events, so I am limiting the post_type to those three. And I also don’t want to include posts that are revisions or in the trash, so I limit the post_status as well.

As always, the standard caveat: Running direct SQL queries can permanently alter your database. You should be pretty safe with this one since SELECT is a read-only action. But take anything here with a grain of salt, and don’t run code you found on the Internet in a production environment unless you’re sure you know what it’s going to do!

MySQL startup loops indefinitely and consumes a ton of CPU? This might be the fix

I’ve been having some issues with a particular server lately where it keeps going down. I probably should have given it serious attention sooner, but it’s a “personal” server (runs this site, my wife’s blogs, and a few other sites I’m hosting as a courtesy to friends), and I’ve had a lot going on lately.

This morning it seemed to be worse off. MySQL just wouldn’t start. My monitoring script that fires off every 10 minutes (so I don’t have to be on-call 24/7) was doing its best, but it just kept restarting MySQL in vain.

Time to look into the issue. I found that MySQL was running, consuming over 100% CPU (it’s a multi-CPU machine so the maximum percentage is over 100), but nothing was loading.

Running systemctl status mysql.service showed this, which kind of surprised me:

Status: "Server startup in progress"

So, something was causing MySQL to get stuck in the startup process and never actually get up and running. My first guess was a corrupt database, which could be a nightmare, especially since I’d been ignoring this issue for a week or so. Having to restore from a week-or-more-old backup would be a minor inconvenience for me, but my wife writes a lot on her blog, and she would not be happy to lose several days of work.

I needed to look at the MySQL error log. At 17 GB (yikes), that meant using tail to check just the last few hundred lines.
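
Nothing fancy there; something like this, assuming the default error log location for MySQL on Ubuntu (your path may differ):

# Look at just the tail end of a huge log file
tail -n 300 /var/log/mysql/error.log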

Here’s something interesting:

2025-06-08T14:33:14.651332Z 1
[ERROR] [MY-011899] [InnoDB] [FATAL] Unable to read page [page id:
space=0, page number=5] into the buffer pool after 100 attempts. The
most probable cause of this error may be that the table has been
corrupted. Or, the table was compressed with with an algorithm that is
not supported by this instance. If it is not a decompress failure, you
can try to fix this problem by using innodb_force_recovery. Please see
http://dev.mysql.com/doc/refman/8.0/en/ for more details. Aborting…
2025-06-08T14:33:14.651347Z 1 [ERROR] [MY-013183] [InnoDB] Assertion
failure: buf0buf.cc:4110:ib::fatal triggered thread 140583111841344
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/8.0/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.

I went straight to the last link, the MySQL docs page on Forcing InnoDB Recovery. Since this server has a bunch of WordPress databases that all use InnoDB tables, that would hopefully be the solution.

It’s pretty simple, once you find the right configuration file to put this line of code in:

innodb_force_recovery = 1

My server is running Ubuntu, so the file I wanted (I had to hunt around a bit) was:

/etc/mysql/mysql.conf.d/mysqld.cnf

I added that line at the end of the file, ran systemctl start mysql, and much to my surprise, after about 3 seconds the command prompt returned with no errors. I fired up Safari and checked out my site and… well, since I’m writing this here, you can guess the rest.
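
Condensed into commands, the whole intervention was roughly this (the config path is the Ubuntu default mentioned above, and sudo is assumed):

# Append the recovery flag to the MySQL config
echo "innodb_force_recovery = 1" | sudo tee -a /etc/mysql/mysql.conf.d/mysqld.cnf

# Start MySQL and confirm it actually came up this time
sudo systemctl start mysql
sudo systemctl status mysql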

Of course, is this really a solution? I was hoping so. The name of the parameter sounds like it’s, y’know, going to fix any problems it encounters. But reading the documentation further, it looks like it is really designed just to bypass certain safety mechanisms in order to allow the system to run so you can do your own troubleshooting.

Unfortunately I’m not quite sure where to begin with this troubleshooting. There are over 30 databases on this server, so I’m looking at somewhere over 500 tables, any of which could be the culprit, and the log files don’t give any indication of which table — or even which database — is the source of the problem.

So, when in doubt, I like to start as simple as possible. Since innodb_force_recovery is supposed to be only a temporary setting and it limits certain functionality, I knew I would eventually have to turn it off again. Let’s just try that now and see what happens.

I commented out the line I had just added to the config file, tried restarting MySQL, and… it worked. I’m not sure whether starting up with innodb_force_recovery actually cleaned up the problem itself, or whether it just got MySQL past whatever was hanging it up so the normal startup process could do its own cleanup, but in any case, it seems to be working fine now.

But if I get another alert that things have gone down, I’m not going to wait a week to investigate this time, no matter how much more pressing work I have going on.


Update (June 12, 2025): Although the server hasn’t completely crashed again, I’ve been observing over the past couple of days that my monitoring script is still restarting services a couple of times a day, so there’s still a problem.

I realized that the most likely database to be having issues was the one running this site, since it was originally installed with WordPress 2.1 back in 2007, the core tables were still running the MyISAM engine (instead of the modern standard, InnoDB), and several tables also had odd character set and collation settings. Those have been causing problems in recent months, such as the odd issue I had with a previous WordPress update where it was mangling image uploads. (Yes, that actually had to do with the database’s table structure!)

I decided to rebuild the database: the tl;dr is that I exported the data to my computer, dropped all of the tables in the live database, and then re-imported the data to create clean new tables.
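
In command form, that boils down to something like this. The database name and user are placeholders, and I actually dropped the individual tables rather than the whole database, but dropping and recreating the database is the simpler version of the same idea:

# Export the database to a local file
mysqldump -u dbuser -p blog_db > blog_db.sql

# (edit blog_db.sql here: engines, collations, datetime defaults, as described below)

# Wipe the old tables, then re-import the edited dump
mysql -u dbuser -p -e "DROP DATABASE blog_db; CREATE DATABASE blog_db;"
mysql -u dbuser -p blog_db < blog_db.sql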

The long version is, I did a bit more than that. Those old tables had several NOT NULL datetime fields whose default value was 0000-00-00 00:00:00, a value that older versions of MySQL accepted but that current versions reject under the default strict SQL modes (NO_ZERO_DATE in particular). So those defaults had to become real dates; I went with 2000-01-01 00:00:00 as my new default.

I also took this opportunity to change all of the tables to use the InnoDB engine and utf8mb4_unicode_520_ci as the collation. (Yes, I probably could have/should have used utf8mb4_0900_ai_ci, but I went with the newest value I saw in other tables.)
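
I made those changes by editing the table definitions in the dump file before re-importing, but the same changes can be made in place with ALTER TABLE. Here’s a sketch, using the wp_posts table and its post_date column as the example; any table or datetime column with a zero default would get the same treatment:

-- Replace a zero-date default with a valid one on a NOT NULL datetime column
ALTER TABLE wp_posts ALTER COLUMN post_date SET DEFAULT '2000-01-01 00:00:00';

-- Switch a table to InnoDB and a modern utf8mb4 collation
ALTER TABLE wp_posts ENGINE = InnoDB;
ALTER TABLE wp_posts CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_520_ci;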

I’ll need to give it a day or two to see if this resolves the issue. If not, I think it will mean the principle is right but I just haven’t found the damaged database yet. So, I only have about 29 more to test after this!

Is this a better CSS clearfix approach, now that we’re mostly not using floats anymore, anyway?

For years, variations on this have been my go-to CSS “clearfix” approach:

.clearfix::after {
  clear: both;
  content: '';
  display: table;
}

It’s the “conventional wisdom” on how to do this. But recently I was trying to use it and… it wasn’t working. In the middle of trying to figure out why, it struck me that with modern CSS there’s probably a better way to do it. Why not this?

.clearfix + * {
  clear: both;
}

It doesn’t require the trickery of creating an empty content block on a pseudo-element. It’s less code. It seems like exactly the kind of situation the CSS + selector is intended for.

One little snag — although as I recall this was a snag with the old method too — is that it doesn’t introduce any spacing between the elements, and even putting a margin-top on the latter element doesn’t have an effect, although padding does.

So, it’s not ideal… or maybe there’s just something else I’m overlooking. But when is anything in CSS ideal? And how often isn’t there something I’m overlooking?

Stupid CSS tricks: Flexbox method for integrating a horizontal divider into a heading, with centered text

I’m mainly writing this here so I’ll find it again in a few years when I need to do this and can’t remember how I did it before.

Say you want a nice looking, centered text header integrated with a horizontal rule, with the line on both sides of the text and a bit of a gap, like this:

Here’s My Nice Header

The only HTML involved there is this:

<h4 class="my-nice-header">Here's My Nice Header</h4>

Besides Flexbox, obviously, I’m leaning heavily on CSS pseudo-elements to make this work. It occurred to me immediately that I could use the ::before and ::after pseudo-elements for the lines, but my real flash of insight (at least it felt like a flash of insight to me) was using the ::first-line pseudo-element to apply Flexbox styling to the text itself, without needing anything like a nested <span> tag.

Here’s the exact CSS code I’ve included in the page to render this header:

.my-nice-header {
  align-items: center;
  display: flex;
  gap: 1rem;
  width: 100%;
}
.my-nice-header::before, .my-nice-header::after {
  background: gainsboro;
  content: '';
  display: block;
  flex: 1;
  height: 1px;
}
.my-nice-header::first-line {
  display: block;
  flex: 1;
  text-align: center;
}

Note: I removed some font styling and margins that aren’t pertinent to the actual “trick” itself. I left in the background color, because you need to specify some color or the lines will be invisible.

YouTube’s recommendation algorithm is pushing AI-generated nostalgia slop on me

I watch a lot of YouTube. Most of what I watch on YouTube is related to music or video games, but I also have a penchant for videos about cooking, architecture, and the history of 20th century technology. A couple of my favorite channels are Tasting History with Max Miller and Phil Edwards.

By this point all of the algorithms know I am a 50-something GenXer, with a moderate affliction of nostalgia. So as much as I know listicles (or the video equivalent) are clickbait… well, I take the bait.

So, you know just as well as the algorithm did that I would not scroll past a video called “15 FORGOTTEN Sandwiches That FADED From Your Family Table.”

Go ahead, watch it.

I was struck immediately by a few things: first, the weird overuse of “aged film” effects on the apparently stock video clips that vaguely corresponded to the sandwiches being described.

Next, I noticed a weird monotony to the narrator’s delivery. I didn’t like it, but I figured it was just his style.

But then things got stranger. Out of nowhere, the introduction of the “Mock Ham Salad Sandwich” was spoken in a different voice, with a strong Asian accent. Then it was back to Mr. Monotone.

Suddenly it all clicked. This entire video was AI generated.

I’m not sure if the video content itself is AI-generated, or if it’s just… um… AI-concatenated. I didn’t scrutinize it super closely, but I didn’t see any of the telltale signs, like mangled text, deformed human hands, or objects spontaneously transforming into something else.

It might all just be stock footage. But a) I do know AI-generated video has improved a lot recently, and b) it also seems unlikely that they’d have found enough marginally-relevant (and some of this is very marginal) stock video footage for each of these sandwiches.

I was struck specifically by the pimento cheese sandwich, and how the shot of it shows a paper wrapper with the Masters logo. Yes, the pimento cheese sandwich is inextricably tied to the Masters golf event at Augusta National in Georgia. Why wasn’t that detail mentioned in the narration? (It’s probably worth noting that this is a fact I’m able to recall only because I saw a Facebook post about it a few days ago.)

The channel that posted this video only has two videos, both posted this week. Combined, they have fewer than 1000 views.

Both of their videos share a similar format. But what really freaked me out was that my YouTube home page also had another video from a different channel that was similarly nostalgia-stoking and, based on the little preview that played, had the same telltale “aged film” effect.

Who is behind this crap? How much worse is this all going to get?