Many have argued that WordPress’s Gutenberg blocks were the way forward. In fact, I remember a conversation with colleagues back in 2014, debating whether blocks were the future and if we should build such a system ourselves.
But there was a problem. We knew that doing it well would be incredibly hard.
There are key traps you have to watch out for with any block library, mostly centering on maintainability. In essence, a block-built website can be a migration nightmare. We had customers come to us with hundreds of pages built using random custom blocks; they wanted a rebrand, but the job was huge. You had to locate every block and work out how to migrate it to the new platform. It was a mess.
So, at Standfirst, we stuck to the Classic Editor, custom fields and Carbon Fields (similar to Advanced Custom Fields but, at the time, more developer-friendly). It just worked. Our customers didn’t feel overwhelmed, as they often did when faced with Gutenberg or Elementor, and we could tightly control the output. And feedback was always positive.
However, today I finally switched to block support for this website. Only today. I have built multiple sites with the block editor over the years, but only recently have I started to enjoy the process, rather than repeatedly asking: “Now what the hell is it doing to my layout?” I hated feeling like I was fighting the block editor, and I love that the feeling is gone.
The themes have matured. The blocks have matured. It feels solid.
I like solid, proven systems. It is why I waited for Gutenberg to mature, and why I still prefer classic RDBMS approaches to highly indexed buckets of JSON when storing data. Just as SQL rests on set theory and relational algebra – foundations with mathematically proven properties – I need my page editor to feel deterministic, not lucky.
You can, of course, recreate that in MongoDB with a pile of JSON. The problem is that your indexing issues grow exponentially over time if you don’t think carefully about your data patterns. Consequently, systems built on this approach often become clunky. You can index like hell, but those indexes just grow and grow.
WordPress solves much of this by stuffing a handful of tables with a weird hybrid of HTML and JSON-in-comments. Then, in a common enterprise stack pattern, the rendered output is shoved into a big Elasticsearch bucket.
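To make that hybrid concrete: a saved Gutenberg post is plain HTML in which each block is wrapped in comment delimiters carrying the block’s attributes as JSON. A minimal Python sketch of pulling those attributes back out – illustrative only, the real parser (`@wordpress/block-serialization-default-parser`) handles nesting, void blocks and escaping properly:

```python
import json
import re

# A saved Gutenberg post: HTML interleaved with JSON-in-comments.
post_content = '''\
<!-- wp:paragraph {"align":"center","dropCap":true} -->
<p class="has-text-align-center has-drop-cap">Hello, blocks.</p>
<!-- /wp:paragraph -->'''

# Rough sketch of matching an opening block comment and its JSON attributes.
BLOCK_OPEN = re.compile(
    r'<!--\s+wp:(?P<name>[a-z][a-z0-9/_-]*)\s+(?P<attrs>\{.*?\})?\s*/?-->',
    re.S,
)

for m in BLOCK_OPEN.finditer(post_content):
    attrs = json.loads(m.group('attrs')) if m.group('attrs') else {}
    print(m.group('name'), attrs)
# prints: paragraph {'align': 'center', 'dropCap': True}
```

The point is that the database column itself knows nothing about any of this structure; it is recovered by string parsing at read time.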
My issue with this is that loose data patterns generate complexity. I see this in our task management software at work – it allows you to expand on the data stored against each task, so when you have an idea, you can just execute it. The downside is that you make the system incrementally more complex until it becomes an unholy mess. You become a data pattern hoarder.
Software developers are prone to this, too. It is too easy to extend a schema or create a JSON blob to store in an RDBMS or key-value store. This removes friction, which is good, but it also encourages action before thought. In my career, the best work I did – the stuff that lasted without trouble – was always code and systems I had thought about carefully in advance.
When I wrote the first iteration of Search Replace DB, I tried to solve my problems the ‘correct’ way. I would take an SQL dump of a WordPress site, search-replace it, and upload that dump over the old database. Boom! Widgets and settings blinked out of existence.
I realised it was a problem with PHP serialisation: serialised strings carry a byte-length prefix, so any replacement that changes a string’s length leaves that prefix wrong, and PHP silently fails to unserialise the value. So, I wrote a small script to fix these serialisations. It worked, but it seemed dumb. Suddenly you had a multistep process, and multistep processes increase the risk of errors creeping in. Or, if you are like me, the risk of getting distracted by a squirrel outside.
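Concretely, the repair pass recomputes the length prefixes after the replacement has been made. Here is a minimal sketch in Python; the real Search Replace DB works on PHP’s full serialise grammar (nested arrays, objects, escaping), whereas this illustrative version repairs only simple top-level `s:len:"...";` string tokens:

```python
import re

def fix_lengths(serialized: str) -> str:
    """Recompute the byte-length prefix of each s:N:"...";  string token."""
    # Naive: assumes no '";' sequence appears inside a value. PHP's real
    # grammar needs a proper parser, not a regex.
    return re.sub(
        r's:\d+:"(.*?)";',
        lambda m: 's:%d:"%s";' % (len(m.group(1).encode('utf-8')), m.group(1)),
        serialized,
    )

# A blind search-replace swapped the site URL but left the stale prefix
# (14 was the old URL's length), so PHP would refuse to unserialise it:
broken = 's:14:"https://shiny-new.example";'
print(fix_lengths(broken))  # s:25:"https://shiny-new.example";
```

It works, but it is exactly the kind of second pass that turns one operation into a fragile pipeline.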
There was no simple solution. Storing PHP serialisations in a database is, in my opinion, a bad idea. But people do it. It became common in WordPress because the core teams have always been rightly wary of database sprawl, and PHP serialisation was just there: an easy way to store structured data without another table, and generally easy for a developer to work with and change. But if you want to leverage the database’s power, you should ideally use proper JSON data types, which MySQL has supported since version 5.7, released in 2015.
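The payoff of a native JSON type is that the database can reach inside the value instead of treating it as an opaque string. A sketch of the idea – using SQLite’s JSON functions here simply because they ship with Python’s standard library, though the equivalent MySQL queries use the same `json_extract` name:

```python
import sqlite3

# SQLite's JSON1 functions stand in for MySQL's JSON type in this sketch;
# the point is identical: query inside the blob, in SQL, no unserialising
# in application code.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE postmeta (post_id INTEGER, meta TEXT)')
conn.execute(
    "INSERT INTO postmeta VALUES (1, json('{\"color\":\"red\",\"sku\":\"A1\"}'))"
)
conn.execute(
    "INSERT INTO postmeta VALUES (2, json('{\"color\":\"blue\",\"sku\":\"B2\"}'))"
)

# Filter on a field *inside* the JSON value.
rows = conn.execute(
    "SELECT post_id FROM postmeta WHERE json_extract(meta, '$.color') = 'red'"
).fetchall()
print(rows)  # [(1,)]
```

With PHP-serialised blobs, by contrast, that filter means fetching every row and unserialising it in application code.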
Except… coupled with the way WordPress supports older infrastructure – current minimum requirements still allow for older PHP and MySQL versions in many environments – developers cannot always commit to doing things the ‘correct’ modern way. The cost of switching later becomes too high as the legacy methods cling on. WordPress essentially supports a twelve-year-old version of database software, and we all carry the cost of that every day we develop on it – even if that legacy support is also why it’s so popular!
So when working in WordPress you should actively avoid creating massive migration or archival headaches for future you or, in business, the future customer. Which means being wary of jumping onto new tech before it is well proven.
So I tend to wait. I don’t think anyone should have been storing JSON or serialised PHP in the database to begin with. Moving quicker than the infrastructure you have to work with creates a gain today for a cost tomorrow. And so it’s been with WordPress.
This problem is common across many fields. “Move fast and break things” is a fine motto, provided you accept the future outcome. But you only have to look at Meta’s software to spot that they have created a huge amount of legacy mess. It has become a ‘doom box’ of code, dependencies and legacy tooling.
All of which is a long way of saying that I don’t like to commit to new tech until I know it is proven and reliable. I hate creating a future liability for an easy win today. And similarly, I don’t like creating a kludge when a better solution is around the corner and I can simply wait, using a well-established method in the meantime.
And that is probably why I am relatively poor, but my customers’ sites have always been rich in stability and archival quality. And, interestingly, those customers have seen a lot of success as they have grown.
