AI is predictive, not intelligent

I hope you haven’t missed the big AI revolution going on right now? No? Yes? YES? You have no idea about it? Where do you live? On Mars? Ah OK, you were kidding. AI is being talked about everywhere, new devices like the Humane AI Pin or the Rabbit R1 are very much about it, and everyone involved is scrabbling around like mad for venture capital to fund their incredible ideas.

Having access to these language models is immense, right? We can sit down, whip out our AI, and enjoy a better-than-life experience. We can ask it questions and often it comes up with remarkably lucid explanations. It seems intelligent. Here’s a conversation I’ve just had with ChatGPT’s latest model, after I asked it to settle into a British conversational style:

Me: The weather’s nice isn’t it?
ChatGPT: Oh, absolutely! It’s a lovely day, isn’t it? Perfect for a stroll in the park or perhaps a spot of tea in the garden. How are you planning to enjoy the fine weather?
Me: I have to work, sadly. In fact I’m working right now.
ChatGPT: Ah, that’s a bit of a shame, isn’t it? But needs must, as they say. Hopefully, you can steal a few moments to enjoy the sunshine, even if it’s just a quick breather by the window. Anything I can help you with to make your workday a tad brighter?
Me: Give me a joke to help me get over my vague sense of imposter syndrome!
ChatGPT:
Of course! Here’s a bit of light-hearted humour for you:

Why did the scarecrow win an award?

Because he was outstanding in his field!

Hopefully, that brought a smile to your face. Remember, everyone feels like an imposter at times, but you’re doing brilliantly!

It’s a decent conversation, although no British friend I know would chat like that. They’d be less polite and more vaguely abusive, to be honest. But it does seem intelligent. Then I can ask it a tricky question:

Me: I’m struggling with a maths question. I have a formula for a parabola, and I need a graph and the location of the y-intercept: f(x) = 3x² + 7

ChatGPT:

No worries, I’m happy to help with that. Let’s break it down.

The given function is:

f(x) = 3x² + 7

To find the y-intercept, we need to determine the value of the function when x = 0. Plugging x = 0 into the equation:

f(0) = 3(0)² + 7 = 7

So, the y-intercept is at (0,7).

Now, let’s plot the graph of the function and visualize it. I’ll create a graph for you.

Graph by ChatGPT

Here you go! The graph of f(x) = 3x² + 7 shows a parabola opening upwards. The y-intercept, marked in red, is at the point (0,7). I hope this clears things up for you!

That’s so freaking clever!

Except… except… it’s not actually clever.

Let’s try asking it a question that almost any seven-year-old child would get right:

As I was going to St. Ives, I met a man with seven wives who was going to St Ives. How many people are going to St Ives?

I think you’ll agree that this riddle is pretty simple. I’m going to St Ives, the man is going to St Ives, and the seven wives are going to St Ives. Look at this madness I got back:

The riddle goes as follows:

“As I was going to St. Ives, I met a man with seven wives, Each wife had seven sacks, Each sack had seven cats, Each cat had seven kits.”

The key detail in the riddle is the first line: “As I was going to St. Ives.” The person telling the story is the one going to St. Ives. The man with the seven wives and all the others are simply met on the way; the riddle doesn’t specify their destination.

Therefore, only the narrator is confirmed to be going to St. Ives. The answer is:

1 person (the narrator) is going to St. Ives.

And that reveals the weakness of large language models. There’s no inherent cleverness. The model matches your prompt against patterns it has seen before, picks what it thinks is the most likely question, and then generates the most statistically plausible response. If the problem is very well known, it can do some really quite incredible things. But give it a novel variation on an old problem and it will absolutely fall apart.

Life is full of novel problems. Don’t be misled by the clever bullshit attitude of a large language model, but at the same time, if you know its limitations, it can be a wonderful tool. Use it wisely and carefully, or there’s a risk it’ll make a fool of you. And yes, the post image at the top is, indeed, AI generated – because I have bugger all budget and this is a personal blog. Bite me!

The astonishing power of modern computing

Being very old (or at least, that’s how I feel in tech!) means that, coming up on nearly forty years in the industry, I’ve seen some changes.

My first computer at home, that I owned, that I could truly call my own, was a Dragon 32. It was a small, 32KB computer built around the rather lovely little 6809 processor – a CPU that could be clocked at up to 2MHz – and the system as a whole allowed me to learn a heck of a lot about computer science as a geeky teenager who was busy ignoring sports.

It didn’t exactly prepare me for my first coding job, working on an IBM MVS mainframe computer handling the payroll of 70,000 people, and distributing over £1bn a year. In fact, it prepared me terribly, because mainframes back then were similar to desktop computers today – powerful, with a ton of security features, online storage and a lot of data pipelines. But still, I knew various algorithms, knew what compilers were (though I’d only used an assembler really, but it got me half way there) and how to do a lot with a little.

That mainframe, when I started, could do about two hundred megaflops.

And today, in the picture above, I have an M5Stack on my desk. It uses the dual-core Xtensa LX7 processor. As you can see from the picture it’s absolutely tiny. And it can handle the integrated digital camera, WiFi, Bluetooth, USB… everything, really. The little CPU can run at low or high clock speeds as appropriate to the workload, up to 240MHz.

Let’s put things into perspective.

My Dragon 32 gave me 32KB to play with. The S3 Stack has 16MB of Flash and 8MB of RAM – roughly 750 times the onboard storage of my Dragon 32. You can insert a 16GB microSD card; back then, you could put about 200KB onto a C60 cassette tape. So that’s 80,000 times the removable storage! Running a lot faster, too.

In terms of CPU performance, I’m struggling to find directly comparable benchmarks, but the 6809 took 2-3 clock cycles per simple instruction – so at 2MHz you’d max out at around one million instructions per second. Complex instructions were far worse, and moving memory around was slow and expensive. The LX7 gets through around 0.8 instructions per clock cycle per core, so across two cores at 240MHz it can manage several hundred million instructions per second – call it a few hundred times faster than the 6809, before you even consider how much more work each 32-bit instruction does than an 8-bit one.
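For anyone who wants to check my sums, here’s the back-of-envelope arithmetic as a quick script – the figures are the rough ones quoted above, not benchmarks:

```php
<?php
// Back-of-envelope comparison using the rough figures quoted in the text.
$dragon   = 32 * 1024;                  // Dragon 32: 32KB
$m5stack  = (16 + 8) * 1024 * 1024;     // M5Stack: 16MB flash + 8MB RAM
echo 'Onboard storage: ~' . round($m5stack / $dragon) . "x\n";      // ~768x

$cassette = 200 * 1024;                 // ~200KB on a C60 cassette
$microSd  = 16 * 1024 * 1024 * 1024;    // 16GB microSD card
echo 'Removable storage: ~' . round($microSd / $cassette) . "x\n";  // ~80,000x

$ips6809  = 2000000 / 2;                // 2MHz at ~2 cycles per instruction
$ipsLx7   = 240000000 * 0.8 * 2;        // 240MHz, ~0.8 IPC, two cores
echo 'Instruction rate: ~' . round($ipsLx7 / $ips6809) . "x\n";     // ~384x
```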

But here’s the thing… the little computer I’m holding in my hand is at least as capable as the contents of the data centre I started to work at in 1987.

And that, my friends, is why I feel very very old in this sector! Things have changed so much.

Yet at the same time, they haven’t. UX didn’t have a name back then, but careful thought in the design of a screen could save people a huge amount of time or reduce data errors. Saving CPU cycles then meant less of a bill (we paid by CPU time on mainframes) and could save our team tens of thousands a year. Today, it may feel as if the CPUs are endlessly powerful, but using lower power techniques can dramatically cut down on resource use – meaning less environmental damage. I know at work we’ve taken systems running on top-spec, 32GB web servers down to 512MB machines! It’s incredibly satisfying to reduce resource consumption by so much.

Let’s look at it another way. The sophisticated corporate payroll for 70,000 people that I worked on in the eighties and nineties could, with suitable care, easily run today on the tiny little computer I’m holding in my hand here.

Which leads me to the real point of this piece – we as software engineers and designers have a duty to minimise resource usage. Yes, that enormous library you’ve included in your project might save you an hour today, but if it increases the memory and power consumption of your system so much that you need piles of cloud computing to support it, then it’s not a job well done – it’s a kludge you’ve merely hidden. We have an ethical imperative to do better in systems development.

Also, low power computing is awesome and fun. Try it!

The productivity paradox

I came across this chart recently and it gave me pause for thought, as someone whose life’s work is meant to improve productivity. It shows that although technologies have kept growing in power, productivity growth suddenly dropped away from its long-term trendline after the 2008 global financial crisis. Why?

This is where I fall back on an old story from my time as a developer trying to make things more efficient. I worked with my colleagues to bring a client company the equivalent of about ten hours a day of labour savings, by replacing their previous ‘loose’ system with a well-structured database and better data integrity. As they expanded, each branch would save at least an hour of time, and they were no longer capacity-bound at each branch, where more customers had linearly increased administrative work in a difficult recruiting environment. The project paid for itself in under a year.

It worked exceptionally well and the person we’d worked with said their staff were even worried redundancies might be on the way! This was never going to be the case in a growing company, because there’s an infinite amount of work to do.

All was good.

Then, for some reason, our contact’s role was diminished and they eventually left the company. The knowledge of the improvement left with them. The board’s innate distrust of anyone, and their perception of us as no more than a cost base, made it very difficult for us to help them without dramatically increasing costs by gold-plating every spec and decision.

This left the client unable to gain further advantage from our skills. It was hard for us to watch, and we could do little for them other than look for cost savings when, in reality, they would have saved far more by using us more. Instead they opted for a recruitment-based strategy to feed their growth, and we parted ways as they shifted the system to a maintenance partner.

In all my thirty years of development experience I’ve seen this again and again: instead of sticking with a solid set of developers who have learned the details of the business, companies chop and change to save money. Treat developers as a cost base and you lose so much knowledge that you’ll gain productivity nowhere near as quickly as you would by treating it as a partnership.

I don’t feel this is unique. I see lots of projects follow the same arc: a big development phase, then a cost-reduction period that loses all the momentum of ongoing improvement.

Solving the productivity paradox

A strategy I now suggest when selling is to spend 60% of your planned project budget in year one, then 40% in year two, 30% in year three, and 20% a year throughout the rest of the product lifecycle. Software isn’t like building a bridge, where things stabilise and then stay the same for thirty years, sadly.

Fundamentally, I think corporate culture around custom software became both cautious and speculative following the global financial crash of 2008 and only invested in technology if it felt like it would create some impressive unicorn of an offering. We saw lots of developers tied up with venture capital chasing big things like blockchain and AI whilst ignoring the importance of daily incremental improvement that adds up to a huge return over time.

The answer, then, is that if you have access to capital, you should stop chasing unicorns and start thinking about continuous, pragmatic development aimed at clearing all the little productivity blocks in your firm. Look to Kaizen-style methodologies to help structure your processes around this.

What do you think? Feel free to comment!

Off-Cloud Backup for Heroku apps – a possible answer

The Heroku platform is a fantastic way for a small development company to avoid having to bother with devops. We’ve been using it at interconnect/it for years now, and whilst it’s not entirely perfect, it takes away one set of headaches and does so at a reasonable cost.

All the services offer backups, and the VMs are built from scripts and are essentially read only. So if something catastrophic happened to one of our databases, we can roll back a day and be OK. Except… let me explain my fears around data.

Trust issues with providers

In our very earliest years we used a VPS provider that used Plesk. Everything was solid and stable until one day, we got a report that a site had been hacked. Then another. It turned out that a vulnerability had exposed our sites to being hacked. And they were. This resulted in a big old clean up operation and restoration from backups. Except the daily backups we’d been paying for turned out to be weekly. So the backups we had were three days old. Ever since then, I’ve preferred to have a way of pulling backups separately to a server under my own control, unless the provider is Kumina, because I know the people so well that I’m 100% certain they’re as paranoid as I am and they’ve never ever let me down. But in the era of hustle culture bros who move fast and break things, you need a safety net.

Creeping corruption

My next fear is corruption you don’t notice immediately. I can well imagine that if all the metadata for the posts on a site before a certain date got wiped out, most people wouldn’t notice for ages. Imagine you’ve got a site with 200,000 posts, and various elements of the first 100,000 were damaged – the long tail matters to these sites, and suddenly it’s all gone. Well, thank heavens for backups!

Except, of course, most cloud providers don’t provide substantial generational backups. Instead, they keep a few days or a week or so. And that’s your lot. If you need to go back months, you’d better hope a developer in the company left a dump on their laptop somewhere – except, of course, that very few developers keep dumps of production systems on their laptops. It’s bad practice, it only tends to happen in exceptional circumstances, and such dumps should be deleted soon after use.

How we fix it today

In the end, I asked one of my Linux-oriented colleagues, Gianluigi, to create a service that would connect to Heroku’s API, download every database, and sync every S3 bucket. It worked, with some limitations. More recently – he’s since left, but remains a good friend – he gave me a crash course in Linux sysadmin basics and I was able to extend and improve some bits. The system is a service written in PHP that does all the work. I then asked another colleague, Jack, to extend things to cover the PostgreSQL databases we now also used, and to create a dashboard so that I could monitor the backups easily without resorting to logging into the backup servers.

The dashboard deliberately doesn’t run on the backup servers. I needed to keep the backups as safe as possible – they’d be a great honeypot for a hacker – so they’re onioned away, and the backup service isn’t reachable from outside. Instead, it messages the dashboard with information about the backups taken. The dashboard also provides details on application and framework versions, for security monitoring and making sure updates have been applied appropriately, and it sends me a daily summary email showing the storage space available and what was backed up in the previous 24 hours.
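For flavour, here’s a minimal sketch of the shape of such a service. To be clear, this isn’t our actual code: the app names, bucket naming and dashboard endpoint are invented, and it assumes the standard Heroku and AWS command-line tools are installed and authenticated on the backup host.

```php
<?php
// Minimal sketch of an off-cloud backup loop (illustrative, not production code).
$apps      = ['client-app-one', 'client-app-two'];        // hypothetical app names
$root      = '/srv/backups';
$dashboard = 'https://dashboard.example.com/api/report';  // hypothetical endpoint

foreach ($apps as $app) {
    $dir = "$root/$app/" . date('Y-m-d');
    if (!is_dir($dir)) {
        mkdir($dir, 0700, true);
    }

    // Capture a fresh PostgreSQL backup on Heroku, then pull it down.
    shell_exec('heroku pg:backups:capture --app ' . escapeshellarg($app));
    shell_exec('heroku pg:backups:download --app ' . escapeshellarg($app) .
               ' --output ' . escapeshellarg("$dir/db.dump"));

    // Mirror the app's S3 bucket alongside the database dump.
    shell_exec('aws s3 sync ' . escapeshellarg("s3://$app-assets") .
               ' ' . escapeshellarg("$dir/s3"));

    // The backup host accepts no inbound connections, so it pushes a summary
    // out to the dashboard rather than letting the dashboard poll it.
    $payload = json_encode([
        'app'   => $app,
        'bytes' => filesize("$dir/db.dump"),
        'at'    => date('c'),
    ]);
    $ch = curl_init($dashboard);
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => $payload,
        CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
        CURLOPT_RETURNTRANSFER => true,
    ]);
    curl_exec($ch);
    curl_close($ch);
}
```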

Here are a few screenshots of the system, with some censoring, but I hope you catch how it works from what you see.

To commercialise, or not?

And now to one of the reasons why I’ve decided to write about this. In the past, I created the first version of Search Replace DB – a quick script and algorithm I knocked up to parse a database and search and replace items in it. A fast, dangerous tool that I released as free open source code. Other people took it and commercialised it into successful products. We didn’t. And with the code now integrated into wp-cli, most devs would use that in preference (myself included!), except in those tricky situations where command-line access isn’t possible – mostly on cheap hosts. I think we were right to release the code; where we failed was in realising the commercial possibilities. And that’s left me a little torn.

So here’s my dilemma – it’s not easy to set up services in Linux, but once you do, these things just run and run. It’s also not going to be the easiest thing to work with, so I anticipate support costs being quite high. It’s proper server-level work. And I certainly don’t feel inclined to build a SaaS that acts as a conduit for people’s backups – it’s just too risky to have a central pool of lots and lots of backups, and people find them lurking on S3 buckets all the time. So I want to put this out to the community. Is this something you’d find useful? Let us know in the comments below. If we did release it, the code would be open source, but access to the latest versions would be restricted.

I’m looking forward to hearing your thoughts!

Interesting times in the world of software

About a decade ago, I was at a conference and talking to a fellow developer (I still call myself one, even though I don’t code so much these days) when he giddily told me about the funding he’d got for building a new piece of software he was hoping would make it big. It was a two year project and he’d got £100k funding. I asked if it was just him… and no, he had a colleague. So £100k, for two people, for two years? £100k didn’t sound a lot… £25k/yr each, basically. Or what you can earn in a much simpler tech support role. I decided not to say anything and leave the poor guy in peace, although this sort of work seemed a lot like gambling to me.

Today, things are different, although there’s still a sniff of gamble about it overall. If you’re a developer it’s relatively easy to find a highly capitalised employer, positively dripping with money, who will pay you £60k-£90k a year – potentially quite a bit more. This reminds me of the late-nineties dotcom boom. In 1997 I quit my safe but somewhat dull job at a multinational to become a freelancer, doubling my income almost immediately and quadrupling it a year later. The new work was, in some ways, more interesting. It was also a lot more stressful, bad for my health, and definitely wasn’t the most exciting coding work. But it paid, and I honestly don’t blame developers who decide to do what I did 25 years ago. I think it was also a large part of why I had a heart attack in 2019… living out of hotels for a decade wasn’t healthy, and cheese became far too much of a staple for me as a vegetarian. However, the money was very good and it set me up. When you’re poor it’s very hard to catch up, and a good income was necessary for a while.

I bring this up because today I’m not ‘just a developer’ but actually run a web development company that specialises in websites and custom software for clients. And things are happening today that are reminiscent of the dotcom boom of the late nineties. 25 years have passed, but people don’t really change nearly as much as you may think.

The dotcom & Millennium Bug era

The late nineties were a period of post-recession growth and capital release. Banks had been deregulated, money was being created in the way it can be, and we were riding high on increasing productivity. Life felt good. And when money is created it can be invested.

There’s only one little problem with that. Sometimes, people get giddy and start splashing the money out too readily. The boom of the late nineties and early noughties, and the deregulation that encouraged it around the world, eventually led to the financial crisis of 2008. I’m a bit of a cautious soul, so even though I had plenty of income, I resisted borrowing too much to get a bigger house. In some ways I was foolish, because I could now be living mortgage-free in the house I have. But I figured that not having a big mortgage would afford me some other freedoms and I could use my money elsewhere. Mostly I just invested in solid companies. Friends, however, were telling me to invest in dotcoms. But I looked at the fundamentals. One example was a firm called Vocalis. They did, basically, telephone voice services software. A small team, with some crazy valuation effectively equivalent to £20m per member of staff. I rightly reckoned that was mad. My friend went ahead and pumped money in, and I mocked him. For a while I looked the fool – the value of the shares rose and rose. Then, of course, the crash came.

Right now, there are loads of speculation bubbles. At the café at work I was trying to explain Bitcoin’s fundamental problems to our barista, when our receptionist came over excitedly wanting to know more. Both seemed interested in getting involved. That tells me the crash is likely imminent. They’re both lovely people, but in the economic chain they’re nowhere near the top, which means the speculation bubble is reaching its limits.

“If shoe shine boys are giving stock tips, then it’s time to get out of the market.” – Joe Kennedy, 1929, as the stock market was about to crash and usher in the Great Depression

So the dotcom boom and Millennium Bug led to a boom in demand for developers. New software was being created to replace supposedly outdated software that couldn’t be fixed (narrator: “It could”) and salaries were rocketing. I took advantage of that boom. I also knew it wouldn’t last. And it didn’t. My day rate as a PeopleSoft developer went from £200 a day in 1997 to £600 in 2002. It could have been higher. Cisco did an amazing job of raising funds in that era, and I remember they kept offering me more and more to work for them in the Netherlands. But I didn’t really want to move. I never really chased the money, so that’s about where I peaked. But I remember people with the right skills, experience and self-confidence were on as much as £1k a day – getting towards £2k a day at today’s prices. Some skills seen as super hard and rare could command double that. Most people didn’t, of course, make nearly that much, and some preferred a job with reasonable hours close to their families – a very valid and decent decision. But I was single with no ties.

There are a lot more developers around today – good incomes have brought many people into the trade. I meet people who called me a nerd in the eighties and now they’re working in IT. It’s a bit weird.

Today’s situation

Now it’s a strange market. Rates still aren’t at dotcom levels once adjusted for inflation, but they’re close. You can do very well in tech. In my little firm we typically pay around £40k for a developer, plus various benefits, kit, resources and so on, meaning you’d need to make around £70k as a freelancer to equal it – at least, that’s the way I’ve always calculated it. I nearly swapped my £600 a day for £60k a year, and I kind of regret not doing that.

But why have the rates risen? Well, there are a few hot areas, and they can be summarised as AI, analytics, mass market apps, and blockchain. I’ll discuss each briefly:

AI

This is a hot one – the idea that we can replace rooms full of people doing dull and (from the company’s perspective) not very high-value work, such as service desks, with AI bots is very attractive. It won’t work, though. Most “supposedly AI” bots are just following decision trees, and the only bit of AI is in parsing the meaning out of a sentence in a very tightly defined context. AI is useful today for categorisation problems – e.g. looking at a picture and deciding “this is a cat” or “this is a threatening comment”. It’s not brilliant at the job, but I like that an AI can work out which pictures are of my Mum, for example, even if it misses about a third of them… it still makes my life easier. A bit. But what an AI can’t do is write a decent blog post. Sorry, it can’t. They’re awful at it. There’s loads of AI-generated content out there and it feels obviously fake. The main job of these AI-generated blog posts is to trick other AIs (Google, Bing etc.) into categorising a website as useful. And because AIs make toddlers look worldly-wise, they can be easily fooled… which means you can’t trust them with anything of real importance. Like your business decisions.

But, it’s a hot keyword, and naive venture capitalists like the idea. So in comes the money.

Analytics

Tracking and stalking customers across the internet is very attractive to advertisers who believe it makes their ads more relevant to consumers. I’m not convinced. People often find it creepy – they feel constantly stalked. They visit the website of, say, a printer supplier and then receive ads for a month for printers – and not only from that supplier, but for other printers too, because the tracking provider is cheerfully using your data as a supplier against you and selling that information to your rivals! I think advertisers are starting to cotton on, but are unsure what to do… though I know there’s a lot more direct selling of adverts between publishers and advertisers than there used to be.

But, the siren call of analytics is strong, and people love a nice chart on which to justify a decision, so the more nice charts your system can create, the more people will pay to use it and try to gain an advantage over competitors. And advertising is huge, so in pumps the money. For now.

Mass market apps

Can you build the next Facebook, Instagram, or Slack? What’s the potential for an app that lets people read books from any publisher for a fixed monthly fee? How about an app that revolutionises food delivery? Interestingly, some apps are about replacing old and inefficient intermediaries and putting new ones in place. Uber is a nice way of hiring a minicab, with flexible pricing that rewards drivers for being available at the right time. They don’t disintermediate, however: both the driver and the passenger are customers, and the new intermediary takes its share.

If you can replace old intermediaries you can make a lot of money. Imagine taking 0.5% of every single financial transaction, like Visa do? That’s a lot of money. Then you have intermediaries between the card firms, providers, and networks, such as Stripe… and then there are those replacing old ones, like Wise, for money transfers across borders.

What other things can be improved? Well, literally anything.

But most attempts to build these apps and the supporting infrastructure are doomed to never turn a profit.

Blockchain

Blockchain is a really interesting concept for a public ledger, using a mechanism called proof of work to make it hard for any one person to dominate the network and win the consensus process on new transactions. There are theoretical ideas out there to improve on this, but at the moment they remain just that – unproven.

And it’s a scam. Pure and simple. But it’s a hot topic. Bitcoin, Ethereum, Dogecoin and many others are actively speculated upon, as well as being used for the exchange of value – often in a hope to evade regulators. It appeals to the natural rebels amongst us because it’s outside of government control… and given that governments aren’t always a force for good, I get that.

Problem is, blockchain breaks the rules of good software development. Look at the big O notation: software has to follow certain scaling rules or it will fail at some point and need to be re-engineered. Big O matters. I don’t have academic access to papers, and the internet is full of vested interests pretending that blockchain scales just fine. I used to see the same in WordPress land, where people said the software scaled fine… but it doesn’t. In WordPress we get scale by putting a layer between WordPress and the internet to balance things out – the work the software itself does grows in line with the number of people talking to it. We can define that as O(n), and so long as you know what you’re doing, that’s OK. We can live with that. But the consensus mechanism required for multi-node agreement on transactions will, by its nature, follow a worse curve: each node does O(n) work, so the total work done across the network grows as O(n²), plus a bit for network latency and overheads. Yet Bitcoin transaction costs aren’t following that curve, in spite of huge interest, because – I reckon – most Bitcoin trades aren’t real. (The little sketch below shows the shape of that curve.)
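Here’s that scaling argument as toy arithmetic – nothing clever, just showing how per-node work and total network work diverge if transaction volume grows in line with the number of nodes:

```php
<?php
// Toy numbers for the scaling argument above. Assume transaction volume grows
// in line with the number of nodes, and every node must verify every
// transaction: per-node work is O(n), total network work is O(n^2).
foreach ([10, 100, 1000] as $n) {
    $perNode = $n;        // each node verifies n transactions
    $total   = $n * $n;   // n nodes each doing that work
    printf("n=%4d   per-node work: %4d   network total: %7d\n", $n, $perNode, $total);
}
```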

Yes, that’s right. And why is that? It’s because wide boys, crooks and the overly-optimistic are involved. Given it is, by design, a pyramid scheme, it will have to fail at some point. But people are motivated to hide that, so there are Bitcoin tracker schemes – rather like those gold purchase schemes that never hold the asset in question – which will pump and pump values as hard as you like. And as long as there are new people coming in, as our receptionist wishes to, all is good.

And there are enormous amounts of money to be made. As in a goldrush, the people making real money are the shovel makers and traders. And they need developers. So for as long as there’s money to be made, coked up wide boys will be gurning their way through stressful meetings, fidgeting and anxious to cash in before it crashes out. You can earn a lot there. For a while.

OK, so thanks for sticking with a very long essay. What does it all mean, then?

Well, it means developers are really expensive right now. Small firms that do actual useful work and aren’t highly capitalised (like mine) can’t grow because we can’t suddenly charge our customers double for the work so that we can compete against these booms. It’s as if a very rich person has moved into your town and hired all the builders possible to create a huge mansion. They even approached builders working for firms and offered them double to come build that mansion. Soon builders are all swanning around town in Teslas and feeling pleased with themselves for being so cunning as to be in the building industry.

Same in software. Locally there’s a Tesla with a crypto referencing private number plate and a young, bearded and muscular techbro driving it. Fine, I’m not going to judge. He’s happy and making good money.

But if the rich hire all the builders, the rest of us get priced out. Same in software. Small firms are going to find they can’t afford websites unless they use some cheap web-builder platform – it’ll give a less good solution, but it’ll do the job. Ish. And the firms that can afford more will do that bit better. And better. And the gap will grow.

At my firm I’ve had to raise salaries, and we now struggle to clear a profit. I’m fiscally conservative, so we’ve always had decent cash reserves, and that lets us ride out the storm. From 1997 to 2002 dev rates went crazy; by 2005 they were back to normal again. We as a firm couldn’t handle eight years of that. But it’s not quite the same as back then – you can now hire developers globally and have them work remotely, if you really wish to, which can save some money and also help those countries with extra foreign revenue. I, however, really like quality and good communications, and I find that a geographically tight team works best. It also makes it easier to bring new people into the trade. So, for now, I’m sitting tight. I won’t seek venture capital, or borrow. And if the worst comes to the worst, we’ll add AI to something that does basic statistical analysis, and blockchain to something with two computers in the network, and hope someone out there fancies throwing us some money so we can join the party. In the meantime, however, there’s still a healthy living to be made as a business doing useful things and avoiding the hot trends. I never set out to be rich, merely secure – I’ll ignore the mansions and do my own thing, creating good code for good people.

n.b. about the above – the above isn’t a paper. It’s a set of opinions designed to inform and illuminate about what’s happened. It relies on anecdotes. Don’t take it too seriously and don’t use it as the basis for what you want to do with software and investing in software. Or crypto. Do your own thing with the information you gather from multiple sources. Also remember that a lot of people say misleading things because it’s in their interests to do so, and that you shouldn’t trust a random blog or news source on the internet. Mine included.

Twitter is about to die. Here’s why.

Something odd has started to happen on Twitter for me, and it’s cutting my usage of it down quite dramatically. Why? It’s because it seems the spammers are winning…

https://twitter.com/nrhansonp/status/318959016602660864

https://twitter.com/fierofan11q/status/318758157247709184

https://twitter.com/DavidThiebaudc/status/318665593605738497

https://twitter.com/catazettler3/status/318327663506890752

https://twitter.com/clintonxbaurer/status/318093880182591488

Every. Single. Day. I get lots of these. Some disappear into the ether; others remain on the Interactions tab on Twitter. Given that these tweets outnumber my genuine interactions, there’s a problem: whenever a service gets beyond about 50% spam, its usefulness drops off. Email crossed that line long ago and suffered for it, but commercial spam filtering is now so good that most of us have a provider that keeps it useful.

The trouble is, I’m powerless to stop this Twitter spam. It’s not as if I can install a filter in my Twitter client, and marking content as spam is astonishingly clunky in Twitter.

I’ve made some great friends on Twitter, but I’m fed up of seeing the alert pop-ups on my phone, so that’ll be the first place I disconnect it from. And then it gets that little bit less useful… I’ll start to forget to check-in. Suddenly, a few days will pass when I don’t look on Twitter.

I doubt this will affect everybody, but it could be enough to mark the end of Twitter as a global and egalitarian short-form publishing platform. I remember when CIX died for me – there it wasn’t the spam, but the number of grumpy nutters with too much time on their hands driving out the useful but quiet individuals.

I know I’m using Twitter less these days, but the general level of activity amongst the older Twitter community really appears to be dying down. Celebrities and the media are still busy, but they alone can’t sustain the system – it needs the grass roots using it to keep it relevant.

I hope Twitter can fix this, or soon I’ll just give up. What do you think? Are you being spammed to death on Twitter? Are you using it less and less?

Getting Quicker

One of the most important things that gets forgotten when running a WP site is that performance matters.  We see many sites with page load times way in excess of 2,000ms per page.  Often the site just gets progressively slower over time and the change isn’t really noticed.  That had happened with mine; though I’d made tweaks in the past to help, I still wasn’t happy.

This is bad, especially now that Google rewards speedy load times with higher rankings.

So I knew that the increasingly sluggish performance of my site was an issue.  The crud had built up, and in rebooting I hoped to dramatically improve responsiveness.

And I did:

It’ll be interesting to see the impact of this over time, but I’m pleased with the results so far.

An interesting graph of site performance over the past few years:

As you can see – the performance early last year got particularly bad.

It’s worth noting that I don’t run any caching or CDN on this site – it’s never that busy to be worth the work.

I’m hoping that I can now keep average response times under 700ms.

One lesson I hope you do take away from this is the importance of continuously monitoring your site’s responsiveness by using a service such as Pingdom.com.
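If you want a flavour of what such a service does, the core idea is tiny – time a full page fetch and flag it when it drifts. A rough DIY sketch (the URL and the 700ms threshold are just placeholders):

```php
<?php
// Bare-bones response-time check: fetch the page, time it, complain if slow.
// Run it from cron and log the results to build up a trend over time.
$url = 'https://www.example.com/';
$ch  = curl_init($url);
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_FOLLOWLOCATION => true,
]);
curl_exec($ch);
$ms = curl_getinfo($ch, CURLINFO_TOTAL_TIME) * 1000; // total fetch time in ms
curl_close($ch);

printf("%s: %dms\n", $url, $ms);
if ($ms > 700) {
    error_log("Slow response from $url: {$ms}ms"); // or email yourself
}
```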

Blog “Reboot”

Hello – here’s the refreshed blog. I’ve decided to revert to a more typical blog format, after many months of soul searching on the issue. I previously had a layout based on a framework we used at interconnect/it for a couple of clients.

But not only have I opted to switch to a blog layout, I’ve decided to use an off-the-shelf theme.  I’m now using Khoi Vinh‘s Basic Maths WordPress theme.

Why?

Well, it’s a lovely theme, for starters.  The typography is pretty good.  The archives page is brilliant (check it out) and should be the standard bearer for all themes’ archive pages.

But the real question for many, I suspect, is why I’m not using an interconnect/it designed theme.  Well, for starters, interconnect/it hasn’t produced an off-the-shelf theme in years.  It’s just not our business.  So rather than use a product of ours, we’d have to spend good and valuable time on creating a new theme.  And, well, why would we want to do that?

Lots of reasons, actually.  I could have a theme coded at the office that really shows off what we can do.  But the problem with that is that there’s not much need.  My blog is not an important one.  It isn’t about WordPress (most WP related content will be on our company site, not my personal one) and it just doesn’t get much traffic.

I run a business.  Its purpose is to make money, employ five people, and, with a bit of luck, turn a reasonable profit.  Its job is not to service my ego or make me look good.  A really good theme costs the equivalent of around £10k-£20k of chargeable time to design, code, test and implement.

Given that we’re turning work away, I thought “why bother?”  And decided to go shopping for something.

So What’s It Like?

It’s actually quite weird using somebody else’s theme.  I tried a few out, and here are the things I learned that will hold us in good stead.

Themes don’t do enough to make life easy.

No really, they don’t.  One of interconnect/it’s biggest challenges is making sure that WP is as easy to use for clients as possible.  This means following standards, but it also means using some little tricks that help out – for example, registering and setting plenty of different image sizes, and setting/over-ruling whatever the media settings say.
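To give a concrete example of the kind of trick I mean – this is roughly what we drop into a theme’s functions.php.  The size names and dimensions here are invented for illustration, not any kind of standard:

```php
<?php
// Register a generous spread of image sizes up front so editors get sensible
// crops without ever touching the media settings.
add_action('after_setup_theme', function () {
    add_theme_support('post-thumbnails');
    add_image_size('hero',    960, 360, true);  // hard-cropped banner
    add_image_size('listing', 300, 200, true);  // archive/listing thumbnail
    add_image_size('inline',  580, 0,   false); // soft-scaled inline image
});
```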

Migrating WP content really sucks.

There’s a fundamental flaw in the default WP export/import.  If you have inline images, although the importer has the ability to download and attach the images to your new site, it won’t change the links in your content.  And if you do a search and replace, and your image sizes have changed, you’re lost.  Totally – the img tag will point to a file that doesn’t exist.

So what do you do?  Well, usually if I’m moving a site from one server to another, even switching domains, it’s a non-issue.  I have my tools.  But if you’re starting from fresh and working like an end-user would then you have to go through every single damn post in order to fix the images.  Every post with an image in it.  That’ll take a while.
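The domain part of the problem, at least, yields to a single SQL pass – here’s a hedged sketch of the sort of thing my tools boil down to.  The domains are placeholders, and note it only touches post_content: URLs buried in PHP-serialized options and meta need a length-aware replace, and dimension-suffixed image filenames are a whole other headache.

```php
<?php
// Rewrite upload URLs in post content after moving domains. Deliberately
// simplified: serialized data and resized-image filenames are not handled.
global $wpdb;
$wpdb->query($wpdb->prepare(
    "UPDATE {$wpdb->posts} SET post_content = REPLACE(post_content, %s, %s)",
    'http://oldsite.example/wp-content/uploads/',
    'http://newsite.example/wp-content/uploads/'
));
```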

If you’re really geeky, you’ll sort it, but it takes time.  Way too much time.  This kind of stuff needs to be sorted and it’s something we may look into as a contribution to the WP project.

Some Plugins Leave Lots of Crud

The reason for a reboot was that I felt my site’s DB had been filled up with all sorts of crud.  Lots of plugins create tables, leave options, and so on.  Surplus tables have little impact beyond cluttering the place up, but options – lots and lots of them – do carry a minor performance hit, and they add up.
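You can see this for yourself: autoloaded options are fetched on every single page view, so a quick look at the biggest ones (run inside WordPress) shows what long-departed plugins have left behind:

```php
<?php
// List the heaviest autoloaded options - leftovers from old plugins add up.
global $wpdb;
$rows = $wpdb->get_results(
    "SELECT option_name, LENGTH(option_value) AS bytes
       FROM {$wpdb->options}
      WHERE autoload = 'yes'
      ORDER BY bytes DESC
      LIMIT 20"
);
foreach ($rows as $row) {
    echo "{$row->option_name}: {$row->bytes} bytes\n";
}
```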

Other plugins leave hooks, don’t deactivate properly and so on.  And over the years, I’d been through an awful lot of plugins.  The site hadn’t been redone since WP 2.0 had been set up on it.  I felt it was getting sluggish.

So… there are beautiful and amazing themes out there, and WP is wonderful, but there are little things that could make life just that bit better.  Better migration tools, a better system of managing images within content and their migration, and a better system for activating themes so that image sizes are better handled.

Is it a lot to ask?  Well, we’ll see what we can do about that!

WordCamp UK – Great Stuff + a Little Controversy

I went to WordCamp UK 2010 in Manchester… this is my write-up of the event, and its controversies along with my presentations…

I’m just settling in at the office having spent the weekend at WordCamp UK 2010, staged in Manchester – a community event for WordPress users and developers.  I gave two presentations: one about WordPress in Big Media, and another about WordPress in the Enterprise.  These followed on from presentations given at last year’s WordCamp.

The Craic

The second WordCamp UK Logo
Yes, this isn’t the logo actually used, but I prefer this one :o)

I’m going to say now that one of the key elements of a good conference or unconference is the socialising – this is where you meet people, bond with them over beers/food/dancing and form alliances that in the future could prove to be very powerful.  You certainly get to make friends and feel like you’re a part of an actual community, and this happens in a way that you’ll never be able to reproduce with online technology.  As a consequence it’s no surprise that the awesome Thinking Digital conference has been nicknamed Drinking Digital by some wags.

As ever, Tony Scott excelled himself by getting us access to the famous Factory Manchester (FAC251), which also happens to be across the road from a magnificently geeky pub that sells good beers, has various classic 8-bit and 16-bit computers adorning the walls, and classic arcade games on free play.  Awesome.

The Presentations

There was a typically varied range of presentations running across three rooms, along with other folk busy coding up for the WordHack (the fruits of their labours are online).  One stream that particularly caught my attention was a sequence of sessions involving John Adams of the Department for International Development.  He ran a free-form discussion group on testing strategies, which was followed by an interesting talk on PHP unit-testing by Nikolay Bachiyski of GlotPress fame.  This session showed up the general lack of structure in the testing of WordPress core code, plugins and themes.  Although the approaches used are probably fine for a publishing platform, they would struggle to gain ISO approval.  In other words, you wouldn’t want to fly on a WordPress-powered plane!

Other presentations I particularly enjoyed were Michael Kimb Jones’s WOW plugins, and Toni Sant’s very under-attended Sunday morning slot, where he discussed the way WP has helped with a range of Maltese websites.

The Controversy

What’s a WordCamp without at least a little controversy?  For the attendees of this one, though, this was a biggie… Jane Wells is Automattic’s Master of Suggestion (seriously, that company has some weird job titles), and she suggested that we shouldn’t have a WordCamp UK, but instead locally organised WordCamps for individual cities.

There are a number of issues I have with this:

  1. Everyone in the UK knows that quite quickly WordCamp London would become the big one, with all the attention in both media and attendance.  It would quickly dominate – helped in large part by the enormous population density of the capital.  A WordCamp UK held in London would be fine and popular (also considerably more expensive), but that’s all that’s needed.
  2. Many British cities have intense rivalries, whilst we all still stand together as a nation – there are folk in Glasgow who would never attend a WordCamp Edinburgh, but would definitely be more interested in a WordCamp Scotland.  End result?  Cities would have small attendances by and large, and our impressive capacity for indifference to minor events would mean they’d end up as little more than tiny, cliquey gatherings.  Anyone who’s tried to run GeekUps will understand this problem.
  3. A lot of work, energy and our own money has been spent on building up WordCamp UK.  Is Jane seriously suggesting we should dump that?
  4. What is Jane’s authority on this?  She’s simply an Automattic employee.  We chose WordCamp UK and its structure – it’s ours.  If someone else wants to run a WordCamp UK in the country they’re perfectly entitled and there’s no real reason why we couldn’t have three or four running each year – that would be a huge success.  A highly capitalistic organisation that is just one of thousands of contributors to the project and which plays no part in actually running most WordCamps shouldn’t get so involved.
  5. The UK is also very small – 90% of the population can reach all past WordCamp UKs in less than 3hrs – there is no real problem about accessibility.
  6. None of the UK’s key WordPress community members want to give up WordCamp UK.
  7. Jane admitted only six or seven people had complained to her about the situation, two of whom turned out to be in Ireland – which, except for a small part, isn’t in the UK at all.  She couldn’t confirm whether they were Northern Irish or not, which was something of a poor mistake to make in front of 150 or so Brits.
  8. Us Brits are a pretty apathetic bunch at the best of times – actually running a WordCamp in each major city would be surprisingly unlikely to happen – there were only two bids submitted for this year’s event – one in Portsmouth and one in Manchester.
  9. The whole point of the *camp suffix is that it’s all free and easy with no big organisations sticking their oar in.  They are inconsistent and joyful.  They’re fun.  Automattic should keep out.
  10. The WordCamp name is not trademarked, and we’ve been using it in the UK for some time now.  It’s ours!

Of course, there are two sides to every argument.  Here are some reasons and benefits for splitting up WordCamps in the UK:

  1. Somebody wishing to run a WordCamp for their city may feel that the UK badge dominates, and that there’d consequently be little interest in a WordCamp Bristol or a WordCamp Salford.
  2. A national event called something like WordConf could happen.
  3. Erm…

Thing is – we can’t necessarily win this battle here in Britain.  We don’t control the WordCamp.org website – Matt Mullenweg does (the domain registration is in his name) – so if they decide to stop the use of the UK moniker and we fight to keep calling it WordCamp UK, there’ll be no ongoing support for the event from Matt and his team.

Which would mean standing up to them.  Do we want to?  Are we prepared for a fight on this?  What do the likes of Mike Little (co-founder of the WordPress project) and Peter Westwood (a UK based core developer) feel about this?

Interestingly we were told the same thing applies to the likes of WordCamp Ireland which will now face this problem – but I wonder if Matt understands Ireland particularly well (we know Jane doesn’t) and that in that country the dominant WordCamp would quickly become an expensive Dublin event.  You may get one doing well in Cork, but Kilkenny, with a population of just 22,000 and which staged this year’s event, probably wouldn’t be able to sustain an annual WordCamp.

So, Jane really has to allow each country to understand its own social constructs and history, and let each community choose how it does things.  One or two may complain, but it’s not possible to please everyone.

And we showed off too…

My company Interconnect IT has released, through our Spectacu.la brand, a number of plugins which you may find useful.

I couldn’t help using the Discussion plugin to run some live discussion sessions.

And The Thanks

I can’t say thank you enough to the people who make WordCamp UK a success for no personal reward.  Tony Scott heads it up, with Mike Little, Nick Garner, Chi-chi Ekweozor, Simon Dickson and many, many more working hard behind the scenes.  Thanks also to Nikolay for letting me play with the fastest 85mm lens I ever saw!  Thank you, you’re wonderful people.

WordPress in the Enterprise Presentation

WordPress in Big Media Presentation

Live Threaded Commenting on WP

At Interconnect IT / Spectacu.la, my colleague James has developed a new version of the popular Spectacu.la Threaded Comments plugin.  It’s not yet in release form, but if you know how, you can grab it from the WordPress.org repository via svn at http://svn.wp-plugins.org/spectacula-threaded-comments/trunk/

I’m bringing it up here because I’ve decided to trial the plugin out here on my own site.  It was designed to work in conjunction with a webinars project, allowing visitors to have an active discussion, in real time, on a WordPress site.  It can be dropped into almost any theme, and adds nicely to the standard WP comments functionality.

Threaded comments are a powerful way to turn your WordPress site into a mini discussion forum.  Adding live commenting can now turn it into a chatroom full of ajaxey goodness.

Try it out below, if you like….