What Really Happened in Y2K? That’s the question Professor Martyn Thomas is asking in a forthcoming lecture and in a recent Chips With Everything podcast, from which I picked a few quotes that I particularly enjoyed.
On why choosing to use two digits for years was arguably a reasonable choice, in its time and context:
The problem arose originally because, when most of the systems were being programmed before the 1990s, computer power was extremely expensive and storage was extremely expensive. It’s quite hard to recall that back in 1960 and 1970 a computer would occupy a room the size of a football pitch and be run 24 hours a day and still only support a single organisation.
It was because those things were so expensive, because processing was expensive and in particular because storage was so expensive that full dates weren’t stored. Only the year digits were stored in the data.
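The failure mode this created is easy to demonstrate. Here is a minimal Python sketch of the classic two-digit-year bug: a record stores only the last two digits of the year, and age is computed by simple subtraction. The function and variable names are illustrative, not taken from any real system.

```python
def age_from_two_digit_years(birth_yy: int, current_yy: int) -> int:
    """Compute an age assuming both two-digit years fall in the same century.

    This mirrors the assumption baked into many pre-1990s systems,
    where storing only two year digits saved precious storage.
    """
    return current_yy - birth_yy

# Within the 1900s the arithmetic works: born 1960, checked in 1999.
print(age_from_two_digit_years(60, 99))  # 39

# At the century rollover it breaks: "00" means 2000, but the
# subtraction effectively treats it as 1900, yielding a negative age.
print(age_from_two_digit_years(60, 0))   # -60
```

The fix during Y2K remediation was typically either widening the field to four digits or applying a "pivot year" (interpreting, say, 00–49 as 2000–2049 and 50–99 as 1950–1999).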
On the lack of appreciation that, despite the ultimately uneventful outcome, Y2K exposed major issues:
I regard it as a signal event. One of these near-misses that it’s very important that you learn from, and I don’t think we’ve learned from it yet. I don’t think we’ve taken the right lessons out of the year 2000 problem. And all the people who say it was all a myth prevent those lessons being learned.
On what bothers him today:
I’m [worried about] cyber security. I think that is a threat that’s not yet being addressed strategically. We have to fix it at the root, which is by making the software far less vulnerable to cyber attack … Driverless cars scare the hell out of me, viewed through the lens of cyber security.
We seem to feel that the right solution to the cyber security problem is to train as many people as we can to really understand how to look for cyber security vulnerabilities and then just send them out into companies … without recognising that all we’re doing is training a bunch of people to find all the loopholes in the systems and then encouraging companies to let them in and discover all their secrets.
Similarly, training lots of school students to write bad software, which is essentially what we’re doing by encouraging app development in schools, is just increasing the mountain of bad software in the world, which is a problem. It’s not the solution.
On building software:
People don’t approach building software with the same degree of rigour that engineers approach building other artefacts that are equally important. The consequence of that is that most software contains a lot of errors. And most software is not managed very well.
One of the big problems in the run-up to Y2K was that most major companies could not find the source code for their big systems, for their key business systems. And could not therefore recreate the software even in the form that it was currently running on their computers.
The lack of professionalism around managing software development and software was revealed by Y2K … but we still build software on the assumption that you can test it to show that it’s fit for purpose.
On the frequency of errors in software:
A typical programmer makes a mistake in, if they’re good, every 30 lines of program. If they’re very, very good they make a mistake in every 100 lines. If they’re typical it’s in about 10 lines of code. And you don’t find all of those by testing.
On his prescription:
The people who make the money out of selling us computer systems don’t carry the cost of those systems failing. We could fix that. We could say that in a decade’s time – to give the industry a chance to shape up – we were going to introduce strict liability in the way that we have strict liability in the safety of children’s toys for example.