
Ask HN: Was the Y2K Crisis Real? | Hacker News

Our fix was just a cutoff check on the two-digit $year (below the cutoff meant 20xx, otherwise 19xx). It worked in 99.9% of the cases, which was enough for us to limp through and just fix the bad cases by hand as they happened. Eventually we migrated off the whole stack over the next few years, so it stopped being a problem. I’m sure many mitigation strategies did the same…
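For anyone who hasn’t seen it, that kind of fix is the classic “pivot year” window. Here’s a minimal sketch in Python; the cutoff value is purely an assumption for illustration, not something taken from the comment above:

    # Hypothetical pivot-year ("windowing") Y2K mitigation.
    # PIVOT is an assumed cutoff; real systems picked whatever fit their data.
    PIVOT = 50

    def expand_year(two_digit_year: int) -> int:
        """Map a 2-digit year onto a 4-digit year using a fixed window."""
        if two_digit_year < PIVOT:
            return 2000 + two_digit_year   # 00..49 -> 2000..2049
        return 1900 + two_digit_year       # 50..99 -> 1950..1999

    assert expand_year(99) == 1999
    assert expand_year(3) == 2003

The failure mode is exactly the hand-fixed residue described above: any date that legitimately falls outside the window comes out in the wrong century.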


                  

It is a shame that we as engineers can’t just say, “this maintenance is required to fix a known issue; it’s not a huge deal, but it will cause trouble if it’s not dealt with.” Instead we have to be all doom and gloom and tell management that the company / world will end.

            

                  

Yes, the Y2K crisis was real, or more accurately, it would have been a serious crisis if people had not rushed and spent lots of money to deal with it ahead of time. In many systems it would have been no big deal if unfixed, but there were a huge number of really important systems that would have been a serious problem had they not been fixed. Part of the challenge was that this was an immovable deadline: often, if things don’t work out, you just spend more time and money, but here there was no additional time that anyone could give.

The Y2K bug did not become a crisis only because people literally spent tens of billions of dollars in effort to fix it. And in the end, because everything kept working, a lot of people concluded it was never a crisis at all. Complete nonsense.

Yes, it’s true that all software sometimes has bugs. But when all the software fails at the same time, a lot of the backup systems simultaneously fail, and you lose the infrastructure to fix things.

            

                  

We are lucky that the Y2K issue was so understandable by the public. I doubt we will have such luck addressing the Y2038 problem.

                         

                  

Yeah, the sheer volume of bad-data-entry-type situations with messed-up dates could have been absolutely enormous for some companies.

            

                  

From someone who went through it and dealt with the code: it was a real problem, but I also think it was handled poorly publicly. The issues were known for a long time, but the media hyped it into a frenzy because a few higher-profile companies and a lot of government systems had not been updated. In fact, there were still a number of government systems that were monkey-patched with date workarounds and not properly fixed for years afterward (I don’t know about now, but it wouldn’t shock me).

There was a decent influx of older devs using the media hype as a way to get nice consulting dollars. Nothing wrong with that, but in the end the problem and associated fix was not really a major technical hurdle, except for a few cases. It is also important to understand that a lot of systems were not in SQL databases at the time; many were in ISAM, Pick, dBase (ouch), dbm’s (essentially NoSQL before the NoSQL hype) or custom DB formats (like flat files, etc.) that required entire databases to be rewritten or migrated to new solutions.

My 2 cents: it was a real situation that, if ignored, could have been a major economic crisis. Most companies were addressing it in various ways in plenty of time, but the media latched on to a set of high-profile companies / government systems that were untouched and hyped it. If you knew any COBOL or could work a VAX or IBM mainframe you could bank some decent money. I was mainly doing new dev work, but I did get involved in fixing a number of older code bases, mainly on systems in non-popular languages or on different hardware / OS, because I have a knack for that and had experience on most server / mainframe architectures you could name at that time.


                  

> dbase

At the time I was managing a dBase / FoxPro medical software package … we were a small staff who had to come up with Y2K mitigation on our own.

Our problem was that we only had source code for “our” part of the chain … other data was being fed into the system from external systems where we had no vendor support.

Thus our only conceivable plan was to do the old:

            

            

                  
A colleague was just dealing with a client of theirs who was still using TLS 1.0. They’re running classic ASP on Windows Server and can’t (effectively) migrate. The colleague had been raising the alarm for months (since they started on the project) that “this is going to mean all your systems will stop working in early 2020,” but no one seemed to care or understand.

They did, last week, put in … haproxy as an SSL terminator in front of the main server, and will test a switchover this week. This was 8 months of foot-dragging for about 3 hours of setup / config, and a couple more hours of testing. When all your clients are hitting a web server, and their browsers will all start rejecting your connections, things will get ugly fast – as in “your business will effectively stop functioning”. It just sounded like “doom and gloom”, but … how do you message this effectively? It requires the receiving parties to actually understand the impact of what you’re saying, regardless of the terms you use.

            

                  

I hope your colleague had a paper trail of the alarms he raised when it came time to point fingers.

For executives with limited IT experience, I honestly don’t know if there is a good solution, other than having them deal with the disaster and having a clear paper trail that points the finger in their direction. They won’t make the same mistake twice.

            


                  

Similar story: we had an older version of MySQL on a 2008 R2 server until a few weeks ago.

We had been advocating for 2 years to migrate off that box.

            

            

            


                  

There is less cause to use small data sizes for timestamps and date codes now. Storage has grown by orders of magnitude; the idea that a numeric data type would only be large enough to store a 2-digit year, or that you would want to save disk space by dropping an extra 2 characters, is foreign to a lot of new developers. And the decades-old systems are slowly disappearing…
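For context, the arithmetic behind the original habit went roughly like this (all figures below are illustrative assumptions, not numbers from the thread): a couple of bytes saved per date field, across a few fields per record and tens of millions of records, was real money when storage was measured in megabytes.

    # Back-of-the-envelope: why dropping the century digits once seemed worth it.
    records = 50_000_000        # a large master file (assumed size)
    date_fields_per_record = 3  # e.g. opened, updated, expires
    bytes_saved_per_field = 2   # "YY" instead of "YYYY"

    total_saved = records * date_fields_per_record * bytes_saved_per_field
    print(f"{total_saved / 1_000_000:.0f} MB saved")  # 300 MB, a fortune on early disk and tape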


                  

Neither the current pandemic nor Y2K really fit the definition of a black swan event, since they were completely predictable (and predicted).

            

                  

More importantly, we have figured out algorithms for time that account for time zones, daylight saving time, different calendar systems, and probably something else I’m not aware of. These all work by counting seconds since a fixed date.
Note that the algorithms have been known for decades; storage space even then shouldn’t have been enough of a concern. However, people still screw it up all the time.
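As a small illustration of that point (using Python’s standard library as the example, nothing specific to the parent’s systems): the stored value is just seconds since a fixed epoch, and time zone and DST rules are applied only when rendering.

    # One absolute instant, stored as seconds since the Unix epoch (UTC);
    # the timezone database handles offsets and DST at display time.
    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo  # Python 3.9+

    ts = 946_684_800  # 2000-01-01T00:00:00Z

    instant = datetime.fromtimestamp(ts, tz=timezone.utc)
    print(instant)                                           # 2000-01-01 00:00:00+00:00
    print(instant.astimezone(ZoneInfo("America/New_York")))  # 1999-12-31 19:00:00-05:00
    print(instant.astimezone(ZoneInfo("Pacific/Auckland")))  # 2000-01-01 13:00:00+13:00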

            

                  
Yes, it was a real crisis. There is revisionist history now, with some saying it was no big deal. It was a big deal, and many people spent many hours in the late 1990s ensuring that the financial side of every business continued. I am starting to get a bit offended at the discounting of the effort put in by developers around the world. Just because the news did not understand the actual nature of the crisis (Y2K = primarily financial problems) is no excuse to crap on the hard work of others. It is sad that the people who got the job done by working on it for years get no credit, precisely because they actually got the job done.
I see this as a big problem, because Y2038 is on the horizon and this “not a big deal” attitude is going to bite us hard. Y2K was pretty much a financial server issue [1], but Y2038 is in your walls. It’s the control systems for machinery that are going to be the pain point, and that is going to be much, much worse. The analysis is going to be painful and require digging through documentation that might not be familiar (building plans).

[1] Yes, there were other important things, but the majority of the work was because of financials.
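For anyone wondering where that next deadline comes from, it falls straight out of the arithmetic of a signed 32-bit count of seconds since 1970. A quick check (Python here only for illustration; the affected systems are mostly C code and firmware with a 32-bit time_t):

    # The last second a signed 32-bit time_t can represent.
    from datetime import datetime, timezone

    max_int32 = 2**31 - 1
    print(datetime.fromtimestamp(max_int32, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00 -- one tick later, a 32-bit counter wraps negative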


            

                  

Just imagine if no one had ever lifted a finger to fix any of the bugs. We only talk about it being a scam because everyone collectively did such a great job mitigating it in time.

            


                  

There is a certain class of work in the IT / software world that is utterly thankless – nothing goes wrong and people wonder what they pay you for; something goes wrong and the very same questions get asked.

            

                  

I was involved in several of the efforts at the time, including building the communications systems for the “studio NOC” at AT&T in NYC. I started hearing about vulnerable systems about 5 years before, and we were doing serious work on those systems about 2 years before. I predicted (to friends and family who did not always care to believe me) that it would be a non-event because disruptions would be localized in smaller systems (we were expecting local banks and credit unions). Even I was blown away by how few of those systems had problems. So know that when people say Y2K was no big deal, they fail to recognize the work that went into ensuring it was a non-event.
There’s a very current equivalent – if we’re good about social distancing, people may talk about COVID-19 the same way.

            


                  

Yes. It’s like if you’re on a ship and you see an iceberg in the distance. If you shout iceberg, and change the direction just a little, no crisis. Without that small change, big problem.
People were talking about Y2K years ahead of time. Lots of changes to code were made. A few little bugs slipped through, but not many, and everyone knew how to fix them. No crisis. Without the many code changes, big problem.


                  

I was working at a telecommunications startup at the time; they had been founded some years earlier. A big part of what I was doing was fixing Y2K bugs.

That said, none of the bugs would have been critical to the operations of the services. Everything was in the billing systems and I think if unfixed it would have been more of a reputation hit than anything.

Also, “begs the question” doesn’t mean what you think it means. https://en.wikipedia.org/wiki/Begging_the_question

            

                  

From the second paragraph of the Wikipedia article you linked:

> In modern vernacular usage, however, begging the question is often used to mean “raising the question” or “suggesting the question”.

            

                  

Personally, I’ve given up on begging the question. It just means “raises” now. The descriptivists always win in the end.

            

                  

Look at it another way: a centuries-old mistranslation is finally being fixed. (-:

            

                  

I only noticed one manifestation of the Y2K bug myself: I had a credit card receipt with a date of 1/2/1900. Definitely not critical.

            

                  
> Definitely not critical.

To you. But to the store that did not get any money for thousands of sales and potentially went out of business, definitely critical.

            

                  

“Begging the question” comes from a centuries-old mistranslation which somehow became a shibboleth. I could probably come up with a dumber way for a phrase to become fixed as “proper usage” but it would take me a while.

                         

                  

That’s the word for when a WWE star enters the stadium with loud music and fireworks, right?


You should stop reading this discussion here, for your own peace of mind. Because there’s someone writing quotation marks and then immediately writing “quote-unquote” only a few bullet points ahead. (-:

            

                  

I’ve also found myself musing on a similar question, but one where you may have a different temporal perspective at this particular moment: in six months, are we going to collectively believe that the coronavirus was nothing and we massively overreacted to it? Because if we do react strongly, and it does largely contain the virus, that will also be “proof” (quote-unquote) that it wasn’t anything we needed to be so proactive about in the first place.

Unsurprisingly, humans are not good at accounting for black swan events, and even less so for averted ones.

            

            

                  

In my opinion (emphasis on opinion), “black swan” includes the concept of timing … that a pandemic would occur is inevitable, but you have no idea of the timing. Market crashes are inevitable, but you have no idea of the timing. Volcanic eruptions are inevitable, but you have no idea of the timing. Etc.

Things that are inevitable only when you encompass time spans longer than a human life (it has been approximately one and a half average human lifespans since the previous pandemic) may be predictable at that large aggregate scale, but on useful scales they are not. Or, to put it another way, if you had been shorting the market all that time waiting for the next pandemic crash, you went bankrupt a long time ago.

Y2K is only a black swan for those not in the industry, since that one was obviously, intrinsically timing-based. The UNIX timestamp equivalent is also equally predictable to you and me, but to the rest of the world it will seem even more arbitrary if it’s still a problem by then. (At least Y2K was visibly, obviously special on the normal human calendar.) But I wouldn’t claim the term for that; call it a bit of sloppiness in my writing.

            

                  

I was not working on fixing Y2K issues, but I did notice the impact it had on systems that hadn’t been patched. It’s the typical IT conundrum: when you do a good job no one notices and you don’t get rewarded for it; the only recognition comes when things fail. Some historians seem to think that it was a real crisis in which the US pioneered solutions that were used across the world: https://www.washingtonpost.com/outlook/…/lessons-yk…

            

                  
We tested the new computers we sold, and many failed or gave odd results when the date was changed to the year 2000. By mid-1999 almost none of the computers had any problems if you advanced the date.

Also, one of the major results of the Y2K bug was that IT departments finally got the budgets to upgrade their hardware. If they had not gotten newer hardware, I am sure there would have been more problems.

Finally, in my area the main reason companies failed from IT problems was trouble with their database, where it turned out their backups were not good or had not been done recently. Many companies tried to be cheap and never updated their backup software, so even if they did back up their data, the backup software could really mess things up if it used 2-digit dates to track which files to update.

Things go very bad if you lose Payroll, Accounts Payable, or Accounts Receivable.

            

                  
Yes and no. There were a lot of two-digit dates out there, which would have led to a lot of bugs. Companies put a lot of effort into addressing them, so the worst you heard about was a 100-year-old man getting baby formula in the mail.
The media over-hyped it, though. There was a market for books and guest interviews on TV news, and plenty of people were willing to step up and preach doom & gloom for a couple of bucks: planes were going to fall out of the sky, ATMs would stop working, all traffic lights were going to fail, that sort of thing. It’s like there was pressure to ratchet things up a notch every day so you looked like you were more aware of the tragic impact of this bug than everyone else.

That’s the part of the crisis that was not real, and it never was.

            

                  

It was like most other big IT problems that are properly anticipated – a ton of work went into making sure it wasn’t a problem, so everyone assumes there was nothing to worry about and all the IT people were lazy and dramatic.

But that couldn’t be more wrong.

            

            

            

                  

Very anecdotal, but here is my take:

For the place I worked at (a large international company) it was a G-d-send opportunity. All the slack that had been built up in the past by “cost-reducing” management suddenly had a billable cost position that nobody questioned.

Of course there were some actual Y2K issues solved in code and calculations, but by and large the significant part of the budget was spent on new shiny stuff, on getting changes approved, and on compensating workers for bonuses missed in the previous years.

We had a blast doing it, and the biggest letdown was following the year rollover from the dateline and seeing nothing like the expected and predicted rolling blackouts.

            


                  

The Y2K bug, in the public imagination, was premised on code like this existing somewhere in our computers:

    const DATE_COMPUTERS_DID_NOT_EXIST = /* arbitrary */;
    /* snip */
    if (Date::now() < DATE_COMPUTERS_DID_NOT_EXIST) {
        /* ...the computer explodes, planes fall from the sky, etc. */
    }

(See also: The Simpsons Y2K episode, which I think is a good representation of what many non-tech people believed would happen.)
I think it’s a great lesson in the failings of the public imagination, and it should serve as a warning not to give in to moral panics.

            

                  

It was not a crisis, but it was a real problem that needed to be, and was, fixed in plenty of time. It didn’t surprise anyone in the industry, as it was well known throughout the ’90s that it was coming. The biggest problem was identifying what would break and either fixing or replacing it. Many companies I dealt with at the time humorously did both: they had big remediation projects, and as soon as they finished, decided to dump most of the old stuff for shiny new stuff anyway.

            

                  
Yes, it was real. My fav phrase to describe the work was ‘KY2K Jelly – helps you insert 4 digits where only 2 would go before’ :-)

            

                  

I had some SGI IRIX machines impacted by the Y2K bug: if you ran an unpatched OS, nobody could log in after Jan 1, 2000, 0:00:00Z. One of them was running calculations 24/7 for a research group at the university, and fortunately they were able to stop the jobs in time for an OS upgrade.

            

                  

There were parts of our telecom infrastructure that weren’t ready but got fixed before y2k. A certain mobile phone switching vendor (think cell towers, etc.) ran tests a year before to see what happened when it rolled over and the whole mobile network shut down (got in a wedged state where calls would fail, no new calls, signalling died). They fixed it and got customers upgraded in time.

            

                  

Imagine that 1% of all software had an issue, across all of the economic tissue of the developed world. Now imagine that this software would start failing, all on January 1st, 2000, everywhere around the world. Or better still, not failing, just silently corrupting data.

Just like the crisis we are currently facing in our health systems, it seems unlikely that we would have had enough IT resources to deal with the issues in real-time.

This is one of those cases of a “self-denying prophecy”, much like acid rain. There was an issue, we collectively dealt with it (better yet, we actually anticipated it!), and now people are saying that in the end there was no issue.

https://www.bbc.com/future/article/…-can-lessons-from…

            


                  

What does the science say? I have seen exactly zero studies claiming that the Y2K bug would have led to disastrous consequences if action had not been taken.
Compare that to the CFC situation in the 1980s. Scientists agree that the mitigating actions we took saved the ozone layer. Or compare it to the current global warming crisis. Scientists tell us that if we do nothing, we will suffer catastrophic climate change.

Media never tells you the truth, but the scientists usually do. So you listen to them.

                         

                  
“Perception is reality”, so goes the old line. It’s certainly true that an absurd amount of resources went into ‘fixing’ the problem.

Apply this to any crisis du jour: drugs / terrorism / climate / viruses, etc.

Never let a Good Crisis go to waste.

            

                  

If you’re driving down the road, see an overturned cart in your path, and safely avoid it, was the cart a danger to you? Nothing bad happened, so was the cart a hoax? The Y2K problem was, for a number of organizations, precisely such a cart, and it was successfully avoided, to the extent that nothing seriously bad happened as a result of the bug (really, an engineering trade-off which lived too long in the wild). So we can either count it as a victory of foresight and disaster aversion, or we can say it was all a hoax and there was never anything to it. Guess which conclusion will best let us avoid the next potential disaster.

            


                  

No. It was not. I was a software developer in a large US Bank at the time. We had already dealt with it years ago for critical systems. All the banks had.

            

                  

I was as well. Agree that the core systems were fine and extremely thoroughly tested, but all of the supporting applications / infrastructure were questionable. I had quit my job a couple months prior to go into contracting. They were one of my first customers, and the contract was contingent on me remaining on full time until after the new year weekend. Win win.

            

                  

Right, everyone was well aware. But I think there was a bit of “we don’t need to worry because we’re replacing the mainframe with the new ERP system before then”, and then, whoops, the mainframe was still running.

            

                  

Doesn’t that fall into the “real but well handled” category then?

The OP can only be asking about comparisons to a hypothetical world where nobody dealt with it for critical systems.

            


                  

Even if not – it was a pretty great payout event for older devs, most of whom are retired by now.

            

                  
No, it was not a crisis. There were plenty of bugs in legacy systems that needed to be fixed, but legacy systems have bugs all the time, for example when the dates for daylight saving time changed. The general public was not well informed and also generally did not have a software background to understand the problem.
            

                  

I have always wondered the same thing. I came to the conclusion that it’s pretty difficult to determine that.

I lived and worked as a software developer through the Y2K “crisis” (although I wasn’t working on solving the crisis myself). Everyone was very worried about it. Nothing really went wrong in the end.
Was that because there was no problem? Or because everyone was worried about it and actually solved the problem? I don’t think it’s easy to tell the difference really.

            

                  

It’s only hard to tell if you don’t talk to the developers who were working in the late ’90s.

            

                  

> It’s only hard to tell if you don’t talk to the developers who were working in the late ’90s.

I think you mean “working ON it”. Talking to developers as a broad group from that time wouldn’t necessarily produce any useful information.

The person you replied to was himself a developer working in the late ’90s. During the late ’90s, I talked to a lot of developers, but only a small percentage of them were on Y2K jobs.

            


                  

Not really. At least not like it was portrayed. The public thought that all computers stored dates with two-digit years, like “99” for 1999, so potentially all code that handled dates / times would need to be fixed.
But actually most software uses epoch time or something similar. So the scope of the problem was much smaller than the news implied.
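To make the contrast concrete, here is a small sketch (illustrative only, not from the comment above): two-character years break ordering at the century rollover, while epoch-based timestamps sail straight through it.

    # Two-digit years misbehave at the rollover; epoch seconds just keep counting.
    two_digit = ["98", "99", "00", "01"]   # 1998..2001 as many systems stored them
    print(sorted(two_digit))               # ['00', '01', '98', '99'] -- 2000 sorts before 1998

    from datetime import datetime, timezone
    epoch_seconds = [
        int(datetime(y, 1, 1, tzinfo=timezone.utc).timestamp())
        for y in (1998, 1999, 2000, 2001)
    ]
    print(epoch_seconds == sorted(epoch_seconds))  # True -- nothing special happens at 2000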