Ooh, my machine is SO much faster than yours... Oh, wait, that might be a bit of a problem...

Monday morning has rolled round once again, which can only mean one thing – Who, Me? Yes, it’s time for another trip down readers’ memory lanes in El Reg’s weekly column that celebrates all the times you’ve tried to slink off without someone noticing the monumental error you’ve just made. This time, we meet “Joe”, who was …

  1. Pascal Monett Silver badge

    Well that was an invisible problem

    So everybody got a scare and one guy quietly learned that doing a big file copy was a no-no at the time. Everyone else began their long journey learning just how to equate reliability and Microsoft.

    That was a rather soft one compared to some big scares and even some catastrophes that have already appeared. Oh well, I guess we can't always have tales of how Wall Street servers almost crashed.

    1. Philip Storry

      Re: Well that was an invisible problem

      Loath though I am to defend Microsoft, this really wasn't their issue.

      The important thing to remember here, which isn't mentioned in the article, is that in 1990 switching wasn't really a thing for the average network. It would have been hubs, broadcasting every packet to every machine, with the network card simply ignoring anything that's not for its own MAC address.

      To put that into context, I remember the first switch I ever saw. It was 1996 (IIRC). It was a dedicated 1U rackmountable unit from 3COM that had twelve 10/100Mb ports and cost around £14,500.

      Yes, that's fourteen and a half grand.

      I remember it well, mostly because I was ordered to steal it from a rival department. (It's a long story. Maybe some other time.)

      Again, for context, our hub stacks were 3COM 24-port 10Mbps units, with the 100Mbps backplane connectors grouping them in lumps of 4, and a dropdown cable between each group that glowed a soft red when the teams started playing DOOM at lunchtime...

      When we implemented the switch we removed the dropdown cables and plugged each stack into the switch itself, along with our primary SQL Server, the domain controllers, the IIS server (because "intranet" was the latest buzzword), the Exchange server and the WAN link. That really eased up network traffic both for WAN and local users, and was regarded as £14,500 well spent.

      These days, if you buy a network device that costs more than about £40 it'll have switching built in, and the scenario described here could never happen on that network. Only the cheapest of kit, or wireless networks (for obvious reasons), have no switching capabilities.

      Of course, everyone probably has stories about managers decreeing that "we won't pay for cabling that new floor we've expanded into, we'll use wireless as it'll be cheaper" and then the network grinding to a halt every day between 08:30 and 10:00 as everyone logs on and Windows pulls down their profiles... ;-)

      That'll be the closest we'd get to this story these days.

      1. Solarflare

        Re: Well that was an invisible problem

        I remember it well, mostly because I was ordered to steal it from a rival department. (It's a long story. Maybe some other time.)

        Oh go on, you know you want to. Besides, your name is basically 'story' anyway :)

        1. Philip Storry

          Re: Well that was an invisible problem

          It's pronounced to rhyme with "lorry", but I get that a lot... ;-)

          (I wrote this up a while ago for submission to El Reg, but was never quite happy with it. All names changed to protect the allegedly innocent.)

          My first job was in the mid 90's, working for a big company with serious political dysfunctions. One of the best demonstrations of those issues is something I refer to as "That time I was told to steal a £14,500 switch"...

          I was working as tech support in a building that was full of outsourced telephone support desks for some very big IT names. One of my colleagues - let's call him Dave - had just moved from working on one of those helpdesks into a project management role. We got an email from Dave saying that a new network switch had arrived, and could we please locate it and configure it?

          (In the mid 90's a network switch was an exotic bit of kit. These were the heady days of the new 100Mbit "Fast" ethernet. Switching wasn't a feature on hubs as it is today, it was a function for dedicated hardware. And in this case, it was a 12-port 10/100 3COM switch which, including taxes and delivery, cost a little over £14,500.)

          Dave explained that a new helpdesk was going to go live, and there were concerns over the performance of the database server that handled scheduling of hardware engineer visits. That was millions of pounds of business each year, and therefore probably the most valuable server in the building - possibly in our division. Analysis had shown that the server's performance was fine, but it was homed on a network with at least 100 clients and yet more traffic from a WAN uplink. Network congestion was most likely the issue.

          My boss dispatched me to find the switch. It wasn't at Dave's desk. It wasn't with reception/facilities, who said that they had delivered it to Dave's desk. Dave had only recently changed jobs, so it was likely their information was out of date - I went to check his old desk...

          And it had been there.

          But it was now with Terry.

          Terry was a man of initiative, and had decided that as this had been delivered to Dave's old desk the switch therefore belonged to Terry's department. A short but futile conversation left me certain I wasn't going to leave with the switch, as Terry had repurposed it as "something I can put on my CV". (He didn't quite phrase it that way, but the meaning was clear to us both.)

          To complicate things, Terry's department was a flagship project. Big name client, used as a case study, on the tour route for all visiting potential customers - they had serious political clout in the company.

          Dave was on leave, and this was 1996 so he didn't have a mobile phone. But after some calling around we managed to get hold of him, and he confirmed that the switch was ordered under his new budget code. He was very unhappy to hear that "his" switch had been poached. It was made clear that there was a deadline for setting up this new helpdesk - his main concern was that Terry might plug the switch into his department's network (they managed their own IT to some degree), and that would make it hard to get back due to their political capital.

          My boss assured Dave that this would be handled. We hung up the phone, and I was ordered to go and steal the switch.

          Not exactly how I'd planned my day.

          It was approaching lunchtime, so I slid round to a helpdesk adjacent to Terry's, and began to very slowly diagnose a non-existent fault on a PC. The moment Terry went to lunch, I pounced. Swiftly repacking the switch and disconnecting it from a serial port, it was soon retrieved. Now we had to decide what to do with it. My boss decided to lock himself in our small office, and read the manual. I was sent out to distract Terry, and do all our pending jobs in the process. On my way out of the door, I grabbed the empty box.

          "What are you doing with that?", my boss asked.

          "Decoy" was my response.

          The ground floor server room was a repurposed meeting room - so it had glass windows. I dashed in, sat the box on the workbench, and then left - making sure to lock the door as always.

          I spent much of the afternoon running around the building in as unpredictable a pattern as possible. I kept dropping into conversation that we were more busy than usual, and I had to go to $department next - knowing full well that I was going elsewhere. On returning to one helpdesk, I heard that Terry was looking for me. Eventually I bumped into him, and found out that he too had been busy - he knew the switch was in the ground floor server room. Eager to help, I went to fetch the key - but never returned, having been diverted by a faulty computer on the way. Anyone who's done desktop support will know the kinds of distractions that can drag you somewhere unexpected. That afternoon, I made sure that they all did.

          At six in the evening, I dropped in to our office. My boss was still reading the manual. I was sure that Terry would have gone home by now - his helpdesk closed at five - but I headed back out on the distraction trail anyway. At seven thirty, I got paged (remember pagers?) and returned to our tiny office to hear the plan my boss had come up with. Then we went home.

          The next day, shortly past nine, Terry dropped by our office.

          "I want my switch."

          "It's not yours."

          "It's ours, we're a flagship desk, and I want it."

          My boss adopted a soft, conciliatory tone. "OK, let's go and fetch it."

          We walked to the server room, unlocked the door, and ushered him in.

          "There it is. Help yourself."

          Terry was both livid and crestfallen at the same time.

          My boss hadn't just read the manual the previous day, but had also written and uploaded a configuration for a switch he'd never seen before.

          We'd been in since before six, and had racked and cabled it and cut all services across to it - a WAN uplink for the building, a link for each of the local hub stacks (remember 3Com 100Mbits backplane connectors?), a link each for the Exchange, IIS and File/Print servers... And a link for Holly, the multi-million pound database server.

          A single network cable whose traffic was worth more money than most people will earn in their entire career. If Terry wanted his switch, all he had to do was unplug that cable.

          Terry left without his switch.

          That long day and following early start was worth it. Not just for the satisfaction of a job well done, but in other ways. For example, one of the helpdesks ran Doom/Quake servers at lunchtime to help relieve employee stress, and apparently the switch made a noticeable difference to their performance. I was gifted many, many free beers for that.

          And finally, I should note that Dave showed great promise as a project manager.

          He took all the credit for our work.

          1. Anonymous Coward
            Anonymous Coward

            Re: Well that was an invisible problem

            "My boss hadn't just read the manual the previous day, but had also written and uploaded a configuration for a switch he'd never seen before."

            Ah, a technical boss who understands what he's doing!

      2. Anonymous Coward
        Anonymous Coward

        Re: Well that was an invisible problem

        "we won't pay for cabling that new floor we've expanded into, we'll use wireless as it'll be cheaper"

        Ah yes. I’ve had that (only once though). One director thought he could cut costs by skipping cabling, and signed off on everything without consulting anyone. Once the builders were finished, the new office looked amazing. Polished concrete floors, beautiful furniture. It was of little use though, since he skipped *all* cabling. So no power either. For an office area destined mainly for desktop users.

        AC because I still work there, and so does the other guy. In his defense : he didn’t make the same mistake twice, next office expansion included raised floors for an 800 sqm area.

        1. Saruman the White

          Re: Well that was an invisible problem

          This is an old problem. If you read "Most Secret War" by R. V. Jones, he relates how a new building was built for the Admiralty in the late 1930's that had no provision for utilities at all. They had to move in as-is and wait for 6 months, at which point the necessary modifications could be attributed to depreciation.

          1. Rich 11 Silver badge

            no provision for utilities at all

            6 months! That's a bloody long wait for a slash.

            1. Saruman the White

              Re: no provision for utilities at all

              Or, apparently, a cup of tea (which the civil service lived on then, and probably still does).

              1. WonkoTheSane

                Re: no provision for utilities at all

                > Or, apparently, a cup of tea (which the civil service lived on then, and probably still does).

                They used to, until The Great Tea Trolley Disaster of '67.

                1. Saruman the White

                  Re: no provision for utilities at all

                  Dead link!

                  1. Roland6 Silver badge

                    Re: no provision for utilities at all

                    Re: Dead link!

                    It was working at 09:50am UK time yesterday.

                    Plusnet reports:

                    Site removed! This site has been removed because it has exceeded its bandwidth allowance.

                    There is a cached version of the article's text currently available at:


                    1. ds6 Bronze badge

                      Re: no provision for utilities at all

                      Swarm of vultures took the site offline, I bet.

        2. Anonymous Coward
          Anonymous Coward

          Beautiful polished concrete

          It would have cost almost nothing to set conduit pipes before pouring that concrete. Assuming they knew where the desks would be, they could later add what they needed and it wouldn't even show. Heck, they could have it pop up every couple of metres in a grid and just cover the unused ones in trafficked areas with a stainless steel cover - which would go nicely with the polished concrete and almost look like a deliberate design feature!

      3. Julian 8

        Re: Well that was an invisible problem

        Should have had Token Ring :)

        1. Anonymous Coward
          Anonymous Coward

          Re: Well that was an invisible problem

          That throws me back to the days I set up my first network ever, based on ARCNET. Yes, I truly didn't know what I was doing then, but strangely it all worked :). And yes, this too was in the time we went from 80286 to 80486, before Intel decided that names such as "Pentium" were more interesting (which they then screwed up with the FDIV floating-point bug - mercifully, our engineers avoided that one by a hair).

          That said, we had a machine running as a server, but as we were using PowerLAN there was no such thing as quota control. As I had some users who must have been closet hoarders only constrained by the size of their local hard disk, the server storage and backup filled up in no time - so I got crafty.

          The next disk I added, I created my own isolated directory which I filled with large files that basically contained spaces (could have used any character, it was just a Turbo Pascal program that created them). It meant that the drive looked a lot smaller than it was, encouraging the hoarders to be a bit more selective about what they submitted to the server for backup (because that was its main purpose, besides a clever print spooler which could distribute jobs over a group of printers). It had zero impact on the backup because that was zipped (it zipped onto Panasonic optical rewritables, in the days before we had rewritable CDs and DVDs), and I could free space in seconds by simply deleting one of the big files.
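A minimal modern sketch of that filler-file trick, with Python standing in for the original Turbo Pascal (file names and sizes here are invented for illustration):

```python
import os
import tempfile

def make_filler(path, size_bytes, chunk=1024 * 1024):
    """Write a file full of spaces to reserve (and hide) disk space."""
    block = b" " * chunk
    with open(path, "wb") as f:
        remaining = size_bytes
        while remaining > 0:
            n = min(chunk, remaining)
            f.write(block[:n])
            remaining -= n

# A few 1 MiB fillers (sized down for the example); deleting one
# "frees" space in seconds, just as in the story.
tmp = tempfile.mkdtemp()
for i in range(3):
    make_filler(os.path.join(tmp, f"filler_{i}.dat"), 1024 * 1024)
```

Because the files are pure spaces they also compress to almost nothing, which is why the zipped backup in the story was unaffected.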

          Primitive as it was, it got the job done. We also had PC Anywhere modem links between the offices to pump files around - this was in the days before email made all of that a lot easier :).

          I left that job to work for an Internet business in the days when Usenet was still usable and you could use "talk user@fqdn" to have a command line chat with people :).

          1. Bluto Nash

            Re: Well that was an invisible problem

            I did the same thing when we added another 4GB drive to the SCSI array on the primary file server - simply ran a script that generated 10MB files until I filled around 3GB of space on the drive. Excluded from backups, naturally. Then whenever the PHB noted that we were "running out of space," we would simply delete a few to make "found room" on the drive. Did that for a year or three, and nobody the wiser.

            1. Nick Ryan Silver badge

              Re: Well that was an invisible problem

              We had to do something similar on a 4GB drive because some bits of software could only cope with up to 2GB free space and anything higher than that wrapped around to become negative space. So we bought a 4GB drive filled it past halfway with empty files and waited until we could remove them...
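The wrap-around described here is classic signed 32-bit overflow: a free-space figure above 2^31 - 1 bytes (2 GiB) goes negative when held in a signed 32-bit integer. A small sketch of the effect:

```python
import struct

def as_signed32(n):
    """Reinterpret an unsigned byte count the way software holding it
    in a signed 32-bit integer would see it."""
    return struct.unpack("<i", struct.pack("<I", n & 0xFFFFFFFF))[0]

GB = 1024 ** 3
print(as_signed32(1 * GB))  # 1073741824 bytes free: reported correctly
print(as_signed32(3 * GB))  # negative! 3 GB free wraps past 2**31 - 1
```

Filling the drive past halfway kept the real free-space figure below the 2 GiB threshold, which is exactly what the filler files achieved.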

        2. Zippy´s Sausage Factory

          Re: Well that was an invisible problem

          Should have had Token Ring :)

          I'm sure I've had that. You can get pills for it at Boots...

      4. GAIDA

        Re: Well that was an invisible problem

        You must tell me about it on Tuesday !

        1. Philip Storry

          Re: Well that was an invisible problem

          See my response to solarflare above. ;-)

          Sorry I couldn't make it yesterday!

      5. Tim99 Silver badge

        Re: Well that was an invisible problem

        Where I volunteer they had a 1990s vintage HP unmanaged switch with 8 x 10Mb ports and one 10/100Mb connection that could be used to connect to a single server (we didn’t). It was fully populated and used to share a single internet connection. It cost several thousand dollars, last year we replaced it with an 8 x 1Gb port switch for less than $50.

        1. Roland6 Silver badge

          Re: Well that was an invisible problem

          > It was fully populated and used to share a single internet connection. It cost several thousand dollars, last year we replaced it with an 8 x 1Gb port switch for less than $50.

          Bit of a cheapskate, I would have replaced it with a 12 or 16 x 1Gb port switch. :)

          1. Tim99 Silver badge

            Re: Well that was an invisible problem

            Yes, but the advent of WiFi meant we only needed 5 connections plus a spare for a visitor's laptop with ethernet. The extra $30 for unneeded ports could buy biscuits...

            1. Roland6 Silver badge

              Re: Well that was an invisible problem

              >Yes, but the advent of WiFi meant we only needed 5 connections plus a spare for a visitor's laptop with ethernet.

              Funny thing, I previously thought that (WiFi would reduce the number of physical ports)...

              Last year I got involved in a network upgrade for a charity; the 8 new WiFi APs (needed to cover the site) and other bits of kit that needed a physical port meant they needed more physical ports (and a larger data comms cabinet) than they had previously...

              I note the $30, suspect funding rules are slightly different, in the UK funders are more prepared to contribute to Capex (new switch) than Opex (biscuits) expenditure...

              1. Anonymous Coward
                Anonymous Coward

                Re: Well that was an invisible problem

                Last year I got involved in a network upgrade for a charity, the 8 new WiFi AP's (needed to cover the site)

                This is the exact problem that made me grateful for Netgear's Orbi. I was at a site which was protected (historical value) so cabling was an issue. A Netgear Orbi router and 4 satellites pretty much sorted it, although it took me a bit to get the daisy chaining going (first enable fun things like beam shaping and MIMO, then get on with daisy chaining).

                I personally prefer APs linked via cable, but this worked surprisingly well.

      6. Bruce Ordway

        Re: Well that was an invisible problem

        >>started playing DOOM at lunchtime.

        Which also led to my campaign to migrate from hubs to switches.

        Even with switches I still must be careful about what time I perform some file copying.

        Much like the example in this article, I support a site where most users rely on access to an ERP application server. I learned not to copy certain large files during business hours.

      7. Jim 59

        Re: Well that was an invisible problem

        Quite impressed you had 100Mb/s in 1990. Being a privileged Unix lad, we had token ring at the time. I don't think it can have had individual switching either, since every host reads the token on every pass it makes round the ring. Guessing wildly here.

        1. cmrayer

          Re: Well that was an invisible problem

          In the early 90s early token release was invented so there could be multiple data packets going round the ring at once. By the mid 90s token ring switches existed and did go to 100Mb/s, before then there was FDDI for 100Mb/s and of course ATM at 155 in the early 90s.

      8. swm Silver badge

        Re: Well that was an invisible problem

        When the ethernet was invented at Xerox they had about 100 ALTO machines on one RG11U foam 3MBit coax. They set up a test by having all of the machines transfer data up to 9000% of the cable's capacity. Things slowed down (of course) but there were few lost packets because of the collision detect and retransmit algorithms in the machines. ALTOs were not capable of sending or receiving back-to-back packets so would miss consecutive back-to-back packets sent to the same machine.
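The "collision detect and retransmit algorithms" mentioned here became Ethernet's truncated binary exponential backoff. A rough sketch of the idea (not the Alto's actual code; the slot time shown is the later 10 Mbit/s figure):

```python
import random

SLOT_TIME_US = 51.2  # slot time for 10 Mbit/s Ethernet

def backoff_slots(attempt, max_exp=10):
    """After the n-th consecutive collision, wait a random number of
    slot times drawn from [0, 2**min(n, max_exp) - 1]."""
    return random.randrange(2 ** min(attempt, max_exp))

# The average wait roughly doubles with each collision, which is how a
# heavily overloaded shared cable slows down instead of melting down.
for attempt in range(1, 6):
    slots = backoff_slots(attempt)
    print(f"collision {attempt}: wait {slots * SLOT_TIME_US:.1f} us")
```

The randomisation is what de-synchronises the colliding senders, so even at offered loads far beyond capacity most packets eventually get through.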

        Another test revealed that about 0.01% of the packets were received corrupted, and this was traced to a synchronizer timing error in the ethernet transceivers. When this was corrected the error rate went down to less than 1 in a million.

        Sounds like a router/switch problem.

        1. shawnfromnh

          Re: Well that was an invisible problem

          Thanks for the history lesson, that was pretty interesting to me since I had no idea. I've only been using since '92 on a crappy 286 on WfW 3.11, so that is pretty cool. The testing was great too, since they didn't just test to capacity but pushed the boundaries, like they realized all the cheap customers would. To keep them happy they made it so customers would buy from them again, knowing they could push the envelope even more next time - instead of today, where kit barely makes the spec to keep costs down, forcing people to upgrade.

      9. Deltics

        Re: Well that was an invisible problem

        The other thing to remember is that in 1990 CAT_5_ cabling was still 10 years in the future!

        1. Anonymous Coward
          Anonymous Coward

          Re: Well that was an invisible problem

          FWIW, the specification for Category 4 and category 5 was published in 1991.

        2. Allan George Dyer Silver badge

          Re: Well that was an invisible problem

          @Deltics "in 1990 CAT_5_ cabling was still 10 years in the future !"

          Which is why I used my time machine to install it in my office in 1995.

        3. JJKing

          Re: Well that was an invisible problem

          Deltics, me thinks you are referring to CAT5e that was 10 years later; easily done getting confused about that, happens to me each morning, afternoon and night or whenever I get up now.

          I don't know why people needed hubs in the early 90s, 10Base2 avoided the need for those new-fangled devices. Just had to remember the 3, 4, 5 rule and no problems ............ until you had to diagnose a fault. Oh what fun that could be. :-) I still have BNC connections, BNC T-pieces (hands up anyone who didn't make shapes with them), terminators, cable and the bloody cable cutter and crimpers. I really should dump them but I guess I channel an inner hoarder in me.

          Always thought 10Base2 was a misnomer when the distance was 185m (though I guess they couldn't call it 10Base185), when 10Base5 was actually 500m. Didn't get to play with 10Base5 or the high-tech vampire tap.

          1. herman Silver badge

            Re: Well that was an invisible problem

            Oh yeah, in the 1990s I ran 10Base2 through the CATV cabling in my home. It worked nicely.

      10. VikiAi

        Re: we'll use wireless as it'll be cheaper

        My networking training included a component on wireless which included practical exercises clearly demonstrating why wireless networking was good for stop-gap and difficult-to-cable-to solutions but wholly inadequate as a general replacement for cabling in any serious install.

      11. Trygve Henriksen

        Re: Well that was an invisible problem

        And back then, with hubs you had the dreaded 30% utilisation limit. The moment you hit it, the number of packet collisions skyrocketed, and 100% utilisation was inevitable.

        We were only saved because we used UB Networks AccessOne kit back then, and there was a crude MAC address filter on the backplane.

        When we got our first Switch(slightly cheaper), we gave the servers a port each, then we pulled the controller boards out of the AccessOne kit, disabling the backplane and turning each card into separate Hubs, then patched one port on each to the remaining ports on the Switch.

        One Switch, 150 happy users...

      12. Killfalcon

        Re: Well that was an invisible problem

        I was once working in an office that was moving from having all the servers on-premise to having them all in a nice IBM blue barn somewhere on the other side of England.

        One day we came in and none of the IBM bods working on the migration could login. They'd accidentally migrated the servers that all the wireless connections ran through, so they got 120 miles of round trip lag on everything from their shiny laptops to their home servers and their logins had to fight through the same bottleneck at the end of it.

    2. Roland6 Silver badge

      Re: Well that was an invisible problem

      >and even some catastrophes that have already appeared.

      Not sure which catastrophes you might be referring to; the (Windows-based) London Ambulance Service Computer Aided Dispatch System Failure wasn't until 1992.

  2. The Original Steve

    The "Apprentice" phase

    Going out on a limb here, but for say the first... 10 years of your career in IT, isn't this S.O.P.?

    Do something out of interest, break something, cancel it before anyone knows, grab a coffee and learn!

    Pretty much how I learnt the basics of my craft during the late 90's and early 00's!

    1. Khaptain Silver badge

      Re: The "Apprentice" phase

      Nothing has really changed since ;-)

    2. Giovani Tapini

      Re: The "Apprentice" phase

      We now call it a proof of concept instead of a quiet mistake though...

      1. Aristotles slow and dimwitted horse Silver badge

        Re: The "Apprentice" phase

        Or a CRP without the CR bit.

    3. I3N

      Re: The "Apprentice" phase - starting the "Obfuscate and Lay Low" daemon

      Yeah, like when early IT called up and said the Solaris stuff had been 'rooted' and leave it plugged in.

      Not on my watch ...

      IT called back up "did you unplug it"?

      "No, why do you ask?"

      Something about an IP address at a construction business in Japan and monitoring.

      "Sorry, looks like the system crashed and now won't restart. What did you do"?

      So much easier back then ...

    4. TomPhan

      Re: The "Apprentice" phase

      You become a master when not only do you stop what you were doing, but also take credit for making everything work again.

  3. Andraž 'ruskie' Levstik

    And thus he learned that if they ever upgraded the clients, they'd need to upgrade the network.

    1. big_D Silver badge

      That is often the case, the developers and support get high end machines and the users get stuck on low-end stuff.

      The worst is when the developers use their development machine (fastest money can buy/the company can afford, to keep the compile times as short as possible) for testing. The software runs fine on the dev kit, but crawls like a man who has been in the desert with no water for a week on the users' machines.

      We always kept a PC with the minimum spec around for testing.

      1. Korev Silver badge

        Or the testing for websites is done on a corporate LAN and not over a WAN or even home connection.

        It's not amazingly hard to simulate latency, but most people don't seem to bother...
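One cheap way to simulate it, sketched here as a toy TCP proxy that sleeps before forwarding (the port numbers and the 100 ms delay are arbitrary; real testing would more likely use something like Linux's tc netem):

```python
import socket
import threading
import time

def run_echo_server(port):
    """Stand-in for the web/app server: echoes one message back."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    def serve():
        conn, _ = srv.accept()
        conn.sendall(conn.recv(4096))
        conn.close()
        srv.close()
    threading.Thread(target=serve, daemon=True).start()

def run_delay_proxy(listen_port, target_port, delay_s):
    """Forward one request/reply pair, adding fake WAN latency each way."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(1)
    def proxy():
        client, _ = srv.accept()
        upstream = socket.create_connection(("127.0.0.1", target_port))
        data = client.recv(4096)
        time.sleep(delay_s)            # latency on the way out...
        upstream.sendall(data)
        reply = upstream.recv(4096)
        time.sleep(delay_s)            # ...and on the way back
        client.sendall(reply)
        client.close()
        upstream.close()
        srv.close()
    threading.Thread(target=proxy, daemon=True).start()

run_echo_server(9901)
run_delay_proxy(9900, 9901, 0.1)

start = time.time()
with socket.create_connection(("127.0.0.1", 9900)) as s:
    s.sendall(b"ping")
    reply = s.recv(4096)
elapsed = time.time() - start  # ~0.2s round trip instead of ~0s on a LAN
```

Point the test client at the proxy instead of the server and a chatty application's problems show up immediately.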

      2. Norman Nescio

        Minimum specs PCs for testing

        Oh yes.

        A long time ago, a large financial institution with 100s of offices in many countries decided it wanted to upgrade an in-house bit of software fundamental to customer service.

        The architecture of the network was very centralised - a mahoosive pair of datacentres in one country with expensive frame-relay (I told you it was a long time ago) links, and even more expensive leased lines (yes, very long ago) to those benighted places that didn't have frame-relay yet.

        The replacement application was coded up on PCs by the application developers, who were all housed in a multi-story office in the same city as the data-centres, with a testing server in the same building as the developers.

        It had been decided that the first place to get the new application would be the country that was at the end of the longest, thinnest, most expensive leased lines as the frame-relay solution would be significantly cheaper. The application had passed all its functional tests, and a roll-out plan had been agreed. PCs had been loaded up with the new software, the notice had been given to the local telco to cancel the leased lines, the replacement frame-relay links had been ordered and installed. All systems go!

        The complaints flooded in. The application was unusable. Customer queues were frighteningly long. Telco suppliers were hauled over the coals for providing connections that were manifestly not working properly.


        The application so lovingly coded by the applications developers was written as a 'client-server' style application, with all the data held centrally. The application developers had been in the same building as the server - in fact, on the same (fast by then current standards) LAN. This meant that some fairly standard network-efficiency practices had not been followed - entire database tables were being transmitted from server to client. This worked well with the small tables on the test server on the same LAN, but not with production-sized tables being squirted across thin, long network connections.

        It took 18 months to re-code the application.

        Roll-out had to be halted, and the business reverted back to the old application. Upgrading the frame-relay links was a non-starter - even if the capacity could be obtained, it was far too expensive even for this financial institution, and it wouldn't solve the problem as the network latency also killed performance (a double whammy).

        So not only should you test applications on minimum spec PCs, you should also test them on minimum spec networks (you can get nice 'networks in a box' with configurable latency, capacity and error rate*), so you know your spiffy new applications will work in the boondocks. It's also advisable to use a comparable volume of test data to the production application, to expose unindexed (i.e. sequential) searches and table joins across the network.


        *Oddly enough, the large financial institution bought several of these.
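The table-shipping anti-pattern described above can be shown in miniature with SQLite (the table and column names here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE visits (id INTEGER PRIMARY KEY, engineer TEXT, town TEXT)"
)
conn.executemany(
    "INSERT INTO visits (engineer, town) VALUES (?, ?)",
    [(f"eng{i % 50}", f"town{i % 200}") for i in range(10_000)],
)

# Anti-pattern from the story: pull the whole table across the wire and
# filter on the client.
all_rows = conn.execute("SELECT * FROM visits").fetchall()
mine = [r for r in all_rows if r[1] == "eng7"]

# Network-friendly version: index the column and let the server filter.
conn.execute("CREATE INDEX idx_engineer ON visits(engineer)")
mine_server = conn.execute(
    "SELECT * FROM visits WHERE engineer = ?", ("eng7",)
).fetchall()

print(len(all_rows), "rows shipped vs", len(mine_server))  # 10000 vs 200
```

On a fast LAN with small test tables the two versions feel identical, which is exactly how the problem hid from the developers until roll-out.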

        1. Snake Silver badge

          Re: Minimum specs PCs for testing

          @Norman Nescio

          It's more of a lesson that a large change to an important system should NEVER be rolled out en masse without a partial, in-place structural test first. Tech, structural engineering, medical, publishing - it doesn't matter: if you are going to make large-scale changes to an existing system you *always* run IRL small-scale tests in order to examine actual results.

          Because real life and simulations almost never coincide.

          1. 's water music

            Re: The difference between theory and practice

            Because real life and simulations almost never coincide.

            In theory they are the same, in practice they are often not so.

        2. tip pc Silver badge

          Re: Minimum specs PCs for testing

          I witnessed a similar type of issue, except the culprit was MTU. The app worked fine on the LAN with an MTU of 1500, but tanked over the frame relay WAN where the MTU was ~1400 or less. No idea why the app didn't just let the OS size the packets instead of fixing them at 1500. The too-big MTU meant the WAN had to send two packets for every one LAN packet, effectively halving bandwidth that was only ~500kbit/s anyway. The developers weren't keen on changing things and it was impossible to raise the MTU.
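          The arithmetic behind that halving can be sketched with a simplified fragmentation model (IPv4 header only, transport headers ignored; the 8-byte fragment alignment is per RFC 791):

```python
import math

def fragments(packet_size, path_mtu, ip_header=20):
    """How many IP packets a datagram of `packet_size` bytes becomes on a
    link whose MTU is `path_mtu`. Simplified: IPv4 header only, and
    non-final fragment payloads align to 8 bytes (RFC 791)."""
    if packet_size <= path_mtu:
        return 1
    payload = packet_size - ip_header     # bytes that must be carried
    per_frag = path_mtu - ip_header       # payload room in each fragment
    per_frag -= per_frag % 8              # fragment offsets count 8-byte units
    return math.ceil(payload / per_frag)

lan = fragments(1500, 1500)   # fits the LAN: 1 packet
wan = fragments(1500, 1400)   # over the WAN: 2 packets per LAN packet
```

          Every full-size LAN packet becomes two WAN packets, one of them nearly empty - hence the effective bandwidth collapse.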

        3. Doctor Syntax Silver badge

          Re: Minimum specs PCs for testing

          "written as a 'client-server' style application, .. entire database tables were being transmitted from server to client"

          I'm not sure that deserves to be called client-server. There are a few things it does deserve to be called.

        4. Killfalcon

          Re: Minimum specs PCs for testing

          I do all my dev and test on the most basic laptop the company uses, so I know for a fact that anything I produce will work for anyone else here.

          Also I don't want to deal with the stress of our new laptop request process, but mostly it's that testing thing.

      3. Spanners Silver badge


        My experience has often been that people who make very little, if any, use of computers get the new and shiniest ones.

        Those get passed down when something newer and better comes along.

        One of the best things about multi-user operating systems was that everyone got a Wyse terminal whether they were the PHB or someone who actually did something useful!

        Nowadays, it is getting harder and harder to keep Macbooks away. Apple and Microsoft are helping by keeping interoperability as poor as possible. Just tell the boss that the best way to do stuff is to dual boot!

        1. JJKing

          Re: @big_D

          The Australian Council of Trade Unions were moving buildings circa 1998 and I was unlucky enough to be assigned the networking and hooking the machines up. One of the secretaries threw a hissy fit because she was getting a machine without a CD-ROM drive in it. In the end the second-in-command had the CD drive from his machine put into hers, and I bet it was never used.

          The real funny part of this venture was the workers that were renovating and fitting out the building went on strike over something. Union HQ hit by a workers strike. The far cup of the hardware supplier was worth the agro just for the strike knowledge..........and you bastards still owe me $375. Pricks

      4. Jamie Jones Silver badge

        To be fair, developers are more likely to make the most of higher specced kit.

        What used to get my goat was when developers were struggling on old kit, and the managers got all the high-tech new stuff when all they did was open the odd email now and then... That's when they were actually in the office!

        1. big_D Silver badge

          @Jamie Jones, the point isn't what they use to develop on, it is what they test on!

          They should have decent kit to develop on, that goes without question. But if you have a userbase running on small Celeron or Pentium based mini-PCs with 8GB and the developer is using a Core i7 or Core i9 with 32GB, testing on the Core i7/i9 isn't going to tell you whether the program will be usable on a "real" user PC.

          1. Jamie Jones Silver badge
            Thumb Up

            big_D, yes, fair point. It's just that reading your post reminded me of the time we struggled whilst the management blinged!

            On a more related tack, it reminds me of those big Flash sites in the '90s that took ages to load and were very slow. Presumably they were demonstrated to the "suits" on a fast PC with the files delivered locally, whilst in the real world most hardware was slower and connections topped out at 56kb/s - though in these cases, I'm sure the developer knew the situation...

            1. Ozumo

              Car manufacturer websites are the biggest offenders as far as I can see. I can only think their sites are so complex and slow in order to persuade you to go to the dealer (and dissuade you from making comparisons with competitors).

              1. Roland6 Silver badge

                >I can only think their sites are so complex and slow in order to persuade you to go to the dealer

                So you can sit there watching them struggle to access the same website...

          2. phuzz Silver badge

            "Celeron or Pentium based mini-PCs with 8GB"

            Oooh, fancy. Our end users get Atom-based machines with 4GB of RAM, and that's us being generous.

        2. Doctor Syntax Silver badge

          "developers were struggling on old kit, and the managers got all the high tech new stuff"

          Distributed build using the manglement boxes as build servers?

        3. JJKing

          1999, and a very large state-wide school IT rollout. In one middle-sized school (350 students) that was really struggling with and for hardware, the librarian went and had a 21" CRT monitor purchased for her use. The library software was written for 640 x 480 but could run at 800 x 600. Maybe she just wanted to keep the library warm during the winter, but those monsters were so damned expensive then. I thought it was just a massive waste of $$$. Unfortunately it wasn't the last time I thought that, and it wasn't the worst, so I can only imagine some very pious non-clergy Catholic critters got some very nice backhanders.

      5. JJKing

        Read somewhere that Bill Gates used a low-end machine to run his software so he would know what the paying customers would experience. Wonder if he really did, because if so, their software ought to run a hell of a lot better than it actually does.

        Every time Intel released a faster processor, Microsoft would release new software to slow the sucker down - except in the following case. I had a Pentium 166 (non-MMX) with 16MB RAM and it would install Office 97 on Windows 98 in 43 minutes. Got a newer machine, a PII 450 with 256MB RAM, and the first time I installed Office 97 it took 4.xx minutes. I thought I had cocked something up so reinstalled it - again in 4.xx minutes - and it all worked as advertised. Application opening speeds were almost twice as fast: Word on the P166 was 11 seconds; on the PII 450, just 7 seconds. My next CPU, a PIII 800 if I remember correctly, took 4 seconds. It wasn't until I got my first SSD that I was that impressed by loading speeds again.

      6. StargateSg7

        "....That is often the case, the developers and support get high end machines and the users get stuck on low-end stuff...."


        This I fully understand to the Nth Degree! In a recent exercise I did for a "relative" and their small business, I had the company install two 72 inch racks fitted with ten ASRock gaming motherboards with the 10 Gigabit network connector on them (i.e. ASRock Fatal1ty mobos at $300 CAN each!), each mounted on a thick nylon kitchen cutting board that I bought at a local kitchen supply store.

        Since each board already has 10 Gbit networking AND can take up to a 32-core/64-thread AMD Threadripper 2 chip, we can upgrade the systems ANYTIME. Each board gets an 8-core/16-thread CPU (AMD Ryzen 1900X in a TR4 socket at $350 CAN), a Vega 64 8-gig GPU card ($525 CAN), 32 gigs of system RAM ($370 CAN), a one terabyte SATA-3 SSD ($270 CAN) and one 4 terabyte 7200 RPM storage drive ($150 CAN), which is more than enough for daily use. Finally, everyone got two LG FreeSync 27 inch displays at $400 CAN each ($800 for both), a KVM box for keyboard/mouse/video connections at their desks, and Windows 10 Enterprise at bulk pricing.

        At bulk wholesale pricing, total cost was around $3000 per workstation-class computer, and these are being used daily for CAD/CAM and ERP/sales work with no network issues at all. The server is a two-CPU Tyan server motherboard, also racked on a simple cutting board, running AMD EPYC 7251 16-core 2.1GHz CPUs on Windows Server 2016, doing NOTHING but Active Directory, DHCP/DNS and email services, plus daily backup/antivirus/anti-malware runs against all clients between midnight and 6 am. NOTHING ELSE gets done on those servers, so they are lightly loaded!

        Everything else gets done at the client level since they have so many usable cores that I can assign TWO CPU cores FOR JUST background tasks as end-user widgets, client-machine midday and 6 pm daily backups of Email/Documents/CAD/Cam/Image files folder and a scheduled 1 pm and 7 pm email/anti-malware/anti-virus memory/user files scan. On these machines, everything takes only between 30 minutes to one hour on the backups and scans and the users never know because those tasks are running only in the background on specified and RESERVED cores. The user still has six cores/12 threads to play with in their CAD/CAM/ERP/Sales systems.

        They use Sonicwall firewall appliances and a single managed 10 GB switch so speed of data transfer is a non-issue and they CAP everyone's client internet and network access at 400 megabits per second so NO SINGLE USER chokes the entire network (i.e. user Quotas on Network Bandwidth). The other bandwidth left over is reserved for outside sales on VPN and for all scheduled daily backups and anti-virus/malware network scans.

        EVERYONE is now happy with this new scheme and I can simply add another 10 boards into two more racks and add another managed switch/router with a bridge in-between creating a sub-net which won't interfere with current users. In terms of maintenance, they are NOT reliant on a SINGLE server box running blade clients which could take down the ENTIRE operation! Everyone gets their OWN client motherboard and if one machine needs maintenance, a user can have their roaming user profile activated on one of the spare boards!

        Additions and changes are a BREEZE and easy to do without harming ANY OTHER user!

        So far I've been given "Two Thumbs Up" for a small scale network system that works for Workstation-class machines usage rather well!

        While it's a rather expensive option if you want to scale beyond 25 users, I would have no personal issue with using nothing but racked gamer motherboards and multiple subnets on 10 Gbit bridges between managed switches/routers. I estimated my system could EASILY be scaled to over 1000 motherboards/users before costs get truly outlandish and I probably have to call IBM in for a Z-series Mainframe price quote!

        I've heard of systems using TEN THOUSAND+ such racked motherboard setups using multiple sub-nets for scale-up and scale-out purposes!

  4. Jos V


    Don't want to sound pedantic, but in 1990 I think it was Cat-3, 10BaseT. Cat-4 and Cat-5 were introduced in 1991.

    I guess it would have worked if it said "it was the 90's".

    Only from the mid '90s did 100BaseT arrive as well.

    Impressed with a 386 downloading 20MB. I'm not sure my HDD was much bigger at that time :-)

    1. big_D Silver badge

      Re: Cat-5?

      We had 40MB drives on our kit at the time, although the maximum partition size was 32MB, ISTR.

      I had a Compaq Deskpro 386. By the time the renewals came around, the company had switched to Viglen and a colleague received a spanking new Viglen 486. Woooho! So much power!

      So we ran one of our dBASE IV databases on it, to compare it to my 386. For comparison, the 386 generated the monthly report in about 40 seconds. The Viglen 486 took 180 seconds!

      We ran some benchmarks on both. It turned out that, although the 486 processor was running rings around the 386, Viglen had cut costs by using the cheapest no-name disk controller they could get their hands on, which ran at about a quarter of the speed of the old Deskpro.

      1. Christoph

        Re: Cat-5?

        We had 40MB drives, but partitioned them as max 32 because that was all Norton Utilities (I think it was) could handle.

        1. Sceptic Tank Bronze badge

          Re: Cat-5?

          I think the 32MB partition limit was due to the partition table scheme used at the time. It had nothing to do with Norton Utilities - more an IBM invention from when HDDs had 5MB capacity.

        2. JJKing

          Re: Cat-5?

          Worked casual for a computer store that had installed 40MB HDDs partitioned to 32MB due to the DOS 3.1 limitation. When MS-DOS 4.01 arrived on the scene, the owner mailed (NOTE, not emailed) out letters to all customers who had the 32MB HDD, offering a cheap upgrade to a 40MB drive. As you have likely guessed, the Spawn of Satan just backed up their data, installed MS-DOS 4.01, ran fdisk and removed the partition - so the customer "had their 32MB HDD replaced" with a 40MB drive. At the time drives were costing AUD$10 per MB, so "getting" a 40MB drive for whatever he charged ($50 or $100) was a "bloody bargain mate", and he got to "keep" the old 32MB ones. His customers were so happy to get an extra 8MB, and I just felt disgusted.

      2. BinkyTheMagicPaperclip

        Re: Cat-5?

        In 1990 DOS 4.0 had been out for a couple of years, and that supported larger than 32MB FAT partitions, as did specific (Compaq) versions of DOS 3.31, and DRDOS. Alternatively you could run OS/2 and have a 64GB HPFS partition, although the real limiting factor was the time and memory needed to check the disk when it wasn't shut down correctly.

        40MB wasn't that large - the Amstrad 286 I had at home had a 40MB drive, and it filled up remarkably quickly, ended up using one of the disk compression utilities.

        1. jake Silver badge

          Re: Cat-5?

          In 1990 I'm fairly certain my main PC had an old 80 meg, a newer 120 meg, and a new 200 meg drive. All Maxtor. The 4th IDE slot was a tape drive.

          1. big_D Silver badge

            Re: Cat-5?

            We were still "making do" with 32 or 40MB drives on most machines. The newer ones, with the slower controllers, were getting 80MB as standard, I think.

            At home, I had an Amiga with an A590 unit with an external 40MB SCSI unit (a "spare"* Apple Mac external drive from work).

            * Nobody missed it, so it was spare...

            1. Anonymous Coward
              Anonymous Coward

              Re: Cat-5?

              I think the 32 MB limit was Microsoft's fault. Didn't they think 32 MB was way more than you'd ever need?

              My first real computer (pre-IBM PC), from around 1980 or so, could take a big 74 MB hard drive. It was an Ohio Scientific with dual 8" floppies. The hard drive was around $10,000, so it was more than I could justify. They were really big drives - I think they were at least 8", but might have been 14" disks.

              One of the two BASIC interpreters was written by Microsoft, so I fault them for all the troubles that PCs had with the 32 MB limit. They knew bigger hard drives existed.

              I read a story years later about Sky and Telescope magazine upgrading their subscription list computers to PCs and smiled when I saw they had been using two Ohio Scientifics with the big hard drives. In different buildings for redundancy. Upgrade was done just in time as they had to tape up the breaker/on off switch for the hard drive to keep it powered up during the final data transfer.

              1. jake Silver badge

                Re: Cat-5?

                74Megs in 1980 for $10,000? That would have been a steal ... I'm looking at an invoice for an 18Meg drive that set my customer back $4,200 in July of 1980. It was a North Star HD-18, plugged into the parallel port of a North Star Horizon to supplement the overloaded two year old stock 5Meg drive. The system ran a proprietary, home-built inventory and invoicing system for a local indy auto parts store in Mountain View, California. It worked quite well for about a decade, when I upgraded them to a Coherent based system, which was followed with a Slackware system about 10 years after that.

      3. N2 Silver badge

        Re: Cat-5?

        "Viglen had cut costs by using the cheapest no-name disk controller they could get their hands on"

        Sounds about par for the course, was it them or Tricky Dickies who took half the memory out, then encouraged customers to buy an 'upgrade' to improve performance?

        1. Anonymous Coward
          Anonymous Coward

          Re: Cat-5?

          Viglen... one time owned by Alan Michael Sugar... say no more...

    2. Dabooka Silver badge
      Thumb Up

      Re: Cat-5?

      Well even old Amstrad 1512s and 1640s had an (optional) 10 or 20MB HDD so most 386s exceeded that size but you're right, not by much. Maybe 40 or 80MB unless you were loaded / lucky and got a ~200MB or so. Or maybe slaved two together?

      Happy days of IDE....

    3. Nick Kew

      Time machine

      On a slightly similar note, 386 was far from shiny-new in 1990. So between two observations, we have a story set in 1987 or 88, and a story set in 1991 or later. 1990 must be a case of splitting the difference.

      Also not specified, was this anything resembling Internet protocols, or was it one of the entirely different networking protocols from the likes of Microsoft or Novell?

      1. BinkyTheMagicPaperclip

        Re: Time machine

        Probably wouldn't be Internet based. From an advert at the time

        'Salemaker Plus also runs on 3Com, Banyon, and other DOS-compatible LANs, as well as on Novell Netware'

        Banyon should be 'Banyan' (Vines) - never touched that. I'm not sure what '3Com' is unless they're using their own protocol.

        1. Philip Storry

          Re: Time machine

          In 1990, my money would be on either IPX/SPX or NetBIOS Frames (NetBEUI).

          If the office had a Novell Netware server doing file/print, then the former. If they had OS/2 doing that for them, then the latter. Not that this will be news to anyone who was there at the time, mind!

          I started my first job in 1995, and never saw a Banyan Vines network - although I often saw it in documentation as supported by products.

          After a bit of research I've found that 3COM had an old network protocol called 3+ that was based on XNS, but by the time of this story they'd thrown that out and joined Microsoft on the LAN Manager NetBIOS and IPX/SPX train. By the time I started work, 3COM was mostly associated with the hardware layer - network cards, hubs, and those newfangled switches...

          1. Dabooka Silver badge

            Re: Time machine

            Yep, I'd have to agree. Although an amateur kid with a dangerous level of enthusiasm, I only knew 3Com as hardware cards and hubs.

            My first experience of Novell Netware was many years later, when the organisation I was with went national with the databases (PeoplePlus, if I recall) and got rid of legacy regional networks. That also saw a shift to 'high end' 486s and early Pentiums (all Dells) to replace the creaking fleet of various 286/386 machines and even a TI 486DX4(!), which was actually far too new to be consigned to the skip.

      2. Rich 11 Silver badge

        Re: Time machine

        Also not specified, was this anything resembling Internet protocols

        You could get minimal IP stacks for DOS and drivers for the common network cards in the early 90s, just as long as you had a Unix box to use to download them from an FTP site somewhere. They were fiddly to get going unless you had some networking experience and enough of a Unix background, 'cos the documentation was minimal to say the least. I mostly remember using my PC as a dumb terminal to a Unix server, courtesy of Kermit, prior to about 1994.

      3. TheCynic

        Re: Time machine

        My bet is with one of the inbetweens.

        Those are the thin-net years of Lan Manager (which was generally pants), the days before you had to make the decision of IPX/SPX Netbeui or TCP-ip depending on what server OS you had and variant of IP you were using - the pre cat5 years..

        1. Down not across Silver badge

          Re: Time machine

          Those are the thin-net years of Lan Manager (which was generally pants), the days before you had to make the decision of IPX/SPX Netbeui or TCP-ip depending on what server OS you had and variant of IP you were using - the pre cat5 years..

          Wot, no Pathworks? You missed out all the fun then.

          icon... because any of you who had the pleasure of Pathworks will need a few I think.

        2. JJKing

          Re: Time machine

          Don't forget that WFW 3.11 (hope I remembered the version correctly) could have TCP/IP installed on it.

          1. This post has been deleted by its author

          2. jake Silver badge

            Re: Time machine

            There was TCP/IP for MS-DOS (of sorts) in the mid 1980s. Look up "FTP Software" for a good start on what was going on at the time. It was crude, and somewhat limited, but once you have telnet, ftp, and email what else do you really need?

            1. Roland6 Silver badge

              Re: Time machine

              >but once you have telnet, ftp, and email what else do you really need?

              NFS, lpr/lpd, VT220/240 and (depending on environment) TN3270...

              I seem to remember that not all the MS-DOS networking packages played well with Win3.0/MS-Dos3.1.

              It would not surprise me if the company in the article used the "in-the-box" SMB over NetBios networking...

              1. Roland6 Silver badge

                Re: Time machine

                Forgot to mention, the VT220/240 and 3270 emulations had to be good enough to run user applications - not all terminal emulators were equal, many were just sufficient to run the remote host's command-line.

    4. jake Silver badge

      Re: Cat-5?

      In 1990 it would probably have been thinnet (10Base2). Cheapernet was the order of the day until the price of switches/hubs plummeted in the late 90s.

      1. big_D Silver badge

        Re: Cat-5?

        I had an IT director in 2012 who still insisted on calling switches hubs!

        1. jake Silver badge

          Re: Cat-5?

          I can name the CTO of a Fortune-100 who wouldn't let me replace the wall-wart for the switch in his office because I made the mistake of correctly answering a question from him.

          The question? "That is a switching power supply for that switch, isn't it?"

          Seems he "took a course" once.

    5. John Brown (no body) Silver badge

      Re: Cat-5?

      "Don't want to sound pedantic, but in 1990 I think it was Cat-3, 10BaseT. Cat-4 and Cat-5 were introduced in 1991."

      And likewise, it was either Windows 2.0, Windows/286 2.1 or Windows/386 2.0 or a very early release of Windows 3.0, so no built-in networking. As others have mentioned, hubs, not switches were likely the primary cause of the bottleneck and MS had nothing to do with either networking infrastructure or the client network stacks.

      Windows for Workgroups didn't appear till about 1992 and IIRC TCP/IP was an optional extra.

    6. Anonymous Coward
      Anonymous Coward

      Re: Cat-5?

      Yep, think it's an example of how stories get gradually embellished over years of telling. I'm sure the broad story is accurate (a guy with a new machine accidentally hogged the network), but in 1990:

      * Cat5 didn't exist

      * Windows wasn't really used

      * The latest nifty processor back then was the 486

    7. RobThBay

      Re: Cat-5?

      I was just about to question the Cat-5 claim as well.

    8. Roland6 Silver badge

      Re: Cat-5?

      >but in 1990 I think it was Cat-3...

      I tend to agree. However, during the late 1980s there was much innovation as people played around with Cat 3/twisted pair to get better data rates (i.e. 100Mbps+) and lower noise levels/signal attenuation. So given what was also happening with network adaptors - remember, IEEE 802.3 itself was undergoing revision to support higher data rates, the use of twisted pair (10BaseT was ratified in 1990) and other media - I would not be surprised, given this was a new installation by a company with money, if both cable and network adaptors were based on draft standards, which were prone to cause problems if you mixed vendors' kit...

      Personally, in my experience many companies were still using cheapernet (10Base2) into the mid-1990s - I think the last time I used cheapernet was in 1998 (3Com PCMCIA NIC with dongle).

  5. Anonymous South African Coward Silver badge

    Should've gone with ARCnet instead...

  6. big_D Silver badge

    A little different...

    I came to work on a project, where the system was written in MS BASIC for CP/M (HP 125) and for MS-DOS (HP 150) and PC-DOS (IBM PC). The problem was, it was written by FORTRAN programmers and had been maintained for 5 years by COBOL programmers.

    I think I was the first person on the project who even read the programmer's manual for MS BASIC. It didn't use For...Next loops or While...Wend, but only

    10 A = 1

    20 ...

    30 A = A+1 : IF A < 20 THEN GO TO 10

    Trawling through the several thousand lines of code, "repairing" that alone sped up the system.

    Then there was the updating, it read in CSV files, validated them and encoded the data into an output file and sent it to HQ (it was a data capture front-end for the financial reporting system). The program I worked on first used to take 4 hours to generate the output textfile, before sending to HQ.

    Testing it was a pain, so I went through the code again. The name of every single data-set that was being updated (several thousand) was displayed on the screen. I put a simple check in there that just put out every 100th dataset name. It dropped the processing time from 4 hours to 20 minutes!

    I got a 100% raise out of that little fix!

    The real nightmare of the system was that it was full of commented-out code, which made the system even more unreadable. I thought I'd be clever and delete the commented-out code, only for the whole thing to collapse in a heap... It was using computed GOTOs and jumping into the middle of a block of commented-out code! Oh well, you can't have everything.
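    The every-100th-name trick is worth a sketch - the screen writes, not the real work, were the bottleneck (the names and counts here are made up):

```python
import io

def process(dataset_names, out, show_every=1):
    """Pretend to validate/encode each dataset, echoing progress to `out`.
    The real work is elided - the point is the frequency of writes."""
    for i, name in enumerate(dataset_names):
        # ... validation and encoding would happen here ...
        if i % show_every == 0:
            print(name, file=out)

names = [f"dataset-{i:04d}" for i in range(5000)]

noisy = io.StringIO()
process(names, noisy)                  # one write per dataset: 5000 writes
quiet = io.StringIO()
process(names, quiet, show_every=100)  # one write per 100 datasets: 50 writes
```

    On a slow text display each write costs real time, so cutting the write count by 100x is where the 4-hours-to-20-minutes win came from.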

    1. big_D Silver badge

      Re: A little different...

      Oops, coding error... That should be

      30 IF A < 20 THEN GO TO 20

      Goto 10 would be an infinite loop! :-D

    2. Rich 11 Silver badge

      Re: A little different...

      Ah, computed gotos, how I miss thee. Not.

      1. Paul Cooper

        Re: A little different...

        But Shared Common was a really neat way of handling byte operations to convert formats!

      2. swm Silver badge

        Re: A little different...

        What about computed come from statements?

        1. Olivier2553

          Re: A little different...

          I am not sure that was the original paper I once read about COMEFROM programming, but here it is...

    3. Anonymous Coward
      Anonymous Coward

      Re: A little different...

      Where I work, we are, for real, using a BASIC program (BBC BASIC on Matrix Brandy for Linux) for processing DHCP logs.

      It was originally a bash script, but when that only processed 30% of a 70GB log overnight, it was rewritten in BASIC (among a few other trial implementations), and the BASIC program did the entire thing in 4 minutes flat.
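      A guess at why the rewrite won so convincingly: one streaming pass over the file, with no per-line subprocess spawning. A sketch of that streaming shape in Python (the log format shown is hypothetical, not the real one):

```python
from collections import Counter

def count_dhcp_events(lines):
    """One streaming pass over DHCP log lines, tallying message types.
    Assumes a hypothetical '<timestamp> DHCPXXX ...' line format."""
    tally = Counter()
    for line in lines:
        parts = line.split()
        if len(parts) >= 2 and parts[1].startswith("DHCP"):
            tally[parts[1]] += 1
    return tally

sample = [
    "2024-01-01T00:00:01 DHCPDISCOVER from aa:bb:cc:dd:ee:ff",
    "2024-01-01T00:00:01 DHCPOFFER on 192.168.0.10",
    "2024-01-01T00:00:02 DHCPACK on 192.168.0.10",
    "2024-01-01T00:00:03 DHCPACK on 192.168.0.11",
]
counts = count_dhcp_events(sample)
```

      A shell pipeline that forks grep/awk/sed per line (or per match) pays process-creation costs millions of times over on a 70GB log; any single-process loop like the above avoids that entirely.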

  7. This post has been deleted by its author

  8. Rudolph Hucker the Third

    I didn't get where I am today without starting with 80286 machines with 20MB hard disc drives, when a *one* MB file seemed like something *huge*. If you were really lucky you got one of the special custom-built machines with (gasp) a 16MHz processor and a 1.44MB 3.5" floppy disc drive. I'm pretty sure my bulging IT museum (in the attic) has some 8-bit coaxial Ethernet cards.

  9. Jenny with the Axe

    Reminds me of back in the late '80s/early '90s when I was "the IT person" at the small office where I worked, simply by virtue of being the only one not afraid of figuring things out. (Yes, I was...)

    I was on vacation for three weeks. When I came back, one of my coworkers asked me to look at his computer because it had been very slow for almost the whole time I was away. Now, in those days, processor speeds increased a lot more dramatically, so that some games became unplayable on the faster computers because they simply ran too fast for the player to have a chance to react. So a lot of models had a "turbo button" that you could use to switch between the processor's max speed and a speed more suitable for the games.

    Yes, you guessed. My coworker had managed to hit that button instead of the power button one day. It had been running at a quarter speed ever since, and he'd never dared to try to figure it out for himself.

    The really scary part is that it took me another ten years before I actually started working as a sysadmin, rather than doing whatever I was doing and also fixing everyone's computer...

    1. GlenP Silver badge

      So a lot of models had a "turbo button"

      The "turbo" button was, of course, exactly the opposite, it was a brake! Could never convince users of that though.

      1. Baldrickk

        it would help if it actually had a suitable name

        1. defiler

          Turbo display

          My first PC (moving from Acorn) was a P90 on an Intel Plato board (self-build job). The Turbo display was a 3-digit 7-segment display for showing the frequency. I set mine to switch between Foo and Bar.

          And then used the "reserved" jumper to hit 100MHz. RAW POWERRRR!!!

          PCI and 16MB of RAM too. I think it was 1993 - Magic Carpet was my first game on it, and I was the only person I knew who could play it at 640x480.

      2. John Brown (no body) Silver badge

        The "turbo" button was, of course, exactly the opposite, it was a brake! Could never convince users of that though.

        Ah yes, the button labelled Turbo that, when pressed, illuminated the Turbo LED to let you know the Turbo function has been activated. Genius. Not :-)

        1. VikiAi

          I liked the 'speed' indicators where the numbers displayed on the 7-segment LED pair in each mode ('turbo'/normal) was simply set with jumpers or DIP switch banks on the back. You could make your PC run at any speed up to 99MHz!

          Edit: I just looked in my store room and I still have some of those little boards rescued from EOL gear many decades ago!

    2. JJKing

      The customer had purchased a 286 or 386 (can't remember which) with one of those stupid cases with the TURBO button and the LED display that showed the SPEED. He brought the machine back to the shop because it wasn't running at the fastest speed according to the display. No amount of explaining would convince him the button did nothing on this motherboard, so it was a matter of finding the "manual" for the case so the display could be set correctly. To end his complaining and make him happy, his machine even got a speed boost - according to the display.

  10. Korev Silver badge


    When I was at university, the central IT group upgraded the core in the datacentre to GigE. The increase in speed basically saturated everything else, including the DEC Alphas running Samba, which caused pretty much everything in the university to crawl. I believe the techies had to work very hard to tune performance to acceptable levels before fixing it properly by throwing hardware at the problem.

  11. Steve Kerr


    Many years ago, on a production VAX, I created a small loop with lexical functions to get error messages and their associated meaning. One VAX running at 100% CPU, making everything run rather badly.

    I was working in Operations at the time, VMS support were a tad upset to understate it :)

    1. Sceptic Tank Bronze badge

      Re: Oooops

      What awful OS design! I hope whoever came up with that never decided to design another OS again. Oh, wait ....

      1. jeffdyer

        Re: Oooops

        VMS is a superb design, what are you like?

    2. big_D Silver badge

      Re: Oooops

      I was on a training course in Reading - VAX Advanced Administration.

      The first day was a bit boring, so I wrote a little script... It got a listing of all logged in users and logged off everybody who wasn't me!

      Worked a treat. So I added some code to submit itself as a batch job at the end of its run, so it just constantly ran in the background. It was hilarious. Until I logged myself out.

      I had overlooked one minor flaw in my dastardly plan: when you log onto the VAX, until it has parsed the username and password, the login attempt appears in the userlist as "<login>". Oops. Every login attempt was killed before it could be parsed, so I couldn't log back in.

      Even the instructor couldn't do anything. We all traipsed into the server room, but even the console couldn't log on. In the end, we had to hard reboot the system.

      At least the instructor saw the funny side of it and turned it into a learning experience for the whole class.
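
      big_D's script boils down to a single filter plus the fatal assumption that everything in the user list is a fully logged-in session. A minimal Python sketch of that logic (the function name is mine and this is a model of the flaw, not the actual DCL script; the "<login>" placeholder is as described in the anecdote):

      ```python
      def sessions_to_kill(user_list, me):
          """Return every session that isn't `me` -- which, as the
          anecdote shows, also includes half-parsed login attempts
          that appear in the list as '<login>'."""
          return [user for user in user_list if user != me]

      # Once the author logged himself out, every reconnection attempt
      # showed up as '<login>' and was killed before it could finish.
      ```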

  12. jake Silver badge

    Silly NIC games ...

    “My new Intel 386 was so much faster than the 286 machines on the floor that it was grabbing every packet on the LAN,”

    Yeah, I ran across those poorly made NE2000 clones, too. It wasn't your processor, it was your NIC not playing well with others.

    I got two boxes of 20 NICs once. Installed 'em into 40 new "beige box" PCs, and set 'em up as two test networks of 20 PCs each. All went swimmingly. Until I connected the two networks, at which point both halves refused to work. Long and short of it, the two boxes of NICs contained the same 20 MAC addresses between them. Yes, it took a while to figure out. Yes, I was more than slightly irritated with the vendor ...

    1. KittenHuffer Silver badge

      Re: Silly NIC games ...


      Statistically the chances of having 'split' the duplicated MAC addresses cleanly between the two networks are pretty damn remote! If I were you I'd have gone out and bought myself a Lottery ticket straight away!!!

      Unless the MAC addresses in the first box of 20 EXACTLY matched the MAC addresses in the second box of 20. In which case it becomes more of a WTF rather than a Wow!

      1. Robert Carnegie Silver badge

        Re: Silly NIC games ...

        Do you buy MAC addresses from a global authority, like IP addresses?

        Then it sounds like someone bought 20 addresses, since they were only shipping up to 20 network cards in one multi-pack.

        Or, after manufacturing each 20, something with a counter on the production line got reset.

        Or, the manufacturer actually had 40 MAC addresses and you were just unlucky not to get one Box A and one Box B.

        1. big_D Silver badge

          Re: Silly NIC games ...

          The MAC address is supposed to be unique. The manufacturer gets allotted a "prefix", which is the first 24 bits of the address; every card they make is then given a unique address using that prefix plus a 24-bit serial number assigned by the manufacturer. So there should never be a conflict - if there is, and both machines are on the network at the same time, it will cause problems, because the network protocols also use the MAC address.

          It sounds like the manufacturer of the cards was lazy.
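
          The split big_D describes is trivial to see in code. A quick Python sketch (the helper name is mine; 08:00:2b was one of DEC's real OUI prefixes):

          ```python
          def split_mac(mac):
              """Split a colon-separated 48-bit MAC into its two halves:
              the 24-bit OUI the IEEE allots to the manufacturer, and
              the 24-bit serial number the manufacturer assigns itself."""
              octets = mac.lower().split(":")
              if len(octets) != 6:
                  raise ValueError("not a 48-bit MAC: %r" % mac)
              return ":".join(octets[:3]), ":".join(octets[3:])

          oui, serial = split_mac("08:00:2b:01:02:03")
          # oui == "08:00:2b" (a DEC prefix), serial == "01:02:03"
          ```

          Two cards conflict only when both halves match, which is why a lazy (or crooked) serial-number counter on the production line is the usual culprit.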

          1. jake Silver badge

            Re: Silly NIC games ...

            The manufacturer of the cards wasn't lazy. The manufacturer of the cards was crooked. It was quite common back in the day. Still is, in some areas of computers and networking. Caveat emptor.

            1. The Oncoming Scorn Silver badge

              Re: Silly NIC games ...

              I remember being on a placement with Xerox (late 80s) & one engineer I was paired with told the story of how he drove in the pouring rain from Swindon to Wales, parked up at the first car park with a space, lugged his network diagnostic kit through town in the pouring rain, into the building, up 4 flights of stairs, soaked to the skin.

              Sets up his kit & notes the affected equipment has the same MAC address as a piece of kit in London on their network.

              Swaps out card & lugs everything down the stairs & back to his car for the drive back to Swindon.....Did I mention it was pouring with rain?

            2. Doctor Syntax Silver badge

              Re: Silly NIC games ...

              "The manufacturer of the cards wasn't lazy. The manufacturer of the cards was crooked."

              It can be a fine dividing line.

          2. tip pc Silver badge

            Re: Silly NIC games ...

            Duplicate burnt-in MACs do happen from time to time. I've never seen it, but I've read stories about it, especially in the early days. It seems to have still been happening as recently as 2018.


          3. Doctor Syntax Silver badge

            Re: Silly NIC games ...

            'The manufacturer gets alloted his "prefix", which is the first 24-bits of the address, then every card they make is then given a unique address using the prefix + a serial number from the manufacturer for the second 24-bits of the address'

            It might also be possible to change the MAC in S/W.

            I discovered that DECNET assumes the prefix will be DEC's own. We had installed DECNET emulator S/W for HP-UX. When we first fired it up it reset the HP server's MAC to make it look like a DEC machine. There could have been a problem if it reset to the same address as another VAX on the network, but you'd have to be very unlucky for that to happen. No, we weren't unlucky like that. What did happen was that the change of MAC invalidated all the connected users' ARP caches. I can't remember how long it took, but they repopulated fairly quickly.

      2. BinkyTheMagicPaperclip

        Re: Silly NIC games ...

        It's happened a small number of times with crap clone manufacturers, as Jake says. MAC addresses are supposed to be unique, but if your manufacturing is sub-par this is not the case.

        That's just one of the more egregious cases of poor kit; you really shouldn't look at the number of pieces of hardware that don't follow specifications unless you want nightmares. My favourite is probably the DVD drive that re-used a commonly used ATAPI sense command to mean 'upload firmware' and bricked the device (covered a few years back on El Reg).

        1. Waseem Alkurdi

          Re: Silly NIC games ...

          My favourite is probably the DVD drive that re-used a commonly used ATAPI sense command to mean 'upload firmware'

          And Microsoft everything. And Apple EFI.

    2. fibrefool

      Re: Silly NIC games ...

      does anyone else remember Sun Microsystems and their Ethernet cards that were "aggressively compliant" with the CSMA/CD back-off spec?

      meant that on any mixed LAN the Sun machines would out-perform everything else in terms of network I/O.
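
      The spec in question is the truncated binary exponential backoff from IEEE 802.3: after the n-th consecutive collision a compliant card waits a random number of slot times drawn uniformly from 0 to 2^min(n,10) - 1 before retrying. A Python sketch of the polite version (the function name is mine):

      ```python
      import random

      def backoff_slots(collision_count):
          """Truncated binary exponential backoff (IEEE 802.3):
          after the n-th consecutive collision, wait a uniformly
          random number of slot times in 0 .. 2**min(n, 10) - 1."""
          k = min(collision_count, 10)
          return random.randrange(2 ** k)
      ```

      An "aggressively compliant" card stays within the letter of the rule but skews its draw toward the low end of that range, so it almost always retransmits before a politely random neighbour and wins the wire.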

  13. El blissett

    Tired El Reg Subeditor: Suspect this ought to be in the Who, Me? category and not Data Centre...

  14. Uplink

    Remind me of my childhood

    I was about 13 or 14. There was a programming competition at another school, and all the computers were booted from a Novell server. I never found out what the problem was, but my compulsive saving of my work kept bringing the server down. When that happened, my work was saved, but all the others lost theirs. After a few crashes like that, we were basically begged to stop saving (I don't think they ever knew it was just me). It was weird, because it was the old days of DOS and Turbo Pascal, so it wasn't like I was saving seven YouTubes per second.

    1. Waseem Alkurdi

      Re: Remind me of my childhood

      Well, it's "Turbo" Pascal, isn't it?

  15. jeffdyer

    The CAT5 spec wasn't released until 1991, so memory slightly astray here.

    I recall my first "PC" job involving a lettings agency with a coax cable network where just a careless touch with a foot would bring the whole network down, requiring all the database tables to be rebuilt. Not fun.

    1. John Brown (no body) Silver badge

      "I recall my first "PC" job involving a lettings agency with a coax cable network where just a careless touch with a foot would bring the whole network down,"

      My own experience of that was that only the users upstream of the break lost their connection. Depending on the network server, eg Novell, the only way to get the affected users back online after fixing the break was to reboot the server, so everyone else still connected had to log out anyway.

    2. big_D Silver badge

      We had a manager who would take his desktop with him when he went on a training course, T-adapter and all, which meant nobody could work until a spare was found.

  16. Anonymous Coward
    Anonymous Coward

    I did something similar but had the opposite outcome - no one reported an issue.

    It was then I realised the marketing department did feck all.

  17. AustinTX

    User Gets No Priority

    It's 2019, and my PC still freezes up HARD waiting for some network or disk activity to be responded to.

    1. ShaolinTurbo

      Re: User Gets No Priority

      It always makes me laugh: no matter how fast your computer is, you can bring it to its knees by ejecting the CD drive.

      1. Waseem Alkurdi

        Re: User Gets No Priority

        you can bring it to its knees by ejecting the CD drive.

        There has to be a CD in there though, and to make shit worse, use the software Eject function (ATAPI command) instead of the button.

  18. daflibble

    Oh I had one of these only a couple of years ago.

    The Deputy Head of the 6th form college I worked for was teaching in a lab classroom when all laptops/desktops were reduced to a crawl after I kicked off a 2TB file transfer from a DR Backup Exec server to a new DR file server. There were a good 100 endpoints connected to the edge switch stack trying to log in with roaming profiles at the time my file transfer was saturating the link. The lab was next to the DR room I was in, so I heard the uproar at the issues, quickly terminated the file copy and popped into the room to check everything out. No problem, all sorted now; I'll investigate and let you know later if I find a root cause.

    Turned out a predecessor in my sysadmin role had run out of MTRJ connectors to patch the Backup Exec server into the core switch (as was the habit at this establishment) and had patched it into an edge switch stack with Cat 6 instead. Server had 2x 1Gbps links back to an edge switch which only had 1Gbps uplink to core switch.

    On a positive note, the core switch upgrade that happened the next summer had no issues getting approval for all the edge stacks' uplinks to be upgraded to redundant bonded links. Needless to say, I fixed the server patching later, and when I finally left that job they had proper top-of-rack switching for servers with high speed links back to an upgraded core switch infrastructure that no longer used MTRJ fibre connections. They'll not be breaking things that way again without serious mistakes by onsite staff for some time, I hope.

  19. ShaolinTurbo


    I wanna know what kind of picture he had that was 20Mb back in those days :D

    1. Zarno
      Thumb Up

      Re: 20Mb

      I'd joke it was the Lenna test image, but that was significantly smaller than 20M.

      I remember when we were using that image in college for a digital imaging course that involved convolution, filtering, fft, transforms, etc etc.

      If memory serves, I nonchalantly asked the professor if we would be using an "un-crop" convolution kernel to recover the lost image data... Got quite a laugh from that one.

      I nearly did Paris, but didn't want to sully Lenna...

  20. Anonymous Coward
    Anonymous Coward

    Early 90s, I was installing a networked backup system at a client. Since there was a UNICOS agent, the sysadmin said "let's try it on the Cray". Install, configure, start a backup, watch the blinking LEDs on the tape drives. Beautiful. 5 minutes later, a bunch of angry VMS-ers came in, asking who was using 100% of the network. Oops.

  21. Chronos

    20MB pic...

    ...or it didn't happen!

  22. martinusher Silver badge

    Maybe that 386 would have been better used as the file server

    You don't mention what sort of network software you were running at that time; I'd guess from the way one file copy brought the server to its knees that it probably wasn't Netware.

    You don't tell us much about the actual network. Since it's 1990, the network is likely to be at best 10BaseT running half duplex through hubs using Cat3 phone wire. (It could even be something earlier than 10BaseT, such as StarLAN.) This might generate problems if you've got a lot of people trying to share it, because of collisions -- an individual user (the person setting this thing up) would see decent performance, but a sales floor's worth might collapse the thing. Even so, if you could monitor the traffic you'd be surprised at just how little you were loading the network.
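
    The shared-medium arithmetic is easy to sketch: on a hubbed, half-duplex segment every station shares one collision domain, so per-user throughput is roughly the raw rate, times some efficiency factor, divided by the number of active talkers. A back-of-envelope Python sketch (the 0.6 efficiency figure is an illustrative assumption for a loaded CSMA/CD segment, not a measurement):

    ```python
    def per_user_share_mbps(link_mbps, active_users, efficiency=0.6):
        """Rough per-user throughput on a shared half-duplex segment.
        `efficiency` is an assumed fraction of raw bandwidth left
        once CSMA/CD collisions kick in under load."""
        if active_users < 1:
            raise ValueError("need at least one active user")
        return link_mbps * efficiency / active_users

    # A lone tester sees ~6 Mb/s of a 10Mb hub; a 20-strong sales
    # floor sees ~0.3 Mb/s each -- hence "fast for me, dead for them".
    ```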

  23. Twanky

    More powerful?

    I worked at a company that had an IBM 4381 for 'business computing' and some microVAX IIs for 'R&D computing'. I worked on the R&D IT side of things. One of the microVAXes had the SAS package for statistical analysis, and we had serial (LAT) attached LN03 laser printers and six-pen HP plotters for printed output. To test the plotters we used to run a SAS script to generate a 'cowboy hat' plot.

    The company got taken over and some consolidation was imposed; the two 'computing' functions were to be merged and the 'toy' microVAXes done away with. SAS for VM was installed on the 4381 and we migrated everything over from the VAX environment to VM - with various REXX scripts to ease the transition for the users (for example by making RSCS look a bit like VAXmail). To test the system I ran the 'cowboy hat' SAS script - and got a worried call from the IBMers' 'DP manager' asking what the hell was going on. The interactive/intermittent load was nothing like the predictable batch processing load he was used to.

