It won't help.
The universe will just build a better idiot.
It always does.
It's how we got here. :-(
Let's assume that there's some battery-backed RAM or non-volatile SRAM in the device, and it's used to store settings.
Now let's assume that the later version of the software has additional functionality that either:
a) changes the data structure.
b) stores values that are valid for new or improved features, but would be invalid for old firmware.
When you roll back to a previous firmware, this could cause problems. Well written software will hopefully ignore invalid values and revert to defaults. If data structures are invalid, that may be more serious - it could cause very odd problems.
This is not an insurmountable problem, and good engineering can help mitigate it. But there's always going to be one smartarse who decides to revert from the very latest firmware for a device to the very first - and if the time period for that covers a couple of years, and several versions, is it really so simple to know that it'll work? Especially if a lot of new features have been added and that storage area now looks quite different...
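To make that concrete, here's a hedged sketch (in Python, with every name, key and range invented for the example) of what "well written software" might do with a versioned settings blob:

```python
# Hypothetical sketch - every name, key and range here is invented.
# The idea: stamp the stored settings with a schema version, and treat
# anything unrecognised or out of range as invalid, reverting to defaults.
DEFAULTS = {"volume": 50, "bass": 0, "eq_preset": 0}
MY_SCHEMA_VERSION = 1

def load_settings(stored, my_version=MY_SCHEMA_VERSION):
    settings = dict(DEFAULTS)
    # A newer firmware may have rearranged the whole structure - if the
    # stored schema is newer than this build, don't even try to interpret it.
    if stored.get("schema_version", 0) > my_version:
        return settings
    for key, default in DEFAULTS.items():
        value = stored.get(key, default)
        # A newer firmware may also have widened a valid range (say, new
        # EQ presets) - out-of-range values get quietly defaulted.
        if key == "eq_preset" and not 0 <= value <= 3:
            value = default
        settings[key] = value
    return settings
```

An older firmware seeing a newer schema just punts to defaults, and individual values outside the range it knows about get defaulted too - annoying for the user, but far better than very odd problems.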
Should Bose (or anyone else) really be testing such extreme downgrades? Testing a rollback by one version makes sense, but multiple versions seems harder to justify...
I doubt they've even bothered doing much testing for reverting firmware. Why should they? It's not a commonly expected user procedure, and the preferred way to fix any issues with a firmware upgrade should be to issue a new version with the fix.
So this statement seems perfectly reasonable to me, despite its somewhat "blanket legal boilerplate" nature.
We thought we were fine, because we had decent aircon in the server room and nobody else could control it. But then we had a hot summer.
Very hot. Specifically, the 2006 heatwave hot. https://en.wikipedia.org/wiki/2006_European_heat_wave#United_Kingdom
And it turns out our aircon had a serious flaw. The exhaust port was in direct sunlight, and at the wrong angle. The warm wind and the warm air rising from the (black, naturally) roof weren't blowing heat away, they were blowing it back down the exhaust pipe...
Yep, our server room aircon kept cutting out due to overheating, despite the room being freezing. It just kept tripping the internal sensors on the units. We were very puzzled. We were also somewhat frazzled, as we kept having to come in early and spend long days nursing minimal systems along in a room cooled by one inadequate portable air conditioner. Clusters ran on only one node. Some servers were powered up only on demand.
I've just realised that in some ways we spent that week as a crappy physical version of AWS or Azure. Damn, we should have patented the concepts! ;-)
Each night everything except the email and network/AD servers was powered down, to try to cool the room as much as possible for the next day. Staff were advised that systems were strictly 08:00 - 18:00, due to the emergency. I suspect most were grateful for an excuse to leave early and get some sun!
After a little over a week the exhaust port was temporarily fixed, and later in the year a more permanent fix was put in place. But I now have much more respect for HVAC engineers and the work that they do, because it seems that there are definitely some cowboys out there!
Most applications I've used that were "cross-platform" felt like a second-class citizen, unless they were web-based.
It's hard to get the chroming and feature integration right across multiple platforms. So I'd rather that Microsoft focused on doing a good job on just one or two.
I'm tempted to say that they could lead a charge for some kind of universal markup. Reach out to Google with Android, and to Apple, and see if they can get something done there. But the big problem is that both of them are now committed to declarative UI systems (Jetpack Compose and Swift UI), so for them a standardised markup might be seen as a step backwards compared to their current efforts.
But without the support of the platform vendors, any cross-platform UI will always lag behind the platform itself, and risk going against native conventions. Basically, the same problems that Java had with its UI toolkits...
So given that history and current politics suggest any attempt at cross-platform UI is highly likely to fail, I'm rather sceptical. But I'd love to be proved wrong.
You should at least warn us if you're going to use language like P*w*rB**ld*r in an article.
Some healthy robust Anglo-Saxon is of course expected on El Reg, but we surely have SOME standards here? I mean, some people have had to work with P*w*rB**ld*r, and are still scarred.
(And others have had to work with products built by in-house P*w*rB**ld*r developers, and are more broken than scarred...)
For a while now, I've been saying that they want to kill the offline/local binary versions of Office programs.
These programs have issues. Their codebase is very old - parts date back to the mid-90s. This came out a decade ago during the OOXML specification debacle, when it became clear that Word has compatibility modes with names like "autoSpaceLikeWord95". No explanation was immediately forthcoming on what that meant though.
Which means that somewhere in Word there's an entire code branch for spacing that can be activated, but only exists for compatibility purposes.
Let's be honest, that's a problem. Ignoring the bloat, there's the security aspect. A bunch of code nobody is touching, fewer people understand as they leave/retire each year, but can still be activated by an OOXML document despite being 25+ year old code...
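For the curious: per ECMA-376, those switches live in the document's word/settings.xml part, under a <w:compat> element. A rough sketch that lists whatever legacy flags a .docx is carrying - the function is mine, not any real API:

```python
import zipfile
import xml.etree.ElementTree as ET

W_NS = "http://schemas.openxmlformats.org/wordprocessingml/2006/main"

def legacy_compat_flags(docx):
    """Return the compatibility-mode switches set in a .docx's settings part.

    `docx` can be a path or a file-like object; a .docx is just a zip.
    """
    with zipfile.ZipFile(docx) as z:
        settings = ET.fromstring(z.read("word/settings.xml"))
    compat = settings.find(f"{{{W_NS}}}compat")
    if compat is None:
        return []
    # ElementTree tags come back like '{...namespace...}autoSpaceLikeWord95'
    return [child.tag.rsplit("}", 1)[-1] for child in compat]
```

Run it over a few documents that have been round-tripped since the Word 95 era and you'll see exactly which dusty code branches they can still activate.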
Word, Excel and PowerPoint are overdue a rewrite. And if you're going to rewrite them, you may as well make them web-based applications. Local "cached" versions can still be provided - look at Teams as an example.
Whether we want it or not, this rewrite is happening. And the desktop apps will be matched by the web apps, then left behind. It's a stealth rewrite.
I actually think it's probably a good thing in the long run. And long overdue. But I don't expect it to be popular...
When the graphics card died, I replaced it with a budget job.
The original card was a beast. It was close to top of the line in 2010, requiring two power leads and being double height. I don't recall the cost, but I think it was hundreds of pounds.
The replacement was about £90. Several years had passed, and what I got was basically a revised version of that same beast. Same number of compute units, similar amount of RAM - but half the physical size, only requiring one power lead, and running much cooler.
I was already happy with the graphical performance of my games. I'm struggling to think of any major graphical advance since 2010 that I simply must have. So any non-budget card will probably be fine.
Heck, half the budget cards are probably fine by now too!
Back in the 90's, and around the turn of the millennium, every step forward was huge. Let's put it in terms of games. Commander Keen, Wolfenstein 3D, DOOM, Quake, Quake II, Quake III. Each of them is noticeably superior to the previous game in terms of graphics.
Since 2010, it's been incremental. The rate of progress has slowed. Which is not necessarily a bad thing. I'm still playing Borderlands 2, and it still looks great. It was launched in 2012.
Reducing the footprint of software would be nice, but the fact is that the hardware has been sufficient for quite a while now...
The old adage is to reduce, re-use and recycle - in that order.
In terms of reduce - my desktop machine is an old Core i7-2700K (I think) in a big tower case which still manages just fine. Bought in 2010, and delivery was delayed due to the motherboard being affected by the Sandy Bridge southbridge chipset bug. (Remember that?)
It's a bit of a Trigger's Broom today, having had a new power supply, new graphics card, replacement RAM and an upgrade to an SSD. All except the SSD were replacements due to failures, but the big tower case means maintenance is quick and easy. It might need replacing soon - but it'll have served me for 10 years, which means ten years in which I haven't bought a new PC. Or even felt like I needed to.
For re-use, it's my laptop. It was more for budget reasons than anything else that I bought a second hand Thinkpad. They're reliable and durable, so are excellent candidates for that. Again, performance is just fine and it meets my needs amply. I put Ubuntu on it, all the hardware (except for the fingerprint reader - which I wasn't going to use anyway) was supported without issues.
I'm sure that the Windows 10 Refurb Edition installation that was on it would also have been OK. But I do have more reservations about running Windows as a sustainable OS on older hardware. Linux just works - no need for manufacturers' drivers. And that's where Windows falls down IMO. I remember installing Windows 7 onto my desktop tower 10 years ago. Windows failed to find almost all the hardware - it booted into low resolution, with no sound, no network, nothing. Ubuntu found everything but one of the network adapters (the built-in one on the motherboard). I didn't even realise that the motherboard had Bluetooth support until I saw the icon next to the clock in Ubuntu! Then I had to reboot into Windows and spend an hour or two installing drivers for the motherboard - most of the time spent rebooting after each driver install, of course.
It's no doubt better now. A decade has passed. But as the Sonos issue shows, companies want to sell you new hardware. So sunsetting driver support for newer versions of Windows is going to continue to be a thing. Ironically, hardware support in Linux is now becoming superior to hardware support for Windows, especially if you want to still use old hardware.
Until we can convince vendors to have longer support periods, anyone attempting to reduce/re-use is probably better off moving to Linux.
This article has stuck with me since I first read it well over a decade ago:
It shows a mature, confident development process that understands the risks and chooses to minimise them.
For example, if they find a bug they don't just fix it. They check what kind of bug it is, and if it's one they've not encountered before (certain types of arithmetic for example) they then check the whole code base for the same issue.
This is why the Shuttle never had a software problem that killed people. Culturally, they took it seriously.
By contrast, the management at Boeing evidently have a different culture. One of cost cutting and what could charitably be called "personal development".
If they encounter a bug, they're probably wondering whether they should fix it or just rewrite in Node.js. The latter buys time, and looks good on the CV when the inevitable failure happens anyway...
This would be amusing, if people's lives weren't on the line.
Back in the 90's, I worked for a company who had a major tobacco seller as a client.
The tobacco company's offices in London simply had packets of free fags lying around. Not individual packs of 20 - whole cartons of them. Everyone in that office smoked. And smoked a lot. Because they weren't paying for it (financially, anyway).
We had staff who refused to go back to that site. Everything was covered in a patina of smoke and tar. EVERYTHING. Your fingertips were yellowed after a visit. Your clothes would need a thorough wash. Some people even reported wanting to shower after a visit, so pervasive was the smoke and tar. It was simply disgusting.
We were only supporting the email system, but we spoke to the company who supported the PCs occasionally. Apparently, the PCs had a lifespan of around a year before the tar buildup killed them. They'd given up trying to clean and resurrect them, not because it wasn't possible, but because it was simply too time consuming and too disgusting for the person trying to do it.
I've never smoked, so I'm biased against the habit. But even our MD - who liked a fag and often had a pipe going whilst in the office - thought that their office was a bit too smokey.
It was a site I dreaded. When we got remote support capabilities and I could simply hop onto the server without visiting, I was incredibly happy.
Undocumented Internals was superb.
And don't forget Ralf Brown's Interrupt List. https://en.wikipedia.org/wiki/Ralf_Brown%27s_Interrupt_List
A treasure trove of information, much of it about what other programs were doing - helpful in ensuring anything you were writing wouldn't conflict (too badly) with anything else.
Or so that you could do things like send commands to disk cache TSRs, and so forth...
Microsoft's prices are certainly a bit insane.
I think that QuickBASIC 4.5 cost me ~£130 back in the early 90's. As a hobbyist, it was perfect. I should stress that this is the full QuickBASIC that could produce compiled .exe files, not the cut-down interpreted IDE that shipped with MS-DOS 5 and higher.
The alternatives were PowerBASIC (which I eventually moved to) or... Turbo Pascal, I guess?
Actually, the real alternatives were things like the C compilers, which I think cost more like £300 and upwards. You could start cheap with Borland or Microsoft C, but if you wanted something like Watcom you were going to have a very light wallet afterwards.
So what that purchase of QuickBASIC 4.5 got me was the ability to write little programs that I could share with friends without requiring any runtime or installation. (This was DOS, though, so that's not saying much.)
A decent dev environment costs money. I understand that. Compilers or runtimes take time and effort to develop. But free software has been going for over 30 years, and has accumulated millions of man hours of work - it's at least 90% there.
I just checked the prices of a Visual Studio Professional subscription. $1,199 for the first year, $799 renewal after that. I understand that you get a lot there - Azure credit, dev/test licenses for some Microsoft software - but that's still a very steep price.
By contrast, a couple of years ago I treated myself on my birthday to a subscription for all the JetBrains products. All of them - IntelliJ IDEA, GoLand, Rider, PyCharm, DataGrip, and more - and it cost me £200 a year. It went down to £159 for the second year, and will be £119 a year this year and onwards. That's a lot of IDEs, and a lot of expertise.
If Microsoft wants to attract new developers, they need something that's down at the £250 mark.
Visual Studio Code doesn't cut it, by the way. It's great for PowerShell, and not too bad for C# - right up until you want to compile to an executable. Getting Visual Studio Code to do decent compilation is a bit of a pain.
Certainly more of a pain than anything from JetBrains.
If Microsoft want more developers using C#, they need to drop their enterprise-style pricing and make Visual Studio much more attractive. I know that there's a Community Edition, but the cost of the jump from free to non-free is incredibly high, so it's no wonder everyone just goes off and uses something else...
Second this. Midnight Commander is incredibly useful, and seems to have crept on to all my Linux systems somehow...
When I was using DOS, I preferred XTree Gold - but have grown to prefer Midnight Commander over the various XTree clones that *NIX has seen over the years.
I'm not sure why, other than the two pane paradigm is incredibly useful...
You suggest not showing items not in the document.
Isn't that throwing out discoverability?
I'd agree with greying out items not in the document. But just not showing them? How am I supposed to find out that I can navigate by, as you say, OLE objects, if they never show up until I have them?
Remember, this pane isn't necessarily open all the time. So when it is, it needs to convey a lot of information quickly...
I fundamentally disagree with your approach because you remove discoverability. It's as bad as, if not worse than, the "hide menus" approach that was discredited when it was tried in Office 2000 and XP.
I agree - Office has hit the IBM stage for decision makers...
Word has lost me more time and work than any other program I've ever used. But I trust LibreOffice Writer.
I'm reformatting and editing a long document at the moment. 400 pages in, out of ~2000. (That's an approximation for how many pages will be there when it's finished - it's a document that originated as an OCR job, with all the errors and dodgy formatting that entails. At the moment, I have another 3000+ pages to get through.)
I wouldn't say that saving or opening the document is fast. But it is reliable. And I'd not want to lose 400 pages of effort.
I would never - and I mean NEVER - trust Word with this job. Once I get near 100 pages in Word, I start to get twitchy...
Also, the Navigator pane is awesome for this kind of large document!
I find LibreOffice far more usable than Office.
In particular the Navigator is a big boon when editing long documents in Writer. Being able to name tables and images is a real advantage over how Office handles that.
Speaking more generally, the side panes are a far better use of space than the Ribbon on modern monitors.
Also, one person's clutter is another person's invaluable feature. I use some of the recently added (in version 6.3?) typography features occasionally, and am very grateful for them. To many, those features may just be an extraneous button that leads to a confusing dialogue box.
I guess it comes down to individual use cases and preferences.
Frankly, I spend about as much time hunting around for infrequently used features on the Ribbon at work as I might spend hunting through the menus at home on LibreOffice.
Which leads me to suspect that this is one of those hard problems that doesn't have a good solution available. You either have the features and a crap interface, or no features and an easy interface...
(I haven't experienced the flicker you mention - although I don't doubt it happens.)
Yep, there's a reason I don't rely on O365 for my personal needs.
I have LibreOffice for the big documents (> 10 pages) and the free tier of Google Drive for convenience. I use a third party tool to download the Google Drive stuff and convert it to ODF, just in case Google have some kind of accident or change of business plan.
But then, I've learned my lessons from almost 25 years in the industry. I'm hardly a typical customer. ;-)
I've never been that impressed by Excel's charting.
But then, I was spoilt by Quattro Pro's chart engine - which makes Excel look pretty pathetic. Also, my personal experience - just anecdote, of course - is that tweaking graph options is the second easiest way to crash Excel. (The first being to load a very large file.)
I'm not so sure that options are the answer to why Office is so popular. If so, Quattro Pro would be the default instead of Excel by now.
Also reporting tools like Crystal Reports, SSRS and so forth would be dead right now. And there'd be little interest in tools like R, Grafana etc.
Desktop office suites only need to be "good enough", in the wider view of things.
The answer is much more likely to be a combination of things, including integration with Windows/BackOffice, billing, and so forth.
"LibreOffice then has not changed the software world, but it has perhaps made it a better place."
When I first start Office on my work desktop, I'm asked which format I want to save my documents in. That's a pretty significant change for Microsoft. Remember, they're a company that once destroyed Netscape simply because they might be a threat...
I have far fewer issues opening OOXML documents than I did a decade ago. Interoperability is still best served with a format like PDF, but we're in a much better place than we once were.
I think LibreOffice's impact is far greater in the personal computing market. I do know people with individual or family O365 subscriptions. But I still remember when Office was around four hundred bucks a CPU. By comparison, when you look at the OneDrive storage and the always up-to-date nature of Office 365, it's quite a bargain.
And I'm pretty sure it's not just GSuite that's driven that price down. Most folks I know who are casual users have LibreOffice installed by a relative, because it's "good enough" for their light use. That's got to have had some kind of impact at Redmond.
Thank you for your reply. And Japan?
Because I genuinely hadn't looked up anything before then. I couldn't, because I didn't know which goods would be picked.
Let's ignore unicycles, because you must be joking there. Hardly a huge benefit for our society.
Apples and coffee - I googled "Japan tariff <product>", and it looks like Japan has tariffs for both. (The first hit for apples was a news article on a trade deal with the US, lowering tariffs. The one on coffee showed a table with a 20% import tariff on the type of coffee you'd picked.)
I did the same for Russia, and found that it's banned the import of apples due to counter-sanctions. So I apologise for including them on the list - I'd forgotten they were under sanctions.
But the fact that I asked about MULTIPLE countries and you're only quoting the ones for which you found a 0% tariff - and ignoring issues of sanctions - suggests that you're not arguing in good faith. You're cherry picking your data to try to support your predetermined conclusion.
We are not, as you say, all consumers. We are also all participants in our own economy. Would British orchard owners be able to survive a 0% import tariff on apples? I certainly have no wish to sell them out. I don't know any orchard owners, but that won't stop me from asking if a 0% tariff is bad for them. That's called being a decent citizen.
You have not convinced me. You cherry picked data, and therefore have lost all credibility on this issue.
There's many things in your comment that I think are wrong, but I'll focus on just one:
"For example the EU is a huge isolationist block of high tariffs and refusal to accept outside products."
To help me understand, can you give me three examples of goods with high tariffs? Also, please prove that the tariffs are high by showing the comparative tariffs of the following countries for those same goods: USA, Japan, Russia, South Africa, India, China, Australia.
(Please quote your sources.)
I'm genuinely curious. I hear this said a lot, but I never see any proof. Yes, the EU is protectionist. So are almost all countries and trading blocks in the world.
So I'm unsure what the point of leaving such protections behind is. These figures would help me greatly. Thanks!
This "majority vote" - would that be the 2016 Referendum that saw the largest electoral crimes this century? The one where Vote Leave purposefully funnelled £675,000 to other campaigns in order to bypass spending limits?
That's almost a 10% overspend, which is pretty unheard of in our democracy. And it was done in the final three days, used to fund millions of Facebook ads - so it's not like we can say "Well, it had no effect".
The referendum result shouldn't be respected. Not unless you approve of electoral crime.
And it would be very odd to be going around wanting a democracy for which electoral crime is a cornerstone.
If it weren't advisory, the referendum would have been re-run - just as a council or constituency election would be if such crimes had been found. But it was advisory. So we have a simple choice - approve of electoral crime and "respect the result", or not approve of electoral crime and reject the result.
The possibility of No Deal did not "bring the EU to the table".
Asking them to tweak the deal got them to the table. May didn't ask because she had her deal, but she couldn't get it approved in Parliament. Johnson took that as a sign it needed to be changed.
The fact that it couldn't get through Parliament is what got the deal changed, not the ravings of the ERG.
Oh, and the "new" deal is just the old deal, but worse.
We'll replace the execs with a simple randomiser that returns Yes or No.
We'll replace the Fortnite players with bots.
If, in six months time, the companies are still there and making money, then the executives are overpaid and pointless.
If, in six months time, people are still watching the streams in the same numbers, then the Fortnite players are overpaid and pointless.
Full disclosure: I'd like to place all of my money on executives being overpaid, and Fortnite players not being overpaid. Providing I can find someone dim enough to give me good odds, that is...
I never said trade without deals was a unicorn. You're putting words into my mouth. What I said was that countries still do deals. I was implying that deals are preferable.
The Leave campaigners made that same point - deals are preferable to not having deals. They said as much in 2016, when they said we'd get great deals.
Now they're saying that no deal is what they promised.
These two are mutually exclusive. They were either lying in 2016, or are lying now.
(With the exception of those like David Davis, who have the unenviable but believable excuse of ignorance.)
That's the unicorn. The whole project is one single unicorn. Not individual trade patterns - the whole of Brexit.
So no, not unicorn enough for me.
Here's a suggestion for what you can say in a few years' time though: "I thought I was being a patriot. It turns out they were lying to me about that too."
So, I gave it a chance. I opened it up, and oh look! The economic section is written by Patrick Minford.
The Patrick Minford who said that leaving the EU with a deal would mean we'd have to wind up our manufacturing sector? Shurely shome mishtake?
So I figured I'd jump right ahead to that, and see if he's willing to admit this again. It's a good test of how honest this document is.
Nope. No mention of it.
Lot of breathless words about how everyone else trades just fine - although a failure to admit that they trade WITH DEALS. Lots of talk about getting rid of undemocratic EU regulations, no talk about how democratic WTO regulations are(n't).
Some brief talk of "short term economic disruption", which is a nice way of glossing over his previous more academic efforts in which he admits the real harm Brexit would do. Definitely nothing about winding down entire sectors of our economy. Minimal use of actual figures, preferring to use rhetoric.
Lots of talk about growth, because growth sounds good and confuses those not familiar with economics. If you sold Senegal three toilet roll holders versus last year's one, that's triple digit growth! So you're perfectly justified in saying we should leave the EU, where we sell hundreds of thousands but have minimal growth - just a solid, reliable and profitable market. Let's rush to trade with Senegal on WTO terms!
We've occasionally seen better misuse of "growth" in the plethora of storage technology press releases that El Reg routinely eviscerates, but only very occasionally. This is masterful misdirection.
Frankly, that document is not a good place to start.
It's in no way a serious study.
It's fan fiction.
Nothing more, nothing less. Just fan fiction for the Brexit My Little Unicorn contingent.
It works fine, and has never lost my data.
That's beautiful to me.
By comparison, Office - especially Word - is an ugly, ugly thing.
(I have lost more work to Word than to any other program I've ever used. And I spent 15 years of my career working with Lotus Notes. So that's an impressive and worrying statement to have to make about any software.)
And here we see the software industry reaping what it has sown.
We call our developers "rock stars" and "ninjas". Despite the fact that many of them are just writing an if or switch block that decides which function in someone else's library should be called.
But nobody seems to call their QA staff flattering names, despite the fact that they're just as important in the process.
QA staff should be highly regarded. They should have some technical understanding, so that they can help better describe errors.
There are typically two extremes for how QA should be run - the first is that they should just be literate and able to communicate, but need no technical ability. That QA is a kind of user testing. The other extreme is that QA should be technical - or able to be. That they should do more than just running through scripts, but should also be doing things like fuzz testing.
I tend towards the latter. The more you allow the developers to have responsibility for testing (beyond unit tests), the more that implementing the testing - or responding to its results - gets delayed in favour of implementing features. It's just the usual politics. If an organisation has a QA team, it should move as much testing into that team as is possible, in order to avoid the inevitable perpetual sweeping under carpets of issues by the development team. And that means having a more technical, capable QA team.
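For the avoidance of doubt about "things like fuzz testing": at its most minimal, a fuzzer is just a loop throwing garbage at a target and recording what blows up. A toy sketch, with an invented parse_pair standing in for real code under test:

```python
import random
import string

def fuzz(target, runs=500, max_len=64, seed=42):
    """Throw random printable strings at `target`; collect anything that raises."""
    rng = random.Random(seed)
    failures = []
    for _ in range(runs):
        payload = "".join(
            rng.choice(string.printable) for _ in range(rng.randrange(max_len))
        )
        try:
            target(payload)
        except Exception as exc:  # a crash is a finding, not a reason to stop
            failures.append((payload, repr(exc)))
    return failures

# Invented stand-in for the real code under test: a naive parser that
# assumes its input always contains exactly one '='.
def parse_pair(text):
    key, value = text.split("=")
    return key, value
```

Even this naive version finds the crash immediately - real fuzzers (AFL, libFuzzer and friends) add coverage feedback and input mutation on top of exactly this loop. That's the sort of work a technical QA team can own.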
But we call our developers flattering names, and strip the QA budget whenever it's convenient to.
So is it any wonder that Uncle Sam's not convinced QA testing is a skilled job? The industry doesn't seem convinced either...
It seems like they've thrown this together in a ramshackle, hurried manner.
Perhaps we should register a new business named "General Stores); DROP TABLE tblCustomers;---", and send in a timely VAT return? We'd be doing them a favour, really...
(We'd need a few such businesses, obviously. Got to cover all the options. All suggestions welcomed.)
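For anyone who missed the gag: the business name only does damage if the VAT return handler glues it straight into a SQL string. Bind it as a parameter and little Bobby Tables is just another row. A sketch using SQLite for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblCustomers (name TEXT)")

evil_name = "General Stores); DROP TABLE tblCustomers;---"

# Bound as a parameter, the name is data - never parsed as SQL.
conn.execute("INSERT INTO tblCustomers (name) VALUES (?)", (evil_name,))

# The table survives, and the "attack" is just a slightly odd trading name.
rows = conn.execute("SELECT name FROM tblCustomers").fetchall()
```

No amount of punctuation in the name changes that - which is exactly why a hurried, ramshackle system that skips this basic step deserves the probing.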
It may have been a consultant - or consultancy - that left them with Oracle, rather than a decision they consciously made.
I've seen consultants turn up and start recommending that their CRM or SAP system "is best on Oracle" despite there being no Oracle expertise in the organisation and developer/DBA experience being all Microsoft SQL Server.
The product they're putting in will work on SQL Server, but there's suddenly a lot of political pressure to use Oracle because the expensive consultant says so.
Get the consultant drunk enough, or provoke an unguarded moment, and you'll often find that there's a relationship between them and Oracle - they're probably getting a nice commission fee for each install.
Whether this still goes on today, I don't know. But the fact that there's an ecosystem of consultants chasing Oracle's commission crumbs makes me think that this is what probably happened to the City of Sunrise Firefighter's Pension Fund...
It's pronounced to rhyme with "lorry", but I get that a lot... ;-)
(I wrote this up a while ago for submission to El Reg, but was never quite happy with it. All names changed to protect the allegedly innocent.)
My first job was in the mid 90's, working for a big company with serious political dysfunctions. One of the best demonstrations of those issues is something I refer to as "That time I was told to steal a £14,500 switch"...
I was working as tech support in a building that was full of outsourced telephone support desks for some very big IT names. One of my colleagues - let's call him Dave - had just moved from working on one of those helpdesks into a project management role. We got an email from Dave saying that a new network switch had arrived, and could we please locate it and configure it?
(In the mid 90's, a network switch was an exotic bit of kit. These were the heady days of the new 100Mbit "Fast" Ethernet. Switching wasn't a commodity feature as it is today; it was a function for dedicated hardware. And in this case, it was a 12-port 10/100 3Com switch which, including taxes and delivery, cost a little over £14,500.)
Dave explained that a new helpdesk was going to go live, and there were concerns over the performance of the database server that handled scheduling of hardware engineer visits. That was millions of pounds of business each year, and therefore probably the most valuable server in the building - possibly in our division. Analysis had shown that the server's performance was fine, but it was homed on a network with at least 100 clients and yet more traffic from a WAN uplink. Network congestion was most likely the issue.
My boss dispatched me to find the switch. It wasn't at Dave's desk. It wasn't with reception/facilities, who said that they had delivered it to Dave's desk. Dave had only recently changed jobs, so it was likely their information was out of date - I went to check his old desk...
And it had been there.
But it was now with Terry.
Terry was a man of initiative, and had decided that as this had been delivered to Dave's old desk the switch therefore belonged to Terry's department. A short but futile conversation left me certain I wasn't going to leave with the switch, as Terry had repurposed it as "something I can put on my CV". (He didn't quite phrase it that way, but the meaning was clear to us both.)
To complicate things, Terry's department was a flagship project. Big name client, used as a case study, on the tour route for all visiting potential customers - they had serious political clout in the company.
Dave was on leave, and this was 1996 so he didn't have a mobile phone. But after some calling around we managed to get hold of him, and he confirmed that the switch was ordered under his new budget code. He was very unhappy to hear that "his" switch had been poached. It was made clear that there was a deadline for setting up this new helpdesk - his main concern was that Terry might plug the switch into his department's network (they managed their own IT to some degree), and that would make it hard to get back due to their political capital.
My boss assured Dave that this would be handled. We hung up the phone, and I was ordered to go and steal the switch.
Not exactly how I'd planned my day.
It was approaching lunchtime, so I slid round to a helpdesk adjacent to Terry's, and began to very slowly diagnose a non-existent fault on a PC. The moment Terry went to lunch, I pounced. I swiftly disconnected the switch from a serial port, repacked it, and spirited it away. Now we had to decide what to do with it. My boss decided to lock himself in our small office, and read the manual. I was sent out to distract Terry, and do all our pending jobs in the process. On my way out of the door, I grabbed the empty box.
"What are you doing with that?", my boss asked.
"Decoy" was my response.
The ground floor server room was a repurposed meeting room - so it had glass windows. I dashed in, sat the box on the workbench, and then left - making sure to lock the door as always.
I spent much of the afternoon running around the building in as unpredictable a pattern as possible. I kept dropping into conversation that we were busier than usual, and I had to go to $department next - knowing full well that I was going elsewhere. On returning to one helpdesk, I heard that Terry was looking for me. Eventually I bumped into him, and found out that he too had been busy - he knew the switch was in the ground floor server room. Eager to help, I went to fetch the key - but never returned, having been diverted by a faulty computer on the way. Anyone who's done desktop support will know the kinds of distractions that can drag you somewhere unexpected. That afternoon, I made sure that they all did.
At six in the evening, I dropped in to our office. My boss was still reading the manual. I was sure that Terry would have gone home by now - his helpdesk closed at five - but I headed back out on the distraction trail anyway. At seven thirty, I got paged (remember pagers?) and returned to our tiny office to hear the plan my boss had come up with. Then we went home.
The next day, shortly past nine, Terry dropped by our office.
"I want my switch."
"It's not yours."
"It's ours, we're a flagship desk, and I want it."
My boss adopted a soft, conciliatory tone. "OK, let's go and fetch it."
We walked to the server room, unlocked the door, and ushered him in.
"There it is. Help yourself."
Terry was both livid and crestfallen at the same time.
My boss hadn't just read the manual the previous day, but had also written and uploaded a configuration for a switch he'd never seen before.
We'd been in since before six, and had racked and cabled it and cut all services across to it - a WAN uplink for the building, a link for each of the local hub stacks (remember 3Com 100Mbits backplane connectors?), a link each for the Exchange, IIS and File/Print servers... And a link for Holly, the multi-million pound database server.
A single network cable whose traffic was worth more money than most people will earn in their entire career. If Terry wanted his switch, all he had to do was unplug that cable.
Terry left without his switch.
That long day and following early start was worth it. Not just for the satisfaction of a job well done, but in other ways. For example, one of the helpdesks ran Doom/Quake servers at lunchtime to help relieve employee stress, and apparently the switch made a noticeable difference to their performance. I was gifted many, many free beers for that.
And finally, I should note that Dave showed great promise as a project manager.
He took all the credit for our work.
In 1990, my money would be on either IPX/SPX or NetBIOS Frames (NetBEUI).
If the office had a Novell Netware server doing file/print, then the former. If they had OS/2 doing that for them, then the latter. Not that this will be news to anyone who was there at the time, mind!
I started my first job in 1995, and never saw a Banyan Vines network - although I often saw it in documentation as supported by products.
After a bit of research I've found that 3COM had an old network protocol called 3+ that was based on XNS, but by the time of this story they'd thrown that out and joined Microsoft on the LAN Manager NetBIOS and IPX/SPX train. By the time I started work, 3COM was mostly associated with the hardware layer - network cards, hubs, and those newfangled switches...
Loath though I am to defend Microsoft, this really wasn't their issue.
The important thing to remember here, which isn't mentioned in the article, is that in 1990 switching wasn't really a thing for the average network. It would have been hubs, broadcasting every packet to every machine, with the network card simply ignoring anything that's not for its own MAC address.
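The difference between the two behaviours can be sketched in a few lines. This is a toy simulation (not any real product's code): a hub repeats every frame out of every port and leaves the NICs to discard traffic not addressed to them, while a learning switch builds a MAC-to-port table from source addresses and forwards each frame to just one port.

```python
class Hub:
    def __init__(self, ports):
        self.ports = ports  # port id -> list collecting delivered frames

    def send(self, in_port, src_mac, dst_mac, payload):
        # A hub is a dumb repeater: every other port sees the frame,
        # and each NIC must ignore frames not addressed to its own MAC.
        for port, rx in self.ports.items():
            if port != in_port:
                rx.append((src_mac, dst_mac, payload))


class Switch:
    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}  # MAC address -> port, learned from source MACs

    def send(self, in_port, src_mac, dst_mac, payload):
        self.mac_table[src_mac] = in_port  # learn where the sender lives
        out = self.mac_table.get(dst_mac)
        if out is None:
            # Unknown destination: flood, just like a hub would.
            for port, rx in self.ports.items():
                if port != in_port:
                    rx.append((src_mac, dst_mac, payload))
        else:
            # Known destination: forward to exactly one port.
            self.ports[out].append((src_mac, dst_mac, payload))
```

With a hub, a busy database server's traffic lands on every wire in the segment; with a switch, once the table is populated, only the two ports involved in a conversation carry it - which is why that £14,500 box made such a difference.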
To put that into context, I remember the first switch I ever saw. It was 1996 (IIRC). It was a dedicated 1U rackmountable unit from 3COM that had twelve 10/100Mb ports and cost around £14,500.
Yes, that's fourteen and a half grand.
I remember it well, mostly because I was ordered to steal it from a rival department. (It's a long story. Maybe some other time.)
Again, for context, our hub stacks were 3COM 24 port 10Mb units, with the 100Mb backplane connectors grouping them in lumps of 4, and a dropdown cable between each group that glowed a soft red when the teams started playing DOOM at lunchtime...
When we implemented the switch we removed the dropdown cables and plugged each stack into the switch itself, along with our primary SQL Server, the domain controllers, the IIS server (because "intranet" was the latest buzzword), the Exchange server and the WAN link. That really eased up network traffic both for WAN and local users, and was regarded as £14,500 well spent.
These days, if you buy a network device that costs more than about £40 it'll have switching built in, and the scenario described here could never happen on that network. Only the cheapest of kit, or wireless networks (for obvious reasons), have no switching capabilities.
Of course, everyone probably has stories about managers decreeing that "we won't pay for cabling that new floor we've expanded into, we'll use wireless as it'll be cheaper" and then the network grinding to a halt every day between 08:30 and 10:00 as everyone logs on and Windows pulls down their profiles... ;-)
That'll be the closest we'd get to this story these days.
The number of reported flaws isn't a great metric. It's just part of how security should be evaluated.
For example, Windows has far more security issues reported, yet it's still used. And the same can be said of many Linux distributions.
A key difference is that Drupal is just a CMS, and therefore a smaller project - which could lead to an expectation of a lower number of incidents. Balancing that, as a CMS Drupal is under constant attack because it runs on some pretty valuable sites.
What really matters is how security issues are handled. Drupal seem to have a good, responsive security team that has a good handle on it. And in later versions they've tried to prioritise security in their development processes, which is also a good sign.
(Disclaimer: I use Drupal for my own personal website, but not in any professional capacity.)
A different Jet. There were two streams of Jet - Jet Red was the Access database, and Jet Blue was the enterprise variant used in Active Directory and Exchange Server (amongst other products).
Jet Blue became ESE (Extensible Storage Engine), and is very different to Jet Red - in that it's actually reliable and half decent.
Fun fact - sharing a database engine is why Small Business Server died. The AD team and Exchange team used different versions of Jet Blue/ESE. Neither liked the idea of being forced to upgrade to a later version of it because of the other team, and it made support difficult as patching one product might break the other.
This is why it's not at all supported to install Exchange on your AD controllers - it will likely result in issues with your mail databases or - worse - your AD database.
I'm a Debian kinda person, but I know that RHEL uses XFS as its default filesystem for recent versions - so this seems like a fairly dumb move. And OpenSUSE seems to use XFS for /home in recent versions.
They should at least support both ext4 and XFS on that basis alone.
The xattrs reason is plainly not true, as there's a bunch of filesystems that support xattrs perfectly well. One interesting comment I saw on Reddit seems to have a possible answer:
Basically Dropbox may have used a particular attribute as an identifier. That attribute is static on ext4, but may change on XFS. If that's the case then this is nothing to do with xattrs, and everything to do with a bad assumption on the part of Dropbox's development team. (I'm guessing they use it to determine whether a file is the same but changed versus a completely new file which replaced the old one.) They assumed all filesystems would behave like ext4, and now they're finding that this isn't the case and there are some edge cases they didn't expect.
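If the attribute in question is the inode number (my assumption - the comment didn't name it outright), the flawed logic might look something like this minimal Python sketch: treating `st_ino` as a permanent identity for a file, which holds on ext4 in everyday practice but isn't something every filesystem promises.

```python
import os

def file_identity(path):
    # Hypothetical sketch of the assumption described above: treat
    # (st_dev, st_ino) as a stable identity for a file. Stable on ext4
    # in practice, but not guaranteed by every filesystem.
    st = os.stat(path)
    return (st.st_dev, st.st_ino)

def same_file(path, remembered_identity):
    # True if the file at `path` still has the identity we recorded
    # earlier - i.e. it was modified in place rather than replaced.
    return file_identity(path) == remembered_identity
```

Note that even on ext4 this breaks in a common case: many editors save by writing a temporary file and renaming it over the original, which produces a new inode. A robust sync client would need a fallback (path plus content hash, say) rather than trusting the inode alone.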
If this is the case then rather than fix the problem they created, they've decided just to shift the blame and drop customers who they failed...
I suspect that, if done well, Notes/Domino will have a decent niche in the future for companies that don't want to go cloud.
Microsoft is very committed to O365, and that means that future versions of Exchange will lag behind on features - or perhaps not get some of them at all. Some of those features will be obscure back-end ones - but when it comes to things like web access, it might be more obvious.
For example, if O365 gets a new web interface optimised for tablet displays, do you really want to bet that it's going to land in the next Exchange patch? More likely you'll have to wait six months to a year - and then there's the delays in actually applying that patch to your own infrastructure, such as testing and change control.
Notes/Domino might actually become a sensible choice for on-premises because of that. And even if it doesn't, hopefully it'll give Microsoft a reason to compete for on-premises business, rather than simply drive everyone to the cloud.
Having spent fifteen years working with both Notes and Exchange (from 1998-2014 - didn't quite get sixteen years!) I'm a little surprised to read this.
I'd venture to suggest that any admin who said that isn't a Notes admin, but instead a Windows admin who was forced to work with Notes and had no training in it - or willingness to learn.
The major problems are usually ID files and a lack of AD integration. So yes, user administration has a couple of challenges - but later versions of Notes have an ID Vault which helps a lot with that. And even earlier versions (6 onwards?) had password recovery, which also helps a lot - just generate a recovery key and start using it, and your helpdesk will be able to reset passwords much more easily.
User administration around ID files was certainly the weak point.
But the server itself was solid, capable and had a lot of great options. Its handling of storage, mail routing, and replication was generally much better than the competition. For example in a multiple site, WAN-linked environment moving users around is a snip - and very reliable compared to Exchange, which often fails repeatedly on bigger mailboxes. (I've actually had to resort to exporting a mailbox, transferring it as a PST using Robocopy, then re-importing the data!)
I'd definitely agree that at the SMB level Exchange is better due to its integration with AD. But frankly, that market is being ceded to the cloud anyway. However, as your system grows in scale, Notes definitely starts to be much more attractive to administer. It's not without some faults, but I've seen far worse products - and I much preferred administering large Domino environments to large Exchange ones.
I'm more surprised that you found people who liked the user experience - that was what I usually got complaints about!
Microsoft had noticed how embedded Notes was becoming in enterprises, and modelled the Outlook/Exchange combo on the Notes behemoth. But Microsoft, weirdly, never took advantage of the sprawling engine it had created, with its immense flexibility for categorising data and creating custom views on it. Only a tiny proportion of the power in Exchange/Outlook is ever used.
Exchange/Outlook certainly never lived up to the power that was promised in terms of Public Folders and custom forms. A lot of that is down to what I'd call The Microsoft Developer Problem.
Microsoft tends to develop programs with tools it already has - until recently they had a strong streak of Not Invented Here. This tends to lead to products that can be described as "designed by developers for ease of development", with less concern for users or administrators than those groups might like.
By contrast Lotus Notes was designed for collaboration and customisation from the ground up, and many of the design decisions were taken (and boy will this be controversial!) for ease of use for the average user.
Heck, Lotus Notes didn't get a dedicated design client until version 5, if I recall correctly. You could build a new database in the same client you used to access your email and databases, and assuming you had a friendly Administrator willing to put it on a server, you could be up and running within a day or two... and as a Lotus Notes Administrator back in the day, I can confirm that I did see enthusiastic business colleagues outside of IT bring me their little databases that they'd developed to help their team work!
By contrast, Microsoft reached for what it had. The database was ESE, which is OK but didn't lend itself to the same kind of unstructured storage because it still has some notion of tables. That's a fundamental restriction when compared with Notes' proto-NoSQL approach.
Similarly, forms had to be designed using a Visual Basic client - which is kind of overkill and rather intimidating to the average user - and the distribution mechanism for Exchange Forms was complicated and annoying, even for administrators.
Exchange/Outlook was much more capable in some ways - one of the flagship demonstrations at launch was a graphical chess game that sent moves via email, which Notes would have difficulty doing. But these complexities meant that creating a simple holiday approvals system or a sales opportunities tracker was an order of magnitude more work than for Lotus Notes.
It's hard not to conclude that Exchange was built with what was lying around, rather than looking at what people actually needed.
Ultimately none of this mattered as Groupware seems to have been a fad - albeit a decade long one. Integrating workflows within your email platform was effectively killed by the ability to send someone a hyperlink to a web page. Which, when you consider Notes had DocLinks from the start, is kind of ironic - apparently everything else Notes did was overkill.
The world moved to dumb email clients, and chose to move workflow and other specialised features into dedicated applications that send notifications.
This Microsoft Developer Problem extends to many of their products. Skype for Business (or whatever it's called this week) requires an SQL Server to store people's status. Their STATUS! That's overkill, right there - but companies dutifully add the databases to an SQL Server cluster so that their IM solution is highly available. SharePoint has a list of prerequisites that makes you wonder if its true purpose is collaboration or selling Windows Server licences. This design pattern runs through most of Microsoft's platforms.
So whilst in theory Outlook could do what Evernote does, in practice I have little faith in Microsoft's ability to deliver that. Their platform and tool choices would likely have made it difficult to port to other platforms, and cumbersome in use. And this was before Microsoft "opened up", so you know that there would have been a (poor) Windows Mobile client and nothing else...
By comparison, Evernote's technological choices are simple. They use SQLite on the client side. I can't easily discern what they use on the server side, but I'm guessing MySQL and Apache/nginx. I'm also guessing that it's mostly just a simple schema, with perhaps a type, tag, colour and so forth then a blob that gets full text search. The one thing we can be sure of is that they didn't have their own technology lying around, so they had no incentive to choose anything but that which was most suitable for whatever problem they had at the time.
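Purely to illustrate that guess (the real schema is Evernote's business, not public knowledge), a note store of roughly that shape - a notes table, a tags table, and full-text search over the body - takes only a few lines of SQLite, here via Python's built-in `sqlite3` module and the FTS5 extension:

```python
import sqlite3

# Illustrative sketch only - a guess at the *shape* of a simple note
# store, not Evernote's actual schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE notes (
        id      INTEGER PRIMARY KEY,
        title   TEXT NOT NULL,
        colour  TEXT,
        body    BLOB
    );
    CREATE TABLE tags (
        note_id INTEGER REFERENCES notes(id),
        tag     TEXT NOT NULL
    );
    -- FTS5 gives full-text search with no extra infrastructure.
    CREATE VIRTUAL TABLE notes_fts USING fts5(title, body);
""")

def add_note(title, body, tags=(), colour=None):
    cur = conn.execute(
        "INSERT INTO notes (title, colour, body) VALUES (?, ?, ?)",
        (title, colour, body))
    note_id = cur.lastrowid
    conn.executemany(
        "INSERT INTO tags (note_id, tag) VALUES (?, ?)",
        [(note_id, t) for t in tags])
    conn.execute(
        "INSERT INTO notes_fts (rowid, title, body) VALUES (?, ?, ?)",
        (note_id, title, body))
    return note_id

def search(term):
    # Returns the ids of notes whose title or body matches the term.
    return [row[0] for row in conn.execute(
        "SELECT rowid FROM notes_fts WHERE notes_fts MATCH ?", (term,))]
```

The point isn't that this is what Evernote built - it's that a single-product company can pick something this small and suitable, rather than reaching for whatever enterprise engine happens to be lying around.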
And this is even before we get into the question of why Microsoft then chose to not develop the Notes feature of Outlook for fifteen years. Maybe they thought it was OK? Maybe they simply didn't want to put development resource onto that feature when they could instead be focusing on the Ribbon, or Sharepoint integration?
Evernote has no such problems with focus, because they have just one product. The closest they'll get is juggling the priority of business versus personal account features.
That's why Evernote has succeeded where Microsoft has failed. They're free to make choices that benefit the customer, rather than fit into a corporate platform strategy.
(Also, I think Microsoft's Evernote competitor is really OneNote. But this is already a very long post, so I'll let others talk about that.)
Biting the hand that feeds IT © 1998–2020