* Posts by BinkyTheMagicPaperclip

1300 publicly visible posts • joined 11 May 2012


Microsoft to kill off third-party printer drivers in Windows


Re: Paperless office

They have. Offices are now *mostly* paperless. It's months since I had to print anything personally, and the last thing I had to print for work was a shipping label about 2.5 years ago.

However, there are occasions when paper still wins. It can be read and annotated without power.

IBM Software tells workers: Get back to the office three days a week


It will of course depend on the individual house, insulation, and personal requirements, but in my fairly modern house the difference is not actually that large. You're not moving around much in most WFH jobs, and it can be quite cheap to keep one room warm.

The increase in electricity I did find notable, but it's still far below the commute cost. Even with the extra energy requirements, and accounting for a SIM for 4G Internet backup, the monthly cost is still at or below a week's commuting.

ArcaOS 5.1 gives vintage OS/2 a UEFI facelift for the 21st century


Re: OS/2 and IBM?

I'd say it's a great deal more complex than that. IBM made several large mistakes

1) Trying to recapture the PC market and make lots of money doing so

1a) Trying to capture *everything* by going PowerPC

2) Concentrating solely on business

3) Continually half arsing OS/2 architecture

4) Targeting the 286, driver support, etc

OS/2's competitor was not Windows NT, it was primarily Windows 2.x-3.x with a smattering of NT.

I don't have the time today to do a fully thought out post I suspect very few people will read or discuss, but in short :

Memory requirements were a real thing in the early days of OS/2 and NT. 16 bit Windows for all its faults side stepped this.

Application and driver support were a real issue.

OS/2's 16 bit architecture is, to say the least, odd. There are reasons for this. Unfortunately, when 32 bit OS/2 was created IBM did not architect for the future like NT did; they implemented a mostly 16 bit kernel with a user land that was in part 32 bit (it became more 32 bit in later releases) and equally odd.

If IBM had re-architected OS/2 properly - ditching the synchronous message queue, simplifying the driver model, and probably also implementing some win32 support - it might have lasted longer. It's very debatable, in retrospect, how achievable that would have been.

Application support was and is always the largest issue. Windows captured it first - I'd say the battle was lost as early as 91-93, and in 91 IBM were still floundering with OS/2 1.3. There were some innovative programs created for OS/2, but in the main the large players either stayed on Windows or created half arsed versions, often later than the Windows equivalent (see e.g. Corel Draw, which I understand was a good app, but half its bundled programs were win16, and Corel Draw 2.5 for OS/2 was already behind Corel Draw 3.0 for Windows when it was released).

OS/2's driver model was a mess, and the install program was awful. Architecting this properly from the start might have helped.

I don't think NT by itself was as large a threat as people thought. In the long term an NT architecture OS was always going to win, but the need for multiple users, proper services and so on was less critical until the late nineties. By that stage OS/2 had already lost. NT was memory hungry, had few native (32 bit) apps, lacklustre DOS support, a very basic interface, and wasn't very good at running games. OK as a server OS for the time, no good for consumers.

It was '95 that ate OS/2's lunch, and the writing was on the wall well before then with win32s. '95 was good enough, and ran the latest software. OS/2's Windows compatibility ended at v1.25a of Win32s.

It's a hot mess of an operating system. A load of interesting (for the time) but not entirely finished features, wrapped in a package that penalises as it rewards you.


The question here is 'why'. OS/2 uses very unusual memory layouts, both for compatibility reasons and because of its code layout (a kernel with a lot of 16 bit code, and a mostly 32 bit userland).

I can't remember what access Arca Noae have to the OS/2 source code. If the answer is 'none' (enhancements all through published interfaces) perhaps they're stuck, but it would probably be enough to let a restricted set of programs allocate and manage PAE memory, as you mentioned, through new interfaces. Most programs don't need that much support, and realistically OS/2 is unlikely to greatly grow its number of available programs now.


Re: Finding it completely painless so far

I didn't do anything special in this case - it has a Primary HPFS partition, then an extended partition with logical HPFS and JFS drives in it. Following that is a primary A5 FreeBSD partition - I think I created it with the FreeBSD install program rather than doing so in OS/2 and fiddling with partition identifiers.

Finally there's a 2GB primary FAT16 partition I created after trying to run VistaPro under a DOS VDM and finding it threw up its hands at installing to larger partitions (the other way around is to set up the requester, share a drive, and twiddle bits so it only shows 2GB, but I couldn't be bothered doing that).

Is reaching out to Arca Noae for support an option, or is it an unsupported configuration?


ish. Google for 'Dooble OS/2' - probably your best bet at the moment. Otter should be coming later.

Unfortunately porting has been affected by the Ukraine war (in Otter's case), and by the Rust support needed to build Firefox.

Firefox under OS/2 does work well, but it's out of date.


Finding it completely painless so far

I'd been running a couple of copies of 5.0.x but decided to go for a 5.1 upgrade. Burned a CD, booted from it (on a Dell Precision T7400), selected upgrade, let it reboot a few times - and done!

Currently running an MBR based disk with a combination of the Airboot boot manager, HPFS, JFS, and a FreeBSD partition - didn't find that tricky to do. However I do have another machine with Warp 4, DOS, Linux, and OpenBSD, and a 486 with OS/2 1.3, 2.1, 3.0, PCDOS, and NT 3.51 so it can safely be said I have history with doing multi boot.

I'm impressed at what Arca Noae have done, but it's still an enhanced mid nineties OS whichever way you look at it. I've got multi monitor working with SNAP graphics, but it doesn't play well with some apps (they just see one large desktop), and modern ArcaOS does not play particularly well with SCSI. Frankly having SMP, JFS, NVMe, UEFI, and GPT is a worthwhile trade off!

I did use OS/2 right up until 1999, but was determined not to go into the new millennium running OS/2. It was great for about five years (93-97/98), but the lack of game support, and Python really started to bite, so I jumped ship first to NT 4 running Object Desktop NT, and then to Windows 2000.

Microsoft calls time on ancient TLS in Windows, breaking own stuff in the process


Re: This will be fun

A browser is a shell that provides networking and a user interface to one or more web page renderers.

For Internet Explorer it handled various revisions of IE standards.

For Edge Chromium it has at least three engines within it : Chrome, Edge, and all the revisions of Internet Explorer.

The only supported method of viewing Internet Explorer pages is to use Edge or Edge Chromium in Internet Explorer mode. Internet Explorer the browser has been out of support for some time, and it's therefore important to differentiate the support of a browser itself, and which web pages it will render properly.


For once, probably not going to be an issue for me

TLS 1.0 was an issue because various embedded kit only supported 1.0 (and not properly at that).

Most code moved across to 1.2 fairly easily, but some legacy systems had issues and needed a firewall exception until very recently - I believe that's stamped out now.

Everything is now at a minimum of 1.2. Anything I've seen capable of handling TLS 1.1 can also handle 1.2. 1.3 is probably the next break point, I think some of the newer embedded kit hasn't seen the need to go beyond 1.2, and hopefully firmware will be updated soon. Windows and SQL server have been upgraded, and it's years since everything moved off 2008.
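As a hedged illustration (my sketch, not anything from the original post), that "minimum of 1.2" policy can be enforced in client code with Python's standard ssl module:

```python
import ssl

# Build a client context with sensible defaults (certificate checks on),
# then pin the floor to TLS 1.2 so 1.0/1.1 peers are refused outright.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Anything below the floor is now rejected during the handshake;
# TLS 1.3 remains available if both sides support it.
assert ctx.minimum_version == ssl.TLSVersion.TLSv1_2
```

`minimum_version` has been available since Python 3.7; on older releases the equivalent was fiddling `ssl.OP_NO_TLSv1`-style option flags.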


Re: This will be fun

IE the browser is out of support, IE the rendering engine is very much still in support, and can be enabled either via options within the browser, or with more control using an Enterprise Mode Site List and a group policy.
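For the curious, an Enterprise Mode Site List is just an XML document pushed out via that group policy. A minimal sketch, built with Python's ElementTree - the element names follow my recollection of the v2 schema and `intranet.example.com` is a hypothetical host, so verify against Microsoft's documentation before deploying anything:

```python
import xml.etree.ElementTree as ET

# Sketch of an Enterprise Mode Site List (v2 schema, from memory -
# check element names against Microsoft's docs before use).
site_list = ET.Element("site-list", version="1")
site = ET.SubElement(site_list, "site", url="intranet.example.com")
ET.SubElement(site, "compat-mode").text = "IE8Enterprise"
ET.SubElement(site, "open-in").text = "IE11"  # force IE mode for this host

xml_text = ET.tostring(site_list, encoding="unicode")
```

The resulting file is hosted somewhere Edge can reach, and the group policy points browsers at its URL.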

Take a wild guess why I know chapter and verse on this :(

Want tech cred? Learn how to email like a pro


Re: reshuffle

It really isn't - I haven't used Thunderbird for some time, but will probably start doing so once I move e-mail provider; it made quite a good job of it.

Unfortunately Outlook's implementation is poor, because 'conversations' are not properly threaded, especially when parts of the e-mail trail include different participants from other parts. It also means attachments do not show as present in the trail unless the whole trail is expanded and navigated - a real issue.


Re: Its all about *efficient* communication...

There are circumstances where a phone call is more efficient than e-mail, but they're pretty minimal, and the occasions where they're superior are usually catered for by instant messaging.

The most common reason is that one of the people can't type at a reasonable speed, which can be an accessibility issue.

The other reason is that a scheduled call means people are forced to answer in a timely manner. This is not specifically an inherent advantage of a phone call.

If someone is writing 'a book' then usually by definition it is a topic ill suited for a real time discussion. Lengthy and complex responses are not appropriate for a phone call, as they typically rely on one or more recipients effectively wasting their time whilst responses are outlined, clarified, or investigated.

E-mail is used for several reasons :

To provide a record of what was communicated, phone calls rarely being recorded. Which is why, even after a phone call, no-one with any sense will take actions unless the points are confirmed - in an e-mail.

It's searchable, and can be categorised

It can be forwarded

It is not an instant communication technology, although many people try to use it in that way

It allows for a considered response that can be reviewed repeatedly by the participants

It's easy to add attachments, and it's easy to find messages with attachments

Multi platform, highly accessible with appropriate clients


I know what you did next summer: Microsoft to kill off Xbox 360 Store


Not unexpected, but the inability to purchase DLC is a pain

I had a bit of a last minute purchase rush earlier this year when the Wii U eShop closed for new purchases. The one thing that is rather annoying is the inability to purchase DLC, so whilst you've already purchased the game, or subsequently obtained it on physical media, you may be missing notable sections of content.

I'm pretty happy I obtained everything I reasonably wanted to, but it is a problem for anyone looking to collect in future years - piracy is then the only option to obtain some DLC.

To be fair to Microsoft, backwards compatibility of 360 games on an Xbox One is good enough that I've not bothered considering a 360 for the few games that need a real 360. The number of Xbox Live only games appears to be even smaller: there's Bangai-O HD: Missile Fury, which is a little too bullet hell for me, and Prince of Persia Classic later reached other platforms. I will have to do a search to see if it's worth bothering buying a system prior to the shutdown.

80% of execs regret calling employees back to the office


Re: unpopular opinion: no, WFH and WFO are not the same.

Muddling through is technical debt, like many other bodge jobs. With a tight-knit, clued-up team that's been present since project inception documentation can be initially omitted, but it's not a good idea.

It's still technical debt, the only time no documentation is a valid option is when the project has a defined (short) lifespan, you 'know' the same team will be present for all of it, and people's memories won't start to fade. Funny how projects always last longer than you think, though, isn't it?

If you leave it until you 'have' to produce documentation, it's too late. That's not 'deferring the effort', it's leaving it until you have no other option.

At the leaving it to the last minute stage, it's very probable that knowledgeable members of the team have left and are uncontactable, memories have faded, and the level of support and funding that was available early in the project's lifespan is now less than it was.

It is ultimately a business decision - larger upfront cost, smaller expense later, or smaller upfront cost, much larger cost later. At the late stage it's likely to cost you in customer goodwill, too.


Documentation, it's not optional!

Anyone who says the team-working or osmosis method of passing on knowledge is definitely the best method needs to have a long hard look at themselves.

Yes, I'm sure there's a small minority of situations that work better in person, but we're IT workers, discussing topics on an IT site. We're supposed to make this work!

I've been in both situations, and for the 'grabbing someone for osmosis' method

Has minor advantages in that you can 'see' if someone is busy (but that's fairly equivalent to an online status, and in both cases they may tell you to go away)

If they're next to you, they may proactively notice you need help

There can be some advantages purely in being in the same room as someone for human interaction

The 'water cooler' situation. Personally I've found this to be extremely rare, and just as prevalent or better online, your mileage may vary.

I'm struggling to find more than that. Disadvantages are legion :

Explanation is usually oral; you need to implement and remember it, or document it immediately.

Just because osmosis *can* happen, does not mean it does

If a knowledge holder leaves, their knowledge leaves with them. Been there, done that, had to learn from scratch myself on numerous occasions.

Once *you* leave, knowledge leaves with you; you need to replicate the osmosis training to others whilst you're working, with each new person needing the same training.

For properly organised training (which is not specifically WFH related, but greatly helps for it)

There is some existing documentation

Knowledge gaps are determined based on support cases, customer demand, and new features

Time should be allocated for writing documentation. This is an integral part of working, not something thrown together in five minutes.

This *will* impact on other work tasks

There are both documents, and procedures. Procedures should ideally include logging and saving of data

It's also iterative - so if the documentation or procedure is created and is unclear, it is clarified or training is provided, until it can be handled.

This ideally only needs to be done once (reality is different, but that's the ideal).

Management needs to accept that, as mentioned, creating this takes time, and also that some situations are simply not amenable to a standard procedure - they require background experience.

It's also true that, regardless of the knowledge transfer process, some people are simply better at working out issues from the information available. Whether in person or online, some people will progress a request based on a minimal amount of information. Others will fail to progress, despite a series of clear unambiguous steps. The trick is to maximise what can be achieved starting from a lower level of experience.

Note also that, whatever your level of experience, you *will* forget things after a while unless you're using them every day. Self written documentation helps you too when you forget, and having to codify knowledge into a procedure can involve learning new technologies (viz. recently-ish I created, in PowerShell, a procedure I would personally have used awk to resolve, having to learn how to achieve the same result. Mostly, PowerShell was an advantage (although the absence of awk's NF needed to be worked around), and it looks much less like line noise).
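For anyone unfamiliar with the awk idiom mentioned above: NF is just the per-line field count. A rough equivalent of it (my illustrative sketch, not the poster's actual PowerShell procedure) looks like this in Python:

```python
# awk splits each line on runs of whitespace; NF is the field count
# and $NF is the last field. The same idiom in Python:
line = "alpha   beta gamma"
fields = line.split()   # str.split() with no argument collapses whitespace, like awk
nf = len(fields)        # awk's NF
last = fields[-1] if fields else ""   # awk's $NF (empty line -> no fields)

assert (nf, last) == (3, "gamma")
```

PowerShell has no built-in NF, which is presumably what needed working around - typically by splitting each line yourself and counting the result.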


Re: unpopular opinion: no, WFH and WFO are not the same.

Utter rubbish, based on a poor structure.

The 'just ask someone in person' method of osmosis relies on a team that isn't distributed, and is continually in the office. It also leads to duplicated effort, a lack of documentation, and a loss of knowledge over time.

What do you do?

You have regular meetings, online chats, and those with knowledge document systems and create procedures to help those without knowledge. Knowledge gaps are identified and escalated.

It also means things are documented *properly* if you're doing your job, rather than keeping everything in your head and parts of that knowledge eventually fading.

The one thing I would highlight is unrealistic expectations of what it requires to do this. If this is implemented properly it means two things

1) closure rates in a tiered support structure will go down the higher the tier, because successful knowledge transfer means easier cases are now closed rapidly in a lower tier

2) the time it takes to create documentation and a proper procedure is non trivial. I repeatedly see the attitude of 'just throw it together in 45 minutes'. A load of easily broken shonky SQL does not compare to a stored procedure that performs error checks, and logs the data before and after, who changed it, which case it related to, the reason, etc.

There is a large difference between something that works for someone with extensive system experience, where they can adjust things on the fly, and a turnkey procedure that handles edge cases and warns about errors.


Re: unpopular opinion: no, WFH and WFO are not the same.

Or perhaps your team is globally distributed and 'overheard conversations' occur during twice daily standups and online chat, or scheduled meetings, because you've decided to ensure everyone is in the loop instead of leaving it to the whim of 'overheard conversations'.

Naturally WFH does increase utility bills. However even with extra electricity, gas, wear and tear on equipment, and (if you're responsible) backup Internet and possibly a UPS, it is *still* considerably cheaper than commuting by a long, long way (I'd estimate the monthly cost at up to a week's commute).

I'm not a fan of wearing headphones just to shut out noisy colleagues, and WFH it's nice and quiet. If I want music, speakers are used - why bother with headphones? For phone calls there are headphones, but that's the same at home and in the office, because work moved off VoIP in favour of Skype/Teams a long time ago.

Get 'em while you can: Intel begins purging NUCs from inventory


No loss

Most NUCs have only the virtue of being relatively small. They're rarely fanless, and aren't incredibly fast or noticeably inexpensive.

On the other hand, a second hand Wyse 5070 off ebay (cost: under 100 quid with power adapter) is fanless (if you choose the non extended edition; the extended one has an expansion slot and a fan), as fast as a pretty decent PC from 2009, and perfectly adequate for web browsing, video playback (although I've not tried it with a 4K monitor), productivity, and some (old, the GPU is very weak) gaming. I stuck in an extra 16GB memory to take it up to 20GB, a 1TB M.2 SSD, and it's been flawless as a daily browsing box running FreeBSD. Total cost, under 200 quid, although I did also re-use a DisplayPort to HDMI adapter. It's DisplayPort++ (dual-mode), so it shouldn't even need an active adapter.

I tried for months with a Pi 4. For a desktop (running Raspbian) it was sluggish and required frequent reboots due to graphical corruption. So glad to see the back of it.

RIP Bram Moolenaar: Coding world mourns Vim creator


Brilliant program

Still use it most days. Vi is essential learning simply because any Unix with sense (so not all Linux) bundles vi or vim (and hopefully you don't need to resort to ed).

Search and replace, regular expressions, then really useful commands such as :sort u or XML validation.
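As an aside, vim's :sort u (sort the selected lines, keeping only unique ones) maps onto a one-liner in most languages - a quick illustrative sketch, not vim itself:

```python
# vim's ":sort u" sorts the addressed lines and drops duplicates.
# The same end result on a list of lines in Python:
lines = ["pear", "apple", "pear", "banana"]
sorted_unique = sorted(set(lines))

assert sorted_unique == ["apple", "banana", "pear"]
```

The set removes duplicates and sorted() restores ordering, which is exactly the pair of operations :sort u performs in one command.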

Bram will certainly be missed



Re: Spoiler alert - game solution

Try offerings from Wadjet Eye games, particularly Blackwell and Unavowed. They have great stories and the puzzles are usually simple (except a very poor puzzle in the first Blackwell game where you have to talk to the same person three times before they'll admit they know something. It was only their second game, and this wasn't repeated)

Middleweight champ MX Linux 23 delivers knockout punch


There are concerns with sudo due to its size and configuration complexity, but the general principle is sound.

Try doas on FreeBSD and OpenBSD instead.


Glad it has some documentation

They've created a user manual with a load of information, and forums which will be searchable, immediately placing it way above Devuan.

I'm sure Devuan ultimately works but its documentation is absolutely dire. 'Ask on IRC' rarely works well, and the fallback options are searching Google or seeing if the documentation of Arch Linux (usually excellent) covers your situation.

Soft-reboot in systemd 254 sounds a lot like Windows' Fast Startup


Re: Of course, hibernation predates systemd (been around a long time in Linux)

If I'm reading the documentation correctly (and it's not something I've tried, so implementation may vary) whilst hibernation does need a swap partition or file, Linux doesn't have to use it for swapping.

The documentation also notes that hibernation can work with a swap size smaller than memory. Which does make sense, both because memory is usually not fully committed, and because there will be duplication, discardable memory, and cache - all of which can potentially be junked.
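A toy back-of-envelope illustrates why the image fits - the numbers here are invented for the sketch, not taken from any documentation:

```python
# Hypothetical memory breakdown on a 16 GB machine at hibernation time.
ram_gb = 16.0
page_cache_gb = 6.0   # clean cache: droppable, doesn't need saving
free_gb = 3.0         # unused pages: nothing to write

# Only genuinely committed pages must go into the hibernation image.
must_save_gb = ram_gb - page_cache_gb - free_gb

# So an 8 GB swap partition comfortably holds the image
# despite the machine having 16 GB of RAM.
assert must_save_gb == 7.0
```

Compression of the image (which Linux also applies) shrinks the requirement further still.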

Windows uses a hibernation file rather than using a pagefile, and I've found it to be effective and fast (I always hibernate my work laptop at the end of the day). It did take until at least the second Windows 10 revision before it was stable, however.


If you're 'restarting' without restarting the kernel, it's not re-initialising hardware as a warm boot would, which can make a difference compared to a full cold boot initialisation.

It's also faster. Could potentially be useful for certain hardware configurations, or virtual machines?

I do wonder if the increased speed is truly worth the extra complexity, though.


Doesn't seem like a great idea?

I thought this was just a merge of /usr/sbin and /usr/bin, but it seems to move /bin and /sbin to /usr/bin and /usr/sbin.

Apparently Solaris has done this already - I don't think it's a good idea there either.

There are still good reasons to have separate root and usr filesystems and eventually the system *will* have a boot failure and /usr won't be mounted. My understanding (and I see there is disagreement on this) is that /sbin is statically linked, so I know when there's a single user prompt and nothing else I can run the utilities in /sbin to recover the system. The utilities in /bin, possibly not.

I can see the sense of /usr/bin and /usr/local/bin, but in practice I never really saw the point of /usr/sbin and /usr/bin. If your system is booted to the point it can mount /usr, the libraries are probably also accessible, so who cares if user executables are in /usr/bin or /usr/sbin? Use another utility to find out program dependency.

In this case, if /usr is separate and corrupts itself (or worse if it *isn't* separate but is still corrupt) what happens at boot? Is there a minimal busybox shell that enables some interaction, or is it a case of boot from removable media?

Arc: A radical fresh take on the web browser


Intriguing subject line, then oh look, based on Chrome. NEXT!

Sorry, not interested.

I'd be interested in a modern revamp of Gopher that simplifies the web and avoids some of the modern browser issues.

However, Chrome? The same Google Chrome currently generating controversy about the 'this is all theoretical, but we're already baking WEI into the code base anyway to restrict the open web' ?

Not interested. Target an open solution instead.

Google's browser security plan slammed as dangerous, terrible, DRM for websites


Re: Tracking...

It's called a bookmark, *optional* synchronisation between browsers, and then reloading to the point you got up to.

Three signs that Wayland is becoming the favored way to get a GUI on Linux


Re: Wayland is the future, but only with a lot of boring work and a redesign

It works - with some effort. If you have the right drivers (generally not a problem these days), the right compositor (probably not an issue if you choose one of the popular ones), and don't need remote access (there are some ways to do this, I don't know the specifics).

We have to accept that although X works well this is in spite of, not because of, its design. There is so much cruft in X it's unreal, and its basic design has a lot of flaws. Years of bodges and accepted conventions mean the end user experience is generally good.

Wayland generally has a less broken design, has traction from the major players, and a web browser and productivity suite will in general work Just Fine. However there are a lot of edge cases and modern functionality that are still not covered. At some point it's a good idea to try moving to it, so its future direction can be influenced, and the rough edges minimised.


Wayland is the future, but only with a lot of boring work and a redesign

There's a lot wrong with X, despite the fact it works.

There's a lot right about Wayland, despite the fact it doesn't work.

That's a little unfair - now is the time to get into Wayland and attempt to influence its direction before it becomes impossible to change. However, there are many things wrong with it :

It's Linux centric, and most of the discussion on the Wayland mailing list assumes a Linux architecture. Unix is more than just Linux and this is frequently ignored.

There are multiple compositors. Windows and other platforms have one compositor, meaning functionality can be enhanced vastly more easily.

Compositors are generally based on one of the base implementations, such as the wlroots library or Weston (probably wlroots, because despite Weston being the 'reference compositor' it doesn't work on much other than Linux).

Let me repeat that again : the *reference* compositor is effectively Linux only. Fuckwits. There's a FreeBSD port in progress and version 8 (i.e. old) supposedly runs on NetBSD.

If functionality is added to a reference compositor library, such as a new protocol addition in wlroots, it does not automatically flow down to compositors based on it. This means that the FreeBSD suggested compositor, Hikari, still doesn't have the protocol extension for sensibly managing multiple monitors, three years after it was written. If you're on FreeBSD, use labwc instead.

On X, the xorg.conf file was standard. On Wayland each compositor has a different config file format, which frequently omits functionality, such as the ability to set up multiple monitors exactly how you want. The functionality is in Weston, for instance, but that's effectively Linux only, so you're stuffed on other platforms and need to run programs after the compositor has started to do what you want.

Network transparency is compositor specific, not part of the reference compositor. Colour management hasn't been tied down yet. Lots of other things are in flux.

Wayland has been around for over a decade now and is missing some very basic functionality.

Due to the compositor design, power is being concentrated in the hands of the large desktop environments rather than small window managers. This is also an issue with X - installing an application that expects the ability to create icons, on a window manager that doesn't support that, leads to error messages and other issues. However with Wayland the issues are more noticeable.

If the Wayland ecosystem has any sense they'll coalesce around one compositor that functionality can be bolted on to, and that is tested on multiple platforms for every release. Oh look, a flying pig.

Bizarre backup taught techie to dumb things down for the boss


Bring back OS/2

It had a shredder. Once you dragged files to it they did not come back out.

In version 2.0 the shredder would shred pretty much anything, even the objects you're not supposed to shred. This got patched.

Free Wednesday gift for you lucky lot: Extra mouse button!


I'm ashamed to say I didn't know or had forgotten about the browser functions!

It's not entirely accurate - middle button title bar click appears to only open a tab on Firefox, not Chromium or Edge, but the other functions work. Thanks for that, it's very useful!

This is despite the fact my usual browsing box runs FreeBSD, so middle button pasting is the norm. I probably should refresh myself on shortcuts.

The title bar click to send it to the back must be window manager/compositor specific. It does nothing here on the Wayland labwc compositor.

In the If I Ruled The World camp, the old CUA shortcuts of ctrl insert, shift insert, and shift delete would work everywhere.

Brit broadband subscribers caught between crappy connections and price hikes


POTS is disappearing for everyone in the UK at the end of 2025, so no-one will be able to plug their existing analogue equipment directly into the master socket; it will need adapters.

I suppose there's nothing stopping a product that supports extensions from adapting to VoIP, but frankly it's probably easier to use a number of wireless VoIP phones/run VoIP on your mobile, or stick an adapter on DECT.

If you want reliable, in decreasing order of expense try Andrews & Arnold, Zen Internet, or possibly Plusnet (BT, but better than basic BT). Although I haven't heard much about Plusnet recently.


Re: 4G or ADSL backup

That is unfortunate! In that case there's always the possibility of a separate FTTP connection, and hoping any issues don't occur at the same point with both FTTP connections. That's going to be pricey in any case.

A 4G SIM seems to be between 6-10 pounds a month; it's a pity you can't really have a pay as you go plan that doesn't expire, but it's still about as much as a single day's train journey into the office, so I might as well pay it.

I'm presuming you can't put up a 4G aerial to improve your reception? These certainly exist for narrowboats; I'm presuming it's also possible to install them in a house.


4G or ADSL backup

If you want a truly reliable connection, a separate Internet provider is a necessity. I'm with Andrews & Arnold who are at the top end of pricing in the consumer space, offer many business services, and are pretty technical. Overall this has been trouble free, but even so there are occasional dropouts for a couple of minutes, and one recently for nine hours - that was a BT major service outage.

Whilst A&A are good, and have consistently high speeds when the connection is up, they and practically all other providers are ultimately at the mercy of the infrastructure. If your Internet connection is that important, and in these days of home working it often is, we all need to be looking at backups.

When I had the recent connection outage I got around this with a mobile hotspot, and there are also many mifi devices on the market. Some routers have automatic cellular backup, my Zyxel VMG3925-B10B supports '3G' dongles but will happily handle 4G options. Following the outage it has a ZTE MF833U1 4G dongle plugged into it which 'just worked' (*) to automatically failover and restore, although the resold 3 SIM it was bundled with only supplies a 3G connection in my area so I need to replace it with an O2/Vodafone/EE SIM which do support 4G.

(*) It 'just worked' after I disabled the bridging config I had on the router - previously I used PPPoE passthrough to my firewall. The router will understandably only provide seamless failover and restore when you're using a standard NAT config. I'm presuming IPV6 will failover fine too, but haven't tested this yet.


Re: fibre

I understand El Reg's style guide is now unfortunately 'American English first', which does seem odd when the subject of the article is the UK.

What it takes to keep an enterprise 'Frankenkernel' alive


Just because it's hard doesn't mean you're in the right

I do sympathise to some extent, but you can't have your cake and eat it. Want to do as you wish? Rewrite the necessary code, or relicense it with permission.

This is also still a considerable level below what Microsoft does with Windows. If you read Raymond Chen's Old New Thing blog you'll know Windows has custom installer code built in to shim 32 bit programs that still shipped with 16 bit installers, custom memory allocators to work around broken memory allocation, and many other shims to allow for incorrectly coded but popular applications.

Also, Windows has a driver model that doesn't involve sticking everything into the kernel, instead moving functionality up to userland/ring 3 where possible. Although I'll admit I'm not up to date on how much progress Linux has made moving driver functionality outside kernel space.

Alternatively, the OpenBSD route could be taken : security first, (almost) no compromises. If the change breaks applications, the applications have to fix it. In the end it works but can involve a lot of short to medium term pain.

Mummy and Daddy Musk think Elon's cage fight against Zuck is a terrible idea


We all know it'll never happen

Musk is a professional troll. He'll say whatever gets headlines, but it'll never happen.

Still, someone (undoubtedly Musk) will come out of this looking like an idiot, so it's a win/win.

Bosses face losing 'key' workers after forcing a return to office


Are the percentages additive or not?

I'm presuming happy, motivated, and excited (total of 88%) aren't out of 100% but instead are probably the same third of people who want to be in the office, leaving a huge 70% who don't want to be.

Working from home is one of the best things work ever did. They get more out of me, and better coverage (start later, finish later). I've just had a report from someone else in the company with caring responsibilities, for whom WFH has made it possible to provide far more care than if mandated attendance was a thing.

You can't push initiatives such as good work/life balance, prioritising mental health, encouraging exercise - and then force changes that actively work against that.

At least in my case my team are globally distributed. There are rare occasions where in person has an advantage, but modern technology has excellent interaction tools.

Five billion phones are dead in drawers – carriers want to mine them


Re: Mandate by law mobes have to last for five years

Yes : the government. No-one else is going to do it.

It's entirely possible, although it would hit some companies' income. Consumers aren't going to make even the most trivial change that could preserve rare earth metals and marginally slow the ongoing climate crisis.

Consumers would get over it. It'll still make phone calls and run apps. People might even be grateful their phones are reliable.


Mandate by law mobes have to last for five years

Five years of security patches. Five years of app support. Minimum.

If you can't guarantee the patches end to end (OEM drivers, OS support, etc.) you don't get to release a phone.

If the newest release of your app doesn't work on a standard phone released up to five years earlier, you don't get to release it.

You wouldn't put up with this on a computer, but because of mobile contracts, fashion, and poor build quality/support, for some reason we put up with it with mobiles.

Now if you'll excuse me, I'm off to finish moving data off my expensive boutique Fxtec Pro 1 - where the charging port is now all but dead after three years' usage and can't realistically be repaired. It's being transplanted into my almost three year old Unihertz Titan, where security patches were dropped, the wireless is awful, and the manufacturer continues to not abide by the GPL. Still, there's finally a community LineageOS build for it I've managed to shoehorn on, so I'll no doubt use it for a few weeks before getting fed up with the call quality and wireless and buying yet another phone to contribute further to landfill.

Any decent phones out there? Not Apple. *Five* years of security patch support. Call quality you can hear. Usable camera. Wireless charging that doesn't destroy the battery, so the USB port isn't continually in use. Source code released so there's a possibility of a community LineageOS build in the future. Don't care how trendy it is, I want something reliable that will work for years.

Rocky Linux claims to have found 'path forward' from CentOS source purge


It's just Linux vs BSD, again

This is just another 'have your cake and eat it' moment, with the pigeons coming home to roost. I think Red Hat have a point about the direction of open source, but not in their stance.

When Liam says making it easy for competitors to copy work isn't what FOSS was ever about, what he really means is that it's never been what *Linux* is about.

Whilst using BSD source comes with some moral pressure to contribute back to the ecosystem it is not and never was necessary. It's never an issue to get hold of the source.

What the large Linux providers actually want is :

To make money from the software they produce

To do so on the backs of other people's work, some of which was provided for free under the expectation people would not make money off their work directly or indirectly

For other people not to be able to make money off the providers' work, despite the fact they have already done the same thing

As a sub point, when they manage to commercialise some of it, for the large cloud providers not to break their selling model

To still have other people carry on to provide them free labour

I can see this may attract a large amount of criticism and down votes, but consider that Red Hat and others, despite their code contributions, have driven and continue to drive Linux in a direction beneficial to them, not to the Unix community as a whole, or arguably even the Linux community as a whole.

There is nothing, except a huge amount of cost and effort, stopping Red Hat from slowly moving away from the GPL. That is what BSD did with their AT&T-encumbered versions.

I don't doubt that Red Hat contribute a large amount of code and are on balance a major benefit to the Linux community. However, someone taking the benefit of another's work for commercial gain when that wasn't expected is the same thing whether it's Red Hat taking other Linux contributions and considerably enhancing it, or another party taking Red Hat's work and making few changes.

Just because you're working hard on it doesn't mean you're in the right.

Now, I do think Red Hat have a point about funding. Too many open source projects are inadequately funded, and if this continues they disappear.

However, Red Hat and many others have no high ground here, as can be seen with historic issues such as OpenSSL's Heartbleed. They're all relying on poorly funded and sometimes inadequately tested projects, and hoping they're sufficient to build their products upon.

Gen Z and Millennials don't know what their colleagues are talking about half the time


Re: You are not out of pocket!

Out of pocket means that you have spent your own money, not yet been reimbursed, and are therefore 'left out of pocket'. It may or may not be refunded later.

Users accuse Intuit of 'heavy-handed' support changes on QuickBooks for Desktop


Re: More Information Needed........

As AC notes, relying on WINE for business software is folly. WINE should only be used for games and, in some cases, for transitioning to native (non-Windows) software. If it works for your use case you should be *very* careful about upgrading versions.

There's been a lot of work on WINE but it is still very game focused (because that's an area non-Windows PC platforms are not good at) and more tied to specific hardware configurations than you would want.

If the program you want to run is mass market and doesn't use obscure APIs you may well be lucky. If it uses enterprise APIs or targets more vertical markets (and accountancy packages definitely fall into that category) there may be whole classes of APIs missing.

The WINE documentation for Arch Linux (and Arch's documentation in general) is pretty decent, and worth reading even if you're not running Arch, or are trying WINE on a non Linux platform.

Take WINE's AppDB compatibility list with a large dollop of salt, too. Testers rarely do an A/B comparison, so the software may 'work' on WINE but be much smoother on Windows.
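If you do go down the WINE route anyway, one habit that reduces the version-upgrade risk mentioned above is keeping each application in its own prefix and snapshotting it before any change. A minimal sketch - the prefix path and installer name are hypothetical examples:

```shell
# Hypothetical per-application prefix; 32-bit, since plenty of
# older vertical-market software still ships 32-bit installers.
export WINEPREFIX="$HOME/.wine-accounts"
export WINEARCH=win32
wineboot --init                      # create and initialise the prefix

# Snapshot the prefix before upgrading WINE or the application,
# so a broken upgrade can be rolled back.
tar czf wine-accounts-backup.tar.gz -C "$HOME" .wine-accounts

wine setup.exe                       # install into this prefix only
```

Separate prefixes also mean one application's registry tweaks and winetricks overrides can't break another's.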

Microsoft Windows edges closer to SMB security signing fully required by default


Re: Love the contractions on here.

It usually is announced, just not shouted from the rooftops.

By definition improving security will break things if there are devices that only support down level insecure protocols.

As to 'never any support for actually using them properly' it's called 'reconfigure/firmware upgrade your old device' (not Microsoft's job), or more probably 'buy something newer'. Blame your manufacturer, not Microsoft.

Just as TLS1.0 has largely gone away, at some point you need to move on, and as someone mentioned about low powered ARM boxes (and certainly with old TLS boxes too) the only recourse is a hardware upgrade.

The issue is that people 'want' security, but they don't want to pay anything, or expend any effort, to get it.

Leaked Kyndryl files show 55 was average age of laid-off US workers


Re: Unfortunately...

I can understand your viewpoint, but you're missing a few important things

1) You're severely undercharging. 300 quid a day barely works out as good permie rates (once you factor in all taxes and employee benefits), never mind contracting. You're either not providing enough money to grow/protect your business, or you're severely underpaying your employees. I was being charged out at 650 quid a day not far off thirty years ago and that wasn't excessive. To quote my ex boss: 'get big, get niche, or get out'. As a business, if you can't charge heavily (but not insultingly) in a niche, you need to change. The businesses you work for are unlikely to be nice, so don't undersell yourself.

2) The talk about younger, keen, 'work all the hours' employees vs older ones is a false equivalence. Yes, been there, done that, realised it was abuse and wised up. I'll still do overtime when absolutely necessary, but not when it's an excuse not to resource properly or, more importantly, not to architect a system properly.

Systems going down outside business hours shouldn't be a thing. If it reoccurs your architecture is broken or insufficiently resourced.

A lot of getting older is about wising up. If you're being treated well, offered training, and the things you are working on are well resourced all is good - drop any of those and people will either walk or reduce their effort.

3) People have *always* wanted a solid, fast response. This hasn't changed and is one way a small agile independent firm can win. Larger firms could manage this as well, but the issue is as per 2) they often do not resource teams and technology properly, and they don't train properly.

Leading on from this, effectiveness as an employee or willingness to learn is not necessarily age related. Yes, when you're young there's generally more enthusiasm and energy before the rest of life gets in the way, but staff that are obsessed with moving on or always choosing the next trendy technology (hello AI, hello new coding techniques) instead of *addressing the actual problem* is age agnostic. I've had to pick up the pieces from both inexperienced younger staff (which is fine), and older staff who really should know better but unfortunately managed to create things in their own little bubble without consideration for others.

Windows XP activation algorithm cracked, keygen now works on Linux


There are again a limited number of games and drivers that work better in 98 than 2000 or later, but overall an NT based OS will be vastly more stable and easy to keep running.

Asahi Linux developer warns the one true way is Wayland


Re: Nope

RDP has been around since the late '90s, so I'm not sure I'd hold up X11 as the saviour here.

Whilst X11 works over TCP it has some noticeable shortcomings, the most obvious one being that if the network connection drops the application dies. The last time I looked (which admittedly was quite a long time ago) all the solutions to this were proprietary and a tad flaky.


Re: Nope

It's been some time since 'commands' were sent - i.e. drawing lines, placing fonts etc with complex drivers in the days of yore. Now it's more efficient to send pixel data.

You're also talking at a level below application development - no-one has talked raw X protocol for years, that's down to the underlying libraries.


It's worth reading the Mastodon thread in full

It basically boils down to 'Wayland is pretty much there, but you're going to have to do some work and may have to change your drivers'. It's not yet mature but without people attempting to switch it probably will never be there.

The problem is that if you have an edge case or complex configuration Wayland will not be a solution out of the box, and some of the commentators have a hard time letting real-life constraints temper ideological purity. For instance, the idea that the resolution should never change, which doesn't sit well with games and full-screen video playback.

Going to look at moving this FreeBSD box to Wayland - AMD and Intel should support it, and Nvidia support is now in testing for more recent cards (most cards younger than about 6-7 years old).

I do like X, but remoting can be a pain, and if a non-mainstream WM is used (I prefer cwm) it really does not play well with poorly written pieces of software that assume the existence of a full desktop environment, complete with dbus and APIs to add desktop icons.

There is, as ever, a 'Unix is not just Linux' problem. Wayland isn't in bad shape on FreeBSD, is there to some extent in NetBSD, and, due to lack of resources, is not a priority for OpenBSD at the moment.
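For anyone else tempted to try the move, a rough sketch of a minimal Wayland setup on recent FreeBSD - the package and service names are what I'd expect from the ports tree, so check the Handbook before trusting them:

```shell
# seatd provides the seat/session management Wayland compositors
# need; sway here is just an example compositor.
pkg install wayland seatd sway
sysrc seatd_enable=YES
service seatd start

# Then, as an unprivileged user in the video group, set up a
# private runtime directory and start the compositor.
export XDG_RUNTIME_DIR=/tmp/runtime-$(id -u)
mkdir -p "$XDG_RUNTIME_DIR" && chmod 700 "$XDG_RUNTIME_DIR"
sway
```

The graphics driver (drm-kmod for AMD/Intel) still has to be loaded first, same as for X.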

When it comes to Linux distros, one person's molehill is another's mountain


The question is how much you want to compromise

My most recent Linux effort was running inside a FreeBSD VM with hardware passthrough. First off I tried Devuan, as I'm not a fan of systemd. However, although I'm not inexperienced in Unix, Devuan is definitely for the more familiar Linux user - all the 'documentation' amounts to asking on IRC channels.

I've tried that before on multiple platforms, it's not a solution!

The solution was Manjaro, as it's specifically billed as working well with both open source and proprietary drivers. I did have to run tasks in a specific order to get it to do what I wanted, but at least documentation and searching for information are easier. The downside was that I had to commit to systemd, but it's only running in a VM for testing!

I've also tried and enjoyed Salix, but that's also awkward due to its use of LILO, which doesn't play well running with Xen or anything that requires a customised initrd. There are workarounds for that.

Personally I'm trying to move to FreeBSD to de-emphasize Windows. For a day to day driver FreeBSD is fine for web browsing, Libre Office, vscode etc. The sticking points are games, VR, and a limited selection of Windows apps.

For an average user it probably is easier just to use Windows. It's been compatible and stable for years, but of course they like trying to extract as much data as possible and limit hardware compatibility. Linux gets up my nose as it's some way from a classic Unix in operation and often features software that is designed for Linux with no consideration for other Unixes.

Having said that, whilst some things about FreeBSD are fantastic (ZFS largely 'just works'), both Wine and virtualisation capabilities are severely behind Linux, and FreeBSD documentation is lacking compared to Arch Linux for instance. It doesn't have the laser focus and hassle free manageability of OpenBSD either.