* Posts by BinkyTheMagicPaperclip

1825 publicly visible posts • joined 11 May 2012

GhostBSD to ditch Xorg for XLibre as Red Hat's Wayland crusade leaves X11 fans out in the cold

BinkyTheMagicPaperclip Silver badge

Re: I've been using Wayland on FreeBSD for over a year, but some things are still an issue

Fair question, I have used xrandr in the past but I tend to default to using xorg.conf to configure anything from the start.

Yes, it works, thank you!

xrandr --output HDMI-1 --mode 1600x1200 --left-of DP-2

xrandr --output DP-2 --primary --mode 2560x1440

If I don't use xrandr or an xorg.conf it defaults to one mirrored screen at 1152x864 resolution, which seems very odd indeed. Wayland here works 'fine' in that it correctly determines the maximum resolution and refresh rate, and that there are two monitors, but it does sometimes get the positioning wrong and needs to be corrected.

BinkyTheMagicPaperclip Silver badge

Re: I'm reading a disaster

Go on then, I could do with a laugh whilst you're still on crack.

Off you go, find some 'proprietary programs without source code' in an OpenBSD install. Note that firmware for graphics cards, wireless adapters, or CPU microcode updates do not count - it's not running on the CPU itself.

Then if you're certain you're correct, e-mail a discussion list and tell Theo OpenBSD has proprietary software, whilst I go and fetch my popcorn.

BinkyTheMagicPaperclip Silver badge

I've been using Wayland on FreeBSD for over a year, but some things are still an issue

This week I decided to try getting Steam working on my fanless system (a Dell/Wyse Optiplex 3000 thin client), and after some fiddling, yes, I can get turn-of-the-century games working OK on it[2]

One problem though : it's broken on Wayland, complete no go.

Killed Wayland, started Xorg [1], and other than [1] it was a complete pleasure. Able to use cwm again, which has *sane defaults*, over the labwc I've been using, which.. doesn't. Yes, I'm pretty certain I can get labwc to do what I want, but it's a pain, and I don't really want to put 're-implement cwm as a Wayland compositor' on my project list. Wayland is good enough, but it doesn't really make me happy.

I've been trying Wayland because it's likely to be The Future, and I'd rather give new things a go than become an old fart, but I have to say there is still room for improvement. There are too many programs that either don't work with Wayland, or claim to, but then you check and there's an XWayland session in the background; it requires determined effort and fiddly config to run a pure system.

[1] This admittedly was a huge pain. Maybe for Linux users multiple monitors are fine, but I wasted a couple of hours fiddling with xorg.conf, which needed multiple Option "Monitor-<connector>" entries under Device, Monitor sections with LeftOf and RightOf, a ModeLine, and crucially an Option "PreferredMode" (which I missed several times). This is one thing I didn't need to mess around with under Wayland.
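For anyone facing the same fight, the shape of the config described above is roughly this (connector names, identifiers, and modes here are illustrative, not lifted from my actual file):

```
Section "Device"
    Identifier "Card0"
    Option "Monitor-DP-2"   "Primary"
    Option "Monitor-HDMI-1" "Secondary"
EndSection

Section "Monitor"
    Identifier "Primary"
    Option "PreferredMode" "2560x1440"
EndSection

Section "Monitor"
    Identifier "Secondary"
    # A Modeline can be generated with `cvt 1600 1200` if the driver lacks one
    Option "RightOf" "Primary"
    Option "PreferredMode" "1600x1200"
EndSection
```

Without the PreferredMode lines the server is free to pick its own idea of a sensible mode, which is exactly the trap I fell into.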

[2] I say 'OK': it required various fiddling, you need to use wine/proton otherwise saving the game crashes it, and Wine-Proton 9.0 needed to be installed inside Steam itself.

The big FOSS vendors don't eat their own dogfood – they pay for proprietary groupware

BinkyTheMagicPaperclip Silver badge

The cheapest decent option for VR is the Meta Quest, you'll be paying a significant premium for anything else.

As far as I'm aware the main Linux VR options are HTC (which is generally regarded as inferior to other options and expensive), and the Index (which is no longer sold). Plus from listening to people's experience of VR and Linux what I hear is that it 'works' but Windows is generally better, which I wouldn't describe as 'fine'.

If you have the Vision, that costs a grand. Whilst I'm not a particular fan of Meta, and bought a PSVR2 and PC adapter specifically to get a headset not tied to them, a Vision is almost *three times* the cost of a Meta Quest 3S.

I am looking forward to seeing the Frame as a Linux first headset, but we're not quite there yet.

BinkyTheMagicPaperclip Silver badge

There is zero need *for you*. I'm glad you can run things entirely on FOSS.

For most other people, things are improving, but sometimes your interests dictate Windows is the way to go. e.g. VR is vaguely possible on Linux, but most of the time you'll be using Windows and/or paying a *significant* hardware premium. This may change when the Valve Frame comes out in 1H26 - be nice to see that happen.

I already use FreeBSD for most of my non work productivity, and am moving my main desktop to it, but it will still require both Linux and Windows VMs to run all the software I still wish to run. There's a limit to how masochistic I wish to be.

BinkyTheMagicPaperclip Silver badge

There should be a degree of pragmatism here

Ultimately the open source company needs to make money, there's limited resources, so fight the battles you can win.

The open source companies should be helping each other. That means paying for software, trialling it internally until it's usable, and then moving on to using it with external customers. With a clear conscience they should be able to say 'Yes, we're using this proprietary product now because the open source options are not capable enough, but we're contributing/funding projects and will move when they're mature', and actually publish progress reports each year.

Teams isn't fantastic, but Jitsi is utterly awful. Tried it a while back, it's so embarrassingly poor compared to Zoom or Teams, I'd be mortified if I had to suggest it to anyone.

However, that's not an excuse not to try. If an open source company isn't at least trying to use an open source office suite, e-mail, and other productivity it's a failure of a company. It's not serving its users, and it's not serving the wider open source/Unix community (Unix is not just Linux). This does, as doublelayer suggests above, have to be coupled with contributing or paying the creator of the software so it actually improves.

I'm going to agree with Throatwarbler too, though. Groupware is not an easy option. There are lots of components that are possible via open source, but bringing them together as a usable whole, in a sensibly self- or third-party-hostable service, is a different matter. At that point you're competing with commercial alternatives on much the same terms, except with the disadvantage that if the groupware is polished enough a third party can out-compete you in business by simply taking your technology - and *boy* does a subset of open source companies and fans dislike the implications of that.

BinkyTheMagicPaperclip Silver badge

Re: Too much ideology makes Liam unproductive

I presume all the downvotes are about opinions on Linux, but you're absolutely right that Skype and Windows Phone were solid products. With Windows Phone, despite the OS being solid, Microsoft refused to hold their nerve and keep throwing money at it the way they managed for XBox. They needed to fund all the apps and develop the ecosystem.

Even now, after many updates, Teams is not as feature complete as Skype for Business. SfB integrated with Outlook, meaning there was *one place* to search all your messages. It obeyed Windows standards properly. Its video performance was good enough.

Then Teams arrived. Not properly coded as a Windows program, it took months if not years before it started to follow the desktop/window theming. You can still only search within Teams itself. If a document is opened inside Teams it hides the entire remainder of the window. Moronic design, aimed at people who can only do one task at a time, done to chase after Zoom.

Not sure if dropping Edge for Edge Chromium was a good idea - probably not. Whereas it's a huge money sink to develop your own rendering engine, Microsoft still has *three* engines to maintain: Chromium (OK, that's largely Google), Edge, and Internet Explorer (which is still embedded and necessary in Enterprise Mode for some sites[1]). In the long run it'll save money - provided Google plays nice, but should Microsoft really bet on that?

[1] Although thank goodness for the area I work in we finally, *finally* got rid of our IE websites a couple of years ago. For the company as a whole it's almost entirely gone, but still present for a minority of customers..

Reviving a CIDCO MailStation – the last Z80 computer

BinkyTheMagicPaperclip Silver badge

9 inches is perhaps a little small for a GUI, although it had the advantage that the early non XL/Lisa Macs were fairly luggable.

The Kaypro series had a 9" monitor, and it's perfectly usable for 80x24 column text. Even today my model 10 is sharp and clear.

It's also a little unfair to compare a 'proper business computer' to a home computer, as the early home computers tended to use RF or composite at best. Good enough for games, not so great for spreadsheets.

Containers, cloud, blockchain, AI – it's all the same old BS, says veteran Red Hatter

BinkyTheMagicPaperclip Silver badge

Re: Remote desktop

There are other options - there's all the HTC Vive stuff, but that doesn't have a great price performance. There's the PSVR2 with PC adapter (it works well, but make sure you're using a Bluetooth adapter on the supported list, I'd recommend an Asus one). There's also the upcoming Valve Frame.

At the Microsoft end, WMR has been discontinued, but there is a WMR driver for Steam so it *shouldn't* need any Microsoft account at all.

If by 'tied hard' you mean 'requires Windows', then it is possible to get some of it working on Linux, but you will be making life harder for yourself.

Perhaps best to wait out the Valve Frame and see how good it is.

BinkyTheMagicPaperclip Silver badge

You might have a point if it was about learning, but it pretty much never is. It's almost always about making money by shoehorning technology into a situation where it's either not suited, or isn't the ideal solution. 'LLMs' where an algorithm would be easier. NFTs/blockchain where in pretty much every situation a database would be better. Not to mention trying to hand wave away the social and environmental impact.

The alternative is the joy of CV padding, where an over complex solution is implemented because it gets the recipient kudos, something to stick on their CV, and a possible pay rise, even though it's not a good long term solution, or particularly efficient. Then the implementers move on, and some other schmuck has to fix their 'solution' properly.

Once the hype cycle ends, and the technology is used for the areas it's designed for, things are generally OK. Although then you'll get idiots saying a technology is useless because it hasn't taken over the world, ignoring the reality it works just fine in its specific niche.

BinkyTheMagicPaperclip Silver badge

Re: A counter-example?

I think it's a problem when a service is delivered as a container and that is the only way to deploy it. It's an issue when the design is so fragile that containerisation is necessary.

On the other hand, I've been fiddling around with jails under FreeBSD and they provide several advantages. It could be argued that jails would be unnecessary if services were better designed - in particular, if they easily supported multiple instances and per-instance config files. However that's not the situation, so I have a number of different jails each running unbound and/or other services, each with a separate IP address and all running on a standard port.

It makes it much easier to manage, and the jail and all the services and configuration inside it can be easily brought up and down, so yes, I can see the advantages of some containerisation.
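As a rough illustration of that pattern (all names, paths, and addresses here are made up, not my actual setup), per-service jails can be declared in /etc/jail.conf along these lines:

```
# /etc/jail.conf - one jail per service, each with its own IP address
exec.start = "/bin/sh /etc/rc";
exec.stop  = "/bin/sh /etc/rc.shutdown";
mount.devfs;

dns1 {
    host.hostname = "dns1.internal";
    path = "/usr/local/jails/dns1";
    ip4.addr = "em0|192.168.0.53";   # unbound listens on the standard port 53 here
}

dns2 {
    host.hostname = "dns2.internal";
    path = "/usr/local/jails/dns2";
    ip4.addr = "em0|192.168.0.54";
}
```

With jail_enable="YES" in rc.conf, something like `service jail start dns1` then brings the jail and everything inside it up or down as a unit.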

If I *had* to run a service that uses 15,000 Python or other web components, managed automagically by a dependency manager, it also seems safer to stick it in a container. Yes, obviously there are ways of sandboxing Python etc. too, but that's another level of hassle. Of course my preference is not to run a service where you can't easily monitor and check all the dependencies.

BinkyTheMagicPaperclip Silver badge

Remote desktop

This hit in the early 00s: suddenly everything had to be remote. It was a relatively short hype cycle before things settled down, it started to be used properly, and the hype died.

The funniest part was some architectures where an application sent a huge print file up a not particularly fast broadband connection to the print server, which then sent the data back down the line to the local printer..

Second Life/Metaverse. More a consumer thing, but there was the first wave when everyone would supposedly spend all their time in a virtual environment, and it turns out the public don't actually want a permanently online avatar based cyberpunk existence.

Then VR hits, starts to get some actual traction beyond earlier experiments, and Meta think they can do Second Life but in VR, and that everyone will want to interact that way. Turns out again, they don't, and people fundamentally don't want to strap equipment to their head all the time.

MPs ask who's responsible when AI crashes the UK finance system

BinkyTheMagicPaperclip Silver badge

Re: There must be clarity on who is responsible:

It's not developers' jobs to do that. Knowing whether it is ethically sound is debatable, unless you're literally coding a baby eating machine.

Financially sound? You have *got* to be kidding. If an employee should resign if they think the company is making financially unsound decisions, most people would have to quit. Especially since you could quite convincingly argue that the way to fix a company is to spend more money on staff, etc. Where do you draw the line?

Leadership supposedly sets the company's direction : if it goes wrong, it is their responsibility.

Majority of CEOs report zero payoff from AI splurge

BinkyTheMagicPaperclip Silver badge

Isolated projects *should* show value

If the technology doesn't depend on economies of scale (and AI does not), and you can't make money or reduce effort on a cherry picked, well defined task, then the technology isn't worth it.

I've not been using LLMs, but based on friends and articles I'm fairly certain they're not completely useless. Nevertheless there's a lot of evidence that the disadvantages are sufficient that, overall, it isn't enough of an improvement to justify the cost, energy, and environmental impact.

We're not in the 'early stages' a few years in, the tech curve is heading for 'the wheels fall off the bus' stage, on to the rapid descent before 'it's used for what it's actually useful for, and what will make money'. The wheels haven't fallen off yet, but the bearings are making an unpleasant rattle. Sell the bus now.

Neither is it working in the consumer space, if they want to raise prices and increase adverts. Please add adverts and stick prices up, it'll starkly illustrate that consumers will play around when it's free, but once they have to pay it'll be dropped like a hot potato.

UK regulators swarm X after Grok generated nudes from photos

BinkyTheMagicPaperclip Silver badge

Re: Storm in a teacup

So have a whole lot of false positives then, it's better than the alternative.

UK to spend £23M on AI to tell benefit claimants where to go

BinkyTheMagicPaperclip Silver badge

Not a chance this will be any good

I have a friend who handles DWP cases - it is complex and they are paid barely above minimum wage for something that in the private sector would lead to them earning time and a half to double time if there was any justice[1].

If an experienced, intelligent person finds it difficult to handle all the minutia of benefits what chance do you think an hallucination machine has?

Plus, quite obviously, the government will not want to advertise the maximum amount of benefit anyone can possibly receive, and how they could work around the rules. Which is what people actually want to know..

The architect of Universal Credit was interviewed on Radio 4, and when asked why it didn't work properly bluntly said that the scheme was designed to need more money than the government was putting into it.

It'll still need someone to actually manage the claim, and the claim processors will still be limited by sub standard internal systems. This'll probably last until the LLM advice has serious consequences, and it gets shoved up to an MP complaint.

[1] This is my tip for anyone that needs a decent consultant. Put out an advert implicitly targeting DWP staff. Pay doesn't have to be fantastic, but you'll need to provide them a stellar pension (often a major reason people stay in the civil service). Whittle down the personable candidates, and you'll have a detail oriented employee that gets the job done. No, not all staff are great, but there are some talented people out there who'd shift given the right benefit.

Humongous 52-inch Dell monitor will make you feel like king of the internet with four screens in one

BinkyTheMagicPaperclip Silver badge

Lacking in connectivity

For the U5226KW the number of ports looks OK, but their functionality is lacking. I can't be bothered looking up the HDMI specs but I'll believe it does what they say.

Displayport 1.4 though? The only way you're running the panel at full resolution and 60fps, never mind 120fps, is with stream compression - there simply isn't enough bandwidth otherwise.

Why is a monitor this new not supporting DisplayPort 2.0? Even the Thunderbolt port's alt mode is limited to DisplayPort 1.4.
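The arithmetic is simple enough to sketch. These figures are my assumptions, not Dell's spec sheet: a 5120x2880 panel (four 1440p quadrants), 10 bits per channel for HDR, and ~5% blanking overhead; DP 1.4's numbers (four HBR3 lanes, 8b/10b coding) are from the standard.

```python
# Can DisplayPort 1.4 drive this panel uncompressed? (panel specs assumed)
LANES = 4
LANE_RATE_GBPS = 8.1        # HBR3 raw bit rate per lane
ENCODING_EFFICIENCY = 0.8   # 8b/10b line coding: 20% overhead
payload_gbps = LANES * LANE_RATE_GBPS * ENCODING_EFFICIENCY  # ~25.92 Gbit/s

def required_gbps(h, v, refresh_hz, bits_per_pixel, blanking=1.05):
    """Uncompressed video data rate in Gbit/s, with a rough blanking factor."""
    return h * v * refresh_hz * bits_per_pixel * blanking / 1e9

need_60 = required_gbps(5120, 2880, 60, 30)    # ~27.9 Gbit/s
need_120 = required_gbps(5120, 2880, 120, 30)  # ~55.7 Gbit/s

print(f"DP 1.4 payload: {payload_gbps:.2f} Gbit/s")
print(f"Needed @60 Hz:  {need_60:.1f} Gbit/s")
print(f"Needed @120 Hz: {need_120:.1f} Gbit/s")
# Both exceed the DP 1.4 payload, hence Display Stream Compression.
```

Under these assumptions even 60Hz at 10-bit doesn't fit uncompressed, and 120Hz misses by more than a factor of two.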

I think I'll stick to my main monitor being 1440p (therefore not needing scaling), and having a mixture of cheap second hand off ebay portrait and landscape monitors next to it.

I've been down the route of trying to rely on monitor firmware and all its connections, and now have DisplayPort and HDMI switches (plus an MST hub) to achieve what I want instead - it's a much more stable setup.

If I needed to upgrade the attractions would be OLED and HDR, not higher resolution.

Claude devs complain about surprise usage limits, Anthropic blames expiring bonus

BinkyTheMagicPaperclip Silver badge

There's *a* market, but as to a self sustaining one.. Not that I'm particularly into LLMs, but many people I know dabble with it on the free product, and as soon as the credits run out just switch to the free credits on another LLM.

If you must use an LLM, host it locally; spend the money on one of the fancy 'AI' laptops some vendors are desperate to sell. For once AMD have been innovative here, with the Strix Halo laptops featuring unified memory, so you can use up to 128GB of memory to run LLMs without spending vastly more on Nvidia hardware. It's still not cheap, but it's justifiable for keen amateurs with a couple of grand of spare cash.

Who could have thought that a poorly regulated market, driven by unrealistic valuations, where the end users don't get to control the service would try to stiff their users? For an industry where some vendors are being sued in a class action for illegally training their models on copyrighted data this is such a major surprise [1]

[1] My apologies to anyone with their sarcasm meter turned on. Fuses blowing are not my problem.

IPv6 just turned 30 and still hasn’t taken over the world, but don't call it a failure

BinkyTheMagicPaperclip Silver badge

Re: Backwards compatibility

Yeah, but it wouldn't be, would it? The result would be the same either way : the minimum possible effort will be expended to get things to work without unmanageable support costs.

The 'surplus' effort will typically be spent making someone money, it won't be spent making it better.

If service providers cared there would be easy-to-consult details of the ports and protocols used by their services, so that end users don't waste time and effort getting the service working. Unfortunately such detail is the exception, not the rule.

BinkyTheMagicPaperclip Silver badge

Re: IPv4 NAT and Privacy

That's a fairly poor example to be honest - Windows is one platform that is quite well documented on the server side. Unless, of course, you meant 'any possible program that can run under Windows' which would be a bit silly.

Windows networking protocols aren't the problem. Programs that rely on RPC ('open every UDP port from 20000 through 65535') or insist on uPnP being enabled are the issue.

My personal network is locked down quite hard. No queries to external DNS or NTP other than via the internal servers. Only the ports required to provide services are open. It's broken a number of things, including streaming services.
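For the curious, the DNS/NTP part of that sort of lockdown is only a few lines of pf.conf on OpenBSD - this is a sketch, and the addresses and use of the egress interface group are illustrative, not my actual ruleset:

```
# Only the internal resolvers/time servers may speak DNS or NTP outwards;
# everything else gets an immediate refusal. Addresses are examples.
dns_ntp_servers = "{ 192.168.0.2, 192.168.0.3 }"

pass out quick on egress proto { tcp, udp } from $dns_ntp_servers \
	to any port { domain, ntp }
block return out quick on egress proto { tcp, udp } \
	to any port { domain, ntp }
```

The `quick` keyword makes the first matching rule final, so the pass for the internal servers must come before the catch-all block.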

Do these well funded services, used by hundreds of millions of people, have a reference as to what ports they use? Do they fuck. Instead they just say 'failed to connect' and provide zero detail on what is required. You also wouldn't believe (or probably you would) the sheer number of servers Google tries to connect to for gmail IMAP, it's literally dozens.

BinkyTheMagicPaperclip Silver badge

Re: Substituting multicast for broadcast…

Whilst you're not wrong, and I would hesitate to call many consumer routers a 'proper firewall', pretty much even the cheapest, most awful ADSL router out there has *some* firewall capabilities in addition to NAT.

I'd be amazed if anything created in the current century lacks some firewall ability.

BinkyTheMagicPaperclip Silver badge

Re: "so I need to work out how to map locally allocated addresses to a bank of external addresses"

Cheers, I'll add it to the list of things to look at.

Allocating IPV6 addresses internally is not difficult. Allocating based on a fixed range provided by an ISP via PPPoE or other address allocation schemes is again not too tricky.

Having, on the other hand, your local address stay the same and seamlessly route to an ISP address that may change due to network failover seems a different thing entirely.

BinkyTheMagicPaperclip Silver badge

No-one wants to use it because it's a huge pain

IPV4 is not only supported by absolutely everything, but is extremely well documented, with proxies and programs to curb its worst excesses. True, it has disadvantages, but the workarounds are generally good enough.

On the other hand IPV6's problems are legion :

Not enough services support it. This is chicken and egg, but whereas e.g. VoIP generally supports IPV6 (and should, because running it over IPV4 is nasty), Playstation consoles only really support IPV4 yet require huge numbers of ports to be open[1]

IPV6 firewalling tends to be less than wonderful

IPV6 address assignment and translation is complex. Not all providers and software support all the methods, so you need to know several and how to configure them.

Complexity would be manageable if the documentation were good - it isn't. Documentation is spread all over the place, and isn't particularly complete, even for operating systems such as OpenBSD which have had IPV6 support for a very long time. The IPV4 documentation, on the other hand, is excellent.

I've been wanting to sort a cellular failover for my network. For IPV4 it's simplicity itself, yes it relies on NAT, but basically deliver both addresses via PPPoE - one to fibre, one to a cellular router. Stick them in an OpenBSD trunk interface. Firewall to the trunk, NATing to the address dynamically.
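For what it's worth, the failover half of that recipe really is short. A sketch - interface names and credentials are obviously made up - using trunk(4) in failover mode with a PPPoE session on top:

```
# /etc/hostname.trunk0 - use em1 only while em0 has no link
trunkproto failover trunkport em0 trunkport em1
up

# /etc/hostname.pppoe0 - PPPoE session over the trunk
inet 0.0.0.0 255.255.255.255 NONE pppoedev trunk0 authproto chap \
	authname 'user@isp.example' authkey 'secret' up
dest 0.0.0.1
!/sbin/route add default -ifp pppoe0 0.0.0.1
```

The pf NAT rule then just translates to whatever address pppoe0 currently holds, which is why the IPV4 version is so painless compared to its IPV6 equivalent.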

IPV6? Well, first there's no NAT, so I need to work out how to map locally allocated addresses to a bank of external addresses, but the cellular router won't deliver a range of addresses over IPV6 using PPPoE, so I need to use another method. There's no easy guides on the Internet to do this, and this is for fun not for work, so I very quickly reach the 'lose the will to live' point.

It's on the list of things to sort in 2026, but it really should be easier than this. If I'm wrong, and there's a great website or book that really explains it properly then please let me know, but I have looked and it really is not obvious.

[1] Apparently the PS5 supports IPV6 for some functions, and support was added but not really documented in later PS4 firmware. However, it only supports dual stack, not pure IPV6, and it's not very clear whether this covers the PlayStation store etc, which is really all I want..

Microsoft CEO Satya Nadella becomes AI influencer, asks us all to move beyond slop

BinkyTheMagicPaperclip Silver badge

Well, he's right about one thing

"The choices we make about where we apply our scarce energy, compute, and talent resources will matter"

Absolutely, if you're being serious about the environment the answer is 'not in AI'. If you're being serious about not polluting local communities the answer is 'not in AI'.

If you actually want people to learn, rather than regurgitate without understanding context, the answer is 'not in AI'.

The remainder appears to be a huge number of buzzwords that boil down to 'please buy our AI!'. No thanks. There are some uses for it, but it's wildly overblown, and for most people an improved search engine and spending the same amount of money on community and tooling would generate better results.

If they stopped spending all their time on AI, didn't try to extract every bit of personal data they can, and made local accounts, local hosting, and control of your data paramount, then who knows, I might be less determined to switch to Unix.

Memory is running out, and so are excuses for software bloat

BinkyTheMagicPaperclip Silver badge

Re: OS/2

There's certainly not a lot of security in OS/2, and much though I liked it, even if IBM hadn't made so many mistakes it would have died eventually unless its architecture changed to be more like NT or Unix including proper multiuser.

I would be truly amazed if installing on 4MB and upgrading to 8MB made any substantial difference to Warp 3. The only difference I'm aware of is disk cache allocation: the FAT DISKCACHE can be set to 'D' for dynamic sizing, and the HPFS cache is limited to 2MB anyway. IBM did a fair bit of optimisation, including coalescing multiple DLLs into one, but like you I used OS/2 from 2.1 onwards with 8MB RAM, because it was very clear that was the usable minimum.
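From memory (so treat the exact values as illustrative rather than gospel), the relevant CONFIG.SYS lines looked something like:

```
REM FAT cache: 'D' sizes it dynamically; LW enables lazy writes
DISKCACHE=D,LW

REM HPFS cache size in KB - capped at 2048 regardless of installed RAM
IFS=C:\OS2\HPFS.IFS /CACHE:2048 /CRECL:4
```

Which is why more RAM mostly benefited applications rather than the filesystem cache.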

As to disk space, I think we need to remember that sometimes BIOS limitations affected the size of the OS/2 (or NT 3.51) boot partition[1] to be contained within the first 1024 cylinders, or around 504MB. That was *still* perfectly sufficient to run the entire OS with networking. True, you'd want to install large applications on another drive, but it wasn't a huge issue.

[1] OS/2's boot manager would not let you install OS/2 if it broke BIOS limits. On the other hand NT 3.51 would let you do whatever you wanted - and then just fail to boot.

The most durable tech is boring, old, and everywhere

BinkyTheMagicPaperclip Silver badge

Thanks for that. I've never really touched mainframes but am old enough to know about EBCDIC and understand about translating character sets. I hope they didn't bang their heads against the wall too long, because it wasn't even slightly unexpected to me..

Ten mistakes marred firewall upgrade at Australian telco, contributing to two deaths

BinkyTheMagicPaperclip Silver badge

That's not what the report says..

This is trying to throw network engineers under the bus. What the report actually says, multiple times if the detail is read, is that it's ultimately a management failure.

Yes, the network engineers made mistakes. That's not the important part - humans *will* make mistakes. Why didn't they go to the meetings? Is it, perhaps, because they were overloaded with work by management?

Once the network engineers had made mistakes the support staff had no knowledge of the outage, which is a clear and unambiguous management failure. Issues were not escalated properly - again, a clear management failure.

I can't be bothered looking at this report in too much detail because it also seems flawed. It notes there have been a number of changes at the company - tada, management failure again!

The 'second mistake', re locking out the firewall, lacks detail. Is it correct or not? No idea - no detail. The 'third mistake' is not a mistake, as the report itself admits; it's an implication of the instructions issued to Nokia. When the result is a consequence of exactly what was requested, that is not an error, even when the results are catastrophic.

The 'fourth mistake' is again not a mistake. Just because it's a firewall change does not magically mean the procedure is the same - maybe it is, maybe it isn't - who knows, not enough detail.

Then 'The reason why the work was split into two parts and spread across two nights is not clear.'. This is an independent report, which is supposed to contain the detail that makes the reason clear.

If a change is an important one - and as this was a life and death situation it certainly was - it needs to operate under the assumption that humans will make mistakes at multiple levels, and there should be a defence in depth approach.

I don't operate in this area, and my workplace is certainly not perfect, but when we have a change request it is telegraphed well in advance. The support desk receives the dates and the release notes. There is post-implementation support. If despite the testing it causes problems, it gets escalated there, and rollbacks have occurred as a result.

Smartphones face a memory cost crunch – and buyers aren't in the mood

BinkyTheMagicPaperclip Silver badge

Maybe this will start to encourage the smart phone creators to stop taking the mickey

I got a Pixel 9 Pro for a 'mere' 800 quid instead of well over a grand, and the only real reason I went for that was that it supported GrapheneOS and the Clicks keyboard case[1]

No removable battery. No headphone jack. No expandable storage, which is surely the largest rip off ever. AI features I don't want (and fortunately are thrown away by GrapheneOS). Slim phone, but with a large bulge for the camera so it might as well be thicker all round, and by the time I've stuck it in a keyboard case it's a little bit of a brick anyway.

Really sick of the attempt to enforce couple-of-year refresh cycles and of manufacturers that don't provide security patches for years. I'm hoping that now I've moved to GrapheneOS, and away from boutique phones, I'll actually get a phone that lasts several years complete with security fixes.

If this squeezes Google's margins I'm not going to cry many tears.

[1] Also, at that point the price differential between a standard 9 and a Pro wasn't that large, and I was already throwing too much money away, so why not chuck away even more to get a better camera?

New Jolla phone and Sailfish 5 offer a break from iOS-Android monotony

BinkyTheMagicPaperclip Silver badge

Re: Needs wireless charging, too expensive

Maybe I've just been unlucky - with the FXTec Pro 1 it's a known issue. The port is on a separate power board, but as they have no spares that's not very useful! With the Unihertz Titan I don't think there's a systemic issue, but after a few years the port is becoming unreliable. I'd rather not run the risk[1]

I bought an Anker 10W charger to charge the Pixel. It's not particularly fast, but it's reliable and doesn't heat the phone much. The only issues are that the phone needs to be laid sideways, and the portion of the case that supports wireless charging isn't huge. You need to double check it isn't rapidly cycling between charging and not charging, as that can drain the battery, but provided it's in the correct place it's pretty convenient.

No disagreement that my preference for non-swiping is personal, but as Sailfish is so swipe-oriented it's a barrier to me using it. Also, you *should* be able to drive an Android phone via physical keyboard shortcuts, but Clicks didn't spend even a couple of seconds thinking about that when designing their case; a third-party remapper may be needed to take full advantage.

I can't comment on the GrapheneOS team, it is a pity if they don't play nicely with others. All I can say is that I've had issues with all other third party ROMs I've used, or boutique phones rebooting at inopportune times because of Android or changing cells/wireless state, and that so far the only people who appear to have got it right are the large companies.

That's not to say I particularly love Google. I resent the high price, the non swappable battery, the lack of a headphone jack, the overpriced flash storage that can't be expanded by SD card, and shoehorning AI into their default OS install. However Pixel/Clicks/GrapheneOS is still an improvement over the other keyboard phones I have tried, and my choices are minimal.

[1] This does of course assume that the phone does wireless charging properly, which the Unihertz Titan does not. It's well known for causing battery damage if you charge wirelessly as it has no limiter. Don't touch Unihertz with a bargepole, they are a bunch of GPL violating chancers who don't give a stuff once they have your money.

BinkyTheMagicPaperclip Silver badge

Oh god, was that SRAM card vendor Best-Electronics California? Absolute nightmare to attempt to use, I gave up. Even outdated websites in the late 90s were easier to get on with than them.

Now it looks like the only option that doesn't need an entire organ for purchase is via various Chinese vendors on ebay. I have *one* 512KB SRAM card that I've used to get CP/M on an NC200 or NC100 (it does work, but there's caveats with each of the three different CP/M types for the NC series), any UK based SRAM card vendor expects well north of 100 notes for a card.

BinkyTheMagicPaperclip Silver badge

Needs wireless charging, too expensive

The removable battery is nice, but if they're going for that it really needs a headphone port. The real problem, however, is that it doesn't compete well against GrapheneOS.

I've had numerous boutique or custom ROM based phones and my conclusion learned the hard way is they need a large company to support them. When I had a Blackberry Priv it had a number of issues, but Blackberry were large enough to fix them, and they genuinely cared about the end user experience[1], well at least until they dropped the security patch support two years in and made the phone a useless brick.

USB-C ports are rated for 10,000 insertions, but I've had multiple phones die due to the charging port failing. Phones *need* wireless charging to maintain longevity.

Tried Sailfish back some time between 2012-2015 on my Xperia Pro, found I absolutely hate swipe based interfaces[2], and you're always going to be scratching around for apps.

In the end I got a Pixel 9 Pro on a large discount, added a Clicks keyboard case[3], and installed GrapheneOS. It's the first third party ROM that has been hassle free. All the other ones, or boutique company ROMs have had issues.

Sailfish is not open source. It's nice the hardware is a bit more open, but I'd rather spend an equivalent amount - or much less (the only reason I bought a Pixel was for the keyboard case support), and have it supported by a large company.

[1] Blackberry actually thought about what an end user wanted to do! How they would use the keyboard, what apps they would run, that e-mail was important. This has been completely absent with the FXTec Pro 1 and the Unihertz Titan I've had since, where the end user experience is at best half arsed.

[2] Unfortunately you can't enable a non swiping interface on Android and have all apps work with virtual/physical keyboards, extremely irritating.

[3] It works but is utterly half arsed. Considering the entire purpose of the product is to be a keyboard Clicks don't give a flying fuck for things such as keyboard customisation, and when you bring this up their attitude is 'it's only designed for the US market'. Talk about myopic. Sadly it appears to be the only real viable option for a portrait Android phone with a keyboard[4]

[4] I have tried landscape and square format Android keyboard phones. It is a lost cause, I have tried for *years*. If you want Android software to work, you need a portrait phone.

BOFH: If another meeting is scheduled, someone is going to have a scheduled accident

BinkyTheMagicPaperclip Silver badge

Re: Fantastic as usual :). kzzzt.

There's partly that, but also when it's only character based it *has* to work otherwise it doesn't sell. When it's GUI based there's the choice between mouse and keyboard interaction, and it tends to favour using both as that's the default with a GUI, rather than making solely keyboard based a first class citizen.

BinkyTheMagicPaperclip Silver badge

Re: Fantastic as usual :). kzzzt.

There's definitely cycles.

As to character terminals vs a GUI or web: I don't think character terminals are inherently easier and faster. Assuming the GUI is a decent one, it's perfectly fast enough to handle everything the character terminal did and more. The problem, as you indicate, is that non-character-based systems are often poorly designed to be driven entirely by a keyboard, and any use of a mouse whatsoever fundamentally requires moving your hand (although I note there's a Kickstarter going at the moment for a Bluetooth ring called Prolo, worn on a finger to perform mouse actions or similar).

Even a web browser can be very fast to use as long as the site isn't overburdened with complexity, but far too often it is.

I note that Windows' CUA heritage is falling apart too. Windows still supports Shift+Insert, Ctrl+Insert, and Shift+Delete from CUA in addition to the horrid WordStar-oriented X, C, or V, but when introducing Ctrl+Shift+V for paste without formatting, the Windows designers forgot that they should logically *also* have included Ctrl+Shift+Insert; it does nothing in Word, for instance.

BinkyTheMagicPaperclip Silver badge

Fantastic as usual :). kzzzt.

The trick is being selective about your crusty old fartism.

Virtualisation[1], emulation, flash storage, graphical remote access that works without hassle, Unicode[2], broadband, FPGAs, VR, and cheap powerful embedded SOCs - all fantastic

Mobiles. Gosh, lots of things going on there. Such a mess too, but an improvement over what we had before. Probably.

'AI'/LLMs - probably useful, come back when the world changing hype has died down and it's truly optional in products, yeah?

Containers and fifteen thousand unaudited random dependency components written in Javascript or Python pulled down randomly from the Internet each time. Ummm, I can see it's useful for rapid prototyping, but *so many* potential integration, backup, and security issues.

Mandatory cloud based subscription services on over complicated architecture when self hosting is eminently possible : go away

[1] yeah, yeah, I know for Proper Old Farts who were running IBM VM back in the 70s or Big Iron after that it's not new, and I used V86 mode under OS/2 in the 90s for DOS boxes etc, but really it was the 00s before VT-x made it all usable.

[2] If only more products supported it. It's embarrassing when modern products don't support a well designed *thirty year old* standard

Death to one-time text codes: Passkeys are the new hotness in MFA

BinkyTheMagicPaperclip Silver badge

I'll trust one account to rule them all when companies actually fix their systems

I have an old Apple Developer account. This royally fecks up using any other Apple service; iTunes threw a fit. I crumbled and tried to buy discounted Apple TV+ for Black Friday, a few months at a fiver a month. Again it lets me log in, then refuses to do anything.

Gave up, registered under a different address.

Now, of course, I can log in to Apple TV+, and it will stream adverts prior to a programme, but no programmes at all. Does it give any indication as to what is lacking on the network other than 'it isn't working'? That would be too easy!

Companies consistently fail to achieve the minimum standard of properly integrated systems, and proper documentation.

One of the minor reasons I moved away from Google Mail was its finicky but useless protection from you daring to log on somewhere different than usual.

Classic MacOS for non-Apple PowerPC kit rediscovered

BinkyTheMagicPaperclip Silver badge

CHRP alternatives were all better than Apple kit

That depends on your viewpoint, but the alternatives both undercut Apple at the low end and innovated more at the higher end. My understanding was that 8.0 wasn't very different from 7.6, but because the clone manufacturers had iron-clad contracts permitting them access to all 7.x releases, a sudden renumbering of the next 7.x point release stopped that dead. I've got 7.x, 8.x, and 9.x running on a 4400/200 here (plus the almost entirely useless BeOS PPC). There are a few games and applications that offer more interesting/different options on earlier OS releases.

On the non Apple front I do also have a quite rare PReP 43P, designed to run AIX. It may also be possible to get OS/2 PowerPC running on it with a lot of fiddling, but it really is alpha quality software, doesn't boot from a SCSI CDROM by default, and for IDE CD support the firmware only supports *one* specific model of Mitsumi CD drive. On the whole whilst booting from CD would be a novelty on a PC in 1995, this is a workstation that's considerably more hassle than any equivalent PC.

FreeBSD 15 trims legacy fat and revamps how OS is built

BinkyTheMagicPaperclip Silver badge

Also, thanks Liam, desktop-installer is new to me

First I've heard of it. Running labwc here, as I wanted a cwm equivalent under Wayland, and that's the closest functional compositor I could find that works under FreeBSD.

As mentioned above, VSCode does exist in freshports, although the port breaks from time to time which is a tad annoying.

Can't say I'm bothered about a Discord application - just run it in a browser. Remember that if you pass -P to Firefox it's possible to select different profiles, so they can be isolated both by purpose and by network configuration (I have one browser straight onto the Internet, and another using a VPN via a SOCKS proxy. Browser themes then become actively useful, as I can set the theme to the flag of the country I'm connecting to via the VPN).
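For anyone wanting to replicate that split, the rough shape is below. This is only a sketch: the profile names are placeholders, and the SOCKS proxy itself is configured per profile inside Firefox (Settings > Network Settings > Manual proxy, SOCKS v5), not on the command line.

```shell
# One-off: create two named profiles
# (or run plain 'firefox -P' to get the profile manager UI)
firefox -CreateProfile clearnet
firefox -CreateProfile vpn

# Day to day: launch each profile as its own isolated instance
firefox -P clearnet --no-remote &
firefox -P vpn --no-remote &
```

Because proxy settings live in the profile's own preferences, pointing the 'vpn' profile at your SOCKS proxy affects only that profile; the 'clearnet' one stays on the direct connection.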

I haven't got particularly strong feelings about how packages/base should be maintained, other than to say that being able to easily wipe out your OS isn't a good idea, and that upgrading should be designed to be as quick and easy as on OpenBSD, where it's as simple as syspatch, sysupgrade, and pkg_add -u[1]. Plus there's no danger of bricking your system (or at least stopping the console from working in certain configurations), which there has been when I had the temerity to try to skip over a minor release in FreeBSD without doing a pkg update/upgrade first.

[1] Someone previously said, paraphrased, 'but how do you know what the user wants to do?'. Really, it is not difficult. If you're on RELEASE, upgrade to the next release. If you're on STABLE, it should go to the next stable. If you're on CURRENT, it should upgrade to the next CURRENT. It shouldn't be necessary to upgrade *any* package just to get base working.
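For reference, the OpenBSD routine in question really is just three commands run as root (a sketch of the usual flow, not a substitute for the release upgrade notes):

```shell
syspatch      # apply binary security patches to the running release
sysupgrade    # fetch the next release's sets, install them, and reboot
pkg_add -u    # after the reboot, update all third-party packages to match
```

sysupgrade reboots into the installer automatically, so pkg_add -u is run once the system comes back up on the new release.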

Note also, that just like many Linux distributions, recovery when things go wrong is absolutely awful and often involves booting from install media and typing opaque CLI commands. Guess who hasn't needed to do that for years? Windows. Yes, Microsoft are a multi billion dollar corporation, but also FreeBSD has ZFS and could conceivably offer easier rollback or recovery.

BinkyTheMagicPaperclip Silver badge

Probably sticking with 14.3 for a while on the desktop, server may go to 15.0

I need GPU passthrough on the desktop, because FreeBSD, nice though it is at times, simply can't run a lot of Windows and Linux software[1][2]. I ran 15.0 and tried passthrough; it sort of worked on boot, but then I failed to get it working in Ubuntu, Mint, or Debian, and I'm seeing a couple of funky bhyve behaviours I don't remember seeing in 14.x. I'll stick with 14.3 and apply the Corvin K patches until 15.0 is as usable.

The one difference I've noted so far is that it's now trivial to set up a mirrored install: a mirrored 'ZFS on root' setup will by default mirror swap using gmirror[3] but put everything else (except the EFI partition) on ZFS. That's great, because mirrored ZFS swap can apparently have issues in low memory situations, and this configuration had to be set up manually in 14.x last time I tried.

Thinking about it, I might blow away and recreate my file/app server in 15.0 as it's currently on 13.5, is using mirrored ZFS swap I need to change, and I don't need to run desktop software on it.

Also pondering a second hand workstation so I can follow 16 CURRENT, and follow 15.0's third party patches for GPU passthrough.

[1] Don't get me wrong. 90% of the time FreeBSD does what I want - Firefox, Libreoffice, some scanning apps (although the OCR software I've tried so far is abysmal), and various terminal stuff. However the other 10% of the time is an issue.

[2] WINE does work under FreeBSD, but is unfortunately an order of magnitude behind Linux in terms of API coverage

[3] This breaks kernel dumps if it's important to you

Pebble, the e-ink smartwatch that refuses to die, just went fully open source

BinkyTheMagicPaperclip Silver badge

Re: Tempting! Bit square though..

I'm not seeing any great advantages here! Even my fairly low end 2016 car has Bluetooth, and can pair to a phone. I'd have to check if it actually displays the caller though, it's very basic and is the generation before Android Auto - which certainly would do what you want.

BinkyTheMagicPaperclip Silver badge

Tempting! Bit square though..

Haven't worn a watch for years, just use a phone instead.

Have to say if I bought a smart watch it'd probably be a Garmin. They have round bezels and physical buttons. Plus enough features to tell me what I already know - that if I want better running times I need to push myself harder.

Despite being very geeky by nature it just seems a step too far even if supporting open source hardware is tempting. Shades of having an actual watch form factor Game and Watch in the 80s when of course I was completely cool, and wouldn't want to return to that to overshade my former glory *cough*.

Is there anything it is really compelling for, when you already have a phone in your pocket?

You are likely to be eaten by the MIT license: Microsoft frees Zork source

BinkyTheMagicPaperclip Silver badge

There are other adventures too!

Worth checking out the annual XYZZY Awards for the best interactive fiction. The original Infocom adventures remain easily obtainable via the Internet, or some of them from GOG, and are still fun. I really need to finish Enchanter (which I enjoy far more than Zork) without accidentally releasing The Evil One and receiving a score of -1, 'Menace to humanity' (I was quite proud of that).

Inform 7 for modern Z-machine based games is... OK. Personally I don't get on with the 'natural language', because you end up having to use very specific language to drive it. Instead I use Inform 6 (which Inform 7 compiles down to anyway), plus the PunyInform library, which, if you're careful about the limits, also enables creating adventures for old 8-bit systems in addition to modern ones. If you're running a Z80-based system, Vezza is an excellent Z-machine interpreter, and you can e.g. run pretty modern interactive fiction games on an Amstrad PCW.

AI nudification site fined £55K for skipping age checks

BinkyTheMagicPaperclip Silver badge

Re: blackmail incoming

Or in fact Discord where they said your image was *not* stored, and then, um, the third party *did* store your image.

It's also worth looking at what the regulators actually do. People complain about Ofgem, but their remit is to keep the energy market stable, it's *not* to keep bills low (unless you're in a vulnerable group). Complain to your MP, and it all needs to be paid for somehow.

Although, yes, personally I'd rather the water industry remained privatised, we all paid slightly higher bills, and we actually got new reservoirs built in the last thirty years.

BinkyTheMagicPaperclip Silver badge

Bet the fines will go nowhere

Had a quick search, suspecting that Itai Tech Ltd would be an overseas company and Ofcom could go whistle for the fine.

Actually there is a UK based Itai Tech Ltd! However it is also in a state of 'Active proposal to strike off' according to Companies House, so Ofcom can still go whistle once they're wound up, I imagine.

Ubuntu 25.10's Rusty sudo holes quickly welded shut

BinkyTheMagicPaperclip Silver badge

Re: sendmail.cf

Very wise. I wrote very custom offline mail routing rules in Sendmail under SCO Unix. Once.

Never again, thank you.

More recently I had a fiddle with exim filter rules after moving e-mail accounts. That's an awful lot better, but not exactly casual user friendly.

Retail giant Kingfisher rejects SAP ERP upgrade plan

BinkyTheMagicPaperclip Silver badge

Re: Its about time customers said "No" when vendors try to force everyone into SAAS relationships

Whilst in general I agree it really does depend on the service's purpose. If it's a word processor - fine, you could get away with using Wordstar from the 1980s for a lot of purposes, WordPerfect from slightly later if you need diagrams and tables. You can effectively argue things haven't moved forward enough/much, especially as Word still doesn't support tables in tables after 30+ years of development.

If the product is driven by a changing legislative landscape and customer requirements that don't stay the same over time, complete with hundreds or thousands of edge cases, interfaces to other systems, and interactions that need to result in a reliable result there's no real alternative to a subscription. In the 90s pre web it'd just be annual support and manual updates, rather than a web app or automatic updates, but the effect is the same.

BinkyTheMagicPaperclip Silver badge

I see your viewpoint, but no. By that stage we were already established. I'm not aware of any specific feedback that would have influenced the product, otherwise I would have mentioned it!

It helped bring in money. I'm sure as with every customer they will have highlighted interesting edge cases. The strategy back in those days was 'take the code base, adapt it to customer specifications, and worry about the future later'. That becomes a problem when you don't have a sensible mainline code base with nice neat branches, instead there's a sprawling mess of divergent code bases, and if it's an implementation that's not following the general direction of the product, any modification to bring it towards that becomes very expensive.

As the technical debt was slowly dealt with and everything brought back to One True Mainline (it mostly is now, except for the ones that aren't) they had the choice of moving to the mainline version (extra cost if they wish to keep their customisation), moving to a different product (more expensive), or leaving as Business deemed the support/development cost over income to be insufficient to continue supporting their bespoke implementation[1]. Especially so as after being taken over things were decidedly more corporate and structured - an improvement in a few areas, a disadvantage in many others.

Most of these customers got a very good deal and well over a decade of a supported environment, and advance notice of literal years that they had to make a choice - they did not do so badly. Many moved to the follow on product, or alternative products in the company group.

[1] You may reasonably ask the difference between this and SAP at this point. The differences are a small company vs a very large one, and the fact there is no such thing as a cheap SAP installation.

BinkyTheMagicPaperclip Silver badge

At one point we dealt with the now departed Kwik Save as a customer, a common negotiating point was 'we're going to have to sell a lot of tins of beans to pay for that'.

Retailers absolutely will throw lots of money at you *if* and only if you can show the value returned is substantially greater than their outlay. Business 101.

They also, shockingly, expect professionalism, the product to work, and support that does what it says on the tin. Who would have thought?

Unreasonable customers usually aren't too difficult to deal with except in the short term, anyone trying to exceed their contract will eventually go back to negotiation at a business level and change requests to fix their requirements - or to be asked to leave. It's the *reasonable* customers who expect the product to do exactly what it should, when it fundamentally doesn't, that are the problem - or when the contracts team gets eaten alive by retailers who are very good at negotiating and the contracts team or management look only at the headline rate, and not at the ongoing cost.

The only exception is smaller companies who frankly received a bargain, and then in future years are unwilling to pay a reasonable market rate (not one influenced by unnecessary cloud architectures or poor business decisions). In those cases, well, they'd better find another company who will sell a solution at below the going rate, pony up the market-rate cash, or move to a standardised product with no customisation.

BinkyTheMagicPaperclip Silver badge

It's not rocket science

Similar issues at work. Much of my area gained popularity because we provided a decent product at a reasonable price with a lot of per customer customisations.

This is expensive to maintain, however, and unless you're very strict with your development process (and we weren't, making the business over clean architecture trade off) it leads to sprawling technical debt. But hey, the company prospered because of it, so not a bad business decision.

Then we got taken over aaaaand innovation stopped. Then the next step was trying to push customers to the newer platform which was more expensive, and overall less functional (did some things much better, others somewhat worse). 'Surprisingly' this is not a selling point.

If you want the big companies you're going to have to support customisation, and put in the resource to support it. They will take some pain when the platform evolves, as they're not completely unreasonable, but a huge price spike which boils down to 'we want more money and less hassle' will just lead to them looking elsewhere.

There seems to be a repeated expectation/hope among various management that there is a Sunshine and Unicorns marketplace where companies will pay through the nose for whatever you provide. Providing solutions at this level is inherently messy to your Ideal Architecture (although a degree of that is normally because providers under-quote and don't factor in proper training and documentation, which are mandatory for long term support and platform stability).

If you're going to effectively ask the customer to re-buy their entire solution, the likelihood is you'll do it repeatedly. Why bother sticking with a company like that, when you can try another company that may be more competent - or at least more reasonably priced.

EU's reforms of GDPR, AI slated by privacy activists for 'playing into Big Tech’s hands'

BinkyTheMagicPaperclip Silver badge

Might as well keep it now

Largely the GDPR isn't that useful - the responsible companies comply with it, and the irresponsible ones flout the law and have very few consequences.

However, seeing as most decent companies have changed all their systems to cope with it, might as well keep it now.

Additionally it will *really annoy Zuckerberg, Musk, Altman, and a whole load of other tech bros* as they have to dedicate legal and technical resources, after they break the rules for the fifth time.

For that reason alone, we should keep it.

Here's one way to cut support ticket volume… send them to another company entirely

BinkyTheMagicPaperclip Silver badge

Not really a surprise

We've had more than one customer think anything vaguely web or network based means the request should be sent to us, and they quickly get redirected to their internal support or ISP.

Fortunately there's no 'AI' here yet, other than an initial automated response e-mail, but it's probably just a matter of time. There's enough mistakes made in responses using humans, I shudder to think what will happen if an LLM starts to construct confident sounding but wrong answers.

If a human tells the customer to delete all their data for the last month and it'll be OK, they can get sacked. How are you going to tell off an LLM when it's backed into a corner and spews out any old rubbish?