Re: The Financial Times piece put the deadline at 2027.
But isn't this only for 'official' systems? Doesn't affect consumer/retail stuff, unless I'm misunderstanding it.
111 publicly visible posts • joined 24 Apr 2020
Amazing how long installing these updates takes, even on a Pi5. I ran this one the other night and it was a good 20 mins to install the updated kernel and all the rest, having last updated around 5-6 weeks ago. I guess it's the really slow SD storage causing this, as the CPU didn't seem to be sweating much (and the fan didn't run).
My smart thermostat is now similarly, and intentionally, in dumb mode. It was good for a while, but a classic case of good hardware ruined by cr*p software. The app kept crashing and messing everything up, so the stat's magic "mothership" box was decommissioned and it now operates as a manually programmable stat, nothing more. Good enough for us - better, in fact, given the above stories!
Do you have a DACpro board? I've just added one to my Pi4, and only just started playing with it. Sounds great so far. Now I just need a bigger SD card, and then I can stick my entire music collection (even in FLAC format) into a reasonably high-fidelity portable jukebox machine the size of a pack of butter, ready to plug into any audio system!
He just doesn't like minor version numbers getting much above 20, so the major number gets incremented for no reason other than that. Linux essentially hasn't had any "major" big-bang versions since 2.6 really - it's just been a continuous, steady evolution since then, with the version number incrementing according to how Linus feels, rather than any connection to major features.
Wow, just think - if the tax system wasn't so insanely, unfathomably complex and utterly hellish, this whole game would be a lot easier. HMRC could run with far fewer people, a far smaller helpline, time and money saved, and happiness all round! But nah, obviously that will never happen....
20%? Extraordinary - in all the places I've been, it's never even got close to that. In my current place, where I've been for 6 years, through lots of staff turnover, we've had ONE woman out of a total of 15 members of the IT team. It's an engineering company, and the entire projects, engineering, controls and IT teams still remain 100% male now. Every single one. We have a long, long way to go....
Interesting perspective - but yeah, sounds about right. Given that we had NT in 1993, I think Linux has improved rather more dramatically than Windows since then. Although even then, desktop Linux peaked some time around 2010, and hasn't really improved much since. It's just constant, constant re-invention of the same things, again and again. The 3D compiz/beryl desktop I had in, what, 2006, could do stuff that a modern Ubuntu desktop still can't! Because it hasn't been reinvented yet, but I'm sure it won't be long...
Oh dear - shame, because the UniFi kit is really, really good. I've just upgraded my whole home network, using a Ubiquiti PoE switch, a confusingly-named "Cloud Key" console (meaning your data and camera feeds are stored locally on the 1TB device, /not/ in the cloud), a couple of cameras and a couple of Wi-Fi APs. Doorbell to follow next. It's brilliant: easy to use, very nicely designed, works out of the beautifully-packaged box (they even include a tiny spirit level to use when fixing the wall-mount plates!), gets regular software / functionality updates, and requires no subscription of course.
I can access it remotely, but only because I set up a VPN to my home magic box (and I have a static IP), then I point the app or browser at the private IP, either on my laptop or phone. I'm very happy with the setup, and presume I'd be immune to this security snafu, since I'm not using their cloud-based accounts, but the fully on-my-prem, behind-a-VPN setup.
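For what it's worth, the client side of that is just a stock wg-quick style WireGuard config - a minimal sketch of the kind of thing I mean (keys, addresses and port below are all placeholders, not my actual setup):

```ini
[Interface]
# Client's tunnel identity and address (placeholder values)
PrivateKey = <client-private-key>
Address = 10.8.0.2/24

[Peer]
# The home "magic box" - static public IP, so Endpoint never changes
PublicKey = <server-public-key>
Endpoint = 203.0.113.10:51820
# Route only the home LAN through the tunnel
AllowedIPs = 192.168.1.0/24
# Keep NAT mappings alive so the server can always reach the client
PersistentKeepalive = 25
```

With AllowedIPs limited to the home subnet, only traffic for the cameras and console goes over the tunnel; everything else uses the normal connection.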
Something to do with ABI compatibility, I believe. They always use the major kernel version with .0, then the -XY suffix is their ABI release number. But yes, at first glance it would be more useful if they actually included the patch version instead of always showing .0, but I guess that would mess up some of their build tooling in some way.
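You can see the scheme by pulling apart the string that `uname -r` prints - a quick sketch (the release strings here are just illustrative examples of Ubuntu's pattern):

```python
# Split an Ubuntu kernel release string into its parts:
# <upstream major.minor>.0-<ABI number>-<flavour>
def parse_ubuntu_release(release: str) -> dict:
    version, abi, flavour = release.split("-", 2)
    major, minor, patch = (int(x) for x in version.split("."))
    return {
        "upstream": f"{major}.{minor}",  # upstream series the build tracks
        "patch": patch,                  # always 0 in Ubuntu's naming
        "abi": int(abi),                 # Ubuntu's own ABI bump counter
        "flavour": flavour,              # e.g. generic, lowlatency
    }

print(parse_ubuntu_release("6.5.0-14-generic"))
```

So the "14" is Ubuntu's counter, not anything from upstream, which is why it marches on even when the upstream patch level changes underneath.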
Stable kernels in "not actually that stable" shocker!
When you look at the sheer number of patches arriving in each stable point release (sometimes many hundreds, in a cycle involving several updates per month), you do have to wonder. Greg does a lot of amazing work, but clearly the maintainer burden is too high to really manage these days, so not everything gets the vetting it should. And the old, fairly strict rules about what should be included in stable seem to have been quietly forgotten. Just pour in lots of nice patches from the newest -rc release!
When backporting to older LTS kernels, as opposed to the latest stable series, this seems to have become especially risky - presumably because the trees have diverged too far?
Google's online and cloud services are highly suspect. We've already had incidents of serious data loss in their Cloud Storage service (we never saw anything similar with Amazon S3), and we even had our entire cloud account locked out for a few days, due to an unspecified "violation" of their T&Cs and usage policies. I suspect we'll be heading back to AWS in the near future.
On a personal level, Google Photos is incredibly erratic and unreliable, with the opposite problem to that described here: stuff coming back from the dead! The number of times my 15GB free storage has nearly filled up, because stuff deleted ages ago suddenly returns - it's infuriating. When you try to create space by downloading blocks of photos for offline management, it fails at least 50% of the time due to an unspecified "network error". Google's online tools are frankly painful, but my only alternative is to go and get an iPhone, and I'm not quite ready for that....
I remember, circa 2004, buying the latest Sun UltraSPARC number-crunching server for our scientific research institute. It had 12 dual-core CPUs and a then-insane 48GB of RAM, plus a still-useful-now 3TB of FibreChannel high-speed drives. And it cost about the same as a 3-bed house, at the time (after the educational discount!). Now, less than 20 years on, you could easily smoke that with a custom-built PC in the <£1.5k bracket.
WireGuard's magic endpoint roaming is really its killer feature. We use it extensively now, mainly because of that and the general ease of configuration compared to the horrors of IPsec. But the ecosystem of tools around it is still very immature, which is a shame. We are building our own 'dashboard' to view and manage connected clients across multiple VPNs, but it's not production-ready yet...
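Much of that dashboarding boils down to parsing `wg show <iface> dump` output (tab-separated: one line for the interface, then one per peer). Not our actual code, just the flavour of it - the sample line below is fabricated, and the 3-minute "online" heuristic is a rough assumption:

```python
# Parse the peer lines of `wg show wg0 dump` (tab-separated fields).
# The first line describes the interface itself and is skipped.
def parse_peers(dump: str, now: float) -> list:
    peers = []
    for line in dump.strip().splitlines()[1:]:
        pub, _psk, endpoint, allowed, handshake, rx, tx, _ka = line.split("\t")
        age = now - int(handshake)
        peers.append({
            "pubkey": pub[:8] + "...",   # abbreviate key for display
            "endpoint": endpoint,        # changes as the peer roams networks
            "allowed_ips": allowed,
            "online": age < 180,         # rough heuristic: handshake <3 min ago
            "rx": int(rx),
            "tx": int(tx),
        })
    return peers

sample = (
    "PRIVKEY\tPUBKEY\t51820\toff\n"
    "peerAAAAAAAA\t(none)\t198.51.100.7:41235\t10.8.0.2/32\t1700000000\t1024\t2048\t25\n"
)
print(parse_peers(sample, now=1700000100))
```

The endpoint column is the interesting one for roaming: it silently updates whenever a peer's handshake arrives from a new address.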
We're in a city of ~130k, and they've fibre'd half of it, but don't seem to be bothering with the other half. A couple of square miles of fibre desert, although we can get VM's coax service at up to 900Mbps, supposedly. CityFibre and Openreach both seem to be wondering when to do the rest.
For now, we manage with FTTC at 70/20ish, from A&A, and it's perfectly adequate 98% of the time. When fibre comes (my router is ready!) I'll go for the cheapest FTTP tariff available, which will be the 160Mbps symmetric one - and I'll actually save £8/month compared to now. Meanwhile, I've upgraded the internal network with some shiny UniFi kit, and my (new) laptop is reporting a raw bitrate of 1.2Gbps today, which is nice. It does have an almost perfect line of sight to the AP, approx 5 metres away, and I'm using 80MHz channels. At least the Wi-Fi won't be a bottleneck!
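That 1.2Gbps figure is exactly what you'd expect from Wi-Fi 6 with 2 spatial streams on an 80MHz channel - a back-of-envelope check using the standard 802.11ax PHY numbers (this assumes the top MCS 11 rate, i.e. 1024-QAM with 5/6 coding, and the short 0.8µs guard interval):

```python
# Back-of-envelope 802.11ax (Wi-Fi 6) PHY rate for an 80 MHz channel.
data_subcarriers = 980      # data tones in an 80 MHz HE channel
bits_per_symbol  = 10       # 1024-QAM carries 10 bits per subcarrier
coding_rate      = 5 / 6    # MCS 11 coding rate
streams          = 2        # typical 2x2 laptop radio
symbol_time      = 13.6e-6  # 12.8 us symbol + 0.8 us guard interval

rate_mbps = (data_subcarriers * bits_per_symbol * coding_rate
             * streams / symbol_time / 1e6)
print(round(rate_mbps))  # -> 1201, i.e. the advertised "1.2 Gbps"
```

Of course that's the raw PHY rate; actual throughput after protocol overhead is a good bit lower, but still comfortably more than any FTTP tariff on offer here.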
Same - I have a lovely decentralised setup now, using the excellent UniFi kit. A DrayTek router acts as an FTTC/DSL modem, and is ready for FTTP (Ethernet WAN port) when it finally arrives in 2058. Then a nice UniFi PoE switch powering a 'Cloud Key' hub, and some cameras and Wi-Fi APs, plus a VoIP phone base. Soon an RPi with a PoE HAT as well, probably. All works beautifully and the whole network (apart from the router in the garage) runs off one plug.
Because they are required to? Because power sites are required to transmit live operational metering data to the grid? Because contracts demand full visibility of behaviour, performance, alerts, etc? Running energy systems, and much else, without a live internet connection is not realistic in this day and age.
Not convinced - for us, Wayland still seems to have fundamental issues, like not actually working. Every time we've had apocalyptic desktop weirdness, second or third screens going AWOL, or unexplained 'treacle wading', Wayland has been the common factor. Flip to X11 and everything's happy and normal again. This is on recent Ubuntu versions. I'm guessing in another 10 years it'll be fully ready for primetime.
Interesting, yesterday I booted an Ubuntu 23.10 nightly build, on a shiny new Intel i5 laptop, and it still kicks into X11 by default, even now.
I stopped doing this years ago, after finding the process way too much faff, and it becoming impossible to build a kernel that would even boot on Ubuntu. I'm now running Liquorix, which works fine, but perhaps what I'll try next is a custom build, using their config as a starting point and stripping out more unnecessary stuff. Interesting that the vmlinuz image, and the modules dir, are much smaller with the Liquorix kernel compared to the 'canned' Ubuntu kernels.
Would love to see how quickly a kernel compiles on my 12-thread Ryzen, compared with the old single-core Athlon 64 I used to use, way back in the day!
I've done it for years, since my son was born. Though on 80% pay, not 100% - am I a sucker? But companies large and small are quite open to the idea now, so it's not really been an issue. I love it, and it brings life back into balance again. The cost is minimal (assuming 80%) during the childcare years anyway, and I use my day off to do all my chores and appointments, and extra things like helping with a code club at the school. Never going back to 5 days!
We had a related organisation do a cybersecurity audit on us recently, and they want us to install a magic 'security gateway' to magically improve everything for our office network (which already has all the 'usual' firewall stuff). FortiGate would be one of the potential options for this. Ho hum, I'll sit smugly and delay implementing the recommendation a little longer, then...
...would you want to? Win11 may be slightly less eye-bleedingly ugly than Win10, but it's a bug-ridden mess. There are some quite serious bugs in the initial setup wizard that can make it take 2-3 attempts to even set up a new machine and get to a usable desktop. This has bitten us a few times recently, with brand-new factory-fresh machines. Never had any such issues with Win10, hideously ugly though it was.
Good move - Lightning was getting rather old and cranky anyway, and USB-C (esp on the USB4 standard) is hugely superior in capability. Now, can we standardise video connectors please? Half my life at work seems to involve users trying to find random weird cables to cope with everything from VGA to DVI, HDMI (micro, mini, standard), mini DP, standard DP, and DP over Thunderbolt...