Cardboard box? Luxury!
(Sorry. I couldn't resist. I've never wanted to be a lion tamer. I've always wanted to be a systems analyst.)
399 publicly visible posts • joined 23 Mar 2017
250 kW per rack? That's madness! Especially if one considers the use to which all those kilowatts are going to be put: transcoding video files showing the morning coffee, making yet more private personal data available to real-time advertisement bidding, and worse.
On the bright side: the heat put into a reasonable cooling liquid can be used for heating purposes, where hot air mostly can't. I recently visited a Tier IV Gold DC with liquid cooling they'd built themselves. The owner listed the following pros: a heat output power coefficient of roughly 0.7 (see below for why the servers actually consume less than their designed power); much cheaper servers (they run HP Moonshots and got a deal with HP under which they don't need to buy all the fans, which cost a ton and make a considerable dent in the final bill - and the fans in the Moonshots are some 20 W each, which is how they get to the 0.7 when they don't run them); and all the kit can run at higher temperatures with the same reliability (he said something like 35 °C in liquid compares to 17 °C in air), which in turn makes the removed heat usable without a huge heat pump. Their installed heat output is around 300 kW and they use it to heat a public outdoor pool nearby. This arrangement adds around 4 months of open-season time to the pool every year.
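For what it's worth, the 0.7 figure roughly checks out as back-of-the-envelope arithmetic. A minimal sketch, with all numbers assumed for illustration except the ~20 W per fan mentioned above:

```python
# Back-of-the-envelope check of the quoted ~0.7 heat-output coefficient.
# All figures are assumptions for illustration, not the DC's real numbers,
# apart from the ~20 W per fan mentioned in the comment.
design_power_w = 100.0   # assumed nameplate draw per node, fans included
fan_power_w = 20.0       # fans removed for liquid cooling (~20 W each)
actual_draw_w = design_power_w - fan_power_w

# Assume ~90% of the actual draw ends up captured in the cooling liquid.
liquid_capture = 0.9

coefficient = actual_draw_w * liquid_capture / design_power_w
print(round(coefficient, 2))  # 0.72 - in the ballpark of the quoted 0.7
```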
Of course, I can even get the whole world under lock and key, by putting myself in the cage.
What I was pointing at is the Wintel creep and where it's headed with systemd.
My personal belief is there will always be token plurality available, just like with Firefox, which reportedly lives mostly off Google's money. It appears to be cheaper than monopoly lawsuits of the Microsoft Internet Explorer fashion.
Finally, the promise of the future is here. I'm so happy this is at long last available. Everything will be nicely locked to two or three hardware vendors, a nice OS duopoly or triopoly, all the AMZN content at our fingertips, bots filtered out, no more captchas - remote attestation from approved edge providers, through approved carriers and approved last-mile providers, through an approved and signed OS and the one true good browser, right to the CPU core, where the digital signature will live next to the reliable and secure out-of-band management system, and then back again. Everything signed and secure and reliable (apart from the signed and approved backdoors for secure and reliable and accountable government agencies).
I'm sure corporate Windows users are going to love this! Wait...
Yes, Elon would do exactly that. Single-handedly - while using his other hand to tweet about how cool and efficient he is - going against every bit of institutional experience of how one should approach software development to stand a chance of sustainability, let alone success.
In the process he would find out that his development model is unworkable and hastily sell his company, mere days from a financial collapse, to an investor, who in turn would have to rewrite everything from scratch.
Yes, I bet many of us know exactly what Elon would do, as many of us have worked for our respective Elons and were quite happy to part ways with them.
"We welcome the community to implement their own custom square-rounding add-ons. Of course we will supply some, too, so that we can produce shiny demos of a wheel rolling, but we are going to forcefully deflect any blame from the core square team when the add-ons start to fall apart at high speeds."
I've been stuck with my iPhone 7 for quite a few years now because of its repairability (evading and avoiding Google as much as possible being one of my hobbies, I don't want to switch to a smartphone with Google's Android). Newer iPhones just aren't that repairable any more, if at all. This evolution looks like a step in the right direction...
... only I've been testing the French /e/OS* to a surprisingly high level of satisfaction. As soon as I learn again how to live with only a decent digital camera instead of a great one, even the better repairability of the new iPhone isn't going to be enough of an incentive, given Apple's years of neglect and abuse in this area. The majority of /e/OS-supported phones are even more repairable by design.
* The French managed to achieve even stupider nomenclature than Apple, and I sincerely hope they will reconsider soon. The name is unpronounceable and unsearchable - special characters are usually ignored by search engines, and the name "eos" is one of the more common ones throughout the industry, from cameras to cars.
Given what Google have produced over the last few years (heaps of money, a privacy nightmare and a totally crippled information structure throughout the Web), I can understand that they are hungry for more money, but I wish them a few orders of magnitude less productivity if they want to keep their current course.
When I was in high school, computers were growing their first thin roots in general education. PCs were still expensive and difficult to paintain* in working order. My school bought a lab's worth of PCs with Windows on them and additionally got some really old boxes from a bank as a donation. The bank PCs weren't powerful enough for Windows at that time, nor were reasonable desktop options available on Linux, so our lab manager installed the then-free Red Hat on them, CLI only, together with Lynx, Pine etc.
The lab was open to the general studentry during breaks (home PCs were still rare and hunger for the Internet was huge, even though xxx was strictly verboten), and with one PC for every 30 pupils only the sharp-elbowed got their vaunted online time. The white-on-black screens with their blinking cursors, however, remained vacant all the time. Yes, it was then that I learned to love the blinking cursor - one could get everything done on those machines, since the only use for a graphical display connected to the Internet was banned anyway and, worse, the ban was strictly enforced.
Those were good times. Textual information would be conveyed in text form, not a JPEG or a TikTok dance; Brin and Page were yet to apply for the grant for their Lego server, which only later turned into the privacy, social and ecological nightmare we are having now; the systemd guy was still bullying his kindergarten mates and not the whole world... I'd hazard a guess that the only thing those machines would struggle with today is TLS; the rest would still be quite serviceable, if we managed to keep to text. (I quite like current KDE though:)
Icon: happy Friday!
* Originally a typo, but I'm keeping it as I like the word!
I'm surprised that such a beast could have been developed for so long on SVN at all. We switched our comparatively small projects to Git years ago and we would never go back. Git can be quite intimidating to SVN users, but its internal logic pairs well with how C++ works, so it shouldn't be that much of a problem for WebKit devs.
But why GitHub? It's not even the height of fashion anymore, let alone the optimal choice from a technical, legal and reputational point of view. Many Reg articles attest to that.
I've been quite happy with Devuan, both on desktops and on servers/VMs. The only insurmountable problem I've had with Devuan in the past two years is MiracleCast, and I can live without that. I've never had problems with non-free drivers, firmware etc., on both older and fairly recent hardware.
Ubuntu has problems of its own (how they handle the Linux name, the opt-out stats, aimless juggling with desktop environments etc.). I can only recommend Devuan with KDE to Ubuntu users.
This story reminds me of talks with my dad, an IT guy in the times when all IT guys knew each other by name. He would say "you youngsters have it easy with IDE ATA. I used to spend whole weekends with a screwdriver and an oscilloscope to get the heads in line". Or "you youngsters have it easy with BIOS and MBR. I used to have to enter the first 16 opcodes by hand to make the machine eat the punched tape with the program on it". Or - to me and my brothers when we requested another 16 MB of memory for our Cyrix 266 - "what on earth do you need another 16 MB of memory for? We ran a whole steel factory on 8 kilowords of memory for a decade!"
On his last job before retirement he got a company iPhone. He called me asking for help: "I got a black fondleslab, I don't know what to do with it." What brand is the smartphone, I asked. "I don't bloody know, there is nothing written on it, must be some cheap Chinese thing"... After a bit of to and fro, my dad understood it quite well actually: "so it's just a portable computer with a modem in it, nice thinking, I can live with that." He called me a week later: "I think I understand it now. One question though: how do I end programs to save battery?!"
Even today, he lives on a 70 MB monthly plan.
Not much to do with the article, I know. It just reminded me of how cool my dad is. I fear that getting DisplayLink to work without systemd or wrestling with NPM will never qualify as cool...
Actually, I'm with you on that one, I just didn't want to start a flame war. To be fair, I find the preloading function on Windows 10 quite effective. If I find the time, I'll try to switch it off on my work PC to get an idea of what the actual memory requirement of Windows itself is.
For some years I've been using KDE Plasma on nearly all my desktops, usually fairly recent (3 to 12 years old) with enough memory (8 to 64 GiB), so I haven't given KDE's memory footprint much thought. But I've always suspected it must be quite resource-heavy, with all that functionality (in my view KDE is functionally on par with modern Windows, with which it can't easily be compared because of Windows' default preloading of favourite programs and DLLs).
Now I'm pleased to learn that KDE actually requires far less memory than Gnome, and in fact has the second-smallest memory footprint of all the tested variants. Well done, KDE.
And thank you, Liam Proven, for the test!
... who isn't hell-bent on screwing their customers over? Who just delivers a functioning printer without a built-in bricking device, without evil bloatware, without illegal contracts, without all the stuff nobody wants? Canon, HP, Lexmark, Epson, Brother, Xerox. Every single one of them.
I do believe there is a market share to be gained by a Peterbilt of printing. If there is not, then all hope is lost.
In my days in nuclear research, the mantra was "yes, we could have thorium and molten salt cooling, but nobody is paying for it; PWRs and BWRs were paid for by Cold War armament programmes". All the old hands thought we would need another war, cold or hot, to get the requisite funding, unavailable in peacetime.
Are we going to war with China (or China with us), or is China's combination of energy thirst and available funding and manpower in R&D and manufacturing big enough to pull it off without the requisite underlying military conflict?
I had a similar thought, but in a different direction: one feels that Cisco's software must be full of bugs, Juniper's as well, Aruba's at least half full (call me an optimist here); even Fortinet get their laundry publicly washed every now and then. But so far I haven't heard a single public announcement of a vulnerability in Huawei's infrastructure gear (consumer gear and endpoint appliances do get mentioned from time to time).
I'm curious why. Do they disclose their vulnerabilities in a similar manner to Cisco et al? If yes, why don't they get similar media coverage? If not, why not? Is it a cultural difference or a language barrier?
I'd hazard a guess that Huawei gear gets updates and patches as well. There are lots and lots of Huawei boxes installed throughout Europe. Is there a Huawei admin here on this forum who could chip in with some real-world experience?
There was a time when Google used mid-tier off-the-shelf kit for their datacentres and made public some rough figures on how it made sense financially at the time. The core difference is that Google were masters of their own workloads, whereas Microsoft rent their compute and storage capacity out. But at their scale, careful analysis of the usage profile of existing kit may be even more valuable than exclusively running their own.
So I would say perhaps more "sensible" with the money, rather than "ruthless".
Of course they are. But we already had datacentres before the much-touted cloud. One just had to organise the multihoming oneself or delegate it to a provider. Big customers already had the requisite elasticity in performance and resources even on-site (I remember daily core count and memory size adjustments in a leased on-premises server in 2005).
I'm sure everyone here already understands "the cloud" as "someone else's computer"; my rant was aimed more at the manglement, marketing and beancounter side of affairs.
The way these regimes work is that they monopolise whatever they need to control the population. Banning cryptomining among the general public most probably means monopolising it for the state.
And the state may not need it for internal use - I'm sure you understand what I mean.
Ever noticed how some used Dell laptops advertised on eBay have their service tag blurred out while others don't? I have always instinctively tended to prefer the latter over the former. I wouldn't ascribe all of these blurrings to malice, not even the bigger portion of them, but still, one has to wonder...
It took me some time to recognise the pattern, but there it is: the comment section of all On Call and Who, me? articles has been the greatest joy to read through and quite often a learning experience, too. Sometimes I even get the feeling that the articles themselves serve more to seed these discussions than anything else.
So I now take the opportunity to thank you all for your great comments, past and future. Here's a pint for you!
Google seem to consider everyone on the Internet a third party to themselves and their useds*. Whereas other big companies (Facebook, Amazon, Microsoft, Cloudflare et al) tend to build and expand their own very high-walled gardens, Google are hell-bent on privatising the whole Internet for themselves, at no monetary or social-responsibility cost wherever possible.
To me Google looks like Standard Oil a hundred years ago: a company which brought about a huge change to everyone's lifestyle, improving people's lives while employing questionable business and ethical practices, getting slapped on the wrist here and there, slowly growing to a size and power too big to manage (from the civil perspective). And, most importantly, ripe for a breakup. With search, omnipresent ads, means for lazy and inept developers to spice up their webpages, Android and Chrome all under one roof, the company is a disaster waiting to happen. With Android and Chrome legally separated from the current state-of-the-hydra, the risks just might become manageable again. And, if the example of the Standard Oil breakup could be replicated, I, for one, would not object to the financial benefits of such legal action to current stakeholders (IIRC the Rockefeller family's fortune increased tenfold after the breakup of Standard Oil into many separate legal entities).
* I got the term 'used' from Stallman: "We call them 'useds' rather than 'users' because Facebook is using them, not vice versa."
I have spoken to a number of police officers in several European countries, many of whom confirmed that their respective police forces were indeed officially sanctioned to use NSO's software (among others). This article sums up the situation nicely: it only becomes a problem when prime ministers' phones are affected.
I do hope that those same prime ministers' phones will be the first to be equipped with the "completely secure encryption methods that allow access to law enforcement for the purpose of fighting child pornography"; otherwise we, the plebs, have no chance of digital privacy.
Well, from a business perspective it makes little difference. We make our key generation structured, so that one always knows (or should know) which parent key a particular end key is derived from. A hundred million should crop up either way. It doesn't matter which technique the employee used. Being able to do it at all is the problem.
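For the curious, structured key generation of the kind described above can be sketched with plain HMAC. A minimal illustration, with a made-up root key and labels, not any real scheme:

```python
import hashlib
import hmac

def derive_child(parent_key: bytes, label: str) -> bytes:
    """Deterministically derive a child key; the label records the lineage."""
    return hmac.new(parent_key, label.encode(), hashlib.sha256).digest()

root = b"\x00" * 32                            # made-up root key, illustration only
billing = derive_child(root, "billing/2024")   # hypothetical intermediate key
terminal = derive_child(billing, "terminal-17")

# The same derivation path always yields the same key, so an audit can
# establish which parent key any deployed end key came from.
assert terminal == derive_child(derive_child(root, "billing/2024"), "terminal-17")
```

A key that doesn't re-derive from any known path is exactly the red flag: it must have been generated outside the structured hierarchy.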
One bar owner is always present at their bar; another requires that guests pay only in exchange for a registered receipt; still others may employ yet another technique. But a bar owner who lets their bartenders sell from their own bottles a mere few years after the establishment went bankrupt - such a bar owner doesn't hold much promise.
Avaya kit seems nice enough and expensive to replace (where I worked some years ago, the IT people had to scour eBay for used headsets to keep the comms going); it would be a shame if the company were to fall victim to such practices.
Disabling a most useful tool is like never having a smartphone for fear of being robbed of it.
It should, of course, be an important part of defence-in-depth, part of active monitoring. "Twenty PowerShell windows on a development machine with admin privileges? Probably okay, but one should stonewall the dev network from operations and finance. A single PowerShell process spawning out of the blue in the middle of accounting, where they only use ERP and Excel? Why? Let's take a look at what's going on."
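The "why is PowerShell running in accounting?" heuristic boils down to a per-segment baseline of expected processes. A toy sketch, with made-up segment names and process lists:

```python
# Per-segment allowlist of processes normally seen there (made-up examples).
baseline = {
    "dev":        {"powershell.exe", "code.exe", "git.exe"},
    "accounting": {"erp.exe", "excel.exe"},
}

def is_anomalous(segment: str, process: str) -> bool:
    """Flag a process that is not part of this segment's known baseline."""
    return process.lower() not in baseline.get(segment, set())

print(is_anomalous("dev", "powershell.exe"))         # False: normal on dev boxes
print(is_anomalous("accounting", "powershell.exe"))  # True: worth a look
```

A real deployment would build the baseline from observed telemetry rather than hand-written lists, but the decision rule is the same.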
Funny, as I read an article or three to that effect several years ago. I thought PowerShell would have been well understood by now. Why the sudden urge to comment on it, especially by a "random" group of Five Eyes states?
... I remember times when USB really was universal (up to version 2.0).
Nowadays it seems that USB plays a sinister role more often than not: a devilish vessel for Windows updates (system hubs not working with older versions), Dell's x86_64 tablets freezing up on Linux upon attaching peripherals, ThinkPads that can't be bothered to work with any USB-C docks besides their own (which can't be ordered because of chip shortages), and so on and on.
It's nice to have one connector for DisplayPort, HDMI, charging both ways, network, USB, LNG, petrol and the garden hose. But one sometimes feels as if we were back in the times of getting flatbed scanners to work reliably over SCSI: words like alchemy and lottery spring to mind.
In this respect it is not surprising at all that Microsoft produced a laptop whose USB refuses to work with pre-existing peripherals.
Re the scary warning from Debian about non-free firmware and drivers: at least one knows what's going on. This shouldn't be a reason to jump to a minor obscure distribution, which itself is by definition unofficial from Debian's point of view. Debian itself is pretty conservative and on the slower end of the update spectrum, and having another (up/down)stream node in the update and community support pipeline seems too high a price, should one choose this distribution over Debian solely because of the warnings.
Re live installers: last time I used one of those on Ubuntu, I followed the spirit of "try it out first, install when happy", and to my mild annoyance I got a clean install sans all the packages I had installed in live mode. Makes sense from the system's point of view but not from the user's. I personally don't find this behaviour optimal, especially if the raison d'être of the distribution is to be more user-friendly than the original Debian (before you hurry to the downvote button, please read the "I personally" again).
I'm not much of an Ubuntu guy, but all the pros of SpiralLinux mentioned in the article seem to be present on Ubuntu, too: non-free packages, non-free firmware and drivers, btrfs, a live installer. Systemd is not mentioned, so I'd expect one gets a healthy dose of that, too. Ubuntu doesn't support Flatpak but explains why and offers alternatives.
So why choose SpiralLinux over Ubuntu? One thing comes to mind, and not an unimportant one at that: SpiralLinux at least says it's a Linux distribution.