* Posts by cdegroot

62 posts • joined 1 Dec 2014


Could it be? Really? The Year of Linux on the Desktop is almost here, and it's... Windows-shaped?


Re: @Long John Silver - Nothing to do with Linux, all to do with Windows.

Indeed. Smaller slice of a bigger cake.

I run Linux everywhere, even on my old MacBook, but Win10 is pretty much a requirement for gaming. I hate dual boot, but VirtualBox under Win10 works just fine. I toyed with WSL, even bought Xming; it all worked fine, but I'm either gaming or hacking code, no need for some seamless experience there, and the Win10/WSL/Xming desktop fell far short of my full Linux desktop experience.

In colossal surprise, Intel says new vPro processors are quite a bit better than the old ones


Re: How much?

I just bought a dirt cheap E5-2650v3 based workstation. 10 Xeon cores and 64GB RAM for less than CA$1500. It feels every bit as fast for “product development” purposes as the MacBook Pro my employer provides at three times the cost.

Moore’s law is well and truly dead.

Vivaldi browser to perform a symphony of ad and tracker blocking with version 3.0


Nothing new...

For me, the country selector is right there next to the safesearch and time settings. https://pasteboard.co/J4Zud1R.png for visuals

IBM exec told that High Court evidence in Co-Op Insurance case wasn't 'truth, whole truth, and nothing but the truth'


The A-word

I learned the hard way that doing "Agile Software Development" with a customer that isn't all in will end in tears. I worked on a project where the customer had a requirements analysis done by a third party, and as part of our bid we proposed to shred it and start over. Frankly, the requirements docs were nonsensical and had little bearing on what the customer actually wanted. They agreed, and we drafted a contract where we came to demo every two weeks, they would sign for acceptance of the sprint (or not), and with that signature two weeks of payment were due (the customer could also bail out at any point in time). Some 30-odd sprints later, we declared the project done and a success, even though it was in no way what that original requirements analysis proposed.

So cue our disbelief when, months later (IIRC half a year), we got a legal summons to finalize the project "according to the original requirements analysis" or else. Apparently someone higher up didn't understand what we had been doing and/or tried to get the org's money back (it was a small government shop). Luckily we had a good paper trail and nothing bad happened to us, but the project was declared a failure by their head honchos and never got used.

Just saying that while I'm always happy to lambast IBM, I'm not entirely sure that they're wrong in saying "the customer didn't cooperate with this agile thing".

Shipping is so insecure we could have driven off in an oil rig, says Pen Test Partners


Nothing new...

I wouldn't be surprised if container shipping were a very low-margin business. You have mega capital expenses (Google "MSC Gülsün") and a bunch of very large players that all offer pretty much exactly the same product: pick up a container in Shanghai, drop it off in Rotterdam. That's the sort of business where you want to cut all non-essential costs.

Git takes baby steps towards swapping out vulnerable SHA-1 hashing algo for SHA-256


Re: "The concept of hashes in Git was never "for security" "

Torvalds is all about good ideas, meh execution. Linux was (and is) entirely unoriginal, replicating even then decades-old and largely outmoded ideas about how to build an operating system (we could have something nice that wouldn't need to reboot with every change; yes, Tanenbaum was right). Git is a great concept but very, very rough at the edges. The UI is a mess (go through all the command line arguments and options and try to make sense of it) and any decent designer wouldn't have needed more than a second to conclude that yes, you do want to make that hash algorithm pluggable.

But, to be fair, everybody is abusing Git as just a faster SVN and that doesn't help either. It was meant, like Mercurial, as a distributed version control system where the primary mode of interaction would be the exchange of deltas between (mutually trusting) parties. I bet not 1% of Git users out there know how the email integration works, or even that you can quickly push a commit to a colleague over ssh or something. I do think that makes hash weaknesses worse, because now there's a single point of attack (Github, Gitlab - is Gitorious still around? You should be able to work without any of these).

BSOD Burgerwatch latest: Do you want fries with that plaintext password?


Re: Surprised they don't use *NIX

I'm a big fan of Nerves (https://nerves-project.org/) in that space. OTA updates without downtime if you want to. Downside is that you need to develop in Erlang or Elixir, upside is you get an extremely reliable and cost efficient system.

But yeah, I guess it's easier to stick with the Windows knowledge you have if your primary business is cooking burgers, not IT.


Re: Surprised they don't use *NIX

I saw a job posting a couple of years ago by a local nuclear supplier asking for PDP-11 and RSX experience. I guess they're a tad more conservative in their hardware than MacD :)

Electron devs bond at Covalence conference: We speak to those mastering the cross-platform tech behind Slack, Visual Code Studio, etc


Funny that a Slack engineer 'fesses up about bad quality software. A quick comparison: HexChat (for the youngsters: a client for a predecessor of Slack called "IRC") uses 57M real, 600M virtual after running for a couple of days. I just launched Slack (that hipsterware to keep you off work at work) - 300M real, 2.4G virtual. And I know that HexChat will stay the same and Slack will probably at least quadruple. That's progress for you. And I'm not even touching on the vast difference in usability - one of them I can extend in a bunch of different programming languages and has nifty things like custom hotkeys.

The whole "multiplatform is hard" thing is bollocks anyway. There are only three platforms left, and I think all of them have decent Qt bindings (at the least; I think Gtk is available as well, and I once did a cross-platform thing in Wx that looked surprisingly better than I anticipated). It's sheer laziness that Slack refuses to make a decent client; that, and - of course - being enterprise-directed, the CIO, not the user, is the customer. CIOs don't sign cheques for optimized code (so in that sense they're rationally right to sling Electron crap at their users). A decent client would be a bit more work and would never impact Slack's bottom line, but nobody who matters complains loudly enough. And I think that _that_ is the real thing wrong in our industry.

Boeing aircraft sales slump to historic lows after 737 Max annus horribilis


Re: Flying

Flying has also never been cheaper (unless you're taking a domestic flight here in Canada, but that's an entirely different topic). Back in the late '70s/early '80s, my parents took us to Spain for the summer vacation. A charter ticket Amsterdam - Alicante cost NLG 600, or EUR 275. In today's money that'd be at least 500-600 Euros, an amount for which Air Canada will fly me not to Alicante (2 hours), but to Toronto (7-8 hours).

Apparently airlines are making the correct trade-offs. People want cheap tickets and will take a bit less legroom in turn. Grumbling is a favorite sport of our species (I guess really the only thing that sets us apart from chimps?), so they're probably right to ignore it and just look at the rising passenger numbers.

Whoooooa, this node is on fire! Forget Ceph, try the forgotten OpenStack storage release 'Crispy'


"Yeah, whenever"

I ran a small ISP in the early noughties that had a similar hardware replacement policy. Once, one of our switches broke down - one of those expensive 24-port 19" Cisco thingies. I realized that a) we didn't use all 24 ports, b) we didn't yet use any of its management facilities beyond basic port monitoring, so c) I yanked the cables from the 12-port no-name switch on my home office desk, hopped in the car, swapped the switches (the no-name one wasn't rack-mountable but luckily had magnetic feet, so I just attached it to the side of the rack enclosure) and went to bed. The next day, I dropped off the Cisco at the office, asking my admin to send it in for a warranty repair, and picked up a fresh no-name switch for home at the local PC store.

A year or so later, I noticed a box with a Cisco brand on it in our office. The admin had forgotten to tell me that the repaired switch had arrived (a mere week or so after the incident) and I had forgotten that our little ISP was still running an important chunk of traffic on a cheap no-name switch...

If I ever get dumb enough to start another company that actually has to spend $$$,$$$ on hardware, I'll make sure I have something better than a "yeah, whenever" replacement policy in place :) (I won't. Ever. The smell of data center in your clothes after another 12-hour shift standing behind a tray-mounted keyboard still makes me sick.)

It's Hipp to be square: What happened when SQLite creator met GitHub


Hardware, operating systems, version control, what's next?

When I started in the industry, everybody made their own chips - then Intel won. At least, everybody still made their own operating systems - then Linux won (at least in my current line of work, SaaS/Cloud style stuff - but that seems where we're all moving). They go under the heading of "solved problem" coupled with "80% is good enough" (or "less is more", if you want to), and that's it.

I've been using tons of version control systems, starting with RCS and ending with Git. Git is the first one to qualify as "good enough", there are usually bigger fish to fry in any development team, so everybody starts using Git (including MS, and I think pretty much for the same reasons).

I wonder when that'll happen with programming languages as well. Most of them are roughly similarly productive, with a large group clustering around "fast for computers, somewhat slower for humans" (C, C++, Java, Golang, Rust, ...) and a large group sitting at "fast for developers, somewhat slower for computers" (PHP, Ruby, Python, these days Elixir - although the latter is only marginally slower for computers). And of course, the front-end lingua franca, Javascript. There's really no differentiator anymore for a team to choose any of these, none of them will - if you're brutally honest - get in the way of success although some of them will make for a more fun time than others (and some of them in the hands of an unskilled team will make a mess much quicker than others).

It's a sign of maturity, in my opinion. People learn to leave their tech religions at home and have a somewhat more realistic look at the sort of tools that make a real difference for a team. And often, they rightfully conclude that "it just does not matter". We've mostly made that decision for hardware, operating systems and clearly version control; I'm curious what's next.

In Rust We Trust: Stob gets behind the latest language craze


Re: Think I'll pass

The problem with that usually starts when I want to refactor - small steps, keep tests running, and intermediate results may not be perfect but I'll fix the edge cases later. In these very strict languages, everything in between simply refuses to compile, I lose my safety net, and I often just give up and leave the bad code in place.

No wonder Bezos wants to move industry into orbit: In space, no one can hear you* scream


No Sci-Fi needed

I am not a celestial mechanics specialist, but as far as I understand it, it'll work with little energy because you're going down, gravity-wise - first from the very low gravity of the asteroid belt to the low gravity of the Moon, and then all you need to do from there is drop the finished product down to Earth.

Compare the massive Saturn V rocket needed to get the Apollo capsule from Earth to Moon, and the tiny little rocket engine that got said capsule back again. I think that for a factory-style setup, you can just do some electromagnetic catapult on the moon and stuff will just drop to Earth for free.

We are absolutely, definitively, completely and utterly out of IPv4 addresses, warns RIPE


Re: The internet will be privatised

There will be a point where obtaining IPv4s and/or setting up "carrier-grade NAT" is more expensive than just enabling IPv6. It's just happening more slowly than the RIRs want.

US games company Blizzard kowtows to Beijing by banning gamer who dared to bring up Hong Kong


" centralized and authoritarian political system"

I read somewhere that the technical term is "fascist" and if you look up the definition, it fits surprisingly well. Seems the differences between "far left" and "far right" are much smaller than I was taught at school :)

Scotiabank slammed for 'muppet-grade security' after internal source code and credentials spill onto open internet


Indeed. Well before the term was coined, I worked on projects in the pharmaceutical industry - heavily regulated code (Good Clinical/Laboratory/Manufacturing Practice, FDA inspections, that kind of fun) - and we could still be light on process and heavy on rapid learning and quick delivery; the acceptance criteria just specified lots of testing, including automation, and printing out test reports that the project lead had to sign and file. I had a similar experience building software for some banks.

However, now that "agile software development" has turned into "Agile for Enterprise" and all sorts of nonsense (SAFe anyone?) - that sort of stuff just makes project managers fight different political battles and QA is still out the door. Quality is a mindset, not a process.

For real this time, get your butt off Python 2: No updates, no nothing after 1 January 2020


Nothing new...

It's not well suited - it's slow at computation and basic string processing, and it relies heavily on libraries to keep up appearances. But it's easy to learn for data scientists (who are usually not full-fledged computer programmers), has a very rich ecosystem in the area, and said data scientists like the workflow that things like Jupyter bring. So it's mostly an ecosystem thing, I guess started by Google (and everybody wants to do what Google does, for some reason). To me, running Python AI/ML in production is equivalent to running Excel sheets in production. Yes, it can be done. No, you probably don't want it.

Alternative languages (like Julia) are trying to get a foothold in the space, personally I think that Common Lisp is much better suited (certainly from a performance standpoint), but "Python in AI/ML" is a bastion that will prove very hard to capture.

Captain, we've detected a disturbance in space-time. It's coming from Earth. Someone audited the Kubernetes source


With the ubiquity of tools like "cloc" that count actual lines of code (in other words, statements; it will also report blank lines and comment lines separately), I'd hope that this is what they're talking about.
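For the uninitiated, here's a toy sketch of what a cloc-style counter does - classify each line as blank, comment-only, or actual code. The function name and the single-prefix comment handling are my own simplifications; the real tool knows dozens of languages.

```python
def classify_lines(source, comment_prefix="//"):
    """Bucket lines the way a cloc-style counter does:
    blank, comment-only, or actual code."""
    counts = {"blank": 0, "comment": 0, "code": 0}
    for line in source.splitlines():
        stripped = line.strip()
        if not stripped:
            counts["blank"] += 1
        elif stripped.startswith(comment_prefix):
            counts["comment"] += 1
        else:
            counts["code"] += 1
    return counts

sample = """package main

// entry point
func main() {
}
"""
print(classify_lines(sample))  # {'blank': 1, 'comment': 1, 'code': 3}
```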

Golang is one of these languages that really likes to pull in lots and lots of library dependencies; I wonder whether all these dependencies, recursively, have been audited as well.

Thunderbolts and lightning very, very frightening as loo shatters, embedding porcelain shards in wall

Thumb Up

Re: Cloud Processing

I have a septic tank. I was a bit of a skeptic at first, but it's actually fine. Also, I don't need to read the rules of your local sewage company, which had, for example, in my previous place a ban on in-sink garbage disposal units ("garburators"). I flush whatever I want and every other year (better safe than sorry) have the thing pumped for less than what I paid for sewage in the city.

All I still need is one of these outhouses with a garden gnome taking a dump while reading a newspaper to mark the spot where I have to dig up my wife's garden when the truck comes. And, apparently, a lightning rod.

Low Barr: Don't give me that crap about security, just put the backdoors in the encryption, roars US Attorney General



Unless you go full one-time pad (with the associated key exchange headaches), I think that these book-based ciphers won't work anymore against a state-sponsored actor. It's just too easy to suck in a digital version of the Library of Congress and try every possibility. Much, much easier than even cracking DES.

(how many books? A billion? With magazines, round it up to four billion? Searching through 2^32 options for stuff that doesn't sound like gibberish is something my laptop can probably do)
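Putting rough numbers on that guess - the trial rate below is invented, and the point is the shape of the search space, not the exact speed:

```python
# Rough numbers for the back-of-envelope above. The trial rate is an
# assumption (one laptop, a million trial decodes per second).
books = 2 ** 32            # "a billion books, with magazines round it up to four"
des_keys = 2 ** 56         # single-DES key space, for comparison
print(des_keys // books)   # DES is still 2^24 = 16,777,216 times larger

trials_per_second = 1_000_000
hours = books / trials_per_second / 3600
print(round(hours, 1))     # about 1.2 hours to try every candidate book
```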

This major internet routing blunder took A WEEK to fix. Why so long? It was IPv6 – and no one really noticed


Re: "If anything, it is a demonstration of how robust IPv6 can be in the face of such mistakes."

The numbers work, of course (I have a /56 and use only one /64 of it - calculate the number of addresses I'm not using!). Say my ISP gets a /32; that means we can have 4,294,967,296 ISPs on the planet. Each of them can hand out 2^(56-32) = 16,777,216 customer /56s (and once you get past that, you nab a new ISP block).

The biggest advantage of this very sparse ("wasteful") allocation strategy is that you can have very small (compared to IPv4) routing tables. "The internet" only needs to know about active ISP prefixes, and ISPs only about what prefix they handed out to customers. I guess that once the designers of IPv6 realized just how mindbogglingly vast the space of 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses is, they saw that this wasteful approach could actually make things much more efficient without risking another address space problem.
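The arithmetic can be checked with Python's stdlib `ipaddress` module - here using the 2001:db8::/32 documentation prefix as a stand-in for one ISP's allocation:

```python
import ipaddress

# The comment's arithmetic, checked with the stdlib. 2001:db8::/32 is the
# IPv6 documentation prefix, standing in here for one ISP's allocation.
isp = ipaddress.ip_network("2001:db8::/32")

possible_isps = 2 ** 32                # distinct /32 prefixes in a 128-bit space
customers_per_isp = 2 ** (56 - 32)     # /56 customer blocks inside one /32
subnets_per_customer = 2 ** (64 - 56)  # /64 subnets inside one /56

print(possible_isps, customers_per_isp, subnets_per_customer)
# 4294967296 16777216 256

first_customer = next(isp.subnets(new_prefix=56))
print(first_customer)                  # 2001:db8::/56
```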

Also, I like all the addresses I have at my disposal. My laptop's WiFi adapter has a bunch active: a link-local one, for low-level techie stuff (fe80::...); a network-local one for things that, well, shouldn't leave my network or be accessible from outside (fd00::...); and a proper publicly addressable one. Plus temporary versions of the latter two to help thwart (ad) tracking - a lot of the 2^64 address space I have is used for that - and a predictable version of each based on my device uid. Initially it all smells horribly over the top, but once you dive into it, it starts making more and more sense.

You're not Boeing to believe this, but... Another deadly 737 Max control bug found


The funny thing is that you can - and probably should - do a development style that gives you increments and quick feedback (I'm careful around the term "agile" these days since the consultants kidnapped and raped it); it's probably gonna be a style of work that's much heavier on initial specifications and testing, etcetera; but most of the principles would apply (I know - I worked in this style in heavily regulated industries which were heavily regulated because people would otherwise die).

The issue is that agile does not mean "deploy crap to production and let your customers be your QA staff". It means that you iterate, learn, and that way develop the code that your business/customers needs - including the level of quality required. Re-read the manifesto for agile software development. Nothing there about "ship fast and fail fast", that's only appropriate in some contexts.

'Bulls%^t! Complete bull$h*t!' Reset the clock on the last time woke Linus Torvalds exploded at a Linux kernel dev


Nothing new...

Sorry - he was a moron when he debated Tanenbaum in a highly, err, "interesting" way and he's a moron now. I still am sad that we have to deal with Linux and not some well-architected kernel lead by competent people. Linux was just an accident - the wrong code at the right place and time, like that other horror show from Finland, MySQL. Both filled a gap where quality did not matter and both have been picked up and cared for by competent people that are not their original creators. In Linux' case, only to be yelled at.

The _real_ gem that Torvalds created, and I grant him that, was Git. Design-wise, then, as in "this is how a distributed version control should work". Alas, the UI is obtrusive, even by Unix command line standards, and I blame that as being a major reason that the distributed aspect of Git is hardly used and we're in a situation that development teams "can't work, Github is down". Still hoping that someone will fix that but with M$ now having $$$$$ interests in a centralized development model, not holding my breath.

How much open source is too much when it's in Microsoft's clutches? Eclipse Foundation boss sounds note of alarm


Nothing new...

It's indeed pragmatism of the "less is more" type. Do I like the default Ubuntu/Gnome experience? Nope. Do I hate it enough to spend oodles of time tweaking it and then again for my other machines? Nope. And the same thing goes for a ton of stuff (systemd is the other big one). People stick with the defaults because they're good enough and the default becomes Linux, then Ubuntu, then Ubuntu with Gnome, and so on.

Also, I don't understand VS Code. Barely beyond Notepad. Spacemacs here; it's the only IDE I can bear.


Free online tax filing? Yeah, that'll soon be illegal thanks to rare US Congressional unity


Nothing new...

Still, as a relatively new inhabitant of the Great White North, I don't find it that much more complicated than the returns I formerly did in the Netherlands. There are differences and it feels slightly more complex, but if you keep some minimal receipts (health, charity/political contributions) and pre-fill from the info that the CRA has already accumulated from your employer and your bank, you're done in an hour. Friendly people on the phone, too, when I was locked out of their on-line services.

(in the Netherlands, I think there's an app for that - your return is prefilled, you log in with government id, and press "yup, that's about right" ;-)).

Hams try to re-carve the amateur radio spectrum in fight over open or encoded transmissions


Nail. Hammer.

I mean, I do want to learn Morse. I wish spark-gap radios were still legal. It's sort of "romantic" if you can talk to someone else using not much more than a battery and some copper wire. Just for the sheer minimalism of it I really want to learn Morse one day.

But it's 2019 now and software defined radio is dirt cheap and promises to make so many things better that we should embrace it. I'm sure there are a bazillion things you can do with digital that nobody has thought of, and a lot of tinkering by radio amateurs has made it into the mainstream. Progress needs to be fully embraced.

Thanks for so eloquently explaining it -- VA3CGR

Cheap as chips: There's no such thing as a free lunch any Moore


Nothing new...

...which is best described as "blazingly fast for its time".

The real problem here is software developers that have stacked abstraction on top of abstraction and always got their butts saved by Moore's law. We had snappy GUIs on hardware that's so laughably primitive, people call it a "microcontroller" these days and refuse to even put it in your watch.

This is purely a software problem.

Redis kills Modules' Commons Clause licensing... and replaces it with one of their own


Re: Wow, they finally got it!

The Affero version of GPLv3 fixes that, no? However, it still wouldn't make a difference - the GPL only requires you to make code available, including your changes; it would not block Amazon from hosting it on their cloud and profiting from it. The GPL always made it very explicit that making money as a third party was not only allowed, it was in a sense even encouraged - as long as you play ball and share your code changes back to the common code base.

Linus Torvalds pulls pin, tosses in grenade: x86 won, forget about Arm in server CPUs, says Linux kernel supremo


Torvalds is wrong. Film at 11

Sometimes I think he's a bit of an idiot savant - he's an OK-ish coder (I was around the early kernel sources a lot; nothing to write home about) and he seems to be doing a decent job of corralling lots of developers into building something that, frankly, should not have been built (I'm with Tanenbaum here ;-)). I get it, though, that El Reg needs to run an article every time he opens his pie hole.

Anyway, I think he's wrong. I'm a "cloud" developer and for the large part it's a "don't care about the processor" kind of business. Cloudy stuff happens in Java, Ruby, Python, Golang, ..., and in our case Elixir - modulo the odd native dependency, all of which I find compile fine for ARM. Things just run on both platforms. If ARM servers can save money at scale, I would be more than happy to spend the one or two days to figure out cross-compilation for our build systems; economics dictate it's a good investment.

I think that that's pretty much the biggest uncertainty - can ARM deliver more transactions per dollar than Intel? Everything else follows from there. I'm doubtful, to be honest, even though I'd love to see some competition, and ARM and RISC-V are pretty much the only games out there next to x86.

Having AI assistants ruling our future lives? That's so sad. Alexa play Despacito


Re: Nothing new...

You mean Facebook, that company that saw its userbase growth slow down markedly and saw its stock drop by over a third after all the scandals? It seems that at least some people care.

Also - Facebook's primary product is selling data about you. Amazon's primary product is selling stuff to you. There's less of a network effect and more alternatives in the latter, and as others have remarked - outright snooping in your homes is a tad worse than Facebook figuring out what you do and showing you ads in response. I use iPhones over Android for the same reason - I'm more the customer and less the product.

I guess everybody makes their own decisions in this area. It's a fine balance between trying to protect your privacy and enjoying some of the advantages of the modern age. YMMV, etc.


Re: Nothing new...

That doesn't mean I can't keep an eye on when a "smart home assistant" is sending data, and how much, and whether that looks like transmitted voice or just the occasional, expected "I'm still alive, any software updates for me?" packets.
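A toy sketch of that kind of traffic analysis - the thresholds, function name, and byte counts below are all invented for illustration, not taken from any real device:

```python
# Toy traffic analysis: given per-minute upstream byte counts for a device,
# flag sustained volume that looks like streaming audio rather than the
# occasional keepalive. All thresholds and samples are made up.
KEEPALIVE_MAX = 2_000     # bytes/minute plausibly explained by pings/updates
SUSTAINED_MINUTES = 3     # a burst must last this long to look like voice

def looks_like_streaming(bytes_per_minute):
    run = 0
    for b in bytes_per_minute:
        run = run + 1 if b > KEEPALIVE_MAX else 0
        if run >= SUSTAINED_MINUTES:
            return True
    return False

idle = [300, 150, 900, 200, 400]             # keepalive-ish chatter
chatty = [300, 80_000, 95_000, 90_000, 200]  # three loud minutes in a row
print(looks_like_streaming(idle))    # False
print(looks_like_streaming(chatty))  # True
```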


Nothing new...

I decided that market trust is what keeps Amazon from snooping. The risks of them being caught are pretty much 100% (traffic analysis, insider leaks) and the upside isn't that big. So I have two Echo dots, and I can control my room thermostat and my main home theatre ("Alexa, turn on Apple TV") and play music in sync over both. That's pretty much the extent of its usefulness, it's a gimmick and if the interwebs go away, nothing bad happens. If Amazon is caught making a boo-boo, my $70 go into the trashbin and I'll have to look for the Harmony remote again to switch the telly on.

One of the reasons I bought a couple was to find out how these things could become genuinely useful, but so far I've come up empty-handed. It's just a toy, pure and simple.

Want to spin up Ubuntu VMs from Windows 10's command line, eh? We'll need to see a Multipass


Re: Why use Hyper-V or even Windows for that matter?

Docker for Windows/Mac (it really doesn't matter which) is super slow. Comparable with WSL - disk access is emulated and that just puts the handbrake on things. I've tried various approaches for developing on my Win10 laptop (because games, Lightroom, and so on), and I settled on running a full-screen Linux under Hyper-V on a second virtual desktop. It's very fast.

I don't think KVM is more efficient than Hyper-V - both are, AFAIK, hypervisors rather than oldskool virtualization engines and performance should be ballpark the same. Whether you boot Win10 or Linux first shouldn't make too much difference because from the point of view of the hypervisor it ends up being just another client OS and Linux-under-Hyper-V-under-Windows and Windows-under-KVM-under-Linux both have a root hypervisor and then two OSes running underneath that. At least, that's what I understood about this whole business when I last checked how stuff worked :-)

Back on topic: I really don't like Hyper-V's management UI, so this will hopefully help.

Apple hardware priced so high that no one wants to buy it? It's 1983 all over again


Re: Limped after Apple II

Oh, that old Audi 100. My dad had one, just when I got my driver's license. A car from the future: affordable (compared to BMW and Mercedes), super quiet, very fuel efficient, very nice handling and, IIRC, couldn't rust because one of the first zinc-treated cars. And back then the styling was from the future as well... Lovely piece of work.

Microsoft to rule the biz chat roost – survey


Re: Slack

IRC, Skype, Hipchat, Slack, Bluejeans, Hangouts, Zoom, ... - it all seems pretty interchangeable to me. My employer switched chat and videoconf providers a couple of times, and it's not a big deal. Which is why I think that Slack is not worth $10b but I'm sure that Goldman-Sachs will ignore my opinion and get the valuation they want anyway for an IPO ;-).

I don't think that running your own IRC server is a solution for any company - your admin hours are probably better spent elsewhere. That doesn't mean that I _like_ being forced to one chat client where IRC gave me integration wherever, however I wanted - in the CLI, in a 24x80 terminal, inside Emacs, under any GUI, and so on. But times go on and sound business reasoning precludes rolling your own if there are so many relatively cheap solutions out there that just work.

Poor people should get slower internet speeds, American ISPs tell FCC


Re: Internet minimum data transfer rates for broadband.

Indeed. I don't know what idiot decided 25/3 qualifies as minimum "broadband". I'm on 5/3 on a local rural wireless ISP and I work from home full time, doing video conferencing while my wife is watching Netflix, so I don't get what all the fuss is about. My guess is that Google/Apple/... sponsored that 25/3 upgrade to make sure that people stuff more things into their clouds.

Pencil manufacturers rejoice: Oz government doesn't like e-voting


You mention the most important thing here a couple of times, but let me stress it:


Observability by ordinary citizens is the key advantage of paper voting. It maintains trust, and maximizes the number of eyeballs verifying the ballot. I volunteered at polling stations and always had huge respect for the people who came in around closing, sat down, and just observed us counting votes. That is democracy in action - for everybody, by everybody, and people who think that this should be taken away and replaced by machines are not to be trusted. Nothing is gained, and a lot is lost.

Sysadmin’s plan to manage system config changes backfires spectacularly


Nothing new...

My thought as well. I used CVS for the same thing in the '90s, and it worked quite well - I hated the guts of SCCS and always tried to stick with RCS instead, which didn't have the anal locking that SCCS sported.

I've never gotten around to unleashing Git on /etc, though (although my "dotfiles" are there and it's very nice). There's enough stuff in there to maybe make it worth a try, although these days Chef/Puppet/Ansible/Salt/... are probably more appropriate.

WLinux brings a custom Windows Subsystem for Linux experience to the Microsoft Store


It's a server, not a client...

"Using X410 as a Windows X client" - nope, X410 is the (display) server.

Anyways, I don't get the fuss either. I have been using the (paid version of the) Xming X11 server; on the Ubuntu WSL side it was just a matter of `export DISPLAY=localhost:0`. The only fiddling I remember was having to drop some preferred fonts (Source Code Pro) in the right spot.

Still, the dealbreaker for me is disk speed. Compiling larger stuff is just not working - if they fix that, I'd seriously think about making Win10 my main OS given that Linux laptop support is still shaky at best.

Alaskan borough dusts off the typewriters after ransomware crims pwn entire network



"120 out of 150 servers" - 150 servers sounds like a tad much for a borough that serves 100,000 people. Couldn't find it in the linked status update either, so I guess someone has been misreading something?

Uptight robots that suddenly beg to stay alive are less likely to be switched off by humans


Nothing new...

Well, psychology is the study of how students behave under lab conditions, no?

Internet engineers tear into United Nations' plan to move us all to IPv6


It's all about the big numbers

The address space is not wasted, it is just vast. At my previous place, I had a fixed IP: a /64 allocated out of the /32 (I think) that my provider got assigned, out of the, say, /16 assigned to Canada. This way, anyone in the world just needs to know about the /16 to route packets for me to "roughly Canada", the (say) Toronto Internet Exchange just needs to know which prefix 16 bits further down belongs to my old provider, etcetera. It keeps routing tables down to a manageable number of entries, and you can still have tons and tons of top-level ISPs.

The /64 in my house got subdivided as well: privacy IPv6 addresses, MAC-based IPv6 addresses, and then a couple of DHCP ranges that, for example, routed between my regular network and a bunch of Docker Swarm networks on Raspberry Pis (don't ask ;-)). I wasn't using my whole address space, but I certainly had another subdivision going on.

Having 340282366920938463463374607431768211456 addresses makes large-scale routing really efficient by purposely going sparse. It's a bit of a twist from IPv4, but it makes a ton of sense. 128 bits is so mindbogglingly big (picture all of space and a sign "you are here") that it enables these sorts of strategies while still being future proof, even though the allocation schemes seem wasteful at first sight.

(Glad you asked. I'm on a rural PtP LTE connection now, behind five hundred layers of "Carrier Grade NAT". Which is how we will all end up if we stay on IPv4, with no option to, say, run a webserver on your home router.)

Admin needed server fast, skipped factory config … then bricked it


I blew up an Alphastation's PS by using it as a luggable...

It was somewhere mid '90s, and my employer sent me to Redmond from .NL where I lived to join some sort of Microsoft hackathon. Laptops were woefully underpowered back then, and I had this sweet AlphaStation from work on which I just installed OSF/1 and a nice development environment, so I decided to (carefully) toss it in my suitcase. Also, it felt just right to bring a Unix machine into the lion's den ;-)

At Microsoft, they quickly supplied me with a keyboard and monitor, and a network cable ("this is a T3") and a power cable later, I was ready to start work. If only it had turned on.

Needless to say, putting the power supply switch to 110 did the trick.

Needless to say, not putting the power supply switch back to 220 when I came home two weeks later did the trick of letting all the smoke escape from it. Never felt so silly.

Translating Facebook's latest 'Hard Questions' PR spin – The Reg edit


I'd pay...

I'd even pay a premium over their ARPU for my country, which would be very easy money - they would have to _not_ do stuff in order to take my money: not feed it into their big-data machinery, not sell it - directly or indirectly - to advertisers, etcetera. I still don't understand why that is not an option - after all, they know how to collect money from their current customers, so why not from me?

Sysadmin unplugged wrong server, ran away, hoped nobody noticed


Indeed. When I ran my own ISP, I found out the hard way that cable management wasn't one of my strengths, so I always unplugged everything at the server side in our racks and let the cables dangle. A replacement, new server, or whatever was bound to need those connectors anyway at some point in the future.

Tech bribes: What's the WORST one you've ever been offered?


Borland, in the '80s, with train tickets to Paris

To be honest, I was a student, so the bar for bribing was low. Anyway, as students writing for one of those monthly paper tomes full of computer stuff with "PC" in the title (ah, the good old days), my buddy and I were not only offered a complimentary copy of pretty much everything in Borland's product catalogue so we could review it (which seemed reasonable), but also an invitation to hop on a train and come to Paris so - well, I forget the very thin veneer they put on top of the sweetener and called "the reason". I think it had to do with installation assistance or being able to talk to some head honcho over there.

Needless to say, we took the train ride and the two nights in a business-class hotel (which translates to "very posh" for two poor students), and wrote a glowing review of Turbo C and Quattro, or whatever it was we ended up putting in the magazine. Did I mention we were poor students? That should make it OK, no? ;-)

18.04 beta is as good a time as any to see which Ubuntu flavour tickles your Budgie, MATE


I'm using less and less of all this anyway...

Typing this on a MacBook Pro that I rage-upgraded to Ubuntu 17.10 after giving up on Mac OS X (specifically High Sierra). It turns out that as a developer, the list of apps I need is quite limited: a browser, a really good terminal program, Emacs and IntelliJ/IDEA. So switching for me has become simpler over the years, not harder. For mail and office stuff, Linux is bad indeed, and I can't even blame the City of Munich for dropping it. Again, as a developer, a bit of Google Docs is all I need; YMMV.

Funnily enough, for me the whole window management stuff is actually the critical bit. I'm switching all the time, need all the screen real estate there is for my IDEs, and as such I need something that lets me run full-screen, multiple virtual desktops, and keyboard-based switching of it all. With _my_ keyboard shortcuts, thank you ;-). Ubuntu (the base edition) does a pretty nice job there, so I'm happy.

Ubuntu 17.10 pulled: Linux OS knackers laptop BIOSes, Intel kernel driver fingered


Two things: 17.10 is a release, but not an LTS one; users who want to feel safe should install an Ubuntu LTS version like 16.04. Furthermore, a Raspberry Pi and a cheap chip clip can be cobbled together into an in-system flash writer; I guess if you want to live on the bleeding edge with your Linux distribution, you might be expected to resort to such extreme measures once in a blue moon ;-)

(typed from 17.10. On a MacBook)

Bitcoin price soars amid technical troubles for exchanges


Nothing new...

"PwC said it has begun accepting Bitcoin as payment for its advisory services"

Well... given that BTC is mostly useful for crooks, that figures...

(yeah, by the time I hit post I'm sure I'm not the only one to make that bad joke)


