* Posts by dgeb

35 publicly visible posts • joined 29 Aug 2018

Research finds electric cars are silent but violent for pedestrians

dgeb

It's not clear from the article whether the numbers are corrected for the number of miles driven *in that environment* - I'd expect that EVs are relatively over-represented in urban miles, and that urban miles are riskier, so a simple pedestrian-casualties-per-100-million-miles comparison would be likely to find an EV:risk correlation even without any causal connection.

(TFA says that EVs were not found to be more dangerous than ICE in rural settings, which hints that they did look at this, but it may just be comparing absolute counts in each environment/powertrain against total mileage/powertrain.)
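A quick synthetic sketch of that confounding effect (all numbers invented purely for illustration):

# Same per-mile risk for EV and ICE within each setting, but EVs
# skew towards (riskier) urban miles - figures invented for illustration.
risk = {"urban": 5.0, "rural": 1.0}  # casualties per 100M miles, either powertrain
miles = {"EV":  {"urban": 8, "rural": 2},   # units of 100M miles
         "ICE": {"urban": 4, "rural": 6}}

for pt, m in miles.items():
    rate = sum(m[env] * risk[env] for env in m) / sum(m.values())
    print(pt, rate)  # EV 4.2 vs ICE 2.6 - a "correlation" with no causal difference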

Ofcom proposes ban on UK telcos making 'inflation-linked' price hikes mid-contract

dgeb

I think clarity to the purchaser is the important part - I see no issue with a 2-yr contract advertised as £20/mo for 1yr then £25/mo, if both numbers are equally prominent.

The issue for me is when that is obfuscated, e.g. buried in T&Cs such that (a) many people never notice it, and (b) even if you did, you are now far enough through the process that you don’t reevaluate all the other options again fully. (See drip pricing).

Inflation linked price rises during the contract term should never have been allowed in the first place - it shifts all of the burden of the contract to the consumer, and all of the benefit to the supplier.

Tesla is looking for people to build '1st of its kind Data Centers'

dgeb

Re: Rather a grab-bag of degrees

I wouldn't expect the 'Sr. Engineering Program Manager' to be personally doing any of those things - but as backgrounds, they all involve a technical role in building a complex physical thing, they often work together, and they are all disciplines this person will likely be managing in this role. When you hire a technical rather than administrative manager, one of the key benefits you're looking for is that they understand and can speak the language of the people they manage, after all.

AMD's 128-core Epycs could spell trouble for Ampere Computing

dgeb

Re: Threads

SMT doesn't really work like that - it's 128 × clock rate, and the second thread per core just helps to keep each core busy. IPC also isn't comparable between totally different architectures, so simple multiplication isn't a useful basis for comparison.

The overall performance boost from SMT varies tremendously with workload, but it would be unusual to exceed 30%, and on a pure integer benchmark it is probably well approximated by 0.
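A rough sketch of the arithmetic (all figures illustrative, not vendor numbers):

# Crude throughput model: cores x clock x IPC, nudged up by SMT.
def throughput(cores, clock_ghz, ipc, smt_uplift=0.0):
    return cores * clock_ghz * ipc * (1 + smt_uplift)

base = throughput(128, 3.0, 2.0)                  # 128 cores, no SMT benefit
smt = throughput(128, 3.0, 2.0, smt_uplift=0.3)   # ~30% is already a high-end case
print(smt / base)  # 1.3 - 256 threads is nowhere near 2x the work of 128 cores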

Also you absolutely can run a whole chip in a turbo mode for an extended period, if the power and cooling can support it. Max all-core frequency is likely to be significantly below max single-core, but it is almost certainly above the base clock.

For password protection, dump LastPass for open source Bitwarden

dgeb

That's just a label so that *you* can identify which code to use - if you import with a QR code it will usually prepopulate that, but you're free to enter the PSK manually and call it whatever you like.
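For the avoidance of doubt, the label never enters the calculation - a minimal RFC 6238 sketch in stdlib Python (the secret below is a made-up example):

import base64, hashlib, hmac, struct, time

def totp(secret_b32, digits=6, step=30):
    key = base64.b32decode(secret_b32.upper().replace(" ", ""))
    counter = struct.pack(">Q", int(time.time()) // step)  # 30s time window
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                # dynamic truncation
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # only the secret and the clock matter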

Non-binary DDR5 is finally coming to save your wallet

dgeb

Re: Reading betwixt the lines

I think the significant thing is proliferation of memory channels - somewhat discussed in the article but not fully tied in to the price step point you quoted.

With a 2S Genoa EPYC system, you want exactly 24 DIMMs, so the only lever you can pull is per-module capacity: doubling means spending (and getting) 2x. Earlier systems with fewer channels but more DIMMs per channel let you add a second bank of half-size modules at a negligible performance penalty (especially attractive when the alternative is stepping past the largest RDIMM to LRDIMM), whereas leaving memory channels unbalanced costs real performance.
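To put numbers on the 24-DIMM point (a sketch using standard DDR5 module capacities):

slots = 24  # 2S Genoa: 12 channels/socket, 1 DIMM/channel
binary = [32, 64, 128]               # GB per module
non_binary = [32, 48, 64, 96, 128]   # 24/48/96GB parts fill the 2x gaps
print([s * slots for s in binary])       # [768, 1536, 3072] - 2x jumps only
print([s * slots for s in non_binary])   # [768, 1152, 1536, 2304, 3072]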

It's long been the case that most systems do not need to be maxed out with RAM. As with the desktops you mentioned, memory per core for general-purpose compute has been relatively stable in recent times - but as core counts continue to climb rapidly, the total memory demand in those boxes still goes up, and memory-hungry applications have continued to get larger and more numerous (note the trend over time in the most memory-heavy ratios on e.g. AWS instance types).

SolarWinds reaches $26m settlement with shareholders, expects SEC action

dgeb

It's likely that not all the shareholders will receive anything from the settlement - the major ones with control of board seats are unlikely to have a case. So it isn't just an enforced dividend, it's a net transfer from the owners who should have known the risk to those allegedly misled about it.

Can gamers teach us anything about datacenter cooling? Lenovo seems to think so

dgeb

Re: "you'd run out of power before your rack is half full"

Rack heights from 42U to 48U are normal. Taller than that exist but are rare for server racks.

dgeb

Re: Isn't it statingthe obvious that liquid cooling is more effective than air cooling?

Integrated liquid cooling absolutely is a thing, but it isn't particularly common for a number of reasons:

- Most data centres have a mix of equipment: specialised systems from vendors, a few generations of server hardware, networking gear, power handling equipment etc. Unless all of it supports liquid cooling, you need to implement an effective air cooling system across most of the datafloor anyway. At that point, adding liquid cooling is just extra expense.

- Risky failure modes - liquid cooling everything means an awful lot of plumbing and manifolds, and a lot of connections to all the systems. Leaks are bad news. Keeping all the lines air free is also demanding.

- Flow management - it's hard for an individual system to manage its cooling demand, so you're likely to be over-cooling those systems not running at full capacity, which will reduce efficiency.

- Air handling efficiency can be improved significantly with simple implementations like enclosing hot/cold aisles.

- Systems often contain a bunch of components which passively use the airflow, but don't demand enough power to justify a liquid cooled variant - like RAID cards, or NICs, or SAS expanders, or... - so it's difficult or expensive to make those general purpose systems work. These aren't considerations on desktop-type systems because there's still substantial passive cooling capacity to tolerate it.

- Datacentre operators often are in a position of supporting what their users want to run, rather than dictating (even for in-house DCs, but doubly so for colocation providers).

Web trust dies in darkness: Hidden Certificate Authorities undermine public crypto infrastructure

dgeb

Re: self-signed CA

It isn't a defining property of a root CA that it sign intermediate certificates - although public CAs always will, and most well-managed internal ones too, so that the long-term trust can rest in an offline key.

A root CA is just a self-signed cert with the basicConstraints extension set to CA:TRUE. That is still true even if it has never actually been used to sign any other certificate.
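A minimal sketch using Python's cryptography package (names and lifetime invented):

import datetime
from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

key = ec.generate_private_key(ec.SECP256R1())
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Root CA")])
cert = (x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=3650))
        .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
        .sign(key, hashes.SHA256()))
# That's a root CA, whether or not it ever signs anything else.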

Computer shuts down when foreman leaves the room: Ghost in the machine? Or an all-too-human bit of silliness?

dgeb

As mentioned up-thread, we do have lighting sockets in the UK, 5A round-pin plugs (like theatre lights) which are attached to normal light switches. Absolutely ideal if you have a significant number of floor/table lamps. Not all that common in modern properties, though.

The different socket design is the protection against attempting to use them for general purposes.

AWS Lambda was already serverless, now it can be x86-less too

dgeb

Re: Serverless?

Serverless benefits the user over IaaS because it actually delivers the cloud premise of freeing them from managing OS instances. IME, that's where most of the maintenance effort goes (more than the hardware/firmware/hypervisor aspect of servers that IaaS covers, in fact).

It's generally more effort to adopt, and less interchangeable with other deployment types, but it certainly makes a difference to the amount of non-application operations effort required. It's probably what I'd choose if starting from scratch, in fact.

HPE campaigns against 'cloud first' push in UK public sector

dgeb

Re: The sad calls I take from HPE that put a smile on my face

I think with vendor SANs it's more the software that needs support - the hardware (whether a vendor designed SAN or an in-house storage platform) should have enough redundancy and modularity that spares work well there too in a deployment of any significant size (in that a spare disk shelf, controller, power supply, a few cables, and a handful of drives are all you need).

If a vendor is involved, of course they probably will insist on having hardware support to get the software support, so you may be stuck with it.

(This is also sort of an argument for having a few storage platforms instead of a single massive SAN - the cost of spares is spread across more instances - but you also mitigate the eggs-in-one-basket problem, and are less likely to be running into edge cases than when near the envelope of the platform's capability.)

dgeb

Re: The sad calls I take from HPE that put a smile on my face

If those important servers are also unique in hardware terms then that makes sense (this is what I was getting at with n+1 being burdensome above). If they're otherwise the same as a chunk of the rest of the fleet, I'd rather have a spare in inventory - which can be swapped into service in under an hour just by either swapping disks or configuring the HBA to match. If the thing that broke is under even basic warranty coverage, it can be sent off for repair/replacement to replenish the spares inventory.

We have a hardware support contract on our tape library - that has value to me because it is a fraction of the cost of buying a whole second one as a spare.

dgeb

Re: The sad calls I take from HPE that put a smile on my face

I’ve never really understood the point of support contracts on commodity servers. It’s both cheaper and quicker to just buy a [few] spares. (It’s a slightly different matter for one-off big expensive things, where they are both complex and n+1 is a heavy cost burden, of course).

That’s especially true if moving to cloud is on the cards, as that already requires you to have [re]architected stuff to accommodate instances failing and to avoid strict dependencies on any single bit of hardware.

If your storage admin is a bit excitable today, be kind: 45TB LTO-9 tape media and drives just debuted

dgeb

Re: Old IT guy, niche?

If the data volume is low enough, a disk drive in a removable caddy is a good solution. Even rotating several such around will be cheaper than buying a tape drive.

With larger amounts of data, physically transporting a case of disks is (a) harder and (b) riskier in terms of damage than a case of tapes. Tape also works well when each generation has more cartridges than drives - at that point, the cost of keeping multiple generations quickly comes down heavily in tape's favour.

Speed, though: sequential throughput on a tape drive is good. It's *much* faster than a single HDD spindle. You probably wouldn't fill an 8TB HDD in the same amount of time as the 18TB tape.
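Rough fill-time arithmetic (illustrative figures, not exact drive specs):

def hours(tb, mbps):
    return tb * 1e6 / mbps / 3600

print(hours(18, 400))  # LTO-9 native at ~400MB/s: ~12.5h
print(hours(8, 160))   # 8TB HDD at ~160MB/s sustained average: ~13.9h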

It's also especially beneficial when drives and hosts are many:many, as any single host can then back up/restore in a short time window, even though a full backup/restore of the entire environment might take a couple of days.

What is your greatest weakness? The definitive list of the many kinds of interviewer you will meet in Hell

dgeb

When interviewing, I specifically try to elicit strong opinions - holding *and justifying* a strong opinion is a good indicator that the person has both actually worked with the tech in question, and cares about it. It's also a natural starting point to discuss their previous work, because the explanation almost invariably revolves around that.

If it crosses the line into being arrogantly opinionated (e.g. not rationally justified by the evidence presented, unwillingness to concede that there should ever be a different conclusion, or an opinion copied from someone with whom they assume they should agree) then that's a cause for concern, but by far the more common issue is candidates seeming not to care about their craft.

'There was no one driving that vehicle': Texas cops suspect Autopilot involved after two men killed in Tesla crash

dgeb

Re: Musk has tweeted to say recovered logs

"in 2019 there were 36,120 deaths on the road in the 'merica. (wikipedia). That is more than 100 every day"

Whilst it is sadly very close to 100 per day, 36,120 is fewer than 36,500 (100 × 365) - slightly under 99 per day, not more than 100.

Facebook job ads algorithm still discriminates on gender, LinkedIn not so much

dgeb

Re: Snakeoil Not Research

Total number of ads is controlled for - they're comparing the relative frequencies of two different adverts, not asserting that they should both be 50:50 - and they aren't just looking at two advertisers, they've taken several pairs of similar jobs for different industries. Three specific pairs are described in the article, and the researchers specifically highlight that they want more data, but that gathering it is costly.

Netflix reveals massive migration to new mix of microservices, asynchronous workflows and serverless functions

dgeb

Re: Stateless and computational-intensive

Output is very much not the same thing as state - of course you want to capture the output.

Stateless just means that the output you get depends only on the input you supplied, and not on any /state/ that the service maintained internally (which in turn means that using more instances is much easier, since there is no separate setup to do for each one, and no need to route requests consistently to the same one).
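A minimal sketch of the distinction (hypothetical handlers, nothing to do with Netflix's actual code):

counter = 0

def stateful(x):
    global counter
    counter += 1        # hidden internal state: identical requests get
    return x + counter  # different answers, and instances diverge

def stateless(x):
    return x * 2  # depends only on the input - any instance, any request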

Must 'completely free' mean 'hard to install'? Newbie gripe sparks some soul-searching among Debian community

dgeb

Re: ...not even a separate partition for /home.

> Something else is a bit daunting if you don't know what you're doing

I agree that it is, but it is also something you probably shouldn't do *unless* you know what you are doing.

The basic options of use whole disk/largest free space, plus the standard Debian 'recipes' of

> All files in one partition (recommended for new users)

> Separate /home partition

> Separate /home, /var, and /tmp partitions

already give you more flexibility than the interactive Windows installer, so I think it is unfair to criticise the advanced 'something else' option for being harder to use; there just isn't a reasonable equivalent to compare.

The default one-size-fits-all of using a single partition for everything is also what the Windows installer does, and it's a sensible default because you can't make a good automatic guess at the relative sizing of stuff. Maybe you've got a webserver producing copious logs, maybe you have lots of large media files, maybe you have a pipeline that involves lots of intermediate steps - in the Linux example, biasing disk usage towards /var, /home and /tmp respectively. If you do one of the three then you're going to need that volume to be much bigger than the other two, but the installer can't possibly guess which in advance. Producing all three at 1/3 the size each is worse for any of these use cases than just using a single volume for the lot.

(I'm ignoring custom Windows images that include extra disk layout information, since there is an equivalent for the Debian Installer - preseed.cfg - but neither has been discussed.)
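For completeness, a hypothetical preseed fragment selecting one of those recipes - check the names against the Debian preseed docs before relying on it:

d-i partman-auto/method string regular
d-i partman-auto/choose_recipe select atomic
# 'atomic' = all files in one partition; 'home'/'multi' for the other recipes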

Windows Product Activation – or just how many numbers we could get a user to tell us down the telephone

dgeb

It's unfamiliar ground for me to be on this side of the cloud debate, but:

No, it's like a bus service which leases vehicles under a contract which includes servicing and maintenance. I believe that isn't common practice for buses in the UK, but it is for many other vehicle types (such as wet-leasing for an airline, but also, I understand, HGVs).

AWS is fed up with tech that wasn’t built for clouds because it has a big 'blast radius' when things go awry

dgeb

Re: Don't most datacenters have separate battery rooms?

At a previous site, one of the DC operators we use did have a fire in the battery room - so I am certainly grateful for them being physically separate.

I've never seen a rack UPS be as dramatic as that, but I have seen more than one blow a whole row of transistors in a big bang/flash/escape of magic smoke, and on one occasion it also tripped the downstream ATS (i.e., its safety cutoff engaged, rather than transferring load to raw mains). It does make me nervous, and I would definitely prefer to have multiple mid-sized UPSes across a few small rooms, doing roughly row-level power.

AMD, Arm, non-Intel servers soar as overall market stalls

dgeb

Smaller servers

I’m not sure I buy the hypothesis about a connection between small server rooms and small servers - small server rooms obviously have fewer servers, but there is no reason to suppose they are individually cheaper. If anything, I would expect them to come in *higher* on the IDC methodology:

First, it counts software sold with new servers, so a lot of small customers are going to be adding to it with things like Windows licences, where larger customers would be more likely to have separate agreements with software vendors, and more likely to have a significant Linux fleet.

Second, it counts parts sold with the server, so people who add memory/storage/etc separately are under-represented. That’s more likely to happen with mid-size or larger businesses, where the available savings are much larger and the benefits of standardisation greater.

Third, small environments are much more likely to use DAS, which means higher costs for a single server, and thus more in a higher bracket, rather than shared storage.

Related to the above, I would also say that small companies are far more likely to have gone fully remote (hence in the category of not being there to set kit up), and to do most setup manually, whereas a bigger outfit would have maintained a minimally-staffed datacentre and be able to do almost the entire setup automatically (or at least remotely) after physically racking kit.

One thing which might well have skewed the market downwards is that 1S EPYC covers a lot of use cases previously in the 2S bracket, but the total platform cost is lower - and the stated AMD growth would be consistent with this.

Dell joins the 'fast object storage revolution'

dgeb

Re: What a load of Crap

Scalability in that you add nodes in a single namespace to scale storage capacity out, rather than the relatively complex mechanisms to make a single block store really big.

Talking about cloud-native application architecture also seems fair, as this is about exposing storage in the same way as public cloud platforms do.
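To illustrate what that means in practice - the same S3 client code pointed at a hypothetical on-prem endpoint (URL and bucket invented):

import boto3

s3 = boto3.client("s3", endpoint_url="https://objectstore.internal.example")
s3.put_object(Bucket="app-data", Key="frames/0001.bin", Body=b"...")
print(s3.get_object(Bucket="app-data", Key="frames/0001.bin")["Body"].read())
# identical application code on-prem and in AWS; only the endpoint changes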

It is less clear how many on-prem deployments would really benefit from that, though - it makes sense if you are migrating an application towards the cloud, or in some relatively niche things like HPC where there is a need for a very large+fast single pool of storage - but I don’t see a more general use case that is compelling.

UK's Cheshire Police tenders for whole new ERP system after Oracle Fusion went live with 'significant deficiency'

dgeb

Re: 20 years

Cheshire police total budget for 19/20: £214.7m (https://www.cheshire.police.uk/SysSiteAssets/media/downloads/force-content/cheshire/careers/we-care-strategic-plan.pdf)

Constables: 2053 (https://en.wikipedia.org/wiki/Cheshire_Constabulary)

Civilian support staff: 1296 (2018 figure - https://www.cheshire-live.co.uk/news/chester-cheshire-news/cheshire-lost-more-police-officers-15754250)

That means an average total cost of ~£64k per head across the entire force.

Even if we assume that these hypothetical clerical workers would cost the same (hard to believe, IMO), £10m/yr would pay for 156.
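Showing the working (Python as a calculator, using the figures linked above):

heads = 2053 + 1296           # constables + civilian staff
per_head = 214.7e6 / heads    # total budget / headcount
print(per_head)               # ~64,100 -> "~£64k per head"
print(10e6 / per_head)        # ~156 clerical workers for £10m/yr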

If you have better, or more relevant numbers than this, please share them.

University of Cambridge to decommission its homegrown email service Hermes in favour of Microsoft Exchange Online

dgeb

Re: Single point of failure

> A 4-year uni has to deal with a 25% user turnover rate every single year.

Another poster already pointed out that the existence of staff means that it isn't 25% - but regardless, the problem with this argument is that if you have that turnover, you already have to have tooling suitable for handling it (and you have auth for other systems too, which would still exist). The benefit comes when you have a much lower turnover and thus cannot justify the effort of making the tools for yourself, but can benefit from sharing the cloud-y ones.

> Planning out an in-house server farm that accounts for all that, plus paying the, say minimum 2 technicians, paid at say £45,000+ each? Now you're talking real money over the life span of the server farm.

But PPSW at least isn't going anywhere - so you aren't saving the staff costs, the scope of the job is just somewhat smaller.

>So unlimited bandwidth

Hermes is on the CUDN - so the local users have no WAN bandwidth impact anyway. Moving it away *creates* a bandwidth burden that did not exist; it certainly doesn't solve one.

> low to no initial investment

Given it already exists, there is no initial investment either way

> no continuing upgrade fees...

Maybe I misunderstand what upgrades you are referring to, but since it is an OSS stack, there aren't any software licence upgrade fees.

> For static requirements, static loads or at least foreseeable, in-house has benefits. But for a public system where things can change, and change into the unknown, is them choosing cloud really such a massively bad idea?

Actually I mostly agree with you, except that this seems like a pretty stable, predictable load, so it fits the former category rather than the latter.

Dell ‘exploring’ VMware spin-off, insists they must keep their special relationship

dgeb

Re: Transfer speeds

vmkfstools to clone virtual disks to/from NFS storage is the fastest way without paid storage vMotion features - that'll do multi-gigabit quite happily. Next best is running rsync as a daemon from inetd, which will do 1Gbps (IME 1Gbps per stream when reading, but only 1Gbps per datastore when writing).
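A sketch of the clone invocation (paths hypothetical; check the flags against your ESXi version):

vmkfstools -i /vmfs/volumes/datastore1/vm/vm.vmdk \
    /vmfs/volumes/nfs01/vm/vm.vmdk -d thin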

Someone got so fed up with GE fridge DRM – yes, fridge DRM – they made a whole website on how to bypass it

dgeb

Re: Next great idea

I’m not going to disagree about it being commonplace, but that situation is when it makes the *least* sense to use a subscription model.

When you have a sizeable fleet of devices in the same model range, simply keeping a stock of spares adds relatively little overhead. Running them at a high duty cycle makes that even more true, as you're going to reach the replacement windows for all components on a regular basis.

It’s when you have a single large device that is expensive (from your perspective), and/or used lightly enough that something like an imaging unit replacement is an unexpected expense that it can make sense to have a subscription plan for service and consumables - you are in fact passing on the risk to an organisation that meets the ‘sizeable fleet’ description above.

Oh crap: UK's digital overlords moot new rules to help telcos lay fibre in sewer pipes

dgeb

Fibre in the sewers

Is that not literally what they are for?

EU wouldn't! Uncle Sam brandishes 'up to 100%' tariffs over France's Digital Services Tax

dgeb

Re: How about politicians make simpler tax codes / laws?

"..when prices are raised to cover the tax"

If the market will bear the increased price, they should have been charging it already. To the end consumer, what matters is the gross cost - consider how non-VAT-registered micro-traders compete on price, or how many prices stayed the same when the UK VAT rate went down from 17.5% to 15% and then back up to 20%.

So a sales tax is overwhelmingly coming out of potential profit margin.

I can't believe you've done this: Cisco.com asks visitors to explain to IT why they have broken the website

dgeb

Re: Sadly, this wording is still common

> First, contacting the webmaster to inform them that the error happened and what time is pointless -- that information is already in the webserver logs.

It isn't necessarily pointless - if the person reporting it does include context about what they were doing/trying to do, the time at which it occurred is definitely useful to correlate that report to the logs about what happened server-side.

Agree that just reporting that an error occurred and the time is completely pointless, however.

>Second, 500 is an internal server error. The odds are vanishingly tiny that it was triggered by anything a user has done.

For a web application of any complexity, I would say that it is actually pretty likely that it was triggered by something specific to the user's action. Some particular edge-case or combination of events that no-one thought to create a test for. If you get a 500 in response to *any request at all* then almost certainly not.

None of that is to say that it isn't a pretty terribly written error message, of course.
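A more useful pattern is to log a correlation ID server-side and show it to the user, so a report can actually be matched to the logs. A sketch (Flask used purely illustratively):

import logging, uuid
from flask import Flask

app = Flask(__name__)

@app.errorhandler(500)
def internal_error(exc):
    ref = uuid.uuid4().hex[:8]
    logging.error("error ref=%s: %s", ref, exc)  # lands in the server logs
    return (f"Something went wrong (ref {ref}). Please quote this reference "
            "and what you were trying to do.", 500)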

If your broadband bill is too high consider moving to Idaho, they get the internet for free

dgeb

Water, electricity and gas (and, within limits, sewage) are fungible, though. It doesn’t matter which plant treated a particular drop of water, provided you meter the supply and consumption so that each provider puts in to the system the same amount their customers take out.

Seagate HAMRs out a roadmap for future hard drive recording tech

dgeb

Re: Awesome!

Well - for a start - RAID5 *does* distribute parity across drives, but the main benefit of doing so is random write performance (and to a lesser extent read); rebuilds require reading the whole rest of the stripe anyway, so they only gain from the improvements to concurrent access.

Multiple parity (substantially) improves your odds of a successful rebuild - but it certainly doesn't make it faster - it can only be slower than an equivalent single-parity configuration as there is more computational overhead.

Ultimately, any RAID rebuild requires reading data equivalent to the capacity of the entire (sub)unit being rebuilt, and writing out the entire rebuilt drive - possibly whilst also satisfying routine IOs.
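Back-of-envelope on that floor (illustrative figures, not a specific array):

drives, tb, mbps = 8, 18, 180  # e.g. 8x18TB at ~180MB/s sustained
print((drives - 1) * tb, "TB read,", tb, "TB written")  # 126TB read, 18TB written
print(tb * 1e6 / mbps / 3600)  # ~27.8h best case - extra parity only adds compute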

We've found another problem with IPv6: It's sparked a punch-up between top networks

dgeb

Re: Clarification please

I think the article is missing the word "imbalance" - the point is that settlement-free peering only makes sense if both parties are providing an essentially equivalent service to one another.

That equivalence is usually assessed in terms of transit provided to networks other than the two that are peering (on the basis that I'm not doing you a favour by accepting traffic destined for my own network).

So a (non-telco) business network, even a multi-homed one, is essentially never providing any transit to its peers, and therefore pays its provider the entire costs of its connections; a mid-size ISP may be providing some transit (to its multi-homed customers, such as smaller ISPs) and may have some settlement-free peering with other similar networks, and some partially or fully-paid peering with bigger, more connected networks.