* Posts by Justthefacts

210 posts • joined 22 May 2014


Aviation regulator outlines fixes that will get the 737 MAX flying again


Re: Hmm.

Sure, only a few passengers will bother checking their actual flight. But there is just going to be endless publicity, month after month, year after year, as to “which airlines are flying the dangerous plane”. Airlines just won’t be able to stay in business with that label around their neck.

And why would they try? Not only could they buy Airbus, at perfectly competitive prices, but there is now the largest glut in history of secondhand planes in storage with barely delivery mileage on them. Why wouldn’t one just pick up a handful of those for cents on the dollar?

To be fair, Ryanair have a lot of MAXs already, and will continue using them.

Their big advertising selling point is “we are so horrifically crap we must be really cheap”. To the point that nobody notices they aren’t even that cheap. But there’s really only room for one airline with that shtick.

Someone made an AI that predicted gender from email addresses, usernames. It went about as well as expected


Re: @gnasher729 - Define gender!

In context.....

Gender is a (rather imperfect) correlate of consumer preferences. I’m assuming this tool is intended as an input for demographically targeted ads and marketing. Gender, age, income level, and education are the usual tickboxes. Google and Facebook “know” your gender, for the purposes of their ads. They certainly don’t know your biological sex; how would they? But not only would nobody care, as a marketer you want to know gender rather than sex. It’s not what somebody has in their trousers, it’s which kind of trousers they buy that matters.

There is nothing inherently sexist about this as a tool to generate inputs for a marketing algorithm - although this is clearly very bad at it.

If I sell men’s watches, I would like to target my advertising to male customers, please. I’m perfectly aware that some women buy men’s watches. And some trans men, and some trans women. But given that I’m charged by cost per impression, one way or another, I’ll just divide the population arbitrarily into two, and roughly double my advertising effectiveness, thanks. It’s neither a political statement, nor an imposition of my value system.

However, there very much can be something sexist about the selections that the company doing the marketing uses. E.g. a company selling DIY tools choosing to target men only. That’s almost certainly widespread, and I’m not aware of anyone either checking it or being hauled over the coals for it. The difference from the men’s watches example is that the DIY case rests on a false belief about what is a statistical correlate, aka a stereotype. Failing to be given the chance to buy angle grinders isn’t a real problem in life. However, there are things that could be - e.g. political adverts that run different messages to men and women, or ads offering business franchising opportunities to men only.

Predictably grim Q2 for mobe sales, but iPhone SE proves pretty moreish as gateway drug for Android defectors


iPhones keep the lights on

I always hear how “iPhones are pointless showoff things for people too stupid, etc.....”

There’s an angle that I believe technical folk miss. I run a small business, with a website that sells moderately high-value physical retail goods. Analytics tells me:

40% of my website traffic comes from iOS devices.

80% of purchases are made on iOS devices.

85%+ of *transaction value* comes from iOS devices.

Traffic from non-iOS devices actually *loses* me a bit of money overall, because the ad cost-per-click exceeds the revenue it generates. Although I still need to service that traffic adequately, because a household considering a purchase may contain a variety of devices, or someone might surf on their work PC and buy at home on their iPad. That I can’t tell.

My point is: the relevant number for IT people is the 40%... but business owners care about the *85%*.

This is why people are so fussed about supporting whatever stupid features Cupertino deem desirable. You *have* to give iOS users a perfect user experience to stay in business. Supporting everyone else, like Android, is the electronic equivalent of not putting the phone down because you’re too polite.
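A back-of-envelope sketch of why the 85% matters more than the 40%, using the rough percentages above (everything here is illustrative):

```python
# Relative value per visitor, from the rough percentages quoted above:
# iOS is 40% of traffic but 85% of transaction value.
ios_traffic, ios_value = 0.40, 0.85

per_ios_visitor = ios_value / ios_traffic                    # 2.125 (relative units)
per_other_visitor = (1 - ios_value) / (1 - ios_traffic)      # 0.25

ratio = per_ios_visitor / per_other_visitor
print(f"An iOS visitor is worth about {ratio:.1f}x a non-iOS visitor")  # ~8.5x
```

On those numbers, every iOS visitor is worth roughly eight and a half non-iOS visitors, which is why the priorities follow the money rather than the traffic share.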

Chips for Huawei are fried: TSMC stops shipping parts to Middle Kingdom mega-maker this September


Re: He who laughs last ...

True but....

One of the dirty secrets of CPU design over the last thirty-plus years is that all the clever stuff is actually redundant in the long term. It’s a bunch of insanely clever workarounds that just won’t be needed once one very specific key technology problem is solved: memory interconnect speed. Intel, AMD and Arm’s technical lead and “moat” isn’t forever; it’s dependent on that one problem.

Just making a CPU that has the basic execution units, even in whatever parallel microarchitecture, with whatever Deep Learning accelerators, just isn’t that hard. It can be done by any good engineering team of fewer than a hundred engineers in a couple of years.

Treble that, at most, for really optimised local power consumption.

No. What’s hard is: out-of-order execution. Micro-op optimisation to make that efficient. Branch prediction. Translation lookaside buffers. Cache architectures with snooping.

The common theme is that they are all workarounds for the memory wall problem.

Figure out a transport interconnect from external RAM to CPU that’s high-bandwidth without drawing insane amounts of power, and absolutely the first thing that will happen is the *removal* of all those clever widgets from CPUs: massively increasing core count by reclaiming the silicon area and power used by the caches, and hooking straight into main RAM.

Obviously, I’ve got no idea how to solve that problem. Perhaps optical-RAM-to-CPU interconnect. Or spin-wave-transistors (which don’t drive capacitance load). Perhaps in-memory-computation.

But at some point, maybe a decade away, or even two, it will be solved. Then, almost overnight, Intel, AMD etc just own a bunch of IP that consumes 95% of silicon area for little benefit.

Given that owning such CPU IP forms a major part of the West’s strategic advantage and leverage over China, that’s really something to think about. We are only a single brainwave insight away from a major strategic shift that will likely happen within ten or twenty years. I’m not talking about FTL travel here, just a board-level signalling technology that breaks no laws of physics.

Everything must go! Distributors clear shelves of ALL notebooks in Q2, even ones gathering dust over last 12 months


Do you ever change your mind, based on data?

Most people doing “Real Work” aren’t IT staff and don’t require powerful CPUs for much of anything.

It’s very much a minority sport.

And most IT staff don’t need to run large compilation jobs on their computers. Neither do testers.

You’re not going to like this, but *by far* the workforce requiring most CPU power at their fingertips is graphic designers, artists, architects, jewellers, film and TV industry, ad and marketing agencies. Creatives doing image creation and manipulation, basically.

Software dev teams mostly don’t “develop software” on “their computers”. They use “their computers” as *text entry terminals*, running Visual Studio, Eclipse or whatever frontends. But stuff like source control, continuous integration, compilation, unit testing etc etc....all takes place on company servers. For good reasons. Of course, every company is different, and yes there are exceptions.

What most developers actually need is two high-quality screens, a decent ergonomic keyboard, any old laptop so long as it’s company standard, and a fast network connection to the servers. By far the most CPU intensive thing they do is have ten different Chrome tabs open on the browser for various forums and bits of documentation.

Millions of people doing Real Work have found that laptops first, and now mostly tablets (with a keyboard accessory if necessary), are a better match for their work needs than desktops or workstations, and there are now barely a few thousand dinosaur holdouts. How much more Market Data do you need to change your mind?

NASA trusted 'traditional' Boeing to program its Starliner without close supervision... It failed to dock due to bugs


Re: This

No, that’s not the reason:

Share price represents two things (simultaneously): a fractional share of the company’s value, and a fractional share of the whole company’s income stream. Share buybacks give remaining investors a bigger share of both, such that the Price/Earnings Ratio remains theoretically unchanged.

The actual rationale is that if you pay dividends, investors have to pay dividend tax. Share buybacks are tax avoidance, pure and simple. If the company does neither, the cash sits in its bank account, earning less interest than (hopefully) its normal business does - so the overall margin of the business drops.

Another way to look at it: a company making 10 planes a year at 10% margin would be looking to expand to making 20 planes at 10% margin. But if it can’t, it can leverage up so that half its shareholders get the “user experience” of being invested in the 20-plane company, without any transaction friction. The other half of the shareholders were willing sellers.

BoJo buckles: UK govt to cut Huawei 5G kit use 'to zero by 2023' after pressure from Tory MPs, Uncle Sam


Re: Is it wrong to be in favour of this?

Yes, it’s wrong.

Because *the whole security design of 5G* (and 4G) divides it into Core Network and non-core. It’s not an arbitrary word distinction; it’s baked into the protocols. Only the Core Network gets to “have your data”, or know who you are, or do subtler things like traffic analysis. Huawei were specifically excluded from the Core Network kit.

Don’t take my word for it: read the report from CESG (which you know as GCHQ), who have analysed it thoroughly, including reading Huawei’s code and instituting code-signing mechanisms. They came up with the plan for telecoms security encoded in UK policy, now being ignored by people trying to look good.

And since our security services decided it was unsafe for more than 33% of kit to be placed with any one company, and now there are only two suppliers (Nokia and Ericsson), who is going to be responsible for the inevitable catastrophe when a Denial of Service breach takes down 50% of our infrastructure?

In other news, you know that Trump wants to buy both Nokia and Ericsson, right? He is in full strop mode that Finland and Sweden told him where to stick it, and has said officially (i.e. on Twatter) that if they don’t sell he will “ask Premier Putin to step in”. He has basically suggested Russia invade NATO countries to secure US ownership of global telecom infrastructure, and it’s *China* you’re worried about?

I've seen things you people wouldn't believe. Spacecraft with graphene sails powered by starlight and lasers


Re: 1m/s**2

Duh, too early in the morning :)

Nah, none of the solutions for secondary focusing reduce the size of the primary mirror anyway.

As one of the other posters said for solar focusing, etendue is a bastard.


Re: 1m/s**2

The first solution, of relaying the focusing mirrors, doesn’t work: if the intermediate mirrors are 10 cm, each one only extends the reach by 100,000 km, which doesn’t get you very far. Plus you’re now playing 10,000-cushion billiards onto cushions that are themselves moving at insane velocities.

The second option is slightly more viable - pre-stationing a large set of secondary focusing mirrors along the 100 million km runway track. If you could position each secondary mirror within 20,000 km of the track, these only need to be 4-metre mirrors, and the primary 100 million km away is similar-ish. That requires a set of five thousand Hubble space telescopes doing an intricate dance - things don’t stay still in space, they’re orbiting the Sun, so this needs to be precalculated years in advance for them to be in the right place for only the few minutes of the shot. That’s three orders of magnitude more costly than anything we are capable of, plus unthinkable aiming dynamics, to launch one 4 g spacecraft. But yes, it’s slightly better than the more obvious option, which was six orders of magnitude.


Re: 1m/s**2

If the system you are referring to is a 1W laser-launched system, it would indeed reach 54 miles per second. The problem is that the spacecraft turns out to be 2.3 million miles away at that point. And you need to keep your laser focused on that 1 cm disc for the whole duration of the day.

Laser spots have divergence inversely proportional to the size of the focusing mirror, and you need to worry about atmospheric twinkle. The system you’re considering requires a focusing mirror 180 km in diameter in orbit around the Earth, plus the means to physically swing it to keep it pointed at that 1 cm object flying at 54 miles per second, 2 million miles away.

The truth is, there is no way to make the numbers on a laser-launched system ever scale correctly. The Moties didn’t do their full system study correctly.

Solar sailing is actually a great near-orbit technology, in use today to save fuel on satellites. And also a great mid-future in-solar-system propulsion technology. But neither technology works for interstellar.


Re: Thoughts

Surprisingly, you can make something decelerate using light from our end!


No, it isn’t enough to be useful. But it’s just so cool that it’s possible at all!


Re: Calling Isaac Newton...

If you want to go down the route of “magic just takes longer”, the problem is that there is no way to know which of the myriad magical solutions is best and closest to feasible at system-level. If you’d started Michelangelo on designing a CPU, should he work on Babbage’s Difference Engine, or help Newton play with coloured light and prisms (which ultimately turns out to be quite useful for optical resist technology)?

It’s better to set yourself nearer term goals. And frankly, having worked in the space industry, one of the biggest problems is that every project with a timeline longer than ten years turns into a boondoggle. Basically, unless the project completes in a timescale of your current job, nobody has any great interest in either finishing it, or really solving the problems. It’s just an annual budget, and people are “contributing”. They aren’t motivated to make the damn thing work.

If I want to accelerate 4 grams of payload to 15% light-speed, then we could “just” store 0.3 g of antimatter in a magnetic bottle and let it annihilate as a trickle, using it for thrust. That’s (today) completely infeasible, but theoretically do-able. Obviously, we need a nearly perfect gamma-ray micromirror and collimator. And ultra-vacuum technology decades beyond current, to prevent the antimatter annihilating inside the storage bottle. And ways of cooling the antimatter to keep it confinable, and miniaturised superconducting magnets to confine it without their power usage increasing the payload mass. But none of that sounds beyond early 22nd-century technology, to be honest. And definitely all closer to reality than building an 18 km diameter launch mirror in space. Plus it solves the problem of being able to decelerate when we get to where we’re going.

The real point is that technologies a century away usually don’t solve problems as they look today, but rather by discarding assumptions. Could improved sensor, CPU and comms technology mass micrograms rather than grams? Would it make more sense to scale up particle accelerator technology (which already approaches light speed, with picograms per particle bunch) than to scale down rocket technology? Would a solar-system-scale coherent-array optical telescope achieve better observation resolution in the target star system anyway than a 4 gram micro-spacecraft zipping through at 7000 km/s, a million km from any planet? (Spoiler: yes, it could. 25 cm resolution. Tech demonstrator missions are on the horizon, and it really might be feasible within 50 years.)


Re: Calling Isaac Newton...

There is a huge problem, just the article doesn’t explain it well.

The sails don’t slow down, but the acceleration drops off quickly, which imposes basic limits that aren’t immediately obvious.

If you start off with a certain acceleration from the Earth (distance from the Sun 1 AU), once you get to 1.4 AU from the Sun the thrust and acceleration halve (inverse square law). They halve again at 2 AU, plus you spend much less time there, so there’s less velocity change. Etc, etc. Almost all the velocity the sail will ever gain is achieved within the first 0.4 AU. Light sailing is marketed as constant thrust, allowing ultra-low acceleration to get you somewhere useful, but it really doesn’t live up to that.

Some numbers: to escape the solar system, you need a delta-v of 12 km/s from near-Earth-but-not-gravitationally-bound, on a 60 million km effective runway. This requires 0.0012 m/s2 acceleration, which for a light sail is a high hurdle to jump, but achievable. To get to the nearest star within a century needs a delta-v of 12,000 km/s. To achieve that on only a 60 million km runway, you need 1200 m/s2. An acceleration of 120g is just impossible for solar sailing.
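The arithmetic above is just the constant-acceleration formula v² = 2ad rearranged; a quick sketch to check the numbers:

```python
# Constant acceleration a over a runway of length d gives final speed
# v = sqrt(2*a*d), so the required acceleration is a = v**2 / (2*d).
d = 60e9  # 60 million km effective runway, in metres

# Solar-system escape: delta-v of 12 km/s
a_escape = (12e3) ** 2 / (2 * d)
print(f"{a_escape:.4f} m/s2")  # 0.0012 m/s2: hard for a sail, but achievable

# Nearest star within a century: delta-v of 12,000 km/s
a_star = (12e6) ** 2 / (2 * d)
print(f"{a_star:.0f} m/s2, roughly {a_star / 9.81:.0f} g")  # 1200 m/s2, ~120 g
```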

That’s why people want to use lasers: to increase the incident power per square metre, and get the acceleration done on a reasonable runway. These guys are achieving 0.1g acceleration, which is an amazing achievement. The problem is, it still doesn’t scale well. Remember the runway length of 0.4 AU? Now you have to be able to focus your laser onto the spacecraft sail at 60 million km distance. Focusing is diffraction-limited: the tighter the beam you want, the bigger the mirror. However you scale things, there are problems.

Breakthrough Starshot wants to make a sail of 14 m2, weighing 4 grams, to reach 15% of the speed of light with 8.5 GW lasers. That gives a very respectable 7000 m/s2 acceleration. It also requires electronics that can survive 700g acceleration (feasible), and given that you are focusing an *8 GW laser* on it, sufficient to vaporise most things, either: a 99% reflective sail (feasible) with a payload that can survive 2000 Centigrade (not feasible), or a 99.99% reflective sail (not feasible) with a payload surviving 380C (marginal: payload mass <<4 g, limited thermal insulation).

But the main problem is that they need to focus the lasers onto a spot 4 metres in diameter from 1 AU distance. That’s the same optics problem as a telescope resolving 4 m at 1 AU. Atmospheric twinkle distortion prevents you doing that from the ground, so it has to be a *space* laser. And the fundamental optical diffraction limit requires you to make an 18 kilometre diameter mirror to focus the laser. The largest mirror ever made is 10 metres, on the ground, even with adaptive optics, and the James Webb Space Telescope’s will be 6.5 m.

An alternative mission concept scales the spacecraft sail up to 4 km diameter so as to need only an 18 metre launch mirror. But then the spacecraft mass goes up a millionfold, and even on Starshot’s hyper-aggressive mass assumptions it needs an 8 petawatt space laser firing continuously for two hours, which isn’t going to happen. In other words, people tend to focus (pun intended) on the spacecraft side of laser sailing, which isn’t really the problem.
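The mirror-size figure follows from the diffraction limit: spot size ≈ wavelength × distance / mirror diameter. A sketch, assuming a visible-light wavelength of roughly 500 nm (my assumption; the exact laser wavelength isn’t given above):

```python
# Diffraction limit: a mirror of diameter D focusing wavelength lam onto a
# target at distance d produces a spot of roughly lam * d / D.  Invert it
# to find the mirror needed for a given spot size.
lam = 0.5e-6   # ~500 nm, an assumed visible-light wavelength
d = 1.5e11     # 1 AU in metres
spot = 4.0     # the 4 m sail

D = lam * d / spot
print(f"Mirror diameter of order {D / 1e3:.0f} km")  # ~19 km, the order quoted above
```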

Contact-tracing or contact sport? Defections and accusations emerge among European COVID-chasing app efforts


Re: Testing! testing!

I pretty much agree with all your points, but “Current death rate in Sweden continues to decline”.

Your source is interesting, and vastly disagrees with this:


Worldometers shows Sweden reporting figures that oscillate repeatedly: hardly any deaths on a weekend/Monday, followed by massive spikes. There are a lot of reporting anomalies in every country - e.g. every single country so far has later “found” at least a thousand unreported deaths arriving in a lump or two. It’s “fog of war”, rather than anybody hiding data; the systems are just being stressed.


Re: Generalised testing

No, it’s really not that simple.

Firstly, false positives are rarely due to “the test incorrectly flagging”. It’s because real tests are never specific enough to flag only the correct molecule. A small number of people will have had something else that the test reacts to. Re-test them, and you get the same outcome. Ditto false negatives: if a particular person for some reason has only a low number of antibodies, the test isn’t sensitive enough to pick them up. Re-test them, and you get the same outcome. Any potential improvement from re-testing is already included, because they already take two swabs / blood samples.

Antibody tests are blood concentrations in real human beings, and the data varies day-to-day, depends on what and when they ate, can be masked by hormone cycles or another infection.

The PCR swab tests of “have you got it” are very dependent on whether the tester is skilled enough to get the swab far enough up your nose at the right angle, which is really a lot harder than it sounds, particularly if the patient has dementia or mental health issues. Untrained staff do well to achieve a false negative rate of 10%; doctors and trained staff achieve about 1-2%.

Simultaneously 99% specific and sensitive is *aspirational* for best-in-class tests after years of development, not for a hurriedly expanded testing service. That’s not happening anytime soon. 98% specific and sensitive would be considered damn good. At the moment, the best PCR tests *anywhere in the world* are barely meeting 98%, and the antibody ones seem to be stuck at 90%, which isn’t really any use to anybody.

Nevertheless, we somehow need to find a way to make critical policy based on data we know to be flawed.
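To see why even a genuinely 98% sensitive and specific test is awkward as a policy input, consider the positive predictive value at low prevalence (the 2% prevalence below is my illustrative assumption, not a figure from anywhere):

```python
def ppv(sensitivity, specificity, prevalence):
    """Probability that a positive result is a true positive (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A 98%-sensitive, 98%-specific test in a population where 2% are infected:
print(f"{ppv(0.98, 0.98, 0.02):.0%} of positives are real")  # 50%
```

In other words, at low prevalence, half the positives from a “damn good” test are false, which is exactly the sort of flaw policy has to be made around.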

Surprise! Plans for a Brexit version of the EU's Galileo have been delayed


Re: Can't we just...

Why the joke icon?

Duh, well yes, any UK company who wishes to actually *pay* for the Galileo Commercial Service is fully entitled and enabled to do so, at the identical price to a French company. Exactly as if you were based in the Philippines, or Côte d’Ivoire. Have you not actually, ummm, *read* the Galileo Terms of Commercial Service?

Of course, why any company or individual *would* choose to buy the service, I have no idea at all. And nor it seems, does anybody else. When the EU put it out for RFI, they got lots of interest from companies who thought the EU was going to pay *them* to test-drive the service. But not one single expression of interest from anybody prepared to pay for the service themselves. Not A Single One.

The EU’s current best plan is to legally require EU member states to “purchase” the service, so that they can give it away for free to resident companies. And, as above, the problem with *that* plan, is that companies are actively refusing to get it for free. They expect to be paid to use it. And that’s where everything is stalled at the moment.


Shame we aren’t in Galileo?

Shame we aren’t in Galileo.....that’s due to launch Full Operational Capability in July.....

Hahaha. Only joking :) The EU assumed everyone knew that their schedule was purely “aspirational”, and was certainly never meant to be taken seriously.

Actually the next launch (of FOC-FM23, on Ariane 6) has been delayed until 21 Jan 2021, and because of in-orbit failures that’s not sufficient. Marginal service should be achieved with the September 2021 launch, but I wouldn’t hold your breath anyway. For reasons of redundancy within the three orbital planes, they won’t be able to offer a guaranteed Commercial Service until at least 2023-2024, and that’s with rose-tinted specs on.

A more realistic estimate, based on historic launch and in-orbit failure rates, is that they *never* reach the point of launching fast enough to replace failed spacecraft, and never achieve Commercial Service. At least, not before 2030, which is when the next tranche of boondoggle contracts is due to be let.

Researchers trick Tesla into massively breaking the speed limit by sticking a 2-inch piece of electrical tape on a sign


Re: Sigh.

No. And while you aren’t going to listen to any of us, perhaps it would take some of the heat out to take some Advanced Driving lessons, and listen to what the instructor tells you.

Put simply, if you are travelling at 40mph on a 60mph road, as *your* assessment of the speed at which *you* can safely go in those conditions, then I’m going to take you at your word. You *would* be unsafe to go quicker than that. But if 99% of other drivers are managing to drive at 60mph quite happily, then they are clearly correct too. Otherwise, you would observe other cars in the ditch every few miles, and that’s just not true.

So I’m afraid you need to be honest with yourself and ask how you can improve your driving skills on roads you clearly aren’t comfortable with (training). Or be prepared to re-organise and avoid those roads. Other road-users exist, and you shouldn’t be putting their lives at risk.

Cache me if you can: HDD PC sales collapse in Europe as shoppers say yes siree to SSD


Most consumers just don’t want the high HDD capacity any more.

Picking a random shop (John Lewis ;) , they have 154 laptops.

The middle of the market is 256-512 GB, and it tops out at 1 TB. I’m preparing for a bunch of downvotes from people saying “yeah, but how can I do my video editing / Proper Work”. To which the correct answer is “YMMV, there are a few of you, but most everyone else just doesn’t do that much”.

Point is, if most people decide that 512 GB is enough for them, the cost differential between SSD and HDD just isn’t enough to worry about. Ditto for desktops at twice the capacity.

Remember when Europe’s entire Galileo satellite system fell over last summer? No you don’t. The official stats reveal it never happened


Re: Isn't it amazing

“The full constellation is expected to be available by 2020.”

That’s what the Galileo website says, but it’s the most hilarious nonsense.

The facts are: Currently, Galileo has 22 working spacecraft. Full Operational Capability requires 24 satellites *plus 6 orbiting spares, otherwise the next satellite failure loses service until the next launch*. Each orbital plane requires a separate spare (too much fuel required to change orbital plane), and they are sent up as pairs in each launch which have to go to the same orbital plane (fuel again).

Hence, the constellation isn’t reliable until there are 30 up there. We are eight spacecraft short of a full deck at the moment.

Unfortunately, the launch schedule only allows for four satellites per year, and only one launch (two satellites) by end 2020. And by the way, because of satellites already failed in different planes, it will be the end of 2021 before there is a proper constellation up there at all. At the planned launch cadence, they won’t be able to declare Full Operational Capability (resilient to failure) until end 2024.

Except that it gets much, much worse. By 2024, the first satellites ever launched will be end-of-life (12 year lifetime!) and retired. So we will need the 2025 satellites to replace those.... and then the next satellites will be end-of-life and retiring....so we need to wait until 2026, and.....well I think you get the picture now.

The launch cadence simply isn’t expected to catch up with the older dying spacecraft at any time during the current so-called “transition” series of Galileo spacecraft, up to 2030. The situation is due to normal project delays, which have consequences when you are trying to keep up with ageing satellites in orbit. There’s another series of 12 spacecraft supposed to go up post-2030 or sometime, but that’s not fully planned or budgeted yet.

Now, that’s the *optimistic version*, where the spacecraft achieve their planned 12-year lifetime. Actually, there are systemic onboard clock problems, which means we expect two or three of the spacecraft already up there to fail several years before end of lifetime. And, historically, satellite launches sometimes fail, as one of them already has, and that’s *also* not included in the plan.

EU declares it'll Make USB-C Great Again™. You hear that, Apple?


I don’t see how USB-C solves the charger-zoo problem......

The real problem is that USB-C negotiates power delivery in 0.05A and 0.05V increments.

Absolutely nobody is going to read the side of the charger plug to check that it meets the minimum requirement of their specific phone, which is determined by detailed measurement of each brand of battery and its thermal performance. If it doesn’t, the phone will still charge, but at dog-slow speed.

Almost everyone will still buy the one supplied by Apple, as that is guaranteed by a massive sticker on the packet to be exactly big enough to charge the iPad at maximum speed, but exactly small enough to fit into the case that comes with it.

Look at this another way. The only charger that can serve *everything* maximally is one that can deliver the maximum USB PD of 100 W. Which is going to be a giant. It would be crazy to have a 100 W charger for every appliance. So, once again, you’ll have a proliferation of USB-C chargers - a little one for the phone, a medium-sized one for the iPad, a big one for the laptop, etc. This solves nothing at all for e-waste; it just looks good on paper.
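A toy sketch of the mismatch (the profiles and the selection logic here are illustrative, not taken from the USB PD spec): the device takes the best offer it can use, so a small charger still works, just slowly.

```python
# Illustrative sketch only: a device picks the highest-wattage offer it can
# use from whatever (volts, max_amps) profiles a charger advertises.
def best_profile(charger_profiles, device_max_v, device_max_a):
    usable = [(v, min(a, device_max_a))
              for v, a in charger_profiles if v <= device_max_v]
    return max(usable, key=lambda p: p[0] * p[1], default=None)

small_charger = [(5.0, 1.0), (5.0, 2.0)]                # a little 10 W brick
laptop_charger = [(5.0, 3.0), (9.0, 3.0), (20.0, 5.0)]  # up to 100 W

# A phone that charges fastest at 9 V / 2 A (18 W):
print(best_profile(small_charger, 9.0, 2.0))   # (5.0, 2.0): stuck at 10 W
print(best_profile(laptop_charger, 9.0, 2.0))  # (9.0, 2.0): the full 18 W
```

Both chargers “work”, but only the oversized one charges the phone at full speed, which is exactly the e-waste proliferation point above.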

Globo PC sales up for first time in 7 straight years – but market still 25% down on 2011


Re: Desktop PCs will be around for long time yet

Well, there are several unjustified assumptions in what you said.

1) “People making real things in the real world”

You are shortsighted about what you think “real work” is. Software development and accountancy are not the only “real work”.

Tree surgeons, estate agents, doctors: their use-cases all run *just fine* on a tablet. [Compare a doctor taking a tablet to a care home, versus a laptop, or typing up notes when they get back to the office, which is what used to happen.]

Even in financial institutions, only a fraction are making big pivot tables or writing large reports/analyses. The majority of bank employees are call centre, branch front-of-house, or HR/other support. None of those people need or want a desktop; that’s just the legacy heap of junk given to them.

2) There *are* power users with greater needs, but at most 200k-500k in the UK. Plus the replacement cycle drops to 5+ years, as the capability has stopped increasing. That’s 40-100k sales per year in the UK, and maybe 2-5 million worldwide. Not enough to support the PC manufacturing industry as it has been.

Put another way: large-key Desk Calculators are still a thing! Have a look in any Finance or Sales department, they still don’t use an app on their smartphones! But calculator sales aren’t enough to bother the balance sheet of a large manufacturer.

3) Yes, “people doing real work” as you define it need a mouse, keyboard and two monitors.

But that’s not a desktop PC. Any laptop, and most tablets, can interface with those, wirelessly if necessary.

The market for large monitors, keyboards and mice may well remain. But those are totally unbranded commodities, decoupled from the underlying workload of application or even OS.

4) You don’t even need a desktop to *write software*. Almost all of the time you’re either editing or reading/writing documentation. For compiling, you need more power. But most “proper” shops now have regular merge/compilation/automated unit tests running on a shared Linux box in the corner. That’s one per ten engineers. There may be three monitors per software engineer, but only a single three-year-old laptop in a docking station.

What was Boeing through their heads? Emails show staff wouldn't put their families on a 737 Max over safety fears


Re: "designed by clowns managed by monkeys" type comment sounds damning

The context here is interesting, and uncomfortably relevant to most people’s practice I think.

The specific project “designed by clowns” was the *simulator*. Which being only a simulator was effectively decided as “non-critical” at project level, hence outsourcable to HCL.

The actual MCAS issue here wasn’t a “bug”; it worked exactly as designed at the software level (given only one sensor, act in accordance with what it indicates), but it had been specified at the system level by muppets. The problem could only be caught by Validation (checking the spec does what the user actually needs), which requires experienced pilots using the simulator. They should have executed each and every “book” operating procedure on the simulator, but clearly didn’t, otherwise they would have seen the repeated MCAS swoops as the effect of following the manual.

One immediate problem was that the simulator wasn’t in a fit state to actually do the Validation (only P0 bugs addressed, dozens of open P1 and P2 bugs - sound familiar?), and that’s one reason why the fatal failure mode wasn’t found.

But the root cause includes that test teams are usually considered lesser than rockstar developers, and less than critical. If the test platform isn’t rock solid, nobody ever slips the production software date over that. Again, does that sound horribly familiar? On a safety-critical system, the Validation platform software should have been assessed at the same criticality as the systems it was testing, and the outsourcing policy applied identically. But it wasn’t.

And finally, there were review boards, with technical people voting, who knew there were serious problems. But several managers *and technical leads* decided their job was to come up with excuses of why those weren’t blockers for delivery. I certainly recognise that situation too - including technical leads using the exact phrase that they’ve “Jedi mind tricked” their peers at the subsystem customer to accept a particular test failure.

This isn't Boeing very well... Faulty timer knackers Starliner cargo capsule on its way to International Space Station


Re: Elon Musk was on hand to offer advice.

#2 - “Boeing bid a fixed price contract”....

We are defining “fixed price” very differently here.

In my view, Boeing has not yet met its side of the contract, because what they built didn’t actually work. If this were truly fixed price, Boeing would get paid zero now, and would still have to rebuild and relaunch on their own dime in order to get paid.

As in “if I buy a car, and the engine barfs all the petrol on the floor as soon as it turns on, I don’t pay for the car until the manufacturer fixes it, even if that requires a full engine rebuild”

But that’s not the NASA reality. Boeing will get paid 90%? 100%? of the full price, as payment for failing to deliver the payload. If another launch is required by NASA to prove out the vehicle, Boeing will get paid *again*. That’s cost-plus by another name.


Re: Elon Musk was on hand to offer advice.

In this case Pedo Musk was correctly being *ironic* in the British sense. Because:

#1: SpaceX has had launch failures quite often, *which is SpaceX’s way of doing engineering*

#2: For Boeing, fixing and trying again is going to cost hundreds of millions, and take months if not a year. And specifically that cost is going to be paid by NASA, not Boeing, because they are contracted on cost-plus

#3: Whereas SpaceX do this repeatedly, and learn from their experience. It would cost SpaceX less than a million, zero to their customer, and be ready to launch again in under a week. Remember, nothing exploded, just a bit of wasted fuel, it would basically be a go-round for them. The ISS folks would still have got their Christmas presents.

It’s *ironic*, because watching Boeing fix the failure slowly and expensively highlights the contrast in the whole way they do engineering.

It’s also ironic, because he assumes Bridenstine sees exactly this, every time Boeing burns ten billion dollars to do a hundred million dollar job, as they have been doing for sixty years.

Whereas actually Bridenstine just sees “business as usual....Boeing will fix this, because it costs billions to do rocket science, and the USA has billions to spend, go-USA-go”.

And it’s finally ironic, because Pedo Musk is unable to see that every time he opens his mouth on Twitter, thousands of comments will reference him as Pedo Musk, just because he’s a creepy older guy unable to relate to other humans.

FUSE for macOS: Why a popular open source library became closed source and commercially licensed


Re: fairness

The FOSS situation is more complex than that.

The *real* problem is - who says that a particular feature or direction is a good idea at all? And being “inside the tent” is no guarantee of being able to steer the resources sensibly.

In this particular case, yes the companies were getting features for free, and now they won’t. But a *perfectly sensible* viewpoint for a manager in company A is this:

Should I subsidise external person to do this FOSS? Well, there’s a chunk of work and maybe I like some of it but only a bit. Plus some of it is likely flat-out opposite to what I need/want so I may need to allocate someone internal to make the mods I want on top. And then we’re sucked in to a FOSS commitment, and all for a “nice-to-have”. Nah, I won’t put that feature on the roadmap, if it arrives then great, if it doesn’t then I can live with that.

Alternatively, if someone decides the new feature is actually necessary... (and how many features actually are? most of them turn out to be bloat after a bit), then we will implement the 10% we actually wanted, internally, and that won’t actually take so much time. Effectively subsidising an external to do 10x what we want, in the hope that the cost gets split more than 10 ways, is maybe not a good bet.

Ok, maybe, should I subsidise this project with somebody internal to work on it. Then we focus to get the features we actually want. Which of my team would I put on it? Well, there’s Bill the FOSS evangelist. He’s actually a great coder. But the problem with Bill, is that he’s well-known for going off and doing stuff for weeks that nobody exactly asked for. That’s pretty much the definition of a FOSS evangelist. Within this company, I have him assigned under a technical lead that keeps an eye on him, and keeps him focused on what he needs to deliver. As long as we do that, he’s really productive. But if he’s on his own FOSS project, there’s really nothing to make him focus on the features we are telling him are important, he will just make castles in the sky again. No, then.

Oracle finally responds to wage discrimination claims… by suing US Department of Labor


Re: Your objection is irrelevant

You are trying to twist the data to fit your theory.

Firstly, as a hiring manager for many years in several tech companies, I have never felt such pressure. If it existed, presumably it could only come from “evil HR”. In practice, pressure from HR and my boss is a mixture of “Don’t hire inadequate people for short-term goals, the long-term costs are huge”; “It’s your job to get the project done, just pick one of these people and get the job done”; and “If you can’t find a great senior person, couldn’t you hire one of these more junior people with potential and grow them into the role - which would also be cheaper for the budget (but you still have to get the project done on time with these more junior staff)”. Notice how these are all things a hiring manager absolutely should be considering, mutually incompatible, and the opposite of your theory.

Secondly, if I *did* experience such quota pressure, and decided to acquiesce - and definitely I would be looking to move.....what am I actually going to do? Well, of course, I would try to play the numbers game and push any inadequate quota into the most junior roles where they can’t do any damage. You’d see a massive glass ceiling with lots of minorities in “Junior Engineer” roles, never getting promoted.

Once again, that’s not what the Oracle data showed. Your theory is just *not right*.


Your objection is irrelevant

Take a (typical) “big corporate” salary structure, where a particular job title has a 50% salary band, overlapping.

Say Engineer £20-30k, Snr Engineer £30-45k, Staff Eng £40-60k, Princip Eng £50-75k

If amongst all your Snr Engineers, the women and minorities are crowded at the bottom of the salary band, you have a problem. End of. That is what the data analysis showed Oracle have, and it is unacceptable.

If this were a “pool of hired labour” problem, pay would be equal within each bracket, but there just wouldn’t be very many minority / female engineers.

If this were a historic problem working its way through, you would see pay equal within the Snr Engineer bracket, but you see a glass ceiling with say a lack of female Principal Engineers.

These are different problems.

In my experience in the U.K. tech sector, we have “equal pay for the role”, fairly egalitarian pay across the board for minorities, a historic but resolving issue for hiring women, and significant glass ceiling problem that I don’t see being fixed.

Boffins blow hot and cold over li-ion battery that can cut leccy car recharging to '10 mins'


Re: Filling station power requirement

For that matter, look at Denmark, and specifically the taxis....law of unintended tax consequences.

Danes are exceptionally eco-conscious. They believe that cars should be heavily taxed.

In Denmark, tax is *90%* on a new vehicle. Unless it’s bought solely as a working vehicle, and you keep it for at least two years.

A car which in the U.K. costs £15k, costs a Danish citizen £150k new. After two years depreciation, it’s worth £75k. So a taxi driver can buy a car for £15k and make £60k profit on it after two years. Danes cycle a lot and are very healthy.

The important thing is how this scales with price. As a taxi driver, it becomes financially sensible to buy the most expensive car they can possibly afford, because of its resale value. Buying a Mercedes AMG *as an airport taxi*, may cost £100k, but they can sell it for £500k after two years, as long as they can find a rich enough buyer. It’s not uncommon to be driven from the airport in a 600hp supercar. Not what the tax system intended at all.
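The scaling argument can be sketched with the post's own round numbers (the ~10x taxed civilian price and the 50% two-year depreciation are the post's illustration, not official Danish tax figures):

```python
# Sketch of the taxi resale incentive described above, using the post's
# illustrative round numbers: a working vehicle bought tax-free, a taxed
# civilian price ~10x that, and ~50% depreciation over two years.

def taxi_profit(untaxed_price, tax_multiplier=10, depreciation=0.5):
    """Profit from buying tax-free and reselling at the depreciated civilian price."""
    civilian_price = untaxed_price * tax_multiplier
    resale_value = civilian_price * depreciation
    return resale_value - untaxed_price

# Profit scales linearly with purchase price, so buy the dearest car you can:
print(taxi_profit(15_000))   # ordinary car: 60,000 profit
print(taxi_profit(100_000))  # Mercedes AMG airport taxi: 400,000 profit
```

With these (hypothetical) parameters the profit is simply 4x the purchase price, which is exactly the perverse incentive the post describes.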


Re: Filling station power requirement

No, they can’t be that distributed. Or at least there’s a bunch of engineering thinking you haven’t gone through as to what “an adequate power supply” actually means.

You want to fast-charge a single car, 75kWh battery, in ten minutes. Ok, so you *must* be charging at 450kW, whatever the technology. If this is at domestic 240V, that’s nearly 1,900A rms. A domestic supply maxes out at 100A. Even an upgrade to commercial 3-phase is only going to get you to 70kVA. That’s more than a factor of x6 short of what this fast charge requires.

There’s a specific reason for that limit. Running nearly 2,000A at 240V is stupid and nobody does it, because the transmission loss would be nuts. What people can and do, for those sorts of requirements, is run in an 11kV line and have a substation, which reduces the amperage to about 41A. That’s fine, and hardly rocket science. But you can’t have 11kV substations on home properties, for bulk and safety reasons. Everyone knows what an electricity substation looks like, and how big it is. It certainly isn’t “smaller than a petrol pump”.
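As a cross-check of the supply arithmetic (idealised figures: unity power factor, single-phase equivalents, so treat the amperages as order-of-magnitude values):

```python
# Quick sanity check of the fast-charge supply arithmetic above.

BATTERY_KWH = 75
CHARGE_MINUTES = 10

power_kw = BATTERY_KWH * 60 / CHARGE_MINUTES     # required charge power
amps_at_240v = power_kw * 1000 / 240             # at domestic voltage
amps_at_11kv = power_kw * 1000 / 11_000          # behind an 11kV substation

print(power_kw)       # 450 kW
print(amps_at_240v)   # ~1,875 A, vs a 100 A domestic supply
print(amps_at_11kv)   # ~41 A
```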

There are 400,000 substations in the U.K., and the voltage is a trade off between transmission loss, length of transmission line, and cost of substation that’s been in place for decades. There are 8000 petrol stations in the U.K.

If you want another few tens of thousands of substations, that can probably be built out. So, charging stations can be more decentralised than petrol stations, but what they *can’t* be is the eco-warrior fantasy of one per dwelling or even at the end of each road. It doesn’t fit. There are other options here:

1) Abandon fast-charge. If you accept overnight trickle charging at home, this problem largely goes away. You only need 10kW overnight, which is easily doable on a standard consumer unit. But that doesn’t cover people who have an unanticipated need, drive more than 200 miles in a day, or have no off-street parking.

2) Allow fast-charge only on moderately centralised stations. The equivalent of a motorway service station could require up to 20MW for 40 “pumps” (remember, ten minutes is still a lot slower than petrol so you need more slots). That’s *serious* grunt, and well beyond a small 11kV substation. Now you need 350kV high voltage coming in, otherwise again the transmission loss due to massive current would be insane. But a few thousand of them could be built.

3) Intermediate charging speed - one hour charging at 70kVA, can be done with commercial three-phase. A few per town high street. Councils are starting to provide a few of these, but we would need some hundreds of thousands of these.

4) An appropriate mixture of the above. As an engineer, I have *never* seen a problem where the correct answer is anything other than “an appropriate mixture of several solutions”


Re: Power required

Nope, the actual value is closer to my estimate than yours.

1) You are quoting the manufacturer range of 310 miles. Those figures are as delusional as petrol mpg figures. Try Autocar, who measure 155 miles real-world versus the 250-mile manufacturer figure on the basic Model 3 with a 50kWh battery; that translates to 230 miles for the LR. I underestimated by 36%, but you overestimated by 50%.


2) “UK 32.5 million cars....7k miles”

Cars, yes, vehicles no.....the trucks and buses also go electric, and need to be included. Turns out that’s a *lot*.

327 billion vehicle miles, consisting of 32 million cars driving 7k each (roughly 225 billion miles) and 7 million other vehicles, largely trucks and buses, with starship mileages.

Except the trucks aren’t going to achieve the 3 miles/kWh of a family saloon. A diesel truck typically gets 17mpg vs a car’s 60mpg, and there’s no reason to think electric efficiency scales differently. So although trucks drive only 67 billion miles, they use about 240 billion car-miles’ worth of fuel, for a total of about 500 billion car-mile equivalents. That’s equivalent to 12.5k averaged over 40 million vehicles.

So I actually *underestimated* by 25% when you claimed I overestimated by 65%.
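Reconstructing that estimate with the post's round figures (small discrepancies against the quoted totals are rounding in the original):

```python
# Rough reconstruction of the car-mile-equivalent estimate above, using
# the post's round figures. Totals come out slightly under the quoted
# ~500 billion; the gap is rounding in the original.

CAR_MILES = 32e6 * 7_000          # 32 million cars x 7k miles each
TRUCK_MILES = 67e9                # heavy vehicles
EFFICIENCY_RATIO = 60 / 17        # car mpg / truck mpg

truck_car_equiv = TRUCK_MILES * EFFICIENCY_RATIO
total_car_equiv = CAR_MILES + truck_car_equiv

print(truck_car_equiv / 1e9)   # ~236 billion, post rounds to ~240
print(total_car_equiv / 1e9)   # ~460 billion, post rounds to ~500
```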


The corrected estimate for EV usage adds 40 GW peak to U.K. generating capacity requirement. That’s way outside present capability.


Re: Power required

Yes, I do agree with that - if you add “car fuel”, it may change people’s electricity behaviour.

My point was - this is a pricing option where many people whom it would actually benefit, have decided not to select it. Because, they don’t really know what their day and night usage is before they get the E7 meter installed, and after that it would be too late. There are perfectly valid reasons that people might not follow price signals that economists assume they ought to.

Plus - I have E7.....my day/night meter clock is *clockwork* and has no means to keep time. I can’t touch it, and meter readers don’t check it. Mine thinks that “daytime” starts at 2am. As far as I can guess, this is to my advantage. So, take that as you will.


Re: Power required

Well, that is genuinely interesting. It shows the problem can be addressed from the consumer POV.

But it still doesn’t address the general points about the scalability of EV usage & electricity infrastructure:

Just to be clear, I actually support this. But it's going to take a lot of system engineering, massive regulation, and possibly re-nationalisation of electricity industry

"What happens when that IT stuff goes down" [or iOS updates & suddenly doesn't support the app] - Nothing much, then I'll just use a standard charging point & pay what it costs.......Errr, no you won't. When it goes down, it's down for everybody at the same time. That's a pain when only a few people have leccy cars. But it's pretty grim for society if forty million people suddenly have no means of making their cars go. And that's trucks delivering food, too. There is *no* system-wide single-point failure equivalent for a petrol station network.

"What is the financial benefit to *E.ON* vs National Grid"

- National Grid cannot depend on this app to maintain grid stability by running a distributed load-balancing algorithm on people's smartphones. That would be crazy. E.ON won't & can't provide any back-guarantee to National Grid. Moreover, E.ON & dozens of other utilities are entitled to change both electricity pricing & load-balancing algorithm in the app, without consulting each other. But without those guarantees, National Grid have to build *the same infrastructure as if the load-balancing didn't exist, in case it failed*.


Re: Power required


“The energy capacity of EVs are known, their typical range is known, the energy delivery from chargers is known, the average annual mileage is known. You can use that for calculations.”

So why didn’t you do the calculation then?

90kWh buys you 200 miles range today; average 10k miles per year => 5MWh per vehicle. 40 million vehicles => 200 million MWh per year; 8000 hours per year => 25GW on average. However, it really isn’t realistic that the charging demand is totally levelled across the day. *Let’s assume* that the peak EV charging problems are mostly solved (unrealistic, but this is la la land) and just use the non-EV demand peak-to-average ratio from Gridwatch: that squeezes the demand into roughly 12 hours of the day, requiring 50GW of generating capacity.

His back-of-the-envelope was 80GW required, whereas the estimate based on EV and Gridwatch hard data is only 60% of that.
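Spelling out that back-of-envelope, using the post's own round inputs:

```python
# The EV grid-demand estimate above, step by step. All inputs are the
# post's round figures; the post rounds each intermediate result up.

KWH_PER_MILE = 90 / 200           # 90kWh battery, ~200 mile range
MILES_PER_YEAR = 10_000
VEHICLES = 40e6
HOURS_PER_YEAR = 8_000            # post's round figure (actual ~8,760)

mwh_per_vehicle = KWH_PER_MILE * MILES_PER_YEAR / 1_000
total_mwh = mwh_per_vehicle * VEHICLES
average_gw = total_mwh / HOURS_PER_YEAR / 1_000
peak_gw = average_gw * 2          # demand squeezed into ~12 hours/day

print(mwh_per_vehicle)  # ~4.5 MWh, post rounds to 5
print(average_gw)       # ~22.5 GW, post rounds to 25
print(peak_gw)          # ~45 GW, post rounds to 50
```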

That’s a fair difference, but I’m not seeing how it affects his argument?

Whereas your argument does have some massive holes:

1) An ICE may be 25% efficient, but burning fossil fuel in a power plant is also only about 45% efficient. Taking into account 15% electricity transmission loss and 10% battery charging loss, the truth is there isn’t much difference in fuel-to-wheel energy efficiency.

But the trade off was *never* about thermodynamic efficiency, that’s just rubbish.

2) Ironically, your statement “assuming that petroleum is extracted without using any energy” is the key one, both positive and negative. 30-40% of all the primary energy goes into refining it to petrol, and that’s bad. Whereas oil-burning generators are more forgiving and can use a wider range of fractions.

But, the UK moved to gas from oil for generating, not for thermo efficiency reasons, but because it is dispatchable, ie spins up quickly in response to demand. In fact, you couldn’t have wind generation *at all* if we hadn’t done so. It would be simply impossible to run a Grid like that.
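The fuel-to-wheel comparison in point 1 works out roughly like this (stage efficiencies are the post's approximate figures, not measured values):

```python
# Fuel-to-wheel efficiency chain sketched in point 1 above.
# All stage efficiencies are the post's approximate figures.

plant = 0.45         # fossil power plant thermal efficiency
transmission = 0.85  # 15% grid transmission loss
charging = 0.90      # 10% battery charging loss

ev_fuel_to_wheel = plant * transmission * charging
ice_fuel_to_wheel = 0.25

print(ev_fuel_to_wheel)   # ~0.34 -- not far from the ICE's 0.25
```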


Re: Power required

*Theoretically*, dynamic pricing fixes the problem but in practice there are so many problems.

Firstly, does variable pricing work in the real economic world? In the UK we have Economy 7 day/night pricing, to support the very well established fact that it is cheaper to generate electricity at night than during the day. And.....only a minority actually have that meter installed. So, a decades-long market experiment tells you there's a problem with persuading people to act on what's in their economic interest.

Secondly, how will this dynamic pricing be implemented? Perhaps everyone will have a "much smarter meter" installed. How has the deployment of the first generation of smart meters gone? Has it actually justified the £14 billion cost? Did the cost-benefit calculations of the eco experts turn out to be correct, or regulatory capture crap?

Or, perhaps everybody will have an app? What happens when that IT stuff goes down? We can see today: there are several apps which direct you to EV charging points; when the *pricing* goes offline because of 4G outage / website outage etc, the charging point has to go offline even if the leccy works, to price-charge the user correctly. At any one time 10-20% of charging points are offline due to this.

Thirdly, you've ignored the way the infrastructure organisation was actually sliced up by Thatcher for ideological reasons. The stability benefit accrues to a company called "National Grid". Consumer pricing is handled by dozens of separate companies called like E.ON. What is the financial benefit to *E.ON* to implement a costly IT system, and *reduce its price at night*, to shore up the infrastructure of "National Grid"? And currently (pun intended) consumer supply companies compete for pricing on a quarterly time period - how is this going to work if they compete minute-by-minute at charging stations, it would be mayhem. You would need to re-nationalise the whole system to align its incentives.

Fourthly, as you pointed out, this is decades old research based on the old electricity grid. The 21C grid contains highly variable and unpredictable generators like wind. You can't just wait "until the price of electricity dropped to near the lowest value of the previous day." If yesterday was a windy day, and today is dead calm, the price might never drop anything near - it might double or treble overnight. The problem is actually the unpredictability, not just variability. I can't "set the car to wait", because it has to be charged when I wake up to get to work, and the price might or might not drop to my target overnight. You've now got twenty million people addicted like sad gambling day-traders, checking their phones every few minutes to see when to charge their car.

Europe's digital identity system needs patching after can_we_trust_this function call ignored


Much more serious than you think?

“The scope of these vulnerabilities, we note, is rather limited: the software is used by countries to talk to the systems of other countries. It could, therefore, potentially, be used by agents of one nation to pretend to be citizens of another nation – or by miscreants that somehow managed to impersonate or compromise an eIDAS-Node deployment, at which point, you've got bigger fish to fry.”

Are you sure? I read the back-link, and the bigger picture looks much more serious.


Seems like currently any EU citizen has, or can have, a smart card reader to read their National ID card, and a ton of organisations have agreed to use the same ID software, with potentially common security failure modes.

All I have to do to subvert this system is get hold of any citizen card reader, open it up, and MITM some of its responses towards a few well-chosen organisations’ web portals, since the standard server-side software wasn’t verifying signatures.

“Several public and private organisations allow this login mechanism (e.g. the online tax filing portal, several De-Mail services, several insurance companies)”

OK.....well, I bet the German online tax filing portal patches this PDQ. But every insurance provider and telco? Everywhere in the EU? I just have to find the weakest two or three organisations who fail to patch, out of maybe thousands in the EU, log on there to pwn that ID, and redirect those mailing addresses to wherever I like. Normally you only need two or three letters from a telco, utility or insurance provider to your address as evidence of ID for getting other IDs. This is the mother lode of ID fraud!

Cringe as you read Horrible Histories: UK Banking Sector, sigh as MPs finger cloudy Big 3 as future risk


Re: Contractor exodus

Nobody said that the “contractors” (who were/are nothing of the sort) had it “easy”. We do say that it is tax-dodging. Yeah, one reason why most people didn’t go that way is being risk-averse, but can you explain why somebody’s personality trait should be rewarded by the tax system while doing exactly the same work?

Let me propose the following law to you, and see how you feel about it.

Companies who employ more than ten people, administering PAYE, should be allowed to “pre-spread” the income across their employees such that the income tax band to be paid is based on the average, not per-person. Companies can re-allocate some of the total tax saved to increase the salaries of cleaning staff well above minimum wage, to make it worth their while too. Everyone’s a winner.

Logically, what’s wrong with it? Contractors do their own admin, cleaning, HR. Just because permies choose to specialise roles within our company, making some people go into higher tax bands, why should we pay more tax on aggregate for the same total amount of income and work?
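A toy illustration of the “pre-spread” proposal, with simplified hypothetical bands (0% up to £12.5k, 20% up to £50k, 40% above; not real UK tax rules):

```python
# Illustration of the "pre-spread" proposal above: tax the same total
# payroll at the average band instead of per-person. Bands here are
# simplified and hypothetical, not real UK tax rules.

def tax(income):
    """Progressive tax: 0% to 12.5k, 20% to 50k, 40% above."""
    t = 0.0
    t += max(0, min(income, 50_000) - 12_500) * 0.20
    t += max(0, income - 50_000) * 0.40
    return t

salaries = [80_000, 30_000, 20_000, 20_000]  # one specialist, three others

per_person = sum(tax(s) for s in salaries)
averaged = tax(sum(salaries) / len(salaries)) * len(salaries)

print(per_person)  # 26,000 under individual banding
print(averaged)    # 20,000 on the same total income at the average band
```

Same total income and work, lower aggregate tax once roles are specialised: that asymmetry is the point of the rhetorical question above.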

Boeing admits 737 Max sims didn't accurately reproduce what flying without MCAS was like


Re: Say, w

Boeing have just ignored the logical contradiction at the heart of this.

“The system as a whole” requires MCAS for this aircraft to be fully aerodynamically stable.

It just doesn’t make any sense to disable MCAS. What actions would you expect the pilots to take that can be guaranteed safe and save the aircraft? It’s not even hypothetical: there are two options, and they have to pick one to write into the Ops Manual. Whichever they pick, everyone on board is going to die with 50% probability in the case of a single AoA sensor failure. This is not a credible solution.

The pilot sees the AoA disagree light, and MCAS disengages. At this point, *one* of the AoA sensors is indicating that the plane is pitching up into a stall, but not the other. The pilot doesn’t know any more than MCAS does which one is broken. Either:

1) The pilot guesses that the left AoA is correct, which claims they are in level flight. They decide not to trim down. 50/50, the left AoA is lying, and the aircraft really is pitching up uncontrollably. The aircraft rapidly pitches up further, making it physically harder to take corrective action, and within maybe 30 seconds the aircraft is not recoverable. The aircraft stalls, and if it is within a few hundred feet of the ground (as both of the crashed aircraft were) it crashes without time to do anything.

2) The pilot guesses that right AoA is correct which claims they are pitching up. They decide to trim down, exactly like the MCAS would have. 50/50, the right AoA is lying, and the pilot drives the plane straight down into terrain.

Boeing are wilfully failing to understand that the AoA sensor is *part of the control loop* for stability. It doesn’t stand outside it, just because there is a pilot in the loop. Therefore, the AoA should follow standard design criteria for safety critical, which is triple redundancy to allow for single failure, plus voting. It really is that simple.
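A minimal sketch of the triple-redundancy-with-voting criterion being argued for (a hypothetical illustration, not Boeing's actual design):

```python
# Minimal sketch of triple redundancy with voting, as argued above:
# with three AoA sensors, one arbitrary failure is outvoted by taking
# the median reading. Hypothetical illustration, not Boeing's design.

def voted_aoa(readings):
    """Median of three sensor readings; tolerates any single failure."""
    assert len(readings) == 3
    return sorted(readings)[1]

# Left sensor has failed high, falsely claiming a pitch-up into stall:
print(voted_aoa([21.0, 2.5, 2.4]))  # 2.5 -- the failed sensor is outvoted
```

With two sensors you can only detect disagreement; with three you can also decide which one to ignore, which is why "AoA disagree plus disengage" is not equivalent.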

AMD sees Ryzen PCs sold with its CPUs in Europe as Intel shortages persist


Re: Intel contraints

By "financial business", do you mean "banks, insurance companies, financial services"?

I think that misses the big picture. The majority of jobs in the wider economy are either not desk-bound, or use some very stereotypical system which can be classic "thin client" - e.g. tree surgeon or estate agent (tablet with app), or call centre employee (headset & keyboard).

Financial services have a workforce of only one million in the UK => 250k annual replacement PCs max. Well over half of those fall into the call-centre or other non-typing-huge-program-or-document category, leaving you with barely 100k unit sales per year. That supports a cottage industry, but certainly not Intel, which is probably why they are throttling this market in favour of supplying the hyperscalers & data-centres, which is where their main market is now (already).

There's a great model for "computer cottage industry": mainframes. Mainframe manufacturers still exist, exactly to supply support for legacy systems of banks. I suspect that's the future for desktop PC's, coming sooner than we think.

But there's good news! If I'm right about this, Windows will be just a miserable memory within ten years. Essentially, the only people who need to do this very specific kind of "real work" will be exactly the businesses / users for whom Linux is the workhorse anyway. Non-power-users will find that tablets are sufficient for their needs, particularly as MSOffice is available there.

Reaction Engines' precooler tech demo chills 1,000°C air in less than 1/20th of a second


Re: This is truely impressive

I used to think this was amazing engineering, and indeed it is. This stuff is Really Difficult.

But I’m less certain than I used to be, that this is the right technical way forward for launchers.

If we look at what Space-X are doing - Elon Musk may be a first-class twat, but he teaches us a lot of important lessons.

1) Fuel efficiency and delta-v aren’t the most important thing. Fuel is 2% of launch cost.

Fuel mass is only important, in that it scales the cost of dry mass of the tanks, and the lifting power of main engine.

Dumb dry mass is cheap, smart dry mass is expensive.

2) SSTO is cool, but not ultimately critical. Re-usability of top stages is most vital. Cost drives everything, and simpler turns out to be cheaper than smaller.

3) Even weight efficiency of the structure isn’t critical. Build cost is critical. The latest for BFR is welded stainless steel, like the Flash Gordon movies, rather than aluminium and carbon fibre for the main tanks. It’s just cheaper, faster and easier to build, and the extra fuel to lift it still isn’t the dominant factor.


Twitter: No, really, we're very sorry we sold your security info for a boatload of cash


Re: De ja vu

Megacorps aren’t maximising value for shareholders any more, particularly not the FAANGs. They just maximise the pay packets of the C-suite for a decade or so.

Share*holders* make negligible profit from this, because these companies pay little if any dividend. The life cycle of these companies is roughly:

1 “we’re growing massively we need to invest in growth not dividends”

2 “we’re massive so we have to avoid tax, which we do by retaining earnings and not giving dividends”

3 “ the next big thing has arrived, seems our product and company is now worthless, sorry”

Share speculators make money on the way up, lose it on the way down, Twitter scams are great for them. Ironically, the share speculators don’t really care whether there are any real Twitter accounts at all, or they are just botnets. The underlying fundamentals are entirely irrelevant, as the only important game is to time the pump and dump. The pension funds have to follow the trackers, so it’s the pension fund “shareholders” that are always the loser in this zero-sum lifecycle.

Not to over-hype this storage chip tech, but if I could get away with calling my first-born '3D NAND', I totally would


Difference between 3D NAND and TSV chiplets?

What’s the cost-benefit between 3D NAND and TSV chiplets?

Obviously, there’s a huge win from 2D NAND up to, say, 128 layers. But there must come a point where the cost of putting down each layer contributes most of the total cost, and then the price of the chip just scales linearly with the number of layers. At that point, why not just make, say, 128-layer chiplets and stack them on top of each other using TSV or interposer technology, as high as you want, like the Hybrid Memory Cube?

Anybody know where the limit of this might be?
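One way to frame the question is a toy cost model; all parameters here are made up purely for illustration:

```python
# Toy cost model for the question above: a monolithic 3D NAND die where
# each deposited layer adds cost, vs stacking fixed 128-layer chiplets
# with a per-die TSV bonding cost. All numbers are made up; the point is
# the shape of the scaling, not the values.

def monolithic_cost(layers, base=10.0, per_layer=0.5):
    """Die cost: fixed base plus a cost per deposited layer."""
    return base + per_layer * layers

def stacked_cost(layers, chiplet_layers=128, tsv_bond=3.0):
    """Cost of reaching the same layer count by TSV-stacking chiplets."""
    chips = -(-layers // chiplet_layers)  # ceiling division
    return chips * monolithic_cost(chiplet_layers) + (chips - 1) * tsv_bond

# Once per-layer cost dominates, both approaches scale roughly linearly;
# the stacked approach wins where TSV bonding is cheaper than the yield
# loss on very tall monolithic stacks (yield is not modelled here).
for n in (128, 256, 512):
    print(n, monolithic_cost(n), stacked_cost(n))
```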

UK.gov's smart meter cost-benefit analysis for 2019 goes big on cost, easy on the benefits


Re: If "smart meters" are so good

*How* exactly would a smart meter help me to use less energy?

The one thing it doesn’t do, is tell me which devices are using energy and when.

I *could* do a bunch of experiments turning stuff off and on, and look at what it displays. I can do that with a dumb meter now, and just watch the spinning disc, and I have once on a wet weekend. I’m sad.

If anything, in practice smart meters make it harder/worse. If I really want to know what actions to take, it doesn’t take a rocket scientist to add up the bulb wattage in the room and figure out whether it makes sense to go out and buy LEDs. Using the meter to measure the usage of the washing machine is far less accurate than just googling the power per wash, because of spin cycles etc.; ditto fridges and freezers. The only thing the smart meter gives you, compared to waggling your head around the house + google + pen and paper, is the ability to run a bad experiment and draw an unwarranted conclusion.

Put another way. This is a software site, and a fair few of us care about the power efficiency of the code we write for embedded etc. If you needed to do a power analysis of the impact of your software to present to the system architect, would you:

A) Get a very large envelope, and scribble some formulae however simplistic, relating CPU usage, frequency, memory bandwidth yada yada, and a couple of use cases

B) Find a bit of software that’s sort of similar and extrapolate because it’s better than nothing.....wait for it, you haven’t seen all the options yet.....

C) Both A and B, and sanity-cross-check....

D) Run the software on your Work PC, and read the current draw off the UPS that supplies your floor of the office.
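For what it’s worth, the “add up the bulb wattage” calculation from option A really does fit on an envelope — no smart meter required. A sketch with assumed, illustrative prices (bulb counts, tariff and LED cost are all guesses, not measurements):

```python
# Back-of-envelope LED payback, option-A style. Every figure below is
# an assumption for illustration (roughly UK-ish prices), not data.

N_BULBS       = 6       # halogen bulbs in the room
HALOGEN_W     = 50.0    # watts per halogen bulb
LED_W         = 6.0     # watts per equivalent LED
HOURS_PER_DAY = 4.0     # typical daily use
PRICE_PER_KWH = 0.15    # £ per kWh
LED_COST_EACH = 3.0     # £ per replacement LED bulb

saved_kwh_per_year = N_BULBS * (HALOGEN_W - LED_W) / 1000 * HOURS_PER_DAY * 365
saving_per_year = saved_kwh_per_year * PRICE_PER_KWH
payback_years = (N_BULBS * LED_COST_EACH) / saving_per_year

print(f"annual saving: £{saving_per_year:.2f}, payback: {payback_years:.2f} years")
```

With these guesses the bulbs pay for themselves in a few months — a conclusion you can reach with a pen, paper and the wattage printed on the bulb.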

700km on a single charge: Mercedes says it's in it for the long run


Re: 350kW!!!!

“everyone can plug in at the same time, the cars will simply charge at the rate the grid can handle.”

No, that’s not the way electrical grids work.

Firstly, grids don’t discriminate between “car” and “your employer’s computer system”. If demand exceeds supply instantaneously by 30%, that’s a general brownout where most electrical goods don’t work.

And secondly, that’s *really* not the way grids work.

Grids target 50Hz as the way to maintain an exact balance between supply and demand. If the frequency drops past the 49.5Hz limit (1%), they drop entire sections of the grid off completely until balance is regained. Once a section has been dropped, it takes between ten minutes and an hour to re-stabilise and reconnect. Can you see the problem?
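To make the mechanism concrete, here is a toy model of under-frequency load shedding. The droop factor, section sizes and supply figures are invented for illustration — real grid protection schemes are far more nuanced — but it shows the key point: an overload isn’t met with gentle throttling, whole sections go dark until the frequency recovers.

```python
# Toy model of under-frequency load shedding. A grid targets 50Hz and
# sheds entire sections once frequency falls past 49.5Hz (the 1% limit).
# All constants here are illustrative assumptions, not real grid data.

NOMINAL_HZ = 50.0
TRIP_HZ    = 49.5

def grid_frequency(supply_gw: float, demand_gw: float) -> float:
    # Crude droop model: frequency sags in proportion to the shortfall.
    shortfall = max(0.0, (demand_gw - supply_gw) / supply_gw)
    return NOMINAL_HZ * (1 - shortfall * 0.1)

def shed_load(supply_gw: float, demand_gw: float, section_gw: float = 2.0) -> float:
    """Drop whole grid sections until frequency recovers past the trip limit."""
    shed = 0.0
    while grid_frequency(supply_gw, demand_gw) < TRIP_HZ:
        demand_gw -= section_gw   # an entire region goes dark, not a gentle slowdown
        shed += section_gw
    return shed

# Demand exceeding supply by 30% forces sections off entirely:
print(shed_load(supply_gw=40.0, demand_gw=52.0), "GW of load shed")
```

In this toy run, a 30% overload on a 40GW grid forces 8GW of consumers off completely — and, per the post, each dropped section then takes ten minutes to an hour to reconnect.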


Re: 350kW!!!!

Actually, we both missed something here.

I agree that I mixed up peak power and average mileage, I got the numbers wrong. But your analysis is even more wrong.

Firstly, I said that 350kW charging was unacceptable, but anyway so was 30 kW. The article is based on a car that charges at 350kW but you casually down-scoped by a factor of 12.

“Overnight” is just not how plugging something into a socket works. Most people will typically arrive home roughly at clocking off time, synchronously, and plug in. At that time the full charging load kicks off for some length of time, even if it *theoretically could* be trickled overnight.

The *right* solution is probably to limit typical domestic charging to 10 kW (not even 30) then everything automatically trickle charges overnight. Fast charge can be allowed, but must be separately and heavily premium priced to prevent everyone doing it.

We actually agree on that side of the practical solution, you just chose to disagree for whatever reason.

Your idea that people will voluntarily agree to use their car batteries as spinning reserve for the national grid is economic rubbish for several reasons:

1) The car battery lifetime depends on the number of charge/discharge cycles. Nobody is going to reduce the lifetime of their car to help out the grid, without both being paid a lot for it and having control over the process. And “paid” doesn’t mean per kWh, it’s paid for the capacity and capital depreciation. The grid would have to be prepared to pay every single car owner at least a thousand a year for the impact on their car. That’s paying £1000 to a customer who is only paying £500 annual total for their electricity. I don’t think the electricity company would stay in business long.....

2) Obviously, I can’t risk going empty when I want to drive, so any smart charger must limit the usage to say 10% of capacity. You actually do recognise above that this requires over-provisioning the car to compensate, to achieve the same effective range. So, you are just replacing efficiently concentrated industrial capacity, with capacity that is both inefficiently fragmented and *hauled every day on the roads*. That’s just crazy.

3) Recognising the idea as unworkable at scale, you just say “not overnight......there is time to build the extra capacity”. According to both the green lobby and car companies, the car transition should be largely complete by 2030. It would take a minimum of 20-30 years to even double our current grid capacity. So, no, there isn’t time even if we had started ten years ago.
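The depreciation argument in point 1 fits on an envelope. With assumed figures (a £9000 pack good for 1500 full cycles — both guesses for illustration), battery wear alone from daily 10% cycling runs to roughly £200 a year against the ~£500 annual electricity bill, before the capacity payment, loss-of-control premium and inverter hardware that push the total compensation towards the £1000 cited.

```python
# Rough check on point 1: what daily V2G cycling alone costs a car
# owner in battery wear, before any capacity payment, control premium
# or inverter hardware. All figures are illustrative assumptions.

PACK_COST     = 9000.0   # £ replacement cost of the battery pack
CYCLE_LIFE    = 1500     # full charge/discharge cycles before replacement
V2G_FRACTION  = 0.10     # grid draws 10% of capacity per day (see point 2)
DAYS_PER_YEAR = 365

cost_per_full_cycle = PACK_COST / CYCLE_LIFE          # £ of wear per full cycle
annual_wear = cost_per_full_cycle * V2G_FRACTION * DAYS_PER_YEAR

print(f"wear cost of daily 10% V2G cycling: £{annual_wear:.0f}/year")
```

Wear is only the floor of the required payment: a rational owner also wants compensating for the risk of finding the car flat, which is why per-kWh pricing alone can’t make the sums work.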


Re: No Need to Panic

No, you’ve missed the hidden assumption in the phrase “a lot of these will be overnight”.

It’s *possible* to make it work, but it will be an epic ClusterFK unless someone actually manages the technology correctly.

“Overnight” is not how plugging something into a socket works today. Most of the population gets home at (roughly) clocking off time, and will plug their car in when they get home for “overnight topup”. But the plug doesn’t *know*. The entire topup would happen within 10 minutes after they get home, at 350kW per car, pretty much synchronously across the country. Immediate catastrophic blackout.

The way to manage this, is simply to limit the *average domestic* charging to 10kW, so it automatically spreads the load overnight. And coincidentally doesn’t require change to the domestic infrastructure.

It’s fine to have a fast-charge plug, but it has to be priced at premium if not punitive levels to prevent people lazily using it as the typical case.
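The difference between synchronous fast charging and a managed 10kW trickle can be sketched in a few lines. Fleet size and the daily top-up are assumptions for illustration (10 million cars plugging in around clocking-off time, 12kWh per commute):

```python
# Peak grid load: everyone fast-charging at once vs a staggered
# overnight trickle. Fleet size and top-up energy are assumptions.

CARS            = 10_000_000  # cars plugging in around clocking-off time
TOPUP_KWH       = 12.0        # typical daily commute top-up per car
FAST_CHARGE_KW  = 350.0
TRICKLE_KW      = 10.0
OVERNIGHT_HOURS = 8.0

# Everyone fast-charges at once: the grid sees the full connected load.
sync_peak_gw = CARS * FAST_CHARGE_KW / 1e6

# Trickle at 10kW: each car needs only ~1.2 hours, so staggering the
# starts across 8 hours means only a fraction draw power at any instant.
hours_per_car = TOPUP_KWH / TRICKLE_KW
staggered_peak_gw = CARS * TRICKLE_KW / 1e6 * (hours_per_car / OVERNIGHT_HOURS)

print(f"synchronous 350kW peak: {sync_peak_gw:.0f} GW")
print(f"staggered 10kW trickle peak: {staggered_peak_gw:.0f} GW")
```

Same cars, same energy delivered: 3500GW if the plug is allowed to decide, a couple of tens of GW if the load is spread — which is the entire case for limiting domestic charging.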


Re: 350kW!!!!

In the sense you mean it, indeed no it definitely won’t. Wind power often achieves less than 5% of its rated capacity across the whole of the U.K. for a couple of weeks at a time, so you need 30GW of backup supply sustained for a fortnight!

To give a feel for the safety issues of energy storage failure, for a technology like pumped hydro, it’s 1000 Dinorwigs. Look at the map of the U.K. and realise that there is nowhere you could be more than 10 miles from one. The risk of building them so close is unacceptable, because a dam failure would wipe out a large town with a tsunami, costing tens of thousands of lives.

Dinorwig actually proves that this tech *isn’t* appropriate to cover windpower outages. If safety weren’t an issue, they would have built it closer to where the energy would be used, reducing the transmission costs, but they didn’t. It’s not just where the mountains are. Compare it with a nuclear power station. Those are also built in out-of-the-way places for safety reasons, with similar generating potential (1-2GW). But Dinorwig only generates power for 5 hours at full whack, and wind power needs 300-hour (at least) coverage to make it usable as a primary source. You need to find 60x as many safe sites for hydro as nuclear, when you think of it as wind power cover. In a country the size of the U.K. that isn’t going to happen.

Why did they build Dinorwig at all? Simple. In the old days, storage meant “for as long as it takes a coal-fired or nuclear station to spin up”, which is a couple of hours, and a good match for Dinorwig’s capabilities. It’s wind and solar that make demands that can’t be met.
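The “1000 Dinorwigs” figure above checks out on an envelope. Taking Dinorwig as roughly 1.7GW for about 5 hours (~8.5GWh of storage) and the 30GW-for-a-fortnight shortfall from above:

```python
# Sanity check on the "1000 Dinorwigs" figure: how many plants the
# size of Dinorwig (~1.7GW for ~5 hours) would cover a 30GW wind
# shortfall lasting a fortnight. Round numbers, order-of-magnitude only.

DINORWIG_GW    = 1.7
DINORWIG_HOURS = 5.0
SHORTFALL_GW   = 30.0
FORTNIGHT_H    = 14 * 24

needed_gwh   = SHORTFALL_GW * FORTNIGHT_H        # energy to be stored
dinorwig_gwh = DINORWIG_GW * DINORWIG_HOURS      # energy in one Dinorwig

print(f"{needed_gwh:.0f} GWh needed ≈ {needed_gwh / dinorwig_gwh:.0f} Dinorwigs")
```

About 10,000GWh, or on the order of 1100-1200 Dinorwigs — consistent with the round “1000” used above.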

Ironically, we do know of one extremely stable, energy-dense storage medium, with engineering legacy. Hydrocarbon. And that’s not as crazy as it seems. If we had a means of driving carbon-capture from the atmosphere electrically into hydrocarbon, that would be a great storage tech, and then burn it in a standard gas turbine when needed. It would also be *the* grand solution for the atmospheric CO2 levels already baked in.


Re: 350kW!!!!

Perhaps we are looking at this from the wrong end of the telescope?

The society-wide solution can’t be either 350kW or even 30 kW domestic charging anyway. If you allow that, then people all choose to charge at the same time when they get home or before they go to work. The peak power draw across the U.K. would be 3500GW, 100x what we can generate.

But the typical current domestic limit of about 10kW fits reasonably with the existing power generation capability. So most of us, most of the time, will just have to re-train ourselves to trickle-charge overnight at 10kW.

There does need to be a solution for the times when you forgot and have to charge the car quickly. Then you go to a charging station that can provide 350kW, and pay through the nose (maybe quadruple normal cost). That’s a mistake you won’t make more than very occasionally!

Gradually replace pumps with plugs at the 8000 petrol stations in the U.K., at 2-4MW each; at 50% peak utilisation that should be less than 10GW, which seems achievable. The difficult bit is going to be ensuring that “petrol/charging stations” remain economically viable enough to stay open.

Obviously, this does mean building out some more nuclear plants, rather than windmills. But it’s not complete fairyland like “in ten years everyone will be driving electric, and by the way rewire their house, magically find off-street parking, totally re-engineer national grid, and increase generating capability by 100x”
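The charging-station build-out numbers above reduce to one line of arithmetic. Note that the low end of the 2-4MW range lands under the 10GW cited; at the high end it would be nearer 16GW, so “less than 10GW” assumes the smaller stations:

```python
# National peak load from converting the UK's ~8000 petrol stations to
# charging stations at 2-4MW each, with 50% of that capacity drawn at
# peak. Station counts and per-station power are the post's estimates.

STATIONS         = 8000
MW_PER_STATION   = (2.0, 4.0)   # low and high estimates
PEAK_UTILISATION = 0.5

peaks_gw = [STATIONS * mw * PEAK_UTILISATION / 1000 for mw in MW_PER_STATION]

for mw, gw in zip(MW_PER_STATION, peaks_gw):
    print(f"{mw:.0f}MW per station -> {gw:.0f} GW national peak")
```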


Re: 350kW!!,

I don’t think there will be brownouts because the system can’t be allowed to operate like that at scale.

Half the population are going to want to top-up just before they go to work, drop the kids at school, or whatever else. There are 20 million cars on the road in the U.K., not even counting the big rigs. That’s 3500GW peak power, which is 100x the available generating capacity. Restricting charging to 30kW barely touches the problem, as it’s still 10x available peak capacity. Nor does telling them to plug in when they get home in the evening - same problem, different time.

Once electric cars become more than a minority sport for the 0.1%, people simply can’t be allowed to decide when to charge their car. You will have an allocated time each day. There is no other way, this is a limitation of the tech. To be fair, it probably won’t look exactly like that to the consumer - more like overnight trickle-charging at reasonable prices; plus on-demand charging for emergencies but the price per unit would be quadruple to 10x base-price.

So, the *real* irony is that the whole concern of “how long does it take to charge” will just disappear, because electricity will be re-priced to prevent people convenience-charging.

Bus pass or bus ass? Hackers peeved about public transport claim to have reverse engineered ticket app for free rides


Re: Shut it down

Contrasting two business models:

#1: A really large vehicle ("bus"), sharing the cost of the journey between 20-50 people. It maximises sharing by scheduling on predefined routes, at predefined timings. It needs public subsidy.

#2: A small vehicle ("Uber"), travelling from where you are, to where you want to go, when you want to do so. This is incompatible with ride-sharing, but has been named part of the "sharing economy". Apparently this is an economic miracle, such that even a company which only runs the scheduling and payroll is worth billions.

Yes, #2 is just a scam, defrauding both investors and drivers.

But it's strange that #1 can't be profit-making. The bus must be profitable when full, so the problem is likely that most buses run nearly empty. It's a Pareto paradox - 80% of the people want to use 20% of the buses, so 80% of the people see the buses as crowded, while actually 80% of the provided capacity runs nearly empty and un-profitably.

But you would think that something like an Uber app for buses could be exactly the thing. When the bus is full, it sticks to route & time. But when it's nearly empty, it can easily divert a few streets to pick up randomers on request, while only delaying one or two passengers on board for a couple of minutes. Plus, many buses have really winding routes around town to pass stops that might have passengers at them but usually don't.

Of course, you would have to explain to people that by doubling passenger numbers during quiet times, they could halve fares......
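The fares claim is just fixed-cost division. A bus journey’s cost is mostly fixed (driver, fuel, vehicle) regardless of occupancy, so the break-even fare is cost over passengers — here with an invented £60 per journey purely for illustration:

```python
# Why doubling off-peak ridership roughly halves the break-even fare:
# a bus journey's cost is mostly fixed, so fare-per-passenger is just
# cost divided by heads. The £60 figure is an illustrative assumption.

COST_PER_JOURNEY = 60.0   # £ fixed cost of running one bus journey

def break_even_fare(passengers: int) -> float:
    return COST_PER_JOURNEY / passengers

print(break_even_fare(10))   # quiet bus
print(break_even_fare(20))   # same bus after on-demand pickups
```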

Bus companies are clearly getting *something* badly wrong, and should consider whether there is some more flexible service they could provide than traditional- otherwise they will locally optimise themselves all the way to bankruptcy.



Biting the hand that feeds IT © 1998–2020