Re: Keyboard layout
Shriek! - is what it's called in mathematics, where it denotes a factorial (5! = 5×4×3×2×1 = 120).
I'm sure there's a lot of things D-Link could have done better, like using toolchains and languages that are safe against buffer overflows and other common attack vectors.
That only makes it a bit better: it is still a small, cheap processor facing the whole internet, where bad actors have immense resources and indefinite time to construct an attack.
Using open-source software is almost certainly a better approach - it does normally ensure that the code gets a lot of expert review - but even then it cannot be guaranteed perfect forever: the "forward security" problem. So it needs remote re-programmability to install patches, updates and so on - and including these features massively increases the attack surface and the potential consequences of a successful attack, since the device can now totally reprogram itself with hostile code.
In this case it seems to be exactly that bit, the over-the-air reprogramming function, that carries the vulnerability - and that code is itself not reprogrammable, to avoid "bricking" problems. So they're stuffed: patches won't work.
The only approach I know of would be to formally verify the router code, using formal methods - and then remove any reprogrammability function.
OK it takes longer, costs more, but you end up with an inviolable product that never needs updates.
It is a big pill to swallow - you can pretty much forget about stack-based languages, unless you can absolutely guarantee stack depth and integrity for all time - to mention just one issue.
Is it any sort of realistic possibility? I know that very complex software for oil refineries and chemical plants uses these sorts of methods. I'm guessing the hard allocation of memory means you need more of it.
Is it plausible for $50 routers? The volumes they get made in ought to dilute the SW cost down to nothing. Even if the HW cost more, it's a much better product.
What does the Reg-o-sphere think?
The insights of Gordon Welchman from Bletchley Park days are still classified, some of them at least. He worked on what would now be called metadata analysis, ie without any view of message content. An awful lot of valuable information can be gleaned from this, the full extent is still secret.
If I were a corporate IT bod, I'd be wary of putting this into the hands of others, from the above argument, and from the fact that it "ranks" particular documents, communications and persons as the most important. This is the key to understanding the technical, scientific and political hierarchy of an organisation - and a huge ready-prepared bounty for any "bad players".
This much is indisputable.
Onto slightly more conspiratorial matters, as per the title, what is the US legal status of information that does not include direct transcripts or disclosures?
Are cloud-storage providers, or database-management tools, allowed to share this metadata with whatever partners they choose? Similarly, if the authorities wanted visibility, what would be the legal status? It's not quite warrantless interception - which is definitely illegal - since arguably no "confidential" information, such as might be encrypted, has been disclosed. I don't know; perhaps caution rather than suspicion is needed.
Ultimately, what are the benefits to a company of the "cloud" storage, versus a backed-up server? I've always liked the fact that appropriate related documents are in the same sub-directory of the project, on the F:\ drive, and they stay there, to be browsed whenever, forever. To find the correct unique thread in Teams or whatever, as the only way back to a remembered document, is a huge impediment - and utterly unavailable for new members. And for anyone, if you decline to renew your license.
The batteries we're talking about here are Lithium Thionyl Chloride, they are Primary cells with zero electronics.
Most/all the comments above apply to Li-Ion rechargeable cells.
LiSOCl2 cells are a different construction, with a different set of risks - and in many ways worse.
The energy density is higher, and they're fully charged of course, unlike Li-Ion in the factory (typ 20%).
SOCl2 is highly reactive, corrosive, poisonous - and reacts with water to produce mixed acids.
Nice stuff. It's what they use to burn-through the epoxy when de-capping packaged silicon chips.
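For anyone curious, the hydrolysis is essentially SOCl2 + H2O -> SO2 + 2 HCl - sulphur dioxide plus hydrochloric acid fumes, hence the "mixed acids" above.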
I'd definitely send it back to earth without pilots aboard, just to be on the safe side.
You can't send the two pilots home in the Soyuz though, that would leave the ISS without an emergency escape vehicle.
I'll let NASA figure it out.
I don't think they have much love for Boeing at the moment.
Who'd have thought the idea of de-orbit and landing, using just a ring of 6 spacehoppers, was in any way flawed?
The real prize for AI would be to detect irony, but that's going to be difficult since an awful lot of humans still don't get it.
I think the researchers, like many people, treat irony and sarcasm as equivalent terms. They are not.
Sarcasm is a blunt tool, with a definite intent to ridicule some victim - it sounds embittered and is normally perceived as quite offensive.
Irony is much harder to detect, indeed it is often missed, because it is on the limit of plausibility. The only possible victim is the self.
It is a deliberate risk: if it is not understood, then the speaker has conveyed an incorrect impression that he/she is some sort of idiot, or bigot. It's difficult to undo. Tell me about it.
For those who might want a further introduction to irony, I have to recommend Kate Fox in her book "Watching the English": https://edisciplinas.usp.br/pluginfile.php/4434518/mod_resource/content/1/Watching%20the%20English.pdf
For those attempting to acclimatize to this atmosphere, the most important ‘rule’ to remember is that irony is endemic: like humour in general, irony is a constant, a given, a normal element of ordinary, everyday conversation. The English may not always be joking, but they are always in a state of readiness for humour. We do not always say the opposite of what we mean, but we are always alert to the possibility of irony. When we ask someone a straightforward question (e.g. ‘How are the children?’), we are equally prepared for either a straightforward response (‘Fine, thanks.’) or an ironic one (‘Oh, they’re delightful – charming, helpful, tidy, studious . . .’ To which the reply is ‘Oh dear. Been one of those days, has it?’).
I've posted before on the subject, https://forums.theregister.com/forum/all/2022/10/22/chrome_extension_howto/#c_4553678
> Can we stop it though?
Possibly not, though we might impede it long enough for other measures to come into force. Very much in line with immune system strategies.
A firmer mandate that commercial secrecy does not apply to public-money contracts would be good. The redacted Palantir contract is obviously hiding something that would be unpalatable to the established review methods.
The rules are simple - if it's public money, then terms and conditions are a matter of public record. For god's sake they already publish every word of parliamentary conversation in Hansard, why should predatory corporations be offered better terms?
Disallowing any one-on-one lobbyist conversations might be an answer: conversations with government should need at least two government parties present. This hugely complicates any deliberate manipulation.
Educating politicians about the risks of soft-capture, letting them know there are highly-trained manipulators out there, is another possible defence. Maybe give them an opportunity to declare themselves as being under threat - then let them wear a wire, so that our team can disassemble the methods targeted at them? OK, take-up would be zero, I appreciate that, but it's at least a problem defined - even if as yet unsolved.
There's not enough discussion and comment from the Regtards on this dreadful story. Perhaps because there are so many factors it's difficult to know where to begin. Fundamentally, we may have a genuine threat against the future of humanity. Yes that's a monster claim and I'm not entirely serious, but there are pernicious factors that need to be nipped out now.
Allow me to put down a few starters, for further discussion.
Firstly
American Healthcare Giants make untold trillions from the controlling position they have created - regulatory capture I think they call it.
Consider the "last dollar last day" pricing of drugs for terminal conditions - allowing a free market pitches the value of life against the value of money - just like meeting the grim reaper, "all I own for one more day?"
Our NHS pricing (aka NICE) keeps them honest, and is a huge embarrassment to them. It reveals their cartel.
In fact, the pure existence of the NHS - which treats maybe three times the patients for the same money - is unwanted proof that the "free to all" model works. Were it not for this example, it would soon be accepted that such things can never work.
Secondly
Our pesky insistence that medical drugs cannot be advertised to the public is under sustained threat, and is starting to falter. Look at the TV advertising for what is just a combination of ordinary painkillers, paracetamol and ibuprofen, and the price they charge - versus just combining the generic medications and taking two tablets. The promotion and branding of these trivial combination medicines is a huge moneyspinner, and the big pharmaceutical companies are desperate to let it loose over here.
On a much darker note, the promotion of "non-addictive" opioids has resulted in what, 500,000 deaths in the USA? - never mind the "less than death" adverse outcomes. Purdue Pharma are offering $6bn of the $18bn they salted away (my figures from memory) - and sure, go round again...
Thirdly
Parliamentary lobbying cannot be allowed to continue in its current form. We are setting our mild and well-intentioned MPs against the world's best with regard to influence and control. These people will deconstruct any target psyche into bits, and work from there - look at the individualised text messages to American voters on social media. There's no need to bribe or blackmail any more. Donald Trump managed to get elected, and later to get Congress stormed, using manipulation of opinion alone.
Now look at how the new Palantir contract has been signed and delivered - including all the redactive measures to ensure it cannot be scrutinised. You can't get this from bribery or force, it needs full-on psychological conviction - the target needs to get it past all the colleagues, committees, civil service, legal - without being aware he is being carefully coached in each of his arguments. Quite possibly - and the numbers are speculative - this might be an individual MP pitted against a team of twelve, all expert in finding the motivations, convictions, principles of otherwise decent people, and corrupting them.
What chance does even the best well-intentioned MP have against all this?
Well, oversight within parliament helps - but this seems to have been breezed through, contracts already awarded. Our final line of defence is openness (i.e. non-secrecy) - and the independent scrutiny this allows, forever. What we see here is psychological control with careful elimination of all countermeasures.
It is not so much Regulatory Capture as Parliamentary Capture.
We cannot sit by and let this happen.
I totally applaud what they're doing, but with one serious concern:
OEM-manufactured Laptop batteries and chargers are amongst the safest and most reliable consumer electronic devices on the planet. They are like fridges, we trust the fridge to never catch fire, we just do.
The risk of midnight fires in living quarters is very serious indeed, but is almost entirely eliminated by stringent controls on the design, manufacture - and tamperproof nature - of sealed battery packs and chargers.
Allowing the possibility that some of the population of packs might be inexpertly repaired - even a tiny fraction - completely ruins this risk profile.
It will mean that we all need to switch off at the wall before going to bed, and the laptop won't be fully charged in the morning.
I don't think that batteries/chargers can be repaired or remanufactured to the same level of safety as new parts, not in any way that could be underwritten.
So, regrettably - due to their energy density and risks - they need to be ground-up and recycled after their 5-10 year design life. I'm only talking about batteries, and possibly mains supplies.
This is the only way I can see to maintain their "fridge" status as infallible devices.
I'm very happy to be proven wrong - but I'd like to inspect the proof rather carefully, if I may.
I love Thinkpads, and own five of them.
All except the P50 - where the 90W supply is a gigantic slab, and the 60W supply will not allow the computer to charge and run at the same time!
This is an appalling design decision, the PC needs to be fully shut down or else the 60W supply does nothing at all.
I can see why they did it, the PC might take 80W peak, and this would overload the 60W charger - but it's not beyond wit to design the power system so that the battery can "help out" on peak loads whilst mostly charging - it should be a design requirement.
I don't know what other models are broken by design like this, but please Lenovo, don't ever do it again.
If you're interested in this sort of thing and haven't read Tim Worstall, do go into the Reg archive and have a browse.
He made the point clearly, that rising prices turn dirt into ore - like a receding sea level exposes more land.
Because of this, attempts to corner the market (looking at you China), usually fail.
However, this is better still - a newly discovered find, not one that has been promoted into viability. It's added maybe 50% to known reserves, in a concentrated form - and suggests there may be similar instances elsewhere on the planet.
The Lithium problem seems like it might now have gone away, with this and "Direct Lithium Extraction" - which looks like it will dramatically accelerate the process, with reduced energy costs, environmental impact etc.
see here, for instance...
https://www.cnbc.com/2023/06/05/how-new-lithium-extraction-tech-could-help-us-meet-ev-targets.html
Arguably, the worst one can be is "no better than a coin-flip".
If ChatGPT answers 99% wrong, it's doing really well - you just need to invert the result.
Obviously we're not talking binary options here, there are many more ways to answer wrong than to answer right - but it is then not a fair question to ask of a coin.
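A minimal sketch of the inversion trick for the yes/no case - the oracle and its numbers are made up, purely illustrative:

import random

def hopeless_oracle(question):
    # Hypothetical yes/no oracle that is wrong 99% of the time.
    truth = question["answer"]
    return truth if random.random() < 0.01 else (not truth)

def inverted_oracle(question):
    # If the oracle is reliably wrong, flipping its output makes it reliably right.
    return not hopeless_oracle(question)

questions = [{"answer": random.choice([True, False])} for _ in range(10_000)]
accuracy = sum(inverted_oracle(q) == q["answer"] for q in questions) / len(questions)
print(f"Inverted accuracy: {accuracy:.1%}")   # ~99%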
So far, Linux on a desktop has been adopted by those that have a strong preference for it, for all sorts of reasons.
Most people, currently, prefer Windows or Mac, as the stats show.
That preference is being constantly diluted as excellent free programs (I'll not call them apps, that's for mobile devices imo) such as Libre Office, KiCad, LTSpice, just keep on getting better.
Now, with M$ being so expensive and shite, a new factor emerges, aversion.
I don't want an operating system that constantly trawls the internet for clickbait sites to present to me, and that I can't switch off.
Nor do I want my every keystroke recorded, my every edit published, in a suite of Office programs that just get forever worse.
I've a strong aversion to any OS that locks me into cloud storage, sulks when offline, and regularly spends up to 30 seconds pleasuring itself - whilst ignoring any input.
I certainly don't want to have to pay for the misery, on top of everything, and at maybe twice the cost of the computer.
Now, it appears I don't have to - I can get all the things I like, from nice people, and for free.
There's no doubt we need more chip designers. The problem is that it is so complicated now that it can't be taught effectively, with real examples, at undergraduate level.
The toolchain handles too much - students need to understand how and why the tools do what they do, first. We need to know "basic woodwork" before programming CNC machines to do it all automatically.
I just propose that simpler, earlier toolchains should be freely available - and ARM could help in this, by open-sourcing a 1990's toolchain so that it is as easy to use as LTSpice for example. The support community then grows itself.
For real actual chips, I suggest the tools are configured to produce working designs on some sort of "baby process" - like 4um design rule in polysilicon, or maybe the Sedgefield IGZO lot? [Blair's constituency, heh...] - they claim to tape-out in 24 hours not 24 weeks and with zero mask cost. (I'm not associated, never met them, link is below)
IGZO only runs at 30kHz or so - but great, you don't need $$ 20GHz scopes to debug your IC, and you can have several goes at it, make those rookie mistakes, and get it all done in one term [semester].
Apparently IGZO can make 32-bit ARM cortex cores [ https://www.nature.com/articles/s41586-021-03625-w] - these designs are big enough to introduce all sorts of necessary further constraints and hierarchical approaches. I don't think yield is so good at this level, I'd start with small well-documented 8-bit RISC cores, peripherals, to learn from their simplicity.
There's nothing like learning from the ground up, like most [all?] previous chip designers. You can then debug a design right down to bare metal - an engineer's understanding, right down to the physics.
The new skill is to take all that, to allow it to be totally automated, abstracted, and yet still keep it under control. I can't do that, and greatly respect those that somehow can.
A low cost "primer" process with primitive tools would be an excellent hands-on teaching approach, showing directly what tasks are tedious and can be automated, once fully understood.
We all started with stacking cups as toddlers and I don't think we can miss out any of the subsequent, practical steps - we need our towers to fall down, to make real things that either really work or really don't.
Only then do we learn the value of simulation tools, test vectors, redundancy strategies, emergency reconfigurability options - that can avoid embarrassment.
Technically you are right to be confused, but it's not uncommon to see a graph axis with values of say 1, 2, 3 x 10^-6 that is also labelled in "micro" units. That's a duplication of the exponent. In this case, I think they mean the mantissa numbers are in millitorr - that's my most likely interpretation.
The paper looks a bit rushed, not surprising given the Nobel prizes possibly at stake, and to me this adds authenticity to the claims. They might be wrong, but they're honest.
Interesting points, there's plenty of opportunities even for low current superconductivity. It's early days on this one and current density will improve as sample quality becomes more homogenous.
Computer chips are worse than the power grid - the limits in place today are due to electromigration, at current densities so high that the metal ions start to flow.
For signal electronics, zero-R inductors have infinite "Q" and are already in use in basestation filters - despite the cost and complexity of cooling. It would be great if they could be sold as ordinary components - allowing RF band filters with incredible performance in ordinary handheld devices.
Let me correct that for you:
The laws of Physics do not prohibit superconductivity under any particular circumstances.
Oh, and "there is no apatite for this story" - you missed an easy one there.
Unsupported humourless opinions fail on so many levels.
I've looked at the paper, it has all the hallmarks of genuine honest research, and all the expected behaviours of genuine superconductivity.
I, for one, have plenty of apatite for further progress...
I don't disagree, I've worked on ABS and airbag systems that have separate hardware safety, "Safing" as they call it.
This only really works for systems that have a simple "safe" state they can default to - like "airbags off". OK it's not very safe at the point you need them, but this is so rare that it's OK to just raise a fault, which will get fixed, and you're back on cover.
More complex systems with no simple "safe condition" - like oil refineries - need a better approach.
I didn't know it existed, but it is possible to write software that is guaranteed (by formal methods in mathematics) to produce the correct output only, or an error. If you couple-up two or three of such systems, you have safety and reliability that insurers will cover.
I'm not an expert in this, and clearly the combination of the outputs needs special attention, but I do know that it is do-able, proven, and accepted.
I'm hoping it gives some sort of answer to "Quis custodiet ipsos custodes?" - a question so old that it's known in its Latin form.
Who watches the watchers?
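Out of interest, a toy sketch of the "couple-up two or three" idea above - a 2-out-of-3 vote over channels that each return either an answer or an error. Entirely illustrative (Python for readability), not how any certified system is actually built:

from collections import Counter

ERROR = object()  # sentinel for "this channel declared a fault"

def vote_2oo3(a, b, c):
    # 2-out-of-3 voting: accept a value only if at least two healthy channels agree,
    # otherwise fall back to whatever the plant-level safety strategy demands.
    results = [r for r in (a, b, c) if r is not ERROR]
    if results:
        value, count = Counter(results).most_common(1)[0]
        if count >= 2:
            return value
    return "FALL_BACK_TO_SAFETY_STRATEGY"

print(vote_2oo3(42, 42, ERROR))   # -> 42
print(vote_2oo3(42, 41, ERROR))   # -> FALL_BACK_TO_SAFETY_STRATEGY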
In response to your company disclaimers, it's perfectly fair to disclaim responsibility for others' deliberate events outside your control.
Consider the similar case of engine remapping... ECUs with modified software will invalidate not only warranties, but insurances also. Accident investigations will go as far as extracting code images if necessary.
However, it is possible to publish your code for expert scrutiny without revealing how a particular unit might be reprogrammed.
We assume your unit does not blindly accept any update, it has some sort of signing, crypto, secure boot, and you've disabled JTAG...
This means that units cannot be intentionally "hacked" by the user - unless the secret key is requested - and given...
Moreover, if the relevant secret key is disclosed to the legitimate customer, all warranties and responsibilities can be voided, being conditions of the release. This unburdens the manufacturer from the need to prove that new hacked software was installed.
So, we can separate the code-inspection aspect - a wanted outcome, from the arbitrary reprogramming by users - an unwanted outcome, unless authorised.
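For illustration only, the "does not blindly accept any update" bit might look something like this - an Ed25519 signature check using the Python cryptography package. The function name and key handling are my own invention, not any particular vendor's scheme:

from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

def accept_update(image: bytes, signature: bytes, pubkey_bytes: bytes) -> bool:
    # Only install an image whose signature verifies against the baked-in public key.
    public_key = ed25519.Ed25519PublicKey.from_public_bytes(pubkey_bytes)
    try:
        public_key.verify(signature, image)
        return True          # authentic image - hand over to the flashing routine
    except InvalidSignature:
        return False         # reject, keep running the current firmware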
... Should the firmware for a safety-critical sensor be made available so the device can be hacked?...
Bit of a loaded question, I’d say that the firmware should be made available so the device ## can’t ## be hacked…
Code that has been or can be reviewed is always going to be stronger, eventually. The open-source community has immense knowledge regarding which methods and approaches have been easy targets in the past – like for instance all the buffer-overrun mechanisms. These are easily eliminated with the right toolchain, with bounds-checking, address randomisation – but the adoption of better techniques is slow. It’s very much a case of “stuff that doesn’t get checked, doesn’t get done”.
The downside to publishing your code is that malevolent hackers have easier access – but the real threat actors don’t need it, they’ll work it out anyway. Take a look at the iPhone jailbreakers (I’m not saying they’re evilly intentioned) – but they are able to break all manner of unpublished state-of-the-art security measures, in just a few days.
Relying on “obscurity” has never been a successful approach, you must assume the attacker knows everything about the system and the code, apart from the secret keys. This is how cryptographic validation-attacks are set-up.
However, it takes time for code to be reviewed and flaws found. The maker of the “safety-critical sensor” in this case will need to delay release and/or update devices in the field.
In fairness, “safety-critical” and “secure” are independent requirements, pulling in opposite directions.
Many safety-critical interfaces, protocols are not cryptographically secure - there’s a lot more to go wrong, which affects reliability. They often simply rely on network security to ensure that there are no bad actors sending out spoof messages.
That hasn’t worked out well for factory-automation buses – SCADA, Modbus and the like, if Stuxnet is anything to go by.
So, coming back to the question, it’s tricky…
The safety-critical aspect of the device is better if reviewed, and releasing the code won’t reveal anything of use to an attacker – he already has details of the protocols and can mess-up the system using just those.
The firmware-update protocol needs to be secure, in the cryptographic sense. This is also improved by review, and releasing the code only puts all attackers in the “known starting position” against which security is measured.
The only real cost of release is that hackers might more easily find a backdoor, an exploit, that can bypass the secure-update method. I’d say that your real threats can already get all the information they need, with just a bit more effort.
So the cost of release is outweighed by the opportunity for free review and “bounty” type code-fixes. Bounties are much cheaper than employing an equivalent level of talent on the payroll. Your existing team will gain expertise through this process.
Maybe there is a way to release code under some sort of NDA, so it isn't available to all and sundry - and the NDA explicitly permits "white hat" penetration testing.
Finally, the customer is in a better position, he doesn't need to "blindly" trust your code - he can see the review discussions, set his own experts on it, whatever.
Totally in agreement with the post above. It's not just the code efficiency aspect, or cost, security, provability, simplicity - it's the fact that you have both feet on the ground. The actual machine code instructions are fundamental, there is no level below*, no abstraction, no mystery.
I'm sure I'm not alone amongst engineers, in having a "learning style" that requires a continuous chain of understanding right down to bare metal. Educationally, this is how science (and mathematics) is taught, there is a clear path to the fundamental maxims, definitions and root equations.
Odd that a similar approach is not taken with computer science - it's all abstracted to death:
"A method in object-oriented programming is a procedure associated with a class. A method defines the behavior of the objects that are created from the class. Another way to say this is that a method is an action that an object is able to perform. The association between method and class is called binding."
Great!!...
That might be a clever shorthand for what is intended, but offers no visibility as to what actually happens with real inputs, when imperfect or deliberately malformed. You are utterly reliant on reams of unknown code.
Either it's just my old-fashioned "bottom up" approach, or maybe Computer Science is unique in being a subject best taught from the top down?
Yes the hollow fibre is quicker, but eight times more lossy. You then need more amplifiers, like EDFAs (Erbium-doped fibre amplifiers), which add near-zero latency and near-zero noise - but there is a limit, and at some point you need to decode and error-correct, which is much slower.
The nice thing about solid fibre is that the "wall" is from a high refractive index (the core) to a low index (the cladding) - which means you get TIR - total internal reflection. The attenuation is just then down to the transparency of the glass, which is amazingly good.
Other common types of fibre, graded-index and single-mode, also rely on a high-index core, and cannot be replicated with an air-core.
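Rough numbers on why the hollow core is quicker, for anyone curious - the refractive indices are typical textbook values, not taken from the article:

c = 299_792_458          # speed of light in vacuum, m/s
n_solid = 1.468          # typical refractive index of a silica core
n_hollow = 1.003         # hollow core: light travels almost entirely in air

length_km = 1000
t_solid  = length_km * 1e3 * n_solid  / c * 1e3   # one-way delay, ms
t_hollow = length_km * 1e3 * n_hollow / c * 1e3

print(f"solid core : {t_solid:.2f} ms")    # ~4.90 ms over 1000 km
print(f"hollow core: {t_hollow:.2f} ms")   # ~3.35 ms, roughly 30% quicker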
I respect Mr. Sod's input, and yes, it will happen.
My point is that it will happen infrequently. Moreover, it will happen anyway, even if you never deplete your reserve. You need 2nd-level strategies, like load-shedding, controlled shutdown, which is a thing datacentres can do pretty well.
Just to run the numbers, the batteries will hold up for 2-4 hours if fully charged. This only drops 20% if you allow depletion. I'm saying that the probability of a 2.4 hour outage (assuming this is the figure with no depletion) is very close indeed to the probability of a 2.0 hour outage (with depletion).
So, all you need to do, in the worst case, is to run your load-reduction (which you have to do at some point regardless) a little bit earlier.
Taking this observation to its limit, a datacentre only really needs enough battery power to divest itself of [most of] its workload - which might be just a few minutes. It would then run at maybe 5% till power returns - just so that it doesn't drop off the network altogether.
I'm assuming of course that there are other datacentres that do have power, and spare capacity - isn't this the original raison d'être for ARPANET and the internet in the first place?
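To put rough numbers on the depletion argument above - the outage statistics are my own assumption, purely to show the shape of it:

import math

# Assume outage durations are roughly exponential with a mean of 20 minutes.
mean_outage_h = 1/3

def p_outage_exceeds(hours):
    return math.exp(-hours / mean_outage_h)

full_reserve   = 2.4   # hours of hold-up with no daily depletion
traded_reserve = 2.0   # hours if you allow ~20% depletion for trading

print(f"P(outage > {full_reserve} h) = {p_outage_exceeds(full_reserve):.2e}")
print(f"P(outage > {traded_reserve} h) = {p_outage_exceeds(traded_reserve):.2e}")
# Both are small; how close they are depends entirely on the tail you assume for long outages.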
I was rather hoping that the entire EV battery pack could be re-used – with all its busbars and monitoring electronics intact. This could give the “scrap” battery a real value of maybe 25% of the £10,000 original price.
The alternative, disassembly, is extremely hazardous due to stored energy and all the connections being welded – I can’t see it being done by machine, other than with a month underwater and a big crusher. The Lithium, which we do need to reclaim, is then only worth £500, and that is when fully refined – I don’t see that it covers the costs and risk involved.
Ideally, there would be a standard subset of CAN commands to allow re-use of the whole pack, automotive makers have been quite good about this in the past, particularly when forced. However, it is not absolutely critical – I’m sure that software drivers could be made for each BMS (battery management system) supplier.
Of course, I doubt that datacenter-sized battery farms would want a hotch-potch of different equipment all under one roof, they would be better making their brand-new packs compliant as above, and selling the exhausted packs on.
For £2500, say, versus £10,000, I’m sure that domestic PV enthusiasts would use them, they’re even a reasonable proposition for garage-based electricity resellers, like Tesla’s wallpack thing. Finally, there will be a growing market at recharge sites, like motorway services, garages, where they need more peak power than the grid can provide them - and would greatly benefit from buying cheap-rate versus peak.
I hope you don't mind me re-posting my comments above? I think they fit better here:
It's a linear programming problem.
If you build your energy reserve just big enough to cover the specified power-outage duration, it is a lot of cost that sits there doing nothing for nearly all of its life.
If you increase the reserve by just 20%, you can trade power every day, generating an income. You can get the best spot-price of the day, and the wear-out costs for just 20% cycle-depth are minimal. It also keeps the equipment "exercised" at full power, so you can trust it.
You might just choose to decline the peak electricity price and run your datacentre from batteries for an hour a day, using electricity that you've bought at minimum price, minus your electrical losses.
Now run the numbers for [x%] of additional reserve, additional installed cost, and you can see it works better. In fact, for a 20% increase in capacity you'd only be looking at more batteries, the converters would remain the same - so a cost delta of 10% say.
You can even play with the statistics - the power outage duration is a Gaussian, unlikely to coincide (in its worst case) with you being at your minimum of stored power. So, for a small increase in the risk that you won't cover the outage (already a non-zero risk) you can pay back some costs, even with zero increase of your energy reserve.
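A back-of-envelope version of the trade, with made-up prices purely to illustrate:

charge_kwh = 4000       # energy bought into the extra 20% slice each night, say
efficiency = 0.92       # assumed round-trip efficiency
price_low  = 0.05       # overnight buy price, GBP/kWh (illustrative)
price_peak = 0.25       # avoided peak price, GBP/kWh (illustrative)

daily_margin = charge_kwh * (efficiency * price_peak - price_low)
print(f"~GBP {daily_margin:,.0f} per day")   # ~GBP 720/day on these numbers

Call it GBP 250k a year or so against the cost of the extra cells - the shape of the argument, not a forecast.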
It's a linear programming problem.
If you build your energy reserve just big enough to cover the specified power-outage duration, it is a lot of cost that sits there doing nothing for nearly all of its life.
If you increase the reserve by just 20%, you can trade power every day, generating an income. You can get the best spot-price of the day, and the wear-out costs for just 20% cycle-depth are minimal. It also keeps the equipment "exercised" at full power, so you can trust it.
You might just choose to decline the peak electricity price and run your datacentre from batteries for an hour a day, using electricity that you've bought at minimum price, minus your electrical losses.
Now run the numbers for [x%] of additional reserve, additional installed cost, and you can see it works better.
You can even play with the statistics - the power outage duration is a Gaussian, unlikely to coincide (in its worst case) with you being at your minimum of stored power. So, for a small increase in the risk that you won't cover the outage (already a non-zero risk) you can pay back some costs, even with zero increase of your energy reserve.
I agree that decommissioning needs to be included in any fair assessment, but it's more complicated than that.
Firstly, I don't believe that Li-Ion batteries are particularly toxic, certainly nothing at all like nuclear, and their reprocessing is already implemented on a large scale.
Secondly, there is a good case for taking "exhausted" car batteries (Li-Ion) that have fallen to 70% capacity, and rather than recycle them, just continue to use them in fixed installations till they drop to 50%. This would easily double their working life.
Thirdly, flow batteries are possibly a better contender for long-term storage, like keeping a week's worth of power stored.
It's not a dumb question, if the complexity of the answer is anything to go by.
With the battery fully charged, the maintaining power for the electronics and any cooling might be a few kW, then there is the self-discharge of the batteries, maybe 10kW? These are just my estimates, the real figures could be higher.
The point is that the system expects to be cycled, and that's where the inefficiencies creep in. There are losses in the power converters, from AC to DC and back again. Power converters at this scale are normally quite "agricultural" - they might use 12-phase thyristor inverters, which involve a fair bit of smashing voltages together. This is what is used for the UK-France DC link, with an efficiency of maybe 95%. Then there is the battery coulombic efficiency, which is very high for Li-Ion batteries at low current - but drops significantly if you run them at C/2 or C/4 rates (i.e. so they charge or discharge in 2 hours or 4 hours respectively).
It is this 2-hour rate and 4-hour rate that gives you their published figures, the overall round-trip efficiency. This includes all the losses described above.
Quoted figures for the Tesla megapack are
2-hour duration:
Power & Energy: 1,927 kW / 3,854 kWh per Megapack
Round Trip Efficiency: 92.0%
4-hour duration:
Power & Energy: 970 kW / 3,878 kWh per Megapack
Round Trip Efficiency: 93.5%
source: https://electrek.co/2022/09/14/tesla-megapack-update-specs-price/
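Working those quoted figures through - straight arithmetic on the specs above, nothing assumed beyond them:

specs = {
    "2-hour": {"power_kw": 1927, "energy_kwh": 3854, "rte": 0.920},
    "4-hour": {"power_kw": 970,  "energy_kwh": 3878, "rte": 0.935},
}

for name, s in specs.items():
    duration = s["energy_kwh"] / s["power_kw"]      # discharge duration, hours
    loss_kwh = s["energy_kwh"] * (1 - s["rte"])     # energy lost per full round trip
    print(f"{name}: {duration:.1f} h discharge, ~{loss_kwh:.0f} kWh lost per cycle")
# 2-hour: ~308 kWh lost per cycle; 4-hour: ~252 kWh - slower cycling is kinder, as above.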
I deeply object to the fact that most scientific papers are paywalled.
Invariably they are publicly or charitably funded, to some extent, and therefore should return that investment by being publicly available.
The idea that the public purse should then finance maybe 150 years of free policing and prosecution, to support a few immensely rich knowledge trolls, is absurd.
For that is the deal with regard to copyright. For comparison, patent rights extend for only 17-20 years and do not provide either policing or prosecution.
Copyright is way too generous, especially since being extended to 70 years after the death of the author - a massive windfall, or land-grab if you prefer...
The only benefit to the public of this immense free deal, is that there are lending-library provisions and that the material will ...eventually... become public domain.
### So, could the aims of Zlibrary be met, entirely legally, by using these lending-library provisions? ###
I wouldn't mind, if in order to meet these provisions, it had to be through a pesky online PDF viewer. It would be nice if it allowed screenshots, and/or hyperlinks. It is the difference between listening to a track on youtube, and downloading an MP3.
This should be applicable to scientific papers and books alike.
Do Reg readers have a better-informed opinion on whether this might be possible?
Further comments/opinions would be welcomed.
PS: I'm aware of Unpaywall, which is great, but only works for open documents. Similarly, there is annas-archive.org - the few links I tried didn't work - but given it is serving PDF files, it will always be subject to shutdowns.
Pagers are still in regular use at all big hospitals.
They emit no RF, so are safe next to ECGs, EEGs, and they guarantee to cover right into the depths of the building.
By a similar token, faxes are a simple, well-understood backup between pharmacies, hospitals, GPs and the like.
They have an extremely low "attack surface" and do not require schooling in the many risks associated with email and internet.
It pays to have multiple backups, the internet can break, phone systems normally don't.
It wouldn't be beyond wit for the exchange to detect fax tones and act accordingly, whether that's a DSP decode, a higher-bandwidth VoIP line or whatever.
I see it just as BT looking for cost savings they'll then keep, ta very much - like being allowed to charge line rental in advance, the c***s.
Glad you liked the post.
I think the point stands, we just need to separate patriotism and royalism.
The English aren't particularly patriotic, compared to the Scots for example, or the Americans. (Both fine, I like them, they're just different in this regard, on the average).
The English don't really adopt any national identity, nor claim any distinguishing features - it is almost as though we see ourselves as a reference, the norm, like BBC "received" pronunciation.
Despite the efforts of our newspapers and institutions, we mostly reject excessive patriotism and royalism, because it is jingoism, most unseemly. Also, we mistrust any appeal to base emotions, we don't like to be "gamed" into mob politics. I accept that we are starting to lose that battle, through laziness, ignorance, and the resources available to social media.
The outrage against the Sex Pistols was whipped-up by the tabloids and BBC, but most saw it as an attack on the Queen, who cannot respond, and therefore a bit unfair. Our sympathies, as ever, for the underdog.
Note that it doesn't preclude other attacks, like Spitting Image, The Royals, perhaps equally savage, but very funny - so that's OK then.
I'll second that - and please read down for the main point, let's keep the cachet of British irony and incorrectness, brother RegTards.
- when I worked in Germany, "proper" native English, with its accent, idioms, vocab and corruptible grammar, was highly valued.
However, what they really loved was our dark humour and irony.
Nationalities, like people, tend to undervalue their best, most effortless skills because they are intrinsic, and because it might be immodest. - Here best explained by Kate Fox, in her book "Watching the English".
The English are not usually given to patriotic boasting – indeed, both patriotism and boasting are regarded as unseemly, so the combination of these two sins is doubly distasteful. But there is one significant exception to this rule, and that is the patriotic pride we take in our sense of humour, particularly in our expert use of irony.
The popular belief is that we have a better, more subtle, more highly developed sense of humour than any other nation, and specifically that other nations are all tediously literal in their thinking and incapable of understanding or appreciating irony. Almost all of the English people I interviewed subscribed to this belief, and many foreigners, rather surprisingly, humbly concurred.
What took more time was introducing humour in meetings and discussions with more than two participants.
By convention in Germany this is strictly verboten. The definite upside being that annoying comic wankers, company clowns, are routinely and deservedly shot.
Downside is that the devices we love to slip in to see who's awake - like veiled insult, wrecking endorsements, blind innuendo, faint praise, helpful but catastrophic suggestions - will just cause confusion, cognitive dissonance. - Are we being clumsy, rude, inept, vicious, stupid or what?
It is of course soon remedied, they get it - it is a question of scope, not of understanding. We've broadened the rulebook and smuggled in a subtle, subversive, perpetual game, and it's a new, toe-curling type of funny.
Again, better explained by Kate Fox:
For those attempting to acclimatize to this atmosphere, the most important ‘rule’ to remember is that irony is endemic: like humour in general, irony is a constant, a given, a normal element of ordinary, everyday conversation. The English may not always be joking, but they are always in a state of readiness for humour. We do not always say the opposite of what we mean, but we are always alert to the possibility of irony. When we ask someone a straightforward question (e.g. ‘How are the children?’), we are equally prepared for either a straightforward response (‘Fine, thanks.’) or an ironic one (‘Oh, they’re delightful – charming, helpful, tidy, studious . . .’ To which the reply is ‘Oh dear. Been one of those days, has it?’).
Seriously though, Reg readers and creators, look at New Scientist - once excellent, British, highly read and enjoyed worldwide. It was taken over and infantilized, then peppered with American token-words: quadrillions, cellphones, holiday season, freedomheit.. to mention just a few.
It is now heavily paywalled and completely worthless.
The Reg, many thanks to Lester Haines originally, has been for years a beacon of quintessential British humour, irreverence, irony and wit.
The straplines alone are an absolute artform, seen here first.
Please don't spoil it by removing the linguistic tokens that identify it as English (UK).
Not sure about that one; they are at least in one way similar - in that the products weigh less than the reactants, as a previous post noted.
And in another way, much like chemical reactions, it is the binding energies, not the components themselves, that change.
The binding energies of the daughter nuclei are fractionally higher than that of the fuel nucleus.
All the particles are conserved*.
So, even nuclear reactors don't exactly "convert matter (or mass) to energy" - as in the annihilation of whole particles, let alone "back to future" direct annihilation of garbage into stupendous energy.
* some of the neutron flux that sustains the chain reaction will NOT be captured in the surrounding materials and will then decay (in ~1/4 hour) into a proton, electron and antineutrino - but the baryon number (protons + neutrons) is the same. Think of a neutron as a bound proton+electron that is unstable in "air" - i.e. outside a nucleus.
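To put a number on "fractionally higher": a single U-235 fission releases about 200 MeV against a rest mass of roughly 235 x 931.5 MeV, so the fraction of mass converted is of order 0.1%. Round figures only:

mev_per_u = 931.5        # rest-mass energy of one atomic mass unit, MeV
a_u235    = 235          # mass number of the fuel nucleus
q_fission = 200          # typical energy released per fission, MeV (round figure)

fraction = q_fission / (a_u235 * mev_per_u)
print(f"Mass converted per fission: ~{fraction:.2%}")   # ~0.09% of the nucleus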
Further caveats below, more for interest than for proof.
OK, there might be side-reactions where daughter nuclei decay through beta+ process, and the positron will annihilate with a nearby electron - that ## would## be direct conversion of mass to energy.
However, B+ decay is only favourable for neutron-light isotopes, and daughters of fission are naturally neutron-rich. I think that's the right way round, could be wrong.
If you're really picky, or just interested, yes, in beta decay, there is an antineutrino, 0.3eV or less.
It might then annihilate with a "proper" neutrino, making a direct conversion of matter to energy - but this would be in a distant galaxy, due to the vanishingly low "cross sections" - reaction probabilities - of neutrinos generally. Actually, maybe not at all, there are arguments that the neutrino is its own antiparticle, a "Majorana" particle - though this is unproven. It would mean that there is no annihilation.
Helium-3, being a Fermion, should not exhibit superfluidity, a property of bosonic liquids. Helium-4 refrigeration below 2.17K is difficult because it goes superfluid and gushes through the tiniest of gaps, ask CERN....
<high voice> " we think the Helium is leaking"....
Of course, Fermions can pair-up, Cooper pairs, to make Bosons, and this is why electrons can become superconductors - and why, eventually, He-3 can become superfluid, but at a much lower temperature.
If law was truly based on reducing social harm then online gambling ought to be just as illegal as drugs, but it's not, and gambling firms make huge amounts of money from promoting addiction.
The war on drugs is a moral crusade - I am not a homosexual therefore all homosexuals are depraved deviants - similarly, drinkers, non-churchgoers, golfers, Queen fans, and anyone else indulging in my list of petty hates.
I don't see how taking drugs, consenting adults acting in private, should be a criminal matter - surely criminality has to include intentional or reckless harm to others?
Sure, there would be public health penalties if drugs were legalised, as there are with horse riding, motorbikes, mountain climbing, alcohol, food and, worst of all, gambling.
I don't have an easy answer, legalised consumption but illegal supply chain, as in Portugal, is fundamentally conflicted - and does not take the money out of the criminal empires that kill thousands every year.
I don't see that the current approach will ever succeed, like prohibition you cannot stop people "getting off" on stuff, it is a universal human trait - across all cultures - right down to little children spinning round till they get dizzy and fall over.
Which particular views did you find sickening?
Sure, a lot of commentards are agreeing that a harsh justice was served - which is never a good thing generally. "Beware of those who seek harsh punishment" was a Greek philosopher's maxim - or Roman - wasn't it?
However, this could have been the beginning of a "London Bridge" type killing spree, indistinguishable. Such cases of legitimate fatal self-defence are rare, and are, like it or not, a fair argument in support of gun carrying - a well-trained, vetted, un-angry citizen protecting his family against immediate lethal threat. It ought to be possible to make gun law in the US so that it is more of this type, with training, licensing, vetting...
So, give the pro-gun lobby their day in the sun, you have to understand their reasonable argument in this case.
I agree with your conclusion that overall, guns are a bad thing, a very bad thing, with vastly more costs than benefits, but I refuse to be sickened by "evidence for" - evidence that runs counter to my conclusion.
Fascinating link, thanks for that
My favourite of all time is this one:
http://visual6502.org/JSSim/index.html
It's an entire 6502 chip, running code, even your code if you want - with the nets highlighted when logic high, well worth a look. It's all written in Javascript.
On the main article - "a printing process Intel would dearly love to copy" - made me laugh out loud, a bit like going up to Faf de Klerk, the long-haired South African rugby player, built like a brick shithouse - and telling him he looks a bit like a girl.
There are a number of potential fixes that don't involve a garage visit, it is just an engineering problem.
Firstly, reduce the amount of information you "need" to store to an absolute minimum, this always helps.
If the unit has permanent power, you could keep this in RAM, and only commit to flash if the battery is disconnected, relying on your bulk capacitors. However there are a number of tricky issues involved, many of which could be overcome, but it's all a lot of work. Things like the latency between detecting loss of power - perhaps the unit might be asleep? - and getting on with the write. You may only be able to write a few blocks, depending on the write time and your capacitance.
Best then to have an acceptable "no data saved" backup configuration.
My preferred solution, assuming there is no other easy fix, is to make the nVidia chip so it never writes its flash again - and offload the NVM storage either to another unit on the CAN bus, existing or new, or even, ghastly but cheap, make a device that plugs into the OBD port, and store stuff on there.
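As a behavioural sketch only (Python for readability - the real thing would be firmware, and all the names are mine), the "keep it in RAM, commit on power loss" idea looks something like:

class NvmCache:
    # Keep working state in RAM; write to flash only when the supply is going away.

    def __init__(self, flash_writer, max_blocks_on_holdup=2):
        self.flash_writer = flash_writer          # function that persists one block
        self.max_blocks = max_blocks_on_holdup    # how much the bulk caps can fund
        self.dirty = {}                           # block_id -> latest data, RAM only

    def update(self, block_id, data):
        self.dirty[block_id] = data               # cheap: no flash wear, no latency

    def on_power_loss(self):
        # Interrupt from the supply monitor: milliseconds available, not seconds.
        # Write only the most important blocks that the hold-up capacitance can cover.
        for block_id in sorted(self.dirty)[: self.max_blocks]:
            self.flash_writer(block_id, self.dirty[block_id])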