"It already exists. They aren't increasing the memory on it."
They're spreading the cost of the new purchases across the whole customer base. Otherwise the new customer would likely be paying more than 100% more; this way everyone only pays 50% more.
The geniuses behind some TPM implementations didn't encrypt the communications to and from the TPM.
https://www.covertswarm.com/post/how-secure-are-tpm-chips
"Using publicly accessible tools which included a custom high-level analyser and a script to enable the parsing of the captured data, it was possible to decode and extract TPM transactions from the SPI stream. This resulted in a data dump that we were then able to use as a search repository."
"...'static volatile' and 'register' keywords were, shall we say, suspect, at least initially. They were deliberately excluded from our coding standard because of that."
Well, if you're purposefully excluding the very keyword that tells the compiler not to omit reads and writes of a variable... don't be surprised when it does, in fact, omit those reads and writes.
Very safe. According to the WHO, ten times safer than Tylenol or aspirin. Both probably do nothing for Covid. Immense effort was wasted on banning it, forcing desperate people to drink horse de-wormer and other anthelmintics not safe for humans.
Imagine all the drama that would have been saved if people could get something the WHO considers safer than Tylenol or aspirin in a form fit for humans, with decent dosing instructions.
"Renewables can provide 80% of energy requirements therefore we must not use them at all. Truely unassailable logic."
Because for the other 20% you need 100% hard dispatchable supply. To illustrate: say the load requires 1 MW and renewables can supply 80% of that. Given how renewables function, that usually means that 80% of the time renewables can supply 100% of the load, and the rest of the time almost none of it. Thus, renewables only make sense if the fuel costs of the hard dispatchables are enormous. Otherwise, why pay the capital costs for two generation systems, plus both their maintenance costs? At present, the options one has are 100% hard dispatchable, or 100% renewable (used 80% of the time) plus 100% hard dispatchable (used only 20% of the time).
The maths might still work out in favour of renewables, but Germany and the UK appear to show otherwise with their electricity prices.
PS. Yes, fine, the assumption that renewables supply 0% for 20% of the time is flawed. But for about 2% of the year, both wind and solar tend to supply roughly 0% for a week at a time, so one still needs 100% backup. For households, solar is different: it's easier to reduce consumption ahead of inclement weather, and maybe visit friends and family who still have power. Or just sit in the dark with flashlights for a while.
The size of hardware timer needed depends on the speed at which it increments as well as the required duration before overflow. Not hard to calculate. Additionally, it isn't hard to combine the overflow of the timer with a software counter to extend the duration even further. Lastly, as long as the "sleep" duration is less than the overflow duration, this points to an incorrect implementation of the test for whether the sleep duration has expired. So definitely a skill issue and a lack of adequate testing.
Though it also points to how a seemingly trivial function, is_timedout(t_now, t_start, duration), can pose significant challenges to organisations (Airbus, ESA and Boeing) that ought to know better and ought to have rigorous tests to detect timer overflow errors. Especially since it is a well-known and predictable problem.
"On top of which, it's CCTV. It's not supposed to communicate with anything but the computer of the security guard that is supposed to check it between coffee and donuts."
True. Though, since the cameras use Ethernet (probably PoE as well), the contractors installing them probably just hook them up to the existing LAN infrastructure. Perhaps the recording server is also connected to the internet and the cameras are not firewalled appropriately. Bit of an oversight, since you do want the "closed-circuit" part of your CCTV. Or have we learned nothing from the Mirai botnet?
FYI, in CAN the bit stuffing (which aids clock recovery) can destroy the CRC's error-detection characteristics, such that a code that was supposed to detect all 5-bit errors can only be guaranteed to detect all single-bit errors. Slide 56: https://users.ece.cmu.edu/~koopman/pubs/koopman14_crc_faa_conference_presentation.pdf
So CAN is not a great example, as it is arguably inferior to RS-485 with parity checking. CAN does have advantages in terms of acknowledgement and bus arbitration.
I believe it isn't quite an apples-to-apples comparison, comparing so-called nanometres to nanometres. The figure apparently has very little to do with feature widths these days. Seeing that TSMC is the leader, though, I'm inclined to believe that Intel's process will be a couple of TSMC nodes behind. Will be interesting to see.
"But the newer or cleaner ones will just go from running 24x7 to running less, to being on "standby" where they can be spun up if there's a really cloudy or still week..."
That is the issue. The standby station will only be used roughly 10% of the time. Thus, it will have a capacity factor of 10%, making its energy very expensive. One will have to pay all the maintenance and personnel costs for something that just idles 90% of the time. And one will need as much of this idle capacity as there is demand in the country, since the output of both wind and solar can fall by a factor of 100 for the duration of a week.
Of course, if one is willing to suffer some form of blackout 10% of the time (for up to a week at a time), then one might not need the same dispatchable capacity and it wouldn't be as expensive. Perhaps only hospitals and other critical infrastructure (e.g. some minister's dog house /s) can be connected to the "reliable" sources.
"In Rust, you would create page table entries and track them as objects, with each row being a structure of a given length; if your code is mathematically capable of writing past the end, it doesn't compile."
It seems that is the case for simple bounds calculations; others would be run-time checks. The same static analysis that Rust uses to reject code at compile time, and that a compiler uses to eliminate run-time bounds checks, can be applied to C/C++. Linux already uses tools like smatch, and there is also the Linux Driver Verification project. So most of the bounds checks that Rust does at compile time can be done for C/C++ at compile time as well. I think an exception could be where arrays have decayed to pointers, which is perhaps one of the reasons MISRA does not like arrays decaying to pointers. Something akin to std::span from C++ might help. Void pointers might also complicate things.
If the choice is between Rust and C, by all means choose Rust. With C++ it is not so obvious. The whole overflow/underflow issue can be solved in C++ using a wrapper class around the std::intN_t types that does the overflow checking. The major issue Rust solves is memory handling, which is arguably a solved problem in C++. It doesn't solve any of the other resource handling issues; for handling any other resources, one will likely have to resort to approaches similar to those used in C++ with classes.
So no, I don't see much of an advantage in moving to Rust yet as a C++ programmer. And until there is an ISO Rust spec, I won't personally recommend that anyone switch from C to Rust either.
"My point is that most HALs are poorly written and inefficient."
On the ST side, I find the reason is that they want, or need, to support as many of the various use cases for the peripherals as possible. The HALs do often allow one to get an example project started faster. Then, if performance is an issue, one simply implements the functionality one needs, using the HAL as an example. It usually isn't difficult with most peripherals.
Though I'd agree that I'm not a fan of the C coding techniques typically used by hardware coders.
One could load the keys for encryption and authentication (pairing) onto the CPU and the TPM during manufacturing, or at some other occasion that is presumed secure. Obviously the private parts of the key pairs should not be readable by external devices (or leaked to them). So the private keys ought never to leave the CPU's secure enclave (i.e. one can't use a debugger to view the registers of the CPU to obtain the keys).
You don't need CAs since only devices with the private key can decrypt the data. If the CPU can't leak the private key, then only the CPU would be able to decrypt the data.
Changing the public key on the TPM ought to wipe all information on the module. Though a passphrase + key combination could be used to recover stored information or to enable key changes without wiping of stored information.
The crux of the problem is storing the private key of the CPU if the CPU does not have secure non-volatile memory. Though perhaps a hardwired unique private key could also suffice.
Feels like you missed the entire point of the article.
If the Govt. set the specification such that there is no requirement for portability between providers and the chosen provider implements the specification using proprietary methods, then the renegotiation fees will include the cost of porting the proprietary bits to the new provider.
Thus, the current provider can charge the Govt. min(cloud competition) + porting costs. As long as the provider correctly judges what the Govt. considers to be the porting costs, then that is what they can charge. The cost of porting is likely to increase as a function of time as the Govt. becomes more ingrained.
If the Govt. required portability, then they'd basically only be paying min(cloud competition) since the porting costs ought to be negligible (at least in theory).
The article basically describes how the Govt. has failed to keep porting costs low and now has less negotiation headroom, since any vendor can rationally increase its prices to the Govt. as long as the total price is still lower than the cost of porting to the lowest-priced competitor.
That is how the free market works: increase prices until profits start to drop. Price discovery. Of course, the price may change at any moment depending on the state of your competitor(s) or the client's business.
Now one can argue that new businesses would rather go for something else more affordable. The new businesses might then stay with their choice and never migrate to VMWare. Old customers might decide that the long term savings justify switching from VMWare to something else. So VMWare might have significant short to medium term profit gains, but the long term outlook might be less rosy. Probably perfect for all the MBAs. I guess we'll find out.
I think the general portability of Linux would mean that there is a decent chance that older processors will be supported for quite a while. With compiled code one could support most of the various SSE, AVX CPUs at the cost of extra binary bloat. The hand rolled assembly would likely be the biggest obstacle.
Those wanting the most performant Linux have tended to compile their own kernel at least for quite some time. Not that I noticed a massive improvement when I last did it.
Bit different from walking into a building. I think one can reasonably expect the local community to adhere to the principle that they shouldn't enter unauthorised areas. One can also, obviously, apprehend and prosecute such people more easily if they're in your jurisdiction.
With the internet you are exposed to all walks of life. Including those from enemy countries over whom you might have no jurisdiction. Not like anyone can do much about North Korean hackers. In this case, one should reward people that identify vulnerabilities in your systems (assuming they did not exploit those vulnerabilities).
In my opinion, this is how one can identify companies that actually care about security and those that just do the bare minimum required by law.
I'm not moving to Windows 11 until I can move the taskbar (I like mine on the right). I've moved my father to Linux (Mint). I guess my mom and I are also moving sometime around early 2026. Maybe I'll get Win 11 for gaming...hopefully Linux gaming will make that unnecessary.
The way I see it is that you cannot permit any random device onto the TTE network, so I don't think this is quite applicable to space and aircraft applications. I imagine industrial control applications, where the control and generic LANs can be combined (marketing: "It's a feature!"), are a more plausible scenario. There will probably be some policy that prohibits unauthorised equipment from being connected, but given the way this attack functions, I don't think it will stop the attack.
Aircraft and trains could potentially provide LAN ports for travellers, and this would then be a cause for concern. Though I think WiFi will be more popular, since all the Ethernet ports are likely to be stuffed with used chewing gum. Space is likely better controlled and unlikely to have such a device installed, since anyone with the required access likely has better attack opportunities.
Though I don't like the fact that one link can affect all three links. Seems that should be fixable with an update.
Usually extending a regression model past the data used to fit the regression is a bad idea. Extrapolation vs interpolation. Can be done for things where the general model is known and rather simplistic...but that is usually not the case where ML is employed.
The weight of the storage container for the aviation fuel on an airplane is, for all intents and purposes, negligible: mostly just the sealant for the void spaces in the airframe. Maybe some structural members are heavier than they would have been if the voids weren't filled with fuel. And the weight of the fuel itself decreases significantly as it is used.
This is not the same with batteries, which do not change weight as they discharge. Nor with hydrogen: currently 5 kg of hydrogen requires a container of around 80 kg. Batteries or compressed hydrogen will reduce the payload capabilities of the airplane significantly.
Cryogenic hydrogen might be better, as the insulation is typically quite light, though it does take up more space. I believe cryogenic hydrogen is volumetrically about a third as energy-dense as kerosene... so you need three times as much tank space, excluding the thermal insulation.
Thus, I don't expect to see a battery or compressed-hydrogen Airbus A320 equivalent in my lifetime. A Cessna equivalent, yes, though it will probably be inferior to a hydrocarbon-burning aircraft in range and payload. It might have other advantages that make it worthwhile.
"Covid's killed just about more people in the UK thant he whole of the EU combined. Who's fault is that?"
Not sure what your sources are. According to https://www.worldometers.info/coronavirus/#countries you have the following number of deaths:
UK: 117,166
Italy: 93,577
France: 81,814
Germany: 65,566
Spain: 64,747
Clearly your statement is "clownishly" false.
"I for one would rather have Brussels fucking it all up instead of Boris and his clown cabinet"
Ironically, Brussels (meaning Belgium) has the highest death rate in Europe of the populous nations. Granted, it is only about 8% higher than the death rate of the UK: 1,864 vs 1,720 deaths per million.
"Well how about replacing one layer of grossly incompetent with another?" Well, you had two layers of incompetence. Now you only have the one. Not sure why you would want the extra one as well...
YES!! A screen in between 4:3 and 16:9!!! YES!
Sorry, I love my old 14" 4:3 SXGA+ (1400x1050) laptop screen and keyboard. I HATE the HD-ready 1366x768 laptop screens. To get similar screen area you need a much bigger screen in the 16:9 format, which translates into a far heavier laptop.
3:2 is in between the two. Seems like a decent compromise.
Yay! Now I wonder if the keyboard's any good...
Most likely, using more "spin" states will result in inter-user interference. This will put a limit on the rate at which information can be reliably sent using the technology. There are ways of mitigating the effects, such as those currently used in MIMO systems, though they're not perfect.
Backwards prediction is all fine and dandy, but what happens is that after a while you start tweaking your model to give good backwards-prediction results. Stated alternatively, you're effectively running a genetic algorithm in which only models that give good backward prediction survive. This is another form of "curve fitting": by continuously using the same "validation" data, you're effectively fitting the model to that data.
That still doesn't mean the models will be any good at predicting what is going to happen. Interpolation is good, extrapolation is evil. Effectively fitting observed data to a model to obtain the "actual" forcing values becomes hazardous the more model parameters there are. The problem is that there may not be a unique solution, and you can't determine beforehand whether the "fit" you've found is the correct one. (Most likely it isn't.)
Another thing to bear in mind is that many of the "constants" used in the model were obtained by fitting historic data to a model of some sort, the more data, the better the fit. Therefore models should be able to "predict" the past quite well since they are based on the historic data.
Unfortunately, only time will tell whether the models were/are correct/incorrect and by that time it may be too late.