So basically, using a bad scaling algorithm that introduces tonnes of aliasing can be abused. Answer: don't use a bad scaling algorithm.
12 posts • joined 14 Jun 2011
FYI: You can trick image-recog AI into, say, mixing up cats and dogs – by abusing scaling code to poison training data
Now that's what we're Tolkien about: You need one storage system to rule them all and in the darkness bind them
It's much worse than suggested
Data is wrong, and gets fixed (or more usually the missed upload sits in some failure queue for a few weeks before someone finally realises and uploads it).
In a perfect world you'd keep both the old and new versions of the data, so that you can re-run reports and get the same results; if the data keeps changing you'll never understand where differences come from. It's the equivalent of 'value date' in accounting systems, where you want to separate the date a report is run *for* from the set of corrections it includes (show me the year end as of the year end, versus the year end with the corrections we've subsequently applied).
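A rough sketch of that 'value date' idea (the types and dates here are hypothetical, and real systems would use proper date types rather than strings): every correction is recorded alongside the date it arrived, nothing is overwritten, and a report can be re-run 'as of' any cutoff.

```cpp
#include <cassert>
#include <map>
#include <string>

// Each figure is stored against the date it was recorded, never
// overwritten.  Outer key: value date (what the figure is for).
// Inner key: recorded date (when we learned it).
using History = std::map<std::string, std::map<std::string, double>>;

// "Show me the year end as of <cutoff>": the latest amount for the
// value date whose recorded date does not exceed the cutoff.
double as_of(const History& h, const std::string& value_date,
             const std::string& cutoff) {
    double result = 0;
    for (const auto& [recorded, amount] : h.at(value_date))
        if (recorded <= cutoff) result = amount;  // map is date-ordered
    return result;
}
```

Running the year-end report "as of" year end then gives the original figure, while running it "as of" today includes the subsequent corrections - and both answers stay reproducible forever.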
That scary old system with 'do not touch' on it? Your boss very much wants you to touch it. Now what do you do?
'Migrate it to a cloud based micro-service based architecture'... right
The thing about legacy systems is that they weren't originally legacy: they were the latest, greatest architectural decisions running on cool cutting-edge hardware. Now they just look like some weird mess of scripts and code running on an OS that no one really understands, but they didn't start that way.
I've a feeling that the average cloud-based micro-service design will not look half as good as that weird legacy system you're trying to replace in 5 years' time, let alone 20. Add a dose of vendor lock-in to a cloud supplier, and you'll have management screaming at the next team to replace their new legacy system. 'Ah, this is a cloud v1 system - you need a cloud v2 architecture'. Rinse and repeat.
Re: Plug cable entry angle
Actually the fuse is there to protect the lead from excessive current, not the socket or the appliance - that's why it's in the plug on the lead. It's a common mistake to think the fuse protects the appliance, but it doesn't, so a 13 amp fuse in a table lamp is absolutely fine *if* the lead can take 13 amps.
Re: Spectre on the hyperthreads
There are a number of processor architectures that have taken things further than the Xeon's 2-threads-per-core model. Off the top of my head, the SPARC T1 had 4 threads per core, rising to 8 threads per core by the time we got to the T3.
In current use, the XMOS processors use this technique: I think there are between 4 and 8 round-robin slots per physical core, so a 500 MHz processor appears to run 4 independent threads at 125 MHz, for example (which is handy, as it hides fetch latency, etc.).
If you are writing memory bound software, hyperthreading isn't a win. If you are compute heavy, it can help, and of course it really depends on what else is running on the cores (outside your control).
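The round-robin scheme described above can be modelled in a few lines (a toy model - the slot and cycle counts are illustrative, not XMOS specifics): each of N logical threads gets every Nth cycle, so a 500 MHz core with 4 slots behaves like four independent 125 MHz processors.

```cpp
#include <cassert>
#include <vector>

// Toy model of round-robin hardware threading: one core, `slots`
// logical threads, each thread issued on every `slots`-th cycle.
// Returns how many cycles each thread received out of `cycles`.
std::vector<int> round_robin(int slots, int cycles) {
    std::vector<int> received(slots, 0);
    for (int c = 0; c < cycles; ++c)
        received[c % slots]++;  // next slot gets this cycle
    return received;
}
```

Over 500 cycles of a 4-slot core, each thread gets exactly 125 cycles - and because a thread is only issued every 4th cycle, the core has 3 spare cycles to hide each thread's fetch latency behind.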
There's nothing wrong with what you are suggesting on the face of it, but in my personal experience it's a solution looking for a problem. I don't see errors like this creeping into the codebase.
Patterns which resolve common programming problems, absolutely! I code mainly in C++, and so in the last 10 years various patterns have come in which make a positive contribution - standardised containers which are fast enough to use without worry, smart pointers, std::atomic, ranged for loops, there's a long list of sensible ways to resolve common problems.
What can be patched
That there are flaws in the processor is not that surprising - it's a new design, and this stuff is hard if not impossible to reason about.
The interesting question is whether AMD are able to patch these systems to resolve the flaws.
Another explanation for the lack of disclosure delay would be that CTS-Labs are well aware that these problems are easy to fix, and hence they would have a non-story if they delayed publication.
This is quite a simplification. There are plenty of situations where jobs use remote storage (a SAN, say) to access datasets far too large to replicate locally. In those situations the local disk is hardly hit, and file access is normally sequential, so the criterion you're quoting (access speed, or basically random-access latency) is totally irrelevant.
I'm sure there are applications that this sort of architecture would help, but there is plenty of stuff where this is not the case.