"the biggest trade secrets crime I have ever seen" but only $750,000 in compensation?
The success of one attack motivates the attackers to repeat it against others, causing much greater losses to the wider economy. Those losses are not borne by the original victim: to them they're an externality, which doesn't come into their financial calculation about whether to pay the ransom or rebuild their data.
The idea of threatening sanctions to force companies to consider such externalities is hardly a new one. It's why companies get fined for polluting water courses or exposing our personal data.
Fines are the appropriate sanction here, though, not prison: you just need a sufficiently large monetary penalty to alter the financial calculus for the company, à la GDPR.
A ransomware attack is somewhat different to a kidnapping. The entire company knows that the computer systems were suddenly down. The entire IT department knows why. None of them are paid enough to be accessories to a felony. If the CEO chooses not to report to the data protection authorities, they're taking a big gamble.
You have to wonder who is putting some of these things on the list.
Perhaps someone so militantly in favour of inclusive terminology that they want to bar any term that might have ever been used as a playground insult.
Or perhaps some opponent of inclusive terminology seeking to poison the well and support the "slippery slope" argument.
Both seem equally possible.
The total value of the 5 Brexit ferry contracts was £100.4m, not £108m: £13.8m for Seaborne Freight (the outfit with no ferries) and £86.6m split between Brittany Ferries, DFDS, P&O and Stena.
Not that anyone on Twitter would ever let mere numerical precision get in the way of a good story.
The Serco and Pestfix contracts are both being reported as £108m though. I wonder if Matt Hancock's secretly a Buddhist.
"Customarily defined" is probably the key phrase there... I bet it's a hang-over from when customary units did need to be defined in law, because they weren't well standardised. Every country had a different ounce. But SI units aren't customary units, they are standardised, so the law shouldn't need to specify what sort of kilogram it means.
It seems the reason why SI ended up picking the kilogram, metre and second as the fundamental units is down to the units for electricity. 1 volt-amp = 1 watt = 1 kg·m²·s⁻³, and the derived electrical units end up handily sized for practical use. In the 19th century, the gram, centimetre and second were used as the fundamental units (the CGS system), but apparently the derived electrical units were inconvenient to work with.
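Spelling that identity out (my own working from the standard SI relations, not a quote from any standards text):

$$ 1\,\mathrm{V} \cdot 1\,\mathrm{A} \;=\; 1\,\mathrm{W} \;=\; 1\,\mathrm{J\,s^{-1}} \;=\; 1\,\mathrm{kg\,m^{2}\,s^{-2}} \cdot \mathrm{s^{-1}} \;=\; 1\,\mathrm{kg\,m^{2}\,s^{-3}} $$

In CGS the same power works out as 10⁷ erg/s, which is part of why the practical electrical units never lined up neatly with that system.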
The question we need to be asking the French post-revolutionary government is, having fixed the metre as the unit of length, why did they decide that a gram should be the weight of a millionth of a cubic metre of water? Why not a thousandth, or one? If they'd gone for a thousandth, things would be more consistent now.
My guess is it was a practical decision based on the use of balance scales. A set of weights for a balance scale would increase by powers of 2 or 3, which is easy enough to work with as long as the smallest amount you commonly need to weigh is a whole number of units. If you need to weigh amounts less than one unit, the decimals get unwieldy: 0.5, 0.25, 0.125, 0.0625, 0.03125... (or, heaven forbid, 0.333, 0.111, 0.037, 0.0123...). Or you go back to labelling your weights as 1/2, 1/4, 1/8... which somewhat defeats the purpose of having a decimal system.
Why does British law need to contain the definition of what a kilogram or a metre is at all, when they're clearly and precisely defined already? What next, copy-and-paste the geometric definition of a circle into the traffic signs regulations, in case anyone's in any doubt?
Surely they could just say "kg is the SI unit of mass" and leave it at that. No need to amend the law every time measurement methods improve by one part in 10 million. Anyone using weights and measures for trade is several steps removed from the reference definition anyway.
There are so many things that should be precisely defined in British law and aren't, yet for this they decide the law needs to include a definition down to the last hyperfine structure transition.
Plenty of small-medium European e-tailers have .uk versions of their web sites to market to UK customers, e.g. alpinetrek.co.uk -> bergfreunde.de.
Of course they may decide it's no longer worth the candle when VAT rules, customs, and regulatory divergence start getting in the way, so the number might reduce anyway, regardless of .uk TLD rules.
Microsoft are notorious for being obsessive about backwards compatibility, even refusing to fix bugs on the basis that it would break stuff that relies on the bugs... Excel still thinks 29/2/1900 is a real date. That one's not even their own bug; it's there for backwards compatibility with a bug in Lotus 1-2-3!
You can criticise Microsoft for a lot of things but I don't think that particular criticism is a valid one.
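For anyone who actually has to round-trip Excel's 1900 date system, here's a minimal sketch (mine, not anything Microsoft publishes) of converting serial numbers to real dates while skipping the phantom 29 Feb 1900 that Lotus 1-2-3 bequeathed to Excel:

```python
from datetime import date, timedelta

def excel_serial_to_date(serial: int) -> date:
    """Convert an Excel 1900-date-system serial number to a real calendar date."""
    if serial == 60:
        # Excel's serial 60 is 29 Feb 1900, a day that never existed:
        # 1900 was not a leap year, but Lotus 1-2-3 thought it was.
        raise ValueError("serial 60 is the phantom 29 Feb 1900")
    if serial < 61:
        # Serials 1..59 line up with real dates (1 Jan to 28 Feb 1900).
        return date(1899, 12, 31) + timedelta(days=serial)
    # From 1 Mar 1900 onwards every serial is one too high because of the
    # phantom leap day, so the effective epoch shifts back a day.
    return date(1899, 12, 30) + timedelta(days=serial)

assert excel_serial_to_date(1) == date(1900, 1, 1)
assert excel_serial_to_date(61) == date(1900, 3, 1)
```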
You can go to https://support.microsoft.com/en-us/lifecycle/ and find out not only whether your version of .NET is still maintained, but exactly when it will stop being maintained too. Many other vendors, and most open source projects, will not give you that commitment.
Furthermore, when the version of .NET you're using goes EOL, and you have to move to a new one, you know Microsoft will have bent over backwards to maintain backward compatibility as far as possible, and your application will probably still work on the new version of .NET. Again that's something you cannot rely on with a lot of libraries, and it's a major cause of the problem the article is talking about.
If you're trying to use data from it to do population epidemiology, or make policy decisions, then sure, a small sample may not be very helpful.
But it's a contact tracing app, not a population epidemiology app. The point is to identify specific individuals who might be infected, so you can prevent them passing on the infection. Even if you identify only a small percentage of all infections, that still allows you to reduce R0 by that percentage. And even a small reduction in R0 can make a difference over the course of the outbreak.
No doubt the number comes from the same basic arithmetic: the proportion of the population you'd need to not be transmission vectors, for whatever reason, whether through immunity or because they'd be identified and quarantined via a contact tracing app.
That probably does give a clue about some of the assumptions behind the 60% figure for the app (actually 56% as per my post above)... 56% is not the proportion of the population that would have to merely install the app; it's the proportion that would have to install it, have it running properly all the time, and perfectly comply with any instructions arising from it.
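For what it's worth, the "basic arithmetic" is presumably the standard herd-immunity threshold. Assuming an R0 of around 2.5 (my assumption, not a figure taken from the report):

$$ p_c \;=\; 1 - \frac{1}{R_0} \;=\; 1 - \frac{1}{2.5} \;=\; 0.6 $$

i.e. roughly 60% of the population has to be taken out of the transmission chain, by whatever combination of immunity and trace-and-quarantine, before the outbreak stops growing on its own.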
A downvote's not really much of an answer to my question, is it? So I looked into it and found an answer myself. Some boffins have indeed been working out the real numbers:
Looking at figure 6 in that report, their various scenarios have cumulative deaths over 140 days being reduced by very roughly 10% when app uptake is 20% of smartphone users.
80% of smartphone users/56% of the population is what they reckon is required to suppress the outbreak if there's no lockdown and you're relying on the app alone. But it's clear there are benefits even if uptake is much lower.
Does it become useless if you have less than that though?
Say you get an uptake of 20% (meaning 20% of the population have the app installed and actually running properly on their phone when they're out and about). So you've got 20% of currently infectious people. Half of them might never develop symptoms or might not tell you, so maybe you actually find out about 10% of the infectious people. The app could potentially identify 20% of the people they've infected, so 2% of all new infections. With good testing, you might catch a lot of those 2% in time to stop them passing it on, and maybe you could reduce the overall infection rate by 1%. It's not a lot, but it's not nothing. Based on on-line models I've seen, even a 1% reduction in R0 can reduce total fatalities by hundreds.
The above is very much back-of-an-envelope stuff. Hopefully some boffins somewhere are working out the real numbers.
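For what it's worth, here's that back-of-an-envelope chain written out, with every input being the guess stated above rather than anything measured:

```python
# Rough estimate of how much a contact-tracing app could cut transmission
# at low uptake.  Every number here is a guess, not measured data.

uptake = 0.20                    # fraction of population with the app installed and running
symptomatic_and_reported = 0.50  # fraction of infected app users who develop symptoms and report
contact_has_app = uptake         # a contact can only be warned if they also run the app

# Share of all new infections where the infector is detected AND the
# newly infected contact can be notified.
infections_identified = uptake * symptomatic_and_reported * contact_has_app
print(f"new infections identified: {infections_identified:.1%}")        # 2.0%

# Assume half of those contacts are reached and isolated before they pass it on.
isolated_in_time = 0.50
transmission_reduction = infections_identified * isolated_in_time
print(f"rough reduction in transmission: {transmission_reduction:.1%}")  # 1.0%
```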
You are protecting your password from being stored in plaintext or weakly hashed in some unsecured database by every site you have an account on.
Certainly it's a far from perfect solution if you care about privacy at all. But if you don't, it has a clear benefit in the context of this story about re-use of passwords.
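To illustrate the "plaintext or weakly hashed" part, here's a minimal sketch of the difference between the sort of storage the story is about and a salted, deliberately slow hash (the parameter choices are illustrative, not a recommendation):

```python
import hashlib
import os

password = b"correct horse battery staple"

# What too many sites still do: a fast, unsalted hash (or worse, plaintext).
# Identical passwords give identical records, and cracking rigs can test
# billions of guesses per second against them.
weak_record = hashlib.md5(password).hexdigest()

# A salted key-derivation function: the salt makes every record unique and
# the iteration count makes each guess expensive.
salt = os.urandom(16)
strong_record = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

print(weak_record)
print(salt.hex(), strong_record.hex())
```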
If I could give one piece of advice to web site designers about password policies, it would be this:
Put the password policy on the log-in page.
I come across so many sites where I fail to log in, have to use the password reset option, wait for the password reset email, go back to the site, try to enter a new password, have it rejected, and only then find out that the reason I couldn't use one of my "normal" passwords was that this site doesn't allow punctuation, or spaces, or swears, or has odd length limits, or wants you to use at least 2 upper case letters, or something equally pointless.
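As a sketch of what I mean (hypothetical policy and field names, not any particular framework), the same policy the reset form validates against can be rendered straight onto the log-in and reset pages:

```python
# Hypothetical example: keep the policy in one place and show it up front,
# instead of letting users discover it by trial and error after a reset.

PASSWORD_POLICY = {
    "min_length": 12,
    "max_length": 64,
    "allow_spaces": True,
    "allow_punctuation": True,
}

def policy_as_text(policy: dict) -> str:
    """Render the policy as a hint to display next to the password box."""
    return (
        f"Passwords must be {policy['min_length']}-{policy['max_length']} characters; "
        f"spaces {'are' if policy['allow_spaces'] else 'are not'} allowed; "
        f"punctuation {'is' if policy['allow_punctuation'] else 'is not'} allowed."
    )

# The same hint goes into the template context for both the log-in page and
# the password-reset page, so nobody has to guess the rules.
page_context = {"password_policy_hint": policy_as_text(PASSWORD_POLICY)}
print(page_context["password_policy_hint"])
```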
It sounds like this report is berating humans for being unable to use a system that's basically unsuitable for use by humans.
Most of the people who claim not to regularly re-use passwords are probably liars. Some are probably using password managers, but not a third of the population. Surely no-one is really remembering a completely unique password for every single device, internet shop, social media site and forum they ever used.
Wow, 5 pages of comments, every single one about just one of the four stories in the round-up article. People go proper nuts about typography. I include myself in that.
I can't help wondering if we shouldn't standardise the whole Roman-alphabet world on 3 fonts and then abolish the whole field of typography. Imagine the time savings. To avoid unending arguments about what the 3 allowed fonts should be, we'd have to pick ones that everyone hates, so probably Times New Roman, Arial and Comic Sans.