Re: Once Flash finally dies
Well just make sure you're sitting down before you Google wasm.
I guess in theory, if they know your contact with an infected person occurred at a specific restaurant where you sat at an adjacent table, they could contact trace on bookings to catch those without the app.
OK, that's a benefit, but on the other side of the equation is that government tracking of citizens is really problematic in a democracy, and if you have fewer people trusting the app then it is much less useful. That's the trade-off they aren't considering. By demanding more information, they will end up with less information in total.
This is why the zero knowledge solutions from Apple/Google are best in my opinion. Widely accessible, no privacy concerns, and it lets users know to isolate and get tested.
Then you have the next tier, like the Singaporean and Australian versions, which to their credit attempt to limit their slurping but are not zero knowledge. They at least require user consent before uploads. Whilst I have some misgivings about how it demands a phone number and how the proximity judgement is performed server side, I don't think those downloading it are crazy or ill informed.
But at the bottom you have the UK version, which doesn't even try to maintain privacy. Not sure why the author saw fit to group TraceTogether/COVIDSafe in with the UK version.
As a general provision, yes, providing it is worded well. There are homes that double as workplaces (eg, some day cares, some dental surgeries, vets, tax agents, etc), and if you take the "this is optional" thing seriously, you have to make sure customers and employees are free to decline without getting the sack. You would also want to be explicit in noting that no onus is on anyone to allow anyone else to view their phones to validate that the application is indeed running.
That's not how the OEMs operated back then (particularly certain Middle Kingdom manufacturers who forked Android 5.1 to add their own shameless iOS clone UI).
Fragmentation in Android is much lower these days, but again, I have yet to find another app which reports itself incompatible with my device.
I'm not sure why you think I'm disagreeing with you or defending their claim. They should not be claiming end to end encryption. Anyone clever enough to implement encryption correctly knows damn well what that term means and TLS between server and client is not sufficient.
Point 1 is absolutely correct. It should not share the key with the server if you are not asking for a cloud based MP4 to be available. It does though, hence the controversy.
Point 2 would work, but that isn't what the feature does. You are trading off convenience of a sharable MP4 link for the complexity of requiring a bespoke player and a way to securely distribute the keys to your recipient. Again power to you if that's how you want to share it.
I agree 3 would be a reasonable compromise.
On your dial in suggestions, point 1, faking your dial in number is orders of magnitude easier than compromising the key.
Point 2 would work of course, but now your company needs an extra 50 phone lines for that once a week call. Similar to point 1, there are some security compromises in proving that the incoming call is the authorised party. It also means that all audio of that call is going through a public phone system, so that's where the weakest chain link is.
I don't disagree with point 3. If I ask for an end to end encrypted call, I expect any feature that cannot operate under that constraint to disable. It is wrong to create a false impression of security.
It could be done that way, but I am describing the feature as it currently exists, where I can email you a URI, you click it, and the MP4 starts playing in your browser. As I described, the server needs the session key for that functionality.
I'm not advocating anyone use that feature, but if you do then that is how it would work. But if you disabled phone-in and didn't record to cloud, then it should avoid sharing the session key with the server itself, or else remove that end to end encryption claim.
Yes, the malware authors could read this and simply reconfigure their name-generating algorithms. Unfortunately for them, they can't push those updated algorithms to the malware in question because those potential domains where they could have put a payload to update to a newer name generation algorithm have been blacklisted for the next 25 months.
I doubt they would need to register all the domains. That's 8000 per day, or about one every 10 seconds. The #*#£heads behind it can just figure out the 6 domains that'll resolve tomorrow at 4:13 pm and leave the payload on whatever fly-by-night AWS site one of them resolves to. Have it delay execution of that payload for a few hours and determining the particular culprit domain might even be tricky.
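The sort of algorithm in question can be sketched in a few lines of Python. This is purely illustrative (real DGAs vary in seed material, alphabet, count and TLD rotation); the point is that operator and malware independently compute the same list from nothing but the date:

```python
import hashlib
from datetime import date

def dga_domains(day: date, count: int = 6, tld: str = ".com") -> list[str]:
    """Derive the day's candidate rendezvous domains from the date alone.

    Both the malware and its operators run the same code, so no
    communication is needed to agree on where to look.
    """
    domains = []
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode()
        digest = hashlib.sha256(seed).hexdigest()
        # Map the hash to a 12-letter lowercase label.
        label = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:12])
        domains.append(label + tld)
    return domains

# Operator and malware independently compute the same list for a given day.
print(dga_domains(date(2020, 5, 4)))
```

Defenders who recover the algorithm can pre-compute and sinkhole every future domain, which is exactly why the blacklist mentioned above works.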
You could even host a JPEG on a site with a steganographically encoded IP address in it, pull that out, and download the payload from there. The site then looks legitimate to anyone not looking deeply at the patterns.
Unfortunately, the defenders need to block it every time. The attackers only need to succeed once.
Why does it take 5 minutes, let alone 45? If it were 30 seconds then no-one would care, but they have clearly understaffed their security team if there is that sort of queue. Either stagger the finish times, employ more security staff, or change the bag search to a random search. Or provide a locker system where staff can securely store their personal effects outside the stockroom security perimeter. What kind of manager thinks this is a "normal" request of an employee?
According to Aunty, the reviewer in that case wasn't a customer. But an interesting side note in Australia (though not necessarily related to that case) is that truth is a defence you could run. That is to say, if you can back up the claims in your review with evidence, the company may find themselves with a judgement against them. A nice tagline in your review pointing out how these facts have now been established in a court of law would be epic.
From NSA advisory:
Certificates with named elliptic curves, manifested by explicit curve OID values, can be ruled benign. For example, the curve OID value for standard curve nistP384 is 1.3.132.0.34. Certificates with explicitly-defined parameters (e.g., prime, a, b, base, order, and cofactor) which fully-match those of a standard curve can similarly be ruled benign.
So basically the unexplained magic numbers in the published standard are totally secure.
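The check the advisory describes is essentially a whitelist comparison. A minimal sketch (`curve_looks_benign` is a hypothetical helper; the OID strings are the standard SEC 2 / ANSI assignments, and a real validator must also perform the explicit-parameter comparison the advisory mentions):

```python
# OIDs identifying common standard named curves (SEC 2 / NIST assignments).
NAMED_CURVE_OIDS = {
    "1.2.840.10045.3.1.7": "nistP256 (prime256v1)",
    "1.3.132.0.34": "nistP384",
    "1.3.132.0.35": "nistP521",
}

def curve_looks_benign(oid: str) -> bool:
    """A cert naming its curve by a recognised OID can be ruled benign.

    Certs spelling out explicit curve parameters instead need a full
    field-by-field match against a standard curve before being trusted,
    which is the validation whose absence CVE-2020-0601 exploited.
    """
    return oid in NAMED_CURVE_OIDS
```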
> It is quite hopeless, but removing a valuable indicator just because 85% don't pay attention means that the 15% that do will have to do without.
I would argue that it is unimportant how many noticed it was there. Rather, it only matters how many of those 15% would notice when it later goes missing, and of that tiny fraction of the 15%, how many would consequently avoid the site.
I would also argue a direct negative of EV is that the same process introduces delays in reissuing a cert that you need to revoke.
In Australia, we have the Australian Consumer Guarantee which covers all products and services.
This states that products must:
"match descriptions made by the salesperson, on packaging and labels, and in promotions or advertising"
"be fit for the purpose the business told you it would be fit for and for any purpose that you made known to the business before purchasing"
Samsung is in rather a spot of bother if they have been rejecting repairs on the basis of them getting wet.
Australia has the same requirement to not weaken encryption yet somehow provide technical assistance.
All they need to do is make sure their laws override the very honourable laws of mathematics. (Quoting someone who pushed those laws of mathematics over 30 times before losing, with the irony being that the mathematics behind those 30 polls now seems very questionable in light of recent events.)
In the days of credential stuffing, BYOD, and every man and his dog being allocated email irrespective of their actual role in the business, I don't think that merely having a VPN layer at the edge solves the problem. Even an internal-only terminal services machine is at risk from a wormable exploit.
Sounds like you're presuming that the same clown who wrote the server side code wrote the stored procedures. My money would be on these being the responsibility of separate teams, so rather than use an efficient for-purpose stored procedure, they improvised with what was already there. As for performance testing, that's another team, right? As for showing any form of initiative, if they ever had any then I am convinced their training involved beating it out of them.
Won't be shedding any tears though. Stupid design calls abound, like not being able to switch your mic/speakers to a headset from within a conference call without minimising it, going through the main UI, and hitting the config from there. Or the times it just drops the audio quality to the point you have to fire up TeamViewer. Or the times you click join and it just hangs. It's Skype in name only to me; nothing like the game changer Skype originally was.
Another take on this problem was pointed out by "Uncle Bob" in a lecture I saw but am now too lazy to find the link for.
The number of people you would loosely define as "computer programmers" has roughly doubled every five years since the 1960s. Or, to put it a more frightening way: about half the code warriors involved in every piece of software you might buy today have less than five years' experience. Many haven't yet been burned by the shortcuts they think they can get away with, and many in that bracket aren't yet at a level where they can push back against the PHBs demanding dangerous processes (or, more usually, the lack thereof).
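The arithmetic behind that claim is simple: if the population doubles every five years, the newest five-year cohort is, by construction, half of it. A toy calculation (the base year and head count are arbitrary illustrative figures; the ratio is independent of them):

```python
def programmers(year: int, base_year: int = 1960, base_count: int = 10_000) -> float:
    """Head count under the 'doubles every five years' rule of thumb."""
    return base_count * 2 ** ((year - base_year) / 5)

# Whatever the base figures, the share with under five years'
# experience is always the newest doubling: half the field.
now, five_ago = programmers(2020), programmers(2015)
print((now - five_ago) / now)  # 0.5
```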
And at the risk of defending the indefensible, also don't forget to take notice about the KPI structure such employees are working under. Do they need to close X tickets per day? Do they need to maintain an awaiting investigating queue below Y? Does the employee who closes the most tickets get singled out for either praise or even a bonus? Does the employee who takes the longest suffer poor performance reviews or have to sit with some stuffed toy sloth on their desk that week?
If any of these or similar hare-brained schemes are in place, anticipate, no, expect employees to play their own games to protect their own wellbeing. So if you come along with one of those "it's annoyingly slow but still technically working" style tickets, expect the incentives to influence the behaviour. Keeping the payroll department efficient is not only never directly incentivised; in this case it would almost certainly hurt their measured KPIs.
If manager types spent more time reflecting on KPI side effects and less on other reports, they would objectively run a better operation. Of course, managers have their own KPIs against which they are playing their own games, so it's turtles all the way down.
I have no doubt that they can throw down 4K at a pretty impressive frame rate using what we used to call powerful servers with lots of GPUs in a data center, but now must call cloud.
The real question for most gaming is how long it takes for a player action to be noticed by the game, and for that you are likely north of 5-10 ms, because physics.
The second group churns through the advice from as many as required of the first group until they get advice that, when held at a distance and eyes squinting in just the right way, can form a set of words that doesn't entirely rule out the position already held by the second.
A human who was also tasked with capturing information about the vehicle's performance on a device as it drove. If they had her in the car solely with "your job is to monitor the decisions being made by the car and intervene if necessary", your comment would be reasonable. But her job required her to also be a data entry clerk. As such, it was perfectly foreseeable that her attention would from time to time be diverted. If the car cannot operate safely without a human supervisor, then they were negligent in not having a human supervising it at all times.
> You keep it at least hashed
A hash is a cryptographic one-way function. Knowing the hash, it is computationally infeasible to recover the original string without brute-forcing candidate strings and looking for one that gives the same hash value. Being able to vomit back the original password into a password box is kind of a big deal for a password manager.
> or XOR-ed with some other binary
So where do you put that binary so the attacker can't do the same? Why don't you just put the passwords there instead?
Also, what would happen if you XORed the obfuscated passwords together with other obfuscated passwords from that same secret binary? What can you learn about the key? What if you discover just one of those passwords in a pastebin dump and XOR the obfuscated password with the known one? Oh look, the secret binary in the clear. Now we can read all the others too.
Fun isn't it?
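Both attacks fit in a few lines. The key and passwords below are made-up placeholders; the algebra is the whole point, since XOR with a reused key cancels:

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"some secret binary"        # the hypothetical obfuscation key
pw1, pw2 = b"hunter2", b"trustno1"
ob1, ob2 = xor(pw1, key), xor(pw2, key)

# XOR two obfuscated passwords together and the key cancels out,
# leaving the two plaintexts XORed with each other:
assert xor(ob1, ob2) == xor(pw1, pw2)

# Learn just one plaintext (say, from a breach dump) and the key
# falls out, unmasking the overlapping bytes of every other password:
recovered = xor(ob1, pw1)
assert recovered == key[:len(pw1)]
assert xor(ob2, recovered) == pw2[:len(recovered)]
```

This is the classic two-time pad failure: XOR is only safe when the key is truly random, as long as the data, and never reused.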
Even something as "simple" as clearing the secret out of memory is much harder than you might think. Depending on the runtime involved, you may be relying on a garbage collector to actually overwrite the memory, and your control over that process is limited. And that's before you consider whether the secret might still be sitting in the CPU caches, which, as recent vulnerabilities show, can be an oracle.
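A Python sketch of the distinction (the same caveat applies to most garbage-collected runtimes, and even this in-place overwrite says nothing about copies in caches, swap, or core dumps):

```python
secret = bytearray(b"correct horse battery staple")

# A mutable buffer can at least be overwritten in place:
secret[:] = bytes(len(secret))
assert all(b == 0 for b in secret)

# An immutable str cannot be scrubbed: every operation copies it, and
# only the garbage collector decides if or when the old copies' memory
# is ever reused, let alone overwritten.
leaky = "correct horse battery staple"
shouty = leaky.upper()  # yet another copy now sits somewhere on the heap
```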