Evidence of Israel's nukes
If Israel didn't have nukes, they wouldn't have kidnapped whistleblower Mordechai Vanunu and locked him up for nearly 2 decades, with 11 years in solitary.
If the technical attack which brought the system down can be repeated on the restored system, it matters little what medium the backup bits are restored from. Defence in depth now also means having the forensic capability with a rapid enough turnaround time to be able to figure out the nature of the technical attack and prevent it recurring. Otherwise, your restored system can be made subject to the same fate as the cracked one. If the technical capability you have is too slow to figure this out, then your business being offline for an extended period becomes equivalent to business failure or a loss of reputation you can't afford, regardless of your ability to bring the system up exactly as it was before.
Think about why the Sony PlayStation and Travelex networks were down for as long as they were and join the dots. I don't think it's realistic to imagine they didn't have backups, but it is realistic to imagine they didn't have access to the forensic capability needed to prevent recurrence.
"The real issue today could be privacy - when NAT is not used it would be far easier to map an internal network by observing the packets addresses, and if addresses are not generated with a good random algorithm, and changed after some time, each device has an observable unique ID."
In that situation use DHCPv6 with a suitably short lease time on IPV6 addresses to internal hosts, and a suitably random address allocation algorithm, in preference to SLAAC. But this won't protect your host privacy against higher level attacks which are IP version independent, such as HTML Canvas browser recognition etc, and if you're that concerned about privacy you should probably be using TOR in any case.
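For the curious, a minimal Python sketch of the kind of randomised interface identifier this relies on, in the spirit of RFC 4941 temporary addresses (the documentation prefix and function name are illustrative; a real implementation also manages address lifetimes):

```python
import secrets
import ipaddress

def random_temporary_address(prefix: str) -> ipaddress.IPv6Address:
    """Combine a /64 prefix with a randomly generated 64-bit
    interface identifier, in the spirit of RFC 4941 temporary
    addresses (a real OS implementation also tracks lifetimes)."""
    net = ipaddress.IPv6Network(prefix)
    assert net.prefixlen == 64, "expects a standard /64 prefix"
    iid = secrets.randbits(64)
    return net[iid]  # indexing into the /64 combines prefix | iid

addr = random_temporary_address("2001:db8:1:2::/64")
print(addr)
```

Each call yields a fresh, unpredictable host part, which is the property that defeats long-term tracking by address.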
A penny black is a stamp collector's wet dream. It's a human-created information artefact issued as a limited edition - no more will be made, every stamp collector wants one, and many are willing to trade other items of value for one.
Bitcoins have similar attributes, with one additional feature. They can be used for collecting digital ransom demands relatively anonymously compared to sending a penny black through the post. Demand for Bitcoins and the price of these can therefore be raised by infecting more computers with malware which encrypts valuable data stored on them, storing the keys offsite and forcing outsiders to the Bitcoin economy to choose between losing their data forever or buying into this malignant cancer.
So they've discovered you can't run untrusted code at full performance on a modern CPU.
Either they hamstring all code running on such CPUs to keep systems more secure, or OS and application designers and system operators figure out some means of deciding which processes are trusted enough not to steal secrets - allowing those to run at full performance while less trusted processes are not.
Clearly browser tabs should not be allowed to renice themselves to a higher priority level, and lower priority levels should be scheduled in such a way that restricts their ability to exploit these weaknesses.
How much more convenient for ICANN to obtain one of these under the laws of the State of California, compared to the more robust but tedious accountability and consensus requirements by which the ITU decides international telephone dialling prefixes. Fortunately, installing a new DNS root on a system is just a routine software update away. So if ICANN do anything evil enough to cause sufficient annoyance, such as issuing a bogus DNSSEC cert for a national TLD because the NSA tell them to via judicial warrant, the ICANN role in our currently coherent naming hierarchy (which exists because everyone agrees to let them run their money printer at full speed) is at least technically disposable.
Whatever concerns you have about the way particular Unix-like systems manage background services, I think you miss the point of Tails.
Any software beyond 1970s levels of complexity isn't fully auditable and will have a bug count proportional to its millions of lines of code; a proportion of these bugs will be security issues, many of them undiscovered. Let's assume a high proportion of the Tor nodes operating are likely to be spying on network traffic. The human so operational-security minded that they can avoid leaving any trace of a real-world identity behind a coherently organised digital enterprise probably hasn't been born. So absolute security is unlikely to be achievable against an extremely well funded and determined adversary, as proven by Ross Ulbricht's arrest and conviction despite his best efforts to cover all of his traces.
As I understand it, Tails and Tor don't attempt the impossible, but instead address the following genuinely interesting and challenging engineering problem:
When it comes to online privacy, which risks are high enough that they need to be managed, and how can the cost to attackers be raised by the highest multiple for an acceptable level of inconvenience to a technically adept user?
I was emailed by Let's Encrypt to say that I was using the old protocol. This was the case on a Debian stable (stretch) server. Installing the new client just needed adding the backports repository to /etc/apt/sources.list and installing the backported certbot package over the outdated one in the main stretch repository. That's why, for some purposes, you should give an email address to those upon whose software and services you depend.
It wouldn't surprise me if the SFTP and RSYNC _protocols_ are also inherently capable of something similar. If you don't trust that the client or server software hasn't been compromised, some kind of Mandatory Access Control is likely needed to limit access to the files and folders the user interface says should be accessed. Protocols for transferring or synchronising files are designed to transfer files, with security handled by authentication against user accounts. But that approach is discretionary, not mandatory: DAC tends to grant access to everything the logged-in user account can reach.
Tightening up on this in general would require that user interfaces communicate MAC policy before passing file transfer requirements to the back end software which actually _does_ the file transfer. But that only really moves the problem to whether the user interface software is trusted to restrict object access to the finer grained access as intended.
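As a user-space illustration only (not a substitute for kernel-enforced MAC), a guard of the sort a trusted front end might apply before passing a request to the transfer back end could look like this Python sketch; the function name and paths are hypothetical:

```python
from pathlib import Path

def within_allowed_root(requested: str, allowed_root: str) -> bool:
    """Return True only if the requested path, after resolving
    symlinks and '..' components, stays under the allowed root.
    A user-space guard, not a substitute for kernel-level MAC."""
    root = Path(allowed_root).resolve()
    target = (root / requested).resolve()
    return target == root or root in target.parents

print(within_allowed_root("reports/2021.csv", "/srv/share"))  # True
print(within_allowed_root("../../etc/shadow", "/srv/share"))  # False
```

Of course, as the paragraph above notes, this only helps if the component doing the check is itself trusted and can't be bypassed by talking to the back end directly.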
For a couple of years before I got cable broadband and started renting a cloud server, Demon's Internet service enabled me to operate automated mail discussion list distributions which propagated 4 times daily using a crontab-based dialup routine together with Linux, Sendmail and Majordomo. To avoid spending too much on the then per-minute dialup charges I had a forced timeout after 20 minutes or so. This worked fine until one of my mailing list users tried sending a 9MB attachment to one of my lists, which had an effect similar to an elephant repeatedly getting stuck in a revolving door, blocking all other traffic.
Another problem was the 'demonic' domain name, solved by registering the driveout account, which formed the subdomain driveout.demon.co.uk.
You generate these (with a probability very close to 1) every time you create an RSA keypair. The primes you're likely to generate are probably unknown in the sense that there are very many more of them, all easily discoverable, than the number of atoms in the observable universe - let's say that's 10**82. Start with 2048 bits of random noise output from /dev/random seeded with a good noise generator, make the last bit 1 so it's odd, then test it a few times with Fermat's little theorem and Miller-Rabin tests; if it's not prime add 2, and retest until it is. There are approximately 2**2037 primes lower than 2**2048, which is a very large number compared to the number of atoms in the universe. So unknown primes are very numerous and easy to find, and those useful for crypto are very much smaller than the largest known primes. The latter have millions of bits, while those useful for cryptography typically have thousands.
If the number of atoms in the universe can be written down in a few hundred bits, it follows that the set of primes you're likely to generate for cryptography couldn't be stored even if every atom in the universe were turned into a memory cell each holding one of them.
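The procedure described above - random odd number, probabilistic primality tests, add 2 and retry - can be sketched in standard-library Python. This uses 512 bits rather than 2048 just to keep the example quick; real RSA key generation imposes further conditions on the primes chosen:

```python
import secrets

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin primality test with random bases,
    after trial division by a few small primes."""
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % small == 0:
            return n == small
    # write n - 1 as d * 2**r with d odd
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = secrets.randbelow(n - 3) + 2
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits: int = 512) -> int:
    """Start from a random odd number of the requested size
    (top bit set), then step by 2 until a probable prime is found."""
    n = secrets.randbits(bits) | (1 << (bits - 1)) | 1
    while not is_probable_prime(n):
        n += 2
    return n

p = random_prime(512)
print(p)
```

Even at 2048 bits this loop completes in well under a second on ordinary hardware, which is the practical upshot of primes being so plentiful.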
With anything as complex as a multi-CPU chip and OS kernels capable of using it efficiently for highly concurrent loads, there will inevitably be performance versus security trade-offs with this class of bug (including Spectre, Meltdown and similar). That means that not all programs running on a system (particularly a multi-tenanted data centre server) should have access to certain kernel data structures, or the ability to hammer the CPU to the point where it gives up secrets exposed by speculative execution.
So it seems to me the developers of these systems are either going to end up compromising the ability of a system running trusted processes to operate at full performance when they patch these bugs, or they're going to compromise the performance of untrusted processes and have to let the system know which processes are trusted and which are not.
"As politicians have only one tool with which to Do Something™ (i.e. they can legislate)"
They have already done that. Under section 3ZA of the Computer Misuse Act, "The maximum sentence on indictment is 14 years, unless the offence caused or created a significant risk of serious damage to human welfare or national security, as defined in Section 3 (a) and (b), in which case a person guilty of the offence is liable to imprisonment for life."
What they don't seem to have done yet is carried out significant research or spending on safe drone disabling or capturing technology.
Section 3ZA of the Computer Misuse Act, which covers attacks affecting national infrastructure, has a maximum penalty of 14 years. Supplying tools is covered separately: someone designing and supplying a drone to be pre-programmable, with stealth capabilities and the ability to ignore exclusion zones, knowing it would be used in an incident of this nature, would be guilty of a section 3A offence - current maximum 2 years.
This product detects them based on RF emissions. A pre-programmed drone which doesn't need a controller won't need to emit RF. I guess the next generation in drone stealth capability will involve the transfer of technology (e.g. radar non-reflective materials) that goes into modern fighter jets. https://en.wikipedia.org/wiki/Radiation-absorbent_material
"the only way to stop the madness is an outright ban"
It's not generally easy for legislators to ban arbitrary activity on grounds of electricity waste. The fact that the main or only use case of cryptocurrencies is money laundering is another matter entirely. Closing down the cash-for-cryptocoin exchanges as accessories to money laundering would probably kill the rest of the cancer, including the game-coin-for-cryptocoin exchanges.
If I'm wiped up off the road following an accident, I'd quite like the A&E clinicians to be able to access my record and fast. If I see my GP, likewise - these authorisations are very obvious and even implicit. But I'd also really like to be able to know after the event, who in the NHS has accessed my record and when and why they did so. If my data has been anonymised to make this available for research I'd also very much like to know to whom and under what terms and for what purpose access was given, and also to be able to know exactly how my data was processed in order to anonymise it, so I can know if this anonymisation was likely to be effective.
This is because the best policing and prevention of misuse of this highly sensitive, personal and confidential data is likely to be similar to how the banks are policed - we check for unauthorised payments if and when we go through our own bank statements line by line. For much the same reasons we should be able to know who has accessed our medical record, how and why.
"What's not to like?"
I use their certs on my HTTPS hosted sites and this meets my needs and those of my guests. However, I'd be more than a bit concerned if something looking just like the domain name of my bank, but differently Unicoded, appeared with a padlock symbol certificated on the basis of someone being able to put an arbitrary file onto the web server for whatever the domain name was. With Unicode characters within domain names, many different text strings showing the URL next to the green padlock symbol can have the same appearance as the legitimate domain name.
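A quick illustration of the homograph problem in Python: a domain with a Cyrillic first letter renders like the Latin one but is a different string, and its punycode form (which is what a certificate actually covers) is visibly different. The spoofed string here is a made-up example:

```python
# Two strings that can render identically: Latin "apple.com"
# versus one whose first letter is Cyrillic "а" (U+0430).
latin = "apple.com"
spoof = "\u0430pple.com"   # Cyrillic small A, visually similar

print(latin == spoof)       # False: different code points
print(latin.encode("idna")) # plain ASCII passes through unchanged
print(spoof.encode("idna")) # the punycode "xn--" form reveals the spoof
```

Browsers that display the punycode form for mixed-script domains make this class of attack much easier to spot.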
Extended Validation is supposed to make this kind of business name impersonation hack more difficult.
"Sorry, you've sent us an MS Excel (.xlsx) file: we don't use those. Please resave the file in the correct OpenDocument (.ods) format, or better still for future ease of use, import it into LibreOffice Calc and use that application instead."
If you accept and run macros within office documents received from random senders outside your organisation, then you deserve to get infected and hacked by whatever's coming to you. If the office documents don't have or need to run macros, they will almost always render fine in LibreOffice.
It's considered safer for drug buyers and vendors to meet and transact on the dark web than in person. A vendor has a reputation to lose if the product doesn't arrive or doesn't do what's advertised. Then there's the avoidance of turf wars, which, without recourse to civil law, tend to involve debt collection and contract enforcement through violence or threats of it. The same considerations applying to illegal drugs will also apply to dark web malware and hacking services marketplaces. The possibility of anonymous payment using Bitcoin makes this all possible - to the extent that, despite significant inherent risks, hassles, costs and delays, using this system for criminal payments remains worthwhile. Money laundering using cash is much riskier, for similar reasons.
Clearly the purchaser needs to check the reputation of the vendor for reliability of delivery and quality of goods and services as with any online purchase.
'The issue of IPV6 always seems to come down to "what's in it for me?". '
If you don't care about the feudalisation of the internet and serfdom in respect of having no effective ability to influence or decide who knows what about you, then IPV6 has little to offer you. Efforts such as the Freedombox will come to nothing without the ability to install within networks which allow both client and server connections.
The alternative is continued degradation of the Internet in which most connections are client only, due to address starvation, in which getting anything done requires giving all your data away to cloud providers who mediate all your connections and sell the data they gather in the process to the highest bidder.
This has to be open source and has to be developed in the open, and with reproducible build capabilities * so that anyone interested can verify it or collaborate with any number of interested others to share and discuss the verification of it. Anti-virus on closed platforms has to operate with root and kernel level access due to its very nature. Having a consortium of universities or an audit "partner" able to inspect code based on vendor criteria in the forum offered and managed by the vendor doesn't guarantee that the urgent update you need to defend against a recent and critical threat has been independently verified.
* for why reproducible builds are required see: https://reproducible-builds.org/
This influences rational behaviour. If my local sysadmins ask us to leave several thousand machines running over the weekend for "essential security updates", it makes you wonder what else they're doing with all that machinery. This goes all the way from people accepting a free app which carries a mining trojan, to viruses running on botnets, to teenagers wasting their parents' electricity.
This article conflates and confuses 3 entirely separate property rights which have nothing to do with each other, other than the ridiculous grouping term "intellectual property" as if someone could "own" an idea.
The only natural property right is what a bandit, warlord or crook seizes by force and defends by force. That is how it was before the rule of law. In a democratic society law only works by consent of the governed, and if the public interest grants private property rights to be defended at public expense, the public interest requires compensation for the cost of this, both in relation to the cost of exclusion of those fenced out, and in relation to the cost to the public purse of maintaining legal boundaries around private property. If the land registry records your ownership of a plot of land with a dwelling on it, then you get to pay taxes to your local authority, and that's how it should be.
Those claiming otherwise demand from us that those dispossessed subsidise the public cost of private property.
Copyright discussion has traditionally been one-sided, due to the inability of politicians to oppose this uncompensated land grab by the man who buys ink by the barrel and still get elected.
Patents are good in the unusual, classic case of an inventive idea that no-one else would have been at all likely to come up with. But most patents granted nowadays are nothing of the sort: they are artificial monopolies maintained at public expense, raising the price of any mildly innovative product for all of us. Patent offices make their money from patent applications, and for applicants to keep applying in large numbers a proportion of bad patents have to be granted - making most patents bad. We've given the patent offices a licence to print money, and given such a right, who wouldn't run their printing press at full speed?
The only one of these 3 areas of law which works in the public interest concerns trade marks. If John Smith has built a reputation at considerable effort and expense making and selling "John Smith Widgets" (TM), it's entirely reasonable that someone else shouldn't be able to adopt his name and pass off their inferior widgets as if they were his. This shouldn't, and generally doesn't, prevent another John Smith applying his name to a different trade.
Interestingly enough I supervised a student project last year investigating post-quantum cryptography algorithms. It's basically about arithmetic. I'm not a mathematician myself, but the student already had a maths degree so was qualified to look at and compare current proposed post-quantum schemes. My main problem was understanding what she wrote well enough to give a fair mark for her paper. This promises to solve a big problem if quantum computing ever becomes a reality and we don't want to have to patch this issue very hastily as that's likely to leave very many implementation holes we'd rather not create in the first place. So it's a timely area of maths research.
For non-mathematicians, public key cryptography all hinges on a set of numbers on which arithmetic can be performed to make other numbers. Let's call these numbers by their RSA convention: M, C, E and D. (RSA actually uses two numbers, E and N, together as the public key, but I'll just call it E here for simplicity.)
The algorithm needs a way to transform a randomly generated number M (M is for message, though it's actually used to encrypt the real message: it's a random 128- or 256-bit number used as an AES symmetric session key - we use symmetric algorithms for the heavy lifting and public key algorithms to protect the symmetric keys) into an encrypted number C (for ciphertext), so that using a public key E we can say:
C = encrypt(M,E)
such that the private key D can be used to convert C back into M.
M = decrypt(C,D)
If the public and private keys E and D are generated together as a related pair, and knowledge of C and E can't be used by an eavesdropper to obtain M or D - even with a large working quantum computer - then a post-quantum crypto scheme with the above arithmetic properties will retain the useful properties of RSA.
It's also useful if the scheme works in the opposite direction, so that encrypting a hash H of a message into S using the private key D can be reversed using the public key E to regenerate the hash. Such a scheme can be used for message signing and signature verification as well as message encryption.
S = sign(H,D)
H = verify_signature(S,E)
So we've got 4 functions, each of which takes 2 parameters as input and generates a single output. How we use the inputs and outputs outside of these functions stays the same; it's what's inside the encrypt, decrypt, sign and verify_signature functions which differs between the various post-quantum algorithms.
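To make the four-function shape concrete, here's a toy textbook-RSA instance in Python, with deliberately tiny, insecure numbers and no padding - purely to show how M, C, E, D, H and S relate. Any post-quantum scheme would replace the internals while keeping this interface:

```python
import hashlib

# Toy textbook RSA purely to illustrate the four-function shape.
# Real systems need large random primes and padding (OAEP/PSS);
# these numbers are deliberately tiny and insecure.
p, q = 61, 53
N = p * q                      # 3233
phi = (p - 1) * (q - 1)        # 3120
E = 17                         # public exponent
D = pow(E, -1, phi)            # private exponent (2753)

def encrypt(M, pub):
    e, n = pub
    return pow(M, e, n)

def decrypt(C, priv):
    d, n = priv
    return pow(C, d, n)

def sign(H, priv):
    d, n = priv
    return pow(H, d, n)

def verify_signature(S, pub):
    e, n = pub
    return pow(S, e, n)

public, private = (E, N), (D, N)
M = 65                          # stand-in for a symmetric session key
C = encrypt(M, public)
assert decrypt(C, private) == M

H = int.from_bytes(hashlib.sha256(b"message").digest(), "big") % N
S = sign(H, private)
assert verify_signature(S, public) == H
```

Note that the callers only ever see the four functions and their two-in, one-out shape; everything scheme-specific lives inside them.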
Hence the larger and more complex the apparatus, the less likely it is you've been able to fully verify that it doesn't contain any unwelcome secrets or hidden backdoors making the output observable, predictable or capable of manipulation by unwelcome parties. A simple electronic circuit you've built yourself, involving a pair of zener diodes as a noise source followed by some analogue amplification and digital gates to ensure an even balance of 1s and 0s, might be as good as it gets in this particular space. If you have to buy hardware made by someone else, paying for it in cash in person makes it less likely to have been replaced within the delivery chain. IBM used to advise mainframe managers to use dice for system passwords, but we need more entropy for long-term and session secrets nowadays. The hardware RNG vendor may be fully security audited, but what about the delivery chain?
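On the dice point: each fair d6 roll contributes log2(6), roughly 2.58, bits of entropy, so the arithmetic for modern key sizes is simple enough to sketch:

```python
import math

def rolls_needed(target_bits: int, sides: int = 6) -> int:
    """Number of fair dice rolls needed to collect at least
    target_bits of entropy: each roll yields log2(sides) bits."""
    return math.ceil(target_bits / math.log2(sides))

print(rolls_needed(128))   # 50 rolls of a d6 for a 128-bit secret
print(rolls_needed(256))   # 100 rolls for a 256-bit secret
```

Fifty honest rolls for a 128-bit key is tedious but entirely practical, which is presumably why diceware-style schemes persist.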
" ... only 1.88% of users have beep installed. Only 0.31% use it regularly "
That's a very good example of why you shouldn't apt-get dist-upgrade forever (or your distribution's equivalent). This process leaves obsolete packages installed which you probably no longer want and which seem destined to come back and bite you when you least expect it. Doing a full and clean install occasionally, apart from maintaining knowledge of how to configure the stuff you've become dependent upon, will keep a system in a more sane condition.
They've been trying to push GM frankenfoods on us for years based on the easily refutable lie that the world will starve if we don't all surrender and eat it. Note that this yeast strain will presumably be licensed so breweries will either be prevented from growing their own yeast in the traditional manner, or will have to pay a regular monthly license fee in order to do so. The parts of Herefordshire and Kent where they grow hops look environmentally rich and diverse to me.
Of course the employees of the evil corporation which wants to foist this on breweries and drinkers can be encouraged to say it tastes good. I guess they would, wouldn't they.
It may be appropriate to drag people thousands of miles in relation to terrorism offences or murders carried out where they're to be extradited to. But justice is not served by doing this for alleged crimes where the accused has no other connection with the place where they were alleged to have occurred. The UK courts should first decide whether the accuser has enough evidence to prosecute the case locally, refusing extradition if not, and whether the claimed damages have been inflated based on the cost of securing systems which should have been secured before the alleged offence occurred. The treaty we have with the US seems very one-sided and needs to be torn up and renegotiated.
Various articles are referencing the use of vast botnets, malware, adware or mobile apps to mine cryptocurrency. The externalised cost is your CPU running hotter and your mobile battery being exhausted sooner. Then there's what the BOFHs do with them, and your employer's electricity bill, when they ask you to leave your workplace computers on all weekend for 'software updates'.
Any cryptocurrency mining operation which gets someone else to pay the electricity bill will outcompete those who have to pay the market rate. How to burn the planet sooner rather than later.
And he'd run the printers at full speed, wouldn't he? That's effectively what a patent-granting office has: each patent is a monopoly, and the office collects application fees - more of which will be paid the more likely it is that an application will be granted.
Low quality patents are a cost for everyone else. You run a small business which a large business says treads on an obvious patent? You can't afford the few million in legal fees to have it questioned? Your business now has to pay tribute, goes bust, or can only afford to continue if taken over. And if you pay for a product or service which requires patent licences, it's going to cost you more - we all pay more for such products and services.
I'd start with Postfix if you've never managed a MTA before. Simple doesn't seem to be a possibility in this space, but Postfix is relatively easy to setup if you just want to receive and relay for local mailboxes and handle transactional email from local webapps. If your human users want IMAP/POP3 you probably want Dovecot also.
I do conditional post-processing on headers, using Postfix as my MTA, with entirely separate programs executed via the /etc/aliases mechanism. If I wanted to do selective processing pre-queueing, I'd probably use the Postfix Milter interface. Better in my view to modularise what you need into different programs, but the usual stuff lots of other sites want, including ClamAV and DKIM, seems reasonably straightforward (compared to Sendmail) to integrate.
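A minimal sketch of the kind of alias-driven header processing described, using Python's stdlib email module; the alias entry, script path, domain and tagging policy are all made-up examples:

```python
"""A header-conditional filter of the sort an /etc/aliases entry
can pipe messages into, e.g.:

    mylist: "|/usr/local/bin/filter.py"

(the alias name, path and example.org domain are illustrative)."""
from email import message_from_string

def process(raw: str) -> str:
    msg = message_from_string(raw)
    # Example policy: tag the subject of mail from outside the domain.
    if not msg.get("From", "").endswith("@example.org"):
        subject = msg.get("Subject", "")
        del msg["Subject"]
        msg["Subject"] = "[external] " + subject
    return msg.as_string()

print(process("From: someone@elsewhere.example\nSubject: hello\n\nbody\n"))
```

In real use the script would re-inject the modified message (e.g. by piping it to sendmail) rather than printing it, and would exit non-zero to signal delivery failure back to Postfix.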
I suspect early use cases might include vertical applications needing a higher level of security than is otherwise available, sufficient to make it worth installing dedicated client software - e.g. a bank or other financial trading platform which makes you use its own browser or plugin. But if an application provider can achieve that, I'm unsure that much better security is obtainable using DNSSEC than the application simply shipping with a restricted CA list.
So if the benefits of DNSSEC will only occur when enough people use it we're down to a chicken and egg problem. There must be some benefit for a registrar which offers support in the sense more technical site operators who care about security will migrate to them from their competitors.
Personally I think patching existing systems is likely to involve using software to increase timing entropy, blocking these side channels where the access control context calls for it. So processes already sandboxed from each other, or owned by different users, shouldn't be able to read each other's memory, and will run slower as a consequence.
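One way to picture "increasing timing entropy" in software is to coarsen every observable duration to a fixed-size bucket, so fine-grained timing differences are hidden from an observer. A toy user-space sketch of the idea (the bucket size is arbitrary, and this illustrates the principle rather than being a real mitigation):

```python
import time

def run_with_padded_timing(fn, bucket_seconds: float = 0.01):
    """Run fn() and then sleep until the next multiple of
    bucket_seconds, so callers observe a coarsened duration
    rather than the true one. A user-space illustration of
    timing-channel coarsening only."""
    start = time.monotonic()
    result = fn()
    elapsed = time.monotonic() - start
    buckets = int(elapsed / bucket_seconds) + 1
    time.sleep(buckets * bucket_seconds - elapsed)
    return result

value = run_with_padded_timing(lambda: sum(range(1000)))
print(value)
```

The trade-off the comment describes is visible here: every call now costs at least one full bucket, which is exactly the performance price of hiding the fine-grained timing.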
This is just a patch. If the deeper problem exposed is that proprietary hardware can't be trusted any more, due to its combination of obscurity and complexity, then open source hardware might offer a solution for users and applications where security matters enough that they'll initially pay more for the same raw performance, until scale economics enable this approach to compete against established hardware designs. The RISC-V open source hardware project seems to be making useful progress.
It's a question of whether it's better for a muggle to learn to be more like a wizard by risking key management mistakes or to risk getting screwed by an incompetent or untrustworthy registrar which holds the keys for them. I guess if the muggle who wants looking after has the sense to pay for the less cheap registrar who relies on income from customers to not want to screw them over, that's their choice.
What's needed is for the reputable registrars to provide customers with more useful help in setting up DNSSEC in such a way that the customer retains the zone signing private key, which never exists on the DNS servers that serve the public key and signed records. The DNSSEC standard also needs signed assertions that names not present in a zone really do not exist - the NSEC and NSEC3 record types provide this authenticated denial of existence.
"I am amazed at the decision, I think this is the first time in history that a UK judgement has prevented extradition to the US, but I might be wrong."
You are wrong. Gary McKinnon's case had various similarities to this one. https://en.wikipedia.org/wiki/Gary_McKinnon#Extradition_proceedings
"He will need to suspect anyone coming within a foot of him in the street of having a rag with chloroform and a car parked around the corner to take him to a "private" Cessna parked at a nearby airport. Everywhere worldwide. UK included."
Depends on whether the US want us to tear up the treaty that allows lawful extradition. If they commit crimes of assault and kidnap on UK soil because they lose an extradition case in the UK courts, this would make any future UK extradition legal cases and the treaty that requires these moot, regardless of whether these concern a silly hacker or a genuine terrorist.
"Design changes can fix most of the weaknesses that allow Spectre and Meltdown, but it will take them a while to filter through to live systems."
It's always been reasonable for processes running with the same userid to share information from an access control point of view - you can always have more userids or introduce the appropriate mandatory access controls. If you want to create better boundaries between processes to restrict information sharing, operating systems already have plenty of discretionary and mandatory access controls which are supposed to give software designers the ability to achieve this. It is appropriate to close off these side channel vulnerabilities where processes are already running in different security contexts. It probably isn't appropriate to hit performance where the software design already runs things within the same security context and available access controls which could be used aren't being used.
We expect hypervisors and sandboxed applications to be contained against side channel information leaks, so the performance hit of containment needs to be accepted as part of the processor and operating system access control design.
Geocities was bought in 1999 for $3.57B and switched off 10 years later. A service with no revenue stream apart from paltry advertising, however temporarily popular, could only have been sold for that price if someone making the decision imagined it could become a monopoly capable of being monetised at some point.
Creating a production as opposed to demonstration/research app using such a service is likely to be high risk unless you can know in advance what it's going to cost your users and how they will pay for it. If it becomes a must-have monopoly, your heavy users will be price gouged or have to stop using what they've come to depend upon. If they imagine it will cost nothing it's unsustainable by definition and will eventually be switched off when the investor gives up funding the black hole.
In the early days of computer viruses when we used to find new ones every other month while providing a PC helpdesk and support service, I used to send samples encrypted against the public key provided by our then anti-virus vendor to said vendor so they could update their products and we could detect and remove them with less work on our part. Obviously I didn't want the malware I was sending our anti-virus vendor to infect anything else within the transmission channel so PGP encryption was a must.
"The problem is that those who hold the high value secrets might know this but their bosses have a timeline of the next prime ministers questions."
This is probably why those in the know seem unlikely to want to include politicians within their inner circle.
Perhaps the cryptographic equivalent of bank vault locks can be got through by the tiny elite likely to be in the know, but why would anyone bother most of the time ?
Those who hold such high value secrets (i.e. knowledge of algorithm weaknesses) where these exist will want to use them very infrequently and against only the highest value targets for fear of disclosure through honeypot techniques and well tuned intrusion detection systems. It's all basic spy craft - those with high value sources protect these as much as they can which means most who could usefully know are denied access, information gained from these sources has to be very carefully guarded and sanitised prior to declassification and use, and the more use that is made will increase the probability that this kind of source gets disclosed sooner rather than later.
Everything else will involve getting through the cardboard doors - the very many and various implementation weaknesses against which very few systems are likely to be properly protected. So I don't think I'll be rolling my own crypto or combining multiple forms of it or engaging in other obscurity exercises likely to fail when I'm not yet doing the thousand other things I'd have to do (including knowing all my chip technologies and binary device drivers and system software) to avoid the cardboard doors.
The targets I have to defend just aren't valuable enough for me to worry about algorithms no-one has yet discovered unsafe despite large prizes for effective attacks being on offer for those who try to discover these backdoors.