I'm beginning to think that intelligent life is a temporary phenomenon and self destructs beyond a certain level
This is actually one of the theories floated to explain the lack of a signal: intelligent civilisations only last a very short time.
@Glen Turner 666
You are spot on.
With regard to point (2) many organisations have formal procedures for vetting people allowed into the 'inner circle'. Whilst these are fallible they at least raise the bar to some extent. I have no idea if such processes are applied in critical open source development environments.
The kernel is only one area where this problem exists and is probably not the best option for exploitation. The sweet spot is probably some component that is widely used and is not a standard component of major distributions.
If you use something direct from the (open) source then you are responsible for the due diligence.
"First, they are very few, highly trusted individuals. Second, the results of their activity is available for all to see after the fact."
A bit like Guy Burgess, Donald Maclean and co then?
We know that critical bugs can hide in plain view in open source software for years. I would be surprised if this attack vector has not been considered by actors who are prepared to take their time.
The thing is that HTML email clients provide really good support for tracking technology.
Now that's got to be a good thing!!!
Well, actually no, and maybe the folk who are aware of and concerned by the tracking implications are the sort of folk you want in the kernel.
Have an up-vote.
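To make the tracking point concrete, here is a minimal sketch of how an HTML email "tracking pixel" works. The function name, URL, and query parameter are purely illustrative, not taken from any real product:

```python
def tracking_pixel(base_url: str, recipient_id: str) -> str:
    """Return an invisible 1x1 image tag whose URL identifies the recipient.

    When an HTML mail client renders the message and fetches the image,
    the sender's server logs the request: who opened the mail, when,
    and from which IP address.
    """
    return (f'<img src="{base_url}/open?r={recipient_id}" '
            f'width="1" height="1" alt="">')
```

Blocking remote image loading, or reading mail as plain text, defeats this, which is presumably why kernel folk prefer the latter.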
Different problem. Not a good analogy on my part.
You need the ability to have some information in the systems treated as classified.
In addition there may be unusual requirements around work hours which will not be found in 'normal' work environments.
They are paramilitary organizations which brings a bit of baggage with it.
Law enforcement systems can have unique requirements where HR and Payroll (and maybe other systems) are concerned. The inclusion of participants that are not law enforcement bodies is very strange and may be part of the problem here.
For example...someone is on the payroll but they will have an uncertain future if it is known they are on the payroll. But they still have to pay taxes on the income....
Making mug shots disappear is actually a real requirement.
@Olius, this may be the way things go....but the only thing we can be sure of is that the resulting API will be subject to abuse...so grid stability will be even more compromised.
Some areas may also have their own plans for excess power...pumping a bit of shit up hill never hurts .. so this will have to be factored into the equation.
Google is always self-centred.
Unless Google is using its own generating capacity I would expect this would require some regulation. Having all your green energy sucked out of the grid without notice by Google would cause quite a few problems. An exaggeration of course but managing the electricity grid can be very complex and I don't think sudden load shifts are welcome.
In the late 1990s DECnet inter-networks were as big as IP inter-networks. I had DECnet on Macs as well as microVAXes and Ultrix workstations, it was widespread and pretty easy to use.
If DEC had not crashed and burned it could have been a very different world.
Once upon a time, long, long ago - well, in the late 1990s anyway - when eCommerce was becoming a thing, ".com" certificates were only issued after verifying that the applicant was a genuine legal entity. You had to produce a lot of paperwork and it was not a quick process.
Roll forward to the mid-2000s and all that has gone. Getting a ".com" is a trivial exercise. The certificate authorities responded by running road shows for "Extended Validation Certificates" that were only issued after verifying that the applicant was a genuine legal entity...and would cost more than the original ".com" that you had jumped through hoops to get. Oh...and they had this green stuff in the "chrome" in the browser that could not be manipulated.
Roll forward...and it's all shit again. And it will always be shit. The technology works, the process doesn't.
In my experience (over more than a decade and a half in finance industry) it was records management failures that gave rise to documentation voids. It was documented, management insisted on it, subsequent teams kept it up to date. It reaches a point where no further change is required. Then, over time, just like the Saturn V, the documentation gets lost. This is generally tied to internal structural reorganizations.
"If the docs don't exist, then we just have to learn the hard way." If you think having the working COBOL source as a starting point is 'the hard way' you have a bit to learn yet. If the source code has also been lost you are about to learn why those who came before you resisted the urge to change this code every other day in order to 'surprise and delight' the customer.
Telling the difference between 'goodies' and 'baddies' when dealing with encrypted traffic is nothing new. The same problem has existed with physical messaging forever, that's why the plain brown paper envelope was invented. More recently we have "burner" phones. Traffic analysis can potentially fingerprint software but sticking with widely used applications provides the anonymous envelope.
Traditional methods, such as human intelligence sources, still work but scaling them to deal with the Internet is the unsolved problem.
In the days of COCOM, and in fact early Wassenaar, encryption was recognised as dual-use and export controlled. Banning strong e2e is just 'back to the future' and, having been there already, we know how that works out. The algorithms leak, new algorithms are created, and those who are outside of the immediate reach of the authorities roll their own. And, of course, you can always resort to a one time pad. Difficult to decrypt communications is not easy to ban unless you ban encrypted communication completely...but then you have things like steganography.
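For anyone unfamiliar with the one-time pad mentioned above, it really is this simple, which is why it cannot be banned in any practical sense. A minimal sketch (assuming the pad is truly random, kept secret, and never reused):

```python
def otp(data: bytes, pad: bytes) -> bytes:
    """One-time pad: XOR the message with a random pad at least as long
    as the message. XOR is its own inverse, so the same function both
    encrypts and decrypts."""
    if len(pad) < len(data):
        raise ValueError("pad must be at least as long as the message")
    return bytes(d ^ p for d, p in zip(data, pad))
```

With a genuinely random pad used exactly once, the ciphertext is information-theoretically secure; the whole difficulty lies in generating and distributing the pads, not in the algorithm.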
"...because Microsoft uses DCE (Distributed Computing Environment) as developed by the Open Software Foundation in the early 1990s"
I think that should read:
"...because Microsoft butchered DCE (Distributed Computing Environment) as developed by the Open Software Foundation in the early 1990s"
From memory, they 'tweaked' certain 'standards' and built a wall between the NT and DCE worlds. Then, seven years later, they realised that this had not been such a great idea and sucked up to Kerberos. But it had badly damaged DCE by then.
FORTRAN was quite happy with multiple RETURN statements as well as multiple ENTRY statements. In a memory constrained world the RETURN would save you one or two bytes over a GOTO to a single RETURN statement. When running out of code space meant resorting to manually loaded overlays this was a serious consideration.
HP calculators and plotters...memories. The desktop calculators in the early 1970s were programmable. The programs occupied register space starting with the high registers and coming down. The program could reference those registers, which led to the interesting, and immediately grasped, option of modifying itself. Take the square root of R15 and see what happens next. Now hook a plotter up.... What we learnt from that was that those early plotters were tough. :)
They are all doing it and they know they are all doing it. The world then gets divided into two groups - those who can manufacture stuff and those who have to buy stuff because they have lost the ability to manufacture. Those who can manufacture are obviously in a better position than those who have lost this ability. That's globalization for you.
You should probably look at this: https://support.google.com/pixelphone/answer/4457705?hl=en
Nexus support died in November, though I think they did ship a December update; it has been quiet since then. Pixel is now the supported device, but only for three years from the release of the model.
All other manufacturers have their own policies that are independent of Google's.
I think the list you looked at was misleading.
I would certainly agree that it should be much longer than three years, but the reality is that the majority of Android phones out there are unpatched at the operating system level because the manufacturers and telcos don't bother with the updates.
Nothing incredibly bad has happened because of this. The Stagefright vulnerability was a non-event. The action is in the Apps.
That could all change overnight of course but until that happens there will be no pressure to change things and the 3 years of updates matches the replacement cycle of the majority of phone users.
Very good points. Maybe there is an opportunity for a bit of 'innovation' and 'disruption' here in the form of service that does the heavy lifting for the consumer, a 'small claims broker'. Make it 'convenient', add an 'App', it ticks all the boxes. Create the tsunami of claims.
That is true. But software companies that issue patches for critical vulnerabilities month after month are still in business.
So then we get into another interesting discussion, which I think is part of some other discussions on this piece: are the bugs actually hurting? Or are the users just conditioned to the inconvenience? Or is action against the vendor not a practical option for the average user?
I've certainly experienced issues that have required days of work to recover from. Taking action against the vendor would have made that effort look insignificant. My financial capacity to take any action would also be questionable....they have very deep pockets if they are big and they can just go bankrupt if they are small.
So we come back to constructing an appropriate and proportional legislative framework to take on the problem.
The EULA, or in olden times Terms and Conditions, have always stated that the software was not warranted to work as described on the box. When I was writing commercial software using refurbished, but still hideously expensive, microVAXes in the late 1980s we copied the DEC Terms and Conditions almost word for word. No guarantee that the software was going to work. If it didn’t work then we were going to be in deep shit so, with a few relatively minor exceptions, it worked as advertised.
The economics have changed now. Minimal development platforms do not represent 25% of the value of your house. Failure costs you nothing as a developer. And the EULAs still have that big out.
Changing that will require legislation. Safety and Privacy are possible avenues that can be used to achieve this. But consumer apathy will make this an uphill struggle, convenience and shiny will win every time.
I had the same thought. In terms of percentage I think you would probably need to use a population figure excluding the very young and the very old so maybe a third is not unrealistic.
So where do the 1.8 Billion replaced devices go? How much energy and pollution is involved in their disposal?
The planet is right, it needs to get rid of us, quickly.
Smart heaters? Smart ovens?
Botnets created from rooted routers to take down critical infrastructure.
My point is that we actually have reached the point where insecure devices can cause harm and destruction and we need to start thinking about that because there are billions of them out there.
Now, electrical equipment that is plugged into the electrical grid is expected to be safe. There are regulations in place that attempt to protect consumers, and the grid, from unsafe equipment. The electricity grid has safeguards built into it to minimise the impact of unsafe equipment. I don't think anybody thinks this is a bad thing.
So why such strong objections to equipment plugging into the RF grid, which, I think, lacks the kind of safeguards that apply to the electricity grid?
We know that all the gadgets being plugged in are completely fucked. They are full of bugs and are actually dangerous when you consider how they can be exploited. So this legislation is basically saying you need to be compliant before you get on the grid...just like electrical equipment, just like cars before they get on the road, just like aircraft before they carry passengers,..... I don't hear objections in these cases.
Is this really so bad?
Cue down votes.
The last link given in the Reg piece goes to a neat piece of research that used adversarial examples thrown at an image recognition application to conclude that it triggered on texture rather than shape.
This highlights the real problem...they do stuff but we don't know how and as a result have no idea how they will behave if the input goes off-piste. So, dressing a wolf in sheep's clothing actually works with the current technology.
Probably similar timeframe - VAX 11/780s in late 1970s. Kill switch next to a phone on the wall... Engineer makes call on phone, leans against wall,.... lights out!
Same installation. Telecomms 'electricians' removing a cabinet. Not sure if power is off. Large screwdriver between active and earth shows, momentarily, that power WAS on, lights out once again.
Some air-conditioning fun from same location if On-Call is interested in that sort of thing.
As far as I understand it they segment the network so that if Internet access is required for work purposes then you (the employee) have Internet access. If Internet access is not required for work purposes then no access. This includes email. Devices with Internet access do not have access to the protected segment.
There are many roles that do not require Internet access in an organisation. Technical roles are often considered an exception but there are ways that this can be minimised.
Indeed, and the western world should probably follow Singapore in removing Internet access from most public service accounts. They committed to this in mid 2016. See this commentary related to this incident:
Interestingly there is an opinion from Marsh LLC, part of Marsh & McLennan, who are in the same business and about the same size as Zurich, that it was NOT Cyber War.
I would have thought that contributory negligence - failure to patch - would have been the tack used by the insurance companies.
Actually it shows that people do not think the investment companies make in developing ground-breaking technologies, and turning them into a successful market, should be returned to those companies along with a profit. That profit is what enables them to pay researchers, continue the research, and so keep employing researchers.