Re: The big dish
Hope that somewhere we still keep the know-how and old gear needed to produce such remarkably slow but reliable equipment as we did in the past ...
The majority of cybersecurity incidents are triggered by malware infections on end user devices. Endpoint security software is a major market, but it has essentially failed to reliably prevent such infections, and there is no hope whatsoever that this will change in the future.
The root cause: all of today's end user devices are software-controlled, and hence are threatened by malicious software. In addition, they accept code downloads via the network. Furthermore, typical end users just aren't security experts and can sometimes be tricked, and there are far more vulnerable end user devices than there are of the usually better protected central applications. The latter can be infected too (e.g. by ransomware), but this very often happens via a previously infected end user device.
The cure: for critical applications, we ought to switch to hardware-controlled end user devices. This is good and proven practice: before PCs were introduced, all our end user devices were hardware-controlled - and back then we had no malware problems whatsoever.
We would need to develop new hardware-controlled devices supporting today's needs, including graphics, multiple screen windows, multimedia, teleconferencing etc., which is entirely possible but requires a significant architecture change. Those new and secure end user devices would be cloud/edge-oriented and wouldn't contain an OS such as Windows or some Linux variant. The result is much better functional stability, reliability and ease of use.
... which soon led to misery, despite some quite promising points. And of course it did not help that the initial Itanium release had significant flaws, quickly earning it the deadly "Itanic" nickname.
The main problem was that Intel communicated poorly, so Itanium was widely misunderstood in the market. Many people dismissed it simply because Itanium did not reach the same shipment numbers as x86 chips. The same people would also not understand why trucks are more costly and somewhat slower than cars, yet make very good sense in many cases.
Many programmers did not fully understand the implications of Itanium's EPIC architecture and the need to design and code differently than on x86. Not willing to adapt their behaviours, they got less-than-ideal results and did put the blame on the chip - not on their own ignorance.
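As a hedged illustration of what "coding differently" meant in practice (a generic sketch, not taken from any Itanium manual): EPIC hands the job of finding instruction-level parallelism to the compiler, so a long serial dependency chain starves the chip, while code written with independent operations gives the compiler something to bundle.

```c
/* Sketch only: why dependency chains hurt an EPIC/VLIW-style chip.
 * The first loop is one long serial dependency; a compiler cannot bundle
 * its additions. The second uses four independent accumulators, exposing
 * explicit parallelism the compiler can pack into wide instruction groups. */
#include <stddef.h>

double sum_serial(const double *a, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a[i];                 /* each add waits for the previous one */
    return s;
}

double sum_parallel(const double *a, size_t n) {
    double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    size_t i = 0;
    for (; i + 4 <= n; i += 4) {   /* four independent chains per iteration */
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; i++)             /* leftover elements */
        s0 += a[i];
    return (s0 + s1) + (s2 + s3);
}
```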
In a wider context, we do not find much big iron in IT any more. To stay with the above analogy: trucks have become very rare, and we use way too many cars to get things moving. Not very efficient, and pretty expensive in total (even though the individual boxes are indeed cheap). Too many drivers are needed, and IT budgets have not shrunk.
Back to Itanium - yes, this relatively modern architecture is now dead, while we still use a 40+ year old architecture despite all its shortcomings. Is this a good idea ?
Just a small example - Itanium is not affected by Meltdown, Spectre and other speculative execution vulnerabilities.
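For context, the well-known Spectre variant 1 pattern looks roughly like this (the array names follow the example in the original Spectre paper): an out-of-order CPU may speculatively run the body with an out-of-bounds x before the bounds check resolves, leaving a cache footprint that leaks array1[x]. The point of the comment above is that Itanium's compiler-controlled design does not perform this kind of hidden hardware speculation.

```c
/* Classic Spectre v1 (bounds check bypass) gadget, as shown in the Spectre
 * paper. Not exploit code - just the vulnerable pattern. */
#include <stddef.h>

extern unsigned int array1_size;
extern unsigned char array1[];
extern unsigned char array2[];   /* large array used as a cache side channel */
unsigned char temp;

void victim_function(size_t x) {
    if (x < array1_size) {
        /* A speculative CPU may execute this before the check resolves,
         * touching a cache line in array2 that depends on array1[x]. */
        temp &= array2[array1[x] * 4096];
    }
}
```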
There were a number of Emotet attacks in Germany recently, some of them affecting public entities like universities, courts and city administrations. Emotet is pretty effective: it spies on mailboxes and composes fake mails that appear to originate from well-known contacts and relate to plausible subjects. Even cautious users can be tricked into opening attachments or clicking links ...
While not all details are known yet, it almost seems that some youngster(s) with limited hacker skills went into the darknet and bought lots of stolen passwords. Over several months, they skimmed that stuff and picked hundreds of German politicians and other well-known people such as actors and TV presenters, extracting private data from accounts belonging to them. While they targeted every politician not belonging to the far-right equivalent of UKIP in Britain, they treated other celebrities more selectively, with a clear preference for those who are socially engaged and support more liberal views.
This is seen as a big affair here in Germany; some politicians speak of "a big attack against democracy", and many non-experts blame the victims as digital idiots. The BSI cyber security agency is blamed for poor communication and for not constantly snooping on each and every Twitter account for potential data leaks. And of course, much stricter laws are demanded - as if those could be of any help ...
OT used to be somewhat clumsy, but pretty reliable and secure. Contemporary IT is much more agile, but not overly reliable and not at all secure. It would be great if some of OT's positive aspects flowed into IT, at least in critical application areas. But one would not expect this to happen when big IT barns drive the convergence of IT and OT. Bad apples usually spoil good apples in the same basket, and are very rarely healed by the good ones ... (;-))
… so I've just bought an old-fashioned MS Office package for my new notebook, rather than subscribing to Office 365. Works nicely, but for some reason the Outlook profile won't load. This article seems to explain why …
Yes, there are alternatives to MS Office, but unfortunately it's very hard to boil an ocean ...
"So- don't buy Miscrosoft products, then?"
It's not that simple. While the Microsoft stuff is indeed awfully complex, the competition isn't much (if any) better. No wonder these overly complex and hence ugly constructs aren't reliable. And it is not getting any better as long as we keep focusing on low price tags (with hidden but tremendously high costs) - rather than fighting for simplicity and reliability. It starts way down at the hardware level ...
These days many people focus on analytics, AI and big data because that's the current hype, but the beauty of (serious) IoT is that it usually delivers specific events and data that are best utilized when acted upon immediately. Actually, IoT should be seen more as an OLTP game than an analytics game.
@ trydk: That call for stricter IT security laws sounds good, but won't help very much. Such legislation might cause tiny startups to improve their IoT product's password protection from "hilarious" or "none" to "very basic", but that does not solve the much wider and much older fundamental problems in IT security.
We run IT infrastructure that is utterly vulnerable, offering myriads of holes that make nasty attacks like WannaCry possible. When taken to court, Microsoft will certainly be able to prove that they are doing the best they can and are not neglecting their duties. In the WannaCry example, they had published a related Windows patch two months before the malware outbreak.
Other cases are even more difficult; it will often be hard to determine who should be held responsible at all - as in the Heartbleed case, which was caused by a bug in open source code.
Who is to blame for the fact that practically all of our IT gear is based on the vulnerable Von Neumann computer architecture ? In contrast, the Harvard architecture features solid separation between data and code, thus providing much better protection. But can vendors be sued for not investing many billions into something entirely different that would be extremely hard to bring to market ?
Legislation can help to create awareness, as shown in the GDPR case (it will take some time until the positive effects prevail over the initial difficulties). However, politicians and lawyers cannot fix fundamental shortcomings in technology.
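As a minimal sketch of what the Von Neumann weakness looks like in practice (x86-64 Linux assumed, purely illustrative): because code and data share one memory, a few bytes written as data can be executed as code the moment the memory is marked executable - which is the door malware walks through, and exactly what a strict Harvard separation closes.

```c
/* Illustrative only: on a Von Neumann machine, data can become code.
 * Assumes x86-64 Linux; compile with gcc. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    /* "Data": the machine code bytes for  mov eax, 42 ; ret  */
    unsigned char payload[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00, 0xc3 };

    /* Ask the OS for a page that is both writable and executable. */
    void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) return 1;

    memcpy(page, payload, sizeof payload);      /* write it as data ... */
    int (*func)(void) = (int (*)(void))page;    /* ... then call it as code */
    printf("data executed as code returned %d\n", func());
    return 0;
}
```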
One of the important things that people need to learn is that you cannot fix fundamental technology problems just by issuing new laws, rules, certificates and other boring paperwork.
So why should one ask politicians and civil servants, who typically have rather limited insight into the problems, to produce even more laws and rules ? At best, that would lead to a false impression of improved security, and also to more lengthy, pointless and expensive lawsuits.
Help can only come from experts and a shift in paradigm - leaving behind that currently prevailing messy IT infrastructure which is pretty unreliable and vulnerable beyond repair, and coming up with something new that has been designed for reliability and security from day one.
Another problem with today's clouds is that they are designed, built and operated for utmost cost optimization, not for high reliability and running critical applications. More reliable clouds might be possible, but it is not very likely that they could become economically successful as a standalone offering - the beancounters would shy away from the price premium. Another obstacle is that cloud providers cannot know enough about their customers' individual businesses to provide them with the scope of reliability they need.
So cloud providers promote various options that *theoretically* allow their customers to achieve the reliability needed for critical applications themselves. In real life that approach doesn't work particularly well, as customers are not deep enough into the complex art of making applications really failsafe. It would also require having at least some control over the infrastructure of the not-so-reliable clouds available today. More common than hard failures (like the defective fibre cable in the Gatwick example) are temporary overload situations causing application timeouts and thus making services unavailable.
Back in the 70's, some company invented fault tolerant (i.e. failsafe) computers which by design had no single point of failure. They even extended that fault tolerance into their system software - if one CPU tripped over some sporadic software bug it was immediately halted, and a parallel CPU took over. Applications continued flawlessly, without loss of data or any impact on the end users. By the way, that product line still exists and you can buy such fault tolerant computers today.
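A toy sketch of the takeover idea follows (not the actual product's mechanism, just the general primary/backup heartbeat pattern, assumed here in plain C with pthreads): the backup watches a heartbeat the primary keeps refreshing, and takes over the moment the heartbeat goes stale.

```c
/* Toy primary/backup takeover sketch - illustrative only, not how any real
 * fault tolerant system is implemented. Compile with: gcc -pthread */
#include <stdio.h>
#include <stdatomic.h>
#include <pthread.h>
#include <time.h>
#include <unistd.h>

static atomic_long heartbeat;      /* last time the primary checked in */

static void *primary(void *arg) {
    (void)arg;
    for (int i = 0; i < 5; i++) {  /* do some "work", then stop (simulated failure) */
        atomic_store(&heartbeat, (long)time(NULL));
        printf("primary: processing transaction %d\n", i);
        sleep(1);
    }
    return NULL;                   /* primary goes silent */
}

static void *backup(void *arg) {
    (void)arg;
    for (;;) {
        sleep(1);
        long last = atomic_load(&heartbeat);
        if ((long)time(NULL) - last > 2) {   /* heartbeat stale: take over */
            printf("backup: primary silent, taking over workload\n");
            return NULL;
        }
    }
}

int main(void) {
    pthread_t p, b;
    atomic_store(&heartbeat, (long)time(NULL));
    pthread_create(&p, NULL, primary, NULL);
    pthread_create(&b, NULL, backup, NULL);
    pthread_join(p, NULL);
    pthread_join(b, NULL);
    return 0;
}
```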
Back at that time, large airports ran their critical operations locally and did link up their devices to their own local computers, not to someone else's cloud. Nor did they outsource any critical IT task that they could run locally by their own staff. Those were the days of reliability ...
... as by design it contains a mechanical cache that would help greatly at least during short outages of the flight information system. And there is no dependency on external clouds, no chance to cut fibre cables leading there, and no impact by all those many other cloudaches so often interfering in today's wonderful marvellous Internet world ...
Maybe we should distinguish between consumer-oriented gadgets (typically connected to the Internet just so consumers can brag about their ability to toy around with said gadgets via smartphone apps) and serious technology used in production, logistics, transportation and other areas. The Germans have coined the term "Industry 4.0" for such technology.
Here, typically some industrial control system (ICS) would be involved, and the machinery controlled by this would not necessarily need to be connected to the Internet. For instance, you could order some furniture in a webshop specifying the exact dimensions you want, but the related production gear might run totally separated behind an air-gap ...
It is 2018, yet much of the ATM/POS authorisation stuff still runs on old-fashioned big iron - which has a big advantage: those systems are usually very stable, reliable and predictable, unlike the clumsy PC-derived technology found at the heart of contemporary systems everywhere. A good portion of the worldwide ATM/POS workload even runs on fault tolerant systems originally designed decades ago to be failsafe, and with an excellent track record in that regard ever since.
The recent problems with MasterCard and VISA were certainly real and very annoying, but do we really know whether they were actually caused by the backend systems ? It is somewhat more likely that the bottlenecks / fault areas lie in the complex networking (typically involving several parties) between the card readers and the authorisation systems. We only have to look at the performance of other services delivered via the same kind of networks - they are often impeded too. Maybe we need premium networks to support critical services, as opposed to the usual "social networking" stuff.
Okay, many grown-up people also like to toy around with technology; a few decades ago model trains were quite popular. But anyone toying around with "smart home" stuff somewhat reasonably would find ways to set up some (maybe even random) lighting scheme without having to connect to the Internet. That would perfectly cover the first two "use cases". And the third case (parents coming to your home before you arrive) implies that you believe your parents are not able to find and operate the light switch on the wall. Assuming that you were clever enough to install standard wall switches too (if only as a fallback option in case your smartphone's battery is depleted), this would raise some questions ...
If your business is running US-centric, that's perfectly fine. However, you might be surprised when looking at market sizes. The EU market is significantly larger than the US market.
Still wanting to avoid the extra effort ? Again, that's perfectly fine. But don't complain afterwards ...
... which means that decent people should also not accept free lunches from strangers unless they are fully willing to align with the profit-oriented goals of those strangers. In the last century, there was an utterly stupid misconception that everything available via the Internet has to be available for free. That was of course complete nonsense, but to a surprisingly large degree such crank ideas still persist today.
GDPR is a good attempt at killing such stupid myths. Everyone who contributes work should be entitled to financial rewards for it, and it is up to the market to establish how much value (if any) such work is worth. I think the ISPs should increase their tariffs to also cover the cost of search engines, Wikipedia, social networks etc., and in exchange, all data snooping done by the likes of Google, Cambridge Analytica and others should be banned unless users expressly declare that they are willing to accept any abuse done with their personal data. I for one am deeply disgusted when a result of my recent searches shows up on a website I'm opening in a new session. I might be opening that new session together with somebody else to show that person something on the Internet, and it is part of my computer privacy that this other person does not get indications about any of my previous Internet activities.
I am willing to pay a higher ISP charge to maintain computer privacy, and will applaud GDPR if that set of regulations drives Google and the other data slurpers out of the most profitable parts of their business. Just a reminder - these are civil rights and part of our constitution, and are certainly more important than the right to buy and carry deadly weapons killing tens of thousands of innocent victims every year ...
From today's article: "Shujun Li, professor of cybersecurity research at the University of Kent, said the main issue was not the initial failure – modern IT systems are too complicated and dynamic to be totally bug-free, he said – but because of the bank’s poor risk management."
Many people would agree with the above opinion, but only few would be willing to draw the further conclusions that follow: modern IT systems (i.e. the currently prevailing "good enough" stuff) are far too complex to provide deterministic behaviour and predictable results by themselves. Hence, a lot of additional and pretty difficult work needs to be done to raise service levels beyond the threshold of "good enough". That means reducing the number and severity of complaints far enough that an overall impression of somewhat acceptable system behaviour can be achieved - which however isn't exactly the level of reliability one should expect from critical IT systems, and it is pretty costly too. How about another IT architecture that delivers more predictable results ? That's not rocket science; it has been done in the past and it is done in other areas like industrial IT and OT.
Maybe sometimes not having access to your account or somebody else having access to your account isn't seen as a major problem in today's banking industry. In industrial IT, the equivalent would be frequent production outages and small explosions all over the plant every once in a while ... (;-))
From yesterday's article: "TSB migrated from former parent Lloyds Banking Group's systems to shiny new ones" ...
Moving from centralized and highly deterministic systems to "shiny new" systems that (from the outside) may even look centralized too, but are in fact a highly complex conglomerate of many thousands of "PCs" - each doing more or less its own thing, yet depending on the outcome of many other "PCs" to complete its tasks - isn't easy. It doesn't help much that these "PCs" are no longer small separate physical machines, as in the early days of distributed computing, but myriads of virtual machines running on some kind of x86 infrastructure that comes with bombastic marketing wording but behaves like a bunch of PCs anyway. Predictability suffers; such systems are certainly "good enough" to handle enormous workloads for less critical applications like Facebook and Twitter, but might be less than ideal for really critical stuff like banking operations.
This is not pining for the good old days of legacy systems; as has already been pointed out, old systems eventually become a real pain when too much new functionality gets added. At some point, it is better to start with a clean slate - but then on a highly deterministic system providing better reliability, predictability and security than the "good enough" gear that has become the de facto default for each and every new application these days. Unfortunately, most of the younger IT folks do not even realize that alternatives exist.
The prevailing hardware comes relatively cheap, but the business results tend to be mixed, as reliability, efficiency and security have "room for improvement" and the cost to run and support those very complex systems becomes too high. Many user organisations now escape to the public cloud; even the military are considering such moves. However, it is unclear how cloud providers, with less knowledge of the business requirements and less incentive to provide superior service levels for critical applications, will be able to serve their customers better.
... and this was of course long before the big split of HP into HPE and HP Inc (which separated the PC and printer business from the "real" IT stuff). Back then, HP was the world's largest IT vendor. They had everything, from end user devices and networking gear through servers and storage to services, consulting and software. At that time, they had the potential to set a new standard that could have dominated the market, just as SNA dominated in the 70's and 80's. That could of course only have happened if the new standard had provided significant benefits to IT users, such as higher reliability or security - things that are badly needed in today's IT.
However, HP did not have the kind of leadership and imagination needed to envision such bold ideas. Like everybody else, they prayed to Excel and the existing standards, essentially competing on price - a fight they simply could not win. When simply cutting some fat did not help any more, they started cutting off their arms and legs ...
Depends on what you would call a "reliable cloud" ...
Many people would call today's clouds "good enough" and hence consider them reliable enough for their purposes. Okay, so they are willing to live with less-than-perfect reliability, occasional outages and frequent performance degradations.
But then there are others with pretty critical applications that are not compatible with "good enough" clouds. They would need another infrastructure, and it would cost more money to build it. Now the bean counters come into play ...
Moore's law is indeed about doubling the number of transistors, but in former times that also had the effect of doubling speed. This is no longer the case - rather, we are now doubling theoretical throughput (useful only if we had software that could make use of ever growing parallelism), and we are also doubling complexity, unreliability and vulnerability. We are still in electronics, and electrons fly around atoms - so the atom size is a hard limiting factor. Another one is heat: the more layers we put on top of each other, the harder it gets to dissipate heat from the middle. We'd probably have to turn down clock speeds to avoid meltdown. Furthermore, making the chip structures even more tiny and brittle, and thus even more susceptible to faults, will force us to invest more transistors in error correction and fault repair circuitry. At some point adding even more transistors becomes ineffective, as we have to use them for unproductive purposes and also get overwhelmed by complexity. All those effects will ultimately stop the trend we know as Moore's law.
Is this a tragedy? Probably not. We can make things a lot simpler, and thus work more productively and more reliably. We can put more functions into hardware, e.g. via ASICs. And maybe we could even educate software wizards to become more humble and to concentrate more on user needs rather than on the latest software fashions and their own ego.
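To put a rough number on "useful only if we had software that could make use of ever growing parallelism", Amdahl's law is the usual back-of-the-envelope tool (a generic illustration, not tied to any particular chip): with a fraction p of a program parallelisable, n cores give a speedup of 1 / ((1 - p) + p/n), which flattens out quickly.

```c
/* Back-of-the-envelope Amdahl's law: speedup = 1 / ((1-p) + p/n).
 * Shows why doubling transistors (cores) no longer doubles speed unless
 * the software is almost perfectly parallel. */
#include <stdio.h>

static double amdahl(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void) {
    double fractions[] = { 0.50, 0.90, 0.99 };   /* parallelisable share of the code */
    int cores[] = { 2, 8, 64, 1024 };

    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 4; j++)
            printf("p=%.2f, %4d cores -> speedup %6.1fx\n",
                   fractions[i], cores[j], amdahl(fractions[i], cores[j]));
        printf("\n");
    }
    return 0;
}
```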
One would assume that by scraping gazillions of postings from social networks the US military wanted to add real value for their own purposes. Whether such activity is good or legal is yet another discussion.
But they certainly did not intend to voluntarily serve that added value to anyone else with access to the Internet (in particular, they probably did not want to hand the results of their work to potential enemies). Even from that very simplistic perspective, those responsible for handling the collected data did a horrible job that was highly counterproductive.
But they also created a very dangerous asset that was potentially available to everyone. Some postings here and there from some people may be relatively harmless, but amassing enormous amounts of data about nearly everyone can be a very dangerous weapon in the hands of other nations or groups with bad intentions. Effectively doing that kind of dirty work for them is not something to be taken lightly. Some nations, including the US and the UK, haven't yet experienced a dictatorship ruling their country - other nations have had that ugly experience, and hence value such old-fashioned notions as freedom and democracy.
Earlier this year I filed a patent application with the EPO, and have now received their initial search results. My application is seen as valid, so formally I have nothing to complain about here. However, the brief comments given provide just one reference to another document - and that one has very little to do with the subject of my invention. Seems that a poor soul under heavy pressure to close as many open cases as quickly as possible just did exactly that ...
Maybe Microsoft's bit barns aren't that bad, and network congestion is contributing to the poor service levels frequently observed. Can anybody explain to me why dull entertainment ought to be streamed via the Internet when there is ample bandwidth available via traditional TV channels (cable/satellite) ? And there is even the offline alternative of using CDs and DVDs - nice bandwidth achieved by walking to the next store, and good for health as well ...
According to the outage monitoring website I'm using, there were high numbers of problem reports for various Internet-based services on Sept. 18th in Western Europe, in particular from the south of the UK, northern France, Belgium and the Netherlands, and the western parts of Germany.
The poor souls paid to try to keep the Microsoft cloud working admitted yesterday that they have trouble with load balancing. While this can certainly bring their cloud into serious trouble, and the resulting snafus can be the reason for all kinds of subsequent hiccups like emails disappearing into Nirvana, the underlying root cause - at least potentially - might still be too much load coming in and too much capacity eaten up by overloaded network error recovery protocols. Again, other service providers were also under pressure yesterday - maybe they had somewhat less trouble because they provisioned their clouds just a bit better. There were more unhappy users than just the victims of Microsoft ...
Everybody seems to assume that clouds and the Internet by definition have endless resources. Unfortunately this is not true, you still need a lot of bandwidth and compute resources to keep up that miracle. It may make sense to use resources more wisely - wasting those for streaming dull content to dull people isn't absolutely necessary. In earlier days, one could rent such content on DVD in a shop at the next corner ...
To some extent, Microsoft needs to be criticised for running notoriously inefficient Microsoft system and application software in their cloud, and also for underprovisioning even harder than the other big cloud providers.
On the other hand, it seems that other providers also had their share of trouble these days. Here in Germany, besides Outlook.com, Deutsche Telekom, Vodafone, Unitymedia, O2 and 1&1 also had problems across the board with their IP-based services - which now also means telephony in many cases. Are we experiencing general Internet capacity shortages, perhaps caused by users wasting too much bandwidth watching silly stuff via Netflix and other streaming services, or by sending stupid cat videos all the time ?
That's a pretty old piece of software, lurking around for about 30 years and never certified by anybody. Many of the local public servants at county level who are supposed to have the voting offices under their control just didn't care about replacing it; they simply kept it running time after time. There was no central rule from the BSI (Germany's IT security agency) or any other top level government organization - probably because that could have been seen as interfering with local governments' freedom to do their own thing. By the way, a good number of local governments did upgrade to something more recent and supposedly better, but Wahl-PC is still the most used software for uploading voting results in Germany.
I'm certainly not a fan of Carly, who was posing as a CEO pretty much the way another actor is now posing as a US president. Killing off the "HP way" (a quite motivating management style) was certainly not a good idea either. However, I need to give her credit for saving HP at that time by pushing the Compaq acquisition through. Old HP had lost its drive, and was about to shrink down to just a printer company.
If you look at the sad remains of the HPE portfolio now, it's pretty much that ancient ProLiant server line acquired from Compaq that keeps the company alive today.
That particular Tesla vs. truck accident was caused by inadequate sensor technology. You just can't rely on optical sensors alone (just like the human eye, they can be blinded under certain conditions), so you need radar and/or laser sensors in addition. A basic technical design flaw, not just a software bug (Tesla has since added radar sensors).
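A deliberately simplified sketch of that point about not trusting one sensor modality (hypothetical thresholds and names, nothing to do with Tesla's actual software): only act on an obstacle when an independent modality such as radar or lidar confirms, or contradicts, what the camera thinks it sees.

```c
/* Toy sensor-fusion rule - illustrative only, with made-up confidence values.
 * The idea: a camera alone can be blinded (e.g. bright sky behind a white
 * truck), so require confirmation from a second, independent modality. */
#include <stdbool.h>
#include <stdio.h>

struct detection {
    bool   camera_sees_obstacle;
    double camera_confidence;   /* 0.0 .. 1.0 */
    bool   radar_sees_obstacle;
    bool   lidar_sees_obstacle;
};

/* Brake if the camera is backed up by another modality, if radar/lidar
 * report something the camera misses, or if the camera alone is very sure. */
static bool should_brake(const struct detection *d) {
    bool other = d->radar_sees_obstacle || d->lidar_sees_obstacle;
    if (other) return true;                                  /* camera may be blinded */
    return d->camera_sees_obstacle && d->camera_confidence > 0.95;
}

int main(void) {
    /* Blinded-camera case: camera sees nothing, radar does. */
    struct detection blinded = { false, 0.10, true, false };
    printf("brake: %s\n", should_brake(&blinded) ? "yes" : "no");
    return 0;
}
```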
But even then, you can't count on electronics and software alone being perfect for driving cars. That environment is extremely complex. There will always be cases where technology makes stupid misinterpretations and hence terrible mistakes which the average human driver would not make.
This is not to say that human drivers don't make mistakes; they probably make more. But they are better at detecting and correcting them in real time when driving actively. However, being disengaged from the driving process and only being alerted when the software throws in the towel is a perfect recipe for trouble - human beings are not good at task switching. It takes too long to switch from watching a Harry Potter movie to grasping basic driving-related things (like: where am I ? what is going on ? etc.). Authorities would probably have to limit the driving speed to 5 mph to achieve a reasonable level of safety.
Wild marketing claims that autonomous driving will be far safer than conventional driving are just that - wild marketing claims. The CEO of a very large European insurance company recently stated that the number of accidents will go up, not down, when autonomous driving is introduced. However, he asserted that modern driver assistance systems clearly do improve road safety. Obviously, the driver still being involved makes the difference.
Insurance companies by their very nature are experts in managing risks ...
Old mantra - it is very hard and expensive to build secure systems on insecure platforms.
However, the bean counters demand cheap platforms, and neither know nor care what this means for IT security. When told, they ask their techies to just retrofit security onto the insecure implementation as an add-on, which doesn't work very well. But they will always find some consultants/salesmen who claim to have a suitable snake-oil product or service ...
The big problem is the ever-growing investment in Windows-based software. Who dares to ask management to make this investment obsolete - and to spend quite a lot of time and money to build something new, with IT security in mind from day one ?