A human is checking?
Yeah, let's be honest about how these things work in the real world. The human checking process will consist of a quick glance followed by "Mmmhmmm, ok, push button."
238 publicly visible posts • joined 16 Jul 2013
As organizations - governments, corporations, and even nonprofits - grow and age, they tend to become corrupt, both in purpose and in ethics. This happens to organizations as a whole, but it tends to be exceptionally pronounced at the departmental level. Slowing / ameliorating this is a function of leadership that we have not yet been able to adequately define, let alone duplicate. The academic managerialist push in the mid-20th century to make organizations as large as possible has horribly exacerbated this problem, and the latest push of notions like DEI and ESG as replacements for the lack of genuine purpose in huge organizations is transforming them from soulless abominations into absolute hellscapes.
Boeing is a very old and very decrepit corporation that has become deeply corrupt. Instead of being kept alive with Darth Vader-levels of government life support, it should be allowed to wither, die, and be replaced by something new.
SpaceX is something new. They're doing amazing stuff right now. But one day SpaceX, too, will become old. I'm willing to enjoy what they're doing in the present.
This is all stuff that's straightforward to address.
End-users cannot install software, period, ever. End-users do not get admin access to their devices, period, ever.
Nothing on the internal network connects to the Internet except through a filtered proxy. Don't allow end-users to download executable, library, or script files unless they're devs. Make exceptions painful and whitelist-only.
Nothing on the admin network connects to the Internet, except through a very strict, whitelist-only (site and content-type) proxy. No exceptions, period, ever.
Proxies are DMZd and isolated with the expectation that they will be breached. This can seldom be made perfect, but make it as tight as possible.
Company-owned remote clients run in VPN always-on mode. No split tunnels - VPN clients are filtered just like internal clients. They can watch Netflix on their own !#$% device.
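The default-deny logic those proxy rules boil down to is simple; here's a minimal sketch in Python (the domain list and the subdomain-matching policy are illustrative assumptions, not any particular proxy's implementation):

```python
# Default-deny host filtering of the kind a whitelist-only proxy enforces:
# a request is allowed only if its host is an approved domain or a
# subdomain of one. Everything else is refused.

ALLOWED_DOMAINS = {"updates.example.com", "vendor-portal.example.net"}  # placeholder list

def is_allowed(host: str) -> bool:
    """Return True only for whitelisted hosts; deny by default."""
    host = host.lower().rstrip(".")
    return any(
        host == domain or host.endswith("." + domain)
        for domain in ALLOWED_DOMAINS
    )
```

Note the suffix check includes the leading dot, so a lookalike host like "updates.example.com.evil.com" doesn't slip through.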
I've implemented every one of these rules (and more - I've been doing zero-trust since way before it was cool) and made them stick. In 25 years of IT management, my networks have NEVER been hacked. We had one outbreak of ransomware that was automatically isolated to two hosts, with no data exfiltrated and only 40 end-user hours and six IT support hours of productivity lost (total) at one site. The end-user hours were lost because two network shares were encrypted; we just restored the hourly backups and went about our business. That's it.
Security is possible, but it takes extreme thoroughness and discipline. Beyond that, I was able to do it because I had support from the C-Suite and the Board of Directors - learning to communicate with and "sell" policies to these people is just as critical as the technology part.
Some imbeciles at IBM Red Hat believed they could squeeze CentOS users into becoming RHEL customers by essentially crippling CentOS. The problem is that few of these organizations are going to pay for RHEL. I've used both products, and I generally only specify RHEL if formal support is needed for compliance reasons or if somebody running the server is going to need that level of hand-holding (we try to avoid this, but it's not always possible). But for the most part, our in-house troubleshooting abilities exceed Red Hat's support team's, and it's difficult to rationalize paying for support contracts that we never use.
This has always been a significant aspect of the open source business model. Not everyone is going to pay, especially for Linux where it's not like it's an actual product of Red Hat. They do some development, they bundle things into a nice distro, and they get some paid support business. I have no problem with other business models, including freemium ones like FreePBX where I've bought a lot of add-ons and have no problem doing so. I've bought quite a bit of OpenBSD merch to support their projects like OpenSSH, and I could go on and on with examples (back to the days of registering shareware). The point isn't to talk myself up, but to explain why IBM / Red Hat have lost me as a customer, forever. People like me are their "marginal customers": the ones they risk gaining or losing. We made substantial investments based on Red Hat's very public commitments of support for CentOS. They're free to change their minds, but in doing so they harmed our investments. Therefore, they've moved from "maybe get paid" to "never get paid."
Based on comments from when this happened, I don't think I'm alone in my thinking.
Because Apple Card has very good terms and conditions, competitive interest rates and currency exchange rates, a great cash back program, no nuisance fees, and separate card numbers for "card present" and online purchases (and you can change the online card number via your app). It's not the absolute best card out there, but it's better than most.
There's a solution for this: ioSafe. They're basically Synology NAS devices with the hard drives packed in a fire- and water-resistant enclosure (see specs for details; they're designed to survive a typical home or office structure fire but nothing crazy). The downside is that they're pricey. One might note that their 5-bay expansion chassis are basically the same as a Synology x517 5-bay expansion chassis and use them with a cheaper Synology NAS, even though that's not officially supported. Still, at RAID-5 with 20TB drives that's around 75TB of usable fire-resistant storage in one package.
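As a sanity check on that capacity figure (assuming five 20TB drives in a single RAID-5 array; the decimal-TB vs binary-TiB conversion accounts for most of the gap between the raw 80 and "around 75"):

```python
# RAID-5 usable capacity: one drive's worth of space goes to parity.
drives = 5
drive_tb = 20  # vendor (decimal) terabytes per drive

usable_tb = (drives - 1) * drive_tb      # 80 decimal TB of usable space
usable_tib = usable_tb * 10**12 / 2**40  # ~72.8 TiB as the OS reports it

print(usable_tb, round(usable_tib, 1))
```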
Linux has traditionally stayed mostly true to the UNIX ethos of having lots of small, preferably interchangeable tools that do specific things extremely well, as opposed to one giant blob of code that everything has to rely on. Systemd goes 180° against that ethos. In many ways it's just as bad as a BLOB - a giant tangle of garbage code that you are more or less forced to put up with if you want to deal with a commercially-supportable distribution (sorry, people, this is important for a lot of businesses even if it's just a useless security blankie for upper management).
I was skeptical of cloud infrastructure from day one because in my entire career I have never seen anything close to the hard sell behind the cloud.
We all know what this was about: companies forcing customers into recurring revenue models even though they're no longer adding much (or any, or even negative) value to their products so that they can maintain their P/E ratios and stock prices.
That being said, there are many companies that can benefit substantially by moving their infrastructure to the cloud. These are the companies whose CIOs and Vice Presidents of IT are so thoroughly inept and run such bloated, inefficient trash fires of departments that moving literally anything further outside of their sphere of influence can save lots of money.
OK, I'm going deep nerdcore on you:
GeoWorks Ensemble, a little-remembered Windows 3 competitor with core functionality written in hand-optimized assembly language. It ran reasonably well on 8086 machines in 640K and was smoking fast on an 80286 with 2MB. Great multi-tasking, and I never saw a crash on it that couldn't be traced to a bad memory chip.
They never released an SDK, so nobody could write apps for it, and it died in obscurity. But it was seriously sweet for what it was.
Many of these protocols run in air-gapped environments, which limits many of these exploits to attackers who are physically on-site in environments full of deadly hazards. If necessary, all of these protocols can be wrapped in infrastructure that secures them all the way down to the wire level. Also keep in mind that simplicity can be a physical security / safety feature. If a connection between two devices suffers an authentication failure and that failure, say, causes an explosion at a chemical plant, was the addition of authentication to a system that could only be exploited at the wire level by cutting into conduits really a smart trade-off?
What's worse than nation-states not being able to keep their cyberweapons under control?
The endless self-righteous pearl-clutching over other countries doing it (especially here in the US).
Selling the bizarre fantasy that this will ever stop being a thing.
Pretending that solutions other than better security exist.
As mentioned before, organizations use mainframes for applications that cannot ever go down, ever, for any reason. Mainframes usually deliver somewhere close to their 99.999% uptime guarantees in the real, messy world that we actually live in. It's far from unheard of for them to have zero downtime over the course of a year. How many other systems offer that kind of uptime anywhere outside of their marketing materials?
I mean, there's an article in El Reg today about how many vendors lie through their teeth about their uptime metrics, and everyone with experience is nodding along as they read.
In fairness, not many applications really require five 9s of availability, but for those that do a mainframe is still a very respectable option.
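For concreteness, here's what those availability figures translate to as a yearly downtime budget (simple arithmetic, nothing vendor-specific):

```python
# Convert "number of nines" of availability into allowed downtime per year.
minutes_per_year = 365 * 24 * 60  # 525,600

for nines in (3, 4, 5):
    downtime_min = minutes_per_year * 10**-nines
    print(f"{nines} nines: {downtime_min:.2f} minutes of downtime per year")
```

Five nines works out to a little over five minutes per year - which is why so few vendors can honestly claim it.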
Customers (and Ubiquiti, for that matter) had no way of knowing the difference and had to react accordingly. "No significant damage was done" only if you assume this costs nothing.
We must deal with information that we are given. We then evaluate the credibility vs. the costs / benefits of reacting. In this case, the most reasonable response was to react as if the information was true.
The problem is that Ubiquiti made themselves custodians of data whose security was absolutely vital and wound up in a position (due to decisions they made) where they could not determine the security of that data.
In fairness, this is an extremely difficult problem to tackle well. But if a company is making that commitment on a large scale, then they need to be able to deliver on that commitment. Ubiquiti failed catastrophically.
Even if all of these allegations are true, I'm not sure if this makes Ubiquiti come out looking better or worse. If one person can cause this much infrastructure-level damage, what does it say about their infrastructure security architecture and overall commitment to security?
One of the reasons I've been sharply critical about the mass-centralization of vital data is that it increases the value of a security breach to obscene levels. Even if an inside threat isn't inherently malicious, what about blackmail, extortion, etc.? There are many parts of the world where grabbing somebody's family and cutting off parts until compliance is reached is not exactly out of the question. I would never blame that person for complying. And if the value of a large-scale breach of, say, Google or Microsoft's cloud-hosted workspaces is in the hundreds of millions or even billions of dollars / Euros / pounds, how do you even defend against some group with the budget and discipline to make a serious, no-holds-barred attempt at that? With the current state of international relations, can we even rule out governments (including the "civilized Western" ones) if they're not in it for profit, just creating mass damage?
Our industry has had many bad experiences caused by the technological equivalents of biological monoculture, and instead of learning from them it seems to be betting harder and harder on exactly that.
Even before information technology, there was an adage about putting all of your eggs in one basket.
One of the big reasons people don't update Magento as much as they should is that the update process is a complete trash fire. Since Adobe took over the updates have been of the quality we expect from the people who brought us Flash. For example, a recent security patch in the 2.3 release train cut out compatibility with PHP 7.2, and if you have critical third-party modules that don't like PHP 7.3 or 7.4 yet then tough luck. For complex sites it can take several weeks or months of re-development work to fix this, and to have it dumped on you without any notification is just sloppy.
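One partial defense, if you run Magento through Composer, is pinning the platform PHP version in `composer.json` so an update that drops support for your PHP version fails at dependency resolution instead of in production (a sketch; `config.platform` is a real Composer setting, the version number is illustrative):

```json
{
    "config": {
        "platform": {
            "php": "7.2.34"
        }
    }
}
```

This doesn't make the patch compatible with your modules, but it turns a silent breakage into an explicit error you can plan around.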
Exactly this. We were Cisco fanatics for years because they had the cool features we wanted and more importantly they delivered the closest thing you could get to a guarantee of no unscheduled downtime.
Neither of these is the case anymore. The features in question have long since been commoditized, and Cisco reliability is nowhere close to what it once was. In fact, we're seeing much better reliability from products that cost less than 1/10 as much - because they have far simpler software stacks based mostly on generic Linux functionality that's stable, tested, and mature. I'm fine with that if it gives us the features and performance that we need. And I'm delighted to spend less money for better uptime.
I do most of my work in the SMB space ($10s to $100s of $Ms in revenue) and we've had lead-time issues with Cisco for years. Granted, these were "weeks and months" not "months and quarters." But still. I get that companies don't want to have inventory on the books, but Cisco has been taking JIT to such a ridiculous extreme that any disruption was going to be painful and a huge disruption has created absurdity.
Let's just call this what it is: a company deliberately shifting its inventory management risks (and associated costs) onto its customers.
Yes, carrying inventory costs them money - it ties up capital, and capital has a cost. But unavailability costs their customers far more. It is perfectly clear where Cisco's priorities lie, and they are not with their customers.
I started moving my customers away from Cisco a while ago because of the costs of project delays, along with their noticeable and ongoing decline in software quality over the last decade or so.
Perhaps other customers and consultants should consider the same.
The one itsy bitsy witsy little difference being that Apple’s implementation *forces* all apps to respect the user’s browser preferences, which is a huge plus for user privacy. And which is the complete and total opposite of what Google is doing.
Of course, in The Register this is not worth mentioning. The comparison is made without context. The article just spews the Google propaganda response, almost making you wonder if the author’s real intent is to say “Yes, Google is doing something terrible, but so is everyone else” when nothing could be further from the truth. Even the mention of Microsoft’s annoying behavior with Edge doesn’t really compare in any meaningful way.
It would indeed be nice, and it was confusing at first, but now the reason is obvious. With the planned transition to Apple Silicon they probably didn't want to validate another CPU architecture and deal with the differences in instruction sets and other CPU-specific features, optimizations, bug workarounds, etc. for two or three years of products.
The funny thing is that the relative instability and forced pace of change with Office365 is undermining Microsoft's biggest lock-in: people being unwilling to switch to an unfamiliar office suite. Now that there are alternatives for Word and Excel that are reasonably feature-compatible and Outlook is no longer as "must have" as it used to be, there just isn't as much of a reason to care about Windows outside of vertical market systems that aren't cross-platform or web-based. I'm finding myself using Windows server less and less, and Windows desktop almost not at all.
Outside of certain, relatively narrow cases, the concept of continuous release software is an absolute dumpster fire. What most businesses want and need is stability and control. Every change is both an expense and a risk, and most changes being pushed incur this without adding value.
Cisco has been problematically committed to JIT inventory for years now, leading to unacceptable supply delays. Right now with the current supply chain messes, lead time on Firepower gear is over four months. So, in a sense, the problem has resolved itself.
No one dare speak of the other major mobile device vendor that is currently providing updates on even their low-end devices for 5-6 years past introduction and 2-3 years past end-of-sale, without anyone having to make a fuss about it. (note to butthurt downvoters: your tears are delicious).
The courts in this district have essentially made a "business" of being extremely plaintiff-friendly in IP disputes. Encouraging patent trolls to file there brings the district more judges, more staff, etc. It's very shady, and anything decided there should be viewed through that lens (and these are federal courts, so don't blame it on Texas - the state has no say in the matter).
I love the idea of both, and both can be of great use to hobbyists and enthusiasts. But as a mainstream mechanism for running Windows apps in Linux? No freaking way. I know how to troubleshoot issues with these systems, and even I simply lack the patience. Someone without the background knowledge would just be frustrated beyond all belief. At the end of the day, it's easier to run Windows as a VM if there are Windows apps that you simply can't get away from (or Windows on bare metal / dual boot for gaming).
That being said, for users who simply need a consistent look-and-feel and aren't hopelessly married to apps like Outlook and Visio (for example), this looks like an interesting project. Ironically enough, Microsoft's constant monkeying with their app UX has made transitions to alternatives like LibreOffice much more palatable. But promising or even suggesting Windows compatibility will likely backfire horribly in the market they're trying to enter.
I don't know if the results need to be integrated into the official kernel, but reverse-engineering bleeding-edge hardware enough to make Linux even semi-functional is a cool project that builds and exercises all kinds of worthwhile skills. I'll probably never do anything with it, but I have tons of respect for this team and their work.
I currently have a 4:1 upvote:downvote ratio, which I think is healthy. If I'm not getting blasted with downvotes on occasion then I'm probably not contributing anything interesting to the discussion. If people can't detect irony, sarcasm, or satire then... oh well. Their tears taste sweet to me.
And, yes, I already know which groups of people might upvote this and which groups of people might downvote this. Whatever.
So many companies assume that because their systems are cloud-based they don't need separate backups. This should have been a straightforward restore operation - still very damaging and deeply inconvenient, but not a half-a-million-dollar problem. Also left unanswered is how the criminal was able to get access to delete these accounts. With 2FA required for admins, the most likely explanation is that the client or contracting company was sloppy with access control. This is extremely common with outsourced IT work - lots of password sharing with few controls and audit trails, and passwords aren't changed even when a disgruntled employee leaves. I strongly doubt that it was some sort of "sophisticated attack."
"What they need in cellular settings is a toggle between 'best speed', 'best signal' and 'least power use'."
The iPhone 12 has this.
You can set it to use 5G only when the signal is strong enough to avoid meaningful excess battery drain (Settings -> Cellular -> (phone number) -> Voice & Data -> 5G On / 5G Auto / LTE). There is also a setting to control more- or less-aggressive cellular bandwidth consumption when 5G is available, to improve streaming quality (Settings -> Cellular -> (phone number) -> Data Mode -> Allow More Data on 5G / Standard / Low Data Mode).