Re: Who is the Vendor?
You've never heard of Dell EMC, NetApp, Nexenta, IBM, HPE, Pure Storage, etc? Even when you own and operate all of your own gear, you still have a vendor that you occasionally make a panicked call to.
Two volumes is what I mean by two (logical) drives; it's exactly the same scenario: it pushes the logic all the way up to the application or OS, which still won't be any good at handling it without specialized knowledge of the drive it's interacting with -- when was the last time you saw an OS or application that was any good at scattering files across multiple volumes evenly? Most of them will just store all the most-accessed stuff on one and hardly anything on the other, raising access times instead of reducing them.
Whereas they could just stripe every couple of megabytes and create a reasonable default, and if they really wanted to go hog-wild, keep statistics to try to even out access patterns over time by shuffling data between the actuators.
Why on earth would you need to expose it as two drives? SATA/SAS already queue up tons of requests, and the drive is already allowed to service them in non-linear order, as long as it's within the timeout. That's one of the pivotal parts of AHCI that makes it a huge improvement over ATA (Legacy) mode. And if you don't have a parallel workload that would benefit from improved random access, you gain no benefit from the dual heads anyway.
Displaying it as two hardware drives just sounds like a good way to confuse the hell out of most operating systems. Just internally split it into zones of some megabytes each; that'll nicely split up the data. I suppose they could include an initialization command so that the OS can see both actuators if it REALLY wants to micromanage them.
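To make the zoning idea concrete, here's a minimal sketch in Python of how firmware might alternate fixed-size zones between the two actuators. The names and the 4 MB zone size are purely illustrative, not any vendor's actual design:

```python
# Illustrative sketch of zone striping for a dual-actuator drive.
# All names and numbers are hypothetical; real firmware would do this
# in the drive's LBA-mapping layer, invisible to the OS.

ZONE_SIZE_MB = 4          # stripe granularity: "every couple of megabytes"
SECTOR_SIZE = 512
SECTORS_PER_ZONE = ZONE_SIZE_MB * 1024 * 1024 // SECTOR_SIZE

def actuator_for_lba(lba: int) -> int:
    """Map a logical block address to one of two actuators.

    Alternating fixed-size zones means any large file, and any spread of
    small files, naturally lands on both actuators without the OS or
    application ever knowing the drive has two heads.
    """
    zone = lba // SECTORS_PER_ZONE
    return zone % 2  # even zones -> actuator 0, odd zones -> actuator 1

# A 16 MB sequential read crosses four zones, so both actuators share it:
lbas = range(0, 16 * 1024 * 1024 // SECTOR_SIZE, SECTORS_PER_ZONE)
print([actuator_for_lba(lba) for lba in lbas])  # [0, 1, 0, 1]
```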
Someone traded 14,400 bitcoin for... something. No one knows why, for what, for how much, or with whom. There's no way to know what they got in return, but the transactions were immediately "mixed" (laundered) so that might explain a few things. Someone was willing to pay the ludicrously high BTC transaction fees thousands of times to make that money untraceable.
"What about after hours service?"
Starting around 2010 most stations I visited after midnight just straight up turned their pumps off. They'd accept credit cards, but pump a grand total of five cents of fuel before cutting off. In one town I coasted into on fumes (Gilroy, California), EVERY station in town did that in 2012, and I had to pray I'd make it to the nearest truck stop. Maybe it's a fraud-prevention thing? I don't know.
So apparently it's not human attendants making service suck, it's the owners.
The devops guy didn't steal them. He accidentally nuked the code to decrypt them, which apparently can't be restored, so now they're just random bits in the wind.
It's as if some web server had exposed an initWallet() function that destroyed and recreated one, and an initWallets() that destroyed and recreated all of them. And they were both 100% public. The facepalm is strong with this company; the fact that he was involved with Ethereum's founding is a strong knock against Ethereum itself at this point.
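For anyone who didn't follow along, here's the shape of the bug sketched in Python. This is an analogy, not the actual Solidity contract code; the class and method names are made up:

```python
# Not the real contract code -- just the shape of the mistake. The bug
# class: an initialization routine left callable by anyone, at any time,
# instead of exactly once by the owner.

class WalletLibrary:
    def __init__(self):
        self.owner = None

    def init_wallet(self, caller):
        # BROKEN: no check that the wallet is uninitialized, so any caller
        # can re-run this and make themselves the owner (and, from there,
        # kill the shared library everyone's wallets depended on).
        self.owner = caller

    def init_wallet_fixed(self, caller):
        # The one-line guard whose absence froze the funds.
        if self.owner is not None:
            raise PermissionError("already initialized")
        self.owner = caller
```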
So they created a badly-trained machine learning algorithm, limited it to 32x32, and then created an easy attack against it? This is the kind of spam publishing that floods the lower-tier journals. I'm not even remotely interested until it's at least tested against one of the dozens of existing commercial machine learning algorithms.
It might have been relevant in the 90's, when algorithms actually did downsample to such an extreme just to work at all with the processing power available, but this has literally zero bearing on anything today; it's pure wankery by academics way out of touch with the state of the industry.
If IT says "no" to supporting a piece of software that the business bundles, you have much bigger problems. I can't believe Michael Dell wouldn't just summarily fire anyone who would flat out refuse to support a legit business need.
Some manager in the chain probably got a bonus from giving the support contract to a third-party and saving Dell from having to hire or buy anything, though.
It's pretty trivial to live relocate as long as certain conditions are accounted for, as hinted in the article: Turn entry points into mere trampolines to the real code. When you're ready to cycle the code location, copy the code to the new location, rewrite the trampoline, and tear down the old code when you're sure no one is executing it anymore. Code's changed and no caller knows the difference, just like a stable API/ABI.
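A toy Python sketch of the idea (the names are mine, and a native implementation would be patching jump instructions rather than rebinding a variable, but the shape is the same):

```python
# Trampoline-style live relocation: callers only ever hold the stable
# entry point, so the "real" code can be swapped underneath them.

def _impl_v1(x):
    return x + 1          # the code currently living at the old location

_current = _impl_v1       # the trampoline's jump target

def entry_point(x):
    """The stable entry point callers bind to; it just forwards."""
    return _current(x)

def relocate(new_impl):
    """Copy the code to its new home, then atomically retarget the
    trampoline. In native code you'd also wait out in-flight calls
    before tearing down the old copy."""
    global _current
    _current = new_impl

print(entry_point(1))            # 2, via _impl_v1
relocate(lambda x: x + 100)      # "move" the code
print(entry_point(1))            # 101 -- no caller knew it moved
```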
Only these nutty Ethereum wonks would raise hell over the fact that someone put another tool in the toolbox, even if it's only rarely going to be used. There are lots of uses of SHA-1 (and MD5, and CRC32) that aren't even related to security at all, so the push to phase it out in favor of something stronger is a lot less compelling. Do they cry that every other major programming language's standard library also has an implementation?
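Case in point, a perfectly ordinary non-security use straight from Python's own standard library: fingerprinting content for dedup or cache keys, where adversarial collisions aren't part of the threat model.

```python
# SHA-1 as a content fingerprint for deduplication -- no security claim,
# just a fast, stable identifier for "have I seen these bytes before?"
import hashlib

def content_key(data: bytes) -> str:
    return hashlib.sha1(data).hexdigest()

seen = {}
for blob in (b"hello", b"world", b"hello"):
    seen.setdefault(content_key(blob), blob)
print(len(seen))  # 2 -- the duplicate "hello" was deduplicated
```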
MDN's big strength compared to crap like W3S is that it includes a number of in-depth examples, documentation on inheritance order and how modifiers affect it, and other information that can help both novices and pros track down problems and solve tricky things more efficiently. It's not just the fact that they write English clearly, they also write code clearly. (And yes, they do integrate good stuff from Stack Exchange.)
Unlike MSDN, they aren't written primarily by first-year junior interns and only reviewed by senior developers when they want to, and unlike W3S, they don't just give a bare surface-level overview with a trivial 3-line example of usage.
> The issue is not the age of the existing digital standard, it's the time taken since the last time that people were forced to upgrade their sets or settop boxes on pain of them no longer working.
Like I said, what's the point? By the time the standard is hashed out, ratified, implemented, and finally cut over, you're looking at a minimum of another decade, maybe even two. But thanks for ignoring that.
That's mainly because the standard was way ahead of video technology of the day; it wasn't until the late 80's that televisions could even show off the full fidelity of the standards. Admittedly, for its time, both NTSC and PAL were good technology that used an enormous amount of bandwidth to make up for their simplicity. Raw NTSC is about 50-100MB/s, depending on how accurate you want color to be, meaning that you could store a whole 1.5-3 minutes of raw video on a DVD-9. It took a LONG time to outgrow that, but once HD showed up, that was that.
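For the curious, the back-of-the-envelope math checks out, taking the quoted 50-100MB/s at face value and ~8.5 GB for a dual-layer DVD-9:

```python
# How much raw video fits on a DVD-9 (~8.5 GB) at the quoted rates?
dvd9_bytes = 8.5e9
for rate in (50e6, 100e6):          # bytes per second
    print(f"{rate / 1e6:.0f} MB/s -> {dvd9_bytes / rate / 60:.1f} minutes")
# 50 MB/s  -> 2.8 minutes
# 100 MB/s -> 1.4 minutes   ...so "1.5-3 minutes" is about right
```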
On the other hand, there's now lots of investment in continually improving the state of the art, and where ATSC could meet the needs of HD easily, it's again not going to work for 4K or HDR/deep color. This changeover is as much consumer-driven as industry-driven.
It's not like ATSC 1 barely came into being and now it's time to toss it, it's over 20 years old as well (though the H.264 extension is only 10 years old). By the time the new standard is ratified and anyone starts broadcasting with it, we're probably looking at another decade at least. There's only so much future-proofing you can put into digital technology with fancy algorithms, since it still has to be cheap enough to purchase early on.
Making every hash default to all zero, and actually hashing dirty blocks for real during periods of lower disk contention or after a set time expires? Seems straightforward enough. (Obviously also communicating with the OS, though interesting possibilities if you could get the OS to send a Trim when a file is deleted.) That would suck for blocks that randomly do hash out to zero, but they just get put in the "sorry, you don't get dedup" bucket. Even a 32-bit key pretty much obviates any need to care about that, losing one billionth of a percent of theoretical efficiency overall.
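Something like this, sketched in Python -- the structure and names are my own illustration, not any actual product's design:

```python
# Lazy dedup hashing with all-zero as the "dirty, not yet hashed" sentinel.
import hashlib

UNHASHED = 0  # reserved all-zero sentinel

def hash32(block: bytes) -> int:
    """Illustrative 32-bit block hash."""
    return int.from_bytes(hashlib.blake2b(block, digest_size=4).digest(), "big")

class LazyDedupTable:
    def __init__(self):
        self.hashes = {}       # block_id -> 32-bit hash, or UNHASHED
        self.no_dedup = set()  # blocks whose real hash collided with zero

    def write(self, block_id: int):
        self.hashes[block_id] = UNHASHED   # fast path: just mark dirty

    def idle_scrub(self, read_block):
        """Run during low disk contention (or after a timer): hash the
        dirty blocks for real."""
        for block_id, h in self.hashes.items():
            if h != UNHASHED or block_id in self.no_dedup:
                continue
            h = hash32(read_block(block_id))
            if h == UNHASHED:
                # Genuinely hashed to all-zero: the "sorry, you don't get
                # dedup" bucket. At 32 bits that's ~2**-32 of all blocks,
                # a rounding error on overall efficiency.
                self.no_dedup.add(block_id)
            else:
                self.hashes[block_id] = h
```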
ZFS was an amazing feat of engineering, but "overengineered" doesn't even begin to scratch the surface. All of its competitors have struggled to achieve 90% of its efficiency while reducing the huge disk and memory footprint it requires, and it looks like X-IO might have really cracked open the nut.
Sadly, this just means NetApp, EMC, or Oracle is going to buy them out and silo their tech forever.
Microsoft went all-in with better quicksearch over threading, topics, manual organization and tags, etc, after Google completely blew away the idea of manually organizing mail for most of the population. It turns out that only about 1% actually care that much, the rest just want some way to access it. Granted Office 2007 sucked balls in almost every way, but most of the Outlooks since 2010 have been relatively solid if you don't need it to act like a 90's Usenet reader.
It is obvious that investment has stalled for a long time, though; the answer to most Outlook feature requests has been "Use Sharepoint!" for a decade now. Great, now I have two problems.
Did the geologist also talk about Atlantis? Because that scenario sounds about as likely to happen as Godzilla climbing out of the waters to destroy the island. In case you hadn't noticed, the other Hawaiian islands that were formed by the same hotspot are all still there, slowly eroding away as the plate carries them off it. Please look up the "Hawaiian–Emperor seamount chain" for a more realistic idea of what happens to the island chain as the plate drifts over the hotspot.
If windbg wasn't supposed to be used by beginners, then !analyze -v wouldn't exist. Think about that for a second: your argument is essentially that all conveniences should be stripped away and everyone, pros and neophytes alike, should be made to suffer more, because suffering through it is what makes you a pro.
Far better to get beginners used to working with windbg and ease them into the more complex parts of debugging so that some of them can become pros. Anyone who would use windbg in the first place is already someone who wants to be a pro anyway, it's not exactly a mass-market application.
You managed to completely miss the point with both replies. No one was asking for some kind of historical perspective on the protocol, no one cares, it sounds like you're trying to excuse away problems by claiming that there's nothing we can do because it was designed years ago.
The whole point of the posts you're replying to is asking WHEN are they going to be fixed, so that a rogue actor can't maliciously bring down the internet easily, even if for a short time. (And ranting that no one seems to care enough about a gaping hole to do anything.)
Not just the late 90's; I did that in 2013 or so with relatively recent HP gear. Brought a desktop into the datacenter to act as a network capture device, plugged it in, and POW. No auto-switching voltage input on that power supply. Fortunately, it wasn't hard to scrounge a replacement, but you certainly learn your lesson after that.
I mean, that works if you have lucid API documentation. If you don't, you're basically spending weeks spelunking the source code and/or throwing calls against the wall to see what works. And hopefully writing the API docs yourself, since no one else bothered to.
Aside from shelling out, Python also has fully-working dll/so support, with the ctypes library or one of its pretty wrappers, saving even more overhead versus spinning up an executable and parsing its stdout. Practically all of the important libraries have cpu-intensive operations in compiled .pyd (which is just a dll/so), and quite a few wrappers exist to call out to standard libs.
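For example, a few lines of ctypes gets you straight into libc with no child process and no stdout parsing (assuming a Unix-ish system where find_library("c") resolves; on Windows you'd load a different runtime):

```python
# Call a C library function directly from Python -- no subprocess needed.
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"no subprocess required"))  # 22
```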
Programmers who consider Unicode an "unnecessary incompatibility" are the reason why so much software is fundamentally broken anytime it encounters anything that isn't Latin-1. I don't know about you, because you probably never had to touch foreign words or names at all, but Code Pages were a damned nightmare to anyone who actually wanted to do things right.
It really isn't that difficult to figure out bytes vs strings. You guys have had 10 years to wrap your heads around it, and all you have to do is do the right thing. It's not like Python 2.7 is going anywhere; literally all you have to do is convert your shell scripts from calling python to python2 to make them work, but you're too incompetent to even do that!
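The whole "mystery", in a handful of lines: bytes at the I/O boundary, str everywhere else, explicit conversion in between.

```python
# bytes are what goes over the wire or into a file; str is decoded text.
# You convert explicitly at the boundary instead of letting Latin-1
# assumptions rot silently like in the code-page days.
raw = "Zoë".encode("utf-8")      # str -> bytes at the output boundary
print(raw)                        # b'Zo\xc3\xab'
print(raw.decode("utf-8"))        # bytes -> str at the input boundary: 'Zoë'
print(raw.decode("latin-1"))      # wrong table: 'ZoÃ«' -- classic mojibake
```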
This is literally no different from the worthless sysadmins that still complain about Perl 6 and Linux 3, because it violates their comfortable safe space, and they just want to get paid to never have to learn anything ever again.
gTLDs broke a LOT of internet hardware and software that for some reason included a hardcoded TLD list that it wouldn't deviate from. Heck, some were so bad that they didn't even allow ccTLDs. There are some times when breaking bad assumptions is the only way to go, and given the non-impact on the vast majority of OSes, hardware, and software, might as well just make it happen.
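The anti-pattern in miniature -- a hypothetical validator, but plenty of real gear shipped this exact shape:

```python
# A frozen TLD list that silently rejects the future.
import re

TLD_RE = re.compile(r"\.(com|net|org|edu|gov|mil|int)$")  # the bad assumption

def is_valid_hostname(name: str) -> bool:
    return TLD_RE.search(name) is not None

print(is_valid_hostname("example.com"))         # True
print(is_valid_hostname("example.co.uk"))       # False -- no ccTLDs at all
print(is_valid_hostname("example.technology"))  # False -- every gTLD breaks it
```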
All of which go out of date about 5 minutes after you walk away from the machine. Or so long and bitter experience tells me.
Learning to let go lessened my stress significantly. Once managed switches became a thing, it was much simpler to just track the MAC through a breadcrumb trail of ARP & mac-address tables until I found the final port, then it usually wasn't much effort to find the PC. (The massive sales office switch being the only exception.)
Finding wireless devices, on the other hand, that's the REAL fun.
"It shouldn't be" is something kids say. It just is, and the better you are at it, the more clients love you. I actually joined my current business partner partly because he's a basket of nerves and hates dealing with client rage, and I can just shrug it off and take the brunt. You'd be surprised how much letting someone vent calms them down. (I still prefer it when they find a more suitable target, of course.)
I'm pretty sure DevOps still includes the Ops part, and while a lot of "DevOps" kiddies I've met are basically hotshot programmers who've learned a couple of tricks about deploying and debugging the OS and slap the hot title du jour on themselves, there will always be room for operators who intimately know their software and hardware, even if they didn't develop it themselves. A big part of the value proposition of DevOps is that we can be fairly seamlessly pulled off of a development project to manage an operations project.
With any luck, we can leverage their development background to make something better than the usual Perl monstrosities that function as glue code. At its best, it's not just that we fuse the roles, it's that we can step into whatever role we're needed in and do better.
On the other hand, consultants are consultants, and any buzzword you hear is no better than any other buzzword. Any business hoodwinked by that deserves their fate.
Honestly, if a business wants to grab an ERP and try to shoehorn it in on the cheap, more power to them. When they need to go beyond the basic COTS customization capabilities, hopefully they'll call or hire someone capable.
What do you mean, "If there was a feature," just use TLS, don't use the pre-shared key method. It's explicitly recommended against in the documentation. TLS (with or without an additional PSK auth) already gives you perfect forward secrecy and has for over a decade.
Just stop being lazy and use certificates.
Nope, it doesn't have to succeed; it happens during the processing of the initial certificate exchange. An actual RCE hasn't been demonstrated, just a crash, but of the sort that an RCE could probably be built on. There's another potential RCE, as well as multiple information leaks, if the attacker actively manipulates data as a MITM (which is usually only possible if server verification is turned off).
"After a stunt like that on a credit card:"
Despite being a debit card, it's still processed on the hotel's side as if it was a credit card. Their payment gateway is going to have some words for them, if they aren't dropped entirely, and Visa is probably going to have some very serious words with both the processor and the bank for allowing so many obviously anomalous transactions to go through.
I've yet to see a single piece of ransomware that would transparently decrypt for the convenience of its users for a whole month to run out the backup clock, while at the same time serving encrypted bytes to backup software. Can you name a single one? Despite the obviousness, that's not a trivial thing to build; ransomware authors never bother, because they're all about the smash-and-grab, not nation-state-grade infiltration.
"....here are the bugs the review did turn up:
* There's a buffer library API that handles dynamically allocated memory safely;
* Wrappers like strncpyt() and openvpn_snprintf() protect unsafe C standard libraries by protecting against buffer overflows and unsafe NULL termination; and
* Keys and other sensitive data are securely wiped from memory to prevent information leaks."
A bit more explanation might be needed?
One of the first things they pound into HR's heads is that you can't discuss why someone left, or you can be faced with a lawsuit. Since he brought up that it was all over being Gorean, HR (or in this case, the lead) can refute that specific claim, but they still don't get to air all the dirty laundry, especially if there's a lot of bickering and he-said-she-said.
To me, it sounds like he was involved in a lot of internal strife, and it came down to him or someone else (or maybe even both). It's perfectly reasonable to fire someone who is causing office problems, as long as it isn't for being a member of a protected class.