Re: We need those rules too.
Sounds like it was a third-party seller, not Amazon Prime. Amazon washes their hands of the matter if you buy something outside their warehouses.
FWIW, despite everyone doing their best to convince me that Moto drops their phone support the day they release, they've kept up with both security updates within 1-2 months and major OS updates within a year for me. (Just got Android 10 on my G7.) Samsung isn't even close to that.
I'm probably one of the tiny handful of people on a site like El Reg that actually likes the new Ribbon interface. I'm glad they finally implemented it, and I feel it's improved my productivity, despite two decades of using OOo/LO.
Also, if you have ever needed charts in Calc, those are MASSIVELY better now than they were at the time of the fork. Like, a whole world of betterness. Charts went from being broken and useless to being better than Excel's.
Those are 16-bit ActiveX controls then, lots of VB6 apps were 16-bit or mixed 16/32-bit, as insane as that might sound. Then again, VB6 is from 1998, after all, and many businesses still ran Win 3.1 then.
The silliest thing Microsoft did from Win8 on was to get rid of XP Mode; it made a lot of Win7 transitions much less of a headache.
Unless every single car in the city is now self-driving and navigating entirely off of Google Maps, and all heading in the same direction, this didn't "fuck up the city." It caused a few people using Google Maps to pick alternate routes, probably mostly rideshare drivers, and confused the hell out of a few people who stayed on the road anyway.
And, you know, it's white hat hacking that points out a potential problem in a system in a relatively benign way.
Thank you for the TED talk on how things should be, but back in the real world, how do you propose any of this actually happens? Where will all of the cyber-savvy officers come from? What budget will pay for the equipment, software, training, and salary for each department's new task force? Who will make hostile nations cooperate with our investigations? Without an action plan, a goal will never be more than a goal and a feel-good TED talk.
At this point, no one in their right mind would adopt a second-tier Google app because Google will shortly abandon and eventually ax it, and Google will abandon and eventually ax every second-tier app it makes because no one in their right mind would adopt it. They made this bed, and they're going to lie in it until they stop treating big projects like someone's hobby project on GitHub.
Yeah, I got a panicked emergency call and had to uninstall Malwarebytes for someone on Saturday morning. Apparently the fixed update had been pushed by the time I was done, but there was no way to actually apply it, because the process was chewing up over 12 GB of memory on a 4 GB laptop, continuously allocating more, and it took ten minutes just to be able to kill the damn process via Task Manager, after first wasting time trying to stop the service cleanly. It's going to be a while before I trust Malwarebytes again; I'm not going to reinstall it just because they say the one-off goof is fixed.
It's obvious: Customers demanded it. Not just a few, but an overwhelming number of corporate(!) and high end home customers demanded that Microsoft's Pro OS include everything the home version does. Most higher-specced OEM systems only come with Pro, no Home option available, so anyone just buying a system for themselves would also expect at least everything in Home. And some people just want the top edition of everything despite just wanting to browse the web and play games.
They set up easy ways for IT departments to lock things down, but it turns out executives like to play games too.
"That's only true if you had a Windows 7/8 version to upgrade from, and you upgraded in the allotted time. Otherwise, you pay for it upfront, then pay for it again through telemetry."
Just yesterday I was still able to upgrade and activate a few systems to Windows 10 that had never been reserved (domain policy preventing any hint of upgrade), by starting a fresh install and plugging in the product key. Did a couple OEM and one retail, same result. Even if you'd rather upgrade than start fresh, you can still find multiple ways (the "accessibility technologies" link is the most popular).
It's patently obvious that Microsoft actually wants everyone on 10, come hell or high water, and all those deadlines are just there to get some holdouts nervous enough to do it.
Might just backfire, if the earnestness to please his corporate masters brings more damnation and regulation on them than if he'd just left well enough alone. Even if he was just doing exactly what they told him, they can still leave him to twist in the wind like a good scapegoat.
I doubt he even got more than vague verbal promises of future employment from anyone. He doesn't seem like the sharpest tool in the shed.
While this is a major step up from the last two "machine learning fail" studies The Register has breathlessly reported on -- at least this time it's not just testing some crap created from scratch by the researchers themselves -- they chose DeepSpeech of all the speech-to-text algorithms, one widely considered so bad that this might be the first study to actually bother testing it. It's no surprise that it fails so badly. Even if they have to confine themselves to open source (which makes no sense in this case, since they neither analyze the algorithms nor modify the code), CMU Sphinx and Kaldi are the gold standards.
No one cares how DeepSpeech fails, it's widely regarded as a failure. Waste of time testing that. Wait until it has another year or two to mature before it's worth testing.
Two volumes is what I mean by two (logical) drives; it's exactly the same scenario: it pushes the logic all the way up to the application or OS, which still won't be any good at handling it without specialized knowledge of the drive it's interacting with. When was the last time you saw an OS or application that was any good at scattering files across multiple volumes evenly? Most of them will just store all the most-accessed stuff on one and hardly anything on the other, driving access times up instead of down.
Whereas they could just stripe every couple of megabytes and create a reasonable default, and if they really wanted to go hog-wild, keep statistics to try to even out access patterns over time by moving files around disks.
Why on earth would you need to expose it as two drives? SATA/SAS already queue up tons of requests and the drive is already allowed to service them in non-linear order, as long as it's within the timeout. That's one of the pivotal parts of AHCI that makes it a huge improvement over ATA (Legacy) mode. If you have a parallel workload that wouldn't benefit from the improved random workload, then you can gain no benefit out of the dual heads at all anyway.
Displaying it as two hardware drives just sounds like a good way to confuse the hell out of most operating systems. Just internally split it into zones of a few megabytes each; that'll split up the data nicely. I suppose they could include an initialization command so that the OS can see both if it REALLY wants to micromanage it.
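The zone idea is easy to sketch. A toy model in Python (the stripe and sector sizes here are made-up illustration numbers, not anything a real firmware would be locked to):

```python
STRIPE_BYTES = 2 * 1024 * 1024  # hypothetical 2 MB zones
NUM_ACTUATORS = 2

def actuator_for_lba(lba, sector_size=512):
    """Map a logical block address to an actuator by striping.

    Consecutive zones alternate between the two heads, so both
    sequential and scattered workloads spread roughly evenly
    without the OS ever knowing the drive is dual-actuator.
    """
    stripe_index = (lba * sector_size) // STRIPE_BYTES
    return stripe_index % NUM_ACTUATORS

# A long sequential read alternates heads every zone:
lbas = range(0, 16384, 4096)  # one LBA per 2 MB at 512-byte sectors
print([actuator_for_lba(l) for l in lbas])  # [0, 1, 0, 1]
```

The drive presents one flat LBA space, and the split stays an internal detail, which is the whole point.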
Someone traded 14,400 bitcoin for... something. No one knows why, for what, for how much, or with whom. There's no way to know what they got in return, but the transactions were immediately "mixed" (laundered) so that might explain a few things. Someone was willing to pay the ludicrously high BTC transaction fees thousands of times to make that money untraceable.
"What about after hours service?"
Starting around 2010 most stations I visited after midnight just straight up turned their pumps off. They'd accept credit cards, but pump a grand total of five cents of fuel before cutting off. In one town I coasted into on fumes (Gilroy, California), EVERY station in town did that in 2012, and I had to pray I'd make it to the nearest truck stop. Maybe it's a fraud-prevention thing? I don't know.
So apparently it's not human attendants making service suck, it's the owners.
The devops guy didn't steal them. He accidentally nuked the code to decrypt them, which apparently can't be restored, so now they're just random bits in the wind.
It's as if some web server had exposed an initWallet() function that destroyed and recreated one, and an initWallets() that destroyed and recreated all of them. And they were both 100% public. The facepalm is strong with this company; the fact that he was involved with Ethereum's founding is a strong knock against Ethereum itself at this point.
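For illustration, a toy model of that kind of unguarded initializer (the class and names are hypothetical, not the actual Parity contract code):

```python
class Wallet:
    """Toy wallet demonstrating an initializer with no access control."""

    def __init__(self):
        self.owner = None
        self.balance = 0

    def init_wallet(self, owner):
        # BUG: callable by anyone, at any time. Re-running it wipes
        # the existing owner and balance -- the facepalm in question.
        self.owner = owner
        self.balance = 0

w = Wallet()
w.init_wallet("alice")
w.balance = 100
w.init_wallet("mallory")   # any caller can "re-initialize" at will
print(w.owner, w.balance)  # mallory 0
```

The fix is equally trivial: refuse to run the initializer more than once, or restrict it to the deployer.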
So they created a badly-trained machine learning algorithm, limited it to 32x32, and then created an easy attack against it? This is the kind of spam publishing that floods the lower-tier journals. I'm not even remotely interested until it's at least tested against one of the dozens of existing commercial machine learning algorithms.
It might have been relevant in the 90's, when algorithms actually did downsample to such an extreme just to work at all in the processing power available, but this has literally zero implication on anything today, it's pure wankery by academics way out of touch with the state of the industry.
If IT says "no" to supporting a piece of software that the business bundles, you have much bigger problems. I can't believe Michael Dell wouldn't just summarily fire anyone who would flat out refuse to support a legit business need.
Some manager in the chain probably got a bonus from giving the support contract to a third-party and saving Dell from having to hire or buy anything, though.
It's pretty trivial to live-relocate code as long as certain conditions are accounted for, as hinted in the article: turn entry points into mere trampolines to the real code. When you're ready to cycle the code location, copy the code to the new location, rewrite the trampoline, and tear down the old code once you're sure no one is executing it anymore. The code has moved and no caller knows the difference, just like a stable API/ABI.
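A toy model of the trampoline scheme in Python (real implementations patch machine-code jumps; here a function reference stands in for the jump target, and all names are made up):

```python
import threading

class Trampoline:
    """Stable entry point that forwards to a relocatable implementation."""

    def __init__(self, impl):
        self._impl = impl
        self._lock = threading.Lock()

    def __call__(self, *args, **kwargs):
        # Callers always come through here; they never see moves.
        return self._impl(*args, **kwargs)

    def relocate(self, new_impl):
        # "Copy the code to the new location, rewrite the trampoline."
        with self._lock:
            self._impl = new_impl

def add_v1(a, b):
    return a + b

def add_v2(a, b):  # same behavior, "relocated" code
    return b + a

add = Trampoline(add_v1)
print(add(2, 3))      # 5
add.relocate(add_v2)  # callers of `add` are none the wiser
print(add(2, 3))      # 5
```

The stable entry point is exactly what makes the relocation invisible, which is the ABI analogy above.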
Only these nutty Ethereum wonks would raise hell over the fact that someone put another tool in the toolbox, even if it's only rarely going to be used. There are lots of uses of SHA-1 (and MD5, and CRC32) that aren't even related to security at all, so the push to phase it out in favor of something stronger is a lot less compelling there. Do they cry that every other major programming language's standard library also has an implementation?
MDN's big strength compared to crap like W3S is that it includes a number of in-depth examples, documentation on inheritance order and how modifiers affect it, and other information that can help both novices and pros track down problems and solve tricky things more efficiently. It's not just the fact that they write English clearly, they also write code clearly. (And yes, they do integrate good stuff from Stack Exchange.)
Unlike MSDN, they aren't written primarily by first-year junior interns and only reviewed by senior developers when they feel like it, and unlike W3S, they don't just give a barely surface-level overview with a trivial 3-line example of usage.
> The issue is not the age of the existing digital standard, it's the time taken since the last time that people were forced to upgrade their sets or settop boxes on pain of them no longer working.
Like I said, what's the point? By the time the standard is hashed out, ratified, implemented, and finally cut over, you're looking at a minimum of another decade, maybe even two. But thanks for ignoring that.
That's mainly because the standard was way ahead of video technology of the day; it wasn't until the late 80's that televisions could even show off the full fidelity of the standards. Admittedly, for its time, both NTSC and PAL were good technology that used an enormous amount of bandwidth to make up for their simplicity. Raw NTSC is about 50-100MB/s, depending on how accurate you want color to be, meaning that you could store a whole 1.5-3 minutes of raw video on a DVD-9. It took a LONG time to outgrow that, but once HD showed up, that was that.
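A quick back-of-the-envelope check of those numbers, using the post's own 50-100 MB/s estimate and a nominal 8.5 GB DVD-9:

```python
# Sanity-check: how much raw NTSC fits on a DVD-9?
dvd9_bytes = 8.5e9               # DVD-9 capacity, roughly 8.5 GB
for rate_mb_s in (50, 100):      # raw NTSC estimate from the post
    seconds = dvd9_bytes / (rate_mb_s * 1e6)
    print(f"{rate_mb_s} MB/s -> {seconds / 60:.1f} minutes")
```

That works out to roughly 1.4 to 2.8 minutes, matching the 1.5-3 minute figure above.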
On the other hand, there's now lots of investment in continually improving the state of the art, and where ATSC could meet the needs of HD easily, it's again not going to work for 4K or HDR/deep color. This changeover is as much consumer-driven as industry-driven.
It's not like ATSC 1 barely came into being and now it's time to toss it, it's over 20 years old as well (though the H.264 extension is only 10 years old). By the time the new standard is ratified and anyone starts broadcasting with it, we're probably looking at another decade at least. There's only so much future-proofing you can put into digital technology with fancy algorithms, since it still has to be cheap enough to purchase early on.
Making every hash default to all zero, and actually hashing dirty blocks for real during periods of lower disk contention or after a set time expires? Seems straightforward enough. (Obviously also communicating with the OS, though interesting possibilities if you could get the OS to send a Trim when a file is deleted.) That would suck for blocks that randomly do hash out to zero, but they just get put in the "sorry, you don't get dedup" bucket. Even a 32-bit key pretty much obviates any need to care about that, losing one billionth of a percent of theoretical efficiency overall.
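A rough sketch of that lazy-hashing idea (pure illustration; the names and structures are made up, and a real controller would do this in firmware during idle windows rather than in Python):

```python
import hashlib

UNHASHED = b"\x00" * 32  # all-zero sentinel: "dirty, not yet hashed"

class LazyDedupStore:
    def __init__(self):
        self.blocks = {}     # block_id -> data
        self.hashes = {}     # block_id -> digest, or UNHASHED sentinel
        self.by_digest = {}  # digest -> canonical block_id

    def write(self, block_id, data):
        # Writes stay cheap: just store and mark the block dirty.
        self.blocks[block_id] = data
        self.hashes[block_id] = UNHASHED

    def hash_dirty(self):
        # Run during low disk contention, or after a timer expires.
        for bid, digest in list(self.hashes.items()):
            if digest != UNHASHED:
                continue
            d = hashlib.sha256(self.blocks[bid]).digest()
            if d == UNHASHED:
                continue  # the unlucky "sorry, no dedup for you" bucket
            self.hashes[bid] = d
            canonical = self.by_digest.setdefault(d, bid)
            if canonical != bid:
                self.blocks[bid] = self.blocks[canonical]  # share the data

store = LazyDedupStore()
store.write(1, b"same bytes")
store.write(2, b"same bytes")
store.hash_dirty()
print(len(store.by_digest))  # 1: both blocks deduplicated after the fact
```

The sentinel costs nothing at write time, and the real hashing gets deferred to whenever the drive is bored.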
ZFS was an amazing feat of engineering, but "overengineered" doesn't even begin to scratch the surface. All of its competitors have struggled to achieve 90% of its efficiency while reducing the huge disk and memory footprint it requires, and it looks like X-IO might have really cracked open the nut.
Sadly, this just means NetApp, EMC, or Oracle is going to buy them out and silo their tech forever.
Microsoft went all-in with better quicksearch over threading, topics, manual organization and tags, etc, after Google completely blew away the idea of manually organizing mail for most of the population. It turns out that only about 1% actually care that much, the rest just want some way to access it. Granted Office 2007 sucked balls in almost every way, but most of the Outlooks since 2010 have been relatively solid if you don't need it to act like a 90's Usenet reader.
It is obvious that investment has stalled for a long time, though; the answer to most Outlook feature requests has been "Use Sharepoint!" for a decade now. Great, now I have two problems.
Did the geologist also talk about Atlantis? Because that scenario sounds about as likely to happen as Godzilla climbing out of the waters to destroy the island. In case you hadn't noticed, the other Hawaiian islands that were formed by the same moving fissure are all still there, slowly eroding away. Please look up the "Hawaiian–Emperor seamount chain" for a more realistic idea of what happens to the island chain as the fissure moves.
If windbg wasn't supposed to be used by beginners, then !analyze -v wouldn't exist. Think about that for a second: your argument is essentially that all conveniences should be stripped away and everyone, pros and neophytes alike, should be made to suffer more, because suffering through it is what makes you a pro.
Far better to get beginners used to working with windbg and ease them into the more complex parts of debugging so that some of them can become pros. Anyone who would use windbg in the first place is already someone who wants to be a pro anyway, it's not exactly a mass-market application.
You managed to completely miss the point with both replies. No one was asking for a historical perspective on the protocol, and no one cares; it sounds like you're trying to excuse away the problems by claiming there's nothing we can do because it was designed years ago.
The whole point of the posts you're replying to is asking WHEN are they going to be fixed, so that a rogue actor can't maliciously bring down the internet easily, even if for a short time. (And ranting that no one seems to care enough about a gaping hole to do anything.)
Not just the late 90's; I did that in 2013 or so with relatively recent HP gear. Brought a desktop into the datacenter to act as a network capture device, plugged it in, and POW. No auto input switching. Fortunately, it wasn't hard to scrounge a power supply, but you certainly learn your lesson after that.
I mean, that works if you have lucid API documentation. If you don't, you're basically spending weeks spelunking the source code and/or throwing calls against the wall to see what sticks. And hopefully writing the API docs yourself, since no one else bothered to.
Aside from shelling out, Python also has fully-working dll/so support, with the ctypes library or one of its pretty wrappers, saving even more overhead versus spinning up an executable and parsing its stdout. Practically all of the important libraries have cpu-intensive operations in compiled .pyd (which is just a dll/so), and quite a few wrappers exist to call out to standard libs.
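For example, a minimal ctypes call straight into the C math library, no subprocess involved (assumes a Unix-ish system where find_library can locate libm):

```python
import ctypes
import ctypes.util
import math

# Locate and load the C math library; fall back to the current
# process's own symbols if find_library comes up empty (Unix only).
libm_path = ctypes.util.find_library("m")
libm = ctypes.CDLL(libm_path) if libm_path else ctypes.CDLL(None)

# Declare the C signature: double cos(double)
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0
assert abs(libm.cos(math.pi) + 1.0) < 1e-12
```

No process spawn, no stdout parsing, and the declared argtypes/restype keep ctypes from silently mangling the values.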
Programmers who consider Unicode an "unnecessary incompatibility" are the reason so much software is fundamentally broken the moment it encounters anything that isn't Latin-1. I don't know about you; maybe you never had to touch foreign words or names at all, but code pages were a damned nightmare for anyone who actually wanted to do things right.
It really isn't that difficult to figure out bytes vs strings. You've had 10 years to wrap your heads around it, and all you have to do is do the right thing. It's not like Python 2.7 is going anywhere; literally all you have to do is convert your shell scripts from calling python to python2 to keep them working, but you're too incompetent to even do that!
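The whole "hard" transition boils down to one rule: decode at the edges, work in text inside. A minimal illustration:

```python
# Python 3: bytes are bytes, text is text, convert at the I/O boundary.
raw = "Zoë".encode("utf-8")  # text -> bytes for the wire or disk
assert raw == b"Zo\xc3\xab"
assert len(raw) == 4         # four bytes on the wire...

text = raw.decode("utf-8")   # bytes -> text once it's back inside
assert len(text) == 3        # ...but three characters of text
assert text.upper() == "ZOË"
```

That's the entire model. Everything that "broke" in the 2-to-3 move was code that had been quietly conflating the two all along.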
This is literally no different from the worthless sysadmins that still complain about Perl 6 and Linux 3, because it violates their comfortable safe space, and they just want to get paid to never have to learn anything ever again.
Biting the hand that feeds IT © 1998–2020