I infer from this article that the "unix" brand name (user land) has recently been merged with the "POSIX" brand name (API)
True, if by 'recently' you mean '20+ years ago'...
83 publicly visible posts • joined 14 Jul 2007
This is the removal of a lot of ancient driver code specifically hard-coded for old PATA controller hardware. The same controllers are still supported with smaller, more modular drivers based on libata and presenting the drives as pseudo-SCSI devices.
So no, old 40-pin cables have not just been obsoleted. Just old device names in the /dev/hda style, which now become /dev/sda style.
I imagine there will be some exciting quirks to discover while moving to the new kernel on systems with the old hardware. You 'get to' discover all the places which have unwisely hard-coded hda-style names. (I also imagine that most of the affected systems will be VMs, here in the early 2020s...)
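A rough sketch of how you might hunt for those hard-coded names before rebooting into the new kernel. The file list here is just the usual suspects (it's illustrative, not exhaustive; local backup scripts can hide such names too):

```shell
# Rough sketch: scan likely config files for hard-coded hda-style
# device names before the /dev/hda -> /dev/sda switch bites you.
scan_for_hda() {
    grep -nE '/dev/hd[a-z][0-9]*' "$@" 2>/dev/null
}

scan_for_hda /etc/fstab /boot/grub/grub.cfg /etc/crypttab
# The durable fix is to reference filesystems by UUID or LABEL instead,
# which survives any future device-name reshuffle. 'blkid' lists them.
```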
It's a matter of self defense.
Since manifest v3 has a lot of API changes, extension developers who port to it will be under pressure to drop v2 support. New extensions developed after v3 is widespread and fully operational will of course target the gigantic Chrome market, ignoring or only addressing as a lagging afterthought the relatively tiny v2 market.
Browsers which do not support v3 will be left behind in a shrinking pool of sad remnant extensions. Look at the state of extensions for Firefox forks which haven't picked up WebExtensions (manifest v2) support.
None of this would apply if it were truly just a different manifest format; if a single extension could easily be packaged in either 'v2' or 'v3' format with a few simple build procedure changes. The name is misleading.
'tentatively starting with Chrome 94 in Q3 of 2021'
Interestingly, Firefox already releases on a 4-week cycle and currently has this schedule for mainline releases in 2021Q3:
2021-07-13 -- Firefox 91
2021-08-10 -- Firefox 92
2021-09-07 -- Firefox 93
According to the Chrome plan, Chrome will perpetually be 1, 2 or 3 releases higher than Firefox (depending on when in Q3 they switch to the 4-week cycle).
Makes you go 'hmmmm'.
> I’m pretty sure “SX” and “DX” don’t stand for anything meaningful at all
Oh no, it was amply demonstrated over time that 'SX' stood for 'sucks'. (Which meant that 'DX' stood for 'doesn't suck as much'.)
Sometimes 'SX' meant 'we removed half the I/O bus', sometimes 'we removed the integrated FPU'; whatever the tech details, it was clear what it meant to the user.
In addition to Stylus, it is worth noting that each(*) 'style' on userstyles.org has a link 'Install style as userscript' (just above 'Discussion'). Clicking on this is supposed to immediately invoke your Greasemonkey / Tampermonkey / Violentmonkey to suck it in as a userscript; this mechanism isn't currently working for me, but using `wget` on the URL pointed to by the link does work, and is preferable anyway since I can vet the result.
Said result is basically a 30-line userscript wrapped around the raw CSS(**) of the style. You could download it once, then subsequently download other styles from userstyles.org using the 'Show CSS Code' button and paste the received CSS into the middle of the static userscript. But why bother, when the site will already do it for you.
(*)Of course I only checked a few so it is always possible that this feature isn't universally available...
(**)One of the ones I downloaded had a bit more complexity having to do with adapting the CSS to different sites; this may be an additional reason to actually use the userjs downloader rather than the 'Show CSS' downloader.
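The 'paste the CSS into the middle of the static userscript' step can even be scripted. A toy sketch, assuming you've marked the insertion point in your saved wrapper with a placeholder line of your own choosing (CSS_GOES_HERE is my invention, not anything userstyles.org emits):

```shell
# Toy sketch: splice freshly downloaded CSS into a saved userscript
# wrapper at a placeholder line. sed's 'r' queues the file's contents
# for output after the current line, and 'd' then drops the placeholder.
splice_css() {   # usage: splice_css wrapper.user.js style.css
    sed "/^CSS_GOES_HERE$/ {
r $2
d
}" "$1"
}
```

Handy if you vet each downloaded style's CSS with `wget` + an editor anyway, as above.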
Is that picture in the article for real? They bring in cold liquid, heat it up on one TPU, pipe it to the next TPU, heat it some more, send it back to the cooling unit?
So presumably TPU #1 'mysteriously' always runs overly hot and has to throttle back, while TPU #0 gets to blithely crank along at full speed?
I think most currently shipping Intel CPUs have integrated GPUs, even if they are sold as a GPU-less model. The silicon is there, disabled for some combination of marketing & manufacture-time test failure reasons.
Even the GPUs which failed as full marketable GPUs probably work well enough to be used as background security lurkers.
... I wonder if they'll roll out a set of microcode updates which partially enable those GPUs to be used as security sidecars.
Swipe-across to answer a phone call is a reaction to several common failure modes:
- phone answers while you're fumbling it out of your pocket and would have chosen (on seeing the caller) to reject
- phone answers in your pocket before you even start digging it out
- phone answers in your pocket without you having noticed at all; then the caller gets to listen to you & your surroundings for as long as they want
A simple on-screen pushbutton just isn't reliable enough.
Should the phone UI have an option to turn off slide-to-answer, if you'd like to experience all those failure modes? Maybe. Or at least a hardware alternative like up-or-down-volume.
THIS is what I expect from The Register. I don't even care if it's totally wrong (and I'm not saying or even trying to imply that it is). It is interesting analysis, the polar opposite of the barely-chewed press releases which constitute an unfortunately large portion of recent output.
Do it again. Keep doing it.
FB and WTF aren't actually particularly 'integrated', they just provide a place where their respective chat things *can* lurk if you so choose. I don't.
The entire spectrum of Chrome extensions is usable on current releases of Opera.
I switched to it from Firefox about 2 years ago, have been reasonably satisfied since then. I have up-to-date FF and Chrome available as well, poke around with each from time to time, return to Opera without regret.
Um. There are approximately 50 window managers in the Ubuntu repositories.
This returns 57 matches:
$ sudo apt install aptitude   # if necessary
$ aptitude search '?and(!~ri386,~Pwindow-manager)'
-- of which half a dozen are essentially duplicates.
$ aptitude search '?and(!~ri386,~Psession-manager)'
returns 7 non-duplicates.
Neither of those are a good comprehensive list of "desktop environments" available in Ubuntu. That would be some sort of matrix of window manager x session x who knows what else, numbering in the thousands of possibilities.
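Back-of-envelope, using the counts above. Only the first two numbers come from the aptitude searches; the third multiplier is an illustrative guess, which is rather the point:

```shell
# Back-of-envelope combination count: window managers x sessions,
# then one more guessed axis (display managers) to show how fast
# the possibility space blows up.
wms=50; sessions=7; display_managers=5
echo $(( wms * sessions ))                     # 350 already
echo $(( wms * sessions * display_managers ))  # 1750 -- 'thousands' isn't far off
```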
The problem isn't that you're stuck on Unity, but that there's an overwhelming sea of possibilities with nary a map in sight.
My guess: some crucial security fix which is just too difficult to backport to the ancient 11.2 code base. For this reason to make sense, there also has to be at least one gigantic paying customer or strategic partner who firmly insists on continued Linux support. This was already a necessary condition for the previous ongoing 11.2 patching, but now we know that the insisting customer is even bigger or more strategic than we might previously have imagined...
Operating system initialization is extremely CPU- and chipset-specific. Showing that your OS tests successfully in 32-bit mode on a modern 64-bit x86 CPU is not at all the same as showing that it actually works on real 32-bit hardware. I'm talking about differences in page table setup, various control registers, workarounds for ancient bugs like "f00f" and the FP divide bug, etc.
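One concrete illustration of 'not the same hardware': on Linux you can at least verify whether your test box is genuinely 32-bit-only, or a 64-bit CPU merely booted into 32-bit mode. The CPU reports the 'lm' (long mode) flag regardless of the kernel's bitness:

```shell
# Sketch: distinguish a true 32-bit x86 CPU from a 64-bit one running
# a 32-bit kernel, via the 'lm' flag in /proc/cpuinfo.
if grep -qw lm /proc/cpuinfo; then
    echo "64-bit-capable CPU -- NOT a real 32-bit test target"
else
    echo "true 32-bit CPU"
fi
```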
(Aside: a good rule of thumb about OS testing is: if you haven't tested it, IT DOESN'T WORK. This isn't an actual identity, but it's close. Changes *here* have unanticipated effects *there*, so it's really necessary to test every supported scenario against every release.)
So, don't make any changes to 32-bit init code? But huge swaths of the x86 code are shared between 32- and 64-bit paths. An attempt to freeze the 32-bit init code would involve changes all over the shared x86 arch part of the kernel. This would be potentially disruptive to the 64-bit path, thus nearly impossible to get merged into the mainline kernel. So now you're asking distros to maintain a forked kernel for arbitrarily long.
Init code isn't the only pain point. Even if you get the OS to boot, you'll eventually find other subtle issues leading to data corruption, panics, etc., unless you are rigorously testing on real 32-bit hardware.
Meanwhile, old hardware can continue to run the same old software that it already runs. Nothing a distro does is going to reach out and retroactively destroy existing x86_32-supporting OS releases.
Embedded users shouldn't be too bothered since they rarely use "full fat" desktop versions of any distro. Many embedded systems build their entire userland themselves and don't really rely on a "distro" at all. It is irrelevant that they can't acquire things like modern browsers able to handle the latest web site tricks. (For most embedded applications, it would be a serious security problem if it was even *possible* to install a full browser...)
Actually http://theregister.co.uk/2015/11/06/blackberry_priv_review becomes http://theregister.co.uk/2015/11/06/blackberry_priv_review/print.html which redirects to http://theregister.co.uk/Print/2015/11/06/blackberry_priv_review and works; but including both /Print/ and /print.html redirects a second time, to (404) http://theregister.co.uk/Print/Print/2015/11/06/blackberry_priv_review
(Sloppy Crapmonster lives up to the moniker :)
The article fails to mention the single worst feature of the PCjr -- at least the early version which was inflicted upon me.
Real IBM PCs had 15 characters worth of typeahead: if it was busy while you were typing, what you had typed was stored in a little buffer and played back later, when the next prompt arrived. If you typed too much (the 16th and subsequent chars), it would BEEP! to let you know that the extra chars were being ignored.
PCjr? Oh my.
There was still typeahead on the PCjr. There was also still a beep. The semantic interaction between these, however, had been diabolically redesigned.
For some reason, the PCjr wasn't always able to receive a typed character while it was busy. Someone once claimed this was because of its lack of DMA; I never learned why. In any case, it *did* apparently have some inkling that it had lost a character.
The PCjr's somewhat more modest "bip!" therefore meant "I lost the one character you just typed".
At least that was the theory. Unfortunately, even the signal telling it that it had lost a character was flaky. What the sound actually meant was "I MIGHT have just lost a character".
Which meant that as soon as you'd typed 1-2, maybe even 3 chars, you got an audible signal meaning "give up, you have no idea what's in the input buffer now".
Arrggghhhh!!!!!
Android apps are mostly Dalvik (cough*Java*cough) bytecode; they should run just as "fast" on x86 as on ARM.
Presumably where there's ARM code, the phone uses some sort of JIT ARM-to-x86 compiler. This stuff used to be terribly slow (10-100x penalty). These days there is no technological reason it should cost more than about 2:1. That is, *if* they cared to develop or buy the very best, the penalty shouldn't be too bad. If they just slapped something naive together then it's probably back to 10:1 or worse... Benchmarks will eventually tell the real story.
And presumably popular apps which use native ARM code will eventually be recompiled as fat binaries or separate x86 packages.
My guess is that the current generation of Atom SoCs will prove to be perfectly adequate also-rans in the cell phone CPU arena. They will not compare successfully against the latest multicore ARMs like Tegra 3, Qualcomm S4, etc. Atom is only barely touching the compute-per-watt range of the newer ARMs.
When writing an app for a tightly controlled platform that has only one screen size, you can be forgiven for designing to the size.
On a platform with two screen sizes, you would be sort of stupid to do so, but many developers could be expected to be on that side of the line.
Android cell phones collectively have at least a dozen different screen dimensions. Add tablets and you're up to at least 20. Coding Android apps to care greatly about screen size is just plain stupid.
Desktop apps have a resize control in the corner of the window. Web apps get fed into browsers on all size screens, which live in windows with resize controls. Any strong sensitivity to window size is idiotic.
BTW there are a lot of idiotic pages on the web. This does not excuse them...
Perhaps... "accepted" but being held by the moderator while he comments on them? That's cheating, you know :)
All three posts in my recent spate of blather read "Accepted by moderator at [time stamp]" on my posts page. As do my three from yesterday. It seems like the 6th, at least, should have qualified easily under the "5 happy posts in 3 months" rule.
Therefore, apparently it prints "Accepted by moderator" whether it's referring to a human or an automated system.
I'd prefer if it said "Accepted by automoderation" or something like that. Perhaps with a nice link to the guidelines anchored on "automoderation".
[I thought I posted this but can't find it in either the forum or "my posts"... going senile...]
I wasn't asking about how moderators handle anon posts, but whether the <i>system</i> retains knowledge about who posted each anon post and whether the resulting scores accrue to the real poster. Then I decided you probably had to retain authorship information for various legal reasons; and it really would make sense to charge people for their anonymous misbehavior. So I probably answered my own question, but still seek confirmation.
Plus I get to check myself for HTML Super Powers...
I wasn't asking how the moderators handle anon posts, but how the scoring system does.
When an anon post is accepted, rejected, or removed after acceptance, that's a scoring action that *could* accrue to the actual commentard account that created the post. If the database keeps track of that, etc.
Or anon posts could be truly anon (at least in that regard), i.e. their ownership could be completely whitewashed as soon as they were injected into the review queue, leaving no way for the system to accrue the score.
I guess for liability reasons, if nothing else, you probably need to hold onto who posted what, even anonymously. So I'll venture a guess that anon posts do accrue to your score...?
Seems like it should be more sophisticated than that. I seem to have posted 57 times since April 2007, so that's what, 58 months, almost exactly one a month. Sporadically, of course.
You should either have a "lifetime achievement flag", or do it in terms of good:bad ratio over the commentard's entire posting life span.
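The lifetime-ratio idea is trivial arithmetic. A toy sketch using my 57-posts-in-58-months figure from above (the 'rejected' count is a made-up number for illustration; shell arithmetic is integer-only, so scale by 100):

```shell
# Toy sketch of a lifetime posting-rate / good:bad ratio calculation.
posts=57; months=58; rejected=3   # 'rejected' is invented for the example
echo "rate x100: $(( posts * 100 / months ))"        # 98 => ~1 post/month
echo "good:bad x100: $(( posts * 100 / rejected ))"  # 1900 => 19:1
```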
Hmph. Commentard (and hmph) not in Opera's dictionary.
by posting a stream of inoffensive low level drivel, just to keep above the 5-per-3-mo line.
Some random questions along those lines:
- If you post AC and it's accepted, does that accrue to your account's total?
- If you post AC and it's accepted, then flagged/reported by a bunch of users and eventually removed, does *that* come out of your account's hide?
- Finally, if you have posted several messages before the moderator got to any of them; and one of those causes you to reach the 5-per-3 threshold, do the rest of your queued posts suddenly self-moderate?