-- and, of course, you can use the 30-line script to inject any CSS of your own. I would cut it down to about 10 lines since the bulk of the script relates to dealing with old pre-Greasemonkey userscript injection frameworks...
In addition to Stylus, it is worth noting that each(*) 'style' on userstyles.org has a link 'Install style as userscript' (just above 'Discussion'). Clicking on this is supposed to immediately invoke your Greasemonkey / Tampermonkey / Violentmonkey to suck it in as a userscript; this mechanism isn't currently working for me, but using `wget` on the URL pointed to by the link does work, and is preferable anyway since I can vet the result.
Said result is basically a 30-line userscript wrapped around the raw CSS(**) of the style. You could download it once, then subsequently download other styles from userstyles.org using the 'Show CSS Code' button and paste the received CSS into the middle of the static userscript. But why bother, when the site will already do it for you?
(*)Of course I only checked a few so it is always possible that this feature isn't universally available...
(**)One of the ones I downloaded had a bit more complexity having to do with adapting the CSS to different sites; this may be an additional reason to actually use the userjs downloader rather than the 'Show CSS' downloader.
serial liquid cooling
Is that picture in the article for real? They bring in cold liquid, heat it up on one TPU, pipe it to the next TPU, heat it some more, send it back to the cooling unit?
So presumably TPU #1 'mysteriously' always runs overly hot and has to throttle back, while TPU #0 gets to blithely crank along at full speed?
I think most currently shipping Intel CPUs have integrated GPUs, even if they are sold as a GPU-less model. The silicon is there, disabled for some combination of marketing & manufacture-time test failure reasons.
Even the GPUs which failed as full marketable GPUs probably work well enough to be used as background security lurkers.
... I wonder if they'll roll out a set of microcode updates which partially enable those GPUs to be used as security sidecars.
Re: Please, I don't want your cool hack
Swipe-across to answer a phone call is a reaction to several common failure modes:
- phone answers while you're fumbling it out of your pocket and would have chosen (on seeing the caller) to reject
- phone answers in your pocket before you even start digging it out
- phone answers in your pocket without you having noticed at all; then the caller gets to listen to you & your surroundings for as long as they want
A simple on-screen pushbutton just isn't reliable enough.
Should the phone UI have an option to turn off slide-to-answer, if you'd like to experience all those failure modes? Maybe. Or at least a hardware alternative like up-or-down-volume.
Thank you for this item of actual thoughtful introspection and analysis
THIS is what I expect from The Register. I don't even care if it's totally wrong (and I'm not saying or even trying to imply that it is). It is interesting analysis, the polar opposite of the barely-chewed press releases which constitute an unfortunately large portion of recent output.
Do it again. Keep doing it.
FB and WTF aren't actually particularly 'integrated', they just provide a place where their respective chat things *can* lurk if you so choose. I don't.
The entire spectrum of Chrome extensions is usable on current releases of Opera.
I switched to it from Firefox about 2 years ago, have been reasonably satisfied since then. I have up-to-date FF and Chrome available as well, poke around with each from time to time, return to Opera without regret.
Re: Ubuntu meh!
Um. There are approximately 50 window managers in the Ubuntu repositories.
This returns 57 matches:
$ sudo apt install aptitude    # if necessary
$ aptitude search '?and(!~ri386,~Pwindow-manager)'
-- of which half a dozen are essentially duplicates.
$ aptitude search '?and(!~ri386,~Psession-manager)'
returns 7 non-duplicates.
Neither of those is a good, comprehensive list of "desktop environments" available in Ubuntu. That would be some sort of matrix of window manager x session x who knows what else, numbering in the thousands of possibilities.
The problem isn't that you're stuck on Unity, but that there's an overwhelming sea of possibilities with nary a map in sight.
My guess: some crucial security fix which is just too difficult to backport to the ancient 11.2 code base. For this reason to make sense, there also has to be at least one gigantic paying customer or strategic partner who firmly insists on continued Linux support. This was already a necessary condition for the previous ongoing 11.2 patching, but now we know that the insisting customer is even bigger or more strategic than we might previously have imagined...
The testing burden is real
Operating system initialization is extremely CPU- and chipset-specific. Showing that your OS tests successfully in 32-bit mode on a modern 64-bit x86 CPU is not at all the same as showing that it actually works on real 32-bit hardware. I'm talking about differences in page table setup, various control registers, workarounds for ancient bugs like "f00f" and the FP divide bug, etc.
(Aside: a good rule of thumb about OS testing is: if you haven't tested it, IT DOESN'T WORK. This isn't an actual identity, but it's close. Changes *here* have unanticipated effects *there*, so it's really necessary to test every supported scenario against every release.)
So, don't make any changes to 32-bit init code? But huge swaths of the x86 code are shared between 32- and 64-bit paths. An attempt to freeze the 32-bit init code would involve changes all over the shared x86 arch part of the kernel. This would be potentially disruptive to the 64-bit path, thus nearly impossible to get merged into the mainline kernel. So now you're asking distros to maintain a forked kernel for arbitrarily long.
Init code isn't the only pain point. Even if you get the OS to boot, you'll eventually find other subtle issues leading to data corruption, panics, etc., unless you are rigorously testing on real 32-bit hardware.
Meanwhile, old hardware can continue to run the same old software that it already runs. Nothing a distro does is going to reach out and retroactively destroy existing x86_32-supporting OS releases.
Embedded users shouldn't be too bothered since they rarely use "full fat" desktop versions of any distro. Many embedded systems build their entire userland themselves and don't really rely on a "distro" at all. It is irrelevant that they can't acquire things like modern browsers able to handle the latest web site tricks. (For most embedded applications, it would be a serious security problem if it was even *possible* to install a full browser...)
Re: There once was a way for a single page
Actually http://theregister.co.uk/2015/11/06/blackberry_priv_review becomes http://theregister.co.uk/2015/11/06/blackberry_priv_review/print.html which redirects to http://theregister.co.uk/Print/2015/11/06/blackberry_priv_review and works; but including both /Print/ and /print.html redirects a second time, to (404) http://theregister.co.uk/Print/Print/2015/11/06/blackberry_priv_review
(Sloppy Crapmonster lives up to the moniker :)
The article fails to mention the single worst feature of the PCjr -- at least the early version which was inflicted upon me.
Real IBM PCs had 15 characters' worth of typeahead: if the machine was busy while you were typing, what you had typed was stored in a little buffer and played back later, when the next prompt arrived. If you typed too much (the 16th and subsequent chars), it would BEEP! to let you know that the extra chars were being ignored.
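For the curious, that buffer-with-beep behaviour can be sketched in a few lines (this is purely an illustrative model, not actual BIOS code; the class and method names are made up):

```python
# Sketch of classic PC typeahead: a fixed 15-slot buffer; keystrokes
# beyond capacity are dropped and each rejection triggers a beep.
class TypeaheadBuffer:
    CAPACITY = 15  # real PCs buffered 15 characters

    def __init__(self):
        self.buf = []
        self.beeps = 0  # count of rejected keystrokes

    def key_pressed(self, ch):
        if len(self.buf) < self.CAPACITY:
            self.buf.append(ch)   # stored, replayed at the next prompt
        else:
            self.beeps += 1       # BEEP! the extra char is ignored

    def drain(self):
        """Play back (and clear) everything typed ahead."""
        chars, self.buf = self.buf, []
        return ''.join(chars)
```

The PCjr, as described below, inverted the meaning of the beep: instead of "your extra input was safely ignored", it meant "some input may have been lost".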
PCjr? Oh my.
There was still typeahead on the PCjr. There was also still a beep. The semantic interaction between these, however, had been diabolically redesigned.
For some reason, the PCjr wasn't always able to receive a typed character while it was busy. Someone once claimed this was because of its lack of DMA; I never learned why. In any case, it *did* apparently have some inkling that it had lost a character.
The PCjr's somewhat more modest "bip!" therefore meant "I lost the one character you just typed".
At least that was the theory. Unfortunately, even the signal telling it that it had lost a character was flaky. What the sound actually meant was "I MIGHT have just lost a character".
Which meant that as soon as you'd typed 1-2, maybe even 3 chars, you got an audible signal meaning "give up, you have no idea what's in the input buffer now".
not much ARM code should need to be emulated
Android apps are mostly Dalvik (cough*Java*cough) bytecode; they should run just as "fast" on x86 as on ARM.
Presumably where there's ARM code, the phone uses some sort of JIT ARM-to-x86 compiler. This stuff used to be terribly slow (10-100x penalty). These days there is no technological reason it should cost more than about 2:1. That is, *if* they cared to develop or buy the very best, the penalty shouldn't be too bad. If they just slapped something naive together then it's probably back to 10:1 or worse... Benchmarks will eventually tell the real story.
And presumably popular apps which use native ARM code will eventually be recompiled as fat binaries or separate x86 packages.
My guess is that the current generation of Atom SoCs will prove to be perfectly adequate also-rans in the cell phone CPU arena. They will not compare successfully against the latest multicore ARMs like Tegra 3, Qualcomm S4, etc. Atom is only barely touching the compute-per-watt range of the newer ARMs.
When writing an app for a tightly controlled platform that has only one screen size, you can be forgiven for designing to the size.
On a platform with two screen sizes, you would be sort of stupid to do so, but many developers could be expected to be on that side of the line.
Android cell phones collectively have at least a dozen different screen dimensions. Add tablets and you're up to at least 20. Coding Android apps to care greatly about screen size is just plain stupid.
Desktop apps have a resize control in the corner of the window. Web apps get fed into browsers on all size screens, which live in windows with resize controls. Any strong sensitivity to window size is idiotic.
BTW there are a lot of idiotic pages on the web. This does not excuse them...
Perhaps... "accepted" but being held by the moderator while he comments on them? That's cheating, you know :)
All three of my recent set of blather read "Accepted by moderator at [time stamp]" on my posts page. Plus my three from yesterday. It seems like the 6th, at least, should have qualified easily under the "5 happy posts in 3 months" rule.
Therefore, apparently it prints "Accepted by moderator" whether it's referring to a human or an automated system.
I'd prefer if it said "Accepted by automoderation" or something like that. Perhaps with a nice link to the guidelines anchored on "automoderation".
[I thought I posted this but can't find it in either the forum or "my posts"... going senile...]
I wasn't asking about how moderators handle anon posts, but whether the <i>system</i> retains knowledge about who posted each anon post and whether the resulting scores accrue to the real poster. Then I decided you probably had to retain authorship information for various legal reasons; and it really would make sense to charge people for their anonymous misbehavior. So I probably answered my own question, but still seek confirmation.
Plus I get to check myself for HTML Super Powers...
I wasn't asking how the moderators handle anon posts, but how the scoring system does.
When an anon post is accepted, rejected, or removed after acceptance, that's a scoring action that *could* accrue to the actual commentard account that created the post. If the database keeps track of that, etc.
Or anon posts could be truly anon (at least in that regard), i.e. their ownership could be completely whitewashed as soon as they were injected into the review queue, leaving no way for the system to accrue the score.
I guess for liability reasons, if nothing else, you probably need to hold onto who posted what, even anonymously. So I'll venture a guess that anon posts do accrue to your score...?
Seems like it should be more sophisticated than that. I seem to have posted 57 times since April 2007, so that's what, 58 months, almost exactly one a month. Sporadically, of course.
You should either have a "lifetime achievement flag", or do it in terms of good:bad ratio over the commentard's entire posting life span.
Hmph. Commentard (and hmph) not in Opera's dictionary.
One could game the system
by posting a stream of inoffensive low level drivel, just to keep above the 5-per-3-mo line.
Some random questions along those lines:
- If you post AC and it's accepted, does that accrue to your account's total?
- If you post AC and it's accepted, then flagged/reported by a bunch of users and eventually removed, does *that* come out of your account's hide?
- Finally, if you have posted several messages before the moderator got to any of them; and one of those causes you to reach the 5-per-3 threshold, do the rest of your queued posts suddenly self-moderate?
bandwidth concerns etc.
I live under the cloud of PG&E (Pacific Gas & Electric -- northern California). So when I went to investigate my meter I ran across some city of SF documents addressing these concerns (sorry, didn't save URL).
In sum, from memory:
- The system rolled out in SF uses 2.4GHz, but not WiFi.
- Each per-meter unit emits 4 packets a day; each packet lasts some number of milliseconds (<100, I think).
- Transmission power is <1W. Transmission power and length are hard-limited by running the transmitter off of a slow-charge capacitor.
- There are several hundred thousand per-dwelling transmitters.
- The receivers are on towers (existing power or phone poles), 77 of them in the city. They receive the individual transmissions and also send (at 2W) a once-daily time sync packet.
- Collected data is transmitted over a cellular radio, not particularly different from a random person talking on a cell phone, except it's 20' in the air; data transmission could run for as much as 4hr/day per receiver, though that's a worst-case-in-many-ways calculation.
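A back-of-envelope check on those figures (assumptions: I take the 100 ms upper bound per packet at face value, and use 300,000 as a round stand-in for "several hundred thousand" meters):

```python
# Rough citywide airtime from the per-meter figures quoted above.
meters = 300_000
packets_per_day = 4
packet_seconds = 0.100          # <100 ms each, worst case

total_tx_seconds = meters * packets_per_day * packet_seconds
per_receiver = total_tx_seconds / 77    # 77 receivers in the city

print(f"citywide meter transmit time: {total_tx_seconds / 3600:.1f} h/day")
print(f"per receiver, if evenly spread: {per_receiver / 60:.1f} min/day")
```

That works out to roughly half an hour of received meter chatter per tower per day, which squares with "nothing to worry about"; the 4hr/day figure above is the cellular backhaul, a separate radio.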
So, nothing to worry about *here*. Which is not to say that designs elsewhere couldn't be much worse.
Oh, and that's nothing to worry about in regards to interference, bandwidth use, personal irradiation etc. Feel free to freak out about whether they're reading your usage accurately or are all part of a Big Plot...
Teach me to hammer my thumb
Speaking as an RF newbie:
What equipment or other tools would I need to investigate this in regard to my own house's smart meter (in a different utility's clutches)? I suppose I should start by checking whether it has an FCC ID printed on the case.
Links to helpful do-it-yerself FAQs etc.?
I begin to wonder if some of my in-house WIFI flakiness is induced, not just inherent in the protocol...
I haven't watched the video or searched elsewhere, but ... it sounds entirely plausible that these sensors would be deployed on a sheet of siliconE, a stretchy material that can be made into thin sheets. Silicon, the element, isn't so stretchy.
I see the article itself has been patched to read "silicon". Which is probably wrong.
> At some point you will hold the compute power and memory storage of a Cray Y-MP in your pocket.
I believe that point would be Today.
Newer smartphones have 1GiB RAM. A common SoC implementation, nVidia Tegra 250 T20, has >5 GFLOPS in its GPU and two 1GHz integer cores.
According to Wikipedia, the original Y-MP series topped out at 8 processors of 333 MFLOPS each (total 2.7 GFLOPS); and a princely 512MiB of RAM. The minimum configuration had 128MiB RAM and 666 MFLOPS.
So you can certainly have the power of *a* Y-MP, and arguably as much power as the biggest configuration you could order when Y-MP was announced. Not to mention a whole cluster of Cray-1's (4MiB RAM!, 250 MFLOPS if you really push it).
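The arithmetic, for anyone checking my work (the Y-MP figures are the Wikipedia numbers quoted above; the Tegra figure is the ">5 GFLOPS" claim):

```python
# Y-MP peak vs. a circa-2011 smartphone SoC's GPU, in GFLOPS.
ymp_max = 8 * 0.333    # 8 processors at 333 MFLOPS each -> ~2.66 GFLOPS
ymp_min = 2 * 0.333    # minimum config: 666 MFLOPS
tegra_gpu = 5.0        # ">5 GFLOPS" claimed for the Tegra 250's GPU

print(f"Y-MP max: {ymp_max:.2f} GFLOPS, Tegra GPU: >{tegra_gpu:.0f} GFLOPS")
```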
Later derivative models (which tended to drift away from the "Y-MP" designation) may eventually have gotten as powerful as a throwaway desktop available today, e.g. $900 Lenovo Ideacentre 7727-5DU with 3.4GHz quad-core i7-2600K (+ 3.8GHz turbo + hyperthreading), 200 GFLOPS video chip (Radeon 6450), 12GiB RAM, 1.5TiB disk.
Yes I know GPU GFLOPS are talking about single precision and they're only about 1/4 as fast at double precision. So if your desire for a Y-MP includes double precision floating point vector processing, you'll still need to drag around a wagonload of Cray hardware to (slightly) beat your smartphone.
The Cray probably blows the socks off the desktop, not to mention the phone, in I/O bandwidth. Or maybe not. It didn't have a bunch of USB & firewire ports...
faster full refresh
Better refresh is a solvable software problem. Keep track of the last N screens (since last full refresh, if any). Watch for pixels which have been toggled back and forth (or whatever it is that makes them blurry). After drawing the new content, go back and reinforce the color states of pixels which have state histories most likely to be blurred.
IOW, do a full refresh but only do it to pixels likely to need it.
You do the page flip first so the user experience is "instantaneous"; then go back and correct the few pixels that need correction. Or -- if the number of pixels needing fixing tends to be small, do the pixel reinforcing inline with the regular page draw.
"Likely to need" is a heuristic which presumably can fail. So provide a user action to do a full refresh -- which they will hopefully never need to use.
Ultimately this action should be happening inside the e-ink display itself: each pixel remembering 2-3 past states and self-reinforcing when it likely needs to. Vaguely like having a data separator (ancient floppy & hard drive tech...) built into every pixel.
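The scheme above can be sketched in a few lines (everything here is hypothetical: the history depth, the function names, and especially the "toggled recently" heuristic, which would need tuning against the actual display):

```python
# Selective e-ink refresh: keep a short per-pixel history, and after
# drawing the new frame, reinforce only pixels whose recent history
# suggests they are likely to be blurred.
HISTORY = 3  # remember the last few states per pixel

def needs_reinforce(history):
    # Heuristic: a pixel that toggled back and forth recently is the
    # likeliest to ghost. Replace with whatever the hardware dictates.
    return len(set(history)) > 1

def draw_frame(histories, new_frame):
    """histories: dict pixel -> list of recent states (mutated in place).
    new_frame: dict pixel -> new state.
    Returns the set of pixels to hit with a full-refresh waveform."""
    reinforce = set()
    for px, state in new_frame.items():
        h = histories.setdefault(px, [])
        h.append(state)
        del h[:-HISTORY]              # keep only the last few states
        if needs_reinforce(h):
            reinforce.add(px)
    return reinforce
```

The "user action to force a full refresh" is then just clearing all the histories and reinforcing everything.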
Why must they publish a new spec for this thing? Use 2^n layers (2^1 initially, I suppose) and just use some of the high address bits as the layer selector. Or some of the low address bits -- whichever arrangement performs better.
Yes, there might be some extra performance to be eked out if the memory controller is more specifically aware of the new arrangement. So OK, bake in some new out-of-band signal a newfangled controller can use to access new info, but keep it within existing signaling so the same memory can be used on old systems.
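The bits-as-layer-selector idea is trivial to express (the field widths here are assumptions for illustration; a real part would fix them in its datasheet):

```python
# Split a physical address into (layer, within-layer offset), using the
# top address bits as the layer selector. Low-bit selection would just
# mask the other end, trading locality for interleaving.
LAYER_BITS = 1      # 2**1 = 2 layers initially
ADDR_BITS = 32      # assumed per-device address width

def split_address(addr):
    layer = addr >> (ADDR_BITS - LAYER_BITS)
    offset = addr & ((1 << (ADDR_BITS - LAYER_BITS)) - 1)
    return layer, offset
```

To an old controller this is just a device with twice the address space; only a layer-aware controller would need to know the split exists.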