Re: What about Pixel owners?
> My Pixel C tablet updated to Android 7.1.2 this week. I'd assume that the security patches were part of that.
Maybe, but check the date of the "Android security patch level". It's "5 February 2017" here.
> In the old days of film, the sound was carried on a strip down the side of the film. Nothing to get out of sync.
Oh yes it could.
There was a specific distance between the frame being projected and the audio pickup. If there was a little too much film threaded through (loose by even one sprocket hole), the sound would be out of sync.
There is too much going on around the optical part of the projector – stopping each frame while the shutter opens 24 times a second – to also pick up the audio track there (whether optical or magnetic), hence the offset.
That comments link at the bottom of articles is hardcoded to http. So if you are reading an article over https, in going to the forums you lose TLS.
Simplest fix is to make it a protocol relative link – //forums.theregister.co.uk/forum/… – so it inherits the protocol of the article page.
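The inheritance behaviour is easy to demonstrate with Python's `urljoin` (the article URLs below are just stand-ins):

```python
from urllib.parse import urljoin

# A protocol-relative ("network-path") reference inherits the scheme
# of the page it appears on.
forum_link = "//forums.theregister.co.uk/forum/1/"

print(urljoin("https://www.theregister.co.uk/some_article/", forum_link))
# -> https://forums.theregister.co.uk/forum/1/

print(urljoin("http://www.theregister.co.uk/some_article/", forum_link))
# -> http://forums.theregister.co.uk/forum/1/
```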
Given Andrew's established position on the BBC, as much as Tim Harford's position may be questionable (and, given his position as an economist, more about pointing out that "obvious truths" often are not), I suspect that no party in this is being objective.
Perhaps if someone without such an established anti-BBC history had written it, one might give it more credence.
> TFS, but corporate have already decided to stick with the old Sourcesafe back-end
TFS's non-git backend bears about as much relationship to SourceSafe as CVS does.
None of the limitations apply, and none of the ongoing issues apply (no need for a weekly Analyze run to fix corruption, etc.).
> One thing I've never understood is why software is released with known bugs
In addition to the already noted: changes (including bug fixes) often introduce new bugs, there is also the problem that many bugs are benign – no one is affected – and the change adds risk (the new bug could be far worse).
There is also the case where an issue is found late. Should the release be delayed for that fix? (Especially true of test releases.)
Contemporary software systems are very complex. Even a small system will have tens of thousands of interacting parts. Mostly these do not interact (much effort is put into avoiding interactions) but sometimes they need to, and sometimes they do unexpectedly. Any change can potentially trigger an unwanted interaction.
“Some humans would do anything to see if it was possible to do it. If you put a large switch in some cave somewhere, with a sign on it saying 'End-of-the-World Switch. PLEASE DO NOT TOUCH', the paint wouldn't even have time to dry.”
― Terry Pratchett
Indeed, but, as in much of the press (including this article), the coverage is disingenuous.
The press get to pay all costs even if they win if they refuse to go via arbitration. But, and this is the balance, the complainant gets to pay all costs if they refuse to go via arbitration.
The reality is that much of the press read a few headlines (not the actual proposed rules) and then spout off. This is driven by a few "leading" editors who want a toothless "regulator" so they can continue to write whatever they like knowing the vast majority cannot risk launching a libel complaint.
> Right. Because you couldn't have possibly included (de)compression
This is covered, but it is assumed you know compressed files are updatable (this is another restriction on the compression algorithm: you need to be able to change parts of the file without rewriting and recompressing the whole thing).
So one scenario is a file created on an x86 box and then updated on an Alpha box: the Alpha system has to be able to compress in the same way, while still meeting the performance criteria.
> […]and they claim a false positive rate of just 0.35 per cent.
> (Since they write that 80,000 domains are registered each day, that's still around 250 sites a day unfairly tagged as evil, so PREDATOR still needs some refinement on that score).
The false positive rate is the proportion of sites tagged as fraudulent that shouldn't be. Not the proportion of all registrations. So it will be fewer than 250 falsely tagged sites. The paper does not seem (on a quick scan through) to suggest what proportion of registrations are fraudulent, but that 0.35% should be applied to that number not the total number of registrations.
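To make the distinction concrete, a quick back-of-envelope sketch (the 10% flagged fraction is purely hypothetical, since the paper doesn't give one):

```python
registrations_per_day = 80_000
false_positive_rate = 0.0035  # 0.35% of *flagged* domains, not of all domains

# Wrong reading: apply the rate to every registration.
wrong = registrations_per_day * false_positive_rate
print(wrong)  # 280.0 per day -- close to the article's "around 250" figure

# Right reading: apply it only to the domains actually tagged as fraudulent.
# Suppose (hypothetically) 10% of registrations get flagged:
flagged = registrations_per_day * 0.10
right = flagged * false_positive_rate
print(right)  # 28.0 mis-tagged domains per day
```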
> I thought the Kernel was supposed to trap the Three Finger Salute - how can it be disabled by this application?
In the days of PS/2 (and, before that, PC and PC-AT) keyboard connectors, the Ctrl-Alt-Del combination was handled in the BIOS, and the kernel always got control.
This is not true of USB connected keyboards.
In practice if someone has physical access they can always take control (given a little time), so having a special key combination provides no useful protection.
> Imagine if, when OpenSSL was flawed, or MD5 was cracked, we could
> just mark it as obsolete, mark an upgrade path, and EVERY piece of software
> that dealt with them worldwide was updated to use a replacement library or
> object class as soon as it was next executed?
And watch as some minor behavioural "fix" in the new version (on some other part of its functionality) causes many of those applications to break.
Behavioural dependencies can be very subtle, no amount of unit/integration testing will cover them all (100% is not enough, people will depend on officially "undefined" results).
Real world backwards compatibility can include leaving in some bugs…
> Would you care to have that stock under your own control, in your own warehouse, or would you prefer to rent space from a warehouse space provider?
Neither. I would prefer the supplier keeps it in their ownership until I call for it (and take ownership) when I have an immediate use for it (this can lead, because I've already got an order for it, to me effectively having ownership for a negative amount of time).
> IaaS is a load of shit
Often, yes. The cost of renting VMs in the cloud frequently is exorbitant.
But not always.
For example, if your steady state is a couple of decent servers and a moderate database, using IaaS will cost more than putting your own servers into a DC. However, if you need to scale that to eight servers and a big database (Black Friday, sales, the run-up to Christmas, and similar periods) then suddenly the numbers change.
If your peak load is not much more than your steady state (within a factor of two, say) then having fixed resources makes sense. But if you sometimes need far more for short periods then outright purchasing makes less sense, even if the "normal" periods cost more on IaaS. Not paying for those extra six servers 80% of the time is enough of a saving to more than cover the higher IaaS rates during that 80% of the time.
And that's before considering the significant savings on IaaS when you purchase your base capacity on an annual basis rather than daily.
For a non-trivial business the sums may be very different depending on which LoB application you're talking about.
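A rough sketch of that trade-off, with entirely hypothetical prices:

```python
# All prices hypothetical, purely to illustrate the shape of the sums.
own_server_monthly = 300.0   # colo cost per owned server, paid 24x7
iaas_server_monthly = 450.0  # IaaS rate for an equivalent VM (a premium)

base_servers = 2
peak_servers = 8
peak_fraction = 0.20  # Black Friday, sales, run-up to Christmas...

# Owning: you must provision for the peak all year round.
own_cost = peak_servers * own_server_monthly

# IaaS: pay the premium rate, but only for what is actually running.
iaas_cost = (base_servers * iaas_server_monthly
             + (peak_servers - base_servers) * iaas_server_monthly * peak_fraction)

print(own_cost)   # 2400.0 per month
print(iaas_cost)  # 1440.0 per month
```

With a flatter load profile (peak close to base) the premium rate wins and owning is cheaper, which is the "within a factor of two" rule of thumb above.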
> The world's first 'webcam' was rigged up at MIT to see the level of coffee in a filter machine.
You linked to the page which in the *first sentence* says Cambridge University. Which is in Cambridge, England.
A certain location in New England is also named for the city, and there is apparently also a seat of learning there. But the coffee pot and webcam existed in the original.
Building a complete custom app for iOS is even more expensive than adapting a working web site for the form factor (and browser limitations).
Separately: is Safari becoming the new IE6: everyone has to support it, but it costs more to support than all the others combined?
> I am a Windows sceptic, of course, […] given the awfulness of Win10
You start by assuming it is bad. And then it is: what a surprise.
Please do not pretend to perform analysis when you've already determined the conclusion. (I'll withdraw that if you confirm you're a management consultant, since then, of course, your job is to confirm said management's choice.)
> Indeed, why El Reg persists in conducting monthly "analysis" of the noise contained in someone's over-precise Excel spreadsheet cells is a mystery.
Exactly.
From the article:
> down just .01 per cent from its August share
I would be surprised if the underlying data could justify a margin of error as small as ±1 percentage point; I expect it is closer to double that.
Any smaller change is statistically meaningless.
That seems over-complex (and would require massively more PCIe lanes) when SSDs already handle parts of the flash failing (and are thus over-provisioned with flash at manufacture).
Just treat a chip level failure as part of the same process. If a sufficiently large proportion of the flash is out of action then it is time for drive level replacement.
Much like the process HDDs go through to remap bad sectors until there are no more spare sectors left on the drive.
As I discovered recently, local file storage – and not particularly quick local file storage (the destination was a 5400 RPM disk) – can quite happily saturate 1Gbps Ethernet.
On folders of moderately sized files (~10MB) the transfer was hitting the network's limit, moving at a net rate of about 950Mbps.
(Of course, when the copy hit folders of small files, sub-4KB, the net transfer rate tanked :-(.)
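For reference, the arithmetic behind those observations:

```python
link_mbps = 950  # observed net rate
bytes_per_sec = link_mbps * 1_000_000 / 8
print(bytes_per_sec / 1e6)  # 118.75 MB/s -- within a 5400 RPM disk's sequential rate

# With ~10MB files the per-file overhead is negligible next to wire time:
file_bytes = 10 * 1_000_000
print(file_bytes / bytes_per_sec)  # ~0.084 s of wire time per file

# With 4KB files, even a few milliseconds of per-file metadata work
# (open, create, close, directory update) dwarfs the wire time:
small_bytes = 4096
print(small_bytes / bytes_per_sec)  # ~3.4e-05 s of wire time per file
```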
I doubt it will happen. SSDs are so much quicker for random ops already that the necessary investment in spinning rust – its microcontrollers and firmware – would never pay off.
Cheaper to invest in (near) real-time replication from an SSD array (satisfying the application's IO needs) to a high-reliability array (batched writes and an otherwise lifetime-focused design).
> since most cloud providers do not seem to be certified to anything;
But not all. Eg. https://www.microsoft.com/en-us/trustcenter/Compliance/default.aspx
Includes one for UK.GOV (towards the bottom).
[This is in no way a suggestion that Azure is "secure" (whatever that means), just that there is at least one provider that is getting certified.]
> Unless you're trying to support IE on Windows XP, you'll rarely find a case
Make that pre-SP2 Windows XP. SNI client support was added in SP2.
If your clients are using Windows XP without SP2, then they have bigger problems than a few security warnings. But as Chrome now requires at least Windows 7, they won't see the warnings anyway.
> this is simply a [malware] disaster waiting to happen
Only if someone manages to break the signing and thus create a replacement file that works as an update with the same signature.
When downloading updates direct from MS today, they come over HTTP, not HTTPS. But the signatures are downloaded over HTTPS and checked against the patches fetched over the insecure channel. This avoids the overhead of encrypting the patches for each client while performing the same content validation a secure channel would give (remember TLS both validates that the content came from the correct server and hides the content on the network: the latter is irrelevant here, as anyone can download the patches already).
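The principle can be sketched as follows. (Windows Update actually uses Authenticode-signed packages rather than bare digests; this only illustrates checking insecurely fetched content against a securely fetched value.)

```python
import hashlib
import hmac

def verify_patch(patch_bytes: bytes, trusted_sha256_hex: str) -> bool:
    """Check a patch fetched over plain HTTP against a SHA-256 digest
    that was itself fetched over HTTPS (so the digest can be trusted)."""
    actual = hashlib.sha256(patch_bytes).hexdigest()
    return hmac.compare_digest(actual, trusted_sha256_hex)

patch = b"...patch contents fetched over HTTP..."
trusted = hashlib.sha256(patch).hexdigest()  # stand-in for the HTTPS-delivered digest

print(verify_patch(patch, trusted))                # True
print(verify_patch(patch + b"tampered", trusted))  # False
```

Any on-path tampering with the HTTP download changes the digest, so the patch is rejected even though it travelled in the clear.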
>The fact that you think computers can do more than one thing at a time, rather
> than spend a tiny amount of time doing one thing then switching to another one,
> shows a staggering lack of understanding
And when was the last time you used a computer without multiple CPUs/cores?
Current systems really do multi-task.
> the figure was bumped up a mere 0.1 per cent, from 0.5 to 0.6 per cent.
I know journalists are not famed for their mathematical skills but this is a technical publication so needs calling out.
An increase from 0.5 to 0.6 is a twenty percent increase. It may only be 0.1 percentage points, but 0.1 is a large proportion of 0.5.
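The arithmetic, spelled out:

```python
old, new = 0.5, 0.6  # market share, in per cent

pp_change = round(new - old, 10)                  # change in percentage points
pct_change = round((new - old) / old * 100, 10)   # relative change, in per cent

print(pp_change)   # 0.1
print(pct_change)  # 20.0
```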
> through 29m of solid ice?
This is glacial ice: transparent.
The ice we normally see is full of crystal flaws and is therefore optically translucent.
Given a few thousand years of serious compression (under a km of ice) these flaws are forced out and the ice becomes optically clear.
> 15 years ago a typical business workstation would have and realistically need perhaps a 20GB drive
I think you mean 25 years ago: at the start of the 90s, 40MB was large but increasingly common. A decade later – after the millennium – hundreds of megs, if not a gig, was normal.
> Microsoft documentation is well-known for being accurately unhelpful.
Usually it's a case of reference documentation not being helpful until you know the basics. Oracle takes this to the extreme: unless you already know a lot about the statement, the reference documentation is completely unreadable (often within the first few paragraphs they're discussing edge cases dependent on database version and/or option settings).
> Sigh, another example of how the Internet market has become just another way to screw money out of businesses.
How?
TP-Link chose to use different domains for those functions rather than just a URL (or IP address). That they failed to maintain functions they created is their failure.
It is nothing to do with the massive expansion of TLDs.
> Also, FWIW, Chrome for example ain't exactly svelte once you add up all the various processes' RAM use.
That will seriously over-count on virtual-memory-based systems, because on such systems there will be significant sharing between the processes.
On contemporary OSs memory usage is not a simple topic; there is no simple way to count the memory usage of even a single process. For a start, what do you mean by "memory usage": working set, commit, private allocation, address space allocation, or …?
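A toy illustration of the over-count (all numbers invented):

```python
# Hypothetical layout: four browser processes each map the same 120MB of
# shared libraries and renderer code, plus their own private heap.
shared_mb = 120
private_mb = [80, 60, 50, 40]

# Summing per-process resident sizes counts the shared pages once per process:
naive_total = sum(p + shared_mb for p in private_mb)
print(naive_total)  # 710 MB "used", by Task-Manager-style addition

# Physically, the shared pages exist only once:
actual_total = sum(private_mb) + shared_mb
print(actual_total)  # 350 MB actually resident
```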