Dear Holy Cow
That's fast...
3499 publicly visible posts • joined 23 Apr 2008
I think they have changed the top-up options, but they have said there's no plan to get rid of it. I'm sure there are some nuances that explain the inconsistencies between their actions and their statements...
Oyster is not quite like Suica. Suica became a way of travelling and also a way of buying coffee, newspapers, etc in and around stations, and spread out from there. It filled a void in a society which, at the time, was very heavily cash oriented. There was a similar but separate scheme in Osaka, now merged.
Whereas Oyster AFAIK is only for travel.
Suica was built into Japanese mobile phones a long time ago. When the West was developing NFC for phones, it was basically replicating the work already done by the Japanese and Suica. And we screwed it up; early NFC in the West was far too slow for things like ticket gates (something Oyster got very right). And flat-battery iPhones being unusable for payment means we're still screwing it up; Suica phones never had that problem.
From the article:
"Apple's internal security team gets it, but at the higher up, cultural level, they've all drunk the Apple juice, and believe their way is the right way, and they don't need any external help."
Well, at the higher up cultural level, they're $trillion right.
Ordinarily I'd caveat that by saying that the higher-up level was running a high risk, because they're only one very serious bug away from the entire empire falling down. However, things are so weird these days that I'm not convinced that even a major security bug leaking everyone's data and costing all Apple Pay users a ton of cash would do any actual net damage. What Apple's higher, cultural level is learning is that their customers really don't give a damn.
They've done this kind of thing before, of course; the very first iPhone came out with less than a day's battery life, amongst a sea of what we'd now call "feature" phones that all had two weeks of battery life. Did anyone care that their phone no longer stayed charged up? Nope. And then there was the iPhone 4, holding-it-wrong gate, and other foul-ups where iPhones were incapable of roaming between cells on the same phone network. Did anyone care about any of that? Nope.
And that attitude then bleeds over into Android too. If Android is a bit shonky in some area or other, it's not like switching to Apple is going to give you a shonk-free ownership experience.
About the only thing I can think of that'd now bring down the Apple edifice is a "Ratner" moment, but even then I'd not bet on it.
What I think is interesting is to compare this to Microsoft's acquisition of GitHub. MS went out to get control of GitHub, succeeded, and now has NPM under the same fold too.
Whereas no one seems to be rushing to get hold of GitLab. I'd have thought that Google at least would have pushed a bunch of cash to get GitLab, just because.
MS seems to have a fairly determined initiative to stir things up in developer land, and seems to be gaining traction in a variety of areas. No one else is.
Years ago I was given a metric by someone who knows their stuff: something like £1 million worth of server kit takes about another £1 million a year to run properly. That's power, staff, back-office staff, hardware refresh, land costs, waste disposal costs, cleaners, etc.
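That rule of thumb is easy to play with; here's a back-of-envelope sketch (only the £1M capex / £1M-a-year ratio comes from the metric above — the five-year horizon is my own assumed refresh cycle):

```python
# Rough on-prem TCO sketch using the rule of thumb above:
# annual running costs roughly equal the hardware cost.
capex = 1_000_000        # server kit, GBP
annual_opex = 1_000_000  # power, staff, refresh, land, disposal, cleaners...
years = 5                # assumed hardware refresh horizon

tco = capex + years * annual_opex
print(f"5-year TCO: £{tco:,}")                      # £6,000,000
print(f"Opex share: {years * annual_opex / tco:.0%}")  # 83%
```

Which is the point of the cloud pitch: most of the money is in the running, not the buying, and running costs are exactly what a big provider can amortise.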
Cloud is a way of spreading some of those costs across the cloud provider's very large customer base; all benefit from the provider's economy of scale. They make money by not passing all of that saving on to the cloud customers.
It's also potentially just another way of putting all one's eggs in one basket, and one that you're powerless to fix when it breaks. And they do break, just not very often. Google takes a day off because of some sort of snafu on their part, you're down until they've worked out what they've done wrong (unless you can equally well stand up resource on a competing provider). For this reason some people still prefer on-prem, or hybrid; some businesses find it works best for them like that.
What's interesting about the Newton is that it was a design theme that was clearly too difficult to implement on the available tech. The much more successful Psion 3 and 5 were significantly more usable devices (and in many ways have never been matched). Having a keyboard made up for a lot of the shortcomings of a crummy resistive-touch LCD screen.
Sony also did CD and MiniDisc. The latter is widely underappreciated in the West. In Japan it was very successful, and persisted way longer than you'd think into the era of HDD-based music players. It fitted the lots-of-music, on-the-move use case well enough that there was no pressing need to move on to something else.
Psion came up with an HDD-based music player long before anyone else. They didn't follow through, because the only HDDs available were 3.5 inch. The guy who did the work ended up at Apple. Psion also did a prototype satnav, and again it wasn't realistic to productionise it because of component sizes. The guy who did that one ended up at TomTom.
I guess the difference between Psion and Apple was that the former was too small / financially timid to drive component suppliers to do the expensive R&D to miniaturise parts. Whereas Apple had the financial clout / bravery to go for it. Psion were successful where they could take extant components and do something unexpected.
I don't think that Google Glass type things will ever become socially acceptable.
Yep. It's the best way to proceed. And it's amazing how much time you can save. I've been involved in all sorts of projects, and the best ICD I've ever created was a half hour phone call with their engineer describing what was needed. Six months later integration took another half an hour. We avoided about 4 weeks of documentation.
Ah well, the F-35A has a gun, but the variant that can hover / fly slowly is the F-35B, which doesn't have a gun. It can carry a gun pod, but that's centreline-mounted and, by the looks of it, would shoot the front undercarriage off (which I believe has to be down in hover mode).
So it'd be over to missiles and such, but a Stringbag flying slowly at near zero feet presents a difficult background for any modern target acquisition / tracking system: too much clutter, a speed differential too low for Doppler clutter separation to be effective (the Swordfish tootles along at about 60mph), and minimal thermal signature.
Also, supersonic aircraft often can't be supersonic at low level. The F-35A does Mach 1.6 at altitude. I wouldn't mind betting that the service ceiling of the Swordfish (16,500ft) is below the minimum altitude for supersonic flight for the F-35...
But generally the thing that makes the real difference is the speed difference between something as slow as a Swordfish and something fast like an F-35. The time between sighting the Swordfish and flying past it is too short for the pilot to do anything about it. That's the tactic helicopters use to outwit a fast jet: hover and scoot.
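The Doppler point above is easy to put numbers on; a rough sketch (the X-band radar frequency is my assumed example figure, not from the post — only the ~60mph Swordfish speed is):

```python
# Radar Doppler shift for a closing target: f_d = 2 * v * f / c
c = 3.0e8          # speed of light, m/s
f = 10e9           # assumed X-band fire-control radar frequency, Hz
v = 60 * 0.44704   # 60 mph in m/s (~26.8 m/s)

f_d = 2 * v * f / c
print(f"Doppler shift: {f_d:.0f} Hz")  # ~1788 Hz
# Ground and sea clutter has its own Doppler spread (wind-blown foliage,
# waves), so a return this close to the clutter ridge is hard to pull out,
# whereas a fast jet sits thousands of Hz clear of it.
```

The exact clutter spread depends on terrain and sea state, but the shape of the problem is as above: a slow target lives where the clutter lives.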
Regarding WW2 kit doing well or not today, it can be a bit non-obvious. It all depends on what comes up against what.
For example, as ships evolved, so have the weapons designed to sink them. That means that, for example, today's Exocet does not necessarily have the power to punch through the armour plate on something like an Iowa-class battleship. And it'd be quite hard for an F-35 to shoot down a Fairey Swordfish.
However, a modern torpedo would do extreme damage to an Iowa battleship, and an Apache helicopter or A10 would tear a Swordfish to bits.
Plus there's a lot that can be done in the line of sensors these days that renders almost all old WW2 military capabilities redundant.
When you consider how many iPhones have been sold with 4G in them, it's less than a dollar per phone. That's pretty reasonable for what is, at the end of the day, still some very hot high-tech radio technology.
Samsung etc may not be paying the company for access to their old IP. Giving the company the rights to the IP in the first place might be monetarily equivalent to a few years' FRAND fees.
What you describe is not possible under FRAND arrangements. When standards adopt IP, the patents behind that IP come under the FRAND umbrella for the standard, which essentially pre-arranges a cross-licensing agreement for anybody, even if they come along years later (like Apple did - except it seems Apple didn't want to pay a penny).
The holder of FRAND patents can't troll with them, even if they are a company that does not itself manufacture anything. They are by definition patents that describe technology that is being manufactured and sold and those patents were filed by the inventors of that technology, a fact recognised by all the players involved in setting up the standard, including the standards organisation.
Besides, you need some company somewhere willing to enforce FRAND patents, because that's what keeps the market place fair; why should some massive market player get a free ride with the standard's IP?
Having that company be independent of the originators of the IP, and have no other purpose, also sounds like a good idea; they'd be motivated to 1) collect the FRAND payments (enough to keep them alive as a company), and 2) go after those industry players who aren't paying.
A non-independent company may not want to spend the time doing so (which is bad for the fairness and health of the standard), and any cases they raise may get complicated by the accused claiming that it is a commercial tactic, not a FRAND matter.
If that company then goes on to develop new IP of its own, well that's nothing to do with the standard they'd been curating IP for.
When Companies Don't Play by the Rules
I recall that Rambus once tried to play dirty with patents related to SDRAM standards. That caused a 20 year mess.
Larousse did an excellent French and French/English dictionary application for Windows 3.1, the excellence being its thoroughly comprehensive content. Later, they went full web, and there was no price point at which you could get the same content; it simply stopped being available.
So, that Windows 3.1 application represents the peak. And it can still be run.
There's no discernible difference between functionality given by a plugin and it being baked in. VSC has so many plugins offered for download at one's convenience it's becoming difficult to keep it simply as a dumb editor.
Over in C++ land, VSC is doing a much better job than the last version of Eclipse CDT I tried.
I think we're seeing the proper value of a decent IDE like VS, when one examines the latest debacle to befall PyPI. Clearly there's a lot of devs out there who lack the wisdom of practising safe hex when they go fetch God knows what from who knows where and just run it. Something like VS plus NuGet does at least walk you through the whole thing and tell you "you're about to do something dodgy, are you sure?".
UK courts prohibit juries from being presented with probabilities. They're rounded to 1 or 0, and presented as fact.
So if the prevailing Court rules say "1 in a million = dead cert", the fact that 10 million passed that way is never a matter presented to or considered by the jury.
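The arithmetic behind that objection (the classic prosecutor's fallacy) is trivial, using the post's own figures:

```python
# Base-rate arithmetic behind the "1 in a million = dead cert" objection.
p_match = 1e-6           # probability an innocent person matches
population = 10_000_000  # the "10 million passed that way"

expected_innocent_matches = p_match * population
print(expected_innocent_matches)  # 10.0
# A "1 in a million" match drawn from a pool of 10 million is expected to
# occur about 10 times by chance alone - a long way from a dead cert.
```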
Indeed yes. I often thought back then that Intel were leaving FMA out of x86 simply to make Itanium look better than it really was.
They did this in other aspects of x86 too. In the Nehalem architecture they added a bunch of maths registers that couldn't be accessed from 32-bit code. This made 32-bit code look a lot slower than 64-bit code, even though those registers could have been added to the 32-bit ISA without breaking existing 32-bit code. It's all been a bit artificial. OK, so no surprises there, but play games like that too much and someone like AMD can come along and, overnight, make one look foolish...
One of the reasons some people liked Itanic was because it had a fused multiply-accumulate instruction in its vector core, which is the building block for a lot of signal processing routines and a fair few image processing filters. x86-64 didn't. (PowerPC did.)
That kept Itanium ahead, then Intel finally added the instruction to Xeons in about 2011, after which there was zero reason for Itanium.
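The multiply-accumulate in question is the inner loop of things like FIR filters; a plain-Python sketch of that loop (on an FMA-capable ISA a compiler fuses the `a * b + acc` step into a single instruction with one rounding, which is where the speed and accuracy win comes from):

```python
# FIR filter inner loop: the multiply-accumulate pattern that FMA
# executes as one instruction (multiply + add, single rounding).
def fir(signal, taps):
    out = []
    for n in range(len(taps) - 1, len(signal)):
        acc = 0.0
        for k, h in enumerate(taps):
            acc = h * signal[n - k] + acc  # the a*b + c shape FMA fuses
        out.append(acc)
    return out

# 3-tap moving average as a trivial example
print(fir([1, 2, 3, 4, 5], [1/3, 1/3, 1/3]))  # ~ [2.0, 3.0, 4.0]
```

The same shape turns up in dot products, matrix multiplies, and convolution kernels, which is why one missing instruction mattered so much for HPC workloads.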
I have to say that, considering where they were immediately after launch, getting it there and docked is quite a feat for the flight controllers, crew, their support and other participants in the ground->ISS journey. Hats off, beer well deserved.
The fact that their talents were required in this way is a great pity, considering what it indicates.
Iridium is more than still up there, they launched a new constellation recently. They are going to be pushed further into their niches, because their data offering will be swamped by Starlink and all the other upcoming systems.
But those niches are really quite important, e.g. the Pacific Ocean tsunami warning system relies on one of Iridium's unique services, which no one else is replicating. There might come an awkward point where they need to continue being funded for the sake of those niches, and the US DoD gets to pay for it (a bit like they run GPS and let us use it for free).
I agree with all that but I gather that the stumbling block for the phased arrays has been the phase shifters and the manufacturing cost of the elements. There has been a lot of materials research in recent years... The maths for a phased array is easy and does not have to be done very often.
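The "easy maths" really is just a per-element phase offset; a sketch for a uniform linear array (the Ku-band frequency, half-wavelength spacing, and 30-degree steering angle are assumed example values, not from the post):

```python
import math

# Per-element phase for steering a uniform linear array:
#   phi_n = -2*pi * n * d * sin(theta) / lambda
f = 12e9                  # assumed Ku-band frequency, Hz
lam = 3e8 / f             # wavelength, m (25 mm)
d = lam / 2               # half-wavelength element spacing
theta = math.radians(30)  # steering angle off boresight

phases = [(-2 * math.pi * n * d * math.sin(theta) / lam) % (2 * math.pi)
          for n in range(4)]
print([round(math.degrees(p), 1) for p in phases])
# [0.0, 270.0, 180.0, 90.0] - a constant -90 degree step per element
```

A few multiplies per element, recomputed only when the beam moves: the hard part was never the sums, it was building thousands of cheap, fast phase shifters.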
It's going to be interesting to see who does well. Starlink and OneWeb are inelegant and inefficient systems, spending 60% of their time over mostly unpopulated water. But they could be cheap overall.
However, the very high throughput GEO sats can be launched in very low numbers and immediately offer a comprehensive service, which is a good way of offsetting their high unit cost. Slightly worse latency is an issue, but not for most users.
I'm not convinced that either approach will dominate, but it does look like consumers will be getting a whole lot of good choices. Which is good news unless you're a telecoms company in the USA with a local monopoly...
The USA is unusual in having a lot of people with money served by a worse-than-third-world private comms infrastructure. The upcoming services from Starlink, OneWeb, and the newer GEOs will be able to offer a competitive alternative, without any additional ground infrastructure. That alone is a market worth chasing.
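On the latency point above, the physics is simple enough to sketch (altitudes are standard published figures: geostationary orbit, and a typical Starlink shell):

```python
# One-way propagation delay, ground to satellite, straight overhead
# (the best case; slant paths are longer).
c = 3.0e8            # speed of light, m/s

geo_alt = 35_786e3   # geostationary altitude, m
leo_alt = 550e3      # typical Starlink shell altitude, m

geo_ms = geo_alt / c * 1e3
leo_ms = leo_alt / c * 1e3
print(f"GEO one-way hop: {geo_ms:.0f} ms, LEO: {leo_ms:.1f} ms")
# A full GEO round trip (up, down, and back again) adds roughly 480 ms,
# noticeable for voice and gaming but largely irrelevant for bulk traffic.
```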
As ever, bug costs vary according to system type. A bug on some unimportant website costs not a lot, probably. Whereas a bug in an airliner flight control computer costs $billions and kills people. Just ask Boeing (the MAX crashes being attributed to faults that were not fixed at the design stage, despite the coding company querying the design).
What's changed sufficiently to suggest that FAA regulations are now overbearing? Does rocket fuel make less of a bang when it explodes? Do light aircraft now bounce harmlessly off tall unmarked towers? Is the environment at less of a risk? Are billionaires more entitled to ride roughshod over regulation these days?
The idiotic thing about Musk is that he picks unnecessary fights, or stages unnecessary stunts, or makes impetuous but incorrect decisions. All he's doing is risking being closed down.