Hmmm, Rust eh?
Keeps cropping up doesn’t it?
I've just listened to an episode of BBC Radio 4's "The Infinite Monkey Cage", which featured Chris Hadfield and a number of astronauts dating back to Apollo. It was an excellent programme in general, but the point relevant to Borkage was when the Soyuz needed rebooting on the launchpad prior to take-off. Chris Hadfield described how all three of them were lying there, waiting, things not going well, being asked to press the relevant button.
Some moments later, the Windows XP just-booted jingle sounded out inside the capsule. A most notable sound to hear, in a rocket, with hundreds of tons of rocket fuel just below one's seat, about to be lit...
Happily all went well thereafter.
Not the ideal place to see a Blue Screen of Death. Soyuz is fairly old, but it has been upgraded in all that time, to some extent. I also wondered if they used a retail or OEM licensed version...
Who knows. Galileo as a programme was fragmented due to various nations all wanting a piece of the pie. That's never efficient - N x personnel departments, N x overheads, N x learning curves, N x translations of the requirements documents, etc.
We saw a result of this when Galileo was offline for a week a while back; the reports were masterfully obscure (a bad sign in its own right), but it seems there was a lot of flapping going on between the various parties involved.
A single coherent programme for a new GNSS system ought to be a lot more efficient, and therefore cheaper. But you never know. We do have all the required expertise in country (even a launch capability soon it seems), and favourable locations around the world for ground stations. Properly marshalled, it ought to go well.
Insert whatever caveats seem appropriate into the preceding text as necessary!
Value for money? Well, that depends. This BBC article from 9 years ago https://www.bbc.co.uk/news/science-environment-12668230 paints a pretty grim picture of what would happen if GPS (or Galileo now) stopped working. The UK's share of that very large annual loss will only have grown in the last 9 years. Spending single-digit billions to mitigate that risk seems well worthwhile, especially as Galileo's governance doesn't inspire confidence in it as a robust alternative, regardless of our ejection from (or rejoining of) it.
From the article:
That said, operating a navigation system in the Low Earth Orbit of OneWeb (rather than Medium Earth Orbit of something like Galileo) is feasible, although fiddling with the payloads of the existing spacecraft and dealing with the ground infrastructure is, at best, technically challenging.
Anything “space” is technically challenging, especially if you want it to work reliably, but given that the UK has really quite a large and successful satellite industry we’re well equipped to do this. Tweaking a design seems to me to be no big deal.
If that’s the only substantive objection to the idea, you can relax.
And even if it delivers a UK specific assured GNSS and global comms solution that only the MoD bothers to use, £500mill is very cheap. Additional use beyond that, commercial or consumer, is added benefit.
Having a significantly different design of GNSS compared to that of GPS might limit commercial take up, but it also changes the failure modes of the system; thus it could be a good backup to GPS. If it's properly run (which Galileo isn't), then there's a whole bunch of people critically dependent on GNSS who'd be relieved to have a diverse alternative, and would likely pay (possibly in benefits in kind) to access it.
What seems to have been overlooked by many commentators so far is that we're all critically dependent on a functional GNSS for our comms and our economy; life without one gets pretty ugly pretty quickly, such is the level to which GNSS technologies have been integrated into so many things. Putting a backup system in place is probably a wise move. Galileo is looking like it's a dud long term, GLONASS is moribund and Russian, both are GPS clones vulnerable to the same failure modes as GPS, and the other smaller systems don't cover Europe.
Put it this way. Presently a whole shit load of really important stuff ultimately depends on GPS and GPS alone (Galileo being poorly assured) which, in case clarification is required, is available around the world purely at the gift of Donald Trump...
Another aspect of this is that £500million to give the UK space industry (which I’m not part of) a big step up is very cheap. If it gets the UK into the habit of launching things for itself, there’s no telling what creative idea might spring up after Oneweb. That might prove far more valuable in the long run.
Given that we are a nation of shed-based inventors, I’d say that it’s probably worth the bet. Especially as it’s small beer in the overall scheme of public finances.
Ha! They might at that...
Torvalds seemed quite open to the idea of using Rust, which I think is most encouraging. I've been hoping that thinking on such a move (but on a bigger scale) would start. Using Rust ought to deliver development time savings, which would make better use of a maintainer's donated time. And it ought to eliminate a bunch of bug classes, so that should be another long term time saver.
I think the AMD Intel thing exists as a result of an agreement to share after a bunch of court cases and AMD being an original x86 licensee. I think. It’s kinda complicated.
It’s also possible that AMD would lose their x86 license if they’re taken over. If so, Apple wouldn’t get what they’d need. Whether they’d actually need that, or could just do a pure x64 chip (if such a thing were possible), I don’t really know. Going an ARM route is probably cheaper anyway...
From the article
Similarly, will Apple offer discrete graphics on par with the current Radeon cards found on its premium laptops and desktops?
There’s no particular reason why they can’t just keep using Radeon cards, surely? It’s just a matter of getting drivers rebuilt, mostly. They’re still going to have a PCIe bus, and the bulk of OSX will simply recompile too.
Also, isn’t there a GDPR issue with something like this being scrapped? As I understand it, data can be collected, processed and stored only for a specific purpose. If something like this is scrapped after a short time, there wasn’t really a purpose for that data to be collected in the first place, was there?
At the very least, Google’s T&Cs are going to have to start saying “we’re experimenting” as the primary purpose for snaffling data, not “we’re providing a service”...
Radio waves in free space travel at the speed of light - approx 1 foot per nanosecond. To be able to tell the difference between a proximity of, say, 3m and 2m, the system would need a timing resolution of 3ns or better.
That's very difficult to achieve, especially via cheap sub-dollar Bluetooth chipsets that were never intended for the purpose. Even doing it with bespoke kit would be difficult.
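The back-of-the-envelope arithmetic above can be sketched out; this is just the constants, nothing more:

```python
# Back-of-the-envelope: the timing resolution needed to resolve a 1 m
# difference in distance by radio time-of-flight.
C = 299_792_458.0  # speed of light in free space, m/s

def time_resolution_ns(distance_delta_m: float) -> float:
    """Time-of-flight difference, in nanoseconds, for a given distance difference."""
    return distance_delta_m / C * 1e9

# Telling 3 m apart from 2 m apart means resolving a 1 m difference:
delta = time_resolution_ns(3.0 - 2.0)
print(f"{delta:.2f} ns")  # roughly 3.34 ns - hence the ~3 ns figure
```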
The whole thing is in danger of becoming pointless anyway, if not downright dangerous. There are quite a number of fairly respectable scientific voices pointing out that the 2m separation is totally unfounded in the first place. If they're right, then this kind of contact tracing app is potentially going to be worse than useless.
It looks pretty much like it's droplets that are the transmission method. If so, then the distance that matters is going to depend on local air currents, evaporation rates, ambient UV, and whether or not you're upwind or downwind from a carrier. So it's probably best to be at the front of the first carriage on the tube train.
It's sh$%t but highly profitable so why walk away from it ?
That's a subjective opinion. However, if the enormous pile of money that continues to be amassed by MS is anything to go by, an awful lot of people have continually disagreed with you for many years even after MS were stopped from imposing anti-competitive license conditions on hardware vendors.
I think @martinusher needs to read some of Alan Turing's papers. You seem to be failing to understand the fundamentals of computing. Any Turing complete machine can emulate any other Turing complete machine, given enough memory. This, fundamentally, is the reason why emulation and virtualisation works. So any computer made at any point in the past 70 years or so can, with enough memory, emulate any of today's machines, the software running on them and produce the same outputs. Though you might have to wait a while for some of them to do so... Modern CPUs from Intel and AMD just happen to provide hardware support for efficiently doing this for x86/x64 machines.
It's also the reason why a modern CPU from Intel and AMD can still run x64 / x86 opcodes. They're "emulating" an x64 or x86; the execution units in their cores don't actually have a scooby how to run those opcodes, they have to be translated first.
What on earth do you mean by "POSIX compliant kernel"? There's no such thing. POSIX merely mandates that certain APIs are implemented, and certain tools provided. The whole point of POSIX is to isolate developers from the underlying kernel. Any kernel ever written is, given enough memory, capable of supporting those APIs (though again, you may have to wait a while). There is nothing magical about the Linux system calling interface that makes it "POSIX", in the same way there's nothing magical about any kernel commonly found in any unix-like OS that makes them "POSIX" either.
And there's nothing about the NT kernel that precludes a POSIX environment being plonked on top of it either. As you point out, MS has had several goes at this, the most recent being WSL v1, which did it by implementing the Linux system call interface, enabling the POSIX environment provided by glibc and the various utilities packaged up into a distro by Ubuntu to run without recompiling.
Also, Windows is generally quite happy these days with '/' as a path separator. Though to be pedantic about it, the representation on our screens of the byte sequence which, in a path string, is generally taken to mean "path separator" is purely arbitrary. You like '/'. Microsoft chose '\' and is happy with '/' too. Japan is quite happy using '¥', as a result of early computers being stuck with 8 bit code pages and there not being enough room to accommodate both '\' and '¥' at the same time. All that matters is that, somewhere down the API / kernel stack, there's agreement as to what to expect in a path string.
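The '\' vs '¥' story really is just one byte, 0x5C, wearing different glyphs depending on the code page; and the "all that matters is agreement down the stack" point shows up in Python's own path handling, where a Windows-style path parses identically whichever separator you feed it:

```python
# Byte 0x5C is backslash in ASCII; the Japanese JIS X 0201 tradition assigned
# the same byte slot to the yen sign. The byte in the path string is identical,
# only the rendering differs.
sep = b"\x5c"
print(sep.decode("ascii"))   # prints a backslash
print(ord("\\") == 0x5C)     # True

# And Windows-style path parsing genuinely doesn't care which separator you
# use - '/' and '\' both split the path into the same parts:
from pathlib import PureWindowsPath
p = PureWindowsPath("C:/Users/me\\docs/file.txt")
print(p.parts)  # ('C:\\', 'Users', 'me', 'docs', 'file.txt')
```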
Also, your suggestion that the Windows kernel is improperly designed is entirely subjective. There are many clever aspects to the current kernel, and it has some security tricks that Linux doesn't have. That's not to denigrate Linux - it's just different design choices.
However, what is possibly going to happen in the next few years is that Microsoft will start reimplementing bits of Windows kernel using Rust instead of C. That would be a very smart thing to do as this, thanks to the syntactical ideas in the Rust language and the maturity of the compiler, makes it very easy to not screw up memory usage. That, for a team implementing a kernel, is a very useful trait.
If Windows starts heading down the Rust road, and Linux doesn't, this will eventually show up in a clear demarcation between the two (and indeed any other OS using Rust, which may include Google's Fuchsia and Redox (brand new)), in that Windows' kernel would end up being devoid of memory based vulnerabilities, assured automatically by the Rust compiler, whilst Linux won't. I doubt that this will particularly affect which one gets used by end users, but Rust is clearly the modern way to go, and it'll be hard to get exploits on Windows. As for systemd, being written in C - that is probably going to cost dearly in years to come (arguably it already is).
Hmm well, I think you'll find that the ancestors of modern day Windows (which can include VAX / VMS if you really want to be pedantic about it) were doing fine work years before Linus stopped wearing short trousers.
Also, a point of terminology. Linux is an operating system kernel, not an operating system. RedHat, Ubuntu, are operating systems.
Yes, that's the successor to the K machine. The point is that an ARM based chip with a big SIMD unit on it is still a big chip with billions of transistors. Take the SIMD unit out of an 80W XEON and replace the X64 cores with ARM cores, and (all other things like cache sizes / architectures and silicon processes being equal), it's not going to be a 2W chip all of a sudden. Excising the X64 decoders and pipelines would save a good chunk though.
GPUs are good but suffer from being at the end of a bus. The time taken to load data and unload results can be pretty bad, and you have to have a meaty enough CPU to host it in the first place. For supercomputers, and high power embedded processing on aircraft, the "TeraFLOPs per cubic foot" measure is actually quite important. What is quite often found is that, by the time you've stacked up a suitable CPU to feed a GPU, and then overcome the load/unload time with additional hardware, the TeraFLOPs per cubic foot isn't very competitive compared to having just a CPU with a lot of internal SIMD units. AMD's APUs are quite interesting because they overcome this.
This shows up in the real world performance of supercomputers. What made the K machine special was that the real world accessible performance was very close to the benchmark performance that put it at the top of the 500 list, due entirely to the fact that data movement units and SIMD units were very closely integrated inside the SPARC based CPUs Fujitsu built. They're doing the same again, just using ARM instead of SPARC. There's been plenty of GPU based supercomputers that haven't really delivered performance outside of their benchmark scores, or their intended problem type.
For the vast majority of software this is literally just a build option. For things which really need specific chip instruction sets, then that's where you get into interesting territory - a simple rebuild will yield poor performance... probably.
Absolutely. If software has, either in itself or in any of its dependencies, gone and made heavy use of AVX SIMD extensions, then ARM is going to be something of a disappointment. AFAIK there's no ARM that has a vector unit anywhere near as grunty as Intel's AVX.
This was a problem last time Apple switched architectures. Software like Photoshop had been heavily optimised on PowerPC, making very effective use of that architecture's Altivec SIMD vector processor. At the time, Intel were all over the place (MMX, SSE, lord knows what else), and Photoshop for Intel was substantially slower at some tasks. It took quite some time for Intel to finally catch up with Altivec.
[Intel didn't fully match Altivec until around about 2012, which is when I think Intel finally added an FMA instruction to their x64 line up. Prior to then they'd had an FMA instruction on Itanium processors, and the suspicion was that they weren't putting one into x64 in a bid to give Itanium an artificial edge. By 2012 they'd given up on that, for they'd finally read the writing on the wall which customers had carved in letters 10 feet high and 4 inches deep.]
Another aspect is that there's way more to it than simply shoving in a comparable sized SIMD unit alongside an ARM core. To keep such a processing monster fed you need a very extensive cache system. So you end up with a chip that's pretty much as monstrous as Intel's are (in terms of transistor numbers). The savings come from not having all the transistors dedicated to pipelining x86 opcodes which will save silicon space / current / heat. Also, Apple can get ARMs made on TSMC's 5nm process. In contrast there's no sign yet that Intel have mastered their 10nm process (equiv to TSMC's 7nm), let alone worked out how to get smaller still.
Apple could have got x64 processors from AMD, which will get made on TSMC's 5nm process; in fact, they could buy AMD with some loose change. If they're not doing that, and are going all-in on their own ARM; well I guess they've got the money to do that, and it would allow them to forge their own path from here on.
From the article:
This is something Microsoft largely failed to accomplish with Windows RT**, its first foray into Windows on ARM*.
And fail they did, largely due to a lack of imagination. Before then, MS had shown at a trade show a full-fat Windows 7, plus Office, recompiled for ARM and running on a dev board, working just fine (if a little slowly). They also reportedly bought an ARM license. The expectation was that they'd do what Apple are now rumoured to be doing. Had MS decided to go for it back then, either assembling a silicon dev team or partnering with someone who'd already got one (Qualcomm?), they could easily have pulled it off by now.
I know, I know, Wintel etc. It would have severely disappointed Intel, and that might have had consequences for MS. However, let's consider Intel. They could have licensed ARM ages ago (again, or still, depending), and could now have a healthy Intel SuperARM line as well as a honking great x64 server line. They might even have a presence in mobile devices had they done that. Instead, they've left the field wide open to Apple, Samsung and Qualcomm. Even AMD are dabbling with ARM.
All it needs to do is keep my fucking beer cold!
There is an advantage in a fridge that suddenly goes on the fritz. It's a perfect excuse to drink all the beer, all at once. Can't let the stuff warm up, after all.
On the same theme, clearly a smart fridge needs to be able to hold a lot of beer. There's no point in such an opportunity arising if all that's in the fridge at the time is a forgotten can of Fosters and 2 bottles of Corona.
From the article:
Dell offers supported RHEL and Ubuntu on its XPS13 and Precision mobile workstations, plus the Precision tower workstations.
They don't make a song and dance of it, but if you dig around deeper into their documentation you can see that Inspirons are also rated to support Linux. See Specs for an Inspiron 7591. Says "Ubuntu".
And I can report that Ubuntu did indeed install smoothly, even through secure boot. The only trick was having to disable Intel RST first, and run the SSD in AHCI mode. Other than that - seems flawless.
Teslas do have a radar, but not an imaging radar. They also do Doppler processing. So they can tell how far away something is and how fast it's moving relative to the vehicle, but it can't say in precisely what direction that something lies.
This is a problem if there's a stationary obstacle, because the radar can't really tell the difference between a stationary obstacle directly in front or a signpost on the side of the road. So it ignores stationary objects in its collision avoidance / mitigation algorithms. It's therefore entirely reliant on the video processing it performs. Which, clearly, is still inadequate.
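The range-rate point can be sketched numerically. Doppler processing recovers relative speed from the frequency shift of the return; the 77 GHz carrier below is a typical automotive radar band, assumed for the example:

```python
# Doppler shift -> relative (closing) speed, the quantity this kind of
# radar measures well. 77 GHz is an assumed, typical automotive radar band.
C = 299_792_458.0   # speed of light, m/s
F_CARRIER = 77e9    # carrier frequency, Hz (assumption for the example)

def relative_speed(doppler_shift_hz: float) -> float:
    """Closing speed in m/s from a measured Doppler shift (two-way path)."""
    return doppler_shift_hz * C / (2 * F_CARRIER)

# A stalled car ahead and a signpost beside the road are both stationary,
# so both return the same range-rate (the car's own speed). Range + Doppler
# alone cannot separate them; that takes angular resolution the radar lacks.
print(f"{relative_speed(5133):.1f} m/s")
```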
Lidar is better, but not that much better. It attempts to build up a 3d map of the local terrain, using the time-of-travel of pulses of laser light. Because a laser beam is very narrow (whereas a radar beam is quite broad, unless you have a large antenna), a Lidar can more or less paint an accurate high resolution picture of the surroundings. That's fine if the obstacle immediately in front reflects back towards car, but if it were a mirror at an angle or, worse, painted in Vanta black, then a Lidar wouldn't see it properly either.
It's difficult to fool a human brain with mirrors, or indeed anything "odd". A human will almost always spot the mirror. The Pepper's ghost illusion relies largely on making it difficult to see the edges of the mirror used in it.
One thing that I'd like to try (in safe, controlled circumstances obviously) is letting off a glitter bomb in front of a car with Lidar; that should be spectacularly confusing for it. If we do ever get self driving cars with Lidar en masse, it won't take kids long to realise they can have a lot of fun on bridges above highways with nothing more dangerous than a tube of glitter. It'd be an improvement on the bricks they currently throw.
This is why I like technologies like ASN.1, and well thought out JSON and XML. In all of these it's easy to be very comprehensive in the schema language as to exactly what constitutes a valid message / PDU, and therefore what does not. All of those schema languages allow you to define valid value ranges for PDU members, and valid array sizes. With the right tooling, PDUs can be trivially validated before sending and whilst receiving. Makes it very, very easy to spot when someone is breaking the agreed specification! XML is often let down by the tooling, JSON validators AFAIK work properly, and good ASN.1 tools that generally do everything properly can be found.
Anyone using GPB for a standardised interface is likely missing out on some tricks...
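A minimal sketch of the validate-before-use idea described above, with a hand-rolled checker over a hypothetical message shape (the field names and value ranges are invented for the example; real deployments would use a JSON Schema validator or ASN.1 tooling):

```python
# Validate a JSON PDU against value-range and array-size constraints before
# acting on it, as the schema languages discussed above allow you to express.
import json

SCHEMA = {
    "temperature": {"type": float, "min": -40.0, "max": 85.0},
    "samples":     {"type": list,  "max_len": 16},
}

def validate(raw: str) -> bool:
    """Return True iff the message satisfies every schema constraint."""
    try:
        msg = json.loads(raw)
    except json.JSONDecodeError:
        return False
    if not isinstance(msg, dict):
        return False
    for field, rule in SCHEMA.items():
        if field not in msg or not isinstance(msg[field], rule["type"]):
            return False
        if "min" in rule and msg[field] < rule["min"]:
            return False
        if "max" in rule and msg[field] > rule["max"]:
            return False
        if "max_len" in rule and len(msg[field]) > rule["max_len"]:
            return False
    return True

print(validate('{"temperature": 21.5, "samples": [1, 2, 3]}'))  # True
print(validate('{"temperature": 999.0, "samples": []}'))        # False
```

Anyone breaking the agreed specification gets caught at the boundary, rather than somewhere deep inside the application.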
With a resistor.
As an emergency dump load it’s usually a sodding great hunk of bus bar strung up as a loop somewhere. It’s less necessary with gas turbine plants, which can be more or less instant off, but with steam turbines there has to be somewhere to dump the power if the grid connection goes offline, otherwise things start breaking apart.
Didcot A’s was, from distant memory, a bus bar all around the outside of the turbine hall.
But for a five day test? No idea. Perhaps the bus bars are cooled somehow.
The flexibility and responsiveness are indeed certainly the big plus points.
Potentially any airport, yes. In practice though you have to ask nicely before anyone will let you fuel up the rocket on their airside tarmac and take off from their runway carrying what, with only a tiny amount of squinting, looks very much like a very large bomb slung under the wing.
People don’t like even the remotest chance of that kind of thing being dropped on top of them!
And this may remain essential for those who have impaired or no vision at all. It seems that Wayland has made it nearly impossible to have a “screen reader” that reads out the text in an application window. I’m not sure how commonplace such tech was with XServer, which was at least architecturally capable of supporting such things, but Wayland isn’t suitable at all.
Basically an XServer, which does all the text rendering for applications, is in a position to support screen reading; it could easily pass the text over to the reader for text-to-speech conversion. Wayland forced all applications to do their own text rendering, so the assistive technology has to be built into each and every application. Oops.
Gnome are building something into their desktop, presumably through the GTK libraries. But if you run a non GTK app, you’re screwed.
If I understand correctly, historically Windows has taken a slightly more macOS-esque approach of assuming apps will redundantly install various versioned copies of libraries in their local Program Data (or .App) directories, rather than orchestrating shared ones (which bigger Linux distributions have the luxury of being able to coordinate). Would that make this more of an MSI-on-steroids rather than a "traditional package manager"?
Not sure myself yet.
One of the things I dislike about the way it's done on Linux is that it places a heavy emphasis on a distro adopting a piece of OSS software and including it in their repositories. Ok, for the able Penguinista it's perhaps not so difficult to use cmake or, heaven forbid, ./configure to build from source. But it's at that point where you've pretty much lost the un-*nix-savvy user. Even if a software developer chooses to go to all the faff of maintaining packages for all the myriad different systems out there, it's still beyond the un-*nix-savvy user to add the software developer's repos to their apt, or yum, or dnf, or snap, etc setup.
The duplication of libraries thing: in my view it's swings and roundabouts. Just this afternoon I've been needing an older version of the lexer generator Flex. And, with Ubuntu / apt, finding and installing an older package version is an absolute nuisance. Microsoft's way does at least make this kind of thing totally trivial; you just uninstall the new version (though even that's not totally necessary, it's down to the app and how it manages its install), install the old, et voila, you're off and away.
Also, with a large distro-centric package repository there's a need for each and every application within it to be built against the versions of libraries that are chosen for it. This has never been achieved 100%; dig around in some of the more obscure corners of a repo and its generally quite easy to find something that, though all dependencies are claimed to have been met, has been packaged up with the wrong dependencies for that particular version of the app.
Also, I'm just not convinced that storage is, generally speaking, sufficiently scarce to warrant a package management system that strives to ensure that the bare minimum of space is taken up by shared libraries. It's not exactly resulting in a modern full-fat Linux distro being anything less than a few GB installed. With a lot of software these days what takes up space is all the pretty bits - bitmaps and such - and they're generally not shared between applications anyway. Mass market IoT stuff in principle benefits on price from efficient use of storage space, but then again, how much of that stuff is actually updated ever, anyway?
Anyway, it's sounding like MS have realised that getting rid of application installers ("Use the MS Store") was a bad idea, and is undoing that somewhat. Good.
From the article:
Among the best features of Linux is the availability of package managers, such as Debian's Apt, that can install, remove and manage dependencies for applications from the command line.
Also among the very worst features of Linux distros is the availability of FAR TOO MANY DIFFERENT PACKAGE MANAGERS.
That is all.
And I strongly suspect that TSMC will be under some pressure from the Taiwanese government to keep the core vital technologies in Taiwan. Having the USA worried about its strategic supply being in Taiwan is a reason for the USA to be militarily and diplomatically interested in maintaining Taiwanese independence. So I won’t be surprised if this new Arizonan factory takes quite a while to get going and is obsolete before it does.
There is a grey area, and it's important. Gathering information about what someone does on your platform so you can make recommendations to them on that platform usually doesn't draw much ire. For example, I don't care if Amazon records a history of things I buy and uses it to suggest products while I'm logged into the same account.
Yes, and this is a well established, acceptable thing for a seller to do. Back in the old days when we had local grocers and butchers who'd deliver, their account book would be a record of who had bought what, and when, and for how much. Amazon does deviate from that to some extent, being also a market place and payment broker.
Wrong. We pay for phones. It's called the purchase price and it's quite high.
I'm afraid that's not quite right any more, at least not always so. For example, Facebook pay money to phone manufacturers to have the Facebook app pre-installed, subsidising the cost of the phone. Apple, who are making more money out of services these days and less from selling hardware, will price their hardware according to the revenue they can expect to get from the services they build into the hardware.
Ever wondered why an Alexa is so cheap?
Basically, manufacturers now are busily looking at excuses to internet connect everything, because then they might be able to monetise the "thing" that's been sold beyond the original purchase. An IoT security system for a house is a dream for the manufacturer. It tells the manufacturer when the house is occupied or not. Such data, aggregated, is valuable information for TV advertisers, energy companies, etc.
Basically, anything that can have an excuse to be Internet connected and is somehow desirable to the householder in its own right for being connected, is primed to be laden down with all manner of sensors to determine who's in the house and when. Give it a WiFi and Bluetooth sniffer, IR motion detect, audio sensing, the lot, regardless of whether those help it do its overt function. Put weight sensors under the shelves of fridges, so that you can tell when the weekly shop has been done...
You're missing the point.
The article also says that ID changing is only effective long term if it's done regularly and often. The problem is that other information can be used to eventually associate multiple advertising IDs with a single identity, so long as the ID isn't changed too often. On Android you have to have an advertising ID, and Google makes sure that it is always unique, which is what makes such an association possible. Google doesn't control whether other parties process data to make that association, and in fact is constructively assisting them by giving them access to a guaranteed-unique advertising ID. Doing such processing is probably illegal, and Google is choosing to be complicit in it.
Apple's system (if reported accurately here) means that you can have an ID that is not unique (all zeros), and therefore other processing cannot associate them together into a single identity.
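The linkage mechanism can be sketched in a few lines. The datasets and IDs below are entirely invented; the point is just that a stable, guaranteed-unique ID is a join key across otherwise unrelated datasets, while an all-zeroed ID yields nothing to join on:

```python
# Why a guaranteed-unique, stable advertising ID enables cross-dataset
# association, while an all-zeroed one doesn't. All data here is made up.

ZEROED = "00000000-0000-0000-0000-000000000000"

app_a_logs = [("ad-1234", "fitness"), ("ad-5678", "news"), (ZEROED, "games")]
app_b_logs = [("ad-1234", "mortgage search"), (ZEROED, "holiday search")]

def link_profiles(a, b):
    """Join two independent logs on advertising ID, skipping zeroed IDs."""
    profiles = {}
    for dataset in (a, b):
        for ad_id, activity in dataset:
            if ad_id == ZEROED:
                continue  # zeroed IDs collapse together and can't be aggregated
            profiles.setdefault(ad_id, []).append(activity)
    return profiles

print(link_profiles(app_a_logs, app_b_logs))
# ad-1234's activity across BOTH apps becomes one profile; the zeroed
# entries never do, however many datasets you pile up.
```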
Ha! Yes, I recently had to run some old DOS software (had to - well, really just for nostalgia), and needed to run up an XP VM for that purpose.
VMWare Player still offers "MSDOS" as a guest type.
I think AMD's decision was fairly sensible; 16 bit modes had to go, and if not then, when? And of course they chose wisely by keeping a 32bit mode too. I expect that'll go too in due course, and then another generation can mourn the passing of the technology of their youth.
Another thing I forgot to mention in my last; I think MS has done a fantastic job of backward compatibility. To be running (quite happily) a piece of 1990s software on a modern, maintained version of Windows 25 years later is really quite an achievement.
32bit Windows 10 does have a use - it'll still run Windows 3.x software.
Ok, so that's a very rare requirement these days.
A relative does use 32bit Windows 10 for this reason though. Larousse, the noted French dictionary publishers, did a fantastic Windows 3 edition of their English / French dictionary way back in the 1990s. It had literally everything in it; massively comprehensive.
Then up popped the Internet, and it went online. Trouble was, and still is, that the on-line edition (even the paid-for one) is massively lacking in content in comparison to that Windows 3.1 software. So this relative has been keeping a 32bit Windows PC running ever since, simply to keep that software working.
Looks like I'm going to have to spin up a VM, running XP or 32bit Win 7, or something.
<BTW, it's not me down voting - it really has been exceedingly rare to see 32bit windows for, what, 15 years?>
Yes indeed, it seems as if Intel have hit a brick wall on their silicon processing know-how.
I'd read that Intel's 10nm is akin to TSMC's 7nm. However, if TSMC move smoothly down to 5nm and Intel still can't make their 10nm work, there's a good chance that the US would fall permanently behind the curve, forever one step behind.
What About the Shareholders?
One might read into this situation the possibility that Intel has told the US gov that they've hit a brick wall and they can't see a way of resolving it. If so, I wonder if their shareholders would like to know that too???
Try out the Lego Duplo Bluetooth enabled train app. It requires location permission on Android to be allowed to use Bluetooth. That app goes to significant lengths to explain why it needs the permission, almost apologetically so. And it says that it doesn’t actually use location data whatsoever.
It’s just another way for Google to try and force Android users to share their every movement with Google. Google’s assumption likely being that Bluetooth is often used in cars, and they want car location data to keep Maps traffic data current.
The Australian gov dept tried to license it under a new license and claim copyright... which is not how this works...
Unless they’d been given permission to do so by the copyright holder.
I’ve no idea if that’s what’s happened, but I dare say that as it originates in a Singapore gov funded effort, and the two governments get on pretty well, we can’t discount the possibility.
As with spoofing of other open unsecured radio systems of this type, this one is not really something to worry about.
First, as the article references, pilots are actually pretty good at sifting the crap from the normal.
To have an impact a spoofing transmitter would have to be in range. So to make the spoofing work you either go somewhere near the take off / landing flight paths of an airport (where you'd need to transmit some power), or you'd have to sit underneath a known flight lane (and transmit more power). For both, reports of duff TCAS activations are quite likely to result in OFCOM's surveillance aircraft (they have one) being launched pretty quickly, and they've got a track record of pinpointing annoying transmitters to within meters. That's if the numerous military aircraft capable of doing the same thing don't get involved first.
So second, someone actually trying this out is going to get noticed and found pretty quickly. And if they keep trying it on, that could be within seconds of them switching on their transmitter.
Third, whilst it would be possible for a nation state to do this within their own territory (they're in control of their version of OFCOM) they're unlikely to do so; countries get money from flights passing over their territory.
All in all, unlikely.
I'm fully anticipating that their next piece of pointless research will be spoofing maritime AIS, "causing ships to crash". Well, they'd have to spoof the ship's nav radar, and unless they're doing this from another vessel they'd have to do it somewhere like the Straits of Dover; there's a whole load of traffic monitoring radar systems round that area too, so those too would have to be spoofed. And anyone trying AIS spoofing is as likely to be geolocated pretty quickly these days too; AIS validation is a topic these days. The only hard part about that is having the signal collection assets in place (e.g. waking up OFCOM or the RAF); the processing is easy.
I don't know who's funding this bunch, but I'd suggest that they consider whether or not they're getting value for money. There is some merit in the occasional poke at such radio systems to remind people that they're intended to supplement the Mark I eyeball / brain, not replace it, but funnily enough the regulators and practitioners in various fields of transport are already pretty hot on that.
A far more valuable area of concern is GPS spoofing / denial, but there's already a load of other researchers working on that. There's even a properly thought out solution, it's just a matter of persuading countries to fund it.
For the record the solution is a combination of 1) GNSS systems, possibly enhanced to improve resilience, 2) eLORAN to provide an alternative location and timing source (pretty accurate, and usable by all but the smallest applications i.e. it might not fit in a mobile phone), 3) use the existing radio clock transmitters like MSF for another source of timing.