* Posts by bazza

3396 publicly visible posts • joined 23 Apr 2008

Microsoft's code name for 64-bit Windows was also a dig at rival Sun

bazza Silver badge

Re: "the OS titan missed an emerging trend"

Can't catch them all I guess. And, to be fair to MS, they did create (through various means and ways, not all as pure as the driven snow) a trend which has stood them in very good stead...

It's interesting that Android was referenced as the most successful 64-bit OS. That's fair enough by install count. By dollars earned, iOS rules supreme.

Taking this kind of thing into account, plus the fact that Google does all the hard work on Android while the vast Chinese market benefits from it without paying Google a cent, one could consider Android to be something of a disaster for Google. OSS / don't be evil (cough) / it's good to share is all very well, and Google makes a fair bit of money. However, had they kept strong monetary control over who can install and use Android, they might have made an awful lot more money out of a market a billion people bigger than the one they have access to. There aren't many companies that let a market that size slip out of their control and get away with it.

Musk tells advertisers to 'go f**k' themselves as $44B X gamble spirals into chaos

bazza Silver badge

Re: As the old saying goes, freedom of speech is not freedom from consequence

>No, that's a recent invention by the Woke Taliban.

Pah. People have been suing each other for libellous comments for centuries, and the laws on causing public offence are as old as the hills. People have been writing criticisms of "the great and the good" for centuries, in olden days in the form of pamphlets printed cheaply and handed out on the streets.

>I notice you and your ilk are free to type such, without getting threatened with “consequence” from Musk.

That remains to be seen. And, write something truly offensive here and even El Reg will take your post down.

bazza Silver badge

Yaccarino

I’m already amazed that she’s stayed in the job this long. I’d have been out of there, well, probably the moment Musk uttered a single problematic tweet.

She does now hold a kind of strange power. If she walks away no one would blame her, but the damage done to Twitter could be immense. How many advertisers would conclude that the last restraint had gone and get out whilst they could? All of them?

For her own sake I think she does need to get out. If she’s still there when the company finally fails, one would have to question her motives for doing so. She must see that there’s no saving this one. Working for Musk doesn’t sound appealing at all. Why risk her future career hanging on to this job?

The best explanation she can give would be “For the huge amount of money“…

A dangerous question for her is: does she sympathise at all with Musk’s views? As her continued presence at Twitter becomes ever more inexplicable, people will inevitably start forming other, unsavoury explanations…

Wayland takes the wheel as Red Hat bids farewell to X.org

bazza Silver badge

Workplace Difficulties

These days companies simply cannot ignore accessibility legislation; you can get into a lot of trouble if you do. From what I’ve heard, Wayland tosses all the accessibility challenges over to the app developers, not the desktop environment.

If Wayland based Linux really is unsuited to those needing help, then that practically guarantees that it can’t be offered to an employee. That ensures that there won’t ever again be anything like a large corporation / city council that is going to be willing to adopt a Linux desktop. Which condemns Linux to remain obscure…

Data-destroying defect found after OpenZFS 2.2.0 release

bazza Silver badge

Oooo Nastie

This kind of bug can be a nightmare to track down. It warrants a thorough code review, as that might just be the quickest way.

Also, ftrace was designed to help with this kind of thing; I wonder if anyone has interrogated that yet?

Firefox slow to load YouTube? Just another front in Google's war on ad blockers

bazza Silver badge

Google: “The audience is wrong, they must do this”.

Not normally seen as a way to win people over…

I’d perhaps entertain the possibility of paying Google money, if they could guarantee not to harvest and exploit any of my data, or any data they could generate about me. While YouTube Premium is probably devoid of ads, it’s still harvesting away, all the more valuably because it’s tied to a solid identity.

Will anybody save Linux on Itanium? Absolutely not

bazza Silver badge

Nah, just say “Liam Proven”. They’ll nod knowingly…

bazza Silver badge

DSPs aren’t general purpose and that’s what sunk them. As soon as projects started thinking “whole system” and realised that, as well as the DSP functions, there was generally a whole lot of general compute to do as well to make a practical and useful system, the inadequacy of a DSP started to become blatant.

And then Motorola did a PowerPC with AltiVec (Intel’s SSE and AVX are the later equivalents). Motorola really nailed AltiVec, and had access to better silicon processes too. To certain key projects this was the answer, and surprisingly small piles of PowerPC CPUs could do anything one needed done. With the right COTS kit it was perfectly possible to swallow fat signal streams and process them in real time, and do other things too.

PowerPC remained competitive for a surprisingly long period of time. A 400MHz 7410 using carefully crafted libraries could hold its own against a 4GHz Pentium something or other.

And there are a lot of systems still using them. Whilst CPUs have since marched off into the distance, the spectrum hasn’t got any wider, and as PowerPC was adequate then, it still is today. Nowadays other factors come into play, like the cost of rewriting and requalifying code for modern CPUs compared to the cost of renewing moderately old hardware. This is also why GPUs haven’t quite taken off in this field.

A modern Intel or AMD CPU is of course a fearsome DSP beast, but one runs out of ideas as to how to keep one busy doing only DSP. One may as well run one’s DSP workload on, say, 10 cores and use the remaining 118 for something fun for the system operator.

Ubuntu Budgie switches its approach to Wayland

bazza Silver badge

Re: ssh? @Tom 38

Whilst waypipe might be able to operate over an SSH tunnel, I do wonder what the clipboard does, especially if the two ends of the pipe are of different endiannesses...
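A quick illustration of why raw bytes crossing an endianness boundary are dangerous (a generic sketch; it says nothing about what waypipe actually does with clipboard data):

```python
import struct

# A 32-bit value, as might sit in a binary clipboard payload.
value = 0x11223344

little = struct.pack("<I", value)  # little-endian byte order (x86-style)
big = struct.pack(">I", value)     # big-endian byte order

print(little.hex())  # 44332211
print(big.hex())     # 11223344

# Naively reinterpreting big-endian bytes on a little-endian host
# yields a completely different number.
misread = struct.unpack("<I", big)[0]
print(hex(misread))  # 0x44332211
```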

Apple exec defends 8GB $1,599 MacBook Pro, claims it's like 16GB in a PC

bazza Silver badge

Re: *Placed*

Well if the chips aren't available, that's even worse, isn't it? To upgrade from 8G to 16G, first source a 16G machine, desolder its memory chips, etc.

bazza Silver badge

Re: *Placed*

Not exactly an end user 5 minute upgrade, is it.

Replacing chips in a multichip module is non-trivial. It's a long way removed from desoldering a chip from a PCB.

Atlassian cranks up the threat meter to max for Confluence authorization flaw

bazza Silver badge

Re: Humble question to those affected or at risk

Public facing or not is irrelevant. In any org you're as worried about insiders as anyone else.

Insider can mean customers too, if you're developing software under contract and following an Agile dev cycle using Jira / Confluence for the customer engagement.

GhostBSD makes FreeBSD a little less frightening for the Linux loyal

bazza Silver badge

I've Used GhostBSD

I've run it as a desktop for a while - as a VM.

It was fine. It just worked. I was quite impressed. I found it refreshing.

One thing I did do was to build a well known piece of *nix software that is popular and highly respected amongst academics. It built just fine. However, what emerged when it ran were memory faults and crashes. Why, I asked myself, did the same software in its normal Linux environment not crash?

The answer, I suspect, was that FreeBSD's rather more defensive memory allocator wasn't being fooled, whereas glibc's default allocator, being more optimised for speed, let a memory leak / bounds breach "get away with it" on Linux. It was quite interesting to come across such a result.
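To illustrate the mechanism with a toy model (glibc malloc and FreeBSD's allocator are vastly more sophisticated than this; the sketch only models the presence or absence of bounds checking around an allocation):

```python
class FastAllocator:
    """Hands out oversized slabs; small off-by-one writes go unnoticed."""
    def alloc(self, size):
        return bytearray(size + 16)   # slack silently absorbs small overruns
    def free(self, buf):
        pass                          # no integrity checks on free

class DefensiveAllocator:
    """Places a canary byte straight after the allocation and checks it."""
    CANARY = 0xAA
    def alloc(self, size):
        buf = bytearray(size + 1)
        buf[size] = self.CANARY       # guard byte just past the valid region
        return buf
    def free(self, buf):
        if buf[-1] != self.CANARY:
            raise MemoryError("heap corruption detected on free")

def buggy_code(allocator):
    buf = allocator.alloc(8)
    for i in range(9):                # classic off-by-one overrun: writes 9 bytes
        buf[i] = 0
    allocator.free(buf)

buggy_code(FastAllocator())           # silently "works"
try:
    buggy_code(DefensiveAllocator())
except MemoryError as e:
    print(e)                          # heap corruption detected on free
```

The same buggy program passes under the permissive allocator and is caught by the defensive one, which is the pattern described above.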

bazza Silver badge

From the article,

>There aren't many other FreeBSD distros around.

I thought that was kind of a founding point of the FreeBSD community: that there wouldn't be a vast proliferation.

bazza Silver badge

>if you want something serious, go macos, it has BSD userland.

Hi, I'd like a server please. Nothing too serious you understand, just a small, common or garden server.

Musk's broadband satellite kingdom Starlink now cash flow positive – or so he claims

bazza Silver badge

Re: Price not related to internal cost

>Cruise ships?

From what I hear, cruise ships are indeed one of the biggest sources of demand out there for mobile network bandwidth. With several thousand bored passengers (and, these days, crew) demanding Internet like they get at home, the comms hub on a cruise ship is a pretty major piece of engineering. And so it is with the satellite. The problem with the LEOs is that you need an awful lot of them with that capability to provide permanent coverage of the ship, whereas the corresponding GEO system needs only one.

>Selling bandwidth to a military might be something that allows the rank and file to phone home...

...which is pretty much what the military's commanders don't want their rank and file to be able to do from within a war zone! When not at war, civilian networks are generally on tap nearby anyway.

Military usage of bandwidth is probably quite lightweight, really. They've long had to contend with minimal bandwidth and I guess they've become quite good at brevity; for example, the Royal Navy was perfectly capable of running a global navy prior to WW2 using nothing but Morse via HF, blinker lights, and sailors waving semaphore flags.

And, after all, one needs only 5 bytes to communicate, "Fire!".

>The pitch that Elon is going to bring internet to the world's out of the way places isn't a total lie, but it's only going to to that for those that can already afford it.

I think a lot can be learned from the history of Iridium. They set out to provide the world with 2G mobile satellite telephony. They failed, because by the time they got there everyone already had 2G. I can see why Elon is in a hurry because if they ambled into providing the full service there really won't be any market at all.

I'm quite interested in reports that SpaceX is considering floating StarLink as soon as it's stable and cash positive (which, they've recently claimed, it sort of is). There's every possibility that they get it to some sort of profitable state, float it, and with a gentle foop! the market starts evaporating underneath them as cheaper terrestrial comms spreads further and further. If that is their strategy, getting StarLink "finished" ASAP is necessary, otherwise it might not achieve sufficient profitability for an IPO to yield a good result.

bazza Silver badge

Re: Price not related to internal cost

>There is more to LEO vs GEO. LEO is nearer so the ground antenna size can be smaller, radio power can be lower or bandwidth can be higher (reality is a mixture of all three).

I don't think you are as good at calculating link budgets or designing systems around them as companies like Viasat...
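For what it's worth, the first-order LEO-versus-GEO difference in any link budget is free-space path loss. A quick sketch using the standard formula and illustrative numbers (a 550 km Starlink-like altitude, the 35,786 km GEO altitude, a 12 GHz Ku-band downlink; slant ranges, antenna gains and atmospheric losses are all ignored):

```python
import math

def fspl_db(distance_km, freq_ghz):
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_GHz) + 92.45."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

leo = fspl_db(550, 12)       # LEO straight-down distance, Ku-band
geo = fspl_db(35_786, 12)    # geostationary altitude, same band

print(f"LEO: {leo:.1f} dB, GEO: {geo:.1f} dB, delta: {geo - leo:.1f} dB")
```

The GEO bird pays roughly 36 dB more path loss, which it must buy back with antenna gain, transmit power and spot beams; juggling those trades is exactly the business that outfits like Viasat are in.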

bazza Silver badge

Re: Price not related to internal cost

It's worse than that. For those 60 launches, there's the lost $50 million each they would have got had the launch been sold to a paying customer. That's a loss of $3 billion per year, for a total cost of $4.5 billion. So StarLink sustainment on Falcon 9 is a pricey proposition. If StarLink isn't bringing in at least $5 billion per year (there's a load of other costs to worry about too, like the price of the satellites), it's not worth it. Though this does suppose that, if StarLink were not there using up 60 launches per year, there would be paying customers who would be.

You've hit the nail on the head in identifying the rural US market as the only paying market. It is the sole sane market for StarLink. The trouble is that putting up a load of LEOs is an extremely inefficient way of serving a geographically confined market. It's spending a load of money on supplying a low latency link, but is the low latency really worth $5 billion a year to all those US rural customers? The alternative - say, one really big GEO sat costing only $1 billion and lasting 15 years - is a whole lot cheaper (roughly 75x, over 15 years). So what would a rural customer prefer: 100Mbit with half a second of latency for $X/month, or 100Mbit with a few milliseconds of latency for $75X/month?
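The back-of-envelope above can be run through explicitly, using the figures already stated in these comments (all rough assumptions, not actual SpaceX or GEO-operator financials):

```python
# Forgone Falcon 9 revenue from flying StarLink instead of customers.
launches_per_year = 60
lost_revenue_per_launch = 50e6            # $50M per launch, as stated
opportunity_cost = launches_per_year * lost_revenue_per_launch
print(opportunity_cost / 1e9)             # 3.0 ($3B/year)

# Required yearly revenue once satellites and other costs are added.
required_revenue_per_year = 5e9           # "at least $5 billion per year"

# One big GEO satellite amortised over its life.
geo_cost = 1e9                            # $1B, 15-year lifetime
years = 15

leo_spend_15yr = required_revenue_per_year * years
print(leo_spend_15yr / geo_cost)          # 75.0 (the "roughly 75x" above)
```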

I know what I'd take. Or if StarLink stuck at it, I know who'd be creaming in the profits (the GEO supplier).

I think StarLink is going to find it very tough to compete against the combined Viasat / Inmarsat company. They've had some hiccoughs recently, but they need only get a few of the new class of GEO satellites up into service to become a ferociously good offering.

Intel's PC chip ship is sinking with Arm-ada on the horizon

bazza Silver badge

Re: Like Linux Desktop within 10 years

There are lots of people running Windows 11 on RPis quite happily.

bazza Silver badge

Re: "Intel's deep history of innovation failure"

And the one good reason for not ditching a big hairy old ISA is there's no point doing so if the software devs aren't going to follow you (as that senior Intel engineer seemed to know!).

For me the most damning thing about Intel is that, having the world lead in silicon processing (as they used to), they could have cleaned up with an ARM license. Imagine what Intel could have done with an ARM design, back when Intel's transistors were the best, smallest and cheapest in the world, dominating the mobile space by leveraging ARM's designs and their own fab prowess. Innovation level? Zero (well, limited to their silicon fab tech). Profit level? Sky high, and it's profits that count, not "innovation".

They might not have enjoyed writing "ARM" on top of each and every chip produced, but as they'd have owned the desktop, server (x86) and mobile (ARM) markets I think they could have come to terms with that slight dent in their pride. But no, they thought they knew best, they thought they could swing the mobile world back to x86, etc.

Intel forgot that it is, fundamentally, a purveyor of transistors. It makes money selling transistors. If it's wiring them up in ways customers don't want (e.g. as x86's), and also isn't making the sort of transistors that people want to buy (e.g. they're hot and slow), then they're not going to sell very well.

BTW, on 2000+ pin packages: the A15 from Apple isn't too far behind that, at about 1000+, judging from some pics on YouTube. For an SoC that is supposed to have more of the device's electronics all on one chip, it's got an awful lot of external contacts.

bazza Silver badge

Re: "Intel's deep history of innovation failure"

I'm unconvinced about that. Sun used to publish the SPARC CPU designs, and countries like Iran made their own versions. There's numerous GPL-licensed SPARC designs. Some of these date back to at least 2006, rather earlier than RISC-V. RISC-V is just another one, arguably with better publicity, and arguably more closely aligned to mobile applications than other open source ISAs and their designs.

And this should be part of the lesson. We've been there before. It made no noticeable difference. Dominant ISAs dominate regardless of the availability of other ISAs, until there is a good enough reason to adopt another. People didn't suddenly rush off and make SPARCs or POWERs to break the dominance of x86/64; it wasn't worth it. ARM succeeded only because it was a much better fit for mobile devices than x86 at a time when ISA power efficiency really, really mattered to get any useful functionality at all. Now we can put 15 billion transistors in our pockets (that's how many an Apple A15 has), easily enough to implement a pretty potent version of any ISA we've ever had (even x86), though ARM have been (to date) careful not to give anyone a strong reason to do so.

Several of the key players in the mobile industry making their own ARM devices are also members of the OpenPOWER foundation, and nominally have unfettered free access to that ISA, even the very tiny Microwatt version, but haven't bothered to shift from ARM to it. Some of these companies are also founding members of the RISC-V foundation. If Apple, Samsung, Google, NVidia or Amazon wanted to save a few bucks by losing ARM's license fee, they could have long ago but haven't. Even though they are perfectly capable of taking ISAs like POWER (which they have access to) and re-engineering them for mobile applications. I note that Google have begun a RISC-V port of Android, but they're not exactly rushing to make a chip themselves for it to run on...

bazza Silver badge

Re: I do find myself wondering

Every company is trying to maximise income. That's their job.

Don't expect any company making RISC-V based cores to behave any differently, or to somehow have not-VCs and not-Bankers at their helm. The RISC-V ISA may well be open source, but that's essentially pointless. You can't as an end user make your own chips or hardware, you'll have to grease the palm of someone who needs to make a profit to get hold of them, and they'll be looking to maximise that profit no matter what. If they think they can make more money by adding in exclusive bespoke extensions in the hope you get addicted to them, they will.

ARM may or may not now have a management that understands ARM's position in the market. If they do try to gouge the market they'll soon learn the error of their ways in the shape of anti-trust actions. That's what happens to dominant abusive CPU ISA owners, especially in the USA. The precedent is Intel, who were obliged to license x86 to competitors (including AMD) at a reasonable rate. Indeed, the reason NVIDIA did not buy ARM was because the competition authorities all over said "no way" (plus a bunch of other objections).

Also, SoftBank Group is not a venture capital fund, and it is also not a bank. SoftBank Group is an investment holding company. It retains 90.6% of ARM, having bought back just before the IPO the 25% of ARM that it had previously sold to the SoftBank Vision Fund (which is a VC fund). As an IHC it's intended to be in it for the long term, not the short term that a VC is interested in. SoftBank Group is perhaps not the best or wisest IHC, but the IPO and the transfer of 25% from VC to IHC could be taken as indications of an intent to make money eventually, not quickly. The "eventually" is to be welcomed, indicating an intent to preserve ARM's role more or less as we currently know it. If they screw it up, so be it; but they don't have to try very hard to succeed (just don't upset the apple cart...).

bazza Silver badge

Re: I do find myself wondering

That is a highly revisionist view of how ARM got to where they are today. ARM's earliest CPUs were lightning fast in comparison to PCs of the day. I know, because I used them both.

Since the days of the Acorn Archimedes, ARM has always designed CPU cores to match the market segment interested in using them, which for a long time meant an awful lot of microcontrollers and small application processors. Theirs is not to design final products themselves; that's not their business model.

It's only comparatively recently that there has been a market for bigger CPUs with higher performance, first in the mobile devices sector and now increasingly in desktop / laptop and server. And this is not necessarily ARM developing these; they've licensed the ISA to companies like Qualcomm, Apple, Samsung, Amazon, Google, etc., and it is they who are designing the large scale devices. ARM isn't, and doesn't have to; they just sit at the top acting as a paid referee, ensuring that when companies put "ARM" on the lids of their chips they will indeed run ARM opcodes as expected. RISC-V cannot easily compete on price - the difference between "very cheap" (ARM's license price per part) and free is not going to improve the bottom line much.

If Arm continues in its role as "paid referee" to most manufacturers' satisfaction, they're probably going to be around for a long time. The recent IPO in the US appears to have been successful, and that's a big vote of confidence (that's a lot of US investors giving money to SoftBank for a company many said they'd overpaid for; a lot of investors really, really like ARM).

It's interesting to see the RISC-V foundation trying to mould its ecosystem towards the kind of disciplined consistency that Arm has (the difference being that Arm has the contractual sticks to be able to do that; the RISC-V foundation can only ask politely).

China cannot drive Risc-V to the same levels of performance on purely architectural design prowess. They'd need access to the same silicon processing capability that TSMC has. The US is being firm on denying them that access.

Whilst there's business-critical software running only on Windows, China and everyone else needs Windows. You may or may not consider Office to be business critical, but strategically important software such as Catia seems to be solidly Windows-based. And on the reliability front - I have more trouble with running Linux bare metal than I have running Windows. I prefer to run WSL rather than bare metal Linux.

bazza Silver badge

Re: "Intel's deep history of innovation failure"

It's not entirely fair to say that Intel has a deep history of not innovating.

Sure, they've repeatedly screwed up every new ISA they've tried since 32-bit x86, needing AMD to get them successfully into the 64-bit era. However, they were kings of silicon process innovation. Who needs a different ISA when you can blitz everything else running the old ISA on an improved silicon process node? Intel successfully kept that up for decades. This proved insufficient in the mobile arena, but it continued to stand up to scrutiny in the desktop / server market. And it still does - except that now it's AMD doing it, leveraging TSMC's superior silicon process prowess.

The problem was that they got into the US's stereotypical hire/fire ways with their silicon process designers and engineers, fired them all, and then wondered why they couldn't make 10nm work at all whilst TSMC waltzed off into the 7, 5 and now 3nm distance. Until they can get back to the silicon process dominance they used to have, the ISAs / chips they have to offer pretty much don't matter at all. For instance, an ARM built on a second-rate Intel process is going to run slower and hotter than an AMD-designed ARM SoC built on TSMC's superior process.

Regarding the ISA itself, there's precious little point innovating. ARM has left very little room between their designs and any notional "ideal". MIPS and RISC-V aren't significantly better from an ISA performance point of view. If Intel created yet another ISA, it'd be doing it for no good reason. If they want to get on some kind of par, licensing ARM and just aiming to match TSMC would be a good start.

bazza Silver badge

Re: Like Linux Desktop within 10 years

Windows already does run easily on ARM...

bazza Silver badge

Re: I do find myself wondering

That is a wildly optimistic estimate of the progress RISC-V will make. Arm is almost everywhere already; Windows and macOS already run on available machines. Microsoft even showed off Windows 7 and Office running on an Arm system back in about 2008, 15 years ago.

Windows won't run on RISC-V unless Microsoft makes it do so.

Arm's best strategy is to not give customers a strong reason to move off the platform. The price they charge is pretty low anyway, which is partly why Arm is so popular. Chip houses are certainly going to take a good look at Riscv, but it's far from certain that it's worth abandoning Arm for minimal gain and a whole lot of market risk.

Arm doesn't need to foment fragmentation of the RISC-V ecosystem; it seems pretty good at doing that itself.

bazza Silver badge

Re: DEc Alpha

I've seen the PowerPC version of Windows NT on VME cards. Worked a treat. Same hardware booted VxWorks for full on hard core real time programming.

bazza Silver badge

Re: The big change is HBM

x86's problem is that to make it fast you need long instruction decoder pipelines, correspondingly complicated caches, etc. So the number of transistors that lie between an instruction arriving at the CPU and the ALU that's actually going to execute it is quite large.

Arm doesn't need all this, which is where it wins out. In an age when a good x86 core needed millions of transistors, Arm needed only a few tens of thousands for the whole instruction set.

On-by-default video calls come to X, disable to retain your sanity

bazza Silver badge

Re: Not Holding Junk Debt

Often a big part of the remuneration package is shares. In this case, that'd not be an attractive option. The only reason to work for Musk in this capacity has to be cold, hard cash.

Doing it for the purposes of tarting up a resume is somewhat doubtful; people are going to question one's wisdom for doing it. There's a difference between taking on a shit job that's got to be done, and taking on a shit job that is also a non-job, like this one. If one cannot spot the difference, that's surely a black mark against one's resume?!

RISC-V champ SiFive confirms it's laying off 1 in 5 workers

bazza Silver badge

Re: Hello (Real) World !

Indeed so. Open Source means nothing at all, unless the end user can practicably do something with it. If the end user needs a silicon fab to make use of open source designs it may as well be a widely available keenly priced proprietary design for all the difference it makes.

Open source software is pretty much useless too, other than making it legal to copy it. No end user can practicably do anything with someone else's source code other than maybe build it, which can be prodigiously difficult to set up. So no one really bothers. For example, I bet the source repos for the average Linux distro are rarely troubled...

ULA's Vulcan Centaur hopes to rocket into Christmas

bazza Silver badge

New Galileo Launches

Seem a little early. What's wrong with the ones that are already up there?

There's a slight problem with the view that Galileo is independent of the US. GPS, Galileo and GLONASS are all basically the same concept, operating in more or less the same spectrum, providing more or less the same service, so all GNSS receivers are set up to use all three. Thus whatever in Europe is using Galileo is also using GPS and GLONASS. That's not operationally separate from GPS; that's co-operational. And judging by the outages Galileo has suffered, that's probably a good thing. In effect, Europe is reliant on GPS as a backup for its own system, and has had to use it.

In IT terms it's a bit like having a data store in your data centre supporting the services you sell, but relying on another supplier for data back up having already said that you do not trust that back up supplier despite years of faultless service provision. Weird.

Another problem with all three GNSS systems is that whatever it is that can wipe out, disable, jam or spoof one of them can do the same to all three. For example, one big solar flare and they're all gone. So the only thing Europe has gained is partial political independence (see point about backups above).

Intel stock stumbles on report Nvidia is building an Arm CPU for PC market

bazza Silver badge

Re: What's with

I know that they have had an x86/64->ARM translation layer for some time, but the implication is that it doesn't store the translated result long term, so far as I can tell, unlike Rosetta 2.

MS's own docs say that it works, but that you're better off rebuilding for ARM to get native performance. That's what makes me think the translation is done just in time. A stored x86/64->ARM translation should perform more or less as if the program had been rebuilt for ARM, running at native performance. It is, after all, effectively a rebuild using Intel opcodes as the source code...
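The JIT-versus-cached distinction can be sketched like this (purely illustrative; this is not how Microsoft's or Apple's translation layers are actually implemented, and the "translation" itself is stubbed out - the point is only where its cost is paid):

```python
import time

translation_cache = {}   # persisted ahead-of-time results, Rosetta 2-style

def translate(block):
    """Pretend to translate an x86 basic block to ARM (the expensive step)."""
    time.sleep(0.01)     # stand-in for real translation work
    return f"arm({block})"

def run_jit(block):
    # Pays the translation cost on every execution.
    return translate(block)

def run_cached(block):
    # Pays the cost once; subsequent runs use the stored result directly.
    if block not in translation_cache:
        translation_cache[block] = translate(block)
    return translation_cache[block]

assert run_jit("mov eax, 1") == run_cached("mov eax, 1")
run_cached("mov eax, 1")     # second call: no translation cost at all
```

Either path produces the same translated code; only the cached path amortises the translation, which is why a stored translation can approach native-rebuild performance.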

bazza Silver badge

I'm fairly sure they didn't take off because Intel ruled supreme in terms of performance per $, and when they didn't, they soon caught up and overtook again.

That's not really been the case for a while now, so perhaps there's a chance that another architecture could sneak in there and gain traction.

They'll have to hurry though; Intel are themselves now using TSMC, so their chips will once again reach parity (at least on silicon process node). Much is made of ARM's power frugality, but that doesn't really impact desktop users, and so long as an Intel laptop can do a working day on a battery charge, there's little gained in lasting two. So there is a good chance Intel will remain good enough / cheap enough to last.

But Intel are currently totally reliant (in the desktop / laptop world) on MS not making an effective equivalent of Rosetta 2 for Windows, because if that happens then for most people the Intel / ARM thing won't matter at all, and an ARM machine is likely cheaper.

bazza Silver badge

Re: What's with

Agreed. A lot depends on Microsoft adding in something akin to Rosetta 2, if the "I don't care if its ARM or Intel" aspect is going to extend to a large fraction of the Intel Windows market.

They're reasonably well placed for this; in Windows 11, applications run in mini VMs hosted on top of HyperV. It wouldn't be too hard to add a translation layer at that point, and it's then not too much of an extension to store the "translated" VM, much as Rosetta 2 stores the Intel binary pre-translated to ARM op codes. A lot of the tech for this is just lying around the place as OSS - QEMU for one - though I suspect Microsoft would prefer to write their own. The implication of what's there at present in Windows 10/11 is that the translation is on the fly; shouldn't be too hard to store it. There are potentially some licensing issues - some EULAs prohibit "translation" of software into another form, which is exactly what Rosetta 2 or any other emulation is doing. Apple appear not to be getting sued, so perhaps MS would get away with it.

I think Microsoft may eventually do this, if it becomes clear that ARM is the future. They've got a pretty good record for supporting software long term. Good support for Intel binaries on ARM would simply be an extension of this.

It was a terrible shame that Windows Mobile didn't gain enough ground to survive. By the time they'd finished it, it was technically pretty good; the software ecosystem was the problem. Better still, MS had defined a hardware standard for it, so anyone (for some measure of "anyone") could make a phone and trivially get Windows Mobile running on it - much like a desktop PC is a hardware standard that originated at IBM and was moulded by Microsoft later in the shape of the PC System Design Guides (Wikipedia). The implication was that if the hardware was standardised to boot Windows Mobile, it was also standardised to potentially boot something else altogether (like an Ubuntu distribution).

That openness of hardware was a vast improvement on the closed proprietary designs that dominate today, and it's a real pity that it died along with Windows Mobile.

Japan to probe Google over 'suspicion' that antitrust laws are being broken

bazza Silver badge
Alert

Slowly Happening

It depends on one's definition of "stands up", but the EU has been regularly dishing out fines for quite some time now. However, the very fact that they're having to keep doing so lends weight to the sentiment in your post.

Google in particular does seem to be irrevocably unamendable. They evidently don't care one jot about being caught out engaging in anti-trust activities. If they did care, one would imagine that, having suffered the public shame of one fine, they'd work hard across their entire company to ensure that they weren't crossing the line anywhere else. But no. They're just sat there, waiting for regulators around the world to eventually get round to another part of their business. There's now enough interest around the world to seriously dent their hold. The key test will be the eventual enforced unbundling of Google services from Google Play Services in Android.

One does wonder what this means for Google's long term strategy. Or rather, what strategy the key shareholders (you know, the ones that own the premium shares that have voting rights, not the ones that own the ordinary shares that do not) are following. They've split Google into Alphabet and a subsidiary called Google. I think this is part of an exit strategy.

When the going gets tough, as the voting shareholders they have full control of all of Google's assets, including its money pile. They can, if they wish, vote to transfer that money pile to another corporate structure, one that's insulated safely away from the anti-trust proceedings against their subsidiary, one that's in their control and not in the control of the ordinary shareholders. When the going gets too tough, they can make off with the money and leave the ordinary shareholders owning a hollowed-out husk of a company.

So I'd not be wanting to own Google shares as part of a long term investment.

Millions of smart meters will brick it when 2G and 3G turns off

bazza Silver badge

Re: Farce

Completely agree.

For me as an engineer, the worst thing about a lot of the current schemes being pushed on the country is that they're potentially (or proving to be) a complete waste of time, money, materials and energy. For example, if we end up prematurely disposing of millions of smart meters, we've just made the "throw-away" economy a whole lot worse.

Regarding your point 2), the problem is the housing stock. It's only comparatively recently that decent insulation standards became mandatory, and even then they're not up to the levels commonplace in other countries. We have rubbish housing stock because much of it was built when coal was cheap and commonplace, and energy was "plentiful". We basically need to demolish Britain and start again, to get anywhere close to an efficient stock of buildings. Obviously that is prohibitive. Previously it was worked out that it was more efficient to build a load of nukes and give away the electricity than it is to retrospectively insulate the current building stock. The only other way is a sustained program of compulsory purchase / demolition / rebuild going on for decades and decades; one would need good luck getting political buy-in for that scheme!

To extend your point 3), it's becoming clear that large fleets of Lithium Ion powered vehicles are a bad idea. The frustrating thing is that far better chemistries (like Aluminium-Sulphur) are being sidelined by an industry that's already got too much invested in LiIon, yet AlS batteries are probably where we're going to have to go. Trashing the world in pursuit of rare lithium resources first is just lunacy, and another expansion of the throw-away economy on a grand scale. What a waste.

Personally, I blame the lack of discipline in science. It's far too easy for activists (which includes a lot of scientists) to bump politicians into adopting the first half-baked solution to environmental problems. It's almost impossible for a more sober, reflective view such as, "hang on, is this a good idea?" to get any traction. The scientific world is pretty poor at understanding the damage half-formed advice can do at the macro-economic scale, and very powerful in forcing the adoption of half-formed policies.

And there's plenty of disasters already: the cutting down of rain forests to grow palm oil for European bio-diesel. The sudden up-tick in dirty mining for lithium, cobalt and nickel for batteries. There's an area of Portugal that is about to be blighted by an enormous lithium mine project. We're going to ultimately ditch lithium ion as a power store, but no one is going to restore the countryside afflicted by the mining...

Worse, the one thing that could actually save us (nuclear fusion) needs lithium (it's used to capture the neutrons from the plasma, gets hot as a result and used to raise steam). If we've got rid of all the lithium by wasting it in EVs first, we're going to look stupid... Oh, wait a minute. We do look stupid!

bazza Silver badge

Re: Powerline?

It's one thing to get PowerLine working within a house. It's quite another to get anything like an adequate signal going unknown distances through all manner of cables and junction boxes before it gets to a substation (the only place where it'd make sense to have a "receiver"). It's a non-starter, really.

bazza Silver badge

Farce

The use of 2G / 3G by smart meters, and indeed the installation of smart meters itself, is beginning to reach astonishing levels of farce.

There's no great argument in favour of smart meters in the first place. The fact that they're now going to have to be swapped out is ridiculous. What a ****ing waste.

If the government wants us all to have smart meters and to have the control / influence over how much energy people use, then it's going to have to act seriously to bring that about. That means thinking about the entire ecosystem. Either one builds a dedicated network for the data link, or one obliges the telcos to provide it. One does not let two entirely separate markets act independently. It's notable that the energy supply market is regulated by one government department, and the telecoms market is regulated by another.

We've not been very good at building dedicated network infrastructure in this country (harrumph harrumph Tetra / Emergency Services Network replacement), so it's perhaps no surprise that this is turning into a major cock up.

The problem with "the market will provide", or the "use commercial services" mantra for what is essentially critical national infrastructure is that there is no commercial incentive on commercial suppliers to provide a level of service adequate to meet the "Critical National Infrastructure" needs. It takes either law, or government ownership to make this happen. Some industries have been regulated to drive behaviours towards particular national goals - e.g. TV and public service broadcasters (not that there's many of them), and the BT and the core telephone network. If Government has similar requirements for, say, telecomms in support of, say, smart meters, there is no reason why it couldn't have similar laws / regulations for the same effect.

The problem of not having thought about the requirements in advance of an industry coming into being (e.g. the mobile telecoms industry) is that, once the industry is established, if government moves the goal posts then government has to pay for doing so. Having said that, I think it would have taken quite a leap of imagination back in the early 1990s, when mobile phone networks first got going, to realise that there were going to be use cases like smart meters. Having not thought about it, the damage is done, and the only way to fix it is to take a deep breath and pay for someone to alter their network to become something different to what it is today.

Here's a Thought

If we really do need a national network adequate for things like smart meters, how about the following.

1) Build a proper network for the emergency services - I don't care what, so long as it's heavily government controlled (that being the only way to ensure it ticks the "Critical National Infrastructure" box) and actually does the job.

2) Tetra, which has been blooming marvellous as a voice-only / small data network for the emergency services for decades, could get repurposed as a smart meter data network. This would re-use the Tetra infrastructure, and would probably reach a load more places than the mobile phone networks do.

LoRa

This is probably a reasonably good candidate for building an entirely fresh, smart-meter-dedicated network. It would be madness if there were more than one such network; someone needs to build a common network that all suppliers / consumers use.

Multiple Suppliers is Part of the Problem

The use of 2G / 3G by smart meters rather highlights the lunacy of "multiple suppliers" for things supplied by utilities. A really good reason to use 2G / 3G or any cellular standard for utility metering is that 1) the modems are cheap, and 2) there's already an existing network to hook into. So, if you're an electricity retailer obliged to supply a smart meter option, the cheap / fast way of doing so is to use what's already there. It's not economically feasible to expect all the energy retailers to independently build their own smart meter data network.

Yet, if they were all to get together to build one that they all share, then really that's just highlighting the lunacy of there being more than one energy retailer in the first place. It's probably not politic to suggest in government (of any flavour) that the whole energy supply utilities thing should be forcibly coalesced back into just one organisation; that'd be tantamount to re-nationalisation.

bazza Silver badge

Re: 2G is perfect for this

2G (GSM) has actually been much updated in the light of the security issues surrounding its earlier incarnations. The original specifications were designed around the amount of compute power that could be put into a battery powered device back in the late 1980s / early 1990s, so the crypto algorithms used were just about good enough to ensure adequate security for subscribers and networks. Now, over 30 years later, we can do much better, and so the GSM standards have been updated to use much better crypto algorithms.

The operational overhead is more subtle. GSM had astonishingly low operational costs, largely because network planning / expansion was so easy (the standards designers really knew what they were doing when they wrote GSM). And these days it is trivially easy for a base station to support GSM and, say, 5G; the compute load on the base station to support GSM is also pretty trivial compared to what they're packing today to run 4G and 5G. However, the use of valuable and limited spectrum resources for only 2G is perhaps the unconscionable aspect; 4G and 5G are so, so much more spectrally efficient than GSM that the "lost opportunity" operational cost is indeed very high.
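That "lost opportunity" cost can be roughly quantified. A back-of-envelope sketch in Python, using the publicly known GSM carrier figures but an assumed average LTE cell efficiency (the exact numbers aren't in the comment above, and real-world figures vary a lot with load, MIMO configuration and cell geometry):

```python
# Back-of-envelope spectral efficiency comparison (illustrative only).
# One GSM carrier: ~270.8 kbit/s gross in a 200 kHz channel.
gsm_bitrate_bps = 270_800       # gross bit rate of one GSM carrier
gsm_bandwidth_hz = 200_000      # GSM carrier spacing

gsm_eff = gsm_bitrate_bps / gsm_bandwidth_hz   # ~1.35 bit/s/Hz, before overheads

# Assumed average cell spectral efficiency for a loaded LTE cell, bit/s/Hz.
# (OFDMA, higher-order QAM and MIMO get it well above GSM.)
lte_eff = 5.0

print(f"GSM : {gsm_eff:.2f} bit/s/Hz (gross)")
print(f"LTE : {lte_eff:.2f} bit/s/Hz (assumed)")
print(f"LTE carries roughly {lte_eff / gsm_eff:.1f}x more data in the same spectrum")
```

Even with these charitable figures for GSM (its gross rate, before signalling and coding overheads), every 200 kHz left running 2G is carrying only a fraction of what it could.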

As it prepares to abandon its on-prem server products, Atlassian is content. Users? Not so much

bazza Silver badge

I largely agree, except for one particular use case.

Cloud allows anybody to briefly access prodigious amounts of compute power for just a small amount of money. This is something that on prem cannot compete with. If one has the need to occasionally use vast amounts of compute, cloud wins.

It's a use case that the cloud providers probably aren't keen on, really, because they'd prefer long running routine workloads that allow them to hone their capacity closer to the market average for maximum profit.

And apparently you can, as a cloud subscriber, run into the cloud's resource limits quite easily. I had heard that getting access to Amazon's GPU offering as a dev in the UK was difficult, because the Europeans who wake up an hour earlier sign them out for the day first. They may have added more since.

But yes, cloud is just someone else's computer, and the bespoke ways of programming them reek of vendor lock in.

bazza Silver badge

Re: Peak cloud ?

Bean counters are the ones driving the dash to cloud. Because, bean counters never ever take account of possible eventualities such as "Amazon is broken today". They assume that cloud providers are somehow infallible...

It's also bean counters who underinvest in on prem, quite often. I recall British Airways nearly grinding to a halt when a hot weather spell all but took out their aged, creaking server building (their only one).

Microsoft gives unexpected tutorial on how to install Linux

bazza Silver badge

Re: Choices?

They do seem to be getting on with Ubuntu / Canonical pretty well.

I wonder if MS will buy Canonical. If they bought it and ran it like they do Mojang (i.e. completely hands off, but able to access the resources of the mothership), that could work out quite well. And Canonical have got good developers - always an asset in a software house.

bazza Silver badge

Re: WSL2 is not a VM in the sense that VMWare is a VM

It's even more complex!

Strictly speaking, on recent Windows 10/11 (or whichever version brought it in), every single process (including WSL 1 processes) is run inside a VM. This is why VMWare Workstation performance has taken a nose dive; it now has to use Hyper-V instead of VMWare's own hypervisor, and things are a lot slower and more difficult (especially 3D graphics acceleration), all because the VMWare hypervisor can no longer have direct access to the virtualisation support provided by the CPU hardware (Hyper-V has already nabbed that).

So, in a sense, there is not really that much difference between a process running in WSL 2, WSL 1, or a native Windows application; they all boil down to being hosted in Hyper-V one way or another. It's just that WSL 2 processes have an intervening layer called "the Linux kernel" that is doing things like scheduling and interacting with Hyper-V on the application's behalf.

On the assumption that the Linux kernel MS provide for WSL 2 has been well and truly gutted of everything it doesn't need (like device drivers), really all it's doing is managing pools of memory, scheduling stuff, and providing the network stack. I notice that if I run 2 WSL command lines, they're inside the same VM (ps -ef in one sees the processes started in the other).

Getting back to WSL 1. Of course, Windows is not the first to attempt this; QNX (BB10), Solaris (and I think FreeBSD) have all had Linux kernel interfaces for hosting Linux binaries. QNX's one was pretty excellent; Android apps (those few not irrevocably bound to Google Play Services which BB10 couldn't have) ran pretty well. I never tried Solaris's spin of it, but it was supposed to be pretty good.

I recall reading that in WSL 1 / Aurora MS were having difficulty in replicating every single facet of the Linux kernel interface (beyond fork), some obscurities related to hardware events I think. Getting everything right, even for a system interface as famously stable as Linux's (and Linux has been good in this regard), must be very difficult. I can see why they would conclude that WSL 1 is too much like hard work, whereas neatly integrating a half-fat VM hosting a full Linux kernel is simpler and need be done just once.

Microsoft says VBScript will be ripped from Windows in future release

bazza Silver badge

Re: small number of people who have inherited some ancient scripts

Trouble is VBA remains curiously useful, occasionally. I've recently written some as a last resort.

I tried doing it properly, using C# and the doc generation dev kits from MS, various helper libraries, etc. In the end, doing it properly was going to take a massive heap of C# to get at the one slightly obscure animation setting that all the other toolkits and helper libraries overlook, or I could do the whole thing in about 6 lines of VBA. It's slow, I grant you, but it's come out way ahead.

Fresh curl tomorrow will patch 'worst' security flaw in ages

bazza Silver badge

Methinks that'd be like me chairing a meeting of Alcoholics Anonymous, with me also being the most ardent consumer of booze in the room....

bazza Silver badge

No one ever uses it with root privileges, right?

curl vulnerabilities ironed out with patches after week-long tease

bazza Silver badge

Security Overblown?

Well, remote code execution is a pretty bad thing, even if someone has to "contrive" the circumstances to make it work. If an attacker wanted to get code execution on your system, they'd be doing their best to contrive the right circumstances, sure enough.

New information physics theory is evidence 'we're living in a simulation,' says author

bazza Silver badge

Re: A simple analogy.

Except that's not how evolution has actually worked. For example, whales are descended from the same land based ancestor as sheep. Some proto-sheep decided to forsake grass / legs / wool / bleating for the very different attractions of krill / flippers / blubber / ocean-spanning song. And that proto-sheep itself had an ancestor that'd decided "to hell with living in the sea, let's see what's beyond the beach".

If one goes looking for micro-evolutionary changes, one will find them. If one goes looking for macro-evolutionary changes, one will find them too, but one has to look harder. It's all about what evolutionary opportunities existed in the environment. Even within just mammals, there's a vast variety that exploded out into the world following the demise of the dinos.

Blockchain biz goes nuclear: Standard Power wants to use NuScale reactors for DCs

bazza Silver badge

Re: How long has Bitcoin mining got?

It's a futile activity. When fully mined out (as Bitcoin one day effectively will be), the only reason for anyone to keep validating the block chain is to participate in the voting for what the correct version of the block chain is. In effect, it'd be a very expensive way of offering free banking services to those minded to move money outside of government control, for no returned benefit to oneself apart from the dubious pleasure of opening the monthly electricity bill.
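The arithmetic behind "fully mined out" is easy to sketch in Python. This ignores the protocol's integer-satoshi truncation of each subsidy, so it's a rough model rather than a consensus-exact one:

```python
# The Bitcoin block subsidy starts at 50 BTC and halves every 210,000
# blocks (~4 years at 10 minutes per block). Once the reward drops below
# 1 satoshi (1e-8 BTC) it is effectively zero.
SATOSHI = 1e-8
blocks_per_halving = 210_000
reward = 50.0

total = 0.0
halvings = 0
while reward >= SATOSHI:
    total += reward * blocks_per_halving
    reward /= 2
    halvings += 1

# Ten-minute blocks give a rough date for the last subsidised block.
years = halvings * blocks_per_halving * 10 / 60 / 24 / 365.25
print(f"Total supply : ~{total:,.0f} BTC")
print(f"Reward effectively gone after {halvings} halvings, ~{years:.0f} years from 2009")
```

The geometric series converges on the famous ~21 million BTC cap, with the last meaningful subsidy arriving somewhere around 2140 - after which validators work for transaction fees and goodwill alone.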

I've yet to hear of a bank that's operated on a charitable basis, never mind one that's also decided to take on the liabilities of a burned out personal nuclear reactor sat in the carpark...

bazza Silver badge

>This is just getting ridiculous. Maybe it's time to decide energy shouldn't be wasted in this way.

Absolutely agreed.

Missing the Whole Point of Distributed Ledgers / Block Chains

Worse, it's defeating the whole point of proof of work block chains anyway. The whole idea is that there's a large number of separate participants, all doing the work, all agreeing on what the block chain content actually is. There is a majority vote amongst participants; that's how bad actors are detected and defeated. Trouble is that the vote is valid if, and only if, one assumes that the participants are fully independent.

If you go and lump most of the miners together into (for example) just one or two data centres, nuclear powered or not, then in effect there's only one or two voters. Each perhaps comprises myriad instances all working on and agreeing with the block chain content. However, if they're denied the opportunity to vote (because, ultimately, their ability to vote is in the gift of the data centre housing them and the internet connection it controls), then they no longer count. Instances in another datacentre can vote a different way, changing the "majority view" of the block chain content.

The danger inherent in nuclear powered data centres for block chain mining instances is that they could make it commercially uneconomic to host miners anywhere else. There's never going to be lots of nuclear powered data centres. Thus, the effective number of mining instances able to vote against a malicious change in the block chain could be severely impacted if one of the data centres goes offline.

All someone needs to do to take over the block chain is to knock out those few data centres' internet connections for long enough such that their mining instances drop out of contention for voting on the block chain content. When they do come back online, it could be to find that the block chain has been altered to their disadvantage and the rest of the world has moved on.
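The failure mode can be sketched in a few lines of Python. The miner counts and the datacentre split below are invented purely for illustration:

```python
# Toy model: a proof-of-work vote is only as honest as the *independence*
# of its participants. Knock one datacentre offline and the "majority" flips.

def consensus(miners_online):
    """Return the chain version backed by the majority of online miners."""
    votes = {}
    for m in miners_online:
        votes[m["chain"]] = votes.get(m["chain"], 0) + 1
    return max(votes, key=votes.get)

# 70 honest miners all housed in one (nuclear-powered) datacentre "A";
# elsewhere, 10 honest miners and 20 malicious ones pushing a forged chain.
dc_a    = [{"site": "A", "chain": "honest"} for _ in range(70)]
others  = [{"site": "B", "chain": "honest"} for _ in range(10)]
others += [{"site": "B", "chain": "forged"} for _ in range(20)]

everyone = dc_a + others
print(consensus(everyone))     # honest majority wins: 80 vs 20

# Datacentre A's internet link goes down: its 70 miners can't vote,
# and the forged chain now commands the "majority".
print(consensus([m for m in everyone if m["site"] != "A"]))
```

The 70 colocated miners look like healthy decentralisation on paper, but they share a single point of failure - the datacentre's connectivity - so they count as one voter that can be silenced.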

Political Analogy

For an analogy based on the democratic process; nuclear powered data centres for proof of work mining is a bit like putting the majority of voters in one particular district, having a general election, burning the votes cast in that district, and letting the election be decided by the minority of voters in other districts.