Re: Blimey, pt II
I once wrote assembler for a 4-bit NEC microcontroller on a pager. Every individual bit of memory was needed. It was a nightmare.
Here in the UK, the big providers have all hoarded IPv4 addresses, and if you start a business and want a good-sized block, you need to buy them.
This gives the old guard a competitive advantage, so it's not in their interest to promote IPv6 adoption.
Where it suits them, you'll see IPv6 usage. BT and Sky now have it enabled by default on consumer services. Some mobile phone operators use it too, because they have so many attached devices that RFC 1918 addresses were exhausted on a national scale.
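The exhaustion claim bears out on the back of an envelope: even the entire RFC 1918 private range is smaller than a large national operator's device count. A minimal sketch (the subscriber figure is an illustrative assumption, not a sourced number):

```python
# Rough back-of-envelope: how much private (RFC 1918) address space exists?
import ipaddress

rfc1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

total_private = sum(net.num_addresses for net in rfc1918)
print(f"Total RFC 1918 addresses: {total_private:,}")  # 17,891,328

# A national mobile operator can easily have tens of millions of attached
# devices; the figure below is a hypothetical operator size for illustration.
assumed_subscribers = 25_000_000
print(total_private < assumed_subscribers)  # True: private space runs out
```

Hence carrier-grade NAT with overlapping private ranges, or just moving to IPv6.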
Last year, BT wanted £600/house to lay fibre in our village, funded by government subsidy. They missed the deadline to get the funding.
With a new fibre subsidy scheme offering £1500/house, BT told me that they will lay fibre for £1500/house.
Funny how the cost BT has calculated has risen to the maximum available subsidy!
I think actually that jacking up the price of immigrant workers in the US is a good thing, it'll mean jobs moving to the UK and elsewhere to get cheaper labour.
In fact, one San Francisco company tried to recruit me only a month ago, to work for them.. only snag was they wanted me to work US timezone hours, starting at 1600 and finishing at 0000. I live in the UK, and I wouldn't have liked that.
If consumers were told "don't buy any new kit that doesn't support IPv6", and it was sold as a feature of the internet connection, a means of solving the horrid mess of NAT and port forwarding, maybe we could make progress.
Back in May, it was proposed to the UK Network Operators Forum that they announce the end of IPv4 for a future date, but it was met with a range of responses, from apathy to "too hard" to enthusiasm.
The interesting thing is that the big players like Sky and BT have deployed it to domestic customers, and BT and other business providers have offered it for a long time. But because they have hoarded IPv4, they can use it as a competitive advantage over newcomers to the market, who cannot buy large blocks of IPv4 addresses easily or cheaply.
At a previous job, there was an IBM server that needed to be moved from one computer room to another. It could have been unracked, lifted onto a trolley by a few people, and re-racked in half an hour. IBM's charge? Over £2k.
We couldn't move it ourselves, as doing so would have invalidated the support contract.
I've heard so many times that you can migrate your on-premises servers to the cloud in a more or less 1:1 mapping and let your cloud provider do all the work of maintaining uptime and data integrity.
And yet again we have proof that you still have to put in the effort to ensure you have geographically diversified replication and backups.
Google apologised and, somehow in the process of paying compensation, accidentally ended up owning Ghana.
"We're not sure what to do with it" said Sundar Pichai. "We're thinking of bulldozing the entire country flat and covering it with photovoltaic cells set at 30 feet above ground level, giving free shelter from the sun to the inhabitants. It will pay for itself in under 15 years and make us the largest energy supplier in the world".
One problem is that at the low end we have R-Pis and competitors, all sub-$100 for the boards. At the higher end of that we have Rockchip RK3399 boards, which tend to be in the $80 to $120 range depending on RAM, eMMC and wifi capabilities. All of these are built on older Arm cores - the RK3399 has A72 cores, which are a good few years old now!
There's then a big jump to get boards with higher-spec cores, and most of those are aimed at "professional" users as "board support packages" for businesses developing phones or tablets, and cost many hundreds of dollars, far from affordable. If you look on 96boards you'll find the Kirin 970 (Arm A73) at roughly $300; I couldn't find anything there with newer Cortex cores.
Then you have the problem that Arm don't have good Linux support for their GPUs: usually a binary blob, and little or no 3D acceleration. AFAICT people end up using kernels and drivers from Android builds and then bodging a Linux desktop on top of that.
Gigabyte have a Cavium ThunderX workstation, but for that price you can buy a pretty decent Intel laptop! The Socionext dev workstation is over $1000.
So, really, it seems to me that Arm don't care about anything other than Android or small embedded devices. Maybe they care a bit about Windows (with the new replacement for Windows RT), but I wonder who's doing all the work on the GPU side to make Windows run on it? If Arm cared, they would be actively supporting native development on Arm-based workstations.
Apple are really doing their own thing: their processors closely resemble Arm processors when seen as a black box, but AFAICT their Bionic processors (which are really good!) are a completely custom design. I hope that they do release affordable devices like Mac Minis with Bionic processors, and that they don't lock them down, so they can be re-purposed for other operating systems!
I do welcome the competition in the market, as I think there are many workloads where, say, an Intel Atom would have sufficient performance, and so would an Arm.
But if you look at the price/performance ratio of an AWS instance running Arm, it's not really different from an x86 server.
I tried to deploy some of $WORK's requirements on an Arm/Graviton instance, but beyond the simplest service I got into dependency hell, with some packages simply not built for Arm.
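A quick way to see where that dependency hell comes from is to check whether a package's published wheels include an aarch64 build at all. A minimal sketch that just inspects wheel filename platform tags; the package names and filenames here are invented for illustration, not real PyPI data:

```python
# Wheel filename format: name-version(-build)?-python-abi-platform.whl
# If no wheel carries an aarch64/arm64 platform tag, pip on Graviton
# falls back to building from source (or fails outright).

def has_arm64_wheel(wheel_filenames):
    """Return True if any wheel targets an aarch64/arm64 platform."""
    for fname in wheel_filenames:
        platform_tag = fname.rsplit("-", 1)[-1].removesuffix(".whl")
        if "aarch64" in platform_tag or "arm64" in platform_tag:
            return True
    return False

published = {
    "numpy": [
        "numpy-1.26.4-cp311-cp311-manylinux_2_17_x86_64.whl",
        "numpy-1.26.4-cp311-cp311-manylinux_2_17_aarch64.whl",
    ],
    # Hypothetical package that only ships x86 wheels:
    "legacynative": [
        "legacynative-0.3.1-cp311-cp311-manylinux_2_17_x86_64.whl",
    ],
}

for name, wheels in published.items():
    print(name, "arm64 ok" if has_arm64_wheel(wheels) else "x86 only")
```

One "x86 only" native dependency deep in the tree is enough to stall the whole deployment.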
The price of zero day vulnerabilities has been increasing over the years, very significantly for some platforms.
This suggests that it's getting harder to find significant vulnerabilities, and/or that the value of a security break-in has increased a lot too.
So even if the "white hats" decided that trying to find vulnerabilities was a bad thing and stopped altogether, the "black hats" have a big financial incentive to carry on. And of course the latter will do their best to keep their finds secret, which makes things less secure for everybody and reduces the chance of a fix.
Personally, I'd prefer to keep going with the good guys finding bugs and getting paid for responsible disclosure; I can't see a better way, other than revolutionising the way software is developed so that such bugs are made unlikely or impossible to make!