Re: Riddle me this
Sounds like they've had a dose of SolarWinds / Sunburst.
The example of using ChatGPT as a calculator is shocking, as it:
- takes longer than just using a calculator on the device you're already using
- is wasteful by a factor of 30, according to the programme
You forgot:
- actually gives wrong answers (you can easily persuade an LLM that 2+2=5, for instance)
Ionity on Electroverse with Intelligent Octopus Go discount is 63p/unit.
Ionity Passport Motion is £5.49 per month, for 53p/unit.
Ionity Passport Power is £10.50 per month, for 43p/unit.
43p/unit is also what Tesla superchargers cost, outside of 4pm-8pm, and equates to about 11p/mile (at 4 miles/kWh). That's *slightly* better than petrol, but it gets eaten up if you add in the cost of a cup of coffee while you wait for the charge!
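For anyone checking the arithmetic (the petrol figures are my own assumptions): 43p/kWh ÷ 4 miles/kWh = 10.75p/mile for the EV. Petrol at, say, 145p/litre and 50mpg works out at 145 × 4.546 / 50 ≈ 13.2p/mile. So yes, slightly better - until the coffee.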
I am firmly in the "Fuck Sony, and the horse they rode in on" camp. I would not buy a 1p facial tissue from Sony, even if it came with a free ticket to heaven.
Oh but plenty of people do... see this classic from Onion News Network (somewhat NSFW)
Or "shareware" as it used to be known in the 1980's. Except generally you'd pay a small, flat one-time payment for a perpetual right to use.
This isn't really news; CockroachDB went fully proprietary when it went BSL. In theory, the versions they released under BSL will revert to (I think) an Apache license in (I think) 4 years' time, but by then it will be irrelevant. Those who care will have long moved on.
Aside: FoundationDB started off on the same path as CockroachDB, but with a different outcome: it got swallowed up by Apple, who locked it away from the world. However, a few years later, they released it under Apache 2.0. It's worth looking at as a CockroachDB alternative, although it lacks CockroachDB's Postgres-a-like SQL layer.
Most likely scenario now is: Starliner returns intact, Boeing crows "there, we told you it was safe all along" (whilst privately heaving a huge sigh of relief in the boardroom) and asks for the next few billion dollars.
Next most likely scenario is: Boeing performs an *intentional* timed self-destruct of Starliner on the way down, ostensibly for "safety reasons" - but in reality so they don't have to see if it will do an RUD on its own - and then asks for the next few billion dollars to continue working.
> Optane was (much) cheaper than RAM and both (much) faster and (much) longer-lived than SSD
Optane was (significantly) slower than RAM and (significantly) more expensive than SSD. That's why it failed.
In current architectures, DRAM is already two to three orders of magnitude slower than the CPU core - hence the need for three layers of cache between RAM and the CPU. Optane would only have exacerbated that problem.
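For scale, the rough published figures (give or take a factor of two, at ~3-4GHz): L1 cache ~1ns, L2 ~4ns, L3 ~10-20ns, DRAM ~100ns (i.e. a few hundred CPU cycles), Optane DIMMs ~300-400ns, NVMe flash tens of microseconds.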
If you want a radical architectural rethink, then how about smaller, loosely coupled cores? Imagine, instead of having 16MiB of shared L3 cache, your CPU had 256 processors, each with 64KiB of local SRAM. Like a load of Commodore 64s on a chip. They could be running LISP or Smalltalk or whatever you like. If there are more than 256 things going on at once, then the entire CPU state can be paged out to DRAM, using bulk page mode transfers. Since each block of internal RAM is dedicated to a single CPU, all Spectre-like cache timing attacks are eliminated.
Of course, this involves writing applications in a completely different way, being unable to depend on a single virtual address space accessible by all processors, and with more explicit message passing. Like Smalltalk does. Or perhaps Occam, also originally designed for lots of small CPUs talking to each other.
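A minimal sketch of that style in Go - whose channels are a direct descendant of Occam's CSP - with all the names invented for illustration: each "core" keeps its state in a local variable that nothing else can reach, and results move only by message.

package main

import "fmt"

// One message between "cores": a value plus a channel to reply on.
type msg struct {
	value int
	reply chan int
}

// worker plays the role of one small core: its state lives in a local
// variable no other goroutine can reach, so there is nothing for a
// Spectre-style cross-core probe to observe.
func worker(in chan msg) {
	local := 0 // the core's private "SRAM"
	for m := range in {
		local += m.value // mutate local state only
		m.reply <- local // share results by message, never by memory
	}
}

func main() {
	const cores = 4
	inboxes := make([]chan msg, cores)
	for i := range inboxes {
		inboxes[i] = make(chan msg)
		go worker(inboxes[i])
	}
	reply := make(chan int)
	for i, inbox := range inboxes {
		inbox <- msg{value: i + 1, reply: reply}
		fmt.Printf("core %d now holds %d\n", i, <-reply)
	}
}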
The issues that are delaying the release of 24.04.1 are things that affect people upgrading in-place from 22.04 to 24.04, not stability issues in 24.04 itself.
For example: the version of RabbitMQ in 22.04 cannot be upgraded in one step to the version in 24.04. They'll have to find a workaround for that.
> For people with such allergies they cannot realistically expect to heap all the responsibility for their safety on a food vendor who is supplying to the general public in street retail environment.
> [....]
> The best they can realistically expect is 'reasonable precautions', which certainly does not imply any kind of 'guarantee' of safety.
If this particular restaurant advertised "allergen free" food, then I would have thought the customer was entitled to take this at face value, especially when reassured by staff that this was the case. If the actual offering was "mostly allergen free, but possibly not" then they should have used those exact words in the advertisement.
I do agree on the chilling effect though, i.e. restaurants and food outlets either refusing to serve, or refusing to give any meaningful information on which consumers could make a choice.
Many years ago, I flew on a US airline. They handed out complimentary packets of peanuts. On the outside of the packet it said "Warning: may contain nuts". Apparently they weren't prepared to make a definitive statement even in this simple case.
Proper Continuous Deployment depends on:
(1) A comprehensive automated test suite;
(2) A pipeline which never deploys anything unless the entire test suite passes;
(3) Phased deployment (i.e. canaries);
(4) Instrumentation so you can see no unexpected changes in the behaviour of the canaries.
If you're not doing these, you're not doing CD, you're doing crash-and-burn.
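As a rough sketch (in Go, with every function name hypothetical - these stand in for your real CI and metrics systems) of what the gate across (1)-(4) looks like when wired together:

package main

import (
	"errors"
	"fmt"
)

// The gate: nothing ships on a red suite, the build goes to a small
// slice first, and the canaries' error rate decides promote vs rollback.
func deployWithCanary(build string) error {
	// (1)+(2): the entire test suite must pass before anything deploys.
	if !runFullTestSuite(build) {
		return errors.New("test suite failed; nothing deployed")
	}

	// (3): phased deployment - 5% of the fleet as canaries.
	deployTo(build, 0.05)

	// (4): instrumentation - compare canary behaviour against baseline.
	baseline := errorRate("stable")
	canary := errorRate("canary")
	if canary > baseline*1.10 { // tolerate 10% noise, no more
		rollback(build)
		return fmt.Errorf("canary error rate %.4f vs baseline %.4f: rolled back", canary, baseline)
	}

	deployTo(build, 1.0) // canaries healthy: promote to everyone
	return nil
}

// Stubs standing in for the real CI and metrics systems.
func runFullTestSuite(build string) bool      { return true }
func deployTo(build string, fraction float64) {}
func rollback(build string)                   {}

func errorRate(group string) float64 {
	if group == "canary" {
		return 0.0102
	}
	return 0.0100
}

func main() {
	if err := deployWithCanary("build-1234"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("deployed to full fleet")
}

The point being: the rollback path is part of the pipeline, not a panicked human.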
> However, any outage should have been managed as a major incident internally and various DR and business continuity kicked off. This would be covered in any risk register.
But how do you stop your DR environment from getting affected? Do you run no security software, or run security software from a different vendor?
Exactly that. If you have any sort of ARM Mac, even an M1, you're likely to keep it for a long time.
"as of March 2024, 68 percent of Mac owners had a device older than two years. Four years ago that number was just 59 percent"
That figure really isn't very useful. Almost nobody upgrades their Mac within 2 years - or any other laptop for that matter. Hence all that it shows is how many people decided to upgrade(*) their laptop in the last two years - it doesn't tell you anything about how old their previous one was, nor how long they're likely to keep their current one.
(*) Or bought their first Apple laptop, of course.
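To put numbers on it: if every owner replaced on a fixed N-year cycle, the fraction with a machine older than two years would be (N-2)/N no matter what Apple ships. 68% is exactly what you'd see from a steady 6.25-year cycle, since (6.25-2)/6.25 = 0.68 - the statistic mostly measures cycle length, not sentiment.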
Tritium is what I was thinking of - at about $30,000 per gram if you believe the webs. Tritium breeding from lithium is still currently a pipe dream.
https://www.science.org/content/article/fusion-power-may-run-fuel-even-gets-started
Erm, I don't think any such thing has been shown. On the contrary, fusion experiments have consistently shown over decades that it's nigh on impossible to sustain fusion in a lab environment, even using the most exotic fuels and containers. For some reason, there's now this misplaced optimism that if only you make it bigger and more exotic and much, much more expensive, finally it might work. It's as if steam power couldn't work until you had built the Royal Scot, or the Wright Brothers had no success with heavier-than-air flight until they'd built the Airbus A380.
Furthermore, the best you could hope for in the end is a source of heat in the form of highly energetic neutrons which will irradiate the reactor vessel, and in turn make it radioactive. When JET ran for less than a minute, the vessel could not be entered for a week afterwards. And even if you manage to create this heat and turn it into power, the best you can hope for is a power station that is ludicrously expensive to build, with ludicrously expensive fuel (inherently so) and ludicrously expensive operations and maintenance.
Sure, solar panels don't work when it's dark. But it's always sunny (or windy) somewhere in the world. A trans-global grid is well within the realms of technical feasibility. Sadly, perhaps not politically in the current world we live in.
My next car will be a BEV. I'm just waiting on the price of good used ones to come down.
These are interesting times.
* Citroën are about to start selling the e-C3 at £22,000
* Other brands are responding in kind by reducing the prices of their base spec models. e.g. the Vauxhall Corsa-e "Yes", which was previously introduced as a low-cost entry model and has a list price of £26,895, is now on sale at dealers for £22,500 brand new
* As a knock-on, you can now pick up a pre-reg delivery-mileage Corsa-e or Peugeot e-208 (73 or 24 reg) for around £16,500, or the cooler but less practical Fiat 500e for £19,000
The madness is that this is the same company competing with itself. Citroën, Peugeot, Fiat and Vauxhall are all brands of Stellantis.
I raise you a Western Digital Filecard: a full-length ISA card with a 10MB hard drive and the controller.
"The value of the Hubble constant was the topic of a long and rather bitter controversy between Gérard de Vaucouleurs, who claimed the value was around 100, and Allan Sandage, who claimed the value was near 50"
What sweet irony that the current best measurement is almost exactly half-way between the two :-)
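For the pedants: the arithmetic mean of the two camps is (100+50)/2 = 75, the geometric mean is √(100×50) ≈ 70.7, and current measurements cluster around 67-73 km/s/Mpc. So "half-way" in the geometric sense, almost exactly.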
... because of an arcane rule that judges eligibility based on price per share, rather than total valuation of the company?
Here, I'll start a company with $1000 in the bank. I'll issue one share, and it has a value of $1000 (*). Can I join the New York Stock Exchange now please?
(*) Or it could be worth substantially more, if my company has "AI" in its name
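The arithmetic that makes the rule so arcane: market cap = shares × price, and a reverse split changes the factors without changing the product. A $1bn company with 1bn shares at $1 fails a minimum-price rule (typically around $4 on US exchanges); a 1-for-10 reverse split makes it 100m shares at $10 - the same $1bn company, now eligible.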
> To make something reusable requires 40-50% more capability in the rocket
What metric are you using to measure "capability"?
If you mean payload tons to LEO, then you'd presumably be saying that a reusable rocket has a *lower* capability than an equivalent non-reusable one. Maybe that's true, but you already said that was irrelevant, since "most satellites are much smaller and lighter" these days (your words).
> and a load of on-shore infrastructure to process the recovered pieces. Is all of that worth it for a couple of launches per year?
Even Ariane is planning on ten launches per year. Surely, to be worth it, the only requirement is that the cost of recovery and reuse is less than the cost of constructing a whole new rocket from scratch? They're pretty expensive things to build.
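The break-even is a one-line inequality: reuse pays whenever refurbishment + recovery operations + (recovery infrastructure ÷ the number of launches it amortises over) < the cost of building a new stage from scratch. With first stages costing tens of millions apiece, even ten launches a year gives that inequality a lot of room.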
> SpaceX launches as much as they do mainly due to their own in-house needs for Starlink. They aren't creating a market where there's more outside entities launching things.
I think you'll find SpaceX has a long queue of commercial customers.
Personally I see it the other way round: Starlink is a technology demonstrator for SpaceX, giving SpaceX customers huge confidence that their own payload is in safe hands, plus lots of practice to SpaceX's operational teams. If Starlink happens to make some money on the side, that's a bonus.
But there certainly seems to be a demand for Starlink-type services, and the potential for money to be made. Otherwise why would three or four other companies all be building their own LEO constellations to compete with it?
> But also curious about the social cost of deals like this. So this effectively takes up to half of the NPP's output out of the market at a time when policies are increasing demand for electricity.
It doesn't really matter where they take the power from. Apart from saving a tiny amount in grid losses, data centres burn the same amount of power wherever they are located. As you say, any power they take displaces other grid users and forces more high-carbon generators to be turned on somewhere else.
"It’s not as well-made. It’s not as nice. It’s not as connected."
"Not as connected" is a good thing - especially for goods imported from a different country.
I'd happily buy Chinese solar panels, since there's nothing coming out of them apart from DC. A Chinese Internet-connected EV? Less sure.
> I find the desktop environment is important - apart from launching applications (and how you launch applications matters) it provides panels, integrated applications (matters for file managers and a few others)
You forget an audio/video player, a text editor, a web browser, a mail client, a PDF reader, a full office suite, an app store, and Solitaire and Minesweeper.
Some of the reasons why an "operating system" takes 10GB+ these days.
Exactly that. Why would a pedestrian care how many miles a vehicle has travelled before it hits them? What matters is the proportion of EVs *in their environment*.
Many vehicle miles are covered on motorways, where there are zero pedestrians to hit. But I suspect a higher proportion of those miles are from ICE rather than EV, simply because most EVs aren't well suited to motorway journeys.
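Toy numbers (entirely made up) to show the weighting: if 60% of all miles are motorway miles with a 2% EV share, and 40% are urban miles with an 8% EV share, then EVs do only 0.6×2 + 0.4×8 = 4.4% of total miles - yet the traffic a pedestrian actually meets is 8% EV, nearly double what the per-mile figure suggests.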
From what they said, it was completely the opposite of agile. They admit that they committed the number one cardinal sin of software development, which is to throw what you have away and start from scratch.
You can *always* make incremental changes to get from where you are now, to where you want to be. Admittedly it may seem like starting from scratch is easier, but it's a false economy. What people don't realise is that the current implementation is also the only definitive documentation of the system behaviour, and the features that people are using.
Another company that made this mistake was Mikrotik, when going from RouterOS v6 to v7. They decided to do a rewrite from scratch. Three years later, v7 is still a buggy pile of sh1te. Certainly they needed to do some major things (e.g. upgrading the Linux kernel to a modern one; upgrading the BGP implementation to use multiple cores); but those could have been done one at a time, fixing the issues in each piece as required. Sadly, newer Mikrotik routers only run v7, just as newer Sonos hardware only runs S2.
If the code is "too complex", then refactor it (preserving functionality). If it's weighed down by legacy features, then decide one by one which features to remove, and pull them out. At each stage, maintain a viable working product. Admittedly, all that is pretty much impossible if you haven't inherited an automated test suite - so if necessary, start by building that.
"select high-volume suppliers to use 100 percent carbon-free electricity by 2030"
That's easy - in the same way that UK consumers can buy "100% renewable" electricity for their homes. However it simply shifts the carbon generation to other consumers.
Google "renewal obligation certificates" for the full details, but in short, if a company does actually buy 100% of its electricity from renewable sources it's left with a surplus of ROCs, so it sells them onto other companies who need to show a certain percentage of *their* energy is renewable. This is also why the "100% renewable" home energy suppliers charge the same as normal suppliers.
> I guarantee that this will fail since a fork of the software will be made that will remain open source. Redis tried to pull that misguided stunt a few weeks ago and the only thing that happened was that the open source Valkey was created instead and will now be used by Redis' former customers.
To be more precise: "... and will now be used by Redis' former non-paying users".
There's no particular reason for Redis' paying customers to move to the free fork. They could have used the free version of Redis previously, and chose to pay instead - for various reasons that made sense to them (e.g. support availability, supply chain or licensing policy reasons)
It remains to be seen in the long term, for products like Terraform, Redis and Elasticsearch, whether they remain viable without participation from the wider community. On the one hand, they will certainly lose contributed code, maybe some good testing and bug reporting, and exposure to potential future customers. But those were already companies that had full-time staff doing the majority of development, presumably concentrating on the needs of their paying customers more than the wider community. Especially in the case of Hashicorp under IBM, they now have hooks into all the largest corporations with the biggest pockets - the ones who are happy to pay for everything.