Your assertion that the Japanese population is predominantly lactose intolerant may come as a bit of a surprise to Meiji, established in 1917, which sells a hell of a lot of dairy products in Japan. They have not got a centuries-old dairy farming tradition like other countries, but that doesn’t mean they can’t tolerate it.
Similarly they’ve not got a centuries old tradition of beef farming or even having meat in their diet, but Wagyu is widely seen as the best beef in the world. And, whilst it’s true that there is a genetic factor behind their common inability to metabolise alcohol as other populations can, it’s far from being a universal trait.
I don’t think law can help with the regulation of misinformation, unless it can be clearly demonstrated that it’s somehow dangerous (eg “drink bleach”).
The problem we have is that literally anyone can portray themselves as authoritative online without any checks. Twitter, Facebook et al will quite cheerfully let people write and share a load of tripe without ever once checking that person’s qualification and flagging their content accordingly. This is the social media platforms’ identity issue - they can’t afford to check out people’s identities or qualifications, and so there’s no way for them or others to flag content as “probably wrong”.
I think the only law that makes any sense is one requiring platforms to establish the proper identity (name, address, and bank account number) of users. That way posters of garbage can be gradually excluded on a permanent basis (instead of just opening a new account when they get booted off), or referred directly to the police if necessary.
Where it gets interesting is if one considers the case where, for example, the military have developed the software themselves and are the sole users of it. It’s doubtful that that counts as “distribution”, and so they possibly can use GPL code without having to make the bits they write themselves public.
Years ago some chap did get his name into the detail of some cliff somewhere, on the Isle of Wight I think it was. But that probably got spotted and edited out in later editions.
Edit: Yep, found it http://www.paulplowman.com/stuff/isle-of-wight-map-hidden-names/
Nah, anyone who can fly a plane can land on the top of that tower. All it requires is a sufficiently high windspeed, absolutely no turbulence, and approaching it from downwind. Practically an everyday event! (I am not a pilot, but I am an excellent passenger...).
The (very crude) flight sim for the Research Machines 480Z had an integer wrap around bug. Put the "airplane" into a vertical climb, and wait for the airspeed to drop to 20, 10, then 5, 2, 1 and 0 knots. At that point you'd expect the plane to start falling, tail first. Nope, airspeed was an unsigned integer and it would underflow, meaning you now had an airspeed of 65535 knots straight up, or about M.86. Weeeeheeeee!
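The 480Z sim's actual code is long gone, so this is just a minimal Rust sketch of the arithmetic: an unsigned 16-bit airspeed decremented past zero wraps around to 65535.

```rust
// Minimal sketch of the flight sim's airspeed bug: airspeed held in an
// unsigned 16-bit integer, decremented each tick during a vertical climb.
// C-style unsigned arithmetic silently wraps; Rust makes the wrap explicit
// with wrapping_sub (plain `-` would panic in a debug build).
fn tick(airspeed: u16, decel: u16) -> u16 {
    airspeed.wrapping_sub(decel)
}

fn main() {
    let mut airspeed: u16 = 2;
    for _ in 0..3 {
        airspeed = tick(airspeed, 1);
        println!("airspeed: {} kt", airspeed);
    }
    // prints 1, 0, then 65535 -- the "straight up at 65535 knots" moment
}
```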
I've always liked the Power range of CPUs - all sorts of nifty add-ons that boggle the mind. Nice to see IBM keeping it going even now.
One thing did catch my eye though, and my point has nothing to do with Power as such. From the article:
"We're told Power10's architecture has a focus on accelerating matrix math operations for use in AI,"
Somehow the term "AI" loses its mystery if all it comes down to is matrix maths. As the great Dr. H Fry has it, it's just old statistics ideas played out on a scale larger than the originators ever had in mind.
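For what it's worth, the point is easy to see in code: a single neural-network layer really is just a matrix-vector multiply, a vector add, and an elementwise nonlinearity. A toy sketch, with weights and inputs made up purely for illustration:

```rust
// One neural-network layer: y = f(Wx + b). W is a 2x3 weight matrix,
// x a 3-vector, b a 2-vector of biases, f here the ReLU nonlinearity.
// All the numbers are invented for illustration.
fn layer(w: &[[f32; 3]; 2], x: &[f32; 3], b: &[f32; 2]) -> [f32; 2] {
    let mut y = [0.0f32; 2];
    for i in 0..2 {
        for j in 0..3 {
            y[i] += w[i][j] * x[j]; // the matrix-vector multiply
        }
        y[i] = (y[i] + b[i]).max(0.0); // add bias, apply ReLU
    }
    y
}

fn main() {
    let w = [[1.0, 0.0, -1.0], [0.5, 0.5, 0.5]];
    let x = [2.0, 1.0, 4.0];
    let b = [0.0, -1.0];
    println!("{:?}", layer(&w, &x, &b)); // [0.0, 2.5]
}
```

Stack enough of these on top of each other and you have a deep network; the "AI" is in the learned numbers, not in the maths.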
I guess so, but it still feels to me that if an application has knowledge that can be used to aid the kernel in scheduling threads then it might just as well bypass the kernel altogether. I know there are shortcomings to that approach (signal delivery, major revision to source code, etc), but in the quest for ultimate speed all sacrifices must be made. This mod feels like a halfway house to avoid rewriting a lot of code “properly” (granted that is subjectively judged, depends on the cost benefit analysis, etc).
I wonder if Fuchsia has this in it already?
It is? Well now I’m tempted to have a new battery put in my old SE. It’s still perfectly usable otherwise, and if there’s the prospect of at least security updates for a few more years it’d make a perfect backup phone, just in case.
I’m a BlackBerry traditionalist by nature, but I have concluded after trying both Android and Apple that Apple are the better bet. Ah, if only BlackBerry had been first with BB10, instead of last... Oh well. At least Apple seem to believe in software maintenance, even if the end result is occasionally a bit haphazard.
Well, the SR71 was a bit different, because the bypassed air was fed back into the exhaust rather than going around the engine to provide thrust directly. Apparently the idea for doing this was spur of the moment. They knew they had to bleed large amounts of air out of the compressor at high speed because of the ram compression the inlet was getting. They were thinking of just dumping the air. But a moment of inspiration led to them changing the design so as to put the air back into the engine at the afterburner, and this is what led to it being termed a turbo ram jet, and added usefully to the efficiency.
Though the bulk of that power plant's efficiency came from the inlet compression. The inlets provided heaps of compression, meaning that there was far less need to use engine power to run the actual compressor itself. So that power went out the back as thrust. The faster it went, the better this worked.
Concorde did effectively the same thing, though with a totally different inlet design philosophy.
Incidentally, the inlets of Concorde, F15, B1A, Panavia Tornado, and various Migs and Sukhois - sloped, cut-square inlets - are like that because it makes them fairly robust to upsets in the air flow. The round inlets of the SR71 - operating on the limit at the best of times - were sensitive to airflow disturbances, causing the infamous un-starts.
So the SR71 inlet was all about peak performance at any cost, Concorde's was all about good performance without risking spilling the champagne.
These days there’s plenty of supersonic turbofans, as fitted to most fighter jets today.
Concorde used the Olympus because a lot of the development of that engine into a supersonic-capable power plant had been done on the TSR2. The French company SNECMA did the afterburner, in place of the TSR2’s; Concorde needed only a small afterburner.
Just a few years later the RN was flying F4 Phantoms with RR Speys. The age of supersonic turbofans had arrived pretty much as Concorde went into service.
If Google do drop Linux for Fuchsia, well that'd be billions of mobile devices moving away from using Linux.
I can't see Linux falling out of favour in server applications though. Whatever issues one might have with Linux's license and / or design and how it pans out on mobile devices, those issues don't impact on servers one little bit.
And with Linux seemingly being open to using Rust - presumably possibly encompassing a wholesale move over to it, eventually - there's a chance that the Linux world will start to incorporate Rust's benefits in its security strategy. It remains to be seen whether or not that happens, of course. But it seems that multiple OS teams are beginning to think about Rust at about the same time, Linux included, so I think there's a good chance that Linux won't get left behind as a pure-C dinosaur if Rust proves to be the way to go. So it will probably remain relevant for a good while yet.
You might like to take a look at Redox OS - written in Rust.
What's impressive about that project is how little time there was between inception and having something semi-decent running; really, not long at all.
Then there's the option of re-writing Linux. Or Windows. With C OSes I think you could have a rolling programme of conversion of the code base from C to Rust - AFAIK the two languages integrate just fine - and eventually end up with a predominantly Rust code base implementing essentially the same OS.
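On the integration point: the mechanism is that Rust can speak the C ABI directly, which is what would make a function-by-function rolling conversion possible. A minimal sketch, with the function name and signature invented for illustration:

```rust
// Rust interoperates with C via the C ABI: `extern "C"` fixes the calling
// convention and `#[no_mangle]` preserves the symbol name, so existing C
// code could call this as `uint32_t checksum(const uint8_t *buf, size_t len);`
// with no wrapper. (Function name and signature are made up for this sketch.)
#[no_mangle]
pub extern "C" fn checksum(buf: *const u8, len: usize) -> u32 {
    // Trusting the C caller's pointer/length pair, exactly as the C
    // function it replaces would have to.
    let bytes = unsafe { std::slice::from_raw_parts(buf, len) };
    bytes.iter().map(|&b| b as u32).sum()
}

fn main() {
    let data = [1u8, 2, 3, 4];
    // Called from Rust here for demonstration, but the point is that the
    // rest of a C code base could keep calling it unchanged.
    println!("checksum = {}", checksum(data.as_ptr(), data.len())); // 10
}
```

The unsafe block is the honest price of the boundary: inside a converted function the compiler checks everything, but at the C interface you still vouch for the caller.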
None of that means that the Userland also has to be translated - sensible though that might be here and there - but with a Rust kernel one could be confident that the kernel's memory correctness was more assured.
I think that the world has missed a trick. Borland back in the 1980s and early 1990s had some fantastic TUIs, and they were really, really good. All of that got thrown out with the arrival of the GUI, and now we’ve successfully screwed things up to the extent where it’s hard to have a GUI app remoted to a terminal (like we could with X applications), so people are reinventing TUI apps for use in SSH shells. It really is quite pathetic.
Quattro Pro was a masterpiece of a TUI.
Yes, and that’s something that was allowed to happen because it was thought that the aircraft’s fundamental safety case rested solely with the 1960s hydraulic/electrical/mechanical flight controls. Turned out that was wrong.
The blame thing is interesting. I’ve read elsewhere that the programmers working in the company that Boeing contracted to write the software for MCAS were concerned about the design they were being told to implement. So much so that apparently the company reported the concerns back to Boeing asking, “are you sure this is what you want?”. To which Boeing replied yes, get on with it.
That’s probably the most valuable email chain in history.
It does rather beg the question though: where are specialist large aircraft going to come from? Is the freight market alone big enough to justify the development of aircraft that large?
I don't think it is.
Then there's the Antonovs, which provide a fairly niche but none the less critical heavy lift capacity to the world. They've got only one AN-225 flying, but the revenue hasn't been enough to allow Antonov to complete a 2nd one (the Chinese are now paying for it, and will get it and a series production run).
And it looks like far off future purchases of Air Force Ones are going to have to settle for a poxy little twin jet. The ones that are slowly being put together for Trump will probably be the last ones that large.
So it looks like sub-sonic large aircraft have had their "Concorde" moment, in that the biggest, fastest ones we're ever going to see are now beginning to disappear, probably forever.
Boy, is humanity losing its mojo or what?
It's been pointed out that, had Boeing done a new aircraft instead of the 737MAX, it would have been flying well before now and cost less to develop than Boeing spent on handling the MAX crashes crisis.
Goes to show that accountants are incurably penny-pinching, pound-foolish, and should on no account be allowed to influence company decisions.
IANAE, but I think that while the 1960s were great for mechanical engineering, that will only get you so far. None of the designs of that era is good enough for modern expectations of efficiency and noise levels. For that you need a merging of mechanical and electronics engineering, and in that respect we are in a pre-1940s era, where we are yet to perfect the approach.
There were a lot of clever things done in mechanical engineering then. For example, the manufacture of Concorde relied on CNC machining, developed for the purpose. That was in parallel to Lockheed Martin doing the same thing for the machining of titanium structural parts for the SR71.
But to be honest I think that, fantastic steps though they were in the 1960s, nothing compares to what's happened in the 2000s and 2010s. We now have the ability to machine huge parts (e.g. A380 wing spars) to very tight tolerances, and indeed those spars themselves were impossible without the British development of friction stir welding (now proudly touted by the likes of SpaceX and Boeing in their rocket construction). The quality and performance of the materials can also be extremely high today, allowing designers to really lean on their properties in their designs. We have to acknowledge that 3D printing (in metals, at any rate) is making big inroads into mechanical design. And with CAD being so immensely more capable than it was even 20 years ago, a mechanical design can be iterated a huge number of times without anyone ever so much as glancing at a sheet of metal.
But for all the sophistication of modern mechanical engineering, somehow it doesn't feel quite so heroic. Ah well, that's progress I guess.
Only pushing the certification rules to their limits brought about the 737 problem; the Max is virtually a new aircraft.
The 737MAX is not a new aircraft in any meaningful way. It is an old aircraft with added bodges to allow it to carry modern engines and modern avionics. In terms of structural strength, primary controls, and cabin dimensions it is unchanged since the 1960s.
One of the major criticisms is that the fuselage, unchanged since the 1960s, has inferior crash survivability compared to today's standards. For example, if you look at some recent A320 crashes - e.g. Sully's river landing and the Russian one that had a twin engine-out shortly after take off - they both stayed pretty much intact, to the considerable benefit of all on board. A 737 is far more likely to break apart, to the detriment of all on board.
The primary controls are unaltered since the 1960s, except that the trim wheels got shrunk. Fundamentally it's a levers-and-wires control system that still relies on pilots moving them, and the safety case up until the MAX was based on the design of this and not on the electronics that have been added. This has allowed Boeing to implement all that electronics without the triplicate redundancy that is standard these days, because it plays no part in the "safety" of the aircraft. This was fine right up until MCAS, which they wired in permanently, and the FAA was misled as to what authority it had over the levers / wires.
The shape of the aircraft is also largely unaltered. There have been a few trimmings - winglets, etc - but it's otherwise unchanged. The cabin compares very unfavourably with more modern designs, with the A320 and A220 having faster turnaround times simply because passengers can get on / off more easily.
Both 737 and 747 were 1960s engineering that formed the platform for continued update and improvement for decades
Not as much as you'd think. The 737 in particular hasn't really changed at all, bar engines.
The problem those legacy platforms have had is that their certification was based on old technology - levers and wires controls. Airbus with their fly-by-wire tech introduced at the end of the 1980s has meant that they can make really quite substantial design changes - e.g. the A320neo, A330neo - but truly keep the flying of the aircraft the same.
They've also been helped a lot in that the delta between the design standards of the 1990s and today isn't as great as that between the 1960s and today. So they've not fallen so far behind what's current as to stretch the bounds of credibility. Just look at that other 1990s hit, the 777; BA had one of those land very short and very hard, it stayed in one piece and everyone got away with it more or less (1 broken leg?). Apparently, despite the damage, it wasn't immediately clear that the airframe was a write-off. It's unlikely that anything from the 1960s would have stayed intact so well.
Airbus knew what they were doing with the A320; they designed it to be lengthened and given MTOW increases; that's why it's got such tall undercarriage for its size. Boeing haven't been able to do this so successfully with the 737 because in the 1960s it had to have short undercarriage so that it could carry its own steps around with it (a lot of regional US airports at the time were little more than a runway and a shed). For Boeing, fixing that would inevitably mean a whole new airframe, something they've steadfastly refused to embrace.
Yep, I agree with all that. I always felt that an independent ARM trod a very fine line between asking too much for licenses, and not asking enough. With a $32 billion acquisition hanging round the business's neck, well.
Personally speaking I think that there's room for growth yet for ARM with their current business model and pricing. Apple's move, and Windows 10's availability for ARM, might be about to kick off an ARM revolution in desktops, simply because app developers are getting lots more prodding to support multiple architectures (like the OSes do / will). This would be at the expense of Intel. It'd be ironic if Softbank sold up and then missed out on that.
From the article:
Airbus has plenty of fingers in the Skynet pie, having been involved in Skynet 5 since 2003, and has had a hand in all Skynet phases since 1974
Well, Airbus is simply the latest in a string of corporate ownerships of the organisation and sites building satellites. I expect the people there have got used to changing the name plate after all these years, seems to be an essential skill for engineers these days in some sectors.
Rather the opposite I think. It was intended to make it easy for signallers in the field to reliably encrypt and decrypt messages with a minimum likelihood of error. You set the wheels to the daily settings, key in the message, write down what comes out, and then start mashing the morse key. It wasn’t foolproof, but it wasn’t difficult and it was certainly a whole lot easier than doing an encryption by hand.
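That operating drill can be sketched with a toy machine. This is a vastly simplified, entirely invented one-rotor design (real Enigma had three or four rotors, a plugboard, and historical wirings), but it shows the two properties the post relies on: setting the wheel is the whole key ceremony, and the reflector makes encryption and decryption the same operation.

```rust
// Toy one-rotor "Enigma" sketch. The wiring and settings are invented;
// only the principles are real: the wheel steps on every keypress, the
// signal goes through the rotor, a reflector, and back, and the reflector
// (a fixed-point-free involution) makes the whole cipher self-inverse.
struct Toy {
    rotor: [u8; 26],   // forward wiring
    inverse: [u8; 26], // inverse wiring
    pos: u8,           // wheel position: the "daily setting"
}

impl Toy {
    fn new(pos: u8) -> Self {
        let mut rotor = [0u8; 26];
        let mut inverse = [0u8; 26];
        for i in 0..26u8 {
            let w = (5 * i + 7) % 26; // an arbitrary fixed permutation
            rotor[i as usize] = w;
            inverse[w as usize] = i;
        }
        Toy { rotor, inverse, pos }
    }

    fn key(&mut self, c: u8) -> u8 {
        self.pos = (self.pos + 1) % 26; // wheel steps on every keypress
        let x = (c + self.pos) % 26;    // in through the moving wheel
        let y = self.rotor[x as usize];
        let z = 25 - y;                 // reflector: pairs i with 25 - i
        let w = self.inverse[z as usize]; // back out through the wheel
        (w + 26 - self.pos) % 26
    }

    fn run(&mut self, text: &str) -> String {
        text.bytes()
            .map(|b| (self.key(b - b'A') + b'A') as char)
            .collect()
    }
}

fn main() {
    let cipher = Toy::new(11).run("ATTACKATDAWN");
    let plain = Toy::new(11).run(&cipher); // same setting decrypts
    println!("{} -> {}", cipher, plain);
}
```

The reflector is also why no letter ever encrypts to itself, a genuine Enigma trait that Bletchley exploited.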
The Polish contribution was indeed critical, and preceded anything that was done in the UK.
Though I prefer to place even greater emphasis on the other aspect of the Polish achievement; hope. Their work was inspirational and showed that it could be done.
Looked at in this sense, at the time when it was still so early in the war and the Germans were seemingly invincible, the fact that the Poles had shown so early on that the German forces weren’t quite so universally unbeatable as many feared them to be could have had a galvanising effect on the senior British leadership. There was a chink in the German armour. It was the Poles who found it. Perhaps there’ll be another, was probably what was being thought.
That’s a very special kind of hope, born as it was from the appalling destruction of one country by another.
I think the trade in such artefacts is ghoulish. These are tainted goods after all, not something to be venerated.
The German fragmentary effort in a lot of domains reflected a deliberate policy the Nazis had of keeping the various military branches at each other’s throats; to make coups less likely. They sowed the seeds of their own destruction...
This also showed up in all their crazy inventions. Nutcase inventors would trundle round the different armed force branches looking for funding, and so had several chances of seeing their ideas put into development. Hence the tornado generator, and all sorts of crazy ideas. Whereas in Britain there was only one place an inventor could go, a committee specifically for the purpose of evaluating contributions.
Another aspect of the deceit around how the Allies were targeting U-boats was the double cross system. Through this, at RV Jones’ suggestion, captured German agents sent back misinformation about how infrared detectors were being used to spot surfaced boats. This hinged on the fact that the Germans knew who Jones was and that he’d got a background in IR detection.
The Germans went to some lengths developing an IR-masking paint - it had lots of glass granules in it - for U-boats. Having done this, they could only wait to see if it had a positive effect on their loss rate. It didn’t, and another few months slipped past.
It’s not quite true that the Germans believed Enigma or Lorenz to be impregnable. They knew of various flaws, and understood the theoretical level of effort required to brute-force exploit them.
What they didn’t count on was clever chaps like Turing finding a short cut or two, nor did they reckon on someone like Tommy Flowers getting Colossus together (his electronic approach exploited a flaw the Germans knew about, but they’d not thought anyone would be able to make a machine fast enough to use it), and they certainly didn’t count on the British putting in such an enormous effort so consistently when that resource might otherwise have gone into fighting.
In short, the Germans were well primed with the necessary information to be able to deduce that Enigma and Lorenz were broken, had they ever been presented with incontrovertible examples of “How could they have known that?”. The care taken by the British in handling the take from German comms traffic was certainly well worthwhile.
Trump's statement shows that the ban is purely a political attack on China.
Of course it is but that doesn’t make it a bad thing. If you can prove that it’s entirely about US domestic politics and nothing at all about China’s reprehensible record in almost everything, their deceit over covid19 being only the latest, then you may have a point.
Trump banned, well put a 300% tariff on, Canada's competitor to the Boeing 737 - that sounds like bullying
He tried to, but the legal system stopped him doing so. That’s the difference between China and the US (and other democratic societies) right there.
I’ll drink to that!
It’s worth reading up on Hayabusa 1 if you’re not already familiar with it. The whole thing reads like the transcript from a torture chamber for spacecraft mission controllers. So much stuff broke, but somehow they managed to work around the problems. And they got lucky; even the sampling went wrong, with the impact far feebler than planned. But they still got some material and, astonishingly, the partial failure meant that what they got was predominantly surface material, which was far more useful scientifically speaking.
Well, it works both ways round. If the samples are kept sterile, then that’s because there’s a containment that prevents contamination moving across its barrier in either direction.
There are some sort of international agreements about the cleanliness of spacecraft visiting other objects in the solar system. Any craft visiting Mars or objects further out is supposed to be very sterile indeed to try and ensure that we don’t ruin future “tests for life” experiments.
When Musk did that show-off launch of a Falcon Heavy with a Tesla on it, they broke this agreement. It wasn’t sterile, because it was supposed to be launched into an orbit that could never reach Mars. However it turned out that the orbit achieved goes beyond Mars, significantly so, to the point where I find it hard to believe it wasn’t deliberate. One day that thing may come down on Mars and, whilst it’ll get comprehensively trashed by the atmospheric entry at Mars, we know that microbes can survive such experiences lodged inside fragments that survive. We’re unlikely to spot that happening, and if it does it will compromise tests for life on Mars. Musk’s ego 1:0 science. They didn’t give a damn about astronomy either when they started launching StarLink, and only now, after quite a lot of fuss, are they taking some measures. And AFAIK the StarLink satellites have no reliable means of being deorbited (there are already 2 that have been abandoned in their orbits), so they’re creating the potential for vast quantities of space junk in LEO.
Yes, it's got some very appealing ideas in it. Someone described Rust as making all those difficult things programmers were supposed to know about memory (lifetimes, sharing, mutability, etc.) compulsory, and it turns out that an awfully large number of us simply cheat all the time in C/C++.
It certainly feels like a radical idea, bringing some high-level language safety to a systems language. Ada had a runtime (of varying size, depending on platform), Java had a bigger one, and almost everything since has had a runtime or interpreter of some sort. And then up pops Rust with many of the same features (plus more), compiled to native code.
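To illustrate what "compulsory" means in practice, here's a small sketch of the rules the compiler enforces. The commented-out lines are exactly the kind of thing C would happily compile and Rust rejects at build time:

```rust
// The memory rules C programmers are supposed to follow by hand, enforced
// by the compiler: one owner per value, either many readers or exactly one
// writer, and no use-after-move.
fn main() {
    let mut log = vec!["boot".to_string()];

    {
        let reader = &log; // shared (read-only) borrow
        println!("entries so far: {}", reader.len());
        // log.push("oops".to_string()); // rejected: can't mutate while borrowed
    } // shared borrow ends here

    log.push("shutdown".to_string()); // exclusive access again, so this is fine

    let moved = log; // ownership moves to `moved`; `log` is now dead
    // println!("{:?}", log); // use-after-move: a compile error, not UB
    println!("{:?}", moved);
}
```

None of this needs a garbage collector or runtime; it's all checked statically and compiles down to the same code the equivalent C would.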
I've been hoping that Linux would start considering embracing Rust. That it seems to be happening quite sensibly is very encouraging for the long term relevance of Linux. I had been getting worried that with MS (Windows), Google (Fuchsia) and Redox considering becoming, or already being, Rusty that there was a danger that Linux would become the last of the breed of ancient C dinosaurs with endless scope for memory faults. Seems not, and that's good news for us all.