Re: Russians, alcohol, making toasts
401 posts • joined 3 Mar 2010
PayPal is perfectly happy to assume the role when they figure they can hang onto the money for themselves, or at least put it to work for a while on the short term money market while they "investigate".
BTW, why shouldn't they suspect fraud in some rando selling job lots of brand new second hand laptops? They claim to have found it when an online haberdasher doubled their sales during the pandemic.
It allows for them to be cut down to smaller sizes while allowing for the saw kerf.
OTOH pre-cut stick lumber is cut 'accurately' to length because the machinery that does the cutting is now fully automated and PHBs are cheap bastards. If you want 'overs' buy full (6m) lengths and have it cut to your specifications.
"...Some of the failures are harmless. Some annoying. A few are kind of frightening."
Just like when dealing with human idiots in all walks of life.
When used as intended Tesla's AP outperforms unaided drivers. When used incorrectly AP still does a better job than idiots who are: applying makeup; drunk; tired; texting; eating; or otherwise doing something other than concentrating on the primary task at hand.
It's kind of like the approach of Boeing vs. Airbus to flight management systems. Boeing tends to operate on the principle that an engaged pilot, who always has the final say, is better able to react when things turn to sh!t. Airbus concentrates on preventing a disengaged pilot from doing something stupid that turns things to sh!t. Both approaches have their merits and their flaws. Each outperforms the other in different circumstances.
Perhaps I should have added "...by the general public."
I'm perfectly aware of the adage "To err is human, to really f*ck things up requires a computer."
However, I'm also aware that when autonomous or semi-autonomous systems are involved in an incident, blame is all too often placed upon the machinery rather than the humans who blithely ignored procedure to squeeze out a little extra productivity, to save a little money, or out of sheer laziness.
I'm not sure which is worse, the otherwise highly educated people who decide it's impossible to learn the most basic of computer skills, or those who refuse to learn because they have "you" to deal with that sort of thing.
Then there are those who say, "There was an error message, but I closed it because it was in my way."
I got to "supervise" the changeover from manual to automatic exchange when I was 11 or 12. 3 metres of 100-pair kept me in hookup wire for many years. And while I discovered that butyl rubber might work like Blu Tack when it came to hanging pictures, it wasn't so residue-free when taking them down.
Part of Boeing's problem is their continuing reliance on "wetware" in an increasingly automated environment. Too many of their systems still rely on a human in the loop to look at an instrument and think "Yeah, that's not right." when it misbehaves.
All well and good provided the human sees the problem soon enough, and intervenes quickly enough, but absolutely deadly when the computer gets the "bit between its teeth". Thus there were several near misses with Boeing's flare retard landing system and MCAS before disaster inevitably struck.
My guess is that Adobe created a link to the photos folder in its own application folder tree and then recursed through the photos folder when it wiped the application in preparation for the new install.
Bigger question is, how the hell did this update make it from development to production without this being detected? It's not like this is some obscure edge case bug. FFS El Reg regularly reports cases of application updates trashing customer data as soon as they hit the real world, so why the hell aren't the developers testing on actual real world systems (or copies thereof) instead of pristine devices fresh out of the box?
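The suspected failure mode is easy to reproduce in miniature. Here's a hedged sketch (hypothetical paths, not Adobe's actual code) of a recursive delete that follows symbolic links and therefore descends into a linked-in folder and wipes its real contents:

```python
import os
import tempfile

# Set up: an app folder containing a symlink into the user's photos.
root = tempfile.mkdtemp()
app = os.path.join(root, "app")
photos = os.path.join(root, "photos")
os.makedirs(app)
os.makedirs(photos)
open(os.path.join(photos, "holiday.jpg"), "w").close()

# The app keeps a symlink to the user's photos inside its own tree.
os.symlink(photos, os.path.join(app, "photos_link"))

def naive_wipe(path):
    """Recursive delete that follows symlinks -- the bug."""
    for name in os.listdir(path):
        p = os.path.join(path, name)
        if os.path.isdir(p):      # True for symlinks to directories too!
            naive_wipe(p)
        else:
            os.remove(p)

naive_wipe(app)
print(os.listdir(photos))   # [] -- the user's real photos are gone
```

Notably, Python's own `shutil.rmtree` refuses to traverse symlinks for exactly this reason; a hand-rolled cleanup routine that checks `isdir()` without an `islink()` guard does not.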
Not just a sensor. Disk ][ drives and their controllers lacked 90% of the chips and other support hardware present in contemporary disk drives. Instead everything, right down to the level of reading and writing individual bits was done using very clever code built around the exact timing of individual microprocessor instructions.
Long, long time ago, way back when, our CS lecturer managed 5 very basic errors in 5 lines of code. (mismatched brackets, missing semicolons, wrong variable name) We took great delight in pointing them out one by one and watching her read through the corrections without spotting the next one. Ended with her calling us smart arses and telling us to "F*ck off" and work on our assignments.
Tesla makes no such claims. Tesla repeatedly warns drivers that they must retain control. The problem is that drivers discover that under most circumstances they can get away with not paying the attention that they should, and then one day the world blows up in their faces. But of course it's not their fault for doing exactly the thing they were repeatedly told not to do.
Tesla's "autopilot" feature also presents multiple warnings before engaging. The real problem is that no amount of product warnings can prevent idiots from being idiots.
If anything, the problem with Teslas is that under normal driving conditions, Tesla's "autopilot" works too well and drivers get away with eating, reading, playing games and otherwise doing anything but paying attention to their driving again and again. Until one day they don't.
...amateurs are at greater risk.
It is not uncommon for police and emergency workers at accident sites, people who know exactly what they are doing, to be hit by rubberneckers and otherwise inattentive or distracted drivers. A milling crowd of bikers ended up becoming part of a bigger problem than the one they were solving.
Motorway laws are strict for a reason. "No stopping for any reason.", not except when it seems like a good idea at the time. You can end up with hefty fines for running out of fuel, or breaking down due to a known problem. If there are already people rendering assistance, you're more likely to make the problem worse than help.
What they had here was a tank full of liquid sitting on top of a tank they were trying to keep stiffened with pressurised gas. Under normal circumstances both tanks would either be empty or filled with cryogenic fuels. However, we have yet to see a fully built rocket, which will have to support a fully loaded payload section on top of an empty tank section until fueling takes place on the launch pad. This means figuring out how to keep those tanks stiffened until then.
Actually, it's highly unlikely that Raptor engines can throttle down that far. The current Merlin engines certainly can't. Plus it takes extra fuel to ease down to a 'kiss the ground' landing.
Instead SpaceX rockets land using a so-called hoverslam (or "suicide burn") manoeuvre. Very roughly speaking, a rocket descending at 20 m/s lights its engine at about 20 m, holds roughly 1 g of net deceleration for 2 seconds, and cuts the engine at the moment of touchdown.
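Back-of-the-envelope, the constant-deceleration kinematics work out like this (illustrative numbers only, not SpaceX's actual guidance algorithm, which also has to handle throttle limits and varying mass):

```python
G = 9.81  # m/s^2

def hoverslam(v0, net_decel_g):
    """Return (ignition_altitude_m, burn_time_s) so that velocity
    reaches zero exactly at the ground, starting from a descent
    speed of v0 m/s under constant net deceleration."""
    a = net_decel_g * G           # net deceleration (thrust minus gravity)
    burn_time = v0 / a            # from v0 = a * t
    altitude = v0 ** 2 / (2 * a)  # from v0^2 = 2 * a * h
    return altitude, burn_time

h, t = hoverslam(20, 1.0)  # 20 m/s descent, 1 g net deceleration
# h is roughly 20 m, t roughly 2 s
```

Note that 1 g of *net* deceleration means the engine is actually producing about 2 g of thrust, since gravity is still pulling the rocket down.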
My experience has all too often been, "This is how you do it. No, you don't need to know why. You're just a temp called in to take up some slack. Just do it the way I showed you."
And then one day after the company suffered a 'temporary embarrassment' and restructured, I was called back as the sole full timer, the three previous full timers having moved on. The boss and I built a batch of machines 'the way it had always been done', but this time I was allowed to ask questions and take notes. We then built a couple of dummies minus the expensive special order parts, (one of each model) which I then took apart, measuring every wire and hose as I went.
Next I hung every spool of wire on a length of gal pipe, taped a length of broken measuring tape down to the work bench and made a few other changes to the workshop layout, so that I was able to do a lot of advance preparation before assembly began.
Previously, four of us had turned out 22 machines in a calendar month. After my changes, I plus a half time electrician built 20 in 4 weeks. All because I now knew what I was doing and why I was doing it.
This is the state of affairs and it isn't going to change, so we're just going to have to figure out how to implement this in the most effective and least intrusive way possible: only high quality images in the database, preferably including part or full profile; a secondary camera to zoom in on subjects of interest for a high-res comparison; decently specced computers; parameters tuned to err on the side of caution; and a human in the loop to make the final call.
Yes it matters. Particularly with [semi]autonomous machine learning. Such a system misclassifying an object as "looks like, but isn't" puts that object squarely in the path of an autonomous vehicle seeking to avoid a collision with another object clearly classified as "must avoid".
We generally don't know exactly how a neural net builds its recognition matrix, so it's conceivable that it might pick up on some minor/irrelevant detail common to untagged objects in its training data and end up putting all such objects into its ignore bin.
The real fun is with edge cases such as the occasional unicyclist, trike or recumbent bike. A system trained on data containing untagged objects offers the real and dangerous possibility of a "bin" for objects that don't neatly fit into any of its trained classifications.
They're called ephemeris tables, and they're published, free of charge, by the satellite operators and various spacewatch organisations. So that just leaves planning observations around not pointing where a satellite will be illuminated by sunlight, or briefly shuttering the sensor as one passes by. This is easily within the capabilities of modern telescope pointing software.
I think you'll find that the idea predates Musk by a decade or five. Remember Iridium? Musk is simply the first to have the wherewithal to do it economically.
Something else to consider, the cheap launch capacity that makes Starlink possible also means that we could easily launch half a dozen or so Hubble class (perhaps even better) instruments into orbit for about the same price as the original Hubble mission.
At DAWN and DUSK, with a little bit of overflow into full darkness. Strangely enough, most serious observers don't fire up their instruments until it's really really dark.
Furthermore, the field of view of most astronomical instruments is only a tiny fraction of that of the Mk 1 eyeball, nor are they easily distracted like the human mind. Any given satellite will be in the field of view for seconds or less, and I suspect it would be a trivial exercise for a telescope taking a long exposure to "blink" as a satellite flew past. So even with 20-30,000 satellites swanning around up there, very few visible light observations are going to be ruined by random intruders.
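The "blink" idea is simple to sketch. Here's a toy illustration (my own, not any real observatory's scheduler): given pass predictions derived from published ephemeris data, compute when to hold the shutter open during one long exposure:

```python
def shutter_schedule(exposure_s, passes, pad_s=0.5):
    """Return open-shutter intervals for one exposure, closing the
    shutter around each predicted satellite pass (with a small safety
    pad). Times are seconds from the start of the exposure."""
    intervals, t = [], 0.0
    for start, end in sorted(passes):
        close = max(0.0, start - pad_s)
        reopen = min(exposure_s, end + pad_s)
        if close > t:
            intervals.append((t, close))
        t = max(t, reopen)
    if t < exposure_s:
        intervals.append((t, exposure_s))
    return intervals

# A 300 s exposure with one 2 s pass predicted 100 s in:
sched = shutter_schedule(300.0, [(100.0, 102.0)])
# -> [(0.0, 99.5), (102.5, 300.0)]
```

So a single satellite crossing costs about 3 seconds out of a 5 minute exposure, roughly 1% of the light.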
Radio astronomy is a bigger concern since the satellites will be "visible" any time they are in the sky, but even there the impact is small, since by international treaty satellites don't transmit in the bands used for radio astronomy. So again, no great concern.
As for who benefits: all of outback Australia; Siberia; most of Africa; big chunks of South America; Canada; Alaska; international aviation; shipping; AND almost certainly Elon Musk since he wouldn't be doing it if he didn't see a profit in it.
In theory true. In practice not without the whole world knowing that you're doing it. Commercial operators trying to enforce their monopolies would quickly find themselves on the losing side of a lawsuit, and even repressive states have to deal with the rest of the world. Remember, even the Soviet Union gave up on jamming the Voice of America.
Given that there are still planes flying that use metal foil recorders, and that many "black boxes" have been found to be non-operational after crashes it's entirely possible that Tesla's data recorders are better (at least on average) than those used in commercial transportation applications. 1) because a lot of installed kit may be 20 years (or more) old, and 2) in the interest of maintaining compatibility, even brand new kit is often based on reference designs that are years or even decades out of date.
Certainly SpaceX's telemetry and data logging is far more comprehensive than that of others in that industry, and Elon boasts that there is a great deal of technology sharing between SpaceX and Tesla Motors. Thus it is very likely that Tesla's data recorders represent industry best practice and may well be better than the vast majority of those used in aviation.
Anomalies in the data or not, pretty much all of the models have been bang on the money. More to the point, that's where we are when we go with the worst case forecasts. We have been wrong and wrong again every time predictions have been made using "realistic" numbers.
Australia, where I live, is already experiencing conditions equivalent to the 1.5 degree increase that's supposed to be the goal we're shooting for 30 years from now. So far this year fires have burnt more than 5 times the area of the recent Amazon fires, or 6 times the size of the California wildfires. And it's an absolute certainty that things are going to get a hell of a lot worse before we see enough rain to bring them under control. At the rate things are going, we might well run out of bushland to burn before the rains appear.
All else being equal, the long term forecast based on natural climate cycles (orbital eccentricity, precession of the equinoxes, etc.) has us in a period that should be dominated by a cooling trend, i.e. an ice age should just about be upon us. Instead we are experiencing conditions that have not been seen on this planet for more than 3 million years.
What is not equal is the concentration of CO2 in the atmosphere, and that is 100 percent down to us humans. We're pumping out at least twice as much CO2 as the planet is able to cope with. Probably more, since the higher atmospheric concentration does to some degree drive a higher uptake rate.
The time for scepticism is long gone. Heat records all over the globe are being smashed again and again; droughts are unprecedented; forests that have not seen fire in centuries and millennia are burning; storms are more extreme. Every indicator tells us that climate change is here and as bad as or worse than every prediction.
Ran this bad boi exactly once in my life. And spent an entire weekend using Norton Disk Doctor to rebuild the hard disk from scratch. Not a bad effort if I must say so myself, given that I had essentially zero prior knowledge of IBM disk structure.
Deleted it on any and every PC that I had anything to do with after this little lesson in life.
Yes the buck stops with him. However, what of those who originally proposed the MCAS system, who implemented it, or who signed off on it as ready for deployment/fit for purpose? Those heads should be the ones rolling or alternately the arses booted with great vigor.
Boeing has gotten away with dual redundancy instead of industry best practice triple for as long as it has because, on Boeing aircraft, the human pilots have traditionally had ultimate control. Presented with an instrument showing an obviously wrong reading, the pilot would simply ignore it and use the good one, or another instrument from which the necessary information could be extrapolated, e.g. the artificial horizon in the case of faulty angle of attack sensors. Worst case he could usually look out the window and fly entirely by the seat of his pants.
MCAS simply took its input from one of two sensors and, without any error checks (AOA disagree was a separate optional extra which did nothing but light a lamp on the instrument panel) or cross referencing with other sensors, did its thing, relying entirely on feedback from its inputs to decide when to stop. If the sensor providing that input was jammed or otherwise returned data that did not coincide with reality, it kept on doing its thing until the elevator trim ran up against the physical stops, and overrode every attempt the pilots made to correct the situation.
What I find truly frightening is that Boeing aircraft have a number of systems where the automation preferentially takes its input from just one sensor and requires specific pilot action to force use of the backup. MCAS is not the only system that has resulted in Boeing aircraft flying themselves into the ground. Faulty radio altimeters have caused a number of planes to lose engine power during landing. Fortunately pilots have managed to recover in most cases, but not always: a Turkish Airlines B738 stalled and crashed on approach in 2009.
I strongly suspect a thorough critical review of Boeing's avionic systems would find an uncomfortably large number of systems where a single failed component could potentially lead to a crash if a pilot failed to take the proper corrective actions. MCAS made those two crashes inevitable because the system overrode pilot discretion entirely.
That's because you fail to understand how aerospace/military spending works in the USA, which is basically: spend as much as possible, as often as possible, to as many as possible. SpaceX's low-cost, everything-under-one-roof business model is incompatible with this goal.
The really smart trick here would be for someone to come up with a milspec-strength private keyserver. No one and nothing joins a private network (home, corporate, school, whatever) unless and until it is explicitly registered to that network. And no device or application gets permissions greater, or a data pipe wider, than needed to carry out its designed task.
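The registration-gated, least-privilege policy described above boils down to a deny-by-default lookup. A minimal sketch, assuming a hypothetical policy store (device names, permission fields and limits are all invented for illustration):

```python
# device id -> permissions explicitly granted at registration time
REGISTRY = {
    "thermostat-01": {"ports": {8883}, "max_kbps": 16},
    "laptop-alice":  {"ports": {80, 443, 22}, "max_kbps": 100_000},
}

def admit(device_id, requested_port, requested_kbps):
    """Allow traffic only for registered devices, and only within
    the permissions granted when the device was registered."""
    entry = REGISTRY.get(device_id)
    if entry is None:
        return False  # unregistered: deny by default
    return (requested_port in entry["ports"]
            and requested_kbps <= entry["max_kbps"])

assert admit("thermostat-01", 8883, 8)       # within its grant
assert not admit("thermostat-01", 443, 8)    # port never granted
assert not admit("camera-evil", 443, 8)      # never registered at all
```

The key property is the default: an unknown device gets nothing, rather than everything, until someone with authority says otherwise.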
And it's us in the consumerist nations who (directly and indirectly) pay those in charge to keep it that way. Brazilian beef; Indonesian palm oil; Nigerian petrochemicals; West African cocoa; Singaporean fisheries; Bangladeshi textiles; and so on. It's cheaper in the long run to pay a handful of warlords a small premium to keep the poorest of the poor in a condition of abject poverty.
Biting the hand that feeds IT © 1998–2021