Re: MagicSix?
Any sufficiently advanced technology is indistinguishable from Magic.
142 publicly visible posts • joined 28 Feb 2011
The term 'noise' is a bit misleading -- the main issue was that the data returned by an instruction that took a fault was *undefined*. The hardware wasn't unreliable, the OS designers just assumed it provided functionality (restartable 'segment missing' faults) that Interdata didn't actually provide until 3 years later.
They did eventually fix the issue: by 1978, the next model, the 8/32, worked as desired. But that was three years later, and we wanted a way to share these machines much sooner.
When I was a grad student, 40 years back, I remember the folks who built the research machines told me "It's always the power supply. Even after you've ruled it out, it's *still* the power supply."
That said, the nightmare tech I remember is coax -- specifically, tapping receivers into a coax cable, before there was point-to-point Ethernet cabling (this was 10 Mbit Ethernet, so we're going back to around 1980). You'd add a new machine to the network by essentially drilling a hole in the cable and adding a tap that reached its copper core. Each time you added a new machine, you'd get different reflections from all the previous taps, and of course you also had to terminate the cable itself properly. The end result was that you'd add a new machine, and some arbitrary pair of other machines might start having trouble communicating.
Teams does work for simple meetings.
What drives me nuts is the maze of twisty passages that makes up your life in Teams. Someone shared a document with you during a chat? Good luck finding it ever again, even if you were smart enough to remember to pin the conversation. Even if you remember who it was who shared it, you have to find the specific conversation. And then you have to remember if they uploaded it to the 'Files' tab or if it was part of the conversation inline.
And sometimes the conversation is in the Teams tab, and sometimes in the Chat tab.
And random functionality requires you to click "Open in SharePoint," where you're in a similar but different 'app'. And of course, you have to guess that the functionality you're looking for is in SharePoint in the first place.
And again, search just plain doesn't find anything.
It's just a mess. And the way that MS works, shoveling functionality into applications with no thought of the user experience, it's only going to get worse.
There was a letter recently to Slate's "Care and Feeding" column in which the writer said she had just noticed that someone who bullied her in a racist manner as a very young child was now, 30 years later, an elementary school teacher.
She wants to contact the school to report behavior from when the teacher was, I'm estimating, a third grader.
That's insane. I was shocked to see that the column's author thought reporting behavior from when the teacher was about 8 years old was a good idea -- though she suggested going to the principal of the school instead of the school board.
And about half the comments to the column are of the "yes, report her," category.
It's disappointing that it took so many years for this guy to get over his anti-semitism, especially since he's spent so much time in the US apparently still under the sway of his upbringing in Egypt.
But still, it is disappointing that Google would dismiss someone for testifying how they got over the hate they were raised with. Many of us grew up in way less tolerant times, and it is generally a good thing to hear about someone who overcame the biases they were raised with.
And yes, I'm Jewish, so don't call me an anti-semite for dissing Google here.
His point seemed to be that emotionally he found homosexuality repugnant, but he recognized that logically his position was nonsense or worse. And as for his emotions, he wrote "Indeed, it is through the lasting nature of that friendship that my emotional core is changing."
IOW, in my reading, he knows his emotional reaction is bad, and is working to change it.
Couldn't Christies, and any other trusted name in the art world, just sign NFTs with a secret key, which could be verified by anyone with their public key? You'd save the energy burned by block chain updates, and still get certified authenticity.
And isn't the idea of NFTs for physical objects sorta silly? Aren't you signing a checksum of the object? And how do you generate a replicable checksum of a physical object?
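The signing idea can be sketched with textbook RSA -- tiny primes, purely to illustrate sign-with-private-key / verify-with-public-key. This is in no way secure (a real system would use Ed25519 or similar), and the artwork bytes are of course made up:

```python
import hashlib

# Toy textbook-RSA signature over a SHA-256 digest. The primes are tiny
# and this scheme is insecure -- it only illustrates the mechanics.
p, q = 61, 53
n = p * q                            # public modulus
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (kept secret)

def digest(data: bytes) -> int:
    # Reduce the hash mod n so the toy keys can handle it.
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def sign(data: bytes) -> int:
    # The trusted party (the holder of d) signs the artwork's digest.
    return pow(digest(data), d, n)

def verify(data: bytes, sig: int) -> bool:
    # Anyone with just the public key (n, e) can check authenticity.
    return pow(sig, e, n) == digest(data)

artwork = b"some JPEG bytes"
sig = sign(artwork)
print(verify(artwork, sig))      # True
print(verify(b"tampered", sig))  # False
```

No blockchain required: the signature travels with the file, and verification is a single modular exponentiation.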
This is all a bad joke.
AFAICT, someone paid $69M to get a digital signature of a JPG added to a blockchain, indicating that they own the JPG. And it isn't even clear whether the new owner owns the copyright to the image, or is just the registered owner of a file that can be still be copied and distributed to others. Or even trivially modified by its creator and sold again.
Wow.
Really, wake up. MSFT of 2020 is not MSFT of 1995. There's a huge amount of support for Open Source in Azure, and throughout MSFT in general. The MSFT IT department even supports MacBooks (I'm typing this on my MSFT provided MacBook, and no, it isn't running Windows) and iPhones.
Disclaimer -- I've worked there since 2018.
Well, you can call it a prank, but the idiots did commit assault with knives (using the legal definition of assault: "putting another person in reasonable apprehension of an imminent harmful or offensive contact"). The difference between a prank and a robbery was unobservable to the victims, which is why one of the pranksters is dead.
There's something weird:
"Lee said that although the false positive and negative rates of the algorithms were low at 0.024 and 0.037 respectively, the team needed to verify their results with many more patients."
With 23 people tested, why wouldn't all of your error rates be multiples of .043 (1/23)?
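A quick sanity check of that arithmetic (assuming a single test set of 23 people, so every empirical rate is k/23 for some integer k):

```python
# With 23 subjects, an empirical error rate can only be k/23.
# Neither reported figure lands on any such multiple.
n = 23
achievable = [k / n for k in range(n + 1)]
for reported in (0.024, 0.037):
    exact = any(abs(reported - r) < 1e-9 for r in achievable)
    print(f"{reported}: achievable with {n} subjects? {exact}")
```

Both print False -- the smallest nonzero achievable rate is 1/23 ≈ 0.0435, which suggests the rates were measured on something other than a simple per-patient count.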
I think you're right. The problems with this plane stem from the fact that the engines are too far forward, in front of the center of gravity. They did this because the fuel efficient engines Boeing wanted to use were too big to fit further back. Unfortunately, if the nose rises, wind hitting the front of the engine pushes the nose even higher, an example of positive feedback. Other planes don't have this problem, and so don't require an MCAS to be stable when the nose drifts upwards a bit.
Fundamentally, this is a flawed design, and the MCAS is a complex bandaid designed to hide this fact. The only solution is to put smaller engines on the plane, and live with the worse fuel economy.
As the pilots quoted here noted, once the MCAS fires and the plane dives, the pilots probably aren't strong enough to adjust the trim against the wind. It's pretty scary watching the FAA and Boeing make the same potentially fatal mistakes again, this time in public. Let's hope the BAA, CAA or European equivalent put a stop to this crap.
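The positive-feedback point can be shown with a toy iteration -- not an aerodynamic model, just the sign of the feedback term:

```python
# Toy illustration of feedback sign: a small pitch disturbance grows
# when the feedback gain is positive and decays when it's negative.
def evolve(gain, steps=10, pitch=1.0):
    for _ in range(steps):
        pitch += gain * pitch   # each step adds gain * current pitch
    return pitch

print(evolve(+0.2))   # positive feedback: disturbance grows
print(evolve(-0.2))   # negative feedback: disturbance dies out
```

With positive gain the disturbance compounds geometrically, which is why a nose-up tendency that feeds on itself needs active correction (MCAS) rather than settling back on its own.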
Great summary, not only of Windows, but of MS in its entirety. Their approach was, until Azure, to just throw a random collection of poorly implemented functionality into Windows, all bundled together for whatever they charged (around $100 for the weakest tea, IIRC), cutting into the market for anyone who wanted to do a better version of whatever Windows threw in for free. I believe Ballmer called it 'cutting off the air supply' of their competitors, an accurate turn of phrase that he probably came to regret.
Windows NT was the first system whose internals you could study without becoming ill, and that only if you didn't look at its VM or file system interfaces. I haven't dared look at it for years, but I have no reason to believe it's improved. SunOS did the file system / VM system much better than anyone else at the time (thanks, Steve).
MS definitely violated all sorts of anti-trust laws, but what really did them in was the Internet. They just didn't get it, and their browser's attempts at doing things proprietarily continued to hurt them, as their attempts to innovate were labeled, not incorrectly, as 'embrace, extend, and extinguish'. It took Amazon's success with AWS to define the market well enough for MSFT to actually start competing again, with Azure, in an environment where, for the first time, they weren't leveraging their position in Windows to get an unfair advantage for mediocre technology.
I remember reading that the 737 MAX is just generally less stable than its competitors because the engines are so far forward that they're ahead of the center of gravity of the plane. So, if the nose starts going up, the air catching the engine cowling pushes the nose even further up. That's why there's an MCAS in the first place -- to prevent that instability.
Myself, I think I'm going to do my best to avoid flying in these things. It sounds like their fundamental stability depends upon the correct operation of the MCAS software, and the pilot's correct interaction with that software, unlike nearly any other commercial plane flying today. No thanks.
It seems to assume that everyone from Asia shares a similar culture, whether they're from Japan, Korea, Viet Nam, Cambodia, India, Pakistan, Afghanistan, Turkey, Siberia, or China (and China itself obviously contains many cultures).
How this became the SJW term of art for people from all the above countries is a mystery to me, but it certainly seems disrespectful. Or at least stupid.
An honest question. Clearly, the flawed software is required to make the 737 MAX fly like earlier 737s, which it apparently doesn't do accurately enough to be safe for untrained (on the new plane) pilots to fly.
What I'm wondering about is whether the plane, with its forward placed engines, will always be significantly more unstable than other planes, even after training pilots on accurate simulators for the MAX. I'm wondering this, because it sounds like the plane has a serious tendency to pitch upwards in scenarios that wouldn't be a problem for other planes, and correcting for this behavior with software doesn't seem to be working particularly well. After all the misinformation Boeing has spouted so far, I certainly wouldn't fly on a MAX, no matter what software it's running.
If the plane is never going to be as stable as an older 737, I'm guessing that the only solution will be to put older engines back on the plane, with Boeing paying some compensation to companies that already purchased a MAX. If that's the case, Boeing had better have started designing a new, better plane about 5-10 years ago.
I'm reading Bad Blood (about Theranos) and it strikes me how far people went to convince themselves not to notice that Theranos was a scam.
And that's what this sounds like, too. They refuse to answer reasonable questions. That's usually because they know the real answer is worse than you can imagine.
The Nova Scotia government is just showing how technologically incompetent they are by prosecuting someone for enumerating public documents.
Hopefully the prosecutor will drop the charges if they can find someone in New Scotland who knows anything about web servers, and can explain it to them.
There's so little we know of our own civilization that extrapolating to extra solar civilizations is a joke. Two questions make that pretty clear:
1 -- If humans disappeared today, how long would it take for another technologically advanced species to arise? I don't mean intelligent -- it is quite possible that dolphins or whales are more intelligent than us. But neither one has managed to build a telescope or a radio. AFAICT, it took 4.5 billion years for a civilization capable of building a radio to arise on Earth. It has been about 120 years since the first radio transmissions.
2 -- How long will humanity, or our direct descendants, exist at a level at least capable of broadcasting radio?
We don't know the answers to these questions, and they're about us and Earth. It would be even harder to extrapolate these answers to other worlds.
The Sparc may or may not be vulnerable, but I don't think this explanation covers it.
The Intel issue comes from a speculative code path loading data into the L1 cache from mapped but protected memory. This has nothing to do with TLBs -- the real question is whether the kernel address space is accessible to speculatively executed instructions.
Intel's error came from allowing a speculative load to proceed even when the ring # was wrong to allow the access to complete.
Yup, even remembering exactly what the interface is to a string package you haven't used for a year takes a search for me. JavaScript, Objective-C, C++, C, and Python all have slightly different versions, and I'm just not going to remember the parameters to substr in something I haven't used in > 12 months.
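For instance, pulling out the same three characters looks different in each language (the commented lines are from memory -- treat them as approximations; only the Python line actually runs):

```python
s = "hello world"

sub = s[2:5]                  # Python slice: start, end (exclusive)
# s.substring(2, 5)           // JavaScript / Java: start, end
# s.substr(2, 3)              // C++ std::string: position, LENGTH
# [s substringWithRange:NSMakeRange(2, 3)]  // Objective-C: location, length
print(sub)                    # llo
```

Same operation, three different conventions for the second argument -- exactly the kind of detail that evaporates after a year away from a language.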
Perhaps I dreamed the whole thing -- Google's been no help here, but I recall a TOPS-10 program running at Carnegie Mellon, called "tingle" that, when run, would generate a nerdy pornographic one liner, of the sort "Shove a DECtape up my ass, I need to obstruct justice, cried the President!"
That would have been late 1970s through the early '80s. Anyone else remember that? Bueller?
Several years ago, I took a track from a Decemberists CD, encoded it at 64 Kbits, 128 Kbits, 256 Kbits, and 320 Kbits, both MP3 and AAC, along with the original song's bits, and then made a new CD with the resulting songs. Played it on my decent component stereo (probably cost about $2000 in today's USD). 64 Kbits definitely sounded flat, and I thought that 128 was still perceptibly worse than lossless. Both 256 Kbits and 320 Kbits were indistinguishable to me from the original CD. So, I've been happy with 320 Kbits rips since then; 256 would probably be fine, but I figured a little extra wouldn't hurt.
Actually, I did a few other things as well -- some classical piece I've forgotten, and something from Philip Glass. All basically behaved the same as the Decemberists.
With regard to MP3 or AAC, I really couldn't tell any difference once I got to 256 Kbits or above.
You pay about 5-10 cents/GB for transfers between regions. You even pay 1 or 2 cents per GB for transfers out of one availability zone into another in the same region. So redundancy isn't free.
That's probably another reason it doesn't get done as often. But I suspect the biggest issue is the complexity, and dealing with additional latency when going between regions.
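Back-of-the-envelope, using the low-end prices quoted above (the 10 TB/month replication volume is just a hypothetical workload, not a published figure):

```python
# Rough monthly egress cost of cross-region vs cross-AZ replication,
# using the post's ballpark per-GB prices (low end of each range).
cross_region_per_gb = 0.05   # "5-10 cents/GB" between regions
cross_az_per_gb = 0.01       # "1 or 2 cents/GB" between AZs

monthly_gb = 10_000          # hypothetical 10 TB/month of replication
print("cross-region: $", monthly_gb * cross_region_per_gb)
print("cross-AZ:     $", monthly_gb * cross_az_per_gb)
```

Even at the low end, that's hundreds of dollars a month just to move the bytes, before any storage or compute in the second region.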
One thing you should know, if you'd never used it, was that the ITS "shell" was a binary debugger.
You'd login by typing "mlk$u" where "mlk" was your user name, and "$" was the Escape character. No password was required (or even allowed; thanks RMS). You could then type something like "4/" and see what's in location 4 (which was a register on the PDP-10). There were ways of running commands like macsyma, too (":macsyma", i.e. put a ":" in front of the command line).
Even stranger, you could type $$^R, and it would unprotect the OS, so that you could patch the running kernel from your shell.
I am not making any of this up.
The thing is, I like OS X. But I have to admit that I'm also disappointed with the choices Apple made. I've been using a late 2011 MacBook Pro, which I've upgraded to 8GB of memory. I've been holding off putting a new SSD in it, since I was hoping I'd be able to get a really light MacBook Pro 15". And I guess I can -- the new 15" model is 1.6 lbs lighter than what I'm using. But it's expensive, and I still have to see how badly they've messed up the keyboard. And I need dongles for everything.
But I'm half tempted to go down to a 13" display, and if I do, I'll go to the one that doesn't have a touch bar, and save $300.
But I think I'm also going to have to at least look at what the Linux ecosystem looks like these days.