
What are the chances...
... that it is an AI-generated deep fake of Zuck? Out of curiosity: is there a link or an attachment (NDA? Contract?) in the mail?
From TFA, quoting the CEO of Civo: "we saw no decline in productivity (I wouldn't say we had any gain)".
What's "productivity"? Is it hourly or is it weekly? The Fine Article gives conflicting ideas at best.
Quoting the boss of MSFT Japan (as of 2019), "I want employees to think about and experience how they can achieve the same results with 20 percent less working time", gives the impression that he wants his employees to invent ways to become about 25 per cent more efficient on an hourly basis (the same output in 80 per cent of the time) so as to preserve weekly productivity. As the article notes, MSFT didn't expand the pilot and isn't saying why - maybe the trade-off didn't work out?
At the same time, "Everyone at Civo does their full week's hours during the four days." Hmm... This looks like no change in hourly or, indeed, weekly productivity. It's just that you cram your work week into, say, four 10-hour days rather than five 8-hour ones, and presumably you compensate for the degraded rest and family/social life for 4 days/week over the 3-day weekend. I don't know what was expected, but I can't say I am all that surprised. How happy this makes one's spouse/partner/children/elderly parents and how legal it is in one's jurisdiction may vary.
for a given definition of "works"...
And for a given definition of a FLOPS... The first PetaFLOPS supercomputer, codename Roadrunner, came into existence not all that many years ago - 2008, to be precise (long enough ago, though: I did get a T-shirt, but it no longer exists). Those were proper 64-bit FLOPS, mind.
I wonder whether even in the LLM context FP4 loses precision, or overflows, or both...
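For a sense of scale, here is a minimal Python sketch enumerating what a 4-bit float can actually hold, assuming the common E2M1 layout (1 sign, 2 exponent, 1 mantissa bit, exponent bias 1, no infinities/NaNs - the exact details may differ between vendors):

# Enumerate every non-negative magnitude an E2M1 4-bit float can represent.
values = set()
for e in range(4):                      # 2-bit exponent field
    for m in range(2):                  # 1-bit mantissa field
        if e == 0:
            values.add((m / 2) * 2**0)  # subnormals: 0 and 0.5
        else:
            values.add((1 + m / 2) * 2**(e - 1))
print(sorted(values))                   # [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]

Eight magnitudes, topping out at 6: everything else either overflows or gets rounded onto that grid, hence all the per-tensor scale factors.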
I suspect the main (or, maybe, another big) problem is that face recognition that is trained on, say, predominantly white English/European faces will be worse at distinguishing between subjects of other ethnicities whose face geometry is significantly different, leading to more false positives, etc.
IIRC, all the previous pilots/trials of face recognition hit the brick wall of the "base rate": even a small false-positive rate applied to the overwhelmingly nice, law-abiding population produces far more false alarms than the true positives from the small number of criminals/thugs among them. This was reported even in the Register on a few occasions. This is likely to be the main problem, ahead of every other big problem of relevance.
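To put some (entirely made-up, illustrative) numbers on the base-rate problem, a quick Python back-of-the-envelope:

# Hypothetical numbers for illustration only.
population = 1_000_000                 # faces scanned
criminals  = 100                       # wanted persons among them
tpr, fpr   = 0.90, 0.001               # true/false positive rates (0.1% FPR is generous)

true_hits  = criminals * tpr                         # 90
false_hits = (population - criminals) * fpr          # ~1000
precision  = true_hits / (true_hits + false_hits)
print(f"{false_hits:.0f} innocents flagged vs {true_hits:.0f} real hits; "
      f"only {precision:.0%} of alerts are genuine")

Even with a false positive rate most vendors would kill for, roughly 11 out of every 12 alerts point at perfectly innocent passers-by.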
I'd also throw in an honourable mention of the Andrew File System (AFS), Lustre, etc. I think (too lazy to check) that these - and some others - predate the (very relevant and correct) examples mentioned by @containerizer. In general, distributed filesystems have a long history in the UNIX world.
And I thought, naively, that the red stripe was Prada Linea Rossa reaching for the moon.
What's the mechanism that led to nearly simultaneous explosions of thousands of devices?
Physical introduction of explosive charges and triggers into the supply chain on such a scale seems unlikely. Reports in mainstream media focus on the possibility of malware triggering a thermal runaway reaction in the Li-ion batteries. I am not sure I am buying this: thermal runaway occurs when the battery is physically damaged, e.g., punctured - which would require non-trivial physical interference that is, again, unlikely, and is no guarantee either - or overcharged, which is unlikely to be triggered by malware alone, IMHO, since the device must be charging in the first place, and most obviously weren't.
So, any ideas? Can some kind of malware that, say, causes a CPU to overheat cause the battery to blow up with high probability?
Regardless, and as an aside: anyone considering buying a Chinese (or any other) EV with Li-ion batteries should let a second thought at least begin to contemplate crossing his mind (with apologies to Douglas Adams for reusing the turn of phrase that is obviously his).
Backups, of course... Fully agree. But a lot of (the same) execs who paid the ransom say they'd pay again if hit again, because they didn't anticipate how time-consuming restoration from backups would be. That's the executive approach to recovering from a breach: "Are we there yet?"
What they still don't get is that getting the data back is only a small part of recovery. Here are some other things their IT/security team must do: 1. clean up, 2. identify how the bad guys got in, 3. plug all the holes, 4. make sure no malware remains (including in backups), 5. do full backup (of the clean state) anew, 6. test restoration...
None of the above depends on whether you paid the ransom or restored from backups - it must be done anyway. Aside: decryptors may not work, restoration may fail, too.
Do all you can to prevent breaches (and regularly test restoring from backups!) - it'll be cheaper in the end.
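The "test restoring" bit need not be fancy. A hypothetical sketch (the paths are placeholders, and the restore step itself depends on your backup tooling) that simply checks a restored copy actually matches the original:

# Hypothetical sketch: compare a restored backup against the live data.
import hashlib
from pathlib import Path

def tree_digest(root: Path) -> dict:
    """Map each file's relative path to its SHA-256 digest."""
    return {str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(root.rglob("*")) if p.is_file()}

original = tree_digest(Path("/srv/data"))           # placeholder: live data
restored = tree_digest(Path("/tmp/restore-test"))   # placeholder: scratch restore area

missing   = original.keys() - restored.keys()
corrupted = [f for f in original.keys() & restored.keys() if original[f] != restored[f]]
print(f"missing: {len(missing)}, corrupted: {len(corrupted)}")

Run something like this on a schedule and you also get a rough idea of how long a real restoration would take - before the execs ask.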
Version 2.0 of John Wanamaker's famous quote: “Half the money I spend on advertising is wasted; the trouble is I don’t know which half” should read "More than a third of my advertising money is wasted; there is no trouble at all to know which third it is - the one that Google pockets."
Can't they afford a test lab?
I am guessing they probably had an additional flaw in their process/CI/CD/whatever, such as not verifying that what they are pushing to the world is the same thing their QA approved. At least check the hash or something?
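Something as unsophisticated as this would already be a useful gate in the pipeline (a hypothetical sketch - the file names and the approval record are invented for illustration):

# Hypothetical release gate: refuse to ship an artefact whose hash does not
# match the one QA signed off on. Paths and file names are placeholders.
import hashlib, json, sys
from pathlib import Path

artifact = Path("build/content-update.bin")                   # what is about to be pushed
approved = json.loads(Path("qa/approved.json").read_text())   # e.g. {"sha256": "..."}

digest = hashlib.sha256(artifact.read_bytes()).hexdigest()
if digest != approved["sha256"]:
    sys.exit(f"refusing to ship: built {digest}, QA approved {approved['sha256']}")
print("hash matches the QA-approved artefact, proceeding")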
This is not instead of small yellow birds but in addition to them.
@EvilAuditor: why would anyone deploy an update of any kind directly to business-critical production systems?
Customers deployed Falcon agents that include an in-kernel component/module/driver. The agents communicate with a management server that pushes malware signature updates ("templates" in Crowdstrike parlance), and the customer does not have any control over the process. Crowdstrike don't call it a software update, even though it is one (clue: there is no difference between programs and data). Once in a while such an update crashes every machine it is pushed to, as it turns out.
Your question, were it not rhetorical, should be directed at Crowdstrike in this case.
One of the causes of this clusterfuck is that people either forgot or never learnt what "von Neumann computer architecture" is and means. Besides clues in the more descriptive "stored program architecture" moniker, what it really means is that there is no difference between programs and data. All the CPU does is access memory, where both operands and instructions live (well, it also checks for interrupts). This means, among many other things, that if you keep a program at version N-2 but update the data the program uses then it is no longer the same program and your software is no longer stable!
In particular, to all the voices (not necessarily here) that scream that MSFT should not have signed the stupid driver: the "signed" bit does not mean much more than "we really got it from CRWD". There is no way MSFT can comprehensively validate every possible configuration of anything, let alone data updates.
Both vendors and customers need to be aware that if "only a data file is updated" then it is, in effect, an update. And if the "program" in question runs in the OS kernel then all sorts of things may go awry, and everybody involved - R&D, QA, customer IT/ops - must be extra careful.
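To make the "data is code" point concrete, a toy Python sketch (nothing to do with the actual Falcon internals - the format is invented): the "program" below never changes version, yet a bad data update takes it down all the same.

# Toy illustration only. The parser stays at "version N-2" forever, but its
# effective behaviour is defined by whatever template file it is fed.
def load_templates(blob: bytes) -> list:
    records = []
    for line in blob.splitlines():
        name, offset = line.split(b",")      # blows up on a malformed line
        records.append({"name": name, "offset": int(offset)})
    return records

good_update = b"sig_a,16\nsig_b,32"
bad_update  = b"sig_a,16\n\x00\x00\x00\x00"  # a malformed record, purely illustrative

load_templates(good_update)   # fine
load_templates(bad_update)    # ValueError here; in an in-kernel driver, a crash instead

Same binary, same signature, different data - different program.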
In the past there were two routes to learning that program and data are the same. It was an almost obvious basic principle in the LISP world, and there were OS courses that started from a description of the von Neumann architecture and basic CPU operation (as above), neatly introducing process context and memory management, and then, typically close to the end of the course, revisited the topic in a lecture about security. I used to do that, too.
Nowadays there is always Wikipedia. LLMs may or may not come up with something more useful.
Part of the blame is certainly on Crowdstrike: if their content update breaks Windows with such high probability (if the probability were low, only some parts of the world would have crashed), how come their QA didn't catch it?
The other part may or may not be on Crowdstrike: do they offer a protocol and recommend a change procedure that includes staging and testing? If not, it's on them. If yes, then it looks to me that hardly anyone in the whole world (OK, in the part thereof that uses both MSFT and CRWD, on the basis of the observed data) implements a reasonable change protocol.
Mind you, EDR/XDR products typically require admin-level access to the target machine; without it, it's kinda difficult to fight invaders off (the R = Response part, at least). And security updates tend to be quite time-sensitive, but that should be handled by the change protocol, at least at the crash/no-crash and boot/no-boot level.
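For the avoidance of doubt, by "reasonable change protocol" I mean something like staged rings with a soak period and a health check between them - a hypothetical Python sketch, all names and thresholds invented:

# Hypothetical staged rollout: widen the blast radius only while things look healthy.
import time

RINGS = [("canary", 0.001), ("early", 0.01), ("broad", 0.10), ("everyone", 1.0)]
SOAK_SECONDS = {"canary": 3600, "early": 3600, "broad": 1800, "everyone": 0}

def deploy(update_id, push, healthy):
    """push(update_id, fraction) ships to a slice of the fleet; healthy(ring) checks it."""
    for ring, fraction in RINGS:
        push(update_id, fraction)
        time.sleep(SOAK_SECONDS[ring])
        if not healthy(ring):
            print(f"{update_id}: halting rollout, {ring} ring unhealthy")
            return False
    return True

Time-sensitive security content can use shorter soak times, but "everything, everywhere, all at once" should never be the default.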
Can you (= the team) do the job?
If you can, you will be able to do it to clear, strict requirements. Along the way, you will be able to argue about those requirements and suggest improvements, additions, deletions, modifications, etc. You will know that those "strict" requirements will change along the way, and you will come up with a flexible design that helps accommodate the changes without rewriting everything from scratch.
By the same token, you will be able to do it without a complete set of clear, strict requirements. You will argue about what is needed (you will also find the right people to consult with, in addition to or, in some cases, instead of the Product Manager who came up with the original requirements, clear or not), and fill in the blanks along the way. Not everything will be certain - or correct - from the start, but your design will accommodate the new information.
If you know what you are doing you will also be able to estimate complexity, and you will know where you do not even know the complexity (those will be the parts that you cannot easily decompose into tasks that you have already done in the past). That will be an essential input for the effort estimations that the managers will want to know. By the way, if all complexity is known and the job has been divided into familiar tasks (and known dependencies between them), a waterfall/Gantt will do as methodology. This is also why old, well-established industries like, say, construction can often/usually do very well. It is unknown complexity that engineers overprovision for, and that is the source of those conflicts with Sales and Management. Knowing how to do the job includes being able to explain and stand your ground (and, on occasion, disclaim).
But what if you cannot do the job? For one thing, you very likely won't know the complexity. You'll make mistakes. You will redo stuff every now and then, and you won't have a flexible design to make changes easy. No amount of "methodology", "process", "daily status meetings", or "looking over the shoulder" will help. As a special case, none of those things compensates for hiring a cheaper but less capable workforce. And I do think that proponents of methodologies and processes have this assumption - that methodology can help regardless of the composition of the team - implicitly built into their theories and manifestos.
Oh, but what if you are young? How will you learn to do the job? From working alongside your more experienced colleagues who will generously suffer the overhead. Learning how to look at clear requirements and lack thereof, and how to design for various eventualities (and assess their likelihoods). In short, how to do the job. Looking back at my own career here...
Learning "methodology" won't help you much. Well, it might help you get that next job, but if it does chances are you won't learn much more in your new position. Therein lies your path to Management.
Turns out this is difficult to do with magnets. Even really big ones. The real Sun has mastered a trick called gravity over a few billion years - that's how it keeps its fusion reactor going. But for gravity you need something really "vastly hugely mind-bogglingly big"... I mean, you may think it's a long way round that toroidal chamber, but that's just peanuts...
The one with H2G2 in the pocket, thank you. --------->
@AC: Ever seen a webpage with the Facebook thumbs-up icon?
Add
127.0.0.1 facebook.com
127.0.0.1 fb.com
127.0.0.1 meta.com
127.0.0.1 twitter.com
127.0.0.1 x.com
127.0.0.1 instagram.com
127.0.0.1 snapchat.com
to your /etc/hosts (Windows keeps its equivalent at C:\Windows\System32\drivers\etc\hosts), add variations on the theme as you desire, and the thumbs-up/blue-bird/whatever icons on web pages will be sorted.
How do I know? I never see them.
I expected a sub-headline or at least some quip about "NVDA losing a TSLA" in little more than a day (at some point Nvidia's loss in valuation was ~$550B, which was close to the total market cap of Tesla). Should there be a new Reg standards unit to measure inflated expectations or something?
I assume you mean "positronic brains" - that's what they were called in the '50s.
The idea that a smartphone's mic on the coffee table in the living room will pick up the sound from the TV to determine what program/channel one is watching, and use the information to target advertising, is well known and has been observed in the wild. If the lady did not use headphones while listening to her audiobooks, there was no need for anyone to sell her library records - the tablet's microphone would be up to the task on its own.
What I find a lot more mysterious (and suspicious?) is someone who supposedly has an iPhone and an Android tablet. This combination is rarely observed in the wild, and my nasty suspicious mind tells me there is a chance - merely a chance, mind you! - that it was invented to avoid the obvious claim that (Apple|Google) know everything about you anyway.
Anyone who ponders this - whether one is about to take a CFA exam or not - should read the classic paper by Fischer Black [PDF] (of Black-Scholes fame). The paper is brief and very readable indeed, even if you have no background in finance.
those foolish enough to depend on these "generative AI" tools are going to get the karma they so richly deserve
Unfortunately, it's their users and possibly customers who will get the karma they don't deserve (save for the arguably just punishment for using lousy and insecure software, but quality control may be non-trivial).
An extra note: quality and security are highly correlated.
With all the mix of definitions of Growler and military/beer stories herein I am surprised that this alternative definition hasn't been mentioned yet. It invariably generates chuckles from British military personnel, I suppose. With or without beer.
I am with you. I have a fairly large (and still growing) collection of CDs and at home I play them on a fairly good audio setup.
But a modern car no longer has a CD player/changer, so I have no option (no, please do not suggest Spotify as a substitute) but to rip my CDs onto a USB drive to listen in the car. I do believe it is perfectly legal in my jurisdiction since I own the CDs in the first place. IANAL though.
Does anyone know how Stack Overflow plan to distinguish an AI engine (e.g., Google's) slurping the content into its training set from a search engine (e.g., Google) crawling the same content to index it? Is there a technical way to discriminate between the two? Or will it be an honours system?
Absolutely serious question.
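As far as I can tell, the only technical handle is that well-behaved crawlers identify themselves in the User-Agent header, and robots.txt lets you allow one and disallow the other - but nothing forces anyone to tell the truth. A hypothetical server-side sketch (agent tokens as published at the time of writing; treat the list as illustrative):

# Hypothetical sketch: classify a crawler by its self-declared User-Agent.
AI_TRAINING_AGENTS = ("GPTBot", "CCBot", "ClaudeBot")   # self-identified training crawlers
SEARCH_AGENTS      = ("Googlebot", "bingbot")           # search indexers

def classify(user_agent: str) -> str:
    if any(token in user_agent for token in AI_TRAINING_AGENTS):
        return "training crawler (if it is telling the truth)"
    if any(token in user_agent for token in SEARCH_AGENTS):
        return "search indexer (which may still feed a model upstream)"
    return "unknown"

print(classify("Mozilla/5.0 (compatible; GPTBot/1.0; +https://openai.com/gptbot)"))

Google, as I understand it, crawls with the same Googlebot for both purposes and only separates them via a robots.txt token (Google-Extended), so for practical purposes the answer is: an honours system.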
Where will all the training sets that nVidia GPUs will pound on to train AI models come from?
Especially if one wants to do interesting new stuff. Mundane boring old stuff will have been packaged in libraries, I assume. You might imagine an AI "copilot" that will offer useful documentation to use the libraries, but that doesn't need to be super-intelligent or require too many GPUs to train, nor will it provide much more value on top of actually reading such documentation. Someone, possibly your kids, will still need to do some thinking.
And for the new stuff - some coding. Enough to make up a decent training set for AI to help with documentation after the interesting new stuff becomes mundane old libraries.
Weekly status meetings??? She was just lazy. Standard practice is daily status meetings, and the procedure you described is pure textbook. Sorry to break it to you so bluntly: you were not special, just lucky on account of the "weekly" bit.
By the way, a spreadsheet is obsolete technology. Today, the previous day's email summary is edited on a shared Teams screen during the status meeting and is then sent to the team at the end, only to be edited again the next day.
How do I know all that without having an MBA? Can't you guess?
I think I already posted the thought here once or twice, but at the risk of being repetitive and with apologies all around: status meetings are always about the status of the manager, not the status of the project.
I don't think unlawful/unethical practices are necessarily implied. There is extra compliance-related cost even if you operate completely within the laws of the land and the laws of ethics, and that's the point.
If you act illegally and are caught, you'll be fined, which is an additional cost; the cost of a lack of ethics is not regulatory but is real nonetheless. A related point is that shareholder value is the number one (and ultimately the only) consideration in every business matter, as long as everything is legal and ethical. Note that the above does not mean you are wrong: the difference is that you seem to assume that businesses necessarily engage in illegal and/or unethical practices and I don't, that's all. You are quite right about illegal/unethical businesses, but the good guys - still the majority, hopefully - also pay extra to be compliant.
Companies will not (necessarily) change their processes w.r.t. data and storage, especially if they decide for one reason or another (good or bad) that the data are useful. They will notice that their operational costs are higher and will pass the costs to their customers. One may or may not consider it fair (my privacy, as protected by GDPR, comes at a cost and I am willing to pay it, etc., etc.).
Or maybe companies will cut some data - if they decide that the data are not worth it - and reduce the cost without price increases. The article seems to suggest this is happening, at least in some cases.
AI running amok and exterminating humanity is not my first concern. Humanity becoming blindly subservient to AI (a.k.a. "computer says so") is a lot more worrisome.
The first (?) case of a EUR380 traffic fine issued in the Netherlands - because the AI behind an intrusive camera thought the motorist was using a phone while driving, whereas he was merely scratching his head, and no human bothered to check - may be just a precursor of really serious trouble on a massive scale.
That seems to me a lot more immediate, likely, and serious problem to solve.