The simple solution to ensuring that X complies with EU law
would be to just block X domains EU-wide until they come into compliance. Hit them where it hurts, and dispense with chump-change fines that they'll never pay anyway.
158 publicly visible posts • joined 11 Aug 2009
I don't think the OP actually read or understood the article.
This isn't even remotely about the merits or demerits of noncompete clauses (which I think are often abusive in nature); based on the information presented, it's about an employee STEALING the employer's IP and trade secrets and poaching its employees for a competitor WHILE STILL AN EMPLOYEE of that company.
Even with no post-employment noncompete in the picture, this behavior would be theft.
A startup I worked for had a new engineering manager who insisted the embedded engineers get rid of "that junk in your cubes." 'Scuse me, but that junkbox saves untold time and cost when I need to build or fix something, and the cost of not having it on hand when needed is extreme. It's a wellspring of creativity and ideas.
We dutifully hid our junk boxes for a few weeks until we got a NEW new engineering manager who had a clue.
The party in power with majorities in both houses of Congress has, <checks notes> failed to pass a budget despite ALREADY having gotten a several-months extension (that first continuing resolution) to give them extra time to do it, and all the while has been systematically destroying security resources. But somehow it's the party NOT in power that is to blame?
Pull the other one, sirrah, for it hath got ye bells on it.
Many years ago, when I wrote a thoroughly annoying remote access program for a portable flowmeter, I had the problem that Sales insisted on testing the unreleased beta versions, but ALSO refused to abide by the "don't give out the beta test version to customers" requirement. Even adding a big splash screen with "INTERNAL TEST ONLY, NOT FOR CUSTOMER RELEASE" didn't help.
Plagued by complaints that the beta didn't do X or Y (which, of course, weren't implemented yet!), I added a profane version of the "NOT FOR CUSTOMER RELEASE" warning to the splash screen.
Naturally, a sales rep immediately gave it to a customer, and complained bitterly on his return to the office. I calmly explained to management that Sales had been repeatedly warned not to distribute unreleased versions, the salesman thus knew he was not supposed to do so, and therefore the consequences were entirely his own fault.
I had to remove the profanity from the message, but the point was made and Sales behaved themselves for a while thereafter.
AI deployments for job replacement in the real world already fail to pan out in the majority of cases, so AI becoming more expensive will further reduce its adoption, further degrading AI company profitability.
If that turns out to be the case, it's vicious-circle time for AI vendors. I think I'm fine with that.
That put me in mind of this bit from a song written by Terri Sharp and performed by Don McLean:
"When the gates are all down, and the signals are flashin'
The whistle is screamin' in vain
And you stay on the tracks, ignorin' the facts
Well, you can't blame the wreck on the train"
I recall reading about a university UNIX mainframe system that the too-clever students made a habit of crashing for yucks. The admins spent lots of effort hardening the system, eliminating each vulnerability as it was demonstrated, which just made it a spicier challenge for the nascent PFYs. The crashes continued. So the admins just added a crash-system command, instantly destroying the challenge and virtually eliminating the practice.
I used to work for a small company whose owner never made mistakes, even those he DID make. I got the hell out, and everyone in my life is better for it. Yes, it can be hard to find a job not managed by bozos. But it's certainly worth a largeish investment of time and effort to do so.
Though my money is on a patch that introduces gradual performance degradation and random BSODs at an increasing rate as time goes by, coupled with anti-debug measures to keep researchers from discovering that it was intentional.
Paranoid? Hey, we're talking about Microsoft here.
"...that approach is necessary because technological development is now so rapid it’s no longer sensible to expect that studying narrow skills can sustain a career for 30 years."
50 years ago, on the advice of my physics professor, I decided that studying narrow skills wasn't a sensible career choice, got a degree in Engineering Physics instead of straight EE, and became more of a generalist. I did wind up specializing along the way, but that was more adaptation in the presence of opportunity than specialization beforehand. It was a good idea then, it's a good idea now.
And yes, failing to foster the development of new talent is a dead-end proposition, whether you supplant them with AI or outsource the jobs to sweatshop developers in Asia. It always has been.
The Trumpians have demonstrated quite definitively that they care nothing for reality or the public good. At most, this effort may spur the clowns at the cabinet level to forbid the release of documentation on the data formats (in which case I hope someone leaks the docs), or even cause them to leave the data encrypted over the US, but that's all down to having petty, vindictive, incompetent assholes in charge. It's still worth the effort to mitigate the intentional damage they're doing.
"Not everything requires the government to do it."
I feel obliged to point out that only a government is going to pony up $250 million plus launch costs for a weather satellite, particularly when providing data at no direct cost to the user. (A corporation would definitely charge sky-high fees for data from such a satellite, if they could be moved to launch one in the first place - given that the payback period would be very long.) Some critical things do indeed require the government to do them.
Well, that second word is the key here, isn't it? LLMs don't *have* awareness, they're just spicy autocomplete, so "contextual awareness" is fundamentally out of the question. Any researcher recommending that is falling into the trap of imputing capabilities that an LLM cannot, does not, and never will have.
ISTR that several of the recent failed landings involved trouble with laser rangefinders. I wonder if they're underestimating the amount of dust kicked up in low gravity by the descent rocket's exhaust plume, or its optical properties? That could certainly interfere with a time-of-flight range measurement.
Seems too obvious to overlook, but who knows.
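For the curious, the arithmetic behind a time-of-flight rangefinder is simple, which is exactly why a dust plume is such a plausible failure mode: any early scatter the receiver mistakes for the ground return shortens the measured round trip. A minimal sketch (the round-trip times here are purely illustrative, not from any actual lander):

```python
C = 299_792_458.0  # speed of light, m/s

def tof_range(round_trip_s: float) -> float:
    """Range from a time-of-flight echo: the pulse travels out and back,
    so range is half the round-trip distance."""
    return C * round_trip_s / 2.0

# A clean echo from roughly 100 m altitude takes about 667 ns round trip.
clean = tof_range(667e-9)    # ≈ 100 m
# An early return scattered off a dust plume a few metres below the
# lander looks like a much shorter round trip, and thus a false altitude.
dusty = tof_range(33.4e-9)   # ≈ 5 m
```

If the rangefinder locks onto that early scatter, the guidance system believes it is metres from touchdown while still well above the surface, with predictable results.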
Decades ago, I worked on a truly awful project building an app to talk to a portable instrument.
The entire software spec was, "We want it to talk to the flowmeter." Predictably, it just got worse from there.
There was a series of internal-only test versions; predictably, sales, who were expressly forbidden to give them to customers, did exactly that, and demanded that I support these incomplete test apps.
I responded by incorporating a special splash screen in the next test version that read, more or less, "INTERNAL-ONLY <EXPLETIVE> TEST VERSION, NOT FOR CUSTOMER USE."
To the utterly predictable howls of outrage from sales, conveyed to my manager, that this was inappropriate language to expose to customers, I responded by pointing out that internal-only test versions were never supposed to be given out in the first place.
Alas, as Cory Doctorow is in the habit of explaining, the corporate enshittification process inexorably pushes the company into ever more rapacious, exploitative behavior to feed shareholders' insatiable demand for more profits every quarter. And the regulatory guard rails that would stop that slide have incrementally been eliminated. The DMCA made it all much worse, as it empowers all sorts of anticompetitive crap; it's now a felony to bypass the toner-cartridge lock-in features, as that's a "protected work."
I, too, am happy to repay a manufacturer with a decent profit for a decent product, but not to be endlessly squeezed for more by increasingly abusive practices.
The point where the aircraft designers BUILT IN A WHOLE SYSTEM to detect that the gears were eating themselves up was the logical place to stop the whole program and say, "Bad idea, let's go back to something more reliable." The fact that this didn't happen says a lot about how organizations can lock themselves into pursuing a bad idea rather than just scrapping it.
Full agreement with the poster pointing out that a flock of A-10s is more cost-effective. From what I've read, the A-10 isn't glamorous enough for the Air Force, but the folks who really hate it are the ones in its gunsights.
I'm sure there is exactly that serious work underway at MS to enshittify W10.
It wouldn't surprise me in the least if one of MS' last updates to W10 contains a logic bomb that randomly crashes the machine after official support ends if the extended support hasn't been purchased. I don't think even Redmond would be QUITE dumb enough to just brick W10 machines, but they're obviously not far from that level of dumb.
It never ceases to amaze me that folks will install the proverbial Big Red Mushroom Button on a wall or a rack and NOT put a transparent safety cover over it.
Seriously, folks? If you really, really NEED to mash that button, you'll have that cover open before anyone can say "NOOOO!", and if you DON'T need the button being mashed, the cover will prevent lots of needless drama and expense. There is *no* downside to the safety cover, especially if you cover it with painter's tape when the wall is being repainted so it remains transparent afterwards. Apologies to those painting contractors who really aren't that dumb; I know you're out there somewhere, busy as hell.
These folks need to be punished.
And we need to vastly expand the ability of agencies to impose meaningful penalties on corporate violators, especially large ones. F'rinstance, statutes could set the maximum fine as a percentage of the offender's assets, with an absolute floor on that maximum so gamesmanship over reported assets can't shrink the exposure.
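As a sketch of how such a statute might compute the cap (the percentage and floor figures here are made up purely for illustration, not drawn from any actual law):

```python
def statutory_max_fine(assets: float,
                       pct: float = 0.10,
                       floor: float = 50_000_000.0) -> float:
    """Maximum fine: a fixed percentage of the offender's assets,
    but never less than an absolute floor, so that creative
    accounting over reported assets can't shrink the exposure
    to chump change."""
    return max(assets * pct, floor)

# A megacorp with $100B in assets faces a cap in the billions;
# a thinly-capitalized shell still faces the floor.
big = statutory_max_fine(100e9)   # percentage term dominates
small = statutory_max_fine(1e6)   # floor dominates
```

The point of the floor is that the percentage term alone rewards hiding or offloading assets; the floor keeps the deterrent meaningful even then.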
One: Generative AI is irredeemably crap. There's no way LLM tools can possibly replicate human judgment, filter for truth in any reliable way, or stop parroting obvious BS because it's on the Internet. To an LLM, Donald Trump's statements are equally valid input to Kamala Harris', and Fox News stories are as valid a source as NPR. Expecting sense from these tools is a fool's errand.
Two: There are lots of people intent on making money off these things, and determined to convince us all that they can do what they clearly cannot. There are also people who want to (mis)use these tools to get rid of those pesky, expensive human employees and make their quarterly financials look better. AI chatbots instead of human tech support, f'rinstance.
Problem one is a technical question; problem two is a social and ethical one.
About 25 years ago, my US employer provided weather satellite instruments to the European Space Agency, and I did on-site ground support, which included setting up the instrument test console, a Sun Ultra 1 if recollection serves me. The test control room was wired for 220V 50 Hz, and after procuring the relevant power cords with Schukosteckers, I connected the Sun monitor to the mains - having first verified that it was automatically dual-voltage agile, and didn't require a switch.
Alas, I then neglected to verify that the CPU was ALSO auto-adaptive. (The nameplate said 105-230V AC, 50/60 Hz - we checked that before shipping it - so we were good, right?) Plugged it in, flipped the switch, and POP! Oops. Dead, smelly PSU. The CPU DID have a selector switch, which was, of course, still set to 110V.
Fortunately, I had good relations with the local techs, and even more fortunately, the PSU was a standard PC type. We made a quick trip to the local Mega store, where I bought a replacement and swapped it in. Mission rescued. (I think we first temporarily pillaged an idle Sun workstation for its PSU so I could get the console up and running, verifying that the rest of the station hadn't died as well.)
Never made THAT mistake again.
Ironically, some of the other instrument contractors were using 110V-only computing gear, and had a separate AC supply through a stepdown transformer. Which I didn't use, since the Sun station was dual-voltage capable.
I find it curious that the discussion of renewable energy generation doesn't seem to include one blazingly obvious aspect: the world, and in particular the rich, highly industrialized countries, are going to HAVE to become much more economical in the use of energy in general, and electricity in particular. We simply cannot endlessly consume ever-increasing amounts of power and expect to survive. Nor can we demand that developing nations economize where we do not; that's a recipe for failure, massive unrest, and mass migration.
The US in particular (and probably China, but I'm less sure of that) could do much better in energy efficiency. When I've been in Europe, folks there seem to be much more conscientious about energy use in general, as illustrated by simple things like not lighting unused spaces. I've seen more motion-sensor lights there than I ever saw in the US.
Human civilization will not survive climate change without getting serious about energy use, and this is going to mean making do with less, especially for rich and powerful nations. That is an uncomfortable truth as well as a very hard thing to sell politically, but it IS a truth. Fortunately, there are signs that people are becoming aware of that fact and are willing to make necessary sacrifices, IF they see that process as equitable and fair.
Tesla will have a hard time competing with BYD in the Chinese domestic market. BYD has better build quality, newer designs, and - critically - they are not a foreign company. That last factor will deliver both political and customer-preference advantages to BYD. The Chinese government is notorious for tipping the scales in favor of domestic producers over Western multinationals, and I doubt this will be an exception. (And one cannot really blame them for that.)
Tesla may be "successful" in China in a break-even sort of way, but it sure won't be a growth engine or a money-spinner.
We see a tracked vehicle tooling around in open country, over uneven ground that the vehicle chassis can obviously just bull through without navigation assistance. We do not see it maneuvering among anything I would call an obstacle, even to a vehicle that couldn't just drive right over medium-sized shrubbery.
There may well be high-performing autonomous systems here, but the video shown doesn't substantiate that proposition.
"Look! A monkey!"
Nothing, not even free rides, could persuade me to trust myself to one of this man's robocabs. That's even assuming they ever happen, which I think is highly unlikely.
NPR reports that he's planning to achieve low manufacturing cost by using a "revolutionary" manufacturing process where almost the whole body is cast in one shot. Surely I cannot be the only person thinking "This isn't going to be as quick or as cheap as he seems to think. Process revolutions never do."
I do the tech for the home business.
We got a Canon inkjet with the big tanks on my advice because it was well reviewed and the ink cost per page was low, and have had nothing but trouble. There's a leaky seal somewhere, and multiple ink hoses get air-bound every few weeks. Colors stop working. Deep clean cycle wastes a lot of ink. Partner gets real annoyed. Cannot seem to fix it, and no replacement parts for the affected items.
While this is probably a one-off quality issue no one else has been bitten by, and any identical replacement would likely perform perfectly, we're sufficiently cheesed off with it that ink-tank printers are no longer an option, and the next time this one borks it's going on the curb. One bad experience is enough.
Obvious, but also obviously NOT ENOUGH. You can reduce the incidence of your staff being phished, but you *cannot* totally eliminate it. Therefore, you cannot responsibly base your cybersecurity strategy on that assumption.
For those few of us readers who may not already understand that point, defense in depth and robust intrusion detection and response are essential.
Also essential, if perhaps a tad less obvious, is NOT shaming the phished victims into hiding the fact. That just helps the bad guys. Establish an infosec culture of *immediately* contacting Security, and of Security responding immediately in a supportive manner. No blaming, no shaming. THAT way you get the fastest possible notification that you've been attacked, and you stand the best chance of minimizing the damage.
Sure, you may get some false positives this way, but that is far outweighed by the benefit of quick and effective detection.
OK, so you MAY need to shame some C suite idiots into being more careful and more forthcoming, but that's a tool to be used sparingly and with great care.
The trouble with reference designs for crazily complex CPUs is that it's rather difficult for anyone but the chip maker to effectively optimize the bypassing scheme, as only the maker really knows where the critical points and needs are.
I'd be surprised if the motherboard makers' engineers get much leeway for comparative design testing when churning out the board for the Latest Sexy Chip. There must be incredible pressure to just use the reference design.