* Posts by Henry Wertz 1

3136 publicly visible posts • joined 12 Jun 2009

Rapid7 throws JetBrains under the bus for 'uncoordinated vulnerability disclosure'

Henry Wertz 1 Gold badge

Aggressive

I'm for full disclosure, but I do find Rapid7's policy aggressive. The standard policy Rapid7 follows is to file a CVE after 15 days if they don't hear back from the vendor; CVEs filed with CERT/CC are private for 45 days (unless the filer chooses to make it public earlier.) So basically you'd have 60 days to get a patch out to customers before the CVE is public anyway; feel free to publish exploit^H^H^H^H^H^H proof of concept code and all that. That part seems fine! But...
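To put numbers on that timeline, here's a quick sketch (the start date is made up; the 15-day vendor wait and 45-day CERT/CC embargo are the figures above):

```python
from datetime import date, timedelta

def disclosure_dates(vendor_contact: date,
                     vendor_wait_days: int = 15,
                     cert_embargo_days: int = 45) -> dict:
    """Sketch of the timeline described above: wait 15 days for a
    vendor response, then file with CERT/CC, whose CVEs stay
    private for another 45 days before going public."""
    cve_filed = vendor_contact + timedelta(days=vendor_wait_days)
    public = cve_filed + timedelta(days=cert_embargo_days)
    return {"cve_filed": cve_filed, "public": public,
            "total_days": (public - vendor_contact).days}

# First vendor contact on an arbitrary date:
d = disclosure_dates(date(2024, 3, 1))
# 15 + 45 = 60 days from first contact to public disclosure
```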

Rapid7's argument is valid: when a company is putting out security patches, it's pretty easy to take a peek at them, and that points you straight to the exploit. There was a big problem in the past with companies just blending security fixes into product updates; people were frequently running vulnerable software because they figured "I don't need these new features," and there was no disclosure of the security content of the updates. Truly silent patches.

And I even think this "24 hours disclosure after a hidden patch" is fine for companies that truly do silent patches -- they are typically trying to hide security fixes in with general updates for their software, "sweep it all under the rug." Since people aren't told there's a security update, they had no urgency to update. If the software has an automatic updater most people's copies may be updated within that 24 hours anyway, otherwise many people may never update it. So in the case of a true hidden patch, disclosure after 24 hours versus a month or 45 days would likely make little difference.

But it seems like a pretty perverse interpretation of their own rule on Rapid7's part to consider this a silent patch... After all, JetBrains filed a CVE (which would automatically disclose in 45 days), created a patch that was specifically described as a security patch, and then emailed their customers to tell them, in effect, "This newer JetBrains release fixes important security holes, please install it, but here's a patch for your current version." I don't know how that is a silent patch, and given this needs some manual intervention to install (it's not going to auto-update itself) it seems perfectly reasonable to at least give people a few days (like a week, maybe, if not that full 45 days) rather than 24 hours to get those patches installed before full disclosure time.

FCC: April is last month for Affordable Connectivity Program payments

Henry Wertz 1 Gold badge

Not a problem here...

Not a problem here... FYI, ACP gives low-income families a $20-30 a month discount toward Internet service. The intent was to let low-income families at least get some modest-speed internet at home for something in the $0-20 a month range.

Here? Both the DSL and cable companies somehow got it arranged so ACP cannot be used on plans under $50/month (the cable co has a really bad, like $30 a month, plan with something like a 150GB a month cap on it), and plans must be at full price (and at full price the next plans above the $30 one are like $80 or $90 a month.) So... a) People who use ACP can't really afford a $50-60 a month plan. b) For those who buy this expensive internet anyway, signing up as a new customer and getting the "$X a month for Y years" discount usually gives a lower price than paying full price and taking off the ACP credit. So there are likely VERY few if any people here on DSL or cable using ACP.
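A back-of-the-envelope on those figures (all dollar amounts are the illustrative ones above, not anyone's actual bill):

```python
def effective_price(full_price, acp_credit=30, promo_price=None):
    """Compare the two routes described above: full price minus the
    ACP credit, versus a new-customer promo (which here forfeits ACP,
    since the local ISPs required ACP plans to be full price).
    All figures are illustrative."""
    with_acp = full_price - acp_credit
    if promo_price is None:
        return with_acp
    return min(with_acp, promo_price)
```

So an $80 full-price plan minus the $30 ACP credit is $50/month, and if a hypothetical new-customer promo is $45, the promo wins anyway and the ACP credit goes unused.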

But! Both Verizon Wireless and T-Mobile have been cleaning up, selling $20-35 a month 5G Home Internet (which is just a name; these devices will use 4G service if they're outside the 5G service area) which with the ACP credit costs $0-5 a month. These have only started here within the last year or so. I suppose there are probably people on these plans that'll lose out. My parents are not on ACP, but they pay $20 a month for unlimited data and "typical speeds of 100-300Mbps" (they are right on the edge of the 5G service area, so when the box gets a hint of 5G it does like 100-150Mbps, but usually it's on 4G getting more like 20-120Mbps... hard to complain for $20.)

*I assume ISPs in Britain do this too -- the cell phone companies here don't usually do this, but the DSL, cable, and dish providers (Dish Network and DirecTV) all tend to have a stated price for the service, while frequently people are signed up on a "pay $XX a month for Y years" sort of special. Both the cell cos and the ISPs/TV providers tend to let customers stay on grandfathered plans, though (i.e. the plan is not available to new customers, but they don't force anyone off it.)

Toyota admits its engines are overrated – by its own power testing software

Henry Wertz 1 Gold badge

WRC

Toyota also got banned from WRC for a year, for cheating. When the WRC teams (in the early 1980s) began moving from carburetors (where there had to be some compromises in airflow to maintain any kind of driveability) to turbocharged fuel-injected engines, they went from like 150HP to like 700 in a few years, and there were lots of fatalities. So from sometime in the early or mid 1980s to the present they have run a restrictor plate (and I assume a limit on boost as well) to hold them to something like 300HP. They run on often-not-that-smooth tarmac, gravel, and ice and snow, on roads that are curvy as all hell (one of the rules is that if they get over a certain peak speed, the route is made curvier the next year, even if it's by sticking some bollards in a straightaway they must drive around), so it's more about driver reflexes, handling, and brakes than about all-out horsepower. (If they were just holding the pedal down and steering left they could run those 700HP+ setups, but then it'd be NASCAR. Side note: people here in the US who have any interest in motor sports generally have a hard time even wrapping their heads around my being into car racing but not interested in NASCAR. It's very hard to get WRC races here; they are not on any channel even if I got cable, dish, or streaming.)

In 1995 Toyota was doing very well, and the cheat they used was ingenious: there was a vacuum-operated flap inside the air intake. These cars are inspected! You'd look at the intake on the car, it looked restricted. You'd have the part off on the bench looking at it, it looked restricted. But when enough airflow went through or enough engine vacuum built up, this huge flap would flip up in there and voila, no more restriction; the car was making like 400-450HP. Apparently it was only discovered because someone on the team got remorseful (or perhaps felt slighted and wanted revenge) and suggested one of the inspectors take a closer look -- run some vacuum on it, or watch it while someone blips the throttle. The team and its drivers had all points earned through the 1995 season revoked, and Toyota was banned outright from WRC for the 1996 season.

A path out of bloat: A Linux built for VMs

Henry Wertz 1 Gold badge

The other solution for Linux virtualization

The *other* solution for Linux virtualization: lxc (Linux Containers) and the like. Not as commonly used, for sure, but they are very efficient and effective! No virtualization overhead -- everything runs on bare metal -- but you can still have per-container RAM, CPU, disk usage, and network quotas if you want them.

This is just a chroot jail "on steroids" (and BSD has a Jails facility that provides this type of functionality too, from what I've read.) It's a chroot jail, but with its own view of processes; it appears to have root and user accounts within it; you can even give it its own virtual ethernet so it gets its own IP addresses. Ubuntu has had "cloud-init" images going back to very old versions; these still have startup and shutdown scripts so you can "boot" and "shut down" the thing, but they basically run nothing on startup (maybe a script to set up /dev, /proc, /sys and the IP address if it's not sharing the host's) until you install or put something in there to run on "bootup". Once you install a cloud-init image with lxc, you can run a command to get a root or user shell in there and set things up as needed. There are also tools to limit RAM usage, CPU time, disk quota, etc. (which I didn't use).
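For anyone curious what that workflow looks like, here's roughly the LXD flavor of it, sketched as the commands you'd issue (container name, image, and limits are made-up examples; the classic lxc-* tools spell these differently):

```python
def lxc_setup_commands(name="myctr", image="ubuntu:22.04",
                       mem="2GiB", cpus="2"):
    """Build the LXD CLI calls matching the workflow above: launch
    a container from an image, cap its RAM and CPU, then get a
    shell inside it. Name and limits are illustrative."""
    return [
        ["lxc", "launch", image, name],                        # create + start
        ["lxc", "config", "set", name, "limits.memory", mem],  # RAM cap
        ["lxc", "config", "set", name, "limits.cpu", cpus],    # CPU cap
        ["lxc", "exec", name, "--", "bash"],                   # root shell
    ]
```

You'd feed each of those argv lists to the shell (or `subprocess.run`); the point is just that "per-container quotas" is a couple of `config set` calls, not a hypervisor.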

I have an antiquated Ubuntu 11.04-era MythTV setup that I got running using a chroot-jail-type setup (after the ancient Dell croaked -- it had just turned 21 years old -- I used a full backup to resurrect it.) After I had too much trouble trying to pull MythTV, MySQL, and Apache over to run directly under an Ubuntu 22.04 system, I tried a chroot jail and there was very little issue at all! (I pulled out the bits that start those three services, put them into a script, and it worked straight off!)

Companies flush money down the drain with overfed Kubernetes cloud clusters

Henry Wertz 1 Gold badge

Yeah no kidding

Yeah no kidding. They sell configurations with x GB of RAM and y CPU power. And typically, if you need more CPU power, you'll automatically get more RAM; if you need lots of RAM, more CPU power comes with it. I have an in-cloud site I set up for someone that is just like that; it's not intensive, so it's on the smallest available system -- the 2GB of RAM is fairly full but the CPU usage is probably 1%.

I think what probably happens in many cases is that they set up Kubernetes, and Kubernetes itself may be fully capable of spinning up new instances on demand and removing them on demand. But if whatever they are running in Kubernetes isn't, they just spin it up with some excess capacity so they don't have to keep babysitting it. And they probably go for redundancy/failsafe by having enough spare capacity that if one or two instances go down they're still OK.

I imagine some leave auto-scale-up off as well because they want a predictable bill; I mean, they could run fewer instances, let it scale up to perhaps even less than what they are now running, and save money. But there's always that concern that something goes haywire and fires up like 100 instances, making for a big bill.
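To see why capping the autoscaler gets you the same predictability, here's a toy cost model (the $/replica-hour figure and replica counts are invented):

```python
def monthly_cost(replicas, dollars_per_replica_hour, hours=730):
    """Rough monthly bill for a fixed fleet of replicas
    (730 is roughly the hours in a month)."""
    return replicas * dollars_per_replica_hour * hours

def worst_case_autoscale(max_replicas, dollars_per_replica_hour, hours=730):
    """The runaway-bill fear is bounded by the autoscaler's replica
    cap: the worst case is simply running at the cap all month."""
    return monthly_cost(max_replicas, dollars_per_replica_hour, hours)
```

So at a made-up $0.10/replica-hour, 10 always-on replicas cost about $730 a month, while an autoscaler capped at 12 can never exceed roughly $876 -- and spends far less than the fixed fleet whenever load is low.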

Finally, there's the matter of spinup time. If you ran these systems very busy and you get load spikes, it may not be particularly helpful if it takes like 5 minutes for a new instance to spin up and be ready to help with the load. Or, you may go to spin up a new one and find out there aren't any available!

Intuitive Machines' lunar lander tripped and fell

Henry Wertz 1 Gold badge

Not a failure

Calling this a failure is too harsh. The instruments on it are operating (they don't rely on the lander being upright) and it's returning data.

AT&T's apology for Thursday's outage should stretch to a cup of coffee

Henry Wertz 1 Gold badge

US pricing

"$5 per account, not per line? Then it would make sense to have each family member with a separate service provider."

Well, there's two main types of services in the US.

Prepaid is generally per-line and priced accordingly. This is "pay as you go" service; you pay per month. The disadvantages: you usually don't get roaming coverage (which is not a big deal these days -- AT&T and Verizon have massively more coverage than they did 10 years ago, largely through buying up regional providers and integrating them into their networks; T-Mobile also has massively more coverage, largely from running 600MHz spectrum on sites that didn't have it until about 5 years ago, making their previous "swiss cheese" coverage much more solid.) The other disadvantage: since you are paying month to month with no contract, the cell co is free to modify or remove your plan -- you may go to renew one month and find the plans have changed!

Postpaid... it's typical for the single-line pricing on this to be QUITE bad. Like $50+. They REALLY encourage "family plans," by making it like $60+ for the first 2 lines, but then lines can be added for typically $20-30 a pop, so the price per line gets decent when you add enough lines. The advantage: USUALLY the cell cos will just let you stay on the plan you are on forever, so if you find a plan that is a fantastic deal you don't have to worry about them not offering it any more. In the past, postpaid also offered phone discounts that prepaid users usually didn't get; however, that is for the most part a thing of the past (one less reason to go postpaid over prepaid.)

In other words... prepaid, feel free to have each line on its own account or even its own service provider. Postpaid? The pricing would get out of hand if you did that.
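The per-line math above, sketched out (the $60-for-two-lines and $25-per-extra-line figures are illustrative, picked from the ranges mentioned):

```python
def postpaid_per_line(lines, base_two_lines=60, per_extra=25):
    """Price per line on an illustrative postpaid family plan:
    ~$60 for the first two lines, ~$25 per additional line."""
    if lines < 2:
        raise ValueError("this plan shape assumes at least 2 lines")
    total = base_two_lines + per_extra * (lines - 2)
    return total / lines

# 2 lines: $30/line; 5 lines: (60 + 75) / 5 = $27/line --
# the average drops as lines are added, which is the whole hook.
```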

Henry Wertz 1 Gold badge

The most crapulant company

I think AT&T is the most crapulant cell phone company in the US. In the past (due to Verizon and Sprint using CDMA, and AT&T and T-Mobile using GSM), you had Verizon and Sprint only permitting phones on a whitelist on their networks (no SIMs, and the phones tended to be customized enough between Verizon, Sprint, and US Cellular that they were not very cross-compatible anyway), and AT&T & T-Mobile being more lax about things.

No longer! When Verizon began converting to 4G LTE, they switched to SIMs and started allowing ANYTHING on their network that is physically compatible. T-Mobile (who bought Sprint) continued to do this as they switched over to 4G LTE service. AT&T, meanwhile, has gone to a whitelist -- even owners of previously working phones got a friendly text saying that effective some date, the phone would quit working, and they were cut off! (I'm not referring to AT&T dropping their 2G and 3G networks -- you can have a fully compatible phone with 4G, VoLTE, 5G, and all the bands AT&T uses, and they will not let the phone register if it's not on their approved phone list!!)

$5 per account cap? That is a joke. I mean, typically you can have these low-cost plans for like $10-20 (especially prepaid), with the "unlimited everything" sort of stuff running around $25-40 a line depending on the details. If someone is paying $250-400 a month for 10 lines, AT&T seriously can't stump up a full day's worth of service, $8.33-$13.33? You'd think they would have the sense to realize the bad PR of paying the same to someone with 5 lines as to someone with 10+ lines on their account.

But AT&T has a history of questionable behavior. A recent example: a few years back, they offered a 2-year price lock... then raised people's rates within a matter of 2 or 3 months. THEN they tried to violate their own terms allowing people to cancel penalty-free if any "materially adverse" changes were made. Instead of just giving the people who called in to complain a $42-63 credit to cover the $2-3 price raise for the remaining 21 or so months, or even a $2-3 credit to cover that month, or letting them cancel, they told them they could not cancel without paying early termination fees, despite the contract terms saying they could. Eventually people filed FCC complaints and the FCC told AT&T to cut that crap out and follow their own contract terms. Amusingly, before the FCC responded, a few people read that AT&T would cancel unlimited-data-plan users if the data use was really extreme ("unlimited" plans with limits are a whole 'nother kettle of fish...), so, being unhappy over that $2-3 price raise, they got the cancellations they wanted by intentionally sucking down Linux ISOs and speedtests and junk all month until they were up in that 1TB+ data use range and got cancelled.

Windows 10 users report app gremlins after Microsoft update

Henry Wertz 1 Gold badge

Re: The time for change approaches

"However most of the games I am interested in can now be played on linux and the only thing keeping me on windoze is inertia. I've freed up an M2 SSD and will shortly be installing linux on that to make the system multiboot."

I think you'll be duly impressed! I'll note, recent benchmarks show games on Linux generally getting 95-130% of the frame rate of the same hardware on Windows. The 95-100% range is pretty uncommon; most games are around 115%.

If you have an older video card, it's particularly impressive. AMD and Intel themselves may ship minimal improvements for older cards, or have stopped driver support entirely. But the Mesa Gallium drivers are fully modern, support even VERY old AMD and Intel GPUs, and get speed, compatibility, and capability improvements with every release. It's lovely!

If you have Nvidia hardware, the nouveau (OpenGL) and NVK (Vulkan) drivers are apparently getting better, but I would use the Nvidia driver. Linux users love to HATE on the Nvidia driver and talk about how it's garbage, but really, I've found its performance to be quite good. Could a Mesa driver be even faster? At present the answer is "no, it's not," but nouveau and NVK development were pretty moribund until recently, so the drivers aren't all worked out yet like they are for AMD and Intel GPUs. (nouveau/NVK could not reclock the GPU until recently; even a 4090 is not going to run games well stuck at 200MHz, and there wasn't much excitement about making the drivers feature-complete and conformant when the 200MHz GPU clock was going to make everything run like ass anyway. Now they have reclocking support, so heavier work on these drivers only started within the last 6 months or so. There's some catching up to do there.)

Henry Wertz 1 Gold badge

Intel SDE

First, I'll note a few of the Linux distros mulled a MacOS/Win11 style increase in requirements. They found...

a) "hwcaps" were put into the distros in the late 1990s; when CPUs began getting MMX instructions, hwcaps allowed non-MMX and MMX versions of libraries and binaries to be installed side by side, with the system automatically loading the right one based on the hardware's capabilities. This was later extended to handle SSE3, SSE4.2, AVX, AVX2, and AVX512, so your distro gets slightly bigger, but maintains compatibility on older hardware while still getting the speed benefits on newer hardware.

b) Based on that, they decided to take a look at doing this again in about 10 years. They plan to maintain compatibility with the likes of Core 2 Quads until like 2032 at a minimum!
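For the curious, the hwcaps idea boils down to picking a library subdirectory from the CPU's flag list. A simplified sketch of that selection (the real glibc-hwcaps x86-64-v2/v3/v4 levels include more flags than listed here, e.g. cmpxchg16b, fma, and the BMI extensions):

```python
# Approximate (incomplete) feature sets behind the x86-64-v2/v3/v4
# levels that glibc-hwcaps uses to pick library variants.
LEVELS = [
    ("x86-64-v4", {"avx512f", "avx512bw", "avx512cd", "avx512dq", "avx512vl"}),
    ("x86-64-v3", {"avx", "avx2", "movbe"}),
    ("x86-64-v2", {"sse3", "ssse3", "sse4_1", "sse4_2", "popcnt"}),
]

def hwcaps_level(cpu_flags):
    """Return the best hwcaps subdirectory this CPU can use, or
    None for baseline x86-64 (e.g. a Core 2 Quad)."""
    usable = None
    for name, required in reversed(LEVELS):  # check v2, then v3, then v4
        if required <= cpu_flags:
            usable = name   # each level builds on the ones below it
        else:
            break
    return usable
```

A Core 2 Quad (SSSE3 but no SSE4.x) stays on the baseline libraries, while a chip reporting AVX2 gets the v3 builds -- same distro, both supported.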

Second...

Intel SDE! "Software Development Emulator". This is FREE from Intel for any purpose, personal or commercial. It's intended for testing the newest CPU instructions on an older CPU that doesn't provide them, to make sure software using those shiny new instructions is using them properly (i.e. you can even test your app before a CPU with the new instructions even ships.) But it provides everything back to at least SSE4.2, and I think all the way back to MMX... AVX, AVX2, AVX512, TSX instructions, etc. are all there. You can run "sde your_application" (in Linux) or "sde yourapp.exe" (in Windows) with no discernible slowdown (it only traps and emulates the unsupported instructions, so it has very little overhead.) I used this with SecondLife on an old potato when they switched from a non-SSE4.2 build to an SSE4.2 build (without even mentioning it in the release notes!) with no discernible performance difference (maybe 1FPS?) I didn't get the speedup of SSE4.2, but emulating it didn't slow me down noticeably compared to the previous non-SSE4.2 build. I don't know how this could hook into running Microsoft Store apps, since those are presumably launched by some Windows service -- but presumably one could figure out SOME way to run the store or the service under SDE?
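The "wrap it only when needed" decision is simple enough to sketch (the `sde -- app` invocation form follows SDE's usual CLI convention; the instruction-set names and app name here are illustrative):

```python
def launch_command(app, needed, cpu_flags):
    """Prefix the command with Intel SDE only when the binary needs
    instruction-set extensions the CPU lacks. SDE traps and emulates
    just the unsupported instructions, so native-capable machines
    should launch the app directly with no wrapper at all."""
    missing = set(needed) - set(cpu_flags)
    if missing:
        return ["sde", "--", app]
    return [app]

# An SSE4.2-only build on a pre-SSE4.2 potato gets wrapped;
# the same build on a modern CPU runs natively.
```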

At any rate, there is a solution if you start having more and more apps require new instructions, as long as the kernel and early-boot stuff don't start requiring it!

The Hobbes OS/2 Archive logs off permanently in April

Henry Wertz 1 Gold badge

Probably not that big?

I'm just assuming this archive can't be that big. You simply didn't have much multi-GB-sized software back then. As others have said, you used to be able to order a physical copy on CDs, and it seems like (other than a time like now, when people are archiving the whole site) the traffic levels would not be all that high.

Is it time for 6G already? Traffic analysis says yep

Henry Wertz 1 Gold badge

Fantasy-land

Ahh the telecoms and vendors are off in fantasy land again.

For 3G, and 4G, and 5G, and now 6G, they have this fantasy that factories and such will rip out their wifi and ethernet, and pay a telecom per device to hook up to 6G networks.

Of course I would do this... why wouldn't I replace an already installed, highly reliable and available network by bolting ethernet-to-6G bridges onto each and every device (or buying new devices with 6G in them), then paying a telecom per device for whatever service I happen to get, subject to getting adequate signal strength? That's definitely a sensible thing and not a totally ridiculous fantasy on the part of the telecom vendors.

I also question this assumption that mobile data usage will constantly increase -- they did find in Japan, once people started getting Gbps speeds (not mobile, but still), that past some point usage did NOT keep going up and up. After all, if you're streaming 4K video 24/7, there's nowhere to go from there that's going to magically use even more bandwidth. Compared to a few years ago, more widespread use of H.265 versus H.264 or MPEG-4 or god forbid MPEG-2 (plus some streaming providers letting picture quality go to hell compared to a few years ago...) also means less bandwidth used for whatever streaming people are doing, whether it's 4K or some 640x480 SD-style stream. Even for people running BitTorrent 24/7, the torrents would finish, they'd share with everyone who wanted a copy -- pretty quickly at those higher internet speeds -- and usage plateaued.
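To put a number on that ceiling, the arithmetic is simple (the 16 Mbps 4K bitrate is a rough illustrative figure, not any particular service's):

```python
def monthly_tb(mbps, hours_per_day=24, days=30):
    """Data consumed by a constant stream at the given bitrate."""
    seconds = hours_per_day * 3600 * days
    bits = mbps * 1e6 * seconds
    return bits / 8 / 1e12   # terabytes
```

A 4K stream around 16 Mbps running literally 24/7 works out to a bit over 5 TB a month -- that's the ceiling for that viewer, and an H.265 encode at roughly half the H.264 bitrate (a common rule of thumb) halves it again.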

Google Groups ditches links to Usenet, the OG social network

Henry Wertz 1 Gold badge

Fine

Fine with me. On the one hand, I slightly lament the move. On the other hand, I did find the interface confusing, since they had some Google-only groups, possibly mailing lists, and usenet groups -- and a user interface I found difficult compared to a standard usenet news reader. Reading a post was easy. Doing a proper usenet-style reply was not; it wanted to put the quoted text in the wrong location with no apparent way to control it.

Microsoft floats bringing a text editor back to the CLI

Henry Wertz 1 Gold badge

I'm a weirdo so I use Joe

I'm a weirdo so I use Joe. I do believe it uses WordStar-compatible key combos (I used WordPerfect 5.1 for DOS back in the day, but went from Atari 8-bit to PC so I never used WordStar.)

That said, I rather liked DOS Edit; a version of that without the DOS-based file size limits etc. could be pleasant to use. Maybe (if it doesn't bloat it too much) throw in syntax highlighting like (from what I've read) VSCode has. In theory it can do that and stay small: Joe's installed size is like 2MB, and that includes all languages and a large man page. It does syntax highlighting and all that fun stuff for every language I've thrown at it.

Tesla to remote patch 2M vehicles after damning Autopilot safety probe

Henry Wertz 1 Gold badge

Re: "recall"

Nonsense and gibberish. If NHTSA were "against" Tesla they would have assessed a massive fine, required people to physically bring their Teslas in to make sure the software was applied, and possibly required Autopilot to be removed entirely until it's not buggy as hell.

Given NHTSA's potential enforcement powers and Musk's cavalier approach to regulations, NHTSA's actually shown restraint.

Broadcom to divest VMware's end-user computing and Carbon Black units

Henry Wertz 1 Gold badge

Might want to be careful

They might want to be careful there. I mean, if they totally ignore small users, then techs will not be able to cut their teeth on small-scale use of VMware, like it, and recommend it for larger use. They may have decided they have plenty of customers already, but that could certainly limit the long-term growth of their customer base.

Messed up metadata could be to blame for Microsoft's Windows printer woes

Henry Wertz 1 Gold badge

Yup

Yup, my sister works at a hospital in New Orleans and commented none of the printers are working today. I assume they probably have heaps of printers all called HP M106 or whatever. Glad I run Ubuntu!

Musk tells advertisers to 'go f**k' themselves as $44B X gamble spirals into chaos

Henry Wertz 1 Gold badge

I despair for my country

I despair for my country.

You should see the comments on this very event on Techspot. They are ugly.

People going on about gov't censorship (nothing here has been censored, advertisers pulled their funds.)

People coming out in support of Musk, saying everyone who supports pulling advertisements from hate speech are "woke".

Nonsense circular logic about how this speech is fine but saying "Maybe hate speech is not a good thing" (not a call for censorship, just the suggestion that maybe one should not actively support these views)... is unreasonable extremism.

The view that these companies (I mean, IBM! Really!) are "progressive-aligned". Even the suggestion that they wanted to drop ads on Twitter for a long time and just used this as a manufactured excuse to do so (which I find an odd argument, companies have shifted ad spend from one platform to another since the beginning of time.)

Nonsense about fake news and media distortions (of course, not about sites like Fox News or NewsMax, where they distort things enough that they're hard to recognize compared to what really happened... no, complaints about more mainstream sites that posted a story and updated it within 24 hours as new info came in, held up as "proof" that they intentionally posted inaccurate information since the original article had to be updated.)

And of course, since the US has only 2 main political parties, the ugly polarizing bickering, based on the view from many Democrats that almost all Republicans have extreme far-right views, and conversely the view from many Republicans that almost all Democrats have extreme far-left views. If we had a functional multi-party system, those people would be in their respective far-left and far-right parties, with the bulk being in center-left and center-right parties, merrily getting along and actually agreeing on at least some issues.

I made what I thought was a reasonable suggestion: perhaps Twitter/X should allow the kind of ad controls that TV/radio have had for over 50 years (you can choose not to have your ads on certain shows) and that other websites and platforms offer. I mean, I did an Android app with banner ads; I could opt out of gambling, alcohol/tobacco, and "adult" ads, and conversely the ad suppliers could choose not to display their ads on gambling apps, "adult" apps, etc. Twitter/X would likely even get more ad revenue, as better ad targeting would increase the view and clickthrough ("conversion") rates, letting them get more $$$ per ad. The commenters on there were too busy calling the others "woke" (and then acting like they hadn't said "woke" when someone commented on their use of the term) to respond to any actual discussion.

US govt pays AT&T to let cops search Americans' phone records – 'usually' without a warrant

Henry Wertz 1 Gold badge

AT&T is anti privacy

Not only does AT&T violate your privacy in this way, they developed the initial version of the S programming language (the predecessor of R) which helped the feds violate your privacy by doing statistical analysis of call records (and that was in the 1990s, so this wasn't some "post-9/11 let's give up our privacy" thing.)

Ubuntu Budgie switches its approach to Wayland

Henry Wertz 1 Gold badge

Re: Enlightenment

Oh yeah, I started using Linux around 1994, and Enlightenment came out a few years after that. Per Google, the first version, E16 (why it started at version 16? I don't know) came out in 1997, and E17 in 2012. 15 years for 1 release.

Henry Wertz 1 Gold badge

Re: mature has become a bad thing

Absolutely. I mean, if Wayland works well enough, I'll ultimately use it. And the argument that X11 is showing its age and could be replaced with something cleaner is valid (although the counterargument, that the cruft doesn't matter when the non-cruft works fine, is also valid to me.)

But the Wayland fans' argument that Xorg MUST go because it's old, and that it's "unmaintained" because it gets few patches, seems to completely miss that it gets few patches because the bugs got worked out years ago, the security bugs got worked out years ago, and the features people wanted all got added -- it's essentially feature-complete.

Henry Wertz 1 Gold badge

Thing I found surprising about Wayland

The thing I found surprising about Wayland -- and incidentally this is why things like HDR support will end up working in GNOME or KDE but not necessarily both almost simultaneously, and why Xfce etc. are having such a hard time getting complete Wayland support -- is that there is not a Wayland server the way there is an X11 server, shared by the different desktop environments. Wayland is a specification, and each desktop environment includes its own implementation of the Wayland specifications and protocol(s)!

(I realize that to Wayland enthusiasts I am probably stating the obvious -- but those who still use X11, or just don't look into how it works, would probably assume the X11 display server was simply replaced with a Wayland display server, with the desktop environment perhaps having special status but still ultimately connecting to the display server, rather than each desktop having its own implementation.)

Civo CEO on free credits, egress fees, and hauling it all back on-prem

Henry Wertz 1 Gold badge

no kidding...

Cloud's not best for everything? No kidding. It's great for burstable loads, and at the low end it's great to be able to get a low-end cloud server VM for like $5-10 a month. But indeed, cloud providers are not selling their services at cost. Economies of scale mean that up to a point cloud can save money (slicing up 96-core CPUs and such has got to help), but past some point it'll cost less to run on-prem. As said in the article, latency is certainly better on-prem; you're not going to get the sub-1ms time you can get over ethernet, or the couple-ms over wifi, across any internet connection between your business and a cloud provider.

Apple exec defends 8GB $1,599 MacBook Pro, claims it's like 16GB in a PC

Henry Wertz 1 Gold badge

Re: Insult to injury

This.

I haven't seen much discussion of this, but this is a HUGE problem, essentially making it risky to buy ANY used M1/M2/M3 equipment (and iPads and iPhones). You can have a Mac that has been factory reset -- but if the previous owner didn't log out of iCloud first, you CANNOT use the machine for anything until you log back in with that previous owner's iCloud credentials.

I've even read about cases where places that do "rent to own" would repo one of these Macs -- of course iCloud-locked. There's a procedure for Apple to reset that, but (even with paperwork showing they owned the machine) they apparently could not get Apple to unlock the machines.

Don't worry about those new export rules, China – Nvidia's already got more sanctions-compliant GPUs for ya

Henry Wertz 1 Gold badge

Domestic sales?

Maybe they should make some available domestically. I could see people buying the H20 who would not consider an H100 due to the high cost.

‘How not to hire a North Korean plant posing as a techie’ guide updated by US and South Korean authorities

Henry Wertz 1 Gold badge

In-person meetings and drug tests?

Evading requests for in-person meetings and drug tests? I'm not sure how that would be a red flag. I mean, if you're hiring a freelancer through a freelancing platform for remote work...

a) I wouldn't expect a remote worker to have to travel to wherever to meet in person. I have met in person with several people I freelance for, but I wouldn't think it'd be a red flag if I was like "No, I'm not driving 2000 miles to meet up with you."

b) I'm surprised any company would think they could require drug tests from freelancers; this to me appears to be them wanting to have their cake and eat it too: expecting to treat someone as a company employee while (by paying them as a freelancer) avoiding granting them the benefits an hourly or salaried employee would be owed (i.e. even if the company doesn't have health insurance, a 401K, etc., the company would be paying various taxes for employees that a freelancer is expected to pay themselves.)

The rest of the stuff listed would certainly be red flags though. Especially the rotating through payment methods too frequently, and the having stuff shipped to a shipping company rather than an actual destination.

How 'AI watermarking' system pushed by Microsoft and Adobe will and won't work

Henry Wertz 1 Gold badge

It's a start

It's a start. Every concern the article brings up is valid. But we're in a better situation having AI-generated images marked, with someone having to knowingly remove the metadata if they want to pass off an image as genuine, than the current situation of having no such metadata at all. Given this spec just came out, I'm sure more apps will gain support for it over time.

Forcing Apple to allow third-party app stores isn't enough

Henry Wertz 1 Gold badge

Just don't use it

I give people wanting the freedom to use their device the same answer I gave those saying "Shouldn't vendors all be forced to use a standard connector?" I point out "Apple is the only one not doing this, just don't use their products."

Really, for what you pay for that Apple device, you could buy in some cases 2-3 Android devices with similar specs, or 1 Android device with even higher specs; put whatever software you want on it (or don't -- Google has an app store if going freeform is too overwhelming), and freely interchange your USB-C cables between them (unless you get one that's pretty old, in which case it'll be microUSB.)

Nvidia's accelerated cadence spells trouble for AMD and Intel's AI aspirations

Henry Wertz 1 Gold badge

NVLink expansion

So, NVLink supports up to 256 devices. That sounds like they are essentially using a single byte in the protocol for addressing. But it's proprietary and only used by Nvidia: you have Nvidia cards using NVLink to connect through Nvidia-supplied NVLink switches. They don't have to concern themselves with compatibility with any other devices, or with following some kind of industry standard. It seems to me they could just extend things to use a 2-byte address (or go to 4 bytes if they're concerned they could have a cluster with over 65,536 devices), either as an incompatible NVLink change, or with some method to fall back to 256-device NVLink when newer cards are used in an older system with older NVLink hardware.
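The arithmetic behind that guess is simple address-field math; this is purely illustrative of how field width caps fabric size, not anything from Nvidia's actual protocol:

```python
# Illustrative only: how the width of an address field caps the number
# of addressable devices in any fabric protocol (not NVLink specifics).
def max_devices(address_bytes: int) -> int:
    """Number of distinct addresses an n-byte field can encode."""
    return 2 ** (8 * address_bytes)

one_byte = max_devices(1)    # 256 -- matches NVLink's stated device limit
two_bytes = max_devices(2)   # 65536
four_bytes = max_devices(4)  # 4294967296
```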

Gas supplier blames 'rogue' code for Channel Island outage

Henry Wertz 1 Gold badge

Airplane safety

Just to point out, on Airbus and the like this is why some safety-critical redundant systems -- if there are 2 or 3 of some sensor, for instance -- use parts from 2 or 3 different vendors. It avoids the situation where, if failure is due to some design flaw, software flaw, or manufacturing flaw, all of your redundant systems succumb simultaneously to the same flaw.

I don't realistically expect gas plants, power plants, etc. to get redundant systems from 2 different vendors, but in some limited cases that's actually done.

Of course, maybe they really asked some programmer "How likely is this to happen?" "Million to one chance" (but the code runs like every 5 minutes, so it'd run a million times in just over 9 years.) There have been a few kernel bugs (in the unstable versions) that would essentially have a million-to-one chance of triggering, but if it's in some driver that's pushing like 10,000 packets or screen draws or whatever a second, that means a kernel panic within a few minutes.
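The "million to one" intuition falls apart quickly once you multiply by run rate; a quick back-of-envelope check (illustrative figures only):

```python
# A check that runs every 5 minutes executes ~105,120 times a year, so a
# one-in-a-million event becomes near-certain within a decade.
runs_per_year = (365 * 24 * 60) // 5        # 105120 runs/year
p_single = 1e-6                             # "million to one" per run

years_to_million_runs = 1_000_000 / runs_per_year   # ~9.5 years

def p_at_least_once(p: float, n: int) -> float:
    """Probability of at least one occurrence in n independent runs."""
    return 1 - (1 - p) ** n

p_one_year = p_at_least_once(p_single, runs_per_year)  # ~10% in year one
```

At 10,000 events a second (the driver example), the same math gives a triggering probability near certainty within minutes.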

Excel recruitment time bomb makes top trainee doctors 'unappointable'

Henry Wertz 1 Gold badge

Oversight?

Any oversight? I mean, I worked as a temp at the cable company here briefly; they had a system act up and start printing out 1-cent checks. Guess what they did? Noticed this thing that normally printed like 3 or 4 checks (if someone cancelled service mid-month, they could get the prorated amount refunded) was printing a stack instead, stopped it printing, didn't mail out the checks, fixed the problem, and had it print out whatever checks were really supposed to go out.

You'd think somebody there would have noticed they had WAY more "unappointable" candidates than normal and went to see what was going on before they sent anything out.

Ahh well.

MariaDB ditches products and staff in restructure, bags $26.5M loan to cushion fall

Henry Wertz 1 Gold badge

Re: forced to explain to customers...

Yeah. Back when I started out in Linux in the 1990s, MySQL was considered simpler and faster, while PostgreSQL was considered more feature-complete but not as fast. Then, over the next couple of years, MySQL gained features and PostgreSQL gained speed, so both have been quite fast and feature-complete for years. MariaDB is a fork of MySQL, started when Oracle bought Sun, for similar reasons to LibreOffice forking from OpenOffice at around the same time.

Workload written by student made millions, ran on unsupported hardware, with zero maintenance

Henry Wertz 1 Gold badge

Re: Is anyone surprised?

Surprisingly, though, 32-bit systems could continue to be a big issue here.

I remember reading a few years back how stuff had been put in the Linux kernel in the late 1990s to attempt to handle the 2038 clock rollover... then when it was reviewed 5 or so years ago, it turned out these fixes were totally ineffective. (This is at a low level: as the clock got within that last moment before rollover, the CPU scheduler, timer-driven stuff, etc. would all crap out -- "run this a half second from now", but "a half second from now" never arrives because the clock rolled over.)

Having some kernel system calls return 64-bit times instead of 32-bit? And glibc support for userspace to use this? Only put in around the 2018-2020 timeframe. I was a bit surprised by this one; LFS ("Large File Support") was put in a long time ago, allowing access to >2GB files on 32-bit systems by having file system calls with a 64-bit size type. I just assumed they did the same for time back then.

I don't expect my desktop/notebook computers to crap out. But I won't be surprised if there is all sorts of stuff with Raspberry Pis, 32-bit microcontrollers, etc., that freaks right out and at least has to be power cycled when the rollover time comes around. I'm thinking also DSL modems, cable modems, wireless access points; a lot of that stuff still uses wheezy old (but fine for what it's doing...) 32-bit CPUs. And often ridiculously out-of-date kernels ("BSP" -- "Board Support Package", where the vendor takes some specific kernel version and fully customizes it to run on the board's hardware, so even if you felt up to compiling an up-to-date kernel for your embedded kit, you don't really have the option without huge amounts of work porting patches etc., if it's possible at all.)
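The rollover itself is easy to demonstrate. A minimal sketch, simulating a signed 32-bit `time_t` with manual wraparound (real C code would overflow the same way):

```python
# Sketch of the Y2038 problem: a signed 32-bit time_t runs out one
# second after 2038-01-19 03:14:07 UTC and wraps to a large negative
# value (i.e. back before 1970).
from datetime import datetime, timezone

INT32_MAX = 2**31 - 1                       # 2147483647

def as_int32(n: int) -> int:
    """Wrap an integer into signed 32-bit range, like C overflow."""
    n &= 0xFFFFFFFF
    return n - 2**32 if n > INT32_MAX else n

last_good = datetime.fromtimestamp(INT32_MAX, tz=timezone.utc)
# last_good is 2038-01-19 03:14:07 UTC, the final representable second

wrapped = as_int32(INT32_MAX + 1)           # -2147483648: pre-epoch
```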

Long-term support for Linux kernels is about to get a lot shorter

Henry Wertz 1 Gold badge

High stress

I saw a thread recently regarding the possible inclusion of bcachefs in the kernel; it was clear the developers are under extraordinary stress. They pointed out that, as it stands, automated bug-finding systems send them sometimes dozens of reports a week, with several filesystems having one full-time developer maintaining them. They were not getting along terribly well, and it was clear from their interactions that every one of them was heinously overworked and approaching the end of their rope.

I don't know if this is typical of other subsystems; probably it is, since they'd presumably get similar loads of automated reports. I could see stopping maintenance of 6-year support kernels, given distros don't use them, as a very reasonable thing to do.

The Pentagon has the worst IT helpdesk in the US govt

Henry Wertz 1 Gold badge
Devil

security

I suspect (among the other issues plaguing gov't agencies in general) that the DOD probably has special contracts with the expectation that the software will be secure. I know a relative of mine, when he logged into a system, still had to get computers with a CardBus slot because it authenticated using some kind of physical smart card. I've seen the flags for that kind of thing in ssh etc. but honestly assumed it was obsolete (...and it likely is, but they were using it.)

Amazon Linux 2023 virtual machine images still MIA

Henry Wertz 1 Gold badge

odd

odd, given that presumably they already have an image to deploy on ec2 itself, and people expecting it to run on site are probably using software that provides an ec2 style deployment environment.

Hold the Moon – NASA's buildings are crumbling amid 200-year upgrade cycles

Henry Wertz 1 Gold badge

As a few have pointed out

As a few have pointed out, the main issue here is that the vast majority of this money is earmarked. Yes, NASA gets $26 billion, so $250 million is only a few percent of the budget. No, they are not allowed to just shave a percent or two off those projects to keep their buildings maintained; the money is earmarked and cannot be spent on something else.

Perhaps the easy solution would be if they could get a bill to run NASA like some universities do. With NIH money, a research project would get funded, but the university had line items for "overhead" and a university foundation and so on that'd take a nice percentage off the top. It could make it easy if NASA could get something passed that let them take like 1.5-2% off the top to cover infrastructure maintenance.

ISP's ads 'misleadingly implied' existence of 6G, says watchdog

Henry Wertz 1 Gold badge

10G network

Oh, I've got that beat... Mediacom cable (I mean "Xtreme" internet..) has been advertising their "10G" network for about a year. Knowing that there's no 10G wireless or wired spec, I thought maybe the marketers made a mistake and meant a 10Gbps network. Nope, their highest plan is 1Gbps.

What the hell is a 10G network, you may ask? Total vaporware. CableLabs decided to use the term "10G" to refer to their plans to eventually, at some point in the future, provide some speed approaching 10Gbps through future DOCSIS (cable modem standard) versions, future hardware for the cable headend to provide that much speed to begin with, and future hardware to cram into the cable modem so the customer can actually get that much speed. None of which exists now.

Mediacom of course doesn't mention any of this in the ads, but on their website they reveal "10G" is their plan to, over several years, put in more cable nodes (fewer users on each piece of cable); roll out DOCSIS 4.0 (once hardware to do that actually exists, which it doesn't now); and do a "split" where they use a wider band of frequencies for uploads so they can offer higher upload speeds. I must admit, this is a fair enough plan. I just don't like how it's being advertised.

There's no equivalent to the ASA here in the US! Can you imagine a company in the UK saying "check out our 6G network!!!", then a section on the company's web site noting the network is actually 4G and 5G, and it'll be 6G at some point in the future once 6G specs come out and they roll them out?

Chinese media teases imminent exposé of seismic US spying scheme

Henry Wertz 1 Gold badge

Well...

Well...

a) Seismographic equipment doesn't seem like a very high profile target. As corestore (above) points out, this data is generally openly available anyway. Which isn't to say it wasn't hacked but again it just doesn't seem like a particularly juicy target.

b) As thames points out (above), well, take a look at even the NSA's public tools (no kidding, they have a GitHub). Ghidra is a really great, free and open source disassembler (which, besides showing you assembly language, will make a fair attempt at turning it into C code, so if the program was written in C, at least, you'll get something back vaguely resembling the source code that may have gone into the program to begin with.) They don't write these programs to NOT use them; I'm quite sure they're hacking plenty of things.

c) Not that I'm happy about China's behavior, but do keep in mind the Five Eyes (US, Canada, Britain, Australia, and New Zealand) run what is probably the largest surveillance network on the planet. I've found the rhetoric of the last couple of years to be hypocritical, to say the least. Keep it in perspective when you hear "I can't believe China is doing xyz" (in regards to data collection or surveillance): ask whether the Five Eyes aren't doing the same.

Tesla is looking for people to build '1st of its kind Data Centers'

Henry Wertz 1 Gold badge

So mixing the Tesla and Twitter operations again?

I'll be shocked -- SHOCKED I tells ya! -- when it turns out the first tenant in these data centers that Tesla investors pay for turns out to be Twitter.

Sparkling fresh updates to Ubuntu, Mint and Zorin on way

Henry Wertz 1 Gold badge

6.2 update

I had one computer affected by the 6.2 update -- ironically I used a live USB to do a live install to it; three days earlier and I would have had a semi-functional 5.19 kernel. This machine has a very old Nvidia chip (8600M GT); I ended up installing the "non-HWE" 5.15 kernel and putting the 340 Nvidia driver on there. No complaints on my other systems: just as Liam (the article's author) found, they updated to 6.2 and everything continued to run fine. I do think 6.2 might be running a hair faster than 5.19 for me, but that might be placebo.

Let's have a chat about Java licensing, says unsolicited Oracle email

Henry Wertz 1 Gold badge

Disingenuous

I find it to be particularly disingenuous of Oracle to put in these E-Mails "Customers no longer need to count every processor or user name" then tell people they must pay for every employee at the company. I mean, OK, technically an employee is not a user name... but obviously that's what they are trying to imply.

Anyway, yeah, I don't think I used Sun Java since the 1990s, and I've never used Oracle Java. I can't imagine not just using OpenJDK or the like instead.

Two new Linux desktops – one with deep roots – come to Debian

Henry Wertz 1 Gold badge

I have one!

I have a NeXTstation! I fired it up last week (after realizing I was wondering about Y2K compliance, so I had probably not fired it up since 1999.) It powered right up. Then I realized I needed the password... luckily I remembered it after about 10 tries. One really nice saving grace (if you intended to actually use it rather than have it as a showpiece): it DOES support NFS, so rather than trying to find some 30-year-old SCSI hard disk to replace the ~100MB HDD (and manage to get NeXTSTEP installed on it, which I think involves a boot floppy?), you can just access your terabyte after terabyte of modern storage over the network. I'm not a member of the "cult of Jobs" so I will probably sell it while the prices are high.

Bosses face losing 'key' workers after forcing a return to office

Henry Wertz 1 Gold badge

Yup

Yup. Obviously, there's work you have to physically be present to do. But going into a dreary cubicle farm, only to log into a usually equally-dreary computer and spend all day on a computer coding and collaborating via instant messages and slack channels? Screw that, I can (and now do!) do that from home. As a bonus, the employer I work for now is in an area where wages are much higher than my area, so I'm getting a very good hourly rate. (No, I'm not working from India... it's a US employer and I'm in the US, they're just in a part of the US with higher costs of living than mine.)

Mystery Intel bug halts shipments of some Sapphire Rapids Xeons

Henry Wertz 1 Gold badge

Agreed, but...

Agreed, but... the Z80 did have several bugs. Apparently, several were fixed in a later revision... then reintroduced after they found some widely-shipped software relied on the buggy behavior; later Z80s were made bug-for-bug compatible and the bugs just listed as the documented behavior. Per Google, the 6502 originally had a buggy ROR (Rotate bits Right) instruction: it worked except the outgoing bit did not end up in the carry flag as it was supposed to, so they just said the chip didn't have the instruction and that it would be added in a later revision. These CPUs tended to have at least several "undocumented" instructions -- on the 6502, bits of hardware were generally selected based on the bits in the instruction, so all the undocumented combinations did something -- sometimes just locking up the chip, or selecting unusual and usually useless combinations of functionality from other documented instructions. On later revisions they made them NOPs, then on even later variants used those opcodes to add additional instructions.

That said, I do agree -- it'd be great to have some middle ground: CPUs that are not as ungodly slow as the Atom-based E-cores, but with far less complicated pipelines, instruction reordering, etc. than the "performance" Intel and AMD cores (and probably no hyperthreading), so there's less scope for bugs, corner cases, and security flaws to creep in. Ahh well. At least the modern ones have microcode, so you can usually patch around the "stepping 0" bugs with microcode updates.

Quirky QWERTY killed a password in Paris

Henry Wertz 1 Gold badge

Ugh.. AZERTY

Ugh... AZERTY. I had the misfortune to use one of these things in Morocco. It was terrible. Leave it to the French to decide to move a few keys around just because.

Incidentally, I did find it odd that ALL the computers I saw there had AZERTY keyboards... I mean, it used to be a French colony so OK... but the primary spoken language there is Arabic and I never saw a single Arabic keyboard anywhere in the country. Do people there just E-Mail each other in French (or not E-Mail each other at all?) or use an on-screen keyboard or what?

One person's trash is another's 'trashware' – the art of refurbing old computers

Henry Wertz 1 Gold badge

Smart

Smart. I mean, the minimum of Core i3 is a LITTLE high, but I can see it as an easy cutoff. My dad is still using a Dell Optiplex 755 with a Core 2 Quad (about to turn 18 years old) as a daily driver with Ubuntu 22.04 on it, and it gets a hell of a workout (he loads 100+ page documents full of graphs and figures almost daily, scans both photos and documents, lots of printing, and Zoom conferences at least weekly if not several times a week -- monitor-mounted webcam...) Zoom maxes out 2 cores, but luckily it has 4. It'll hit 200-300% CPU usage as Chrome loads up the more bloated web pages (then dropping nice and low once the page is loaded, of course.) It is VERY slow to boot, however (and not just because it's using spinning rust; the drive was priced very low when new because it was an unusually slow model. I guess my dad starts it up then just goes and does something else while it boots.) I think the top-of-the-line Core 2s would be "OK", but I can see drawing the line at a Core i3 rather than seeing a "Core 2" sticker and taking the risk that it's a lower-end model that would not be fast enough to keep the user happy.

The first several generations of Core i3/i5/i7 chips don't support Win10 at all -- I think it's Sandy Bridge (2nd gen) where it could be SHIPPED with Win10 but after-the-fact upgrades are not permitted. In addition, my understanding is that later Win10 releases REQUIRE "UWP" drivers, and the earlier Core series don't have them (Intel dropped support for those series before UWP drivers became a requirement), so those series MUST use a several-year-old version of Win10 LTSB (Long Term Servicing Branch.) Of course quite a few can't run Win11. So I can see plenty of fully-functional systems that businesses will retire simply due to not being able to keep the (Microsoft) software on them up to date.

You now also have Zoom and software like it that will NOT let you conference if the software is more than about 3 months out of date (which I suppose is for both security reasons and maintainability, since they don't have to ensure backward compatibility with years-out-of-date client software). So you don't have the option of just running older software for certain uses.

I'm running an Ivy Bridge (3rd gen) right now and it's fully up to date; 16GB RAM, loads of storage (really, I have like 22TB hanging off this thing), and a GTX 1650 video card; courtesy of wine/dxvk/vkd3d (and Proton for Steam games) it even runs every game I've thrown at it. I may expand it to its maximum 32GB since DDR3 RAM prices have utterly collapsed; I got the 16GB for like $90 a few years back, and now a 2x16GB kit is about $40.

Red Hat strikes a crushing blow against RHEL downstreams

Henry Wertz 1 Gold badge

Not bound

"But you are subject to a contract under which you have agreed not to distribute the code. As shown above, the GPL does not appear to prevent side agreements, only additional restrictions within the program itself or its components. You can guarantee IBM lawyers will have looked at this very closely before they proceeded."

Except you aren't subject to a contract saying you have agreed not to distribute the code, because the license for the actual (GPLv3) software specifically has a clause saying if those you got it from try to apply contract or licensing clauses restricting the right to redistribute the source code, those clauses are invalid.

Vodafone offers '5G Ultra' to users of very specific phones in very specific locations

Henry Wertz 1 Gold badge

"ISTR that when 5G (or was it 4 or 6 or whatever?) being mooted it was said that because the range was so small the base stations could be like WiFi and instead of erecting the sort of masts they're now putting up they would be many unnoticeably small boxes similar to WiFi base stations on lamp-posts etc."

Yeah that was just PR hype. Microcells are small like this (and mmwave 5G does have coverage of a matter of city blocks.) But of course, just like 2G and 3G, 4G and 5G run on a variety of bands and for wider coverage the cell companies are going to continue to use the "cell towers" as well.

So there's (perhaps) 2 different technologies at play here (and I don't know if Vodafone is using both or not). 5G standalone is just as described, the cell cos started rolling out 5G for faster data before the spec was completely finished, so all "control channel" traffic ran over 4G, voice calls ran over 4G (if the company and phone support VoLTE, otherwise it could still run voice over 2G or 3G potentially). Running control channel over 5G is "5G Standalone", the battery life savings are mainly by not having to keep connected to both 4G and 5G. The 5G control channel does let it schedule time to send/receive data packets a bit faster so you get a slightly lower ping that way too, and save a slight additional bit of power since the radio can spend slightly more time powered down.

The OTHER technology is mmwave 5G. This is the stuff that gets easy multi-Gbps speeds; Verizon got 5Gbps "in the lab" over it 3 years ago, and for the last 2 or 3 years I've seen posts from places like Manhattan showing people could easily get 2.5Gbps in real-world use. The sites have a range of a few city blocks (because 28GHz+ frequencies get scattered and absorbed), but massive speeds. This isn't doing anything too fancy to get those speeds; they just run 400-800MHz wide channels versus the typical 4G/5G channels of 20-40MHz, so needless to say, if you have a channel that's like 40x the size, you can get much higher speeds over it.
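That "40x the channel, roughly 40x the speed" claim is just linear scaling at a fixed spectral efficiency; a quick sanity check with made-up but plausible figures (the 5 bits/s/Hz below is an assumption, not any carrier's real number):

```python
# First-order approximation: throughput scales linearly with channel
# bandwidth at the same spectral efficiency (bits/s/Hz). Illustrative
# numbers only.
def throughput_gbps(channel_mhz: float, bits_per_sec_per_hz: float) -> float:
    """Convert channel width (MHz) x spectral efficiency to Gbps."""
    return channel_mhz * 1e6 * bits_per_sec_per_hz / 1e9

typical_lte = throughput_gbps(20, 5.0)   # 0.1 Gbps from a 20 MHz channel
mmwave      = throughput_gbps(800, 5.0)  # 4.0 Gbps from an 800 MHz channel
ratio       = mmwave / typical_lte       # 40x: same as the width ratio
```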

The nice middle ground now is C-Band (5G bands n77, n78, n79 -- don't worry, I googled those bands, I didn't have them off the top of my head.) This is the 3.3GHz-5GHz range. It's used by satellite (not Sky Television -- the giant dishes that went up first in the late 1970s and 1980s, which TV networks and such still use for uplinks), but in the US at least the FCC, cell cos, and satellite companies came to an agreement where the satellite companies are getting paid to move all their remaining C-Band gear to one end of the band, freeing up something like 1500MHz for additional speed. This is nice because it has FAR higher range than mmwave, but (since it's a huge block of spectrum) should allow massive speeds. (For most cell cos, C-Band alone will give them more spectrum than all their other bands combined, so it should give a nice speed boost once they have it all up and running.)

Open source licenses need to leave the 1980s and evolve to deal with AI

Henry Wertz 1 Gold badge

Nonsense

This is nonsense. Open source writers do not have to rewrite licenses to accommodate AI systems memorizing their entire code and then typing it out for somebody -- any more than a company could hire people with photographic memory to read some code, write it down verbatim, then claim this is new code not subject to the license. Clean room implementation (i.e. one person writes a description of what the code does, and a second writes code based on that description)? That's allowed. Copying the code over? It's still subject to the license whether they want it to be or not, and that is as it should be.

And, to be clear, patent trolls are patent trolls -- companies that have not invented anything, just patents and lawyers, often abusing the patent system by extending their patent(s) to add things that are already being done by others and getting them backdated, so they can falsely claim they "invented" them first. (I think in the US this was finally fixed, but the patent system did have that ridiculous bug where you could extend a patent for years WHILE adding things to it, and everything in it would be backdated to the original filing date.) People producing code under an open source license enforcing that license against those who think they can incorporate the code into proprietary products without following it? This is not trolling; this is using the copyright system as intended.

As for developing new licenses for AI data sets -- that does make sense. It's still tricky, though: with an AI that will spit out verbatim blocks of code without following the license of that code, putting the entire AI data set as a whole under a license does not change the fact that the code it's spitting out is still subject to the original license, whether the AI company or the people using the code the AI spat out want it to be or not.

(To be clear, I'm not hating on the article, Steven did a fine job writing up what's going on here. I'm objecting to the AI system vendors arguments that they should be able to pirate open source code for proprietary products because an AI read the code off, say, github, then spit it back out.)