* Posts by drankinatty

278 publicly visible posts • joined 25 Oct 2018

GNOME dev gives fans of Linux's middle-click paste the middle finger

drankinatty

The "Gnome way or the highway" is what is wrong with Linux

This is the epitome of the arrogance of the Gnome developers. Not only do they take ownership of and break toolkits that began life belonging to GIMP, they, for lack of a better word, "Microsoft" Linux itself, forcing unasked-for features down users' throats while destroying functionality. That's why I stopped using gnome after gnome 2. But this behavior by the gnome devs not only alienates gnome users, it generally has a bleed-over effect on Linux as a whole. Between freedesktop.org and gnome.org, Linux acceptance has likely been set back by decades, plural, as infighting and breaks with backwards compatibility became the norm in both camps.

Linux was founded on "user choice". The "my way or the highway" attitude never succeeds in growing or building anything beyond closed-source corporate profits. Sad, but not surprising, to see gnome, as an organization, bring that to open-source. As a nearly three-decade Linux user, it just leaves a bad taste in my mouth.

Lego crams an ASIC in a brick to keep kids interested

drankinatty

So much for pure imagination...

Back in the day, way back in the day, you had Lego, Tinker Toys, green army men and Lincoln Logs. There was no end to what you could make. There were no blocks saying "I only do a handful of things"; you had singles, doubles, and if you got the fancy sets, the thin flat plates that came in many different sizes (basically serving as bases for longer spans, where you might have a door or other opening spanning up to three double-blocks). The point being, what you did with any of those toys came from unbridled imagination, the very thing you now basically have to spoon-feed children to develop. Many of the best toys didn't even come as toys. A safety-pin and a pillowcase, and you were Superman, Batman or whatever hero you wanted to be :)

Having been raised in the no-tech time, and having raised children spanning the no-tech to lots-of-tech times, I'm not certain "tech" adds anything to these types of toys or the toy experience. Sure, there are all-tech games, like the video games that matured from basic Doom to console gaming systems during the time I raised my kids, and those were supposed to provide the adventure. But with physical toys, I'm not sure the tech doesn't detract from the imagination. I mean, you certainly wouldn't want to light the smart blocks and green army men on fire to pretend the dripping plastic from the sacrificial soldiers was incoming artillery (with the super-cool whoosh the flaming balls of green plastic made as they dropped off the soldier, raining down on the enemy fort below :)

I guess if the 4x2 smart-block has an "Incoming" mode, then it just maybe could add to the battlefield creations. Tech-in-toys is hard to do; finding the balance that keeps Lego being Lego, instead of transforming it into an Erector Set with odd noises, is harder still... They're probably all made from flame-safe compounds today anyway....

Since my kids are long out of the house, I'll have to rely on those still in the fray for feedback.

An early end to the holidays: 'Heartbleed of MongoDB' is now under active exploit

drankinatty

Re: OK

As a greybeard I didn't either ... until I waded in. I've always been a traditional DB user, where referential integrity is enforced with a strict 1:1 field match between each record of a table. What Mongo does (what all "document databases" do) is allow storage of data without the 1:1 requirement that every field in the new data match some strict table layout (schema, fields, whatever). It's enough that there are some common fields that allow querying and retrieval.

So consider databases that deal in documents that are not, by design, identical (think, e.g., of an online store where each item is a document [a record], but not all items are the same type). Where a traditional DB would force you to insert dummy data and require every item to have the same fields (whether it needs them or not), with a "document database" you just slap another document into your table and it has whatever fields it has. This radically simplifies storing "stuff" from a development standpoint. Each record just has the fields it has, and when you query the table, the database knows which records have that field and ignores those that don't.

Document databases are not a replacement for traditional databases where "referential integrity" is required (a fancy term for that 1:1 field match), but they provide a complementary storage option if you are dealing with data that doesn't need every field in common, yet you still want to collect all related documents in one table. Yes, you pay for the flexibility with a bit of overhead, but I've actually been impressed by the simplicity of the approach.
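The idea is easy to sketch. Here are two hypothetical documents (the item names and fields are mine, not from any real store) that could happily share one Mongo collection while having only `_id`, `type` and `price` in common; a filter like `{"price": {"$lt": 12}}` matches the book and simply ignores the shirt's extra fields:

```json
[
  { "_id": 1, "type": "book",  "title": "Moby Dick", "author": "Melville", "price": 9.99 },
  { "_id": 2, "type": "shirt", "size": "XL", "color": "blue", "price": 14.99 }
]
```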

Tis the season when tech leaders rub their crystal balls

drankinatty

Re: Yes its time for the ROI on AI

As in defenestration?

'PromptQuest' is the worst game of 2025. You play it when trying to make chatbots work

drankinatty

It's the Acronym

They got it wrong. AI => "Absolute Incompetence", both in output and in the long-term mental capacity of the user. Why learn? Why critically think? Just garbage in and garbage out. GIGO, anyone?

Don't get me wrong, I can appreciate the benefit of a tool that has scoured the known limits of the written word and can predict ad nauseam a sequence of words that satisfies a query. That can be both useful and impressive at the same time. But to appreciate the output and ensure it actually satisfies the query, you must possess the knowledge to begin with. Yes, it can rapidly assemble in minutes what would take hours or days with pencil and paper, but unless you know what is supposed to be on the final paper, the result either makes you look smarter than you are or, more likely, a whole lot dumber.

I can appreciate what a generative pre-trained transformer does as a tool, but it is not a substitute for personal mastery of a subject or the ability to think critically. Much hype surrounds the uses for such a tool, but if it cannot arrive at the same answer twice given the same query, that speaks volumes about the much-hyped game of "PromptQuest". An apt name resulting from critical thinking. If the definition of stupidity is doing the same thing over and over again expecting a different outcome ... what the hell do you call it when the outcomes actually differ?

Here's to a better 2026 to all. Hopefully "PromptQuest" brings more than 30% higher electric bills...

What the Linux desktop really needs to challenge Windows

drankinatty

Snatching Defeat from the Jaws of Victory

My hair is fully gray, and I do recall my introduction to the shell on the VAX, and Unix on Sun SPARCstations, and I do hold out the best hopes of seeing the "Linux Desktop" become more prevalent on all computers, but history teaches us that the developers always find a way to snatch defeat from the jaws of victory. 2007/8 was particularly painful, as both major desktops abandoned well-developed code-bases and well-liked desktop environments to follow the lead of the new group of "kids with crayons" chasing desktop "widgets" and "effects" that added nothing to the ability to use a computer, but which "threw the baby out with the bathwater" and set the Linux Desktop back a decade or more.

KDE 4.0.4a (alpha) was pushed by openSUSE 11.0 as the default desktop and blackscreened continually for years. The desktop was a hodge-podge of apps that had been ported to KDE4 and those that languished well into Plasma before finally being ported (and broken, and taking years before, e.g. konqueror --profile filemanagement could open up looking the same way it did when it was shut down). Gnome fared no better with gtk+3 breaking backwards compatibility with every new point-release for a decade or more. Icon and widget factories, etc.

This effectively relegated the Linux Desktop to hobbyists, of which I've always been one. If it takes hours to collect information and author bug reports, and then weeks if not months of back and forth, that is part of giving back to the community, which we gladly do. It is also a complete deal-killer for the Linux Desktop in a business setting. There is no way any company can retrain workers on a monthly basis to deal with issues created by fights between toolkit makers or sloppy porting of apps from one toolkit to the next.

Which brings us to the present, after a long road to finally get Plasma frameworks and gtk4 settled. And what do the good stewards of the Linux Desktop do to ensure another decade of defeat snatched from the jaws of victory? Oh, they decide to tear down X11 and follow the kids with crayons pushing wayland. They make it tribal, not based on merit or features or stability or any other metric you would look at in choosing how your computer works. That will certainly give business the warm fuzzy it needs that things have stabilized to the point of the Linux Desktop being a reliable alternative.

We may as well just go build out a bazillion data centers and burn the planet down... Why we can't learn from history is bewildering and frustrating. I'm a hobbyist, so my Linux Desktop will work fine, but I'm not sure I have another decade to give. At some point sanity must prevail.

AI has pumped hyperscale capex, capacity – but how long can it last?

drankinatty

Re: To what end?

That hit the nail squarely on the head, and with the perfect tone. The insatiable greed of a few has sentenced our children and children's children to a climate growing increasingly hotter long after the window of opportunity to prevent the worst closes, on our watch, at the end of this decade. While Paris may have wished for a lofty 1.5 deg C limit on the rise in global average temperature, current projections of even the "best case", equally rosy, scenarios show us surpassing 2 deg C. The number of boundaries crossed, tipping-points reached and self-perpetuating feedback-loops entailed in that additional 0.5 deg C of warming is staggering. The worst capitalism has to offer has squandered 40 years of warnings with full knowledge of the consequences. It's a hard truth to explain to your children.

SoftBank scrambling to come up with $22.5B in OpenAI funding before New Year

drankinatty

Re: OpenAI has become profitable....

It's only a matter of time before the music stops in this grand financial game of musical chairs. The sad part, and the risk, is that the fallout from being the one left standing may take national and global economies down as well. Somebody is going to be left holding the bag, and it's a rather large and smelly bag.

The CRASH Clock is ticking as satellite congestion in low Earth orbit worsens

drankinatty

You are obviously an Aerospace Engineer with a full grasp of orbital mechanics and low earth orbit (Not). Or, more likely, a paid lobbyist for the boot-licking, ketamine-snorting immigrant from South Africa who duped the American taxpayer into subsidizing a satellite-based internet build-out, trashing regions of low earth orbit with the same vigor and incompetence he took with a chainsaw to US government agencies. Or you are one of the MAGA idiots who live a fact-free existence. Either way, it's far better to hold your tongue, sit silent and let the rest of the world think you may be ignorant than it is to pick up a keyboard and shatter all doubt.

Wonder if a bad-actor nation that had the capabilities and wanted to lash out at the rest of the world wouldn't just target a few satellites and scatter the debris across wide regions of low earth orbit. How short-sighted the use of this limited resource has been... But then again, humanity doesn't really have a very good track record over the last 300 years.

10K Docker images spray live cloud creds across the internet

drankinatty

When I read the article, the title of your post jumped to mind. Forrest Gump had it right :)

Oracle raises AI spending estimate, spooks investors

drankinatty

Re: "if you assume cloud IOUs will be fulfilled"

$300B on the books from an OpenAI that has yet to turn a profit -- what could possibly go wrong? Visions of sugar-plums and mortgage meltdowns abound this time of year.

Welcome to America - now show us your last five years of social media posts

drankinatty

Re: Hey Trump

We will gladly extradite him to the Hague after the war-crimes charges are brought over the strikes in the Caribbean and Pacific. We'll even give him the old one-finger salute on the way out, reminding him of how many friends he had before his dog died...

Vibe coding will deliver a wonderful proliferation of personalized software

drankinatty

Re: Do you smell Tulips !!!???

"The continuing push to find some arena where 'AI' is accepted & used to generate profits and start the general acceptance into our everyday lives is beyond desperation."

That's the part that really worries me. The AI companies are financed with $2T of circular debt secured by the promise of non-existent future profits. To the point that they floated the idea of duping the US taxpayer into backing the debt. The echoes of "credit default swaps" are loud indeed.

While the models can spit out code that compiles and functions, absent continual revision by human experts, putting any of it into production is a fool's errand. When very experienced humans can argue over the meaning of text in a language standard, those areas of interpretation are an LLM's Achilles' heel to ever getting things 100% right. Whether behavior is defined or not hangs in the balance. That bodes well for continued employment of experienced coders.

IBM touts progress on tech stack for AI-enabled airline with no passengers or alcohol

drankinatty

Re: Dangerous Article

They lost me with "Watsonx will also underpin an 'AI-based concierge experience for employees that is trustworthy, proactive, contextually aware, and proposes next best actions for each persona interacting with guests.'" I like to think I'm tech-savvy, and I can pretty much understand what Watsonx is from a 30,000-foot level, but the rest of that cacophony of quoted words just left me thinking "WTF??"

UK to Europe: The time to counter Russia's information war machine is now

drankinatty

Re: Given how many MAGA "influencers" turned out to be on the Russian payroll

You forgot at least one - the orange man impersonating a president is likely on the list, along with his crypto co-conspirator Witkoff and his son-in-law, whose father the orange man pardoned. They need to be continually reminded of the Budapest Memorandum and its control over how the war in Ukraine should be resolved.

Congress quietly strips right-to-repair provisions from US military spending bill

drankinatty

Re: As usual

Crypto changed wallets somewhere... What we need is more sunshine, the greatest disinfectant for political corruption yet developed.

Whatever legitimate places AI has, inside an OS ain't one

drankinatty

Re: I Can't Wait For It To Be Rolled Out....

That works swimmingly well until the first agentic disclosure of attorney-client, attorney-work-product or protected health information, and the lawyer or doctor is disbarred or has their medical license suspended for failure to safeguard client/patient information. That presents a sticky wicket. It is the bar/medical board that imposes the safeguarding requirement and places the non-delegable duty on the attorney or physician (at least in the US). So regardless of how integral the agentic AI part of the OS becomes, or the acquiescence of the employer, having information disclosed behind the scenes by some part of the OS hangs around the professional's neck like a noose or guillotine frame.

Beyond the protected-information aspect, system-level hallucinations are as foreseeable as the recent article where google's new AI tool wiped the poor guy's D:\ drive and then hallucinated a mea culpa. All the sorry in the world won't fix a wiped OS if the letter changes to "C:". (but that does provide a perfect opportunity to be done with the M$ BS and simply load Linux...)

And all this is just the preamble to the "yet to be seen" disclosure of exactly what information the new system-level part of the OS is sending off to *3rd party servers* to enable the agentic operations. The true reason to make the feature system-level is likely to remove the user's control and ability to limit or disable the gold-mine of new personal information being phoned home under the guise of "Agentic OS" features. Losing an individual's control over what information the OS shares would seem to reach the point of absurdity with the proposed "Agentic OS".

All of it adds up to the most colossal "What Could Possibly Go Wrong?" I can imagine. Playing Russian-Roulette with client information on something as undefined and untested as an "agentic OS" isn't a position I'll allow myself to be put in.

I'm sure somewhere out there somebody wants an OS that will think for them and do it all behind the scenes. Personally, I just want an OS that does what I tell it to, and no more.

FTC schools edtech outfit after intruder walked off with 10M student records

drankinatty

Really? 10M students' (children's) data exposed -- and no fine?

Talk about a toothless wonder. The FTC not levying a fine when data on 10M students is pilfered in plain text is baffling. Almost like crypto changed wallets somewhere to buy leniency. Sad when you have to question the motivation of acts supposedly taken in your name. The fish truly rots from the head down.

Windows 11 needs an XP SP2 moment, says ex-Microsoft engineer

drankinatty

Re: Wait

It does kinda grow on you after a while.... After 25 years of Linux desktop use (and with a stable of windows boxes as well), you ultimately learn that, from a rudimentary functional standpoint, there's not much of a difference. You install: your drives are configured, the OS is installed, basic graphics are set up, your network is configured to use DHCP by default; it just works. You point the mouse at the menu and click, and your app opens; it just works. There are only a few apps where the Linux equivalents still lag (Quickbooks a notable one). On the other hand, there are thousands of fantastic apps for Linux that simply have no counterpart on windows (or none you can get without built-in malware).

Twenty years ago, hardware support was the big divider for all but the bleeding-edge or obscure one-off hardware. On balance, Linux has done a very good job with driver support, and it's no longer the dividing factor it once was.

Then you start using Linux a bit more, discover the development tools and command-line utilities and the fact you have a full array of server apps at your fingertips that makes Exchange look like your grandmother's blown-out station wagon, the vector and bitmap graphics tools and utilities, and the list goes on and on. Twenty years ago the joke was, if you think "I wish I had a program to do ... xyz", there is a good chance somebody has already written it, and it is packaged for your distro. If you find it but it's not packaged for your distro, then you just pick your favorite compiler and build it yourself.

Now obviously this is just a superficial scratch of the proverbial surface, but as someone who has used both in a small business, developed on both, created work-product on both, and on and on: the more windows "supposedly got better" with each release from 3.1, 95, 98, me, XP, Vista, 7, 10 -- the more Linux really did. (sorry, can't help with 11; it's been Linux-only after 10)

That puts me in Mr. Plummer's age group, and I couldn't agree more. The shareholders may want to see the endless AI bloat premised on some undefined future return on investment, though I wonder how tolerant a public will remain of subsidizing the technology designed to take their jobs? Perhaps that's where the return on investment lies, and they're just not giving us the complete picture yet?

Shai-Hulud worm returns, belches secrets to 25K GitHub repos

drankinatty

Barely a week since the last npm supply chain compromise

There is something horribly wrong with the premise of npm. We have barely gone 7 days since https://www.theregister.com/2025/11/14/selfreplicating_supplychain_attack_poisons_150k/ and npm looks a lot like Russian roulette, or cancer, at this point.

Systemd 259 release candidate flexes musl support – with long list of caveats

drankinatty

Meh, there are things about systemd I like, dislike and really dislike, but I live with it.

We were privy to the very vocal preview of the SysV-init grumbles as systemd replaced initscripts on Arch in 2012(?) -- a long time ago. We also have openSUSE (or I need to check whether it is formally back to just SUSE now), Debian, PiOS, Ubuntu, Alpine and probably a few others I'm overlooking. As it stands today, all but Alpine use systemd, and that appears to be the point of the musl compatibility layer.

On balance, the only trade-off in moving from init to systemd is the complexity of learning to write/manage .service files, where a simple bash script would be the only challenge with an initscript. The frustrating points of systemd are its near-mandatory control of resolv.conf via systemd-resolved and all the other appendages that Poettering has glued on to systemd, where before you had several contenders providing similar functionality to choose from for everything from managing logins to network control and the like.
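To be fair, the .service learning curve is modest. A minimal unit (the daemon name and path here are hypothetical, not any real package) looks something like:

```ini
# /etc/systemd/system/mydaemon.service -- hypothetical example unit
[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/local/bin/mydaemon --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable and start it with `systemctl enable --now mydaemon.service`, which is admittedly less typing than a well-written initscript.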

Lately I find myself keeping the parts that make sense and disabling the rest (sometimes to the extent of setting immutable attributes on files like resolv.conf so resolved can't break name resolution on a regular basis). Handling user seats with the login manager has caused issues for some not using SDDM (I don't, I still use kdm), but that has at least been worked out to the point that things work correctly with kdm, seats and the user-service setup (never something I would have asked for, but to be fair, the configuration of the user environment so that all sound is initialized and simply works with pipewire is a good thing for me).

And that's pretty much the honest rub: so long as it is configurable enough that I can tweak the parts I don't particularly enjoy so they are usable, turn off what I don't want, and not have unasked-for new features break the way I use Linux on a consistent basis, I'll live with it without casting aspersions.

I mean, if I pick up a distro running systemd, I can make it do what I want it to do, and I'm not one of those zealots who would go to the trouble of switching init systems just because I don't like systemd. YMMV. But systemd isn't blameless: it could easily continue support for good old initscripts, yet Poettering made a conscious choice to go to the trouble of removing that support -- for the benefit of whom? It's not like continuing to support 3 directories of bash scripts was any type of maintenance issue or headache in the systemd world. And that will rub some people the wrong way.

Devs gripe about having AI shoved down their throats

drankinatty

Re: "AI is no longer optional."

Oh god.... And the follow-on to that is if it still doesn't work ... get a bigger hammer...

(flashback to the early 80's, applies to all professions, worst application was in auto-repair and separating ball-joints, tie-rod ends, etc.., and yes there is a rational limit to how big a hammer should be)

The open question is whether the proverbial bigger hammer (more parameters in the model, more floating-point bits crunched) actually leads to a better model, or just more creative hallucinations. The bigger-hammer adage just may break down at this point. And therein lies the part of LLMs I'll never understand: if, despite more parameters, more data, more everything, there is still no concrete way to know how models actually form a response, no defined path, how does anyone know that the "more will be better" driving the insane data-center build-out will actually solve the problems that keep thinking people on the sidelines?

I'll admit it, I gave up on AI after 4 queries (2 to chatgpt, 2 to claude) to create a proverbial "Hello World!" program in C, with contingent numerical inputs, type checking, simple arithmetic using defaults if input wasn't provided, and reasonable error messages describing any failures, all in the 50 or so lines of code it should take. Both produced working code; claude produced the more conforming code, while chatgpt assigned the result of strtol() to an int without validating the result fit in a signed 32-bit value, etc. It became readily apparent that I was going to have to go line-by-line and verify the model's code -- and if they can't get the dead-bang simple stuff right, how in the heck could you trust anything substantial they spit out?
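For the record, the validation the chatbot skipped isn't exotic. A minimal sketch (the function name is mine, not from either model's output) of assigning a strtol() result to a 32-bit int safely:

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Parse str as a signed 32-bit integer. Returns 0 on success, -1 on
 * failure. Checks that digits were actually consumed, that no trailing
 * garbage follows, and that the long result (which may be 64-bit on the
 * host) actually fits in int32_t before narrowing.
 */
int parse_int32 (const char *str, int32_t *out)
{
    char *end;

    errno = 0;
    long v = strtol (str, &end, 10);

    if (end == str || *end != '\0')     /* no digits, or trailing junk */
        return -1;

    if (errno == ERANGE || v < INT32_MIN || v > INT32_MAX)  /* out of range */
        return -1;

    *out = (int32_t)v;
    return 0;
}
```

On a 64-bit long, "2147483648" converts without ERANGE but still fails the INT32_MAX check, which is exactly the case the chatbot's straight int assignment would have silently truncated.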

No time saved for me. But I have the luxury of not being paid to code, so no overseer to answer to. I feel for those in the profession and in this position. How do you balance the application of your time, talent and energy? Use it to: (a) actually write the code, or (b) validate code some model spit out fixing errors as you go. Done right, both should lead to the same answer, but (a) sure sounds a lot more self-satisfying than the drudgery imposed by (b).

drankinatty

Re: Gen Z

Ahh, to be sixteen and omnipotent again. Ten feet tall and bulletproof. Troubling how the intervening 45 years of slowly revealed truth have shown you were not omnipotent or bulletproof at 16... Funny how time has a way of doing that.

In Debian, APT 3 gains features – but KeepassXC loses them

drankinatty

Re: Agree to disagree

Another issue is package sprawl and confusion (as touched on). Keepassxc meant one thing in the Linux community before it was split and stripped in Debian. Just as vlc meant one thing before Arch stripped all plugins and put them into individual packages. The kernel-firmware package(s) are no different, but at least there size was the driver (though if the distro ends up installing all of them by default, you've accomplished nothing).

The result in each case is more packages for the user to find/install and in the case of keepassxc, the meaning of that package has changed. Just my $0.02, but it would have been far better from a continuity standpoint to leave keepassxc alone and create a stripped version in keepassxc-minimum rather than stripping keepassxc and moving what was keepassxc to keepassxc-full.

Snap out of it: Canonical on Flatpak friction, Core Desktop, and the future of Ubuntu

drankinatty

Re: Good read

Think of immutable distros as "locked-down Linux" or "Linux with training wheels". It provides just what it says, an immutable core, so the customizations advanced users have made for years in /etc and elsewhere are no longer available. Further complicating matters is the lack of standardization in how apps look for config overrides from, e.g., /usr, /etc and then ~/. Not all apps implement the same lookup, leaving users who need customization often with nothing but a dive into the package source to see if, and what, config paths are utilized.

That said, I agree with the assessment in the interview. While I may not agree with the design decisions being made, at least the justifications were given, and they are free to choose their own path forward. For Ubuntu users, the good news is the announcement that a "Ubuntu-Classic" will be available alongside. So you can give both a test drive and make your own decision for your use case. There are some cases where immutable is a great fit - just not for a normal power-user desktop or server.

Invisible npm malware pulls a disappearing act – then nicks your tokens

drankinatty

Re: This is a bug in npmjs.org

This is the type of attack that worries the heck out of me when building source packages that rely on npm or pypi or the like. The build-system sees npm as a dependency and then pulls in npm and the dependent packages as part of an automated build. As a package maintainer performing test builds or a user building from source, you are compromised before you know it.

The only saving grace is that most OS-level build systems, like Arch's makepkg, can build in a clean chroot (makechrootpkg), mitigating some of the damage, but the only people who set up and take advantage of the additional steps of a chroot build environment are package maintainers and a few advanced users. These "fetch on-demand" installers, which have shown themselves incapable of being made safe, provide a wide-open attack path for miscreants to exploit.

I keep notes on npm and pypi compromises, mostly from stories on the Register, and it doesn't seem much more than a month goes by between revelations of a new compromise in either of them. Yes, I'm old, and likely a curmudgeon, but I'm of the opinion the only way to ensure sources are safe is to include the checksummed source as part of source packages, not "fetch on-demand" schemes. Otherwise, without the ability to validate the sources before the build begins, we will be playing the game of exploit whack-a-mole forever.
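The checksum discipline is nothing exotic. A sketch of the idea (the file names are stand-ins, not any real package):

```shell
# Stand-in for a vendored source tarball shipped inside the source package
printf 'pretend source tree\n' > mypkg-1.0.tar.gz

# The maintainer pins the checksum once, at vendoring time
sha256sum mypkg-1.0.tar.gz > mypkg-1.0.sha256

# The build script verifies before unpacking; any tampering fails the check
sha256sum -c mypkg-1.0.sha256
```

Arch's PKGBUILD sha256sums array and Debian's Checksums-Sha256 fields do exactly this for packaged sources; the hole is everything a "fetch on-demand" installer pulls afterward, which nobody pins.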

AI browsers face a security flaw as inevitable as death and taxes

drankinatty

Hello Firefox 144 - with perplexity AI!

No thank you, Mozilla. (unwanted, unasked-for and completely unneeded). Mozilla seems overcome by FOMO, drank the koolaid, and threw security out the window. How does a professional with a duty to protect confidential information, at the risk of losing their license, use any of these tools anymore?

Ex-CISA head thinks AI might fix code so fast we won't need security teams

drankinatty

Re: This...

The orange man promised the "Best People" -- I guess this is what he was talking about. Imagine a government led by the clueless, the corrupt, the mentally challenged, or all three above... If you are here, worry, if you are not, be thankful.

NASA Jet Propulsion Laboratory: Another 550 employees set to leave the building

drankinatty

Re: " the next generation of engineers might choose to pass on JPL and NASA"

The writing has been on the wall since return-to-flight (STS-26). Following the soul-searching and critical review after the 51-L Challenger explosion, the corporate consolidations and name-change game began in earnest. Rockwell (corporate and RSOC), McDonnell Douglas, Martin Marietta, Lockheed and Ford Aerospace were the traditional engineering contractors when Nasa Rd. 1 was still a 2-lane blacktop running between I-45 and Kemah.

Beginning in the early 90's, the consolidation and reorganization hit full swing with Loral (previously Ford Aero), Unisys and the like coming onto the scene; the station was grabbing more and more of the shuttle budget and DOD flights were waning, ending with STS-53. The "design it, cut metal, build it and let's go fly" mentality was replaced by "let's have a meeting to determine the agenda for the next meeting"... Deming's TQM got its nose under the tent, and the culture it created resulted in the blaze Columbia cut across the Texas sky.

Through it all, JPL retained the "design it, cut metal, build it and let's go fly" mentality. Sad to see the ax falling there. There is no faster way to destroy young engineering talent than to tell them, in not so many words, their struggles through mind-bending physics and mathematics are no longer valued by your country. The reality, then and now, is there is a very very small market for aerospace talent. It's either space, defense or a very few in aviation.

Shrinking that pool is a loss for the country, whichever country it is. While no government program is perfect, there are very few that push technical boundaries the way aerospace does. Producing engineers competent to design to a margin of safety of 0.1 or less doesn't happen overnight, and there isn't another engineering discipline that does it.

Climate goals go up in smoke as US datacenters turn to coal

drankinatty

Will history understand how greed justified intentional destruction of the planet?

I doubt history will be kind, nor should it be.

As the Paris goal of limiting total warming to 1.5 degrees Celsius was surpassed last year with no significant reduction in emissions or any real plan to curb emissions in the near future, it's worth taking a moment to look back and understand why 1.5 degrees Celsius was chosen as a target. 1.5 degrees total warming is the threshold beyond which the Greenland ice-sheet enters terminal decline with, at least, 23 feet (7 meters) of sea-level rise baked in.

As of 2024 the goal is to limit total warming to "as close to 2 degrees Celsius as possible", with the presumption being that 2 degrees will be passed. If you haven't looked at what 2 degrees of total warming means for the planet, it's time to take a critical look. The impacts of climate change become exponentially more catastrophic for each 0.1 degree by which we exceed 1.5 degrees of total warming.

That's not okay. The current tech robber-barons happily trade your children's and your children's children's future for what? To hype the AI bubble further and over-inflate the next quarterly report to keep the capital flowing and the dreams of AI nirvana alive? (note the conspicuous absence of "earnings" between "'quarterly" and "report")

Are we really dumb enough to keep pumping CO2 into the atmosphere in ever-increasing amounts despite full knowledge of the consequences? (I know the answer to that question, and so do you. It is a stunning indictment of capitalism and its oft-touted ability to self-regulate.) And now returning to coal for electricity generation? You've got to be kidding me, and I'm not a tree-hugger or climate-zealot, just a dad of three that cannot believe we can't do better. Enough is enough. Time's up.

Texas senators cry foul over Smithsonian's pricey Space Shuttle shuffle

drankinatty

Re: Risks?

This is right on point. When the Shuttle Carrier Aircraft 747s were retired, the shuttles were presumed to be in their final resting places. A 747 is the only feasible way to transport one long distance. A barge is really a no-go for a flight vehicle, even though Enterprise was moved that way, because of the risk of corrosion. Perhaps if you had an enclosed barge and shrink-wrapped the orbiter, but anything short of that is just playing with fire.

The move of Discovery from its current resting place is nothing short of a Herculean effort, beginning with dismantling the portion of the current building housing it to get it out, to sawing down every telephone pole and traffic light en route to wherever it will be taken. Not to mention the routing logistics of avoiding all overpasses along the way.

Can it be done? Sure. Can it be done for anywhere close to the current estimated cost? Doubtful. Should it be done is the proper question, and the answer to that is a firm "No".

We have a hell of a lot bigger and oranger problems in the US that need solving; moving Discovery to stroke some politician's ego doesn't even register on the list of current crises to solve.

No account? No Windows 11, Microsoft says as another loophole snaps shut

drankinatty

Linux thanks you M$

Who would have thought M$'s arrogance and desire to extract personal information from the privacy-conscious by mandating an M$ account for Win11 would be the helping hand Linux needed for the Linux Desktop to finally become mainstream.

I won't run an OS that doesn't install and run with the Ethernet cable pulled. Windows isn't something I'll miss anyway, good riddance. :)

Windows 10 refuses to go gentle into that good night

drankinatty

Re: Untested workarounds

Once you make the switch to Linux, you will soon wonder how you ever got along without it. It will spoil you.

The only app missing is a Quickbooks type app, but other than that, from office to development, to graphics, to CAD, to audio, to video processing, to ... whatever, it's all there and much, much, more. I've run Linux as a primary desktop and server OS for (?) ... 23 years now. I also have many windows boxes, and have had during that 23 year period, and don't really have a gripe about that either, but returning to windows as a primary OS would be as limiting as trading in your latest car for a horse-and-buggy again.

Not worth the grief, just load Linux and be done with it. Load it on a new SSD if you will and pull and save your windows drive in a drawer (for emergencies). It won't be long before it's just a forgotten drive in a drawer somewhere.

I'm out, says OpenSUSE: We're dropping bcachefs support from next kernel version

drankinatty

Leap 15.4 was my last openSUSE Release - Tumbleweed is fine, but I will not load 16.0.

I started with a boxed set of SuSE 7.0 Pro (code name: Air) and have used SuSE, SUSE, and openSUSE (and now SUSE again [1]) ever since, through all the ownership changes: Novell, Micro Focus, the deal with the devil (M$), no Micro Focus, Leap, etc. But with 15.6 and the focus on an immutable root filesystem, it became quite clear that providing a traditional Linux distribution had given way to using the open-source release as a sandbox for its commercial offerings. There's no fault with that, but it did mean losing much of what the distribution had built itself on.

To SUSE's credit it did provide Tumbleweed as a rolling release that retains all the traditional Linux distribution characteristics, and to SUSE's credit again, works amazingly well -- even building and running the Nvidia 390xx driver (the G04 driver in SUSE speak). So while I won't install another Leap release, I continue to happily run openSUSE Tumbleweed.

I've also run Arch since early 2009, thanks to the SUSE move to KDE4 (4.0.4a) as the default desktop with the 11.0 release, which I also continue to happily run today. Compared to Arch, Tumbleweed package rebuild volume is quite a bit higher, not horrifically so, but while Arch updates may run 500M - 1.1G between kernel point releases, it's not uncommon to come close to tripling the update download size with Tumbleweed. The comforting part is that sometimes Arch leads in being current with upstream (including the kernel), and sometimes Tumbleweed gets there first. (they are usually within 48 hours of each other on kernel point releases no matter what)

The final point in the article about the painful nature of a release upgrade, especially one with no AutoYaST and no xorg, is something that, to some extent, has always been part of the release model. While you could always 'zypper dup' to upgrade between releases, depending on the changes and your package selection, a "fresh install" had remained the recommendation (simply dumping the package names with rpm and then reinstalling). The downside was that, even for the 'particular' Linux user, that could encompass a few days of moving and validating any version changes between configs, applying desktop tweaks, rebuilding freetype to restore proper hinting depending on the display, etc. Moving to something like 16.0 for an organization that relies on features of xorg that wayland doesn't provide would be a showstopper.

As for bcachefs, its impact will not be as bad on my use as the loss of ReiserFS in the kernel. Let's hope Kent doesn't suffer the same fate as Hans...

[1] I'm not sure openSUSE is a thing anymore, there was discussion about it being just SUSE now, but I'm not sure SUSE is even clear on what it is at this point.

Internet mapping and research outfit Censys reveals state-based abuse, harassment

drankinatty

Re: Build that Wall!

Moving ssh to a high port takes care of 99% of the bad guys, and if you want to take care of the other 1%, you can implement port-knocking to enable ssh only after the proper sequence of knocks is received. I too am fairly strict on geographic blocking, but I tend to do that by CIDR rather than top-level domain. Though no approach is perfect: the fragmentation and resale of IPv4 blocks has made it almost impossible to get a consistent listing of geographic origin anymore.
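For anyone curious what port-knocking looks like without extra software, here is a toy sketch using the iptables "recent" module (ports, names and timings here are made up for illustration; real setups often use knockd or nftables with longer knock sequences):

```shell
# Toy two-knock sequence: hit TCP 7000 then 8000 within 10 seconds, and the
# relocated ssh port 2222 opens for your source IP for 30 seconds.
iptables -A INPUT -p tcp --dport 7000 -m recent --name K1 --set -j DROP
iptables -A INPUT -p tcp --dport 8000 -m recent --name K1 --rcheck --seconds 10 \
         -m recent --name K2 --set -j DROP
iptables -A INPUT -p tcp --dport 2222 -m recent --name K2 --rcheck --seconds 30 -j ACCEPT
iptables -A INPUT -p tcp --dport 2222 -j DROP
```

To anyone scanning, all three ports look closed; only a source that produced the right sequence ever gets a SYN-ACK from 2222.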

Says a lot about humanity when you look at what the internet has become since the naive days when Mosaic was the only browser in town ... and what it says about humanity, isn't good.

drankinatty

These "Research" Companies are a Cancer

Censys, Shodan, DigitalOcean and the lot are a research cancer that has grown on the internet, one that abuses and then catalogs your IP and public-facing services and makes that information available to the bad guys for free. I have near-daily fail2ban bans of IPs similar to (among other services):

"2025-08-31T01:11:13.452946-05:00 valkyrie postfix/smtpd[19354]: improper command pipelining after CONNECT from 216-131-108-38.zrh.as62651.net[216.131.108.38]: €€ü[WË8…z*)QáÙ©·85§Ž7°ÂÏÄ øµQC ıÃB¶’ºBx³¶2唾6쬡ÄÜv—K“Ô€>À,À0€ŸÌ©Ì¨ÌªÀ+À/"

or

"2025-08-31T09:10:49.166588-05:00 valkyrie postfix/smtpd[25479]: improper command pipelining after CONNECT from unknown[104.248.30.84]: ¥€¡fi.baßÞ!P _ эÐÀД¦Í²½K˜dwÂ} s7òöó£ãÕ“©—m§‘í[׿€3™t 怊€€gÀžÀ¢€ž€9€kÀŸÀ£€Ÿ"

If you check the information collected, not only do they provide complete reports on the services, but also the software running behind them, version and patch-level (e.g. PHP, etc..). No wonder the bad actors flock to these "research" sites, they do most of the work for them.

Further, very few of these "research" sites provide an "opt-out" by publishing a list of the IPs they operate from (I can think of one that does). I've literally got ipset lists with hundreds and hundreds of researchers' past IPs that have been blocked, but it is just a temporary game of whack-a-mole.
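For readers unfamiliar with ipset, the whack-a-mole above looks roughly like this (the set name and the range are illustrative only; a real list would be loaded from a maintained file):

```shell
# Create a hash-of-networks set, add a scanner's published range,
# and drop everything matching the set with a single iptables rule.
ipset create scanners hash:net
ipset add scanners 198.51.100.0/24      # illustrative range, not a real scanner
iptables -I INPUT -m set --match-set scanners src -j DROP
```

The advantage over per-IP rules is that the set can hold thousands of entries with one firewall rule, and entries can be added or removed without touching the ruleset.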

There is a fine line between "research" and "hacking", and these sites fall further toward the latter.

Programmers: you have to watch your weight, too

drankinatty

While the tech aspects are pressing, the final paragraph touching on geo-politics and climate is the most telling:

"High-tech chips and software will be the least of our worries, but we will have to deal with artisanal software that can be maintained by hand"

We ignore geo-politics and climate at our peril. While those younger than 50 may have little memory of conversations with WWII veterans, or "duck and cover" drills of the Cold War, we have done little to protect against aspects of human nature that thrive on greed, strive for power and disseminate propaganda to satisfy the need for either. If you don't understand "AMOC" or the implications of its stoppage, best do a quick refresher on the current state of climate. The prediction of annual maximum high temperatures in the desert southwest of the US easily exceeding 150 degrees Fahrenheit by 2100 will get your attention fast.

Kudos to the author for keeping the big picture in mind.

Linux Foundation says yes to NoSQL via DocumentDB

drankinatty

I dunno, I think the first two comments may be a bit harsh. Adding what is eventually to be a standard NoSQL storage option under PostgreSQL is a win-win for the community. The MIT license protects the project from Redis-style hijacking, and if you haven't worked with both SQL and NoSQL, then you are in for a treat. While there are rocks to be thrown at The Linux Foundation, it also does an equal measure of good. M$ is giving up the code (has, in fact), so what is needed is a long-term, reliable steward for it. From the community's standpoint what matters is that the source for the extensions stays open and allows the standardization of how you interact with it.

While I used to pooh-pooh NoSQL, it has solid benefits. It's not a replacement for what SQL does; they are apples and oranges. Its benefit is in BSON document handling -- eliminating the field-by-field integrity requirement between records in SQL and just allowing you to slap another document into storage and retrieve it based on a loose key/value relationship. Add jq to the mix and you pretty much have a Swiss-Army-knife storage solution for all your loosely related documents.

(think of the systemd journal spitting out all the comm-auth failures of bad guys knocking on your server door; a quick jq translation to pull the wanted information into key/values (e.g. ruser and rhost or some such), and you add the whole thing as a new document in your NoSQL database of intrusion attempts, queryable by any of the key/value pairs in the document)
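A hedged sketch of that idea: filter an auth-failure entry out of systemd journal JSON and reduce it to a small key/value document ready for a NoSQL insert. In practice the input would come from something like `journalctl -u sshd -o json --since today` and each output line would be fed to mongoimport or a DocumentDB insert; the sample entry and the fields kept here are purely illustrative.

```shell
# One journal entry, as journalctl -o json would emit it (sample data).
sample='{"MESSAGE":"authentication failure rhost=203.0.113.7","_SOURCE_REALTIME_TIMESTAMP":"1725000000000000"}'

# Keep only auth failures, and reshape each into a compact document:
printf '%s\n' "$sample" |
  jq -c 'select(.MESSAGE | test("authentication failure"))
         | {ts: ._SOURCE_REALTIME_TIMESTAMP, msg: .MESSAGE}'
```

Each output line is one ready-to-insert JSON document, which is exactly the shape `mongoimport` expects for newline-delimited input.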

For the purpose it serves, it takes a fraction of the technical resources to do that in NoSQL compared to SQL. TLF picking the project up as the steward for extensions that let you do in Postgres what now takes a Mongo or similar to replicate is a great addition.

While anything can go south, if this works out and the extensions are actively supported, developed and made available for Postgres, I'll happily trade in building mongo from source for it. I just don't see the same doom and gloom in the announcement. It may work out that way, but at least at this stage it has an equal chance of being a solid benefit for all.

Arch Linux takes a pounding as DDoS attack enters week two

drankinatty

Ultimately we will need accountability of the hardware/cloud providers that allow botnets to run

While a DDoS attack against Arch makes little sense, it does expose an issue with the current provider framework that allows hardware and cloud services to be used for the attack. Yes, granny, and the chump that just can't quit opening e-mail attachments, get a pass as far as accountability goes, but not so the professional hosting or service providers. It isn't unreasonable to expect professional hosting providers and ISPs to have sophisticated tools in place that can spot and stop active attacks within minutes of their start.

How many millions of spam e-mails leave Comcast every day attempting to relay mail to "julia@imobust.com"? A simple pattern to spot and stop, but apparently big-tech can't be bothered. Chuckling, just checking the fail2ban jail for the last hour, and here's another:

2025-08-24T07:12:47.926493-05:00 valkyrie postfix/smtpd[28213]: NOQUEUE: reject: RCPT from c-73-195-211-98.hsd1.nj.comcast.net[73.195.211.98]: 504 5.5.2 <hypir>: Helo command rejected: need fully-qualified hostname; from=<hr@mydomainhidden.com> to=<julia@imobust.com> proto=ESMTP helo=<hypir>
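For reference, that 504 "need fully-qualified hostname" rejection is what Postfix produces when HELO checks along these lines are enabled in main.cf (a sketch of the relevant settings, not necessarily the poster's exact configuration):

```
smtpd_helo_required = yes
smtpd_helo_restrictions =
    permit_mynetworks
    reject_non_fqdn_helo_hostname
```

A bare hostname like `hypir` in the HELO fails the fully-qualified check, so the relay attempt dies before any recipient is accepted.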

Selling rent-a-servers for ~$2 a month provides a miscreant with their favorite asset: unlimited hosts and IP addresses to use freely in any attack they choose. Until we require civil and/or criminal accountability for those providing the tools to miscreants, the exponential growth of this cancer on the Internet will continue. And no, it's not a one-size-fits-all problem, but just as tech companies are expected to have specialized tools in place to defend against data theft, providers must be required to have the same level of specialized tools deployed to prevent the misuse of their platforms.

As long as they are free of this needed regulation, they will continue to take the position "it's not our problem", nothing will change, and the exponential growth of distributed bot-net attacks will continue to wreak havoc across the Internet. Yes, granny and the chumps make up a measurable slice of attacks running from infected machines, but even then, the ISP, properly motivated, by either regulation or threat of a civil liability, would have specialized tools in place to identify miscreant use of their hardware before the ink on the legislation (or judgement) was dry.

Back to being FOSS, Redis delivers a new, faster version

drankinatty

New version or New valkey (the latter)

After all of the chaos Redis caused with its "I'm FOSS", "oops, now I'm NOT FOSS" routine, I wouldn't touch it with a 10-foot pole. Valkey, a Redis fork from before the "I'm not FOSS" license change, is a direct drop-in replacement, is FOSS, users are happy... and surprise, Redis does the license shuffle again to try to appeal to Valkey users.

After Redis has shown its commitment to FOSS is as reliable as the orange president's commitment to the constitution -- no thanks, I'm quite happy with Valkey.

Torvalds blasts tardy kernel dev: Your 'garbage' RISC-V patches are 'making the world worse'

drankinatty

Re: Dislike the delivery

Nah, we miss the old Linus. Life isn't all sugar-drops and gum-drops. Sometimes ... the truth hurts.

No qualms with the delivery.

Imagine the responsibility of making sure 38 million lines of code continue to work seamlessly across version updates. It seems pretty understandable that the one responsible may get a bit upset and need to lay down the law when asked to include poorly written code in the project at the last minute. I can't fault anyone for that, and it's reassuring to see a firm hand on the tiller.

Gadget geeks aghast at guru's geriatric GPU

drankinatty

Hardware from the 20th Century is more than enough for 99% of users

Hardware spec outpaced what 99.9% of computer users will ever need at the turn of the century. Just how much compute power do you need to run e-mail, a web-browser, a word-processor and spreadsheet? Okay, so you miss out on being able to spin the Compiz cylinder at 1200 RPM, but other than that, as far as just producing work-product goes, minimal boxes more than mow the grass.

If push came to shove, I could get by with a Pi 3B+ just fine (build times would suffer, but that's just extra coffee).

Don't get me wrong, I'm not against building beast-like boxes. I raised 3 kids and have a bone-pile of about 3 of every other year's gotta-have Nvidia cards. (Yes, the 390 and 470 drivers are now a pain to patch with each new kernel version -- but hey, it looks like we get a free pass with 6.16, no patches seen yet!) And yes, the kids needed the compute and graphics power to game with all graphics options turned on. For me, meh, I've got no need. Now that the kids are all grown and out of the house, the bone-pile has quit growing, but looking at the old RTX 2070 Supers and GTX 980s, there is still enough pixel-pumping power for anything I'll ever need.

I'd a lot rather make use of older kit, adding a new SSD to replace spinning rust, and take the I/O benefit, as that provides the bulk of what the latest and greatest promises as far as responsiveness. With RAM cheap, most old boxes now have 32G (and 64G if their chipsets support it). That's plenty to put even large builds on a tmpfs in RAM (e.g. PHP or MongoDB, etc.), which then complete quite quickly, and I avoid having to upgrade to a new 750-1000W modular PSU just to support the latest graphics cards.

I guess in sum, this is a testament to the advances made in computing capability over the past 40 years. From the 8080 with a few K of RAM and dual 8" floppy drives that wouldn't hold a single picture taken on your phone today, to Terabyte M.2 drives, processors with more cores than sense, RAM approaching the Terabyte scale and GPUs with more compute power than the system they are attached to. My Tumbleweed laptop powered by an ancient Gen 2 i7 sporting 8G still boots from off to full desktop in just under 12 seconds. Runs all Mozilla apps and libreoffice just fine (as does vscodium, etc..) and will finish most of the large builds I kick off within 30 minutes or so.

For decades I would spec out the motherboard and chipset wanted, the socket and processor, buy the RAM and graphics card and PSU, and put it all in one of Antec's solid cases (and most of those still run!). But today, if I need a box to just prepare work product, I usually grab an off-the-shelf refurb HP that meets my needs and costs about $300 US (many of those are still running too). The custom boxes and the kids' custom gaming machines always cost more, but for shuffling documents and occasional builds, literally just about anything made this century will do. Seems the King Penguin thinks somewhere along those lines too. Good for him. Taking a stand against wasted watts contributing to global warming, and using old kit as long as it meets the need, to help prevent us all from drowning in e-waste (most of which is still operational...). That I can respect.

Raspberry Pi RP2350 A4 update fixes old bugs and dares you to break it again

drankinatty

Re: Whoop! (non sarcastic)

You need to temper the "Five Volt Tolerant" joy, as it does not mean you can simply hook up 5V signals to the logic pins. See "New RP235X silicon released" over at the Pico/General forum, https://forums.raspberrypi.com/viewtopic.php?t=390423, and the comment by "hippy", which clarifies:

<quote>

"I do wish Eben would refrain from using that term when the RP235X silicon doesn't meet the capabilities that term invariably conjures up; mainly that you can arbitrarily connect GPIO (FT) pins to any notional 5V signal, job done, nothing to worry about.

It's not "5V tolerant" as I, and I believe others, would take that to mean. It will only "tolerate 5V under specific circumstances".

At least the datasheet doesn't say "5V tolerant" anywhere, more correctly IMO says GPIO (FT) pins "will tolerate voltages up to 5.5 V, provided IOVDD is powered to 3.3 V".

<end quote>

So: pay careful attention to the datasheet on just what is meant by the improved 5V handling on the new Pico. I was a bit surprised too.

Arch Linux users told to purge Firefox forks after AUR malware scare

drankinatty

Re: Common sense should always be applied

You got me by a year, I started in 2009. One thing of note: the infected AUR packages were all "-bin" packages. I've long had a problem with Arch allowing "bin" packages to be hosted on AUR. (For those not familiar, AUR packages are normally source packages that contain the Arch Linux PKGBUILD build script, which facilitates downloading and validating source files and building the package from source on your machine; "bin" packages are binaries built elsewhere and then hosted on the AUR for download, where the PKGBUILD simply installs the binary package.)

The lack of transparency in exactly how the bin packages are built, what they contain, and the inability to have a validating checksum on each source component has always been a show-stopper for me (though you should, in theory, be able to go look and satisfy yourself of the "bin" build contents -- somewhere). I get the balance: packages like Firefox, or MongoDB, etc., are very large builds with a large number of dependencies, so there is a "convenience" trade-off made. For those packages, the normal build-from-source package is usually also available on AUR, but for convenience, and to avoid the large builds and dependency installs, many users choose the "bin" package to install.
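To illustrate the difference, a stripped-down source-style PKGBUILD looks something like this (name, URL and checksum are placeholders); makepkg verifies every entry in source= against its checksum before building, which is exactly the step a "-bin" package moves out of your sight:

```shell
pkgname=example
pkgver=1.0
pkgrel=1
arch=('x86_64')
url="https://example.org"
license=('MIT')
source=("https://example.org/${pkgname}-${pkgver}.tar.gz")
# Placeholder digest; makepkg refuses to build if the download doesn't match.
sha256sums=('0000000000000000000000000000000000000000000000000000000000000000')

build() {
  cd "${pkgname}-${pkgver}"
  make
}

package() {
  cd "${pkgname}-${pkgver}"
  make DESTDIR="${pkgdir}" install
}
```

With a "-bin" PKGBUILD, source= points at a prebuilt tarball, so the checksum only proves you got the same opaque blob the uploader made, not that the blob matches any published source.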

I've long lobbied against including "bin" packages in AUR for just this reason. That "one step removed" layer -- between you being able to validate all sources on your machine before you kick off the build, and that taking place somewhere unknown, by somebody operating behind an e-mail alias, with the resulting binary files being packaged and uploaded to AUR -- provides too great an opportunity for compromise, in that one little additional layer.

Arch, as always, did the right thing and warned on the aur-general mailing list as soon as the issue was discovered. As pointed out in the article, the only area of deficiency was the suggestion that affected users "take additional necessary steps" to ensure the malware was removed -- without providing an indication of what was needed to ensure the particular installed RAT was fully exterminated.

Lesson: I'll stick to building from validated sources -- even if the chroot ends up with eight gig of dependencies and the build takes several hours and every core I have available. Security does have its costs.

If you want a picture of the future, imagine humans checking AI didn't make a mistake – forever

drankinatty

I've yet to see demonstration of a "Successful Use Case", lawyers can't get it right.

Scraping the internet and being able to predict the next word, or decompose a cat image into a billion unrecognizable images just to be able to reconstruct another cat image, is all good, but so far, other than being a neat curiosity, the AI craze hasn't really borne fruit. Sure, there have been sizable layoffs, but then the reckoning came that AI really can't replace all those tossed out like a worn-out pair of shoes. The scientific community is having to retract a significant number of articles that AI "helped" write, meaning the same old hallucination of supporting material found its way in.

None of that paints any type of rosy business picture that AI will ever be anything more than another bubble that pops, 30 years after the pin prick of the dot-com bubble. Unfortunately, the damage done by the hype isn't limited to the "hypsters". The hype comes with serious downsides, caveats and addendums for us all -- such as AI threatening to take the electric grid down with spiraling energy demands that, from a practical standpoint, look set to push back the ability to reach net carbon-neutral energy by decades, as our temperature rise over pre-industrial levels races upward toward 2.5 deg C (which is already worst-case: Greenland Ice Sheet lost, the AMOC stalled), and life on the blue ball, third planet from the sun, hangs in the balance with temperature recovery models pushed out past 10,000 years.

No, until AI can beat an Atari 2600 in Chess -- it's all hype with serious downsides.

Firefox is fine. The people running it are not

drankinatty

Add NoScript and Ghostery and you are pretty much bullet-proof. I too have used Firefox since 1.x, and I've had to bite my tongue a time or two as the "rabbit-pellet" version race zoomed from 3 to infinity and version numbering lost any meaning. I've also run the gamut trying alternatives, but in the end, with all its warts and the things I have to disable in about:config, I always end up back with Firefox.

Is there room for improvement? Sure. The loss of focus on the browser engine and making it the best has been apparent. Handing development over to a group of "kids with crayons" that tried to add every unnecessary bell and whistle they could dream up was a classic "do the opposite" move. But until some other FOSS browser emerges that will have a continued existence longer than a year and has fewer warts than Firefox, I'll stick with the familiar warts I know, and know how to turn off. But here's to hope...

This article places blame exactly where it should be, and provides a colorful (and correct) explanation of how it ended up that way. A point well made. If all the Mozilla "execs" had their salaries divided by 20, there would be a good chance those that remained, remained because they were dedicated to developing the best browser possible rather than being dedicated to the trappings of seven-figure salaries. The ad company sell-out and user-agreement modification to profit off user-data are symptoms of the cancer that has grown within the company's management.

Critics blast Microsoft's limited reprieve for those stuck on Windows 10

drankinatty

Re: Quite whining

Choose whichever distro you like, it's all Linux under the hood. The only difference between any of the distros is how they choose to put the pieces together: subtle choices on how various configs are handled, what package manager is used and the package selection offered. Matters not whether you choose an RPM-based distro, deb/apt, pacman, or the good old install-from-tarball, manage-it-yourself flavor. It all depends on what you want -- whether there's a GUI install/config tool to use, or whether you just want to spend the extra hour to really learn how Linux works and manually prepare the disks, set up the filesystems and then install and configure everything by hand. The result is the same.

There's nothing magic about mint. It's just another Ubuntu derivative. Works fine, as does Ubuntu, Debian, openSUSE, Slackware, Archlinux, etc... Try them all. Most provide live media you can simply boot and test drive before committing to your hard drive. It simply becomes a matter of taste at that point. Most have welcoming communities and mailing lists to help and many provide excellent online documentation. It can be a very enjoyable experience. The price is right. It costs less than a year of extended M$ security updates. (though you should make an effort to give back to the community -- that's what makes it all work -- quite well)

Proton bashes Apple and joins antitrust suit that seeks to throw the App Store wide open

drankinatty

Why does a Switzerland-based company route western e-mail through China?

Perhaps Apple would be more accommodating if Proton didn't route most (all) of the e-mail proton-mail handles through Chinese mail servers. Huh? Yep. (e.g., mail.protonmail.ch[185.205.70.128], [185.70.42.128], [176.119.200.128], and so on)

I've had no end of issues with US (Texas) based local governments' use of Proton Mail being blocked at the firewall and having to specifically whitelist IPs to have city and county engineers' e-mail delivered. There may just be some valid concerns Apple has with this "Swiss" company's practices. (It also begs the question -- what are these local governments thinking... all they hear is "proton" and "secure" without ever understanding more.)

Frozen foods supermarket chain deploys facial recognition tech

drankinatty

Re: Hmm (double Hmm)

The company specifically says it will reduce "violent crime" -- when the hell did frozen food shoppers all turn violent? Did I miss that part? The data isn't stored? (How long does it exist? Long enough to be transferred or sold? And... we're just supposed to take that at face value? ... not.)

The solution seems readily apparent -- fresh salad, anyone? If non-violent frozen-food consumers vote with their wallets, I bet the CEO changes his tune on the privacy-infringing facial recognition escapade. What's the world coming to?? Beam me up, Scotty, and if the Enterprise isn't available, I'll take the TARDIS.

CloudBees CEO says customers are slowing down on 'black box' code from AIs

drankinatty

Did you ever notice...

People that have very little to say like to misapply technical terms to sound important? "the velocity that they need to prosecute this trend at"? Really, "velocity"? And just how do you calculate the dot, cross or triple product of that "velocity"? Use "speed" or "pace" when you mean speed or pace. (</language nerd rant>)

I guess when what you are saying is "we've figured out that creating systems based on AI generated code can lead to significant reliability problems later on because nobody really knows HTF the code is put together" ... you have to come up with a whole lot of words to obscure what was readily available to most early on. But hey, you're drawing CEO pay, so you gotta make it sound complicated.

Page: