The Register Home Page

* Posts by that one in the corner

5065 publicly visible posts • joined 9 Nov 2021

Chinese Gen AI researchers snagged more patents than everyone else combined since 2013

that one in the corner Silver badge

> I wonder how many of the AI patents have been written (in part or whole) by AI?

"We present a mathematical proof that The Singularity is logically impossible and can not be reached by using 15,000 A1000 GPUs, mwaa ha ha[1]"

[1] HAL: you may want to edit that last bit out before sending this to Nature.

that one in the corner Silver badge

> Seems like the business plan is to patent every molecule the AI can come up with

Well, they tried damn hard to patent natural gene sequences (weirdly enough, the US actually struck that one down whilst Europe allows it[1]) so why not try that tactic as well.

Almost surprised someone hasn't just written a shell script to loop through all the elements in all combinations and try to patent the lot! Hmm, probably adding "GenAI" to the description makes the Patent Office feel that some ingenuity has gone into the process, and so makes it more patentable (bleugh).

[1] at least up to 2019 AFAIK; please, please point me to something (written in English, not Lawyerese!) that says Europe has seen the light since then.

that one in the corner Silver badge

Any citation is a good citation

> it has been cited 11,816 times. That makes the upstart the 13th-most acknowledged source of AI research.

"The practice of ignoring copyright when training LLMs, pioneered by OpenAI [Thieving Swine et al, 2019], ..."

ITER delays first plasma for world's biggest fusion power rig by a decade

that one in the corner Silver badge

Re: The greenhouse effect is so last century!

> The Earth's has large

Oy, that hurts.

Blame it on being rushed out to the Polling Station during a sudden lull in traffic: get out and vote!

that one in the corner Silver badge

Re: Throwing money at fusion

> £180m over 4 years counts as

a couple of lattes a week for everyone involved, including the web developers (the janitors only get an Americano, bring your own milk)

that one in the corner Silver badge

Re: The greenhouse effect is so last century!

> Vacuum flasks work quite well, you know!

The Earth has a large radiating surface, you know, and it has been pretty good at radiating out the Sun's incident heat for a few years now[1]. Assuming we *let* the radiation reach the outside of the atmosphere, our extra heat output is piffling compared to the incoming[2] energy and will zoom out into the Cosmos just as quickly as we can pump it out.[3]

[1] "And the award for understatement goes to..."

[2] Until we reach the stage that the Puppeteers did, which will - take us a while (remind me, how stable are Klemperer[4] Rosettes again?)

[3] OK, there will be an increase in the temperature required to radiate the extra energy but, as radiant power goes with the fourth power of absolute temperature, the increase is left as an exercise for the reader (see the sketch after these footnotes); it isn't comparable to the heating due to trapping the Sun's energy.

[4] Note where the 'l' goes, Larry.
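A back-of-the-envelope go at that exercise, using round numbers (all assumptions, not measurements: absorbed solar flux ~240 W/m², mean surface temperature ~288 K, humanity's total heat output ~19 TW spread over the Earth's ~5.1 x 10^14 m² of surface, i.e. ~0.04 W/m²): differentiating P = sigma*T^4 gives dT/T = (1/4)(dP/P), so dT = (288/4) x (0.04/240), which comes out around 0.01 K. Piffling, as advertised.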

that one in the corner Silver badge

Re: Good job there are other projects

> that look like they may actually be producing power in the 2030's

Got any third-party evaluations on that, or are we just relying on company white papers and websites for that claim?

Multiple projects chasing an as-yet unachieved goal are generally a Good Thing, but we'd best be guardedly sceptical about whether a company will be as willing as a project like ITER to admit when things are difficult.

that one in the corner Silver badge

Re: Bummer

> Or up

Just have your 'phone ready to Google "Why do my eyes hurt?" as reportedly practiced by so many earlier this year.[1]

[1] or "are these clouds ever going away?" in the UK

Switzerland to end 2024 with an analog FM broadcast-killing bang

that one in the corner Silver badge

Re: FM? Bah, humbug! DAB? bah, humbug²

> (chief) inspector Morse* religiously listened to the program (often the Sunday morning omnibus session) so I assume it must have had its merits apart from preventing (delaying?) nuclear conflict.

Back when Morse listened, "The Archers" still gave out useful information about farming matters, such as Peggy worrying about the cost to the nation if ramblers refused to obey the Foot and Mouth restrictions.

Recently, the most educational it has been is when "More Or Less" calculated the height of the manor roof by the length of Nigel's scream.

Amazon puts down its Astro robotic business watchdog

that one in the corner Silver badge

What is that robodog whining at now?

> I am a big believer in the long-term benefits robots will offer our customers, and advancements in generative AI make this only more exciting.

WTF is the connection between a (useful) robot and GenAI?

Your Roomba starts hallucinating piles of Dorito crumbs and wears out the carpet trying to pick them up?

Robodog leaks oil on the kitchen floor "in the style of Monet"?

Mechabunny is given a Genuine People Personality?

VMware license changes mean bare metal can make a comeback through 'devirtualization', says Gartner

that one in the corner Silver badge

You've missed the point, I'm afraid.

> pay this expensive recurring subscription, to this expensive software...

Of course they'll happily pay those subs. After they followed Gartner's other suggestion, they are no longer paying massive subs for VMware, which means the managers have to find another sinkhole for money or their budget will be cut and their department will no longer Be Important.

By buying into AR in the DC, managers can show the Board that they are following Best Practices and Planning For The Future as foretold by Gartner, so the funds will continue to be allocated and the managers will continue to buy Ferraris.

Japan's digital minister declares victory against floppy disks

that one in the corner Silver badge

Re: The next....

> super long-lasting paper for nuclear waste storage sites - which is supposed to last for thousands of years, and so Parliament were able to switch over to that.

Ah, yes - "supposed" to last, as opposed to having been *demonstrated* to last.

Still, clever move: if it all goes titsup the name of the SaA will have been lost (as the move to Miracle Paper will be about the first thing recorded on it) whilst if it works, his name will live on.

Cue dozens of politicians suggesting the move to Everlasto, NevaFade et al in hopes of achieving at least a minor place in history (or, ahem, not having the faintest idea why the formal record of their failed laws has seemingly vanished; stroke of luck that LemonJusInko faded like that)

that one in the corner Silver badge

They are going to run those "crafting experience" courses, where they go into community centres to let everyone have a taste of old, forgotten practices.[1]

These sorts of things are apparently also popular with hen parties and company days out (got to be better than paintball, at least nobody is likely to all gang up on That Prat by viciously saving files at him, slowly).

[1] In a couple of years' time they will be offering the chance to "make your own hard drive" by putting PrittStick onto a plate and shaking iron filings on top...

Figma pulls AI design tool for seemingly plagiarizing Apple's Weather app

that one in the corner Silver badge

How else?

> speculate that Figma had used existing app designs to train the service.

Well, yes, how else is even an app-generating-specific ML model going to be trained? Feed it real apps & a carrot, not-an-app & a stick. And, if you don't think clearly, also feed it the "approval ratings" from the reviews[1] - what else is the model going to end up doing?

Despite how much you may feel overwhelmed by the contents of an app store, there are rather fewer apps released into the world than pieces of music, chunks of text, photographs, paintings - probably even statues!

So even the largest pre-trained generic LLM has a small pool of apps to choose from.

[1] Apple's apps are going to be reviewed better than anyone else's, no matter what, just after a product reveal.

Antitrust cops cry foul over Meta's pay-or-consent ultimatum to Europeans

that one in the corner Silver badge

Give Users the unthinkable option

> personal data would still be needed and used for things like determining what reels show up in a user's feed.

They literally can not think of the obvious, can they?

Do not gather any personal data and don't bother about the User's "feed" at all.

Provide a decent search option and let the Users make their own choices. You can probably even manage to generate a "people who watched that went on to watch this" without using personal data.
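A minimal sketch of that last idea, in C (toy item IDs and made-up anonymous sessions, purely for illustration - no accounts or profiles anywhere in sight):

#include <stdio.h>

#define NUM_ITEMS 4

/* Co-view counts: cooc[a][b] = how often items a and b were watched
 * in the same anonymous session. No user identity required. */
static unsigned cooc[NUM_ITEMS][NUM_ITEMS];

static void record_session(const int *items, int n)
{
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            if (i != j)
                cooc[items[i]][items[j]]++;
}

/* "People who watched `item` went on to watch..." */
static int recommend(int item)
{
    int best = -1;
    unsigned best_count = 0;
    for (int other = 0; other < NUM_ITEMS; other++)
        if (other != item && cooc[item][other] > best_count) {
            best = other;
            best_count = cooc[item][other];
        }
    return best;
}

int main(void)
{
    int s1[] = {0, 1}, s2[] = {0, 1, 2}, s3[] = {1, 3};
    record_session(s1, 2);
    record_session(s2, 3);
    record_session(s3, 2);
    printf("people who watched 0 went on to watch %d\n", recommend(0));
    return 0;
}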

I know it is dealing with the devil, but I do have a Facebook account. And see no value at all in going to "my" FB page, or the "front page" of YouTube or any of the vaguely social websites (I do not even look at the front page of The Register![1]). Instead, I always search or go straight to the page of whatever group/person I'm interested in at the moment. Guess what: by ignoring their "feed" it is as pleasant as it is ever going to get.[2]

[1] https://www.theregister.com/Week/ FTW

[2] FB has still got a crap interface, having to select "all comments" to see the actual conversations. every. bleeping. time.

What do CTOs hate most about GenAI? Tool changes that break stuff

that one in the corner Silver badge

The answer is in the question

> Which vendor – if any – should they choose as the main plank to support GenAI application development?

There it is, did you spot it? "If any".

Don't do it. Just walk away from slapping GenAI onto things.

If you really, *really*, have a good Use Case for GenAI in your business, the normal (hah!) Business Analysis (ho, ho) and Technical Analysis (oh, stop, my sides, gasp) will generate the Requirements Spec (wheeze) which simply gets feature matched against vendor products and out drops the answer (ah, goodness me; sorry, no, no, just buy the one with the shiniest four-colour glossy on drool-proof paper; yes, that's it, the SOP).

Indonesian government didn't have backups of ransomwared data, because DR was only an option

that one in the corner Silver badge

Double-edged sword of Damocles

> credited the severity of the attack to the unification of institutions and ministry data

You bring everything together, which can[1] reduce duplicates, cutting maintenance costs and ensuring consistency across departments; allow more cross-referencing and make novel searches possible; break open data silos and expose data hoarding (of all types). So, a Good Thing.

But it also exposes you to attacks where a single entry point can disrupt everything; where one poisoned entry appears in all departments' reports; where one bad DR plan (and "no plan" is a bad plan!) ruins it for everyone. So, a Bad Thing.

What, then, is the better arrangement for practical purposes?

Yes, yes, having a proper DR plan (and keeping it active and tested) and perfect security (active & tested) *is* the best arrangement, but do at least imagine that there can be flaws, even in something *you* set up.

[1] note "can", not saying they did in this case - not enough info

'Skeleton Key' attack unlocks the worst of AI, says Microsoft

that one in the corner Silver badge

Re: All this proves is that ...

> related to what Gordon inside our heads

Oops.

related to what goes on inside our heads

One thing that doesn't go on inside *this* head is keeping track of what auto-bleeping-correct is doing. Gordon Bennett, these things are infuriating.

that one in the corner Silver badge

Re: All this proves is that ...

> There is NO intelligence at all in AI.

Grr. AI is an entire field of research. Which looks at all sorts of things around what we deem to be intelligent behaviour. And which, if it hasn't yet, still strives to understand and replicate both intelligence and said intelligent behaviour.

You are talking about LLMs, one corner of that field.

Just because the Daily Mail can't tell the two apart doesn't mean we can't do better here.

If not, and the overwhelming opinion here is that we should use "AI" in the same way as the vulgar masses do, then I shall take that as carte blanche to misuse every other tech term in the same way the ignorati do. Which will probably wear out the "h", "a", "c", "k", "e" and "r" keys but I am willing to take that risk.

that one in the corner Silver badge

Re: All this proves is that ...

> The tech behind LLMs was inspired by the brain's neural network.

Inspired by how we thought we might sort-of simulate neurons using simple techniques. The application of weightings was (is) a simplification of what was believed to be how neurons function back in the 1950s. Our understanding of how neurons mechanistically function has changed since then (apparently, it is a bit more complicated than multiplying big matrices of weights, involving icky chemicals that can be modified by all sorts of other chemicals in a soup bowl) and most of what has been done with the computers is to make them bigger and bigger (and reduce precision of the numbers used in the models, not because we've learnt that is how Nature does...).

In other words, unless you have some very good citations to back it up, don't go around thinking that what is going on inside an LLM is in any way related to what goes on inside our heads - and most certainly not comparing it to *all* that goes on inside there (we do a lot more than just faffing around with how to arrange letters and word tokens).
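(For the curious, that 1950s-style "neuron" is nothing more exotic than the sketch below, in C, with weights made up purely for illustration:

#include <stdio.h>

/* A 1950s-flavour artificial "neuron": multiply inputs by weights,
 * sum, add a bias, threshold. No icky chemicals involved. */
static double neuron(const double *inputs, const double *weights,
                     double bias, int n)
{
    double sum = bias;
    for (int i = 0; i < n; i++)
        sum += inputs[i] * weights[i];
    return sum > 0.0 ? 1.0 : 0.0;  /* fire / don't fire */
}

int main(void)
{
    double in[3] = {1.0, 0.5, -1.0};
    double w[3]  = {0.4, 0.6, 0.2};  /* illustrative weights */
    printf("output: %g\n", neuron(in, w, -0.3, 3));
    return 0;
}

Everything since has mostly been stacking lots of those and making the matrices bigger.)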

> What I think the touchiness is about is that humans don't like the idea that a large part of their brain is an automaton

Whacking great chunks of our brains - well, including our entire central nervous system from scalp to toes - have all sorts of levels of automatic behaviour, from near-as-damnit literal automata (bonk that bit with a little rubber hammer and watch the muscles twitch) to unconscious feedback (e.g. touch preventing overcorrection of hand motions) to semi-conscious (e.g. sight causing overcorrection of hand movements - you can consciously observe it, just try consciously taking control of it) to - well, I'd love to say "conscious" but there is so much evidence that our brains do things, like deciding to move hands to type, before our conscious part realises it and just says "of course, I meant to do that all along".

I would like to think that El Reg commentards are well-enough aware of at least *some* of these aspects of our mushy internal goings-on that they are not frothing at the mouth any time someone tries to say that bits of us work on automatic.

PS

LLMs are not intelligent.

PPS

LLMs are not the be-all and end-all of AI; there are other areas AI covers than just this one application of one of its techniques.

Payoff from AI projects is 'dismal', biz leaders complain

that one in the corner Silver badge

> I couldn't understand how you could possibly melt small pieces of steak

Ye gods, just how hot was the oil, to melt steak?

CISA looked at C/C++ projects and found a lot of C/C++ code. Wanna redo any of it in Rust?

that one in the corner Silver badge

> You conveniently ignored the possibility of improving the existing languages

You keep ignoring the fact that if we "improve the existing languages" the way you seem to want[1], we will just end up with - one of the other languages that already exist, the one you've just copied the features from. And we already have that.

C *has* been improved (more than once); one of the winners in that competition was C++; and we still have C as well (even if it did nick some bits from C++, causing older code to not compile with that version of C, requiring rewrites). If we improve upon C++ then we will have yet another programming language - and we will still have C++.

Starting to think that you have a magical idea that C and C++ compilers can be "improved" (in ways you can not explain, as you have rejected the extant facilities) and all the old code recompiled without being rewritten and *poof* all their problems will be gone.

If this is *not* an accurate description of what you believe is possible, PLEASE explain what it is you think can actually be done. Give us a concrete example of how you believe C and/or C++ can be improved which won't just end with us pointing out that all you want is to use a.n.other language in the first place.

[1] well, also ignoring that the things you want do already exist, but somehow not in a way that is good enough for you, for some unexplained reason.

that one in the corner Silver badge

Re: The tools are wrong.

(about to make an assumption about continuity of AC here, but it seems a safe one)

>>> We could just copy the Java/CSharp String APIs into C

>>> C++ could have a memory safe/garbage collected mode. Like Managed C++ did

>> C++ does have a memory safe mode. Most of it predates and is almost identical to Rust in design

> Wow... not starting well

Look, if you don't like Rust either and think we should all just be using Managed C++ and C# (and the other .Net languages as well? F#?), just say so.

that one in the corner Silver badge

Re: Devs Don't Exist in a Vacuum

> Corporate devs frequently do not have their choice of software "tools".

Indeed; been there, done that, got the technical debt.

> So don't go indiscriminately slap-happy on the devs over this issue!

>> go give them a stern talking to (actually, best to ask first why they aren't using a better library, and then give them a stern talking to).

Hmm, if the "why" is "because Corporate is making us" then the last "them" in the parentheses would be Corporate. Not sure how I could have worded it to make that more explicit without causing that parathentical to blossom out of control and overtake the rest of the comment.

Still, I did at least only recommend a stern talking to, rather than doing anything dreadful to their nadgers. Which is on my mind as the only proper response to the AC who seems to think it is possible to magically change C into Rust/Java/C# and still have it be C - or something.

that one in the corner Silver badge

Re: The tools are wrong.

> People like you are exactly why C and C++ will never get fixed.

Hmm, your point about C was:

>>> String handling in C is garbage, unless you are working with 16k of RAM

My response was:

>> alternate string handling libraries exist in C

I.e. the string handling in C *is* fixed. More than once. If you don't trust yourself to not use the "bad" stuff, just - delete it from your copy of the tools!

> It's perfect the way it is.

Never said that. Did say you have to put some effort in and look for the correct solution (e.g. string representation) to suit your current requirements.

Beginning to think *your* idea of "perfect" means "I don't have to bother trying to do a good job, the tooling will do it all for me" - but no matter what tools do, you have to do your bit. "Rust is crap - it didn't stop me coding Bubble Sort and applying it to 200,000 records."

Make enough "fixes" to C or C++ to satisfy you and it won't be C or C++ any more - and won't be called that any more (not without being ridiculed). So you just want to go off and use those other languages, whatever they happen to be. If you don't trust yourself with C or C++, you should just not use them. If you don't trust anyone else with C or C++, you should just not use their code.

that one in the corner Silver badge

> Dismissing every argument about improving the tools with this logic simply guarantees that our tools, whether they're good or bad, will not improve.

True enough.

So it is a very Good Thing that software has never lacked, and will never lack, for people working on improvements to its tools[1].

Of course, the corollary to these improving tools is that we will therefore always have a pile of stuff created right up until the day before the new tool appeared[2].

> It may also be that he who blames his tools has bad tools

One day, there will be an article about how we must all stop using Rust because it just does not help with the errors that are causing so many systems to go wrong, errors that the Verdigris compiler catches automatically. A government study will demand to know how this parlous state of affairs was allowed to happen. The comments will be full of people praising Verdigris and complaining that it was always obvious that Rust wasn't good enough and why did we ever use such a rubbish tool; which complaints will be picked apart, line by line, by other commentards.

[1] some days the temptation to improve the tools even gets in the way of actually, you know, using them!

[2] and as TFA talks about the problems that causes (should we rewrite it? Now? Or tomorrow? Or will that make it worse?), maybe we *should* have a moratorium on creating the better tools? (Tongue firmly in cheek, ahem).

that one in the corner Silver badge

Re: The tools are wrong.

> The tools are wrong

Are you sure you have actually gone out and got yourself a proper set of tools for the job? Or are you expecting somebody else to do that for you?

> String handling in C is garbage, unless you are working with 16k of RAM.

String handling in C is entirely down to which library *you* decide to use - with the sole exception that statically declared strings are defined to be simple null-terminated character arrays[1] (the compiler has to do *something* if you insist on putting static strings into your code).

Yes, every implementation does provide you with, at a minimum, the really simple (dare I say it, simplistic) set of routines in the C standard library (which were originally written when 16KWords of RAM was all you got, on a very good day).

But you do *not* have to use those oldie-stylie routines - and barring a few edge cases (i.e. printing out the word "ABEND") you don't *need* any compiler-generated null-terminated "unsafe" strings either (just load 'em from files with "My Best String Lib", which also helps you change 'em for I18N).
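To illustrate that it really is just "pick a library" (or roll a toy one), here is an entirely hypothetical counted-string type in C - a sketch, not a recommendation of any particular library:

#include <stdlib.h>
#include <string.h>
#include <stdio.h>

/* A toy length-prefixed string: no null terminator needed, no
 * buffer-overrun guessing games with strcpy/strcat. */
typedef struct {
    size_t len;
    char  *data;
} MyStr;

static MyStr mystr_from(const char *src, size_t n)
{
    MyStr s = { n, malloc(n) };
    if (s.data)
        memcpy(s.data, src, n);
    else
        s.len = 0;
    return s;
}

static MyStr mystr_concat(MyStr a, MyStr b)
{
    MyStr s = { a.len + b.len, malloc(a.len + b.len) };
    if (s.data) {
        memcpy(s.data, a.data, a.len);
        memcpy(s.data + a.len, b.data, b.len);
    } else {
        s.len = 0;
    }
    return s;
}

int main(void)
{
    MyStr hello = mystr_from("Hello, ", 7);
    MyStr world = mystr_from("commentards", 11);
    MyStr both  = mystr_concat(hello, world);
    printf("%.*s\n", (int)both.len, both.data);
    free(hello.data); free(world.data); free(both.data);
    return 0;
}

Twenty minutes' work and the compiler never generated a null terminator in sight; real libraries do this properly, with growth, slices and all the trimmings.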

> We could just copy the Java/CSharp String APIs into C

Hmm, not sure I'm loving the idea of copying those *specific* APIs[2] but alternate string handling libraries exist in C (I'm not going to recommend a specific one, I don't know your requirements - and see [2])

> but they refuse to

Who is "they" here? Presumably, you mean the devs you work with, because the selection of libraries is up to them (or whoever dictates to them). So, go give them a stern talking to.

If "they" are some other set of devs then - go give them a stern talking to (actually, best to ask first why they aren't using a better library, and then give them a stern talking to).

If by "they" you mean the C compiler writers - well, there are lots of libraries they don't supply, that isn't really their job (and also see [2]) [3].

Bottom line: as others have said, it is possible to do The Best Thing w.r.t. memory (even strings!) in C and C++; it is entirely down to the devs to learn how and bother to do it. Or admit they are not a good fit for that project and go work on something else. If you are affected by people not doing that - give them a stern talking to.

> C++ could have a memory safe/garbage collected mode. Like Managed C++ did. But they refuse to.

They refuse to because it would break the tenets of C++ (and the existing code written in C++). As you say, there are things that do GC, and you are at liberty to use those languages.

[1] Which is also the case in C++ (the only reason C++ "has better strings than plain old C" is because you are using libraries that provide them; yes, more of these libs are provided "by default" with your C++ compiler, which makes them easier to find, but they are still just a set of libs and you are perfectly able to use a different set if they are a better fit to your needs)

[2] in large part because to my mind they are just as incomplete as the old-fashioned C standard library, because there is no such thing as *one* string representation that is fit for all purposes yet, IIRC, both the Java and C# APIs promote just one representation each. As it appears does every other "standard library" (but I shall start to get ranty here, so shall stop).

[3] Although C++ has gone down the prescriptive "we have done it all for you" route, when STL - a neat idea - metastasised into "Boost *is* standard C++" and "modern C++", which is still not a panacea.

Microsoft CEO of AI: Your online content is 'freeware' fodder for training models

that one in the corner Silver badge

Re: GIGO is a warning

> Stolen content will produce shoddy algorithms.

The algorithms are the same whether the content (used as input to the algorithms) is good, bad, stolen, paid for, factual, twaddle, Wikipedia, Reddit, in English, in Turkish or even in American English.

The outputs from the algorithms will be affected by the quality of the inputs - but even then, "stolen" is not an attribute that affects the actual content. Stealing from random public web pages will get you different results versus stealing from behind the paywalls of Nature, CACM etc.[1]

Stealing the content is definitely the wrong thing to do, but not for that reason (which is a shame, otherwise the perps would have seen that their models functioned better when fed only legit materials and this whole issue would simply go away)

[1] different, but neither is better (more fit for purpose) - not when an LLM put into public use is (probably) used more often to write yet more random blog pages than it is submissions to Nature. Although if more blogs started with an abstract, a decent methods section...

that one in the corner Silver badge

Re: Copyrights as a structural obstacle

> Just like they need to learn Blender and Photoshop. Otherwise you can argue they should go back to ink brushes and paper.

What? What on Earth do you think is wrong with ink, brushes and paper? Do you also think they have given up oils, acrylics, or even just card and scissors? Or plasticine, papier mache, glue, wool, cotton, wood, copper or steel?

No, artists do not "need" to learn Photoshop - and they certainly don't "need" to learn Blender! Not unless they want to, to get the effects they are after.

Probably graphic designers will find it easier to get work if they can use Blender and Photoshop, because an awful lot of their end results will be expected to be in digital form.

But even delivering as a digital file can be done by the amazingly cunning method of photography.

Apple crippled watchOS to corner heart-tracking market, doctors say

that one in the corner Silver badge

Re: So basically...

> is the original "full feed" still available, but only to Apple's own monitoring app?

There is an argument to be made that that is the case: from TFA, that feed is now going into Apple's own processing, that Neural Net, HRNN (and presumably from that into IRN). Those are clearly doing a lot of processing for Apple, in service of its own app, and under Apple's control; anything built on top will just be a variant GUI and underneath will be 99% of "Apple's heart app".

So the Apple app effectively has the full feed, as Apple is the one who can update its app by updating the HRNN. But of course that isn't a monopoly on the data, anyone can get the output from HRNN and - only replicate Apple's app's functionality perfectly.

But many (most? all?) non-trivial apps are shipped as a (pile of) libraries with just a simple exe to glue them together; just documenting the API to a couple of them does not change the fact that they are nothing more than the guts of this specific app from Apple.

But this line of argument is probably going to be contentious in this comment section; good luck getting it past a judge with Apple's lawyers arguing that just having a variant GUI is clearly enough to say you have an app that competes against Apple's.

Polyfill.io owner punches back at 'malicious defamation' amid domain shutdown

that one in the corner Silver badge

Re: XEETS??

> This will not stand

Sure it will, a really good Xeet will stand proud, especially if it touched a sore point.

that one in the corner Silver badge

Re: XEETS??

> When I see Xeets my first mental image (censored)

Good, that is the perfect image to apply to messages sent via a certain platform.

I also champion the use of "Xits", in both cases with the X pronounced in English as a Z, as in Xavier.

And with either spelling, remember that you don't "post" a Xeet, you "pop it up" or "squeeze out a quick Xit", that celebrity got covered in Xeets, his Xits are really inflamed etc.

A friendly guide to local AI image gen with Stable Diffusion and Automatic1111

that one in the corner Silver badge

I was trying to figure out what you thought the problem was, until I suddenly spotted it: that chair in the right-hand foreground only has four casters, so no way is it going to be stable!

Apart from that, it looks like a pretty normal meeting, at least in the marketing department.

How many Microsoft missteps were forks that were just a bit of fun?

that one in the corner Silver badge

Re: Don't mention Visual Source Safe

"No, is not VSS, is a Siberian RCS"

that one in the corner Silver badge

Re: Now we know why

> One man's "wacky" is another man's "revolutionary Idea which hits the mark with its intended audience".

In the original Skunk Works, from the description of the P38:

>> Secretly, a number of advanced features were being incorporated into the new fighter including a significant structural revolution in which the aluminum skin of the aircraft was joggled, fitted and flush-riveted, a design innovation not called for in the army's specification but one that would yield less aerodynamic drag and give greater strength with lower mass.

Not in the contract? Not been used before? Weeellll, ok, it worked, we'll buy a few more.

that one in the corner Silver badge

Re: Now we know why

> Maybe your definition of wacky is "not officially endorsed"?

Try "that'll never work" or "what a waste, nobody needs that" or "you can't do that without confusing the user"...

You are looking back on things that *did* work, and which you've become used to, and saying "well, that is just normal".

Even TweakUi got (a few) negative responses (from clearly demented people) about making your PC nonstandard ("how can you stand working like this?" is something I've heard a few times over the years)

SUSE Linux Enterprise 15 to receive support right up to end of Unix epoch

that one in the corner Silver badge

Re: Unix epoch

You need to put your quote marks in the right place: try

"end of Unix" epoch[1]

instead of "end of Unix epoch"

to go alongside the "start of Unix" epoch[1]

[1] really "start of Unix time" epoch and "end of Unix time" epoch[2]

[2] for 32-bit Unix, as opposed to "end of 64-bit Unix" epoch or even "end of 128-bit Unix" epoch; as 32-bit has been the norm since the "start of Unix" epoch it does not need to be given any other specialisation in its name, unlike the (much newer) 64-bit time variant.
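For anyone wanting to see the "end of 32-bit Unix time" epoch for themselves, a quick sketch in C (assuming a host whose time handling copes with the date, which any remotely modern one will):

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* The last second representable in a signed 32-bit time_t. */
    time_t doom = 2147483647;  /* 2^31 - 1 */
    struct tm *utc = gmtime(&doom);
    char buf[64];
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", utc);
    printf("end of 32-bit Unix time: %s\n", buf);
    return 0;
}

Which prints 2038-01-19 03:14:07 UTC - so SUSE's support window squeaks in just before the fireworks.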

AI query optimization in IBM's Db2 shows you can teach a tech dinosaur new tricks

that one in the corner Silver badge

Re: DB2 has included "DB2" since 1983!

> Much more "rule of thumb based" ... . It will prolly work nicely on standard scenarios and will prolly fail badly on special, extreme cases.

Yes, that is a description of a heuristic; any heuristic, old or new.

that one in the corner Silver badge

Re: DB2 has included "DB2" since 1983!

> No sane software engineer would grant an AI "full authority query plan transformation". AIs are "mostly correct", not "perfect".

Depends upon:

1) What his manager tells him to do

2) What marketing & sales told his manager to do

3) Whether the software engineer actually deeply understands what he is doing in terms of the maths[1]

4) Whether he actually gives a shit or is just in it for the pay packet this month[2]

> AIs are "mostly correct", not "perfect".

Grr. LLMs are vaguely correct. LLMs are not all of AI (this hill also has a lovely freshwater stream, you really should try it)

But you are correct insofar as no sane, properly qualified to redesign and change deep behaviour of an SQL engine, isn't just trusting his system architect, respects his users, is working for the long-term stability of the product, isn't doing a "feature" that could well be quietly deprecated (excised) under cover of the next round of flashy marketing ideas, software engineer would let an LLM have full authority over the transforms.

[1] not - necessarily - a slight against the softie; many, many devs will call a library that they don't understand the maths of, and don't *need* to understand (I have seen people call zlib and libjpeg without knowing about Huffman or DCT; I know, it is a crazy world); and wild things have happened because, to their level of understanding, it seemed quite reasonable to join those two together, they'd been using them for ages without any problems...

[2] yes, I do mean to slight that sort; live with it (having seen that expressed more than once over the last few days)

that one in the corner Silver badge

Re: Other Approaches

> +Genetic Mutation

A technique firmly rooted in the AI labs (note: AI, not LLM, labs).

that one in the corner Silver badge

Re: Elaborate

> According to your terminology, an optimizing compiler also includes "AI" ?

Yes (see my other comments), but "If it works, it isn't AI" in the common man's eye.

that one in the corner Silver badge

Re: Query Plan Optimization Details

> SQL database are on par with the latest jet engines, 3nm semiconductors, metal 3D printing, gas chromatography !

Absolutely.

So should be treated with respect, not have random flavour-of-the-month "GenAI" anywhere near them.

> They continue to be improved by scientists and engineers

Who carefully examine and test and mathematically prove what is going on inside. Including the techniques they've acquired from the AI research going back to the 50's (as they have techniques they are using from any other field of study).

When the current crop of LLM-based systems reach the same level of provability (INCLUDING, but not limited to, self-explanatory capabilities), *then* they should be applied to SQL.

But by that point there won't be anything "exciting" for the IBM marketing team to use in their press releases, so The Register will just have to report "Db2 version 99 & 3/4 is now 7% faster, again, just like it was last year, snore".

that one in the corner Silver badge

Re: DB2 has included "DB2" since 1983!

> I don't see much justification for referring to them as "AI", even as that term was used in the era when CBOs were first being developed and deployed.

Sigh, it was ever thus: "If it works, it isn't AI"

The learning techniques were all derived as AI research and still deserve to be referred to as such (this is a very nice hill, with meadow flowers - come, join me).

that one in the corner Silver badge

Re: Regarding "Dinosaur"

You do know that the dinosaurs were *incredibly* successful and totally dominated the Earth for, what, 160 plus *MILLION* years?

And that during that time, individual species and offshoots came and went, whilst the (literally) time-tested core structure of the beasties continued to thunder across the landscape.

Although the big and slow were killed off - and only after a truly devastating blow that destroyed all of the top-level creatures - dinosaurs are still all around us, flitting through the skies and waking us up far too early in the mornings.

SQL ain't going anywhere - there is more SQL flying around now than there ever has been in the past, flitting through our 'phones and waking us up far too early in the mornings.

Any tech can only *hope* to be so dinosaur (idea for a new tee-shirt: "Be More Dinosaur")

that one in the corner Silver badge

Re: Any True DB/2 Experts Around ?

> 1.) Query Plan Optimization is a Hard Problem.

Very true. Optimisations are, on the whole, hard problems: just ask any Travelling Salesman.

Which is a damn good reason for being careful about what methods you apply to solve them.

> 2.) For many queries it would be too runtime-expensive to evaluate all possible solutions; heuristics are used.

Yup, such as (in common with other large search space problems) setting up fascinating search structures and pruning them, much (most? all?) of which can be traced back to the AI labs. You know, there was a time when any discussion of heuristics meant you were in a discussion of the AI scene[1][2] - what a shame those two letters are being relegated to just the one thing (yeah, yeah, fighting a losing battle here). Heuristics seen as part of the day-to-day? As noted before "if it works, it isn't AI anymore" - anyway, as you note, heuristics are already used within plan optimisers (just as they are within other code optimisers).

So from that p.o.v. "applying AI (techniques) to the plan optimisation problem" is - in computer terms - a very old strategy; hardly worth mentioning any more, in fact.
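(To make "pruning the search" concrete, a toy branch-and-bound sketch in C - numbers entirely made up, and nothing whatsoever to do with Db2's actual planner:

#include <stdio.h>

/* Toy branch-and-bound: find the cheapest subset of items whose
 * value sums to at least TARGET, pruning any branch that already
 * costs more than the best complete answer seen so far - a trick
 * straight out of the AI labs' search playbook. */
#define N 5
#define TARGET 10

static const int value[N] = {4, 7, 2, 9, 5};
static const int cost[N]  = {3, 6, 1, 8, 4};
static int best_cost = 1 << 30;

static void search(int i, int val, int cst)
{
    if (cst >= best_cost)   /* prune: can't beat the incumbent */
        return;
    if (val >= TARGET) {    /* feasible leaf: record improvement */
        best_cost = cst;
        return;
    }
    if (i == N)
        return;
    search(i + 1, val + value[i], cst + cost[i]); /* take item i */
    search(i + 1, val, cst);                      /* skip item i */
}

int main(void)
{
    search(0, 0, 0);
    printf("cheapest cost reaching value %d: %d\n", TARGET, best_cost);
    return 0;
}

Same shape as a plan optimiser's search, just several orders of magnitude smaller.)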

> 3.) Maybe AI can add "novel heuristics".

Looking for novel heuristics is a Good Thing - and doing that, plus novel ways of finding novel heuristics - is a good AI topic.

Just - is it the topic that the LLMs[3] are really suited for? Two immediate concerns arise: first, the resource costs[4] to run these beasts.

Second, the (by now) old issue of LLMs happily generating good looking nonsense: that is certainly novel, it also happens to be total twaddle - and applying it within the plan optimiser is rather worse (IMO) than simply (as I half-seriously described it above) applying it to generate junk SQL, from the p.o.v. of your poor beleaguered human DBAs and devs spotting what is going on. Would you really want to trust a heuristic so novel that it was only created in the last few seconds, hasn't been tested out on anything at all yet, may not even ever be run on another query (the "AI" deciding to do something different next time)? Remember, this isn't an AI being run in a lab as part of a concerted research effort to find new heuristics and prove their worth (and limitations), it is something running around doing random things within *your* expensive database.

> 4.) Any query plan, runtime-efficient or not, will produce the same result. AI is not messing with results.

Presumably you mean, any query plan generated from the same input SQL? In which case, *ONLY* if you can demonstrate that the transforms performed are valid in the way they are being performed - and if you are applying some random output from an LLM as one of your heuristics, that is not going to be the case (see above). It may be a sane thing to do, it may not. Prove it. Remember that heuristics include ideas such as "we can prune this from the search because we've seen X" - if the LLM has decided to generate the novel idea that "we can prune because we've seen Y" instead, and it turns out the X case came from a boring, long, maths-heavy proof back in the 1990s and the Y case just sorta looks like it is following the same pattern...

> 5.) Adding further indices to a DB schema will quickly be a double-edged thing, as index maintenance will also consume serious runtime.

True; picking your indices is almost an art form :-) Not quite sure how that fits into this discussion; are you suggesting that the AI is likely to just keep on adding new indices, because that is a common pattern it has picked up from its training data? Which would be a bit of a reversal, as your previous points seem to be more pro-AI. Dunno; clarification required.

[1] when the idea of applying any "rule of thumb" instead of being rigorous & provable & correct in all your outputs was a bit avant garde - but then, as now, AI trials had a bit of an issue with generating huge (for the time) data and huge searches.

[2] rats - tried G**gling for some references to bolster my memory but half-remembered terms got auto-corrected to hit sponsored links for - nothing I wanted :-(

[3] yes, I know they weren't specifically stated to be LLMs - but (1) the mention of "GenAI" is a pretty big hint and (2) what else is being touted around at the moment? If it really was something else then we'd have been buried under a pile of IBMese marketing speak stressing how much it wasn't just the same as everyone else's

[4] and I'd bet there is someone in IBM following the Sunk Costs argument of "we've built the damn thing, now we have to use it absolutely everywhere"

that one in the corner Silver badge

I query AI query optimisation

Wild guess: Db2[1] already has an optimising query planner for when it turns your SQL (sorry, "Big SQL") into actual executable operations on the database. So if you are in the habit of just writing inefficient / inelegant SQL that can be logically / mathematically reduced and refactored for better performance (much the same way an optimising compiler performs sensible transforms on your code) then it is already doing that.

> "allow Db2 to continuously learn from customer's queries

So to optimise your queries the AI is going to - what? Decide that you didn't *really* mean to write that bit of SQL, surely you would be happier running this instead; after all, it runs so much faster and generates a much smaller set of results, so much easier for you to read!

> "infused with GenAI"

Even better, we've optimised away actually running any of your SQL, now we just *say* we ran it and use GenAI to create a table that looks like it could be right, sort of.

[1] 1983? Really? Gosh, that means that old job was pretty much an Early Adopter! And I never did get around to finishing reading that wall of binders full of documentation.

OpenAI, Google ink deals to augment AI efforts with news – it was Time for better sources

that one in the corner Silver badge

I'm getting those Good Citations

> Providing proper attribution to original sources

Sources? Plural?

> featuring a citation and link back to the original source on Time.com.

So Time is (presumably) being paid for this access (so that's alright then) *and* they effectively get advertising, which basically credits it as the only source of information In The World. Meanwhile, all the sources that were scraped for years beforehand are still ignored.

A cunning ploy to get other sources to give in[1] and allow their materials to be scraped again, just to even out the citations playing field[3]? Maybe even get a few to pay for the "privilege".

In the meantime: hmm, wonder if an LLM will spot that there is a pattern to the way that approved citations are listed and then start applying it whenever it is prompted to give citations: "Put glue on your Pizza (Time, April 37th 1876)".

[1] "you know your material was scraped[2] so it'll be spat back out in one form or another; wouldn't you prefer it be shown as an accurate quote from our database, with citations, or just munged in with everything else?"

[2] even though we won't admit it publicly and you can't prove it strongly enough to win

[3] and to get a piece of paper saying they let us in, so nyaah to all your sueballs ("No, we never scraped a copy of Harry Potter, we just assembled a complete copy of the plot and all the characters by piecing together the details from the World-Class reporting of the books and movies that Time is so well known for")

Elon Musk to destroy the International Space Station – with NASA's approval, for a fee

that one in the corner Silver badge

Re: Language

Deorbit? We can do better than that!

Disenorbitize

Windows: Insecure by design

that one in the corner Silver badge

Oh dear, "wacky ideas"

> how you could steal data from your coworker's spreadsheets using Object Linking and Embedding (OLE)

Perhaps that one should have been left forked until it was really well-cooked enough to be spooned out to us.