Vector search is the new black for enterprise databases

About two years ago, popular cache database Redis was among a wave of vendors that added vector search capabilities to their platforms, driven by the surge of interest in generative AI. Vector embeddings are produced by foundation models such as those behind OpenAI's ChatGPT and are used to represent chunks of language – like words or …

  1. Anonymous Coward

    "Caching ... semantically similar queries."

    I do imagine the phrase semantically similar is doing some unusually heavy lifting in this context.

    How the semantics ("meaning") are assigned to a query (which is essentially a well-formed string of symbols conforming to some syntax) is an interesting question, even leaving pragmatics aside.

    I suspect the semantics of a tokenized query is defined as the output from the LLM that the query elicits - a cyclic or self-referential definition which I imagine presumes a least fixed point?

    The degree of semantic closeness of two queries needed for caching purposes precludes using the full output of the target LLM, so I assume some simpler (minded) metric is used, possibly depending on a very much smaller model trained for that purpose.
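    [The "simpler metric" guessed at above is, in practice, usually cosine similarity over embedding vectors: cache each query's embedding alongside its answer, and serve the cached answer when a new query's embedding lands within some threshold. A minimal, purely illustrative sketch in Python – the bag-of-words `embed` here is a stand-in for a real embedding model, and the 0.8 threshold is arbitrary:]

    ```python
    import math
    from collections import Counter

    def embed(text):
        # Stand-in for a real embedding model: lowercase bag-of-words counts.
        # A production semantic cache would call a neural embedder here.
        return Counter(text.lower().split())

    def cosine(a, b):
        # Cosine similarity between two sparse vectors (dicts of token -> count).
        dot = sum(v * b.get(t, 0) for t, v in a.items())
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    class SemanticCache:
        """Return a cached answer when a new query is 'close enough' to an old one."""
        def __init__(self, threshold=0.8):
            self.threshold = threshold  # arbitrary cut-off for "semantically similar"
            self.entries = []           # list of (embedding, cached answer)

        def put(self, query, answer):
            self.entries.append((embed(query), answer))

        def get(self, query):
            q = embed(query)
            scored = [(cosine(q, emb), ans) for emb, ans in self.entries]
            if scored:
                score, ans = max(scored)
                if score >= self.threshold:
                    return ans  # cache hit: skip the expensive LLM call
            return None         # cache miss: caller asks the LLM, then put()s the result

    cache = SemanticCache()
    cache.put("how do I restart redis", "Run redis-cli shutdown, then restart the service.")
    print(cache.get("how do i restart Redis"))    # near-duplicate query -> cached answer
    print(cache.get("what is a vector database")) # unrelated query -> None
    ```

    [Which neatly illustrates the commenter's worry: "semantically similar" bottoms out in whatever distance metric and threshold the vendor picked, not in the LLM's actual behaviour on the two queries.]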

    I sometimes wonder whether the current AI/LLM mania doesn't fall foul of the first two commandments, not that the world wasn't already awash with venal false gods.

    † Thou shalt have no other gods before me. Thou shalt not make unto thee any graven image... Thou shalt not bow down thyself to them, nor serve them. (KJV Exodus 20:3-5)

  2. teknopaul

    LLM for mangle ment twaddle

    Has anyone invented a management twaddle LLM yet?

    What I'm looking for is something that can automatically answer

    Have you finished yet?

    With

    "Coding phase is close to termination. We started the metrics gathering to assess completeness and help triage the requirements signoffs. Obviously we need to prioritise reliability and redundancy technical workflows...."

    And about 500 words more.

    Different each time they ask.

    I don't have time to write that shit.

    Input from me being simply: 1 or 0

    1. thames Silver badge

      Re: LLM for mangle ment twaddle

      What is really needed to make AI useful and able to increase productivity is an LLM that will attend meetings and write status reports. Then another LLM can read the status reports and use this input to issue emails to tell the other LLMs to work harder. These emails can then be read by still more LLMs which also integrate input from image recognition cameras focused on motivational posters. This output from these LLMs is then fed back into the meetings to close the loop. This will automate the entire meeting / status report business function, greatly increasing productivity and business profitability.

      This is the future, I can see it coming.

  3. HuBo Silver badge

    "Simulation, for example, is better suited to planning and forecasting than GenAI"

    Good point! Seems to me, from that Gartner Heat Map (where the word 'stability' has AI/OCRed itself in, instead of 'suitability'), that if you run Simulation+Graphs, your stack is suitable for 92% of use cases (75% High suitability, 17% Medium) ... the only thing missing is Perception (Low suitability).

    And Sims+Graphs might hallucinate quite a bit less than gen/nongen-AI too ...
