* Posts by Il'Geller

118 posts • joined 11 Oct 2019


Stack Overflow banishes belligerent blather with bespoke bot – but will it work?


Re: I can't be the only one

I told you: AI came from NIST TREC QA and is textual search.

Don't believe the hype: Today's AI unlikely to best actual doctors at diagnosing patients from medical scans


Re: essentially standard pattern recognition

Set Theory: how to find an intersection between two sets of patterns?

- When “matching”, an exact match between the two sets is sought.

- When “searching”, the most suitable element is sought.

Dictionary definitions of the patterns’ words narrow the area of intersection and help to “find” rather than “match”, as Microsoft, OpenAI and IBM proved. The doctors were not using Set Theory; therefore matching, not searching, was used.
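The match-versus-search distinction above can be sketched with plain Python sets; the example patterns and the word-overlap scoring below are invented for illustration:

```python
# "Matching": exact intersection of two pattern sets.
doctor_patterns = {"lesion on left lung", "irregular border", "3mm nodule"}
reference_patterns = {"irregular border", "spiculated mass", "5mm nodule"}

exact_matches = doctor_patterns & reference_patterns
print(exact_matches)  # {'irregular border'}

# "Searching": the most suitable pattern, scored here by word overlap.
def best_match(pattern, candidates):
    words = set(pattern.split())
    return max(candidates, key=lambda c: len(words & set(c.split())))

print(best_match("3mm nodule", reference_patterns))  # 5mm nodule
```

Exact matching returns only identical patterns; searching still returns the closest one even when nothing matches exactly.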

This post has been deleted by a moderator

Self-driving truck boss: 'Supervised machine learning doesn’t live up to the hype. It isn’t C-3PO, it’s sophisticated pattern matching'


Re: Rivers.

Matching is possible only when both a question and its answer are known. The technology developed for NIST TREC QA is for unknown questions and answers.

After the unknown answer is found, it is tried. If the feedback is good, the AI memorizes the pair. If not, a new text is searched for (which is Machine Learning) and a new attempt follows.

Starsky Robotics shut down after running out of money because they tried to discover all possible questions and answers. My technology for NIST TREC QA doesn’t do this and costs only a tiny fraction of what Starsky paid.


NIST TREC QA wanted to find the right patterns in a gigantic number of texts. The resulting technology can be called AI because the patterns must be found by their meaning.

This post has been deleted by a moderator

Yes, true, fusion reactors don't work quite yet, but, er, maybe AI can help us stop our experiments from imploding


Re: Judgement

Everything that happens with fusion reactors is described, or can be described, by texts. In other words, it all boils down to searching the texts for what needs to be done at this particular moment, which is AI.


This is the definition.

If you have another definition, would you share it, please?

AI came from NIST TREC QA as AI parsing, which replaces n-gram parsing. This parsing is the only novelty in all of Science (its linguistic part) since the first computer was switched on. If you know of another novelty, please be so kind and tell me.


Re: Kaboom!

Yes, it’s a very bad idea. However, it’s the only one.


AI is a search technology; it finds information in structured texts.

Indeed, “Details from various fusion reactors, such as their plasma current, plasma energy, radiation power, strength of radial magnetic field, and more are taken into account and used as inputs into the algorithm.”

Therefore it will work.


Re: Kaboom!

Machine learning.

Come back, AI. All is forgiven: We know we've mocked you in the past, but we need help analyzing 26,000 papers on COVID-19, coronaviruses


I can post whatever I want?

VCs warn: Pumping millions into an AI startup? You mean, pumping millions into Azure, AWS or Google Cloud...

This post has been deleted by a moderator

Google says its latest chatbot is the most human-like ever – trained on our species' best works: 341GB of social media


Google uses search technology to get answers and is now looking for them offline, although Google is known for finding answers only online. That means Google is no longer an online company and is using new technology.

This post has been deleted by a moderator

Mysterious face-recog AI startup Clearview sued, capabilities questioned after scraping billions of web pics


There is no need to describe the neurons themselves and their connections as such; AI technology catches the manifestation of their inner essence as displayed in texts. It assumes that a single neuron (or a group of neurons) is equivalent to a single phrase. That is, the technology does not need to study the “humble drosophila” but immediately deals with humans and their texts.

Amazing peer-reviewed AI bots that predict premature births were too good to be true: Flawed testing bumped accuracy from 50% to 90%+


The basis of neural networks is Frank Rosenblatt's Perceptron, which never worked before because it was created using n-gram parsing. However, the use of AI parsing meant that neural networks began to work.

Image-rec startup for cops, Feds can probably identify you from 3 billion pics it's scraped from Facebook, YouTube etc

This post has been deleted by a moderator

This post has been deleted by a moderator


Artificial Intelligence is a database containing structured texts. Face recognition is an algorithm.

Nowhere to run to, nowhere to hide, muaha... Boffins build laser-eyed intelligent cam that sorta sees around corners


So my AI is parasitic on human texts.


There is a clear and unambiguous definition of Artificial Intelligence:

AI searches for, finds, and uses answers in the form of structured texts, based on their

- explicit contexts that can be read,

- and implicit subtexts that are implied as dictionary definitions and allusions to other texts, where an allusion is an expression designed to call something to mind without mentioning it explicitly: an indirect or passing reference.

Thus, everything is reduced to finding answers to the questions asked by the computer.

Europe mulls five year ban on facial recognition in public... with loopholes for security and research

This post has been deleted by a moderator

OpenAI's GPT-2 secret life as a pawn star: Boffins discover talkative machine-learning model can play chess

This post has been deleted by a moderator

Google and IBM square off in Schrodinger’s catfight over quantum supremacy

This post has been deleted by a moderator

I spy, with my little satellite AI, something beginning with 'North American image-analysis code embargo'


Re: Banning software that allows labelling of images?

Yes, this is my AI technology.

The IoT wars are over, maybe? Amazon, Apple, Google give up on smart-home domination dreams, agree to develop common standards


IoT data becomes unique when explained by texts; at present it is either not explained at all or annotated by the rows and columns of SQL tables. However, an IoT record can only be found (for future use and reference) if it is unique.

There are two possibilities for standardizing IoT data:

1. Rows-columns of SQL tables,

2. Using the method of Artificial Intelligence annotations, namely clarification by texts.

Text-only annotations do not allow AI to operate on the true meanings of patterns; they provide only contexts and almost no correct subtexts. Dictionary-and-encyclopedia definition annotations, by contrast, allow the AI to literally understand the patterns' meanings.
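The two annotation options can be contrasted with a toy example; the field names, values, and annotation text below are all invented:

```python
# Option 1: an SQL-style record, unique only by its fixed rows and columns.
sql_record = {"sensor_id": 17, "temp_c": 21.5, "ts": "2020-03-01T12:00"}

# Option 2: the same reading explained in words, which makes the record
# searchable by meaning rather than by column name.
text_annotation = (
    "living-room thermostat reported a comfortable 21.5 degrees Celsius "
    "at noon on the first of March"
)

# A text-annotated record can be found with an ordinary word query...
print("comfortable" in text_annotation)   # True
# ...while the SQL row only answers queries on its fixed columns.
print("comfortable" in str(sql_record))   # False
```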


Of course not! IoT is my property.

SAP bet the house on S/4HANA but most users aren't ready to move


SAP derives the uniqueness of records from the uniqueness of rows and columns in a table.

AI achieves much better uniqueness by annotating records with texts.

Is your computer doctor secretly a racist? Two US senators want to find out the truth


The solution to the problem is very simple! For "training" it is enough to use dictionaries and encyclopedias, which have an absolute minimum of bias among all texts.

Larry leaves, Sergey splits: Google lads hand over Alphabet reins to Sundar Pichai


I won.

Explain yourself, mister: Fresh efforts at Google to understand why an AI system says yes or no


Re: Quality and quantity

AI technology annotates the words of patterns using dictionaries and encyclopedias, which are the only texts with virtually no bias, cleansed of any unnecessary information. That is, AI makes words unique using the best source of uncontaminated, first-class information.

As for quantity: AI technology builds chains of dictionary definitions related to the meaning of words. These chains can be 50-200 or more paragraphs long, which provides both quantity and quality. For example, the AI can find 2-5 websites where Google outputs tens of millions; I used that in NIST TREC QA, and IBM used it in Jeopardy!

The Register talks to Azure Data Veep about Synapse and SQL Server

This post has been deleted by a moderator

Amnesty slams Facebook, Google over 'pervasive surveillance' business model


Google and FB sell advertisers uniqueness: what makes a pattern different from all other patterns. To do this they use data about what is popular with people, gathered by spying on them.

AI technology creates unique patterns by analyzing the texts from which the patterns come, in particular by annotating the patterns' words with their unique dictionary definitions and references to encyclopedias and other texts.


Do you want to be Google and FB's slave? At least I make you free.


There is my AI technology; it completely replaces Google and FB and needs no espionage.

Twitter wants help with deepfakes, and Microsoft Azure will rent out new AI chips for its cloud users, and more

This post has been deleted by a moderator

Boffins harnessed the brain power of mice to build AI models that can't be fooled


Yes, it is. For the last 75 years only n-gram parsing technology has been used, which led to a purely mechanical parsing of texts, followed by a purely mechanical search for words (not information). Now it is possible to apply AI parsing instead, which provides meaningful patterns and helps to find meaningful information.

"Meaningful" means the ability to represent a human's mental sphere externally, without invasive study of his brain mechanics. That is, there is an AI distinction between medical aspects and the study of cognitive abilities. Mice are no longer needed, nor are any other animals.


1. Are you financed?

2. What for?


Money, only money! If science does not make money, it’s nothing: your pure science is a kind of Go or poker, suitable only for passing the time. Brain analysis is a pastime.


What do the brain and neuroscience have to do with AI? AI is based on language understanding. Indeed, why waste time and effort trying to understand the cause when the consequences can be easily analyzed? Especially since, over the last couple of thousand years, all attempts to understand how the brain works have not led to any success.

Google brings its secret health data stockpiling systems to the US


SQL and Google

...and claiming it doesn’t need their consent either....

Google still practices the SQL approach: it annotates and explains patient data from the outside, using only what Google has access to. For example, the patient's queries or unrelated patterns (extracted from texts found by the patient and known to Google).

The information and texts that are not available to Google (for example, those read in places inaccessible to Google) are simply not noticed. But in health-related cases the inability to get any detail is dangerous! This is not Internet search, where Google can play innocent and say "Google did not find this and output only what it found."

AI technology, as I've told you a million times, clarifies data from within, using all the data on personal devices. That is, the patient annotates everything in there, and ALL data related to his health will be structured and used. (Not just what Google saw.)

AI technology ensures that everything on which human health depends will not be lost! Google and SQL don't.

Is this paragraph from Trump or an AI bot? You decide, plus buy your own AI for $399


"...GPT-2 is a large transformer-based language model with 1.5 billion parameters, trained on a dataset of 8 million web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text."

And? What for? To mimic Trump?

AI is a purely commercial project from the very beginning! AI looks for groups of patterns that are both contextually and subtextually aimed at a practical purpose: for instance, commands for a driverless car or information for a financial broker.

AI is trained using a very good indexed dictionary, 25 MB in size.


OpenAI has no idea that texts contain not only contexts but subtexts as well. That is, not only what is seen explicitly and can be read, but also what is meant implicitly! For example, all of a text's words have dictionary definitions, and they are tied to other texts' synonymous clusters (taking their timestamps into account). These implicit definitions and connections are also part of the text, although they cannot be seen or read.

The technology for subtext recovery exists, and it is simple: the computer needs to select, for each word, the unique dictionary definition that fits the context before and after the word and is in harmony with the rest of the text's dictionary definitions. The same should be done with links to synonymous clusters, finding those texts that are consonant with the given one.

There is no need to train on terabytes of other data! A dictionary of a few megabytes is enough. That alone saves millions and billions of dollars; plus the AI becomes intelligent, starts to understand, and ceases to be a toy (mimicking Trump).
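The definition-selection step described above resembles a classic overlap heuristic (Lesk-style word-sense disambiguation). A minimal sketch, with a made-up two-sense dictionary:

```python
# Pick, for each word, the dictionary definition that best fits the context,
# scored by word overlap. The two-sense dictionary is invented for illustration.
TINY_DICT = {
    "bank": [
        "an institution that accepts deposits and lends money",
        "the sloping land alongside a river",
    ],
}

def pick_definition(word, context_words, dictionary):
    """Choose the sense whose definition shares the most words with the context."""
    best, best_overlap = None, -1
    for definition in dictionary.get(word, []):
        overlap = len(set(definition.split()) & set(context_words))
        if overlap > best_overlap:
            best, best_overlap = definition, overlap
    return best

context = "he sat on the bank of the river watching the water".split()
print(pick_definition("bank", context, TINY_DICT))
# the sloping land alongside a river
```

A real system would need much subtler scoring, but the principle (context selects the unique definition) is the same.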

Microsoft's phrase of the week was 'tech intensity' and, no, we're not sure what it means either


Microsoft may very soon lose its main business, because it will not withstand the competition with AI. The slogan "Tech Intensity" is gaining unprecedented relevance for Microsoft; it must decide how to deal with AI.

Microsoft manufactures and sells software products, which are created by programmers; the work of programmers is to translate texts (so-called specifications) into a structured format (that is, into programming code). Thus Microsoft produces and sells "translations."

AI is able to "translate" the same, but without the participation of people, "translating" texts into what I call "synonymous clusters". For example, there is a paragraph:

-- Press the blue and white button. Then press blue again.

A programmer (human) must manually code ("translate") this specification; AI structures it into several patterns:

- and press the blue button

- and press the white button

- then press the blue button again,

using AI parsing and AI indexing.

There are two patterns here which compose a synonymous cluster on the blue button:

- and press the blue button

- then press the blue button again.

AI "understands" the cluster and can easily find and execute it. Microsoft does the same using people, which is much more expensive.
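The grouping of patterns into a synonymous cluster can be sketched as follows. The patterns are the ones from the example above; the clustering rule (group patterns by the object they mention) is my own guess at the idea, not the actual technology:

```python
# Group extracted patterns into "synonymous clusters" keyed by the object
# they act on.
from collections import defaultdict

PATTERNS = [
    "and press the blue button",
    "and press the white button",
    "then press the blue button again",
]

OBJECTS = ["blue button", "white button"]  # objects to cluster on

def cluster_patterns(patterns, objects):
    clusters = defaultdict(list)
    for pattern in patterns:
        for obj in objects:
            if obj in pattern:
                clusters[obj].append(pattern)
    return dict(clusters)

clusters = cluster_patterns(PATTERNS, OBJECTS)
print(clusters["blue button"])
# ['and press the blue button', 'then press the blue button again']
```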

Enjoy a tipple or five? You might need this AI system to tell you when it's time for a new liver


Re: @ll'Geller - It's not pointless.

The idea of Artificial Intelligence:

1. A personal profile is created, based on structured texts; this is the NLP part, where you need it as such.

2. A search query, which may consist of 1-2 words, is expanded into complete and meaningful patterns; the technology for this is outlined in my US Patent 6,199,067 (PA Advisors v Google).

3. This search pattern is then filtered through the profile and enriched with hundreds or thousands of explanatory patterns.

4. Data is searched.

5. The information found is used by AI. (For example, by Waymo and Uber driverless cars.)

Machine Learning technology helps to refine the queries by the addition of texts.

That's it, nothing more or less.
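The five steps can be sketched as a toy pipeline; every function body below is a placeholder of my own invention (illustrative synonym table, naive substring search), not the patented technology:

```python
def expand_query(query):
    # Step 2: expand a 1-2 word query into fuller patterns.
    synonyms = {"liver": ["liver disease", "hepatic condition"]}
    return [query] + synonyms.get(query, [])

def filter_through_profile(patterns, profile):
    # Step 3: keep patterns that overlap the personal profile (step 1).
    return [p for p in patterns if any(word in profile for word in p.split())]

def search(patterns, corpus):
    # Step 4: naive substring search over a text corpus.
    return [text for text in corpus if any(p in text for p in patterns)]

profile = {"liver", "alcohol", "diet"}  # step 1: the personal profile
corpus = ["patient history mentions liver disease", "weather report for today"]

patterns = filter_through_profile(expand_query("liver"), profile)
print(search(patterns, corpus))  # step 5: the found text is handed to the AI
```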


Re: @ll'Geller - It's not pointless.

"...NLP may provide opportunities to detect cognitive impairment in ESLD..."

Machine Learning is not NLP! These are two different things: NLP just helps structure information, translating it into a computer-friendly, readable format; that's all.

AI uses structured information (texts) to search for other information, formulating search queries (expanding them to several hundred or thousand patterns). In turn, feedback-based Machine Learning refines these searches by attracting new texts.

Finally, the AI acts, or does not act, in accordance with the information found.

They, however, simply compare emails: "...where they compared the emails..."


Machine Learning technology is designed to improve information search results, not to compare texts and find anomalies in them. Thus, any attempt to use the technology in this way is pointless.

What could go wrong? Redmond researchers release a blabbering bot trained on Reddit chats


Re: "make sure you're alive when you die"

I tried to sell an "Eternal Life" product, the creation of Lexical Clones (personal AI databases)... 15 years ago. I was not able to attract any attention then, though.


Re: So...

Quite expensive, because

1) you should annotate everything in Sesame Street with dictionary definitions, and

2) find other texts related to the Sesame Street texts, extract synonymous clusters from them, build blockchain relationships, and annotate Sesame Street with them.

Only then can you say that Sesame Street is understood and the computer becomes AI.



Biting the hand that feeds IT © 1998–2020