* Posts by Il'Geller

155 posts • joined 11 Oct 2019


Client-side content scanning as an unworkable, insecure disaster for democracy

Il'Geller

Re: Apple has its own agenda

They all have the same agenda: how to find the best answer in tons of texts, the same as at NIST TREC QA. Only one answer, in its context, only one! Further, the answer should be sold, which is easy.

Thus Apple, as well as Microsoft, IBM, OpenAI, GumGum and a few thousand more companies, are trying to obtain each user’s texts, distill patterns and sell them. Apple is no different from the rest ...

Il'Geller

Any text in the AI system has the significance of advertising: it is delivered only to whoever wants to read it. Any image in the AI system is annotated with text, delivered based on 1) this text and 2) the image’s specific characteristics.

The problem of privacy in AI does not exist, on the one hand: a text can be prepared on a personal computer, becoming a set of incomprehensible, unreadable patterns. Thus, absolute confidentiality.

At the same time, the AI means total control over information: texts can inevitably be censored easily and immediately. No confidentiality at all.

Il'Geller

It is insanely, astronomically expensive to scan texts externally for further use, such as obtaining ad patterns. Indeed, all words of the texts must be annotated and logical connections between patterns and parts of texts must be established, which costs absolutely incredible money. It is much cheaper to process texts on users’ computers and then receive the patterns, for example for advertising, directly from them.

Sharing medical records with researchers: Assumed consent works in theory – just not yet in practice

Il'Geller

The simplest thing is to process the data on your computer into a set of patterns that are impossible to understand. But these patterns will be quite enough to search for information, provide you with personalized ads, etc.

This is how a fragment of a model of a technology document looks:

datum - be - in : 1794

user - be - in : 1552

profile - be - in : 1441

datum - be - remote : 1335

system - be - remote : 1193

datum - be - plural : 1131

system - be - in : 1110

one - be - least : 1066

Your data will remain yours, and commercial companies will get what they need. For example, the pattern “datum - item - plural : 883”.

The wolves are fed, and the sheep are safe.
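The local profiling described above can be sketched in a few lines. The regex-based "X is/are Y" extraction below is a made-up stand-in for the real annotation technology, used only to show how readable text collapses into counted, unreadable patterns:

```python
# A minimal sketch of building a local pattern profile like the
# fragment above: count naive "noun - be - attribute" patterns in a
# text and keep only the counts, not the readable source.
# The crude regex extraction is illustrative, not the real technology.
import re
from collections import Counter

def profile(text: str) -> Counter:
    counts = Counter()
    # Naively treat "X is/are Y" clauses as "x - be - y" patterns.
    for subj, attr in re.findall(r"(\w+)\s+(?:is|are)\s+(\w+)", text.lower()):
        counts[f"{subj} - be - {attr}"] += 1
    return counts

text = "The data is remote. The user is in. The data is remote again."
for pattern, n in profile(text).most_common():
    print(f"{pattern} : {n}")
```

Only the pattern counts would leave the machine; the original sentences never do.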

AI caramba, those neural networks are power-hungry: Counting the environmental cost of artificial intelligence

Il'Geller

If one uses NLP, one gets statistical analysis: it shows how important information is. Mentioning something in passing, in one phrase of a huge text, has less value than the same phrase in a text of two sentences.
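The brevity-based significance idea can be illustrated with a toy function. The inverse-sentence-count formula below is an assumption made for illustration, not the actual metric:

```python
# A minimal sketch of the significance idea above: the same phrase
# counts for more in a short text than in a long one. Weighting a
# phrase by the inverse of the number of sentences is an assumed,
# illustrative formula.
def phrase_weight(text: str) -> float:
    sentences = [s for s in text.split(".") if s.strip()]
    return 1.0 / len(sentences)

short = "AI is search. It finds answers."
long_ = ("AI is search. " + "Filler sentence here. " * 8).strip()
print(phrase_weight(short) > phrase_weight(long_))  # → True
```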

Il'Geller

The AI that you see finds information in its context and subtext. That is, the computer literally understands the texts. This is the real AI.

Il'Geller

The idea is to make personal AIs, using personal computers and phones. If it’s done, the demand for them (as well as for the energy for the AIs’ creation) will be significantly decreased, I believe by many tens of times. Why?

Such personal AIs can be trained much faster, because all the required annotations can rapidly be obtained by following the owners’ textual habits. This allows enormous money to be saved! And it is good for energy consumption, ecology, etc.

A developer built an AI chatbot using GPT-3 that helped a man speak again to his late fiancée. OpenAI shut it down

This post has been deleted by a moderator

Can WhatsApp moderators really read your encrypted texts? Yes ... if you forward them to the abuse dept

Il'Geller

AI technology and absolute censorship, no escape from this

AI technology assumes absolute censorship, and there will be no escape from this. The AI understands the meaning, not the words: it is enough to create a specifically oriented profile, and it will automatically catch 100% of certain information. This means total control over what is happening on the Internet. The Internet has become a database where everything is laid out on the right shelves, numbered and sorted by meaning. The new era, the new rules!

What happens when your massive text-generating neural net starts spitting out people's phone numbers? If you're OpenAI, you create a filter

Il'Geller

The reason for the above (security) problems is the choice of the wrong set of texts for training AI. I initially chose sets of personal texts: for example, the texts of Dickens or Dostoevsky. The fact is that such AIs have all the character traits of their prototypes and can hide and deceive. For instance, an AI clone of Dostoevsky hid information about his participation in a conspiracy against Russia. Thus a personalized AI can be trained in what information it can give, and to whom, and what to hide.

I tried to create an AI using collections of random texts, as OpenAI does. Such AIs are completely unmanageable and simple-minded; they are not able to think and they talk complete nonsense...

Banned: The 1,170 words you can't use with GitHub Copilot

Il'Geller

Re: Are you now, or have you ever been ...

I was, 19 years ago.

http://web.archive.org/web/20050318004227/http://www.lexiclone.com/fi_starc.html

Il'Geller

"The technical preview includes filters to block offensive words and avoid synthesizing suggestions in sensitive contexts,"

To understand whether something is an insult, a joke or something else, a large number of implicitly used texts should be considered, which are traditionally called “subtexts”. For example, Merriam-Webster explains what a subtext is: the implicit or metaphorical meaning (as of a literary text).

GitHub Copilot needs to build personal profiles, from which it can get the extra information needed to infer the meanings attached to words and phrases.

Imaginary numbers help AIs solve the very real problem of adversarial imagery

Il'Geller

“… correctly labelled by the object recognition algorithm with a 57.7 per cent confidence level, was modified with noise - making the still-very-clearly-a-panda appear to the algorithm as a gibbon with a worrying 93.3 per cent confidence.”

Great! I no longer need to say that the only innovation in the past 70 years is text annotation, or to insist that everything depends on textual search.

Adding AI to everything won't make sense until we can use it for anything

Il'Geller

Don’t panic! I am here. Microsoft, SAP and others help me.

GitHub's Copilot may steer you into dangerous waters about 40% of the time – study

Il'Geller

Re: Sure it's shit 40% of the time...

The way out is to abandon programming languages and to create a language that uses segments of everyday language as commands. This will help to increase the efficiency of what OpenAI does.

Il'Geller

Re: it tries to conjure blocks of code that function as described

Yes, I think in three or four years. However, this will not be exactly programming code in the current sense, but a text structured into meaningful phrases, where each such phrase will have the power of a sequence of computer-language commands.

Only one software giant to make impact on the robotic process automation market, says analyst

Il'Geller

Microsoft is investing in the future: AI can be trained and can learn. It is enough to explain some initial texts to the AI, showing which word belongs to which part of speech and what it means; after that, the AI continues by itself. Of course, the AI should be helped and corrected, which is no different from teaching a child.

By teaching the AI texts, Microsoft, using the basics of cybernetics, will also teach the AI how to handle receptors and manipulators, establishing a direct connection and correlation with textual information. This tactic will help Microsoft save fabulous money on programmers, since the AI will train itself using texts.

Microsoft is investing in a gold mine!

Il'Geller

Only AI-parsing is a novelty in the entire field of "robotic process automation"; nothing else fundamentally new has been proposed. (This AI-parsing replaces the outdated n-gram parsing, helping to structure texts into a format that is understandable to a computer.) And finding the right information in the right context is simple. So, the new thing that Microsoft has added to "robotic process automation" is its search for textual information.

Boston Dynamics spends months training its Atlas robots to perform one minute of parkour almost perfectly

Il'Geller

“That is to say, as far as we can tell, the robots were given a set of basic actions by their creators, and then learned how to use those actions to get from A to B from what they could see around them.”

The same search for the right solutions applies here: there is a context, and a solution is searched for. The context should be properly annotated, which makes it searchable. This is the AI technology.

OpenAI's GPT-3-based pair programming model – Codex – now open for private beta testers through an API

Il'Geller

Re: Goals and criteria

Yes, this is a problem that cannot be solved using giant models such as GPT-3. A specific trustworthy specialist should be lexically cloned, so that his texts contain links to the correct and optimally suitable solutions. However, GPT-3 and its kind contain a lot of competing solutions from a huge number of specialists, where only one is needed. Personalization is the solution.

Il'Geller

The reason why it is possible to translate text into a program

The new AI-parsing, which replaced n-gram parsing, is the reason why it’s possible to translate text into a program. (See the article “Parsing” on Wikipedia.) For example, there is a sentence:

— Anna, Sofa and Angela are singing merrily.

There are three phrases in it:

- Anna sings merrily 0.(3)

- Sofa sings merrily 0.(3)

- Angela sings merrily 0.(3)

This is AI-parsing (which, by the way, makes philosophy a science, because it brings objectivity, in the form of numbers).

But n-gram parsing can produce, for example, these three phrases (if the comma is used as a delimiter):

- Anna,

- Sofa,

- Angela are singing merrily.

Do you see any sense in them?

n-gram parsing is also not able to get the weights of the phrases (on the right, 0.(3)), which indicate their significance in the text: the shorter and more laconic the sentence and phrase, the more significant it is.
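As a rough illustration of this decomposition, assuming the simple sentence shape used in the example (a compound subject plus one predicate), a sketch might look like:

```python
# A minimal sketch of the phrase decomposition described above.
# It assumes a very simple sentence shape: "A, B and C are <verb>-ing <adverb>."
# Real AI-parsing would rely on a full syntactic parser; this only
# illustrates how one sentence yields several equally weighted phrases.
from fractions import Fraction

def decompose(sentence: str):
    subjects_part, _, predicate = sentence.rstrip(".").partition(" are ")
    # Split the compound subject on commas and "and".
    subjects = [s.strip() for s in
                subjects_part.replace(" and ", ", ").split(",") if s.strip()]
    # Normalize the plural predicate to a singular form (illustrative only).
    verb = predicate.replace("singing", "sings")
    weight = Fraction(1, len(subjects))  # three subjects -> 1/3, i.e. 0.(3)
    return [(f"{s} {verb}", weight) for s in subjects]

phrases = decompose("Anna, Sofa and Angela are singing merrily.")
for text, w in phrases:
    print(text, float(w))  # each phrase carries weight 1/3
```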

AI to be bigger than IaaS and PaaS combined by 2025

Il'Geller

Re: Of course….

Not many. Look at whether NLP is used, and whether the data is annotated. If it is annotated, that means semantic search is used. Then you see a real AI.

20,000 proteins expressed by human genome predicted by DeepMind's AlphaFold now available to download

Il'Geller

I began the project “Immortality” twenty years ago: http://web.archive.org/web/20050323220928/http://www.lexiclone.com/immortality.html

If you've mastered Python 101, you're probably better at programming than OpenAI's prototype Codex

Il'Geller

Re: 12 billion parameters, 159GB of Python source code

Playing with words: OpenAI is not able to really write a program; it only finds the right pieces of code based on their textual descriptions and combines them in order. Indeed, how can a company famous for writing meaningless texts suddenly write reasonable code? Where did the ability to write a logically accurate program suddenly come from? Mystery...

OpenAI searches for textual descriptions.

Il'Geller

Re: 12 billion parameters, 159GB of Python source code

The idea is not to write code, but to find the needed piece of code based on its textual description. This is what OpenAI does — textual search.

YouTube's recommendation engine is pretty naff, Mozilla study finds

Il'Geller

Re: So, that's GitHub Copilot & YouTube recommendations..

I agree with you; you have expressed the essence quite nicely! Indeed, it is naive and stupid to hope that people will consciously provide the right context and subtext. Yes, "people will game the systems with lots of irrelevant keywords, just like used to happen with web sites". Therefore, I believe "what they are lacking is any actual AI or decent ML".

I believe that there should be a downloadable program that creates the AI on a personal computer or phone and does everything, for example annotating all the texts used. We must own our profiles and have 100% privacy. And all these monsters, like FB and Google, can go to hell!

Il'Geller

Re: So, that's GitHub Copilot & YouTube recommendations..

As you know, AI finds answers to the questions asked by comparing the contexts and subtexts of both. So, in order for YouTube to recommend something, it needs a lot of explanatory texts (for both the question and the answer). But while Google solved the problem of clarifying texts (for questions) by creating giant, mostly manually structured models, it does not have enough texts to explain the content in the YouTube database. Really, where can Google get such texts for the stupid lyrics of a rap song, or for a no less silly commercial?

Therefore, Google (and any other company) should provide the opportunity to attach the needed explanatory texts, with their subsequent profiling. So far I do not see how Google solves this urgent and difficult engineering problem.

Robots still suck. It's all they can do to stand up – never mind rise up

Il'Geller

Re: Musk, are you listening?

AI finds textual answers to textual questions, where both questions and answers are clarified by their textual contexts (they are annotated). In other words, the AI compares textual contexts. By comparing these contexts, the AI receives textual instructions explaining what to do.

Thus a robot can choose one line among all, and is “able to recognise obstacles in the road”, based on text.

Il'Geller

Re: Musk, are you listening?

I am here, what do you need Musk for?

The essence of AI is to find the only answer to the question asked, in its context. One needs to install such an AI on the robot, that's all.

GitHub Copilot auto-coder snags emerge, from seemingly spilled secrets to bad code, but some love it

Il'Geller

Three to five years

You don't understand how it will work very soon. The AI will take a spec and really understand it, its true meaning. And then, using the solutions kept in its memory, the AI will write the code itself. Obviously, in three to five years Microsoft will carry out this project, replacing the present approach.

Il'Geller

Re: Inevitable... it's Microsoft.

You understand that we are talking about parsing, finding the correct information in its context. This is all the technology Microsoft uses.

Il'Geller

Re: That's the problem with AI

It can do both.

Nvidia launches Cambridge-1, UK's most powerful supercomputer, in Arm's neighbourhood

Il'Geller

“The challenge is how to make sure it can translate into an unbiased, equitable model, and to do that, especially in healthcare, you need access to a huge amount of data which are representative of your population.”

Indeed, medical data must be impartial; the level of bias must be minimal. That is, each word in a medical document must have one well-defined meaning. This can be achieved by comparing the multiple definitions of a given word (taken from a dictionary) with the context in which the word appears, which requires incredibly huge resources.
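This definition-versus-context comparison resembles the classic Lesk disambiguation approach. Here is a minimal sketch, with a tiny dictionary invented purely for illustration:

```python
# A minimal sketch of the disambiguation step described above:
# pick, for an ambiguous word, the dictionary definition whose words
# overlap most with the word's surrounding context (Lesk-style).
# The tiny dictionary below is made up for illustration.
def best_sense(word, context, dictionary):
    ctx = set(context.lower().split())
    scored = []
    for sense, definition in dictionary[word].items():
        overlap = len(ctx & set(definition.lower().split()))
        scored.append((overlap, sense))
    return max(scored)[1]  # sense with the largest word overlap

dictionary = {
    "discharge": {
        "release": "release of a patient from hospital care",
        "fluid": "flow of fluid from a wound or organ",
        "electric": "flow of electric current through a gas",
    }
}
sense = best_sense(
    "discharge",
    "the patient was ready for discharge from the hospital ward",
    dictionary,
)
print(sense)  # → release
```

Doing this for every word of every document, against full dictionaries, is what makes the process so resource-hungry.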

AI in the Enterprise: How can we make analytics and stats sound less scary? Let's call it AI!

Il'Geller

Ilya Geller

AI is texts put into a format understandable to the computer, that is, into structured patterns related by sense. These texts are the algorithm and the process; they contain the logic, define the actions to achieve the result, and provide feedback. Therefore, it is absurd to ask "where you draw the lines between process and algorithm": there is no distinction between them in the texts.

AI in the enterprise: Get ready for a whole new era of smart software fueled by mountains upon mountains of data

Il'Geller

All data should be annotated by text; it contains “memory, knowledge, experience, understanding, reasoning, imagination and judgement”, and it makes everything work.

AI in the enterprise: AI may as well stand for automatic idiot – but that doesn't mean all machine learning is bad

Il'Geller

Ilya Geller

The wrong technology is used! Instead, personalization, which I call Lexical Cloning, should be applied.

'We're not claiming to replace humans,' says Google, but we want to be 'close enough' that you can't tell it's a bot talking

Il'Geller

Ilya Geller

As I said, AI is something that understands and can speak.

Stack Overflow banishes belligerent blather with bespoke bot – but will it work?

Il'Geller

Re: I can't be the only one

I told you: AI came from NIST TREC QA and is textual search.

Don't believe the hype: Today's AI unlikely to best actual doctors at diagnosing patients from medical scans

Il'Geller

Re: essentially standard pattern recognition

Set Theory: how to find an intersection between two sets of patterns?

- When “matching”, an exact match of the two sets is sought.

- When “searching”, the most suitable match is sought.

Dictionary definitions of the patterns’ words narrow the areas of intersection and help to “find”, not “match”; as Microsoft, OpenAI and IBM proved. The doctors had not been using Set Theory, and therefore matching, not searching, was used.
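The "match" versus "search" distinction can be sketched directly with Python sets; the symptom pattern sets below are invented for illustration:

```python
# A minimal sketch of the distinction drawn above, using Python sets:
# "matching" demands an exact match, while "searching" takes the
# candidate with the largest intersection with the query.
def match(query: set, candidates: list):
    """Exact match: return a candidate equal to the query, if any."""
    return next((c for c in candidates if c == query), None)

def search(query: set, candidates: list):
    """Best match: return the candidate with the largest intersection."""
    return max(candidates, key=lambda c: len(query & c))

query = {"fever", "cough", "fatigue"}
candidates = [
    {"fever", "rash"},
    {"fever", "cough", "headache"},
    {"nausea"},
]
print(match(query, candidates))                                   # → None
print(search(query, candidates) == {"fever", "cough", "headache"})  # → True
```

With realistic data an exact match almost never exists, which is why searching for the closest intersection, not matching, is the useful operation.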

This post has been deleted by a moderator

Self-driving truck boss: 'Supervised machine learning doesn’t live up to the hype. It isn’t C-3PO, it’s sophisticated pattern matching'

Il'Geller

Re: Rivers.

A matching is possible if both a question and its answer are known. The technology developed for NIST TREC QA is for unknown questions and answers.

After the unknown answer is found, it is tried. If the feedback is good, the AI memorizes the pair. If not, a new text is searched for, which is Machine Learning, and a new attempt follows.

Starsky Robotics has shut down after running out of money because they tried to discover all possible questions and answers. However, my technology for NIST TREC QA doesn’t do this and costs only a tiny fraction of what Starsky paid.

Il'Geller

NIST TREC QA wanted to find the right patterns in a gigantic number of texts. The resulting technology can be called AI because the patterns should be found by their meaning.

This post has been deleted by a moderator

Yes, true, fusion reactors don't work quite yet, but, er, maybe AI can help us stop our experiments from imploding

Il'Geller

Re: Judgement

Everything that happens with fusion reactors is described, or can be described, by texts. In other words, it all boils down to searching the texts for what needs to be done at this particular moment, which is AI.

Il'Geller

This is the definition.

If you have another definition, would you share it, please?

AI came from NIST TREC QA as AI-parsing, which replaces n-gram parsing. This parsing is the only novelty in all of science (in its linguistic part) since the first computer was switched on. If you know of another novelty, please be so kind and tell?

Il'Geller

Re: Kaboom!

Yes, it’s a very bad idea. However, it’s the only one.

Il'Geller

AI is a search technology; it finds information in structured texts.

Indeed, “Details from various fusion reactors, such as their plasma current, plasma energy, radiation power, strength of radial magnetic field, and more are taken into account and used as inputs into the algorithm.”

Therefore it will work.

Il'Geller

Re: Kaboom!

Machine learning.

Come back, AI. All is forgiven: We know we've mocked you in the past, but we need help analyzing 26,000 papers on COVID-19, coronaviruses

Il'Geller

I can post whatever I want?

VCs warn: Pumping millions into an AI startup? You mean, pumping millions into Azure, AWS or Google Cloud...

This post has been deleted by a moderator



Biting the hand that feeds IT © 1998–2021