"Feedback" means that lexical clones are generated, that is, models with strictly defined bases and relatively small sizes. That's all. The sizes are 50–500,000 weighted phrases, a million at most, instead of billions. Plus blockchain technology.
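A minimal sketch of what such a small weighted-phrase model could look like, assuming a phrase-to-weight table is all there is; the phrases and weights below are entirely hypothetical, not taken from any real product:

```python
# Hypothetical "lexical clone": a small dictionary of weighted phrases,
# a few hundred thousand entries instead of billions of parameters.
clone = {
    "quantum entanglement": 3.2,
    "accumulation point": 2.7,
    "material point": 1.9,
}

def score(text: str, model: dict[str, float]) -> float:
    """Sum the weights of every model phrase found in the text."""
    return sum(w for phrase, w in model.items() if phrase in text.lower())

print(score("Notes on the accumulation point and quantum entanglement", clone))
# → 5.9 (two phrases match: 2.7 + 3.2)
```

At this scale the whole model fits in memory as a plain lookup table, which is the point of keeping it to hundreds of thousands of phrases rather than billions of parameters.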
276 publicly visible posts • joined 11 Oct 2019
Re: and another thing..
The problems that you have identified are not related to AI as such, but to the quality of the images that AI uses for its training, as well as to the quality of the feedback from the specialists who annotate these images. This Google AI only compares images and answers questions: it doesn't think, it searches; that's all it does.
Amazon initially used an erroneous technology based on a complete disregard of contexts (language segments, words and phrases) that were tightly tied to strictly defined actions and answers. It seems that Amazon now uses these contexts (or just publicly claims to, for pure advertising purposes), and we can expect Alexa to become much better soon.
Re: Loss of efficiency
Loss of efficiency: Google has been struggling with AI technology for 12 years, slowing down progress. Only the arrival of the behemoth Microsoft, with all its limitless resources, has begun to gradually change the situation. Soon Apple will come, helping Microsoft to displace Google from the electronic advertising market. I, for my part, will be glad to contribute to the collapse of Google.
Re: Oh, this is really going to impact,....
From the start IBM has been using the wrong technology: IBM was convinced that if paragraphs and manually made annotations were enough for the Jeopardy! quiz, then the same technology was more than enough for artificial intelligence in general. As a result IBM has constantly been losing the race for AI, not seeing that the main problem is the removal of unnecessary, meaningless phrases (which pollute the search).
Re: text to pictures - what could go wrong?
Creating images based on text requires an unambiguous understanding of the text, where every word is understood in only one, the right, sense. That can be done only by comparing the text in question with texts whose words have already acquired an unambiguous interpretation.
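One classic way to "compare with texts whose words already have one interpretation" is context-overlap disambiguation, in the spirit of the Lesk algorithm. A simplified sketch; the two glosses below are made up for illustration:

```python
# Simplified Lesk-style word-sense disambiguation: pick the sense whose
# gloss shares the most words with the sentence to be illustrated.
# The glosses are invented examples, not from a real dictionary.
SENSES = {
    "bank/river": "sloping land beside a body of water",
    "bank/finance": "institution that accepts deposits and lends money",
}

def disambiguate(word_senses: dict[str, str], sentence: str) -> str:
    ctx = set(sentence.lower().split())
    return max(word_senses, key=lambda s: len(ctx & set(word_senses[s].split())))

print(disambiguate(SENSES, "fishing from the bank of the river near the water"))
# → bank/river
```

A text-to-image system would need this decision for every ambiguous word before drawing anything, which is exactly where it can go wrong.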
This can go wrong.
Re: Its not quite as bad as claimed...
The solution to the problem is obvious and will soon be presented to us: GitHub and CodeWhisperer are the first step towards abandoning all programming languages. Indeed, any programming language is a formalized natural language, cast into a net of commonly understandable constructions. Microsoft, GitHub and CodeWhisperer formalize the same common, everyday language and get the same constructions, without the hassle of manual programming. Then why programming languages?
Re: Poor models
How does it work in an AI database? In it, annotations are incredibly large. For example, a single number, symbol, image fragment, or word can be annotated with many thousands of meaningful phrases, organized by timestamps. Moreover, these annotations are made without human intervention, automatically.
For a typical SQL (or NoSQL) database such annotations are simply impossible, because they are too costly both in terms of the necessary manual work and the evident lack of required information. AI can overcome all SQL problems through AI annotation.
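Taken at face value, the timestamp-ordered annotation store described above could be sketched like this; the data structure and keys are hypothetical, not any vendor's actual design:

```python
from collections import defaultdict
from bisect import insort

# Hypothetical AI-annotation store: each key (a word, symbol, image
# fragment id, ...) maps to a list of (timestamp, phrase) annotations
# kept sorted by timestamp, filled automatically rather than by hand.
index: dict[str, list[tuple[float, str]]] = defaultdict(list)

def annotate(key: str, timestamp: float, phrase: str) -> None:
    """Insert an annotation, keeping the list ordered by timestamp."""
    insort(index[key], (timestamp, phrase))

annotate("photon", 2.0, "smallest element of a set")
annotate("photon", 1.0, "not a material point")

print(index["photon"][0][1])  # earliest annotation first
# → not a material point
```

Whether thousands of annotations per token are feasible is a storage question, but the structure itself is ordinary; the claimed novelty is that the annotations are produced automatically.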
SQL is over.
Re: Free model
You just don't understand what Microsoft is creating! Windows is now based on personal artificial intelligence, which means that advertisements are targeted at a specific person, knowing what he really needs. This is not garbage advertising, as before; this is Microsoft entirely revolutionizing advertising.
“Larry Ellison may want to create a central health records database for the United States, but given that he's 78 years old, he'd be very lucky to see the end of the project.”
Of course, I do not know and can only guess what Oracle has, but based on the description of the technology that Oracle itself has published, I can conclude that Oracle is perhaps close to commercializing its product. At the same time, it seems that in this non-SQL database there will be no need for manual classification of information into categories, only search by meaning. So Larry has a good chance of seeing this new database within a very few years. And the mistakes that were made are fine: it is impossible to do new things without trials and failures.
This is what I meant by saying "Oracle has the only solution", even if others may have it too and Oracle could lose the competition.
The interference pattern, in Young's 200-and-something-year-old double-slit experiment, appears on the screen when the width of the slits approaches the wavelength of the emitted monochromatic light. If the width of the slits is increased, the illumination of the screen increases, but the sharpness of the minima and maxima of the interference pattern falls until it disappears completely. This experiment is practically the same story as quantum entanglement.
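The observation this starts from can be checked against the standard Fraunhofer two-slit formula, where a single-slit diffraction envelope multiplies the interference fringes and suppresses the off-axis maxima as the slit width grows. A quick numeric sketch (the wavelength and slit dimensions are arbitrary illustrative values):

```python
import math

def double_slit_intensity(theta: float, wavelength: float,
                          slit_width: float, slit_separation: float) -> float:
    """Textbook Fraunhofer two-slit intensity, normalized to 1 at theta = 0."""
    beta = math.pi * slit_width * math.sin(theta) / wavelength        # diffraction
    gamma = math.pi * slit_separation * math.sin(theta) / wavelength  # interference
    envelope = (math.sin(beta) / beta) ** 2 if beta else 1.0
    return envelope * math.cos(gamma) ** 2

# Intensity of the first off-axis interference maximum (500 nm light,
# 5 um slit separation) for a narrow and a wide slit: widening the slit
# drives the envelope down and the fringe fades.
theta = math.asin(500e-9 / 5e-6)
print(double_slit_intensity(theta, 500e-9, 0.5e-6, 5e-6))  # ≈ 0.97
print(double_slit_intensity(theta, 500e-9, 3.0e-6, 5e-6))  # ≈ 0.25
```

This is only the standard wave-optics account of the fading fringes; it says nothing about the atomic-excitation interpretation argued below.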
In the double-slit experiment, we are talking about the excitation of atoms at the edges of the slits, due to the photoelectric effect. That is, due to the inclusion of extra photons in the atoms, which takes them out of a more-or-less stationary state. Under its influence the direction of movement of the photons changes: instead of the shortest path they choose others. It is also obvious that the minima and maxima are set by the spins of electrons, as well as by the doubtful possibility that the other nucleons (forming the atoms) influence the process.
This photoelectric effect exists because these atoms are accumulations, not material points, with no fixed, limited number of elements (a material point has no such thing). Indeed, the fact that the interference pattern blurs (as the width increases) clearly confirms this: the capture force on passing photons (due to excitation) decreases with the distance to the centre of the atoms, which is confirmed by Newton's and Coulomb's laws rewritten for accumulations.
In the case of quantum entanglement, photons can interact because they are accumulation points; and the interaction can be seen because they are sets with very few elements. Hence atoms aren't material points either! But it is problematic to detect the same entanglement for atoms, because of their number of elements, even if it exists.
Re: 79, 75, 77
The guys have no theory, just an experiment without one; but Einstein had a constant at the basis of his Relativity. I looked at his constant from the reverse side: instead of the speed of light I have the photon-acceptance time into a set. Also, in my theory, I have only one clock. Around this time constant, through the Avogadro number, a theory is built, confirmed, for example, by this Nobel quantum entanglement: everything is an accumulation and not a material point. Which is trivial, right? But the guys got a Nobel for this!.. not knowing what they got the Prize for.
Re: Measuring a property does not set it...
Ok, I am "a Grade A kook". What about the fact that quantum entanglement proves the need to abandon the notion of material points in Physics, points which have clearly defined boundaries and an equally defined number of elements? A notion introduced more than 300 years ago? Indeed, the fact that entanglement exists clearly indicates that photons are accumulation points without any quantitative restrictions. I can be crazy, insane, whatever! But explain why my proposed interpretation of quantum entanglement is confirmed by the two-slit experiment, which also confirms my claim that material points must be abandoned in Physics. Why must we continue with material points?
Re: 79, 75, 77
Why do you need Einstein? I took his throne. From now on you can discuss what I did and didn't do instead. My Quantitative theory, as a continuation of Einstein's Relativity but without relativism, gives me the right to sit on his and Newton's throne: my theory has already been proven experimentally.
Re: Measuring a property does not set it...
Therefore, quantum entanglement is explained very simply: accumulation points have no boundaries; any such point interacts with everything in the universe. The fact that "everything in our universe is an accumulation point" is proved by many experiments, for example by the double-slit one. That is, the Nobel Prize was awarded for proving that photons are not material points but accumulation points, since when observed they become parts of sets. Whereas material points are not sets.
Re: Measuring a property does not set it...
You can find citations in English and Russian.
The Quantitative theory coincides with Einstein's and is its continuation; without space, speed, acceleration, energy, and so on and so forth; Set Theory is used, where the photon is the smallest element.
Postulate I: In a given volume there is only this exact number of elements. (See the periodic table.)
Postulate II: The inclusion time of the minimum element in any set is the minimum possible. (The speed of light.)
And everything is deduced from a single axiom:
— Any point of accumulation contains one, two, three or more elements. (Cantor)
Which makes the photon an exception and a material point, one which doesn't exist among points of accumulation (the string of String Theory).
Re: Measuring a property does not set it...
My Quantitative theory fully reproduces all the constructions of Einstein, Schrodinger, Poincare, and all of Quantum Physics, while I started from the Avogadro number. That is, I came to exactly the same conclusions purely theoretically, and proved them with many experiments: for example the double-slit one, as well as Lebedev's experiments on light pressure and the "displacement current".
It is all about the accuracy of cognitive AI search. For example, why were giant models like BERT created? Because, even without really understanding every word and expression, it will still be possible to find something, even if 99% of the relevant information is skipped!
In that new AI database, or rather the many private and not-so-private databases that are intended to replace the Internet, the accuracy is absolute. That is, there is no question of losing a grain of information! The key to this miracle lies in understanding every word and every phrase in its context, and in as many annotations as humanly possible. And also in taking subtexts into account: what is not explicitly mentioned in the text and context (dictionary and encyclopedia definitions, for example). Only if and when such clarity is achieved does the Internet turn into databases.
Re: Too late I think
The Open Internet shall very soon be replaced by a number of private and national databases: Microsoft and Apple, Chinese and Russian, French and German, etc. At the same time, there is reason to assume that these will be very tightly controlled databases. Indeed, why monitor users when you can control the content? The time of relative freedom is relatively over: the Open Internet will remain as the Dark Internet, the place for drugs, prostitution, etc. But the time of absolute privacy has almost come...
The Internet is completely outdated and will end within a very few years.
The idea of the Internet is that there is a set of documents, websites and images in which the user looks for what he needs. AI has fundamentally changed this concept! From now on those will search for the end user themselves, while he stays completely passive and gets his long-desired, complete privacy. That is, espionage à la Google and FB loses all meaning, along with their business model, because all their profiles are replaced by personal AIs.
How? Everything is now annotated by texts, on the basis of which individual profiles (lexical clones, AIs) can easily be made. Therefore, fortunes are to be made by those who provide the programs for creating AIs, the same on electronic devices, and the final matching. Real fortunes, since we are talking about hundreds of billions and trillions; and the process of destroying the Internet and replacing it with the Gellernet won't take long.
“But this raises the question of whether Teradata can grab new customers…”
No, Teradata can attract neither new nor old customers: the time of SQL ideology and of databases that manually process raw data is over. From now on databases will be created by AI, which will automatically process the raw data, annotating and structuring it. Without humans involved. Teradata, meanwhile, relies on manual work.
As I said earlier and say now: a quantum computer is not needed. AI produces the desired superposition by default, because it is an intuitive system.
Indeed, during parsing each obtained phrase acquires its own weight, which characterizes its importance. For example, a weight equal to one says that the phrase is the only one in the text, and that the text is laconic. In most cases weights are irrational numbers, which must be rounded to natural or rational ones, because otherwise it is impossible to perform any mathematical operations. In sum, the rounding creates the desired superposition, which makes the idea of quantum computing... not needed.
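As a toy illustration of the weighting idea: a hypothetical inverse-frequency weight (which is generally irrational) rounded so that ordinary arithmetic can be done on it. The formula here is invented for the sketch, not taken from any described system:

```python
import math
from collections import Counter

def phrase_weights(phrases: list[str], decimals: int = 3) -> dict[str, float]:
    """Toy weighting: rarer phrases weigh more (a logarithmic, generally
    irrational weight), then each weight is rounded for later arithmetic."""
    counts = Counter(phrases)
    total = len(phrases)
    return {p: round(math.log(total / c) + 1, decimals) for p, c in counts.items()}

w = phrase_weights(["only phrase"])
print(w["only phrase"])  # a lone phrase in a laconic text gets weight 1.0
# → 1.0
```

With this toy formula a phrase that is the only one in its text gets exactly weight 1, matching the example in the comment; repeated phrases in longer texts get rounded irrational weights.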
There is no way for me to publish anything in English, because I am an outlaw, thanks to my patents. In Russian, for the same reason.
The difference is in the measures and in the philosophies used: the Quantitative theory uses the Internal Relations theory of Analytical Philosophy, of Hegel and Bradley. So far, only the External Relations theory, that of Russell and Moore, has been used in physics; for instance by Kurchatov, who was an engineer and not a scientist.
As a measure, quantity is used, of the Avogadro number, instead of distance, speed, energy, etc. In the spirit of Set Theory: an electron cannot be removed no matter what, unless the atom is destroyed; so nuclear fusion cannot be done.
The first postulate of the Quantitative theory: in a given volume there is only such a number of elements, for example 100 photons (see the Avogadro number and the periodic table). This theory has been proved by, among others, double-slit experiments. This means that the electron must stay in the atom and cannot be removed without destroying it. Then it is impossible to reach the nucleus, no matter how the atom is heated. Controlled nuclear fusion is impossible and is a scam.
Re: Wait, What?
Individual Artificial Intelligences, as individual profiles, have somewhere between 200,000 and 500,000 unique keys, and perhaps even more. At the same time, such profiles are continuously changing, because they are not static. Consequently there can be no talk of any theft or forgery of such keys, even theoretically: it is impossible to fake something that changes dynamically each time it is used. Microsoft already makes these individual AIs.
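The "changes each time it is used" idea resembles an ordinary hash chain, where the current key is consumed and replaced on every use. A minimal sketch of that analogy only; this is purely illustrative and has nothing to do with Microsoft's actual implementation:

```python
import hashlib

def next_key(current: bytes) -> bytes:
    """Derive the next profile key from the current one; the old key is
    discarded after each use, so a stolen snapshot quickly goes stale."""
    return hashlib.sha256(current).digest()

key = b"initial profile state"
for _ in range(3):  # the key rotates on every use
    key = next_key(key)
print(key.hex()[:16])
```

Note that a hash chain only makes old keys useless; whether hundreds of thousands of rotating keys make forgery impossible "even theoretically" is a much stronger claim than this sketch supports.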
Such an AI operating system has, as its foundation, the AI assignment of certain functions to segments of everyday language, just as until now the same segments were assigned through programming languages. At the same time, the search for the segments is performed using AI search technology, in response to a request made in the common language. This approach removes the need for training, studying and other hideous tasks, without which, for example, it is impossible to use Microsoft Windows or Apple. Or Chrome.
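The assignment of functions to language segments could be sketched as a simple phrase-to-function table; the commands and handlers below are hypothetical, and the exact-match lookup is a stand-in for the AI search described above:

```python
# Hypothetical AI-OS dispatch: everyday-language segments are bound to
# functions, the way keywords are bound in a programming language.
def open_mail() -> str:
    return "mail opened"

def show_weather() -> str:
    return "sunny"

SEGMENTS = {
    "check my mail": open_mail,
    "what's the weather": show_weather,
}

def handle(request: str) -> str:
    # Stand-in for AI search over segments: here, a plain exact match.
    action = SEGMENTS.get(request.lower().strip())
    return action() if action else "segment not recognized"

print(handle("Check my mail"))
# → mail opened
```

The hard part the sketch hides is precisely the search step: mapping an arbitrary everyday-language request onto the right segment.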
The databases used in the AI operating system will again be radically different from those for Windows or Apple, or any other system; which again means a total democratization of computer handling. These new databases will be AI databases, which require no training, no special skills, no formulas and no spreadsheets, as SQL does.
This Windows operating system is tragically obsolete, as are Apple's, Linux and the rest. Accordingly, all the means of protecting them will die along with them.
Indeed, programming and program code are dying before our eyes; the modern operating systems that are coming soon are designed specifically for AI.
Above, in another comment, I wrote that AIs are intuitive systems, that is, able to understand everything, without yielding in any way to the persons from whom they were created (lexically cloned). That is, it is enough to choose the one lexical clone that is most suitable for the customer (chosen by psychological and intellectual merits, maximally compatible with the client) and has the right qualifications, and the trick is done! You may start to worry!