Datasets
Isn't there also a tendency to train AIs with publicly available datasets like ImageNet? An attacker can improve his chances by using the same dataset to train his test adversary.
Adversarial attacks that trick one machine-learning model can potentially be used to fool other so-called artificially intelligent systems, according to a new study. It's hoped the research will inform and persuade AI developers to make their smart software more robust against these transferable attacks, preventing malicious …
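To make the transfer idea concrete, here's a minimal sketch, assuming a toy numpy logistic-regression setup: the attacker trains a surrogate on the same public data as the victim, crafts FGSM-style adversarial inputs against the surrogate, and checks whether they also fool the separately trained target. All of the data, models, and numbers here are illustrative stand-ins, not anything from the study itself.

    # Toy transfer attack: craft adversarial inputs against a surrogate model
    # trained on the same public data as the victim, then test them on the
    # victim. Pure numpy; logistic regression stands in for both models.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 20))
    y = (X @ rng.normal(size=20) > 0).astype(float)

    def train(X, y, steps=500, lr=0.1):
        w = np.zeros(X.shape[1])
        for _ in range(steps):
            p = 1 / (1 + np.exp(-(X @ w)))
            w -= lr * X.T @ (p - y) / len(y)   # logistic-loss gradient
        return w

    def fgsm(X, y, w, eps=0.5):
        # Fast Gradient Sign Method: nudge each input along the sign of the
        # loss gradient with respect to that input.
        p = 1 / (1 + np.exp(-(X @ w)))
        return X + eps * np.sign(np.outer(p - y, w))

    surrogate = train(X, y)   # the attacker's copy, built from public data
    victim = train(X + rng.normal(scale=0.1, size=X.shape), y)  # the real target

    X_adv = fgsm(X, y, surrogate)            # attacks crafted on the surrogate
    acc = lambda w, X_: (((X_ @ w) > 0) == y).mean()
    print(f"victim, clean inputs:        {acc(victim, X):.2f}")
    print(f"victim, transferred attacks: {acc(victim, X_adv):.2f}")

The attacks degrade the victim even though they were never computed against it, which is exactly the transferability the article is talking about.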
Train an AI to recognise something.
Train the next AI to fool the first AI.
Train a third AI to recognise the attempts to fool the first AI.
Train the fourth AI to fool the third AI.
Train the fifth AI to recognise the attempts to fool the third AI.
Wash, rinse, repeat (a toy version of this loop is sketched below).
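Under the same illustrative numpy logistic-regression assumptions as the earlier sketch: each round, an "attacker" perturbs inputs to fool the current recogniser, and the recogniser is retrained on those attempts (i.e. adversarial training).

    # Wash, rinse, repeat: alternately fool the current model and retrain it
    # on the fooling attempts. Toy numpy logistic regression throughout.
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 10))
    y = (X @ rng.normal(size=10) > 0).astype(float)

    def train(X, y, steps=300, lr=0.1):
        w = np.zeros(X.shape[1])
        for _ in range(steps):
            p = 1 / (1 + np.exp(-(X @ w)))
            w -= lr * X.T @ (p - y) / len(y)
        return w

    def fool(X, y, w, eps=0.4):
        p = 1 / (1 + np.exp(-(X @ w)))
        return X + eps * np.sign(np.outer(p - y, w))  # attack the current model

    w = train(X, y)                       # first AI: recognise something
    for r in range(3):
        X_adv = fool(X, y, w)             # next AI: fool the previous one
        w = train(np.vstack([X, X_adv]),  # next AI: recognise the attempts
                  np.concatenate([y, y]))
        robust = (((fool(X, y, w) @ w) > 0) == y).mean()
        print(f"round {r}: accuracy against fresh attacks = {robust:.2f}")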
Train the first AI on a subset of the training data.
Train the second AI on a different subset.
It's been done, with assorted variations. Usually even with a single model you have a held-aside portion of the corpus for purposes such as testing and determining correction parameters.
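In the same toy numpy setting as above (purely illustrative), the subset idea plus a held-aside test portion might look like this: two models train on disjoint halves of the corpus, vote by averaging their scores, and are evaluated only on data neither has seen.

    # Two models on disjoint training subsets, plus a held-aside test set.
    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.normal(size=(1500, 10))
    y = (X @ rng.normal(size=10) > 0).astype(float)

    idx = rng.permutation(len(X))
    test, half_a, half_b = idx[:300], idx[300:900], idx[900:]

    def train(X, y, steps=300, lr=0.1):
        w = np.zeros(X.shape[1])
        for _ in range(steps):
            p = 1 / (1 + np.exp(-(X @ w)))
            w -= lr * X.T @ (p - y) / len(y)
        return w

    w_a = train(X[half_a], y[half_a])   # first AI, subset A
    w_b = train(X[half_b], y[half_b])   # second AI, subset B

    # Vote by averaging scores, evaluated only on the held-aside portion.
    score = (X[test] @ w_a + X[test] @ w_b) / 2
    print(f"ensemble accuracy on held-out data: {((score > 0) == y[test]).mean():.2f}")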
ML is steadily moving toward more and more complex architectures anyway - certainly more complex than just "train several models on different corpora and then let them vote". Graph Network (GN) architectures currently look like the most plausibly practical generalization to me, at least for problem domains where sufficient hardware resources can be applied. GNs let you combine lots of different models in various complex ways.
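Here's a deliberately oversimplified sketch of the GN idea (in the spirit of Battaglia et al.'s framework, not a faithful implementation): node features could be the outputs of different models, and one message-passing round combines them. Weights are random here; a real system would learn them.

    # One Graph Network block: edge update -> per-node aggregation -> node update.
    import numpy as np

    rng = np.random.default_rng(3)

    # Three "models" each emit a 4-dim output: these become the node features.
    nodes = rng.normal(size=(3, 4))
    # Directed edges (sender, receiver) saying which models inform which.
    edges = [(0, 1), (1, 2), (2, 0), (0, 2)]

    W_edge = rng.normal(size=(8, 4))  # [sender ++ receiver] -> message
    W_node = rng.normal(size=(8, 4))  # [old node ++ aggregated msgs] -> new node

    def gn_block(nodes, edges):
        # 1) per-edge update: a message from each sender/receiver pair
        msgs = [(r, np.tanh(np.concatenate([nodes[s], nodes[r]]) @ W_edge))
                for s, r in edges]
        # 2) per-node aggregation: sum the incoming messages
        agg = np.zeros_like(nodes)
        for r, m in msgs:
            agg[r] += m
        # 3) per-node update: combine old features with aggregated messages
        return np.tanh(np.concatenate([nodes, agg], axis=1) @ W_node)

    print(gn_block(nodes, edges))  # new node features after one round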
How do we, or biology in general, do it? We use multiple models for a single sense/data set, or we use multiple senses/data sets. We ask additional people, etc.
So you could use two differently modeled AIs. This lowers the size of the collision space of errors or exploits. You could use multiple types of sensor: IR and RGB sensors, 3D or sound (if distinguishing an African or European swift).
Finally, we could use the AI with human assistance... though that really only works better for false positives, not false negatives. Theoretically, though, you could retrain against known exploits as they are discovered by the human part of the check this way.
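A toy sketch of that combination, with the same illustrative numpy stand-ins as earlier: two differently trained classifiers look at two different "sensor" views of the same scene, and anything they disagree on gets escalated to a human.

    # Two models, two sensor modalities, disagreement goes to a human.
    import numpy as np

    rng = np.random.default_rng(4)
    latent = rng.normal(size=(500, 6))              # the true scene
    y = (latent.sum(axis=1) > 0).astype(float)
    rgb = latent + rng.normal(scale=0.3, size=latent.shape)   # "RGB" view
    ir = -latent + rng.normal(scale=0.3, size=latent.shape)   # "IR" view

    def train(X, y, steps=300, lr=0.1):
        w = np.zeros(X.shape[1])
        for _ in range(steps):
            p = 1 / (1 + np.exp(-(X @ w)))
            w -= lr * X.T @ (p - y) / len(y)
        return w

    pred_rgb = (rgb @ train(rgb, y)) > 0
    pred_ir = (ir @ train(ir, y)) > 0

    agree = pred_rgb == pred_ir
    # An exploit now has to fool both models on both sensors at once,
    # shrinking the collision space of shared errors.
    print(f"agreed on {agree.mean():.0%} of inputs; "
          f"{(~agree).sum()} escalated to a human reviewer")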
Ah, it's always good to hear from one of the commentariat's resident kooks. Ilya, I don't think I've ever seen anyone else use the phrase "lexical clone" the way you do, but if you have a reference for some text which does, I'd like to read it.
Is it worth pointing out that 1) humans can also be deceived, or that 2) it is not a priori obvious that there is any functional distinction between human intelligence and the universe of possible "artificial models"? Attempts to prove such a difference generally either appeal to untestable attributes or rather suspect arguments about formal power (viz. Penrose).
(Also, I have to say that I skimmed your patent and I'm not sure I see anything very novel there, except perhaps your compatibility formula. Expanding a kernel phrase into a small corpus using synonyms and grammatical transformations is pretty well established in NLP. But I didn't look at it terribly closely.)
I. 2) it is not a priori obvious that there is any functional distinction between human intelligence and the universe of possible "artificial models"?
As long as sufficiently long tuples are formed, there is no difference between humans and Lexical Clones; in mathematics, a tuple is a finite ordered list (sequence) of elements. Speaking of my Lexical Clones, I meant that our minds are sets of tuples that can somehow be fixed as sets of related patterns.
II. There is no difference between how we humans think and how a computer thinks, if and when comprehensive tuples describing as many situations as possible are formed. That is how Virtual Assistants (a synonym for my Lexical Clones) are created.
III. The patent office granted them.
Northeaster University
As an alum of Northeaster[n] myself, I'd like to note that it's a pretty good place, and certainly better than that crummy University of Westchristmas.
Also, this typo has inspired me to think of the old place as "Nor'easter University", a pun which until now somehow escaped my attention. (Note Northeastern is in Boston, where the regional term "nor'easter", for a strong storm coming in from the Atlantic, can be heard.) Indeed, it now strikes me as rather a shame that the university's athletic congregations are not called the Northeaster Nor'easters, which sounds a hell of a lot tougher than "Huskies". Plus huskies are the mascots of approximately a million colleges and universities in the US, including U Conn, which is practically next door.