You've been a Baidu boy! Tech giant caught cheating on AI tests

Baidu has been shot in its liquid metal head for cheating in a standardised and independent Artificial Intelligence test. Hosted by Stanford University's vision lab, the Large Scale Visual Recognition Challenge (ILSVRC) saw Baidu's algorithms compete alongside those from Google, Microsoft, Apple and Facebook's FART, among …

  1. Anonymous Coward

    Not surprised

    Being exposed to this culture every day, one quickly notices that it is a matter of pride - of not losing face - no matter the price. If you get caught, well, you just weren't good enough.

  2. Bakana

    Rule 1.

    The First (and so far Unbreakable) Rule of Computer Programming is:

    You Cannot program a Computer to do Anything you do not know how to do Yourself.

    So, yeah. Until we figure out how WE Think, we're unlikely to get "Artificial Intelligence" by anything except some sort of hugely unlikely Accident.

    It's like that old maths cartoon: several blackboards full of equations, with a small section in the middle that reads: "And Here, a Miracle Happens".

    One mathematician is pointing to that section saying:

    "I think this part needs a little more work."


    1. Michael Wojcik

      Re: Rule 1.

      The First (and so far Unbreakable) Rule of Computer Programming is:

      You Cannot program a Computer to do Anything you do not know how to do Yourself.

      That's either wrong, or must be interpreted so generously that it's useless.

      Any number of evolutionary algorithm systems have produced novel domain-specific algorithms that are entirely unanticipated by the system developers, for example. There are a number of such results from the "Artificial Life" camp, back when that was still hip. (I don't know that many people are working in the area now.) So-called "emergent competitors" were generated over many generations of the system using genetic algorithms, simulated annealing, and the like.
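      The point is easy to demonstrate in miniature. The sketch below is a toy genetic algorithm (the names and parameters are illustrative, not from any of the systems mentioned above): the programmer specifies only selection, crossover, and mutation, never the solution itself, yet the population converges on a high-fitness bitstring nobody hand-coded. OneMax is a deliberately trivial fitness function; the mechanism is the same one that produces genuinely novel results in richer domains.

```python
import random

random.seed(0)

TARGET_LEN = 20           # bitstring length; fitness = number of 1s ("OneMax")
POP, GENS, MUT = 30, 60, 0.05

def fitness(ind):
    # The programmer defines only how to *score* a candidate,
    # not how to construct a good one.
    return sum(ind)

def mutate(ind):
    return [b ^ 1 if random.random() < MUT else b for b in ind]

def crossover(a, b):
    cut = random.randrange(1, TARGET_LEN)
    return a[:cut] + b[cut:]

# Random initial population: no solution is built in anywhere.
pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)] for _ in range(POP)]

for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]              # truncation selection: keep the best half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
print(fitness(best))                      # converges toward TARGET_LEN
```

      Swap the fitness function for something the programmer cannot solve directly and the same loop will still search for - and often find - an answer.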

      Alternatively, there are many machine-learning systems whose models perform discrimination tasks using characteristics that are not consciously obvious to the developers who built them. The fact that human beings can also perform such tasks, or that we may do so using unconscious models that evaluate similar characteristics, does not mean (in any meaningful sense) that we "know" how to do what the machine is doing.
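      The same idea in miniature: a perceptron, below, starts with zero weights and learns a separating rule purely from labelled examples. The labelling rule (`x + y > 1`, a made-up example for this sketch) is used only to generate the training data; the learning loop is never told it, yet the learned weights come to encode it.

```python
import random

random.seed(1)

# Hypothetical task: label a 2-D point 1 if x + y > 1, else 0.
# This rule only generates the data; the perceptron never sees it.
def true_label(x, y):
    return 1 if x + y > 1 else 0

# Sample points, skipping those very near the boundary so the
# classes are cleanly separable.
data = []
while len(data) < 200:
    x, y = random.random() * 2, random.random() * 2
    if abs(x + y - 1) > 0.1:
        data.append((x, y))
labels = [true_label(x, y) for x, y in data]

# Perceptron: weights start at zero and change only in response to errors.
w = [0.0, 0.0]
b = 0.0
for _ in range(50):                                   # training epochs
    for (x, y), t in zip(data, labels):
        pred = 1 if w[0] * x + w[1] * y + b > 0 else 0
        err = t - pred
        w[0] += 0.1 * err * x
        w[1] += 0.1 * err * y
        b += 0.1 * err

correct = sum(1 for (x, y), t in zip(data, labels)
              if (1 if w[0] * x + w[1] * y + b > 0 else 0) == t)
accuracy = correct / len(data)
print(accuracy)
```

      The developer wrote the update rule, not the decision boundary; with a less transparent model and higher-dimensional data, the learned "characteristics" quickly stop being inspectable at all.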
