AI finally understands primitive sketches – aka marketing presentations

Artificial intelligence scientists have developed a neural network that understands incomprehensible scrawled drawings of the sort created by children, marketing departments, architects, design creatives, and so on. The academic developers of the "Sketch-a-Net" software proudly boast that their brainchild is actually better at …

  1. Anonymous Coward

    The Apple Newton had a primitive form of this in the 1990s - it could straighten lines and circle circles (so to speak) in sketches. Pretty impressive for the time.

    1. Sam Adams the Dog

      OK, so it could circle the square. But could it square the circle?

  2. Sam Adams the Dog

    As in all else, Orwell is correct.

    but... but... if it's so much better than humans, how are humans able to judge its accuracy?

    -Confused in NY.

  3. Your alien overlord - fear me

    "Timothy Hospedales" - shouldn't he be working in a hospital ?

    Mines the white one

  4. Anonymous Coward

    Does this mean ...

    .... we'll finally be able to make sense of Iain Duncan Smith's requirements for Universal Credit? Or will they still be indistinguishable from an H-Block dirty protest?

  5. Anonymous Coward

    I'm not sold on this, let's run it up the flagpole and see who salutes.

    1. chris swain

      Blue sky thinking from thinkfluencers. Think of the synergies.

      1. LINCARD1000

        You forgot to put "paradigm" in there somewhere :-)

    2. dotdavid

      I think it's visual solutioneering at its finest

  6. Doctor Syntax Silver badge

    This makes a big assumption...

    ...namely that there's actually any meaning in marketing and other creatives' presentations in the first place.

  7. Anonymous Coward

    speaking of porn

    Speaking of pornography, it would be useful if such algorithms could reliably apply categories and tags to images based on a training set.

    1. tony2heads

      Re: speaking of porn

      How will it distinguish it from Art?

      1. Anonymous Coward

        Re: speaking of porn

        It doesn't need to. It just needs to tag objects and concepts.

  8. TeeCee Gold badge

    Extra mile?

    All well and good, but while it's riffling through and decoding a slide deck, can it also fill out a Wankword Bingo card for me?

  9. Chris G Silver badge

    Screw the Pooch

    Give it a week's diet of those nasty Ikea assembly diagrams. If that doesn't start it on a ROTM course nothing will.

  10. Pascal Monett Silver badge

    So it is 1% better than humans, under specific conditions?

    Well then use it in those conditions, or hire an intern that you don't pay.

    Oh well, any bit of progress is good, I guess.

  11. JeffyPoooh

    I don't mean to get all Philosophy 101 on you, but...

    Some *humans* gather up a suite of sketches, and arbitrarily assign semantic definitions to each example. Then, a neural network, one that THEY trained, amazingly happens to agree with THEIR assigned definitions.

    To make my point crystal clear, in your mind, replace the sketches with Rorschach Inkblots.

    Who decided what the "correct" answers were?

    And who trained the neural net?

    See the issue?

    1. chris swain

      Re: I don't mean to get all Philosophy 101 on you, but...

      Technically incorrect

      Semantic meaning is surely assigned by the intention of the sketcher in deciding what to draw.

      People don't train AI neural networks; data does, although people do select the training data and regime.

      'Correct' presumably means that the result matched the intention of the person doing the drawing.

      The article states that the algorithm relies on knowing the order in which the marks of the sketch were made so giving it a Rorschach test is irrelevant. Aren't Rorschach tests about drawing out subconscious influences?

      I see no issues here

      If you really must throw Rorschach tests at AIs use an image recognition AI (I'd be interested to see the results)

      1. JeffyPoooh

        Re: I don't mean to get all Philosophy 101 on you, but...

        @chris swain

        The primary thrust of your rebuttal is flawed. (Short version:) Not just beauty, but also meaning, is in the eye of the beholder. There's no objectively "correct" answer, by definition.

        1. chris swain

          Re: I don't mean to get all Philosophy 101 on you, but...

          I disagree. The article seems to refer to generally accepted pictorial representations of fairly basic objects, not abstract concepts.

          If I look at a photograph of a house and say "that's a car" my subjective interpretation is likely to be met with derision. You can argue as much as you like that my statement is valid from a subjective point of view but I'd still be wrong. Meaning on the other hand is a separate thing, a picture of a house might have a different meaning to me.

          I've never studied philosophy, so maybe in that rarefied world there is no objective reality, in which case any classification problem is presumably impossible to solve. But back here in the real world there does appear to be a basic objective reality.

  12. Anonymous Coward

    There are at least a couple of ways to get human-level AI. A. Take the simple, effective algorithms that are already known and engineer them into an AI. B. Keep elaborating the algorithms and throwing more and more hardware at the problems until you are using, say, 100,000 CPU cores.

    Route B is the way they are going. Eventually of course they will realize that they can replace this algorithm with a simpler one that runs 1000 times faster, and that algorithm with one that runs 100 times faster, etc. Then you are back down to 1 core. Eventually you will see human level AI, but there will be delays due to a lack of engineering insight. Even with the engineering solution you are going to need a good amount of low-latency memory. 128 Gbytes with a latency of less than 20 ns would be a good starting point. You could actually put 1 Tbyte of SDRAM on a single PCB and for engineering reasons run it below its normal speed. It would be about $15,000 for 1 Tbyte at 20 ns.

    Just as an example of a simple algorithm: given a hash function h(x) and a sequence a, b, c, d, e, f, ... the hash walk starting from some seed value x is h(x+a), h(h(x+a)+b), h(h(h(x+a)+b)+c), h(h(h(h(x+a)+b)+c)+d), ...

    If you showed that to most AI researchers they would dismiss it out of hand; if you showed it to an engineer they would start thinking "hey, I could do this with the hash fingerprints, and wow, I could do a high-compression pattern detector using it that way."
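    The hash walk described above can be sketched in a few lines of Python. Note the assumptions here: the comment names no particular hash function, so SHA-256 stands in for h, and "+" is read as byte concatenation.

    ```python
    import hashlib

    def h(x: bytes) -> bytes:
        # The hash function h; SHA-256 is an arbitrary stand-in.
        return hashlib.sha256(x).digest()

    def hash_walk(seed: bytes, sequence):
        # Yields h(x+a), h(h(x+a)+b), h(h(h(x+a)+b)+c), ...
        # where '+' is byte concatenation (an assumption).
        state = seed
        for item in sequence:
            state = h(state + item)
            yield state

    # Each yielded value fingerprints the entire prefix seen so far, so
    # matching fingerprints imply (with overwhelming probability) matching
    # prefixes - the property a pattern detector could exploit.
    fingerprints = list(hash_walk(b"x", [b"a", b"b", b"c"]))
    ```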

    There is a big difference in mindset. I think it is still possible that human level AI within 5 years. Google clearly has the full skill set required. It is just a question of shifting engineering types over to the AI department and letting them do what engineers do.

    1. JeffyPoooh

      "I think it is still possible that human level AI within 5 years."

      SeanS4 (stop tailgating): "I think it is still possible that human level AI within 5 years."

      People have been thinking that for *decades*.

      A bit like fusion power, which is always about 40 years out.

      And Flying Cars...

      Eventually it'll come true; and then some dim-wit will say, "See? I TOLD you so..."

      1. Anonymous Coward

        Re: "I think it is still possible that human level AI within 5 years."

        Yeah, I know. But if you had read as many of the AI papers out there as I have, you would put your head in your hands and cry. I would propose the hash algorithm as a mini psychometric test to see whether an AI researcher has basic engineering insight.

  13. Kane Silver badge

    Sounds like... an anti-CAPTCHA piece of software to me.


Biting the hand that feeds IT © 1998–2020