How Google's Smart Compose for Gmail works – and did it fake its robo-caller demo?

Hello, here's our weekly AI roundup. We have more information on how Google's sentence prediction in Smart Compose for Gmail works, as well as some questions about its Duplex robo-caller system. Also, decision trees to classify the mating calls of frogs and toads to study climate change. Too lazy? Let AI write your emails …

  1. derfer

    Isn't smart compose just the same as when you are typing on your phone and it tries to guess the next word you want (then shows it at the top of the keyboard)?

    Am I the only one that has tried to write whole text messages using just the proposed words at the top of the keyboard, whilst trying to retain the meaning of the message and some semblance of sentence construction?

    1. teknopaul

      swiftkey

      Swiftkey is great for that. However it requires near-human input to come up with something as original-sounding as covfefe.

  2. Anonymous Coward

    Risky?

    Do... you... have... the... stuff... ready... to... buy... I... need... the... strongest... hit...

    1. Destroy All Monsters Silver badge

      Re: Risky?

      "No HAL, you are going down!"

  3. Andy 73 Silver badge

    That last one....

    ...about the frogs.

    Casual reading suggests they've found a proxy for temperature measurement. That is not a good way to detect 'climate change'. Nor are changes in the local frog population (and by definition it will be *local*).

    1. Anonymous Coward

      Re: See also...

      Most proxies in science are labelled as "facts" that boost one argument over the other.

      (I'm expecting downvotes from both those supporting and refuting, as both sides use such proxies aggressively.)

  4. Destroy All Monsters Silver badge
    Holmes

    Meanwhile, Quanta Magazine got Judea Pearl on the horn

    To Build Truly Intelligent Machines, Teach Them Cause and Effect

    As much as I look into what’s being done with deep learning, I see they’re all stuck there on the level of associations. Curve fitting. That sounds like sacrilege, to say that all the impressive achievements of deep learning amount to just fitting a curve to data. From the point of view of the mathematical hierarchy, no matter how skillfully you manipulate the data and what you read into the data when you manipulate it, it’s still a curve-fitting exercise, albeit complex and nontrivial. ... I’m very impressed, because we did not expect that so many problems could be solved by pure curve fitting. It turns out they can. But I’m asking about the future — what next? Can you have a robot scientist that would plan an experiment and find new answers to pending scientific questions? That’s the next step.

  5. Destroy All Monsters Silver badge

    Errata

    AlphaGo Zero tops the list, guzzling more than 1000 petaflops per second per day

    That would be an acceleration, possibly driven by Dark Energy.

    The actual unit is "petaflop/s times days", i.e. a plain count of floating-point operations, not a rate.

    "1000 petaflops per second days" is 1000 petaflops/s for 1 day or 1 petaflops/s for 1000 days. Ergo, 86400 exaflops.

    1. Lusty
      Facepalm

      Re: Errata

      This whole paragraph was utter gibberish.

      "A petaflops per second per day is equivalent to performing about 1015 neural net operations per second a day."

      No, a petaFLOPS is 10^15 FLOATING POINT OPERATIONS PER SECOND. Unless those neural net operations happen to be precisely ONE FLOP each, it's not true.

      And all this talk of FLOPS per second per day. It's FLOPS, just FLOPS: the 'OPS' already means operations per second. You may use a petaFLOPS of compute for a day, but the wording didn't convey that at all. If you're consistently using one petaFLOPS of compute then you've no business tacking 'per second' or 'per day' on the end, as FLOPS already tells you everything you need to know.

  6. Mike 16

    Smart Compose

    I assume its email output will very closely resemble those timeless tomes allegedly generated by various managers it has been my misfortune to report to.

  7. Dan 55 Silver badge
    Big Brother

    "The model was trained on billions of, probably, mundane emails to nail the prediction process."

    Phew, just beat the GDPR deadline!

    1. macjules

      Re: "The model was trained on billions of, probably, mundane emails to nail the prediction process."

      Correction: “the model was trained on billions of mundane GDPR warnings, spams and desperate attempts by IT recruiters to get software developers to register with them”

      FTFY

  8. Only me!
    Angel

    Sod the story!

    That pic is fabulous!!!!

    Can the frog sue Google for colour infringement/copyright... or anything really :-)

    It has more chance than the monkey ever did of winning with the selfie!

  9. WinHatter
    Pint

    Did Google fake its Duplex demo? No answers here.

    Well, if they won't even use their AI to reply to journalists, it was obviously faked.

  10. SonofRojBlake

    The "amount of compute"???

    Really? Is this how far the language has sunk?
