Apple impasse sees China Mobile buy own speech tech

China Mobile has made a defensive move against Apple’s plans to launch a Chinese version of Siri by pursuing its own speech recognition investment. The carrier is poised to make a strategic investment in, or acquire, local voice recognition technology firm Anhui USTC iFlytek. The iFlytek technology is widely used in Android …

COMMENTS

This topic is closed for new posts.
  1. SuccessCase

    Voice recognition != Siri

    Again, The Register is wholly failing to understand what these features are all about. Voice recognition is one very discrete sub-component of AI: you can drop pretty much any voice recognition tech into pretty much any AI solution. Nuance, licensed by Apple, provides voice recognition. The latest version of OS X has just been released with voice dictation, which is pretty much what the Nuance tech does and nothing more: take speech input and transcribe it into word tokens. Voice dictation solutions simply display those tokens on the screen.

    Siri is all about what comes next: how those tokens (not necessarily displayed on screen, though in Siri's case they are, to confirm the speech has been transcribed correctly) are interpreted and acted on. There are multiple good solutions for the former, but the two stand-out ones have been Google's and Nuance's. Apple had to license from Nuance; they didn't really have a choice, seeing as the only other viable solution was owned by Google. To date, though, there is no really strong competition for the latter. Siri has thoroughbred credentials that competitor solutions lack.

    Having said that, the problem is so hard, and the history of failure with AI so long and densely littered, that even a very good AI solution, as distinct from a mere traditional "command interpreter", doesn't offer that much additional value. So Siri may be the best, but most people won't much notice what extra value being the best brings. Mainly it allows users to speak more naturally, because it retains a semblance of context. You can say "book an appointment with Jill for 7pm Thursday" and then later only have to say "move that appointment to 8pm". The problem is that we can't rely on Siri to have exactly the same powers of understanding context as humans do, so you don't escape the need to craft your commands to ensure Siri will understand them.

    So you still have to learn what will work (and get to feel stupid if you try what doesn't in public). Arguably you get to feel stupid more often, because the rules are more sophisticated than a straightforward command interface, so you are tempted to try a more natural style of language instead of learning a pattern of commands that will work every time it is correctly transcribed into tokens.
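    The two-stage split the comment describes (transcription yields bare word tokens; a separate interpreter acts on them while retaining context, e.g. resolving "that appointment") can be sketched roughly as below. This is a toy illustration only: the class, command patterns, and responses are invented for the sketch and are not any real Siri or Nuance API.

    ```python
    # Toy sketch of the split described above: ASR output is just word
    # tokens; a separate interpreter keeps conversational context.
    # All names here are illustrative, not a real API.

    class ContextualInterpreter:
        """Acts on transcribed tokens, remembering the last referenced item."""

        def __init__(self):
            self.appointments = {}    # name -> time
            self.last_referenced = None

        def handle(self, utterance):
            tokens = utterance.lower().split()  # stand-in for ASR token output
            if tokens[:3] == ["book", "an", "appointment"]:
                # e.g. "book an appointment with jill for 7pm thursday"
                name, time = tokens[4], " ".join(tokens[6:])
                self.appointments[name] = time
                self.last_referenced = name       # retained context
                return f"Booked {name} at {time}"
            if tokens[:2] == ["move", "that"] and self.last_referenced:
                # "move that appointment to 8pm" -- "that" resolved from context
                time = " ".join(tokens[tokens.index("to") + 1:])
                self.appointments[self.last_referenced] = time
                return f"Moved {self.last_referenced} to {time}"
            return "Sorry, I didn't understand"    # the "feel stupid" path

    interpreter = ContextualInterpreter()
    print(interpreter.handle("book an appointment with Jill for 7pm Thursday"))
    print(interpreter.handle("move that appointment to 8pm"))
    ```

    The point of the sketch is that only the second command works at all because the interpreter stored state from the first; a stateless command interpreter would hit the fallback branch instead.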

  2. PAW

    name calling

    Sorry to tell you, but it's Siri that looks stupid in public when it fails to understand natural language.

