iJohnny-come-lately
Yawn. I've been using voice search and the voice text input method ever since I got an Android phone.
Voice-transcription software has come to the iPhone from the long-time industry leader in speech-to-text software. Dragon Dictation (App Store link) is from Nuance, the developers of the Dragon NaturallySpeaking line of speech-to-text converters for Windows and the supplier of the recognition engine behind MacSpeech Dictate …
This isn't really about the iPhone. Voice commands are available on many different handsets. Input-based voice recognition is only really possible with server-side processing, which is similar to what Google's voice search does, but Nuance is the leader here in every sense when it comes to input.
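Roughly, that server-side approach amounts to the handset capturing the audio, shipping it off to a recogniser in the cloud, and getting text back. A minimal sketch of that round trip, with a made-up endpoint and response field purely to illustrate the idea:

```python
import requests  # third-party HTTP library, used here only for illustration

def transcribe(audio_path: str) -> str:
    """Send recorded audio to a (hypothetical) speech-to-text service and return the text."""
    with open(audio_path, "rb") as f:
        audio_bytes = f.read()

    # The phone does little more than capture and upload; the heavy
    # acoustic/language modelling happens on the server.
    response = requests.post(
        "https://speech.example.com/v1/recognize",  # made-up endpoint
        headers={"Content-Type": "audio/wav"},
        data=audio_bytes,
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["transcript"]  # assumed response field

if __name__ == "__main__":
    print(transcribe("memo.wav"))
```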
The reason this is free, I believe, is to improve quality and accuracy before selling a paid version. This will be part of a much bigger strategy for them as they move towards offering ODMs very accurate voice recognition systems that can then be sold as a true feature that actually works.
Voice recognition is certainly one of the next big frontiers for mobile devices, IMHO, despite the constraints of speaking aloud in public or private.
Invest in Nuance*
*If you do and the shit hits the fan, don't blame me, I'm just some Commentard!
Nice to see iPhone getting features that have been on Android for some time now. Who knows maybe iPhone will get Google Goggles as well by Summer 2010 :-) Shame iPhone won't be able to perform image recognition and voice-to-text *at the same time*. Maybe Apple will add real multitasking one day to support this.
but Android did it ages ago, and I'm pretty sure WinMo had it a while back as well. I am glad to see that the iPhone has finally got cut-and-paste, though... Only took what, 3 generations?
We really should get Apple to release a set of 3D glasses as an Apple world first. They seem to have a knack for coming in late to the party but pulling along a whole lot more friends, then claiming loudly that it's a party thrown by Apple itself.
I was about to post a similar sentiment - it seemed to work fine for my dad on a 133MHz Cyrix yonks ago, and I'd be shocked if an iPhone's ARM was less pokey than THAT bargain basement bit of silicon. PC voice recognition apps were all the rage then, before people got bored of them. Similarly it worked fine for medical dictation on a boggo Dell workstation (probably something like an 800MHz P-III) when I was working for the NHS about 5 years ago, and though that's probably a bit beefier than Apple's baby, it also had a fair bit more going on in the background - all of a domain-based XP install's overhead plus occasional visits from a remote terminal fairy that chucked imaging data at it (needing processing after capture on an aiiiiiincient Sun Sparc).
It just sounds like some massive excuse to have your speech and the resultant transcription going in and out of their servers for no good reason but a whole load of potentially nefarious ones. Plus add in the fact that it's almost certainly going to be compressed in order to travel through the tubes at anything resembling realtime speed, which needs a fair bit of power in and of itself for good quality (on a par with, oh, say, a 200~300MHz Pentium MMX), but would still probably compromise the accuracy vs just analysing the PCM original on the phone...?
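To put rough numbers on why they'd bother compressing at all (back-of-the-envelope, assuming 16 kHz 16-bit mono capture and a ballpark ~13 kbit/s speech codec, figures picked just for illustration):

```python
# Back-of-the-envelope bitrate comparison: raw PCM capture vs. a compressed speech codec.
SAMPLE_RATE_HZ = 16_000   # assumed capture rate for speech
BITS_PER_SAMPLE = 16      # 16-bit mono PCM
CODEC_KBPS = 13           # ballpark for an AMR/Speex-class speech codec

pcm_kbps = SAMPLE_RATE_HZ * BITS_PER_SAMPLE / 1000  # = 256 kbit/s raw
print(f"Raw PCM:    {pcm_kbps:.0f} kbit/s")
print(f"Compressed: {CODEC_KBPS} kbit/s")
print(f"Roughly {pcm_kbps / CODEC_KBPS:.0f}x less data over the air")
```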
The recognizer is probably tuned for American-accented speech, and will be less accurate with British accents. It also only uses American spelling, which some silly people would complain about. I expect they'll produce versions for the UK and other countries after this trial phase.
Apparently (as I cannot check, being in the UK) the licence agreement permits them to transfer all your contacts to Nuance, who in turn can hand them over to your ISP/carrier. As Nuance are in the US, this is probably why they will have difficulty getting it onto the UK store under the US terms.