I think this article has a well-thought-out premise: search engines collect data about data. What it suggests is that the method by which those results are sorted and presented could be enhanced by adding cognitive analysis first and then, where appropriate, displaying the resulting options visually rather than textually.
Imagine a situation where you walk up to a computer and say to it, 'Can you find the best prices on flights from London to New York - I need to leave this weekend. And also, I want to buy a suit locally ahead of the trip.' The software uses voice recognition to determine who you are and links this to a database containing all your personal data. It then converts the query to text and splits it into two queries.
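As a toy sketch of that last step, a compound request could be split on coordinating phrases before each part is dispatched. A real system would parse intent with a language model; the `split_compound_query` function and its regex here are purely illustrative assumptions:

```python
import re

def split_compound_query(text):
    # Naive split on ". And also," or ";"; a real system would do
    # proper intent parsing rather than pattern matching.
    parts = re.split(r"\.\s*And also,?\s*|;\s*", text)
    return [p.strip() for p in parts if p.strip()]

q = ("can you find the best prices on flights from London to New York - "
     "I need to leave this weekend. And also, I want to buy a suit locally "
     "ahead of the trip")
print(split_compound_query(q))  # two separate queries
```

Each resulting fragment then becomes an independent search with the user's profile attached.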
One search goes off and explores all the sites offering flights, plugging in your personal data, and filters the results down to 'here's the best price, with options for times and dates - should I use your personal details to book and pay for this?' You authenticate on the touch screen, say yes, and it books the flights.
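The 'best price, with options' filtering step might look something like the sketch below. The flight-site scraping, personal-data lookup, and payment are hand-waved; `FlightOption` and the sample fares are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class FlightOption:
    carrier: str
    depart: str   # e.g. "Sat 09:40"
    price: float  # total fare in GBP

def best_flight(options):
    """Return the cheapest option first, with the rest as alternatives."""
    ranked = sorted(options, key=lambda o: o.price)
    return ranked[0], ranked[1:]

# Hypothetical results aggregated from several flight sites.
options = [
    FlightOption("AirA", "Sat 09:40", 412.0),
    FlightOption("AirB", "Sat 18:15", 389.0),
    FlightOption("AirC", "Sun 07:05", 455.0),
]
best, alternatives = best_flight(options)
print(best.carrier, best.price)  # AirB 389.0
```

The interesting part is not the sort itself but that the agent presents one recommendation plus alternatives, then asks permission before acting.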
The second search takes your location from satnav data and your measurements from your private data. Using these, it searches all the local shops in your area that sell suits and have stock, drawing on real-time feeds from their inventory databases. It then calls up the graphics of the suits and lays them over the avatar stored in your private database. Using a graphical viewer such as Second Life, you can wander into the shops and try on the suits to see how you look in a virtual mirror. If you want one, pay in the 'game', then walk to the real shop and collect it.
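The 'local shops with stock' filter is essentially a join of inventory data against proximity. A minimal sketch, assuming a flat list of shop records with coordinates and a stock set (the shop names, coordinates, and the `nearby_shops_with_stock` helper are all invented for illustration):

```python
import math

def distance_km(a, b):
    # Crude equirectangular approximation; adequate for nearby shops.
    lat1, lon1 = a
    lat2, lon2 = b
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return 6371 * math.hypot(x, y)

def nearby_shops_with_stock(shops, here, size, max_km=5.0):
    """Keep shops within max_km that report the wanted size in stock,
    nearest first."""
    hits = [s for s in shops
            if size in s["stock"] and distance_km(here, s["location"]) <= max_km]
    return sorted(hits, key=lambda s: distance_km(here, s["location"]))

# Hypothetical real-time inventory feed, already aggregated.
shops = [
    {"name": "SuitCo", "location": (51.515, -0.141), "stock": {"40R", "42R"}},
    {"name": "Tailor & Son", "location": (51.509, -0.128), "stock": {"38S"}},
    {"name": "FarAway Suits", "location": (51.9, -0.5), "stock": {"42R"}},
]
here = (51.507, -0.127)  # user's satnav position
print([s["name"] for s in nearby_shops_with_stock(shops, here, "42R")])
```

Only the shops that pass this filter would need their suit graphics fetched and rendered onto the avatar.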
Science fiction? Not really. All of the above is entirely doable with today's technology. The change required is to link these systems and integrate them with intelligent search capable of displaying the results in a variety of textual, visual, and virtual forms which you can better interact with and act upon.