Statistical analysis is hard, and costs a lot before you can hope to start generating revenue.
Good. When there's a high barrier to entry, less bullshit is created.
Just look at IoT for the reverse example.
Despite all the hype around artificial intelligence, trendy startups built upon the tech are said to have lower margins than funding-magnet software-as-a-service (SaaS) companies. “Anecdotally, we have seen a surprisingly consistent pattern in the financial data of AI companies, with gross margins often in the 50-60 per cent …
No, it just means the product or service is poor. Marketing ensures it takes ages before people notice.
Watson "winning" Jeopardy was almost a party trick. The "Medical" Watson IBM sold to a major US hospital wasn't actually the same software at all. Also, it didn't work.
So-called "AI", if done properly, is very expensive to set up and needs huge investment in domain experts, data and custom programming.
So what's different from the late-1980s/early-1990s "Expert Systems"? Some of the software, lots more storage and CPU, and all of the marketing. Yet image recognition (really pattern matching, since no computer system recognises anything; zero sentience) is not much better than it was then, considering the very many orders of magnitude more CPU and storage.
"Before these models can even be taught and deployed, however, a significant amount of labor needs to go into curating the training dataset. That data, whether it's a series of images, audio clips, or pages of text, needs to be labelled and cleaned, typically, by humans."
Actually it all has to be selected/curated, labelled, cleaned and checked by EXPERT humans, or it's misleading and nearly worthless. Biases in the capture, curating, labelling, cleaning and checking are a real problem. These can be deliberate, or due to the source of the data, or due to the selection and training of the humans. Identifying the biases is hard. Properly testing the "trained" system is really hard.
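To make the "checked by expert humans" point concrete: a standard sanity check on a labelled dataset is inter-annotator agreement. Below is a minimal sketch (annotator names and labels are entirely hypothetical) of Cohen's kappa, which measures how often two labellers agree beyond what chance would produce. If the experts themselves can't agree, the "ground truth" is suspect before training even begins.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: agreement between two annotators, corrected for chance."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labelled the same.
    p_observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each annotator's label distribution.
    count_a, count_b = Counter(labels_a), Counter(labels_b)
    p_expected = sum(count_a[c] * count_b[c] for c in count_a) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical labels from two annotators on the same ten images.
annotator_1 = ["cat", "cat", "dog", "dog", "cat", "dog", "cat", "dog", "cat", "dog"]
annotator_2 = ["cat", "dog", "dog", "dog", "cat", "cat", "cat", "dog", "cat", "dog"]
print(round(cohens_kappa(annotator_1, annotator_2), 2))  # 0.6
```

Here the annotators agree on 8 of 10 items, but chance alone would give 50% agreement, so kappa is only 0.6: moderate, not strong. A kappa like this on a real dataset is exactly the kind of bias/quality signal that is easy to miss if nobody checks.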
Also to consider are the motivations and aims of the companies big enough to tackle this, and how ethical the acquisition of the initial data is. See Clearview AI and Google/Alphabet.
We need to remember that these companies started offering hosting services to monetise their latent capacity. It's therefore easy to set up, but designed in a way that costs eye-watering amounts of money if you consume resources in volumes that require them to add more capacity.
Hyperscale is super easy and can do pretty much anything you want, which is great if you're building something complex like an AI application. There's no infrastructure capex or capacity planning. It's the perfect dev environment. The issues normally start when you use the hyperscale dev environment for production. Resource-intensive applications like AI create so much cost it can cripple the growth of a start-up.
For production environments, a specialist hosting provider can offer environments that perform much better and cost considerably less. Bare metal and VMs in a private cluster won't be as slick and instant as a hyperscaler, but the performance and cost advantages make it by far the better way of doing things.
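The cost gap is easy to see with back-of-the-envelope arithmetic. The rates below are made-up round numbers purely for illustration, not any provider's real pricing: an always-on per-hour GPU instance versus a flat-fee bare-metal server.

```python
# All rates are hypothetical round numbers for illustration, not real pricing.
HOURS_PER_MONTH = 730

# On-demand GPU instance at a hyperscaler, billed per hour, running 24/7.
hyperscaler_hourly = 3.00  # hypothetical $/hour
hyperscaler_monthly = hyperscaler_hourly * HOURS_PER_MONTH

# Comparable bare-metal GPU server at a specialist host, flat monthly fee.
bare_metal_monthly = 900.00  # hypothetical $/month

print(f"hyperscaler: ${hyperscaler_monthly:.0f}/month")   # $2190/month
print(f"bare metal:  ${bare_metal_monthly:.0f}/month")    # $900/month
print(f"ratio: {hyperscaler_monthly / bare_metal_monthly:.1f}x")  # 2.4x
```

Per-hour billing is a win for bursty dev work, but a training or inference workload that runs flat-out all month pays the on-demand premium for flexibility it never uses; that's the gap the comment above is pointing at.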
Biting the hand that feeds IT © 1998–2022