From what I've seen it's more muddling. In several meanings of the term. Sometimes concurrently.
Many organizations are struggling with artificial intelligence deployments despite believing that AI will be critical to business success over the next five years, according to a report by Deloitte. The 5th Edition of Deloitte's State of AI in the Enterprise report is based on a survey of 2,620 business leaders from …
1) Collecting data is expensive.
2) Classifying data, well, requires a human eye to "train" it, usually that of an expert on that data set
3) The data has to be cleaned to remove errors and outliers from your dataset
4) Weak historical database designs, ancient paper records that won't go away, and/or spreadsheet reliance complicate the job of relating one piece of information to another. The role of an AI programme isn't to redesign your data hierarchy, and attempting to do so will "break" various other business functions without a very bottom-up re-write of "everything".
5) Employing 20 people to develop an expert system to "replace" the job of an actual expert on the subject matter in question? Vastly more expensive.
and last, but not least
6) Having gone through all that farce with your AI box, what are you going to do with the output? (Relates to the lack of experts.)
I speak from a position of "not for want of trying". I have run several R&D programmes over the years to try to find something useful to do with AI in the absence of large teams of experts to draw on. The only ones that consistently come up with anything usable are in the image recognition space.
Totally agree. I've worked in both public and private sectors where "AI" makes for exciting, promotion-inducing PowerPoint slides for executives, but the less sexy slides on the underlying data infrastructure needed to enable it are skipped over entirely, leaving data scientists with the "2 circles and then the owl" meme when their group fails to deliver. The cultural shift to funding and creating machine-readable, re-usable data across organisations remains the greatest challenge of "industry 4.0", and in many cases it would add value in itself just by letting department heads see the data for the first time.
Many infrastructure organisations are reliant on paper drawings that pre-date WW2. 75+ years of accumulated pencil markings. Going back and digitising all of that is, to say the least, impractical. Adding instrumentation and condition monitoring to it is an equally ludicrous proposition to do in bulk.
There are some tools out there that are incredibly potent; LIDAR being the most obvious one. But asking AI to work with the output is difficult. Is that really an obstruction you're seeing in your capture, or just a cow that's wandered somewhere interfering with your scan of the surface?
Crossrail had a team of 90+ people working on its BIM programme. That was a relatively small project in the context of just how much is installed in the UK. And despite the tools, Crossrail got into some very basic supply chain and project management / sequencing difficulties - more or less like every other major construction programme.
Multiply that up by multiple asset owners that aren't obligated to pass information between each other. Some of them have active reasons not to reveal their dirty laundry too.
Excel is a wonderful tool for datasets up to about 200k rows long (as long as the processing is light).
That being said, you need to know what you're doing to get anything useful and reliable out of it.
We use it for dashboarding extensively and the flexibility of being able to define the exact formula for a cell makes it vastly superior to DataStudio or Tableau for data sets in our ranges.
Google sheets realistically caps out at 100k rows with very light processing or 30k rows with heavy processing.
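For context, the row counts being quoted here are small for code-based tooling. A minimal sketch (plain Python, synthetic data invented for illustration) doing the kind of light per-group aggregation a dashboard cell formula would do, at a scale well past the comfort zone described above:

```python
import random
from collections import defaultdict

# Synthetic dataset: 500k rows, well past the ~200k-row comfort zone
# quoted for Excel and far past the limits quoted for Sheets.
random.seed(0)
rows = [(random.choice(["north", "south", "east", "west"]), random.random())
        for _ in range(500_000)]

# The sort of light aggregation a dashboard would do with cell formulas,
# done in plain Python instead of a spreadsheet.
totals = defaultdict(float)
counts = defaultdict(int)
for region, value in rows:
    totals[region] += value
    counts[region] += 1

averages = {region: totals[region] / counts[region] for region in totals}
```

None of this argues against the flexibility point above; it just shows where the size ceiling sits once you step outside the spreadsheet.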
Correction: it's a useful tool for analysing datasets as you describe.
The inevitable mistake is that the spreadsheet becomes the master, and you lose all referential integrity between systems and processes at that point.
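A minimal sketch of what "losing referential integrity" means in practice, using SQLite (the table names are made up for illustration): a database with a foreign key refuses an orphaned row, while a "master" spreadsheet copy of the same data is just a grid that accepts anything.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite leaves FK checks off by default
conn.execute("CREATE TABLE assets (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""CREATE TABLE inspections (
                    id INTEGER PRIMARY KEY,
                    asset_id INTEGER REFERENCES assets(id),
                    result TEXT)""")
conn.execute("INSERT INTO assets VALUES (1, 'Pump A')")
conn.execute("INSERT INTO inspections VALUES (1, 1, 'pass')")  # valid reference

# An inspection for an asset that doesn't exist is rejected by the database...
try:
    conn.execute("INSERT INTO inspections VALUES (2, 99, 'pass')")
    orphan_rejected = False
except sqlite3.IntegrityError:
    orphan_rejected = True

# ...but once the data is exported to a 'master' spreadsheet it's just rows:
# nothing stops the orphan, and the link back to assets silently rots.
spreadsheet = [[1, 1, "pass"]]
spreadsheet.append([2, 99, "pass"])  # accepted without complaint
```

The point isn't that spreadsheets are bad at analysis; it's that once the sheet becomes the system of record, nothing enforces the links between records any more.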
I have seen this in every single industry I've ever worked in.
You can use a bigger hammer, yes.
However, it turns out that the casual user has all the tools required. A casual user is able to fit any connector to any socket. These casual users have perfected the many-to-many connector system. The technology doesn't work, but the connectors fit.
So, please go ahead and use a casual user.
If that fails, try the nearest two year old. They have a knack for inserting the most "interesting" objects into electronics jacks and wall sockets...
My personal favorite was a 3 year old who inserted a peanut butter and jam sandwich into the VCR and pressed "play" to watch her cartoons. *LOL*
AI is a con. A sticker attached to anything.
The real thing is inherently unreliable. It learns by making mistakes and works on historical/legacy/out-of-date data. You wouldn't want to replace competent staff with it.
AI is proof that most corporate users have no tech IQ. If anyone comes to me thinking I'll buy into whatever crap they're selling, I'll slap them. At least they've stopped trying to blockchain everything, which was the last marketing fad that went phut.
Also "artificial intelligence" is a term that I find racist and smacks of slavery. Ask anyone in 400 years time.
That's my take on AI. Not one of these "AI" systems is artificially intelligent. At best they are neural networks that have learned to take a trained/educated guess at something, with a high-ish chance of being correct. However, they are often incorrect, and therefore utterly useless if the data is to have any value at all.

I've seen a lot of AI hype over the last few years and I still maintain that, at best, what they are selling are Expert Systems, and that there is ZERO Intelligence in any of them. It reminds me of all those silly psychometric tests that HR types thought were the dog's danglies until someone actually researched their effect and it turned out they were utterly useless, and it would have been just as effective to bin every other CV.

These Expert Systems do seem to be pretty good at pattern recognition and can process data WAY faster than humans, but the positives still need that trained human eye to confirm they are correct, and THAT is where the Intelligence comes in. One day we may well have Artificial Intelligence, but not for some time yet.
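The "positives still need a trained human eye" point has a simple arithmetic basis: when genuine positives are rare, even an accurate classifier's flags are mostly false alarms. A quick back-of-the-envelope sketch with assumed, illustrative numbers (99% sensitivity and specificity, 1% prevalence):

```python
# Assumed, illustrative numbers; the point is the base-rate effect,
# not any particular system's accuracy.
population = 100_000
prevalence = 0.01      # 1% of items are genuine positives
sensitivity = 0.99     # fraction of genuine positives flagged
specificity = 0.99     # fraction of genuine negatives passed

true_cases = population * prevalence                              # 1,000
true_positives = true_cases * sensitivity                         # 990
false_positives = (population - true_cases) * (1 - specificity)   # 990

# Chance that a flagged item is genuinely positive (precision / PPV):
precision = true_positives / (true_positives + false_positives)   # 0.5
```

With those numbers, half of everything the system flags is wrong, which is exactly why the human reviewer doesn't go away.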
An AI system that detects and reports biases in the training data. For example, in a legal setting an AI would detect whether some factors (wealth, family connections, race, sex, sexuality, age, etc.) were correlated with outcomes despite being irrelevant to the actual offence. But I suspect that might be a bit 'unpopular' with the 'establishment'.
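A toy version of that bias detector is easy to sketch: compare outcome rates across a factor that should be irrelevant and flag a large gap. The records and the 10-point threshold below are invented purely for illustration; a real audit would use actual case data and a proper statistical test.

```python
from collections import defaultdict

# Invented records of (irrelevant_factor, outcome). In a real audit the
# factor would be e.g. a wealth band or age group from case data.
records = ([("group_a", "harsh")] * 70 + [("group_a", "lenient")] * 30
           + [("group_b", "harsh")] * 40 + [("group_b", "lenient")] * 60)

harsh = defaultdict(int)
total = defaultdict(int)
for group, outcome in records:
    total[group] += 1
    harsh[group] += outcome == "harsh"

rates = {g: harsh[g] / total[g] for g in total}
gap = max(rates.values()) - min(rates.values())

# Flag if outcome rates differ by more than an (arbitrary) 10 percentage
# points for a factor that should have no bearing on the outcome at all.
biased = gap > 0.10
```

Here group_a gets the harsh outcome 70% of the time against group_b's 40%, so the detector fires; whether anyone acts on such a report is, as noted above, another matter.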
I suspect that one of the problems with AI systems is that they are built by people who are enthusiastic about AI, without cynical old gits like me having any input into the design, purposes, 'training data sets' or anything else. Rather like the international data processing network procurement manager who phones up the security consultant one day before 'go live' to get their formal approval.
I do take one issue with the article. If you don't have to re-design your company's working practices to take advantage of an (effective) AI, then your AI really isn't that revolutionary, is it?