sigh
Machines are really good at working on a narrowly defined problem. The classic example is chess: a massive space of possible positions, but the rules for what can happen (how pieces move) are very strict and very few. There aren't that many areas where that kind of problem domain and definition holds true.
The problem is that the promise of AI is that you can replace human analysis with machines. Unfortunately, machines can easily miss things that humans wouldn't, because humans have real intelligence and intuition. How will AI end up being used - to augment humans and do what they can't, or to try to replace humans and end up losing the more intelligent part of the team?