The idea that Hadoop is "cheaper" is a myth. Hadoop solves the "expensive server" problem by throwing a whole bunch of shitty consumer-grade hardware at it. If you do the research and talk to the right people, there is rather a lot of dissent as to whether this actually results in an overall price drop.
You see, the expensive databases (Oracle, DB2, etc) are really tightly coded to the hardware for performance. They aren't perfect, but they are a hell of a lot more efficient than Hadoop. Plus, you can generally get away with doing what you need to do on a single exceptionally powerful box (or a smallish number of them). This drives down your power, cooling, space and networking bills by quite a bit.
You can overcome some of the inherent limitations of Hadoop if you have shit-hot programmers, but as you pointed out, SMBs don't. What's more, as the traditional DB folks are being pushed out of the higher-end positions thanks to Hadoop actually being useful (and cheaper) when you get to petascale, the cost of the expertise required to do Neat Things with traditional databases is plummeting.
I have on hand a handful of systems that could theoretically be Hadoop nodes. They would be exceptionally shitty Hadoop nodes, and they wouldn't come anywhere close to providing the compute, IOPS or network bandwidth required to do the imagery analysis discussed above. Assuming, of course, I could find a dev to program it.
The ability to use consumer hardware doesn't mean it's cheaper. It means it scales out in a more linear fashion. When you have a small budget, limited space, limited cooling and big requirements, Hadoop just isn't the thing.
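To make the scale-out vs. scale-up point concrete, here's a back-of-envelope sketch in Python. Every dollar figure and growth exponent here is an invented placeholder, not real pricing; the only thing the sketch shows is the shape of the two cost curves: a cluster of cheap nodes grows roughly linearly with node count, while big iron tends to grow superlinearly with capacity, so the big box wins at small scale and loses past some crossover.

```python
# Hypothetical cost model: scale-up (one big box) vs. scale-out (many cheap nodes).
# All numbers are made up for illustration -- real pricing varies wildly.

def scale_out_cost(nodes, node_price=3_000, per_node_overhead=500):
    """Cheap nodes: total cost grows roughly linearly with node count
    (hardware plus per-node power/cooling/space/network overhead)."""
    return nodes * (node_price + per_node_overhead)

def scale_up_cost(capacity_units, base=10_000, growth=1.6):
    """Big iron: cost grows superlinearly with capacity, because each
    doubling of single-box capability costs more than the last."""
    return base * (capacity_units ** growth)

# Assume (arbitrarily) that 1 "capacity unit" of big iron does the work
# of 4 commodity nodes. At small scale the big box is cheaper; at large
# scale the linear cluster curve crosses under the superlinear one.
for units in (1, 4, 16, 64):
    print(units, scale_up_cost(units), scale_out_cost(units * 4))
```

The 4-nodes-per-unit equivalence is purely an assumption for the sketch; the crossover point moves around a lot depending on workload, but the linear-vs-superlinear shape is the whole argument.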