BIG DATA wizards: LEARN from CERN, not the F500

Ian Michael Gumby

@StuCom Re: It would help if the author knew what he was talking about.

Naw, sorry, any 'points' are lost in the blubbering noise.

In the enterprise there is always going to be a need for transactional systems, so you'll still need the RDBMS. WRT Hadoop, Hive uses an RDBMS (the metastore) to manage its schema data. So does HCatalog. Then there's Ambari (Hortonworks) and Ranger for security, both of which rely on an RDBMS, while Sentry (Cloudera) apparently does not.

But where relational modeling falls apart is that the schema is inflexible and fixed up front. Hadoop's tools use a 'late binding' schema, applied at run time. (Sorry for lack of a better description: schemas are enforced when you run the job, not when you load the data into the files.)
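To illustrate the idea, here's a minimal schema-on-read sketch in Python. It isn't any particular Hadoop tool's API; the names and the toy data are made up. The point is that the file is loaded as raw bytes with no schema, and a schema (a list of column names and parsers) is only bound when the job runs, so two jobs can read the same file with different schemas:

```python
# Sketch of "late binding" / schema-on-read. Illustrative only; the
# data and the run_job helper are invented for this example.
import csv
import io

# The "load" step: raw records land in storage with no schema attached.
RAW = "1,alice,2021-03-04\n2,bob,2021-03-05\n"

def run_job(raw, schema):
    """Apply a schema at read time. schema is a list of
    (column_name, parser) pairs bound when the job runs."""
    rows = []
    for record in csv.reader(io.StringIO(raw)):
        # Extra fields in the record are simply ignored if the
        # schema doesn't name them.
        rows.append({name: parse(value)
                     for (name, parse), value in zip(schema, record)})
    return rows

# Two different schemas over the very same stored bytes:
ids_only = run_job(RAW, [("id", int)])
full = run_job(RAW, [("id", int), ("name", str), ("day", str)])
```

With an RDBMS you'd have had to pick one table definition before loading anything; here each job brings its own.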

There's more, but you should get the idea.

The author really doesn't know much about Hadoop and the other tools in the ecosystem, so his suggestion to look towards CERN is a bit of a joke. No offense to CERN: they have done some really good work there, and they do know what they are doing.

My point is that CERN and the F500 are two different beasts.
