Re: Isn't that like any project
Not really. Hadoop in general, and Spark in particular, are massively over-hyped and misunderstood. People think they can just load all their data into HDFS and run their existing workloads on it, because the vendors tell them they can, but the vendors don't mention that doing so is likely to be massively inefficient and error-prone, in terms of both reliability and data quality. Making Hadoop replace an existing system takes a huge amount of work, because it is unlike anything that came before it.
Contrast that with other major projects, such as migrating an on-site CRM to Salesforce, where the target's capabilities are relatively well known, map closely onto existing business processes, and the platform has a stable release cycle that doesn't tend to break things on new versions.