This is an advertorial for Databricks. A "data lake" is a (poorly defined) logical architecture, not a technology, and many organisations use it in BI applications successfully. Even more use it unsuccessfully, but such is data.
Spark is only one of the engines in use in such architectures. MPP SQL engines such as Redshift, Presto, Hive, Impala, Snowflake and so on do the bulk of the analytical (i.e. high-concurrency) work. Spark, as a batch engine, tends to handle the good ol' ETL and ETL-adjacent workloads like ML.
Databricks would like that to change (ETL is a commodity), but they don't own the lake and they don't dominate it either. Their "new" "SQL Analytics" product is a lift of Apache Impala. Delta Lake is a table storage format and has little to nothing to do with the mode of access or the data architecture.
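To make the "table storage format" point concrete: a Delta table on disk is just parquet data files plus a newline-delimited JSON transaction log under `_delta_log/`, which any engine can in principle read. A rough Python sketch of one commit entry (field set simplified and illustrative; a real writer emits more actions such as commitInfo and protocol):

```python
# Minimal sketch of Delta Lake's on-disk layout: parquet files plus a
# JSON transaction log. This illustrates that Delta is a storage format,
# not a query engine or a mode of access. Schema/fields are simplified.
import json
import os
import tempfile

def write_delta_commit(table_path: str, version: int, data_file: str) -> str:
    """Write a minimal Delta transaction-log entry for one commit.

    The log lives under `_delta_log/` as newline-delimited JSON files,
    one per commit, named with a zero-padded 20-digit version number.
    """
    log_dir = os.path.join(table_path, "_delta_log")
    os.makedirs(log_dir, exist_ok=True)
    # Each line is one "action". A real commit written by Spark or another
    # engine carries more fields (commitInfo, protocol, file stats, etc.).
    actions = [
        {"metaData": {"id": "example-table",
                      "format": {"provider": "parquet", "options": {}},
                      "schemaString": json.dumps(
                          {"type": "struct",
                           "fields": [{"name": "id", "type": "long",
                                       "nullable": True, "metadata": {}}]}),
                      "partitionColumns": []}},
        {"add": {"path": data_file, "size": 0, "modificationTime": 0,
                 "dataChange": True}},
    ]
    commit_path = os.path.join(log_dir, f"{version:020d}.json")
    with open(commit_path, "w") as f:
        f.write("\n".join(json.dumps(a) for a in actions))
    return commit_path

table = tempfile.mkdtemp()
commit = write_delta_commit(table, version=0, data_file="part-00000.parquet")
print(os.path.basename(commit))  # 00000000000000000000.json
```

Nothing in that layout dictates who reads it: Presto/Trino, Hive, Snowflake and others have connectors for exactly this reason, which is the crux of the "they don't own the lake" argument.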