Well, you'd have a bit more work for the setup and the regular data imports, which is a real issue.
For a while I ran a local fork of the old httparchive code, which was very poorly modelled (it used manually created hashes for indexing!) and thus very inefficient. Even then, I only ever worked with the pages tables. Once I'd optimised the import scripts, imports took about 30 minutes per data set on my laptop. Nearly all subsequent queries (anything except full-text domain searches) were very fast, which is what you'd expect from a properly modelled DB.
But the current engine uses map/reduce on the reports, which gets very expensive if you run queries covering any significant period of time, because you quickly end up analysing terabytes of non-normalised JSON. I think exports to CSV are still possible, and they're the way to go for any kind of extensive research. The "free" units per month are a good way to get a feel for the service and for the very-nearly-but-not-quite-SQL you write for BigQuery.
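For what it's worth, here's a minimal sketch of that "query once, export to CSV, analyse locally" workflow, assuming the google-cloud-bigquery Python client and a GCP project with billing set up. The httparchive table and column names are from memory and may not match the current dataset layout, so check the schema before running anything:

    import csv

    from google.cloud import bigquery

    client = bigquery.Client()  # uses your default GCP project and credentials

    # Restricting the scan to a single crawl date and client limits what you're
    # billed for; columns you select also count, so keep the column list narrow.
    sql = """
        SELECT page, JSON_VALUE(summary, '$.bytesTotal') AS bytes_total
        FROM `httparchive.all.pages`
        WHERE date = '2024-06-01'
          AND client = 'desktop'
        LIMIT 1000
    """

    rows = client.query(sql).result()

    # Dump the result to CSV once, so further analysis never touches BigQuery again.
    with open("pages_2024_06_01_desktop.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["page", "bytes_total"])
        for row in rows:
            writer.writerow([row["page"], row["bytes_total"]])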
Google is effectively sponsoring the project, which I still consider a great resource for seeing how websites were built at any given moment over the last 15 years or so, but GCP really is a challenge for new users, especially around budget management. It would be nice if new accounts came with some kind of rate limiter that you have to explicitly deactivate for real work.
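Until something like that exists, you can at least cap yourself: the Python client's maximum_bytes_billed setting makes a query fail outright, without being charged, if it would scan more than the cap. A rough sketch, with an arbitrary 10 GB limit and the same caveat about table names as above:

    from google.cloud import bigquery

    client = bigquery.Client()

    # Queries that would scan more than ~10 GB fail instead of running up the bill.
    capped = bigquery.QueryJobConfig(maximum_bytes_billed=10 * 1024**3)

    job = client.query(
        "SELECT COUNT(*) FROM `httparchive.all.pages` WHERE date = '2024-06-01'",
        job_config=capped,
    )
    print(next(iter(job.result()))[0])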
It's also worth noting that this is the first time this has come up since the reports switched to BigQuery.