a scraper can come and go in short order.
Count yourself lucky. :/
My personal site is of no appreciable consequence, hosting mainly dribbles of code I've open-sourced, along with the VCS and ticket system I stuff it all into. Actual real human traffic typically amounts to, well, me committing things to version control, a couple of requests to remind myself which ticket some commit needed to be tagged with, and possibly as many as five actual third-party visitors per month. Most of those almost certainly (I haven't waded through the log swamp to check) only request static files from my ~user subsite rather than the "main" site, such as it is.
I have the good fortune to have two moderately beefy couple-of-generations-old enterprise servers colocated at work for more or less free, with my main site recently converted to a VM scaled to about an eighth of one machine's CPU/RAM. I have plenty of capacity to throw hardware at the problem if I figured it would even be worth the bother of trying.
I've recently noticed stretches of 5+ hours (verified at least as far as the Proxmox CPU usage graph goes) of being hammered with absolute gibberish requests, clearly straight-up hallucinations with no just cause to have been requested from me at all... on top of the way-faster-than-justifiable requests for paths in the VCS infrastructure from at least six different identifiable bots, plus an unknown horde that don't have the courtesy to identify themselves. What tipped me off: my own source code commits and other access to my own resources started failing, because the bots were hitting everything so hard the DB kept slamming into its connection limit.
My Apache log file for one month should be on the order of maybe a couple of megabytes. It's been over 1GB a month for close to a year; July hit 2.5GB, and last month the useless garbage bots pushed it up to 2.8GB.
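If I ever do wade into the swamp, a rough tally like the sketch below would at least show who's eating the gigabytes. It assumes the stock Apache "combined" LogFormat; the log path and the top-20 cutoff are placeholders, not my actual setup:

```python
#!/usr/bin/env python3
"""Rough per-User-Agent tally of an Apache access log.

A quick sketch: assumes the stock 'combined' LogFormat, where the
response size sits between the status code and the quoted referrer,
and the User-Agent is the final quoted field. The path is a placeholder.
"""
import re
from collections import Counter

LOG = "/var/log/apache2/access.log"  # placeholder path

# Matches the tail of a 'combined' line: status, bytes, "referer", "agent".
# (Lines whose agent contains escaped quotes get skipped; fine for a rough count.)
TAIL = re.compile(r'" (\d{3}) (\d+|-) "([^"]*)" "([^"]*)"$')

requests = Counter()    # request count per User-Agent
bytes_sent = Counter()  # response bytes per User-Agent

with open(LOG, errors="replace") as f:
    for line in f:
        m = TAIL.search(line.rstrip("\n"))
        if not m:
            continue
        status, size, referer, agent = m.groups()
        requests[agent] += 1
        bytes_sent[agent] += 0 if size == "-" else int(size)

for agent, n in requests.most_common(20):
    print(f"{n:>9} reqs  {bytes_sent[agent]:>13} bytes  {agent[:80]}")
```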
AI bots can just go fold themselves until they're all corners, stuff themselves up their own backsides, and launch themselves into the nearest supernova.