Hobby-level hardware for ML tyre-kicking
Image gen - maximise the amount of VRAM, even if it's on slower chippery. The best value/compatibility I could find was Nvidia's 3060 with 12GB; it doesn't need to be the Ti version either. Along with that, system RAM matters - it doesn't need to be super fast, but the more the better.
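If you want to sanity-check what a box actually has before committing an afternoon to it, a few lines of Python will do it. This is just a hypothetical sketch; it assumes PyTorch with CUDA and psutil are installed:

```python
# Quick check of GPU VRAM and system RAM before downloading big models.
# Sketch only - assumes PyTorch (with CUDA) and psutil are installed.
import torch
import psutil

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, VRAM: {props.total_memory / 1024**3:.1f} GB")
else:
    print("No CUDA device found")

print(f"System RAM: {psutil.virtual_memory().total / 1024**3:.1f} GB")
```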
For Stable Diffusion SDXL using the Forge webui on Win10 with 64GB of system RAM and the 3060/12GB, I found system RAM usage hovering in the low 30s of GB. Flux1.dev is also fine, but system RAM usage regularly climbs into the mid 40s. SDXL training of TIs (textual inversions) and LoRAs is doable, but full model fine-tuning is not. Flux1.dev LoRA training *might* be possible - there's a lot of effort on GitHub to squeeze the training into 12GB cards.
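For a feel of why the system RAM climbs like that, here's a rough diffusers sketch of the kind of offloading Forge does behind the scenes: keep only the active submodel on the 12GB card and spill the rest into system RAM. It's an illustration of the technique, not my actual setup (I used the Forge webui, which handles this automatically):

```python
# Sketch of low-VRAM SDXL generation with diffusers - not the Forge
# internals, just the same offloading idea. Needs diffusers + accelerate.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # standard SDXL base weights
    torch_dtype=torch.float16,
)
pipe.enable_model_cpu_offload()  # submodels live in system RAM, visit the GPU in turn
pipe.enable_vae_slicing()        # decode the VAE in slices to trim peak VRAM

image = pipe("a photo of a red bicycle", num_inference_steps=30).images[0]
image.save("out.png")
```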
Laptop with Nvidia 2060/6GB + 16GB system RAM: SDXL image gen with Forge is doable (around 1 min for a 1MP image). Scaling up to larger images becomes a bit drawn out, and I had to add a fan base (cooling pad) under the machine. Flux1.dev was painfully slow, with plenty of crashes.
For LLMs, the same laptop works OK with Kobold and a variety of 7B models at usable speed (a bit less than reading speed), but it does max the laptop out. Interesting to try out, but not terribly practical.
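For the curious, here's a minimal llama-cpp-python sketch of the layer-offloading trick that makes 7B models workable on a 6GB card: push as many transformer layers onto the GPU as will fit and run the rest on the CPU. The model filename and layer count are illustrative, not my exact settings:

```python
# Split a quantised 7B model between a small GPU and system RAM.
# Sketch only - model path and n_gpu_layers are placeholder values.
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-7b-instruct.Q4_K_M.gguf",  # any quantised 7B GGUF file
    n_gpu_layers=20,  # tune down if you hit out-of-memory on a 6GB card
    n_ctx=2048,       # modest context to keep RAM usage in check
)

out = llm("Q: Why does more VRAM help with image generation?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```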