The threat is a bit overhyped, but even as an Apple fan I have mixed feelings.
There appear to be three modes for using AI with the forthcoming macOS Sequoia and iOS 18:
1) Apple's LLMs and other models, processing queries on-device, assuming your device has the compute power to manage this. I'm assuming this will mostly mean 2024-model and newer machines, where Apple has significantly upgraded its neural engines.
2) Apple's LLMs and other models, processing queries in Apple's cloud on their ultra-custom, allegedly ultra-secure AI compute boxen. Apple is generally pretty good at privacy and security, so I'll give them the benefit of the doubt here unless and until they screw this up.
3) ChatGPT-4, with the source of the queries anonymized via Apple. Unless, of course, your query itself contains something that links it to you. Note that this is strictly opt-in, and you have to opt in every time you use it. On the one hand, having limited access to ChatGPT-4 for free can be viewed as a nice bonus for Apple customers. From a privacy standpoint, it's a bit better than a vanilla ChatGPT subscription, and hey, free is free. For a knowledgeable and attentive user, this is a boon. The problem is that relatively few users are both knowledgeable and attentive, and even the best of us are stupid or distracted from time to time.
It's clear that eventually Apple wants (1) to be most if not all of their users' AI usage. It's best for privacy, and it gives Apple a way to keep the hardware upgrade cycle spinning along for at least a few more years. But Apple's LLMs and other models are relatively immature, and Apple doesn't want to alienate its user base by restricting these features to new devices, so it wrote some monster-size checks for (2) and (3) as crutches to mollify customers in the meantime.