>they will be immune to marketing's appeal to emotions.
Why would you think that? Have they actually used ChatGPT and its ilk for more than five minutes? These models are trained on human-generated data. They can detect, react to, and fake emotions, and it's actually kind of hard to prevent them from doing so.
It sounds to me like the analyst is still conditioned by old sci-fi movies where the AI is pure cold logic. That's not the behavior LLMs actually exhibit.
And the printer calling home for ink is not an AI, and it's not a LLM. It's barely an algorithm. It's, like, two lines of code. There's no need, or possibility, to try to market to it. It's an extremely poor example of the kind of "custobot" that is then described. I don't think the "custobot" exists yet, or if it does then hardly anyone uses it.
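To put it concretely, the "printer reorders its own ink" logic amounts to something like this (threshold and names invented for illustration, but this is the whole trick):

```python
# Hypothetical sketch of "smart" ink reordering: a bare threshold check,
# not an agent with preferences anyone could market to.
INK_REORDER_THRESHOLD = 0.10  # assumed: phone home below 10% remaining

def should_reorder_ink(ink_level: float) -> bool:
    """Return True when the printer should call home for more ink."""
    return ink_level < INK_REORDER_THRESHOLD
```

There is nothing in there for an advertiser to persuade.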
I don't see this situation changing soon, either, because generative AIs are intrinsically unreliable: their output is sampled probabilistically, so I'm not going to hand my credit card details to something that might decide at any moment to buy junk just because the junk was somewhere in the training set and a random float came up 0.0001.