Two possibilities:
1) they changed their policy, customers hated it, and they pulled it back and blamed it on the AI
2) they trained their AI on customer service chats (maybe their own, maybe some third-party set) and it "learned" on its own that making stuff up and saying "that's how it's supposed to work" is how customer service reps close tickets
Neither is particularly good, but I'm curious whether we'll ever find out which it was.