Maybe there's more to it...
"Machine learning models can only regurgitate what they’ve learned, so it’s, essentially, the training dataset that’s to blame."
Actually, they tend to reinforce what they have learned, selectively weighting new input according to the existing template (much as bees do). This is, in essence, "prejudice", and it is intrinsic to how they operate, which makes it difficult for them to "unlearn" established patterns. One human trait they don't exhibit is embarrassment, since they have no emotional capacity. Consequently there's no impetus to rethink anything once "learned" unless a large volume of contrary information is provided.
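To make that concrete, here's a minimal sketch (my own illustration, not anything from a real ML system): a toy count-based learner whose estimate is an average over everything it has seen. After many confirming examples, one contrary example barely moves it; flipping the "belief" takes a volume of contrary evidence comparable to the original training.

```python
class CountLearner:
    """Toy learner: estimates P(event) as the fraction of observations seen."""

    def __init__(self):
        self.positive = 0
        self.total = 0

    def observe(self, outcome: bool):
        self.total += 1
        if outcome:
            self.positive += 1

    def estimate(self) -> float:
        # Uninformed prior of 0.5 before any data
        return self.positive / self.total if self.total else 0.5


learner = CountLearner()
for _ in range(500):            # 500 confirming observations
    learner.observe(True)

learner.observe(False)          # a single contrary observation
after_one = learner.estimate()  # still ~0.998 -- the "prejudice" barely budges

contrary = 1
while learner.estimate() > 0.5:  # keep feeding contrary evidence
    learner.observe(False)
    contrary += 1

print(f"estimate after one contrary example: {after_one:.3f}")
print(f"contrary examples needed to flip the estimate: {contrary}")
```

Here it takes 500 contrary examples to cancel 500 confirming ones. Real models aren't simple counters, but gradient-trained weights show the same qualitative inertia: accumulated evidence dominates until the contrary signal is comparable in volume.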
The brain is not an analytical engine - thought (including learning) is driven by emotion. The word itself hints at this: "emotion" derives from the Latin for "to move" (or stir to action). Since an AI system has no body, the drive to act (to rethink), which in humans can be triggered by quite small stimuli (the "eureka" moment), is absent.