They're wrong, even if not lying
It's virtually impossible that Bard has not been trained on ChatGPT responses, even if that was never done deliberately. Answers and content generated by ChatGPT have been all over the web for years now, and since Google was and is unable to reliably distinguish AI-generated from human-written content (even in cases obvious to humans), the text corpus Bard was trained on must have included plenty of ChatGPT-generated material.
That's the real problem with AI, and it will only grow in the months and years to come: these models will increasingly feed on each other's output, even when nobody intends it, amplifying and aggregating each other's flaws and misconceptions. They will gravitate toward the same subpar average, just as humans have, unfortunately, since content publication was "democratized". Their answers will become as stupid and unreliable as those of the average Facebook commentard.
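You can see the feedback loop in a toy sketch (my own illustration, not anything from Google or OpenAI): pretend a "model" is just a Gaussian fitted to its training data, and each generation is trained only on samples produced by the previous generation's model. The sample size, seed, and generation count are arbitrary illustrative choices; with a small training set, the diversity of the data (its standard deviation) tends to collapse over generations.

```python
import random
import statistics

random.seed(0)

# Generation 0: "human" data, drawn from a diverse distribution.
data = [random.gauss(0, 1) for _ in range(10)]

stds = []
for gen in range(200):
    # "Train" the model: fit a mean and std to the current data.
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    stds.append(sigma)
    # Next generation trains only on the previous model's output.
    data = [random.gauss(mu, sigma) for _ in range(10)]

print(f"diversity at gen 0:   {stds[0]:.4f}")
print(f"diversity at gen 199: {stds[-1]:.4f}")
```

Each fit is made from a finite sample, so estimation noise compounds generation after generation, and the spread of the data drifts toward zero: every "model" ends up parroting a narrower version of its predecessor. That is the statistical analogue of everyone converging on the same subpar average.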