One would hope that these AIs ended up arguing with other AIs, leaving the real people to get on with their lives.
However, if an AI was pretending to be something in order to elicit some form of response, the chance that the respondent was another AI would invalidate the results.
The other, and less forgivable, outcome is that people seeing these posts would either believe them and use them as 'proof' to support their stance, or disbelieve them, hunt for evidence that the post is a lie, and use that as proof of their opposing stance. Yes, this includes rape stories: there are people who believe anyone born XY is a rapist and that the sheer number of rape claims on social media proves it, whereas those who think rape claims are mostly fake will look for proof, find it, and use that as evidence that most claims, if not all, are fabricated.
Meanwhile, genuine victims suffer because they're lost in the chaff of these fake stories.
Oh, yes, and other researchers using social media as a data source, who have nothing to do with these AI experiments, will have had their research data corrupted, invalidating their work and wasting their funding, time, and effort. Sure, serves them right for using social media in the first place, but sometimes that's where research needs to start. And heaven help them if they actually engaged with the AI thinking it was a genuine victim...