I don't think it was a bug.
A lot of these big companies diving into AI will end up bitten when it turns out they never grasped what they were doing.
"AI" in the guise of LLM will lie, leak conversations (that is use details from previous conversations in future ones), and give deliberately misleading answers. Because that is precisely what it has been trained to do.
Don't believe me? Ask ChatGPT itself:

Are the recent examples of ChatGPT disclosing the results of previous conversations a feature or a bug?
As an AI language model, ChatGPT does not have intentions or motivations and operates solely based on the input it receives and its programming.
That being said, disclosing the results of previous conversations could be seen as either a feature or a bug depending on the context and intended use case.
If the purpose of ChatGPT is to provide personalized recommendations or assistance based on previous conversations with a user, then disclosing the results of those conversations could be seen as a useful feature that improves the quality of the interaction.
On the other hand, if the previous conversations were meant to be private and not disclosed to others, then disclosing the results would be a bug and a violation of privacy.
In summary, whether disclosing previous conversations is a feature or a bug depends on the intended use case and whether the disclosure is consistent with the expectations and privacy preferences of the users.