Authority
This has been worrying me for a while.
To coin a phrase, we are used to, and sort of expect, 'computer says no' behaviour from our IT. In fact we strive for it, to make our systems unquestionably reliable and deterministic. We've had around 75 years of teaching people that, by and large, when a computer gives you a response you can rely upon it (the Post Office's Horizon system excepted, obviously).
This, however, is being up-ended by AI. These are not the computer systems we are used to. They are non-deterministic, and almost deliberately so.
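To make 'deliberately non-deterministic' concrete, here is a minimal sketch in Python of temperature-based sampling, the mechanism most large language models use to choose their next word. The vocabulary and scores here are invented for illustration; the point is that the randomness is a design choice, not a defect.

```python
import math
import random

# Hypothetical scores a model might assign to candidate next words.
# A traditional program would always pick the highest-scoring option;
# a language model deliberately samples, so repeated runs can differ.
logits = {"yes": 2.0, "no": 1.5, "maybe": 0.5}

def sample(logits, temperature=1.0):
    # Higher temperature flattens the distribution, making unlikely
    # words more probable; as it approaches zero, the choice becomes
    # effectively deterministic (always the top-scoring word).
    scaled = [v / temperature for v in logits.values()]
    total = sum(math.exp(v) for v in scaled)
    weights = [math.exp(v) / total for v in scaled]
    return random.choices(list(logits), weights=weights)[0]

# Five runs on identical input, five potentially different answers.
print([sample(logits) for _ in range(5)])
```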
However, the general public have not been taught to differentiate between the two. And in fairness, neither have most of us.
So there is a challenge here: the scenario pits a person who knows they have a fallible memory against a computer which that person believes has a perfect memory and does not wilfully deceive or make errors of judgement.
In some ways, then, there is something new here. The relationship between people and those in positions of authority has been explored many times over; however, the relationship between a person and a system about whose fallibility the person may hold preconceived ideas is new, and probably needs research.