A surgeon insisted that his students work from drawings they had each made of a patient, not from photographs. This ensured that the student had actually noticed and recorded the relevant details, rather than relying on some other entity (in that case, a camera) to 'notice things'.
I do hope there were extensive trials, and that there are ongoing checks, to ensure that what a clinician would record from an examination and interview is correctly 'captured' by GPT-4. Its propensity for 'making things up' is worrying at the best of times, but in a medical setting it is downright scary.
e.g., https://www.theregister.com/2023/05/31/texas_ai_law_court/
"After a New York attorney admitted last week to citing non-existent court cases that had been hallucinated by OpenAI's ChatGPT software, a Texas judge has directed attorneys in his court to certify either that they have not used artificial intelligence to prepare their legal documents – or that if they do, that the output has been verified by a human."