Robots.txt is a request, not a technical measure
No browser currently uses robots.txt as a means of preventing ordinary users from accessing web pages. It exists only to ask *robots* not to visit certain pages.
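For illustration, a minimal robots.txt asking all robots to stay out of one directory (the path here is made up) looks like this:

```
User-agent: *
Disallow: /private/
```

A well-behaved crawler reads this file and voluntarily skips anything under /private/; nothing in the file itself, or in HTTP, enforces it.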
Equally, as far as I'm aware, an end user of archive.org has no *ability* to look up the original site's robots.txt file, and there is certainly no compulsion on them to do so.
It might have been more plausible if the site owners had tried to sue archive.org for ignoring the robots.txt when it originally crawled the site (though I suspect the fault probably lay with the host site failing to deliver the robots.txt when requested). But to even consider suing an archive.org end user for accessing public domain material is ridiculous in the extreme, and very strongly suggests the action of a drowning man clutching at straws.
However, robots.txt is surely just a recommendation, followed by many but not all robots; it is not a technical measure. If I put a number of different coloured notices on a board, together with another notice which says "please do not read any yellow notices", that is surely just a request, not a technical measure, because there is absolutely nothing technical *stopping* me from reading the yellow notices. If, on the other hand, I had distinguished the yellow notices by putting them into sealed envelopes, that might indeed be viewed as a proper technical measure.
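To make the point concrete, here is a minimal Python sketch (the bot name and URL are made up) showing that honouring robots.txt is an opt-in courtesy: a polite crawler checks the rules itself, while an ordinary HTTP request never consults them at all.

```python
import urllib.robotparser

# The sample robots.txt from above, asking all robots to keep out of /private/
rules = [
    "User-agent: *",
    "Disallow: /private/",
]

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules)

# A polite crawler *chooses* to ask before fetching a page...
print(parser.can_fetch("PoliteBot", "https://example.com/private/page.html"))
# -> False: the crawler voluntarily stays away.

# ...but an ordinary client can simply request the page anyway, e.g.:
#   urllib.request.urlopen("https://example.com/private/page.html")
# The server will happily serve it unless it enforces real access
# control (authentication, IP restrictions, and so on).
```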
As I see it, robots.txt is only a "please do not read" notice, and if you really don't want people seeing your pages you ought to use a proper technical measure (such as password protection) rather than merely posting such a notice on the site.