Nice to see...
...that incompetence knows no international boundaries.
What is the US equivalent of Capita by the way?
The US Patent and Trademark Office (USPTO) has taken a novel approach to dealing with the problem of patent trolls. The solution? Shut everything down. The Patent Application Locating and Monitoring (PALM) database forms the backbone of a distressingly large number of USPTO systems and, during some …
I can't see IBM as an equivalent of Capita, as IBM actually (OK, well, occasionally) makes things. Capita is like a leech feeding on the British taxpayer, sucking up everything it can, lying low for a while and then starting all over again. It has very rarely managed to deliver a final product, and never once on time.
Unfortunately, all the Examiners who can't work effectively will just have to rush through their backlog once the systems come back up. That sort of rush will make it even more likely for BS patents to be excreted out the back end of the USPTO.
I just desperately hope that one day we'll learn what really happened, in an issue of "Who, Me?" This has all the hallmarks of something like running a backup in the wrong direction.
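For anyone who hasn't seen that blunder up close, here's a minimal sketch in Python of how it usually goes; the paths and the mirror-style wipe are pure assumptions for illustration, not anything we know about PALM:

    # Hypothetical "mirror" backup: wipe the target, then copy the source over it.
    import shutil

    LIVE = "/var/lib/palm/data"        # assumed production data directory
    BACKUP = "/mnt/tape_staging/palm"  # assumed backup staging directory

    def mirror(src, dst):
        """Make dst an exact copy of src, destroying whatever was in dst."""
        shutil.rmtree(dst, ignore_errors=True)
        shutil.copytree(src, dst)

    # Intended:      mirror(LIVE, BACKUP)
    # What happened: arguments swapped, so a stale copy clobbers production.
    mirror(BACKUP, LIVE)

One transposition in a cron job and your "backup" is now doing the restoring, nightly.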
The incremental progress the USPTO keeps talking about suggests they restored from tape and are now hand-entering all the lost transactions from log files. At least that's where I'll place my $20 bet.
They must be using the same HP 3PAR systems with the same type of maintenance contract that the ATO was using when it had issues last year.
NOAA's servers are having issues, too, although it's probably unrelated. Specifically, www.aviationweather.gov has been down all day. There is a backup server (bcaws.aviationweather.gov), but it doesn't have all the API endpoints.
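The failover itself is trivial to sketch in Python; the endpoint path below is an assumption for illustration, and, as noted, some endpoints simply don't exist on the backup host, so falling back isn't always enough:

    # Try the primary host first, then fall back to bcaws.aviationweather.gov.
    import urllib.request
    import urllib.error

    HOSTS = ["https://www.aviationweather.gov", "https://bcaws.aviationweather.gov"]

    def fetch(path):
        for host in HOSTS:
            try:
                with urllib.request.urlopen(host + path, timeout=10) as resp:
                    return resp.read()
            except urllib.error.URLError:
                continue  # host down, or this endpoint missing on the backup
        raise RuntimeError(path + " unavailable on every host")

    # Hypothetical endpoint, for illustration only:
    metar = fetch("/api/data/metar?ids=KJFK")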
At around 0530Z, network access to our web provider went down. As of 2000Z, some web access is available but the data are old (last data still from 05Z). The web pages are still unavailable. We are monitoring the situation. An effort to recover lost data will happen once network access is restored and stable.
Remember that quite severe Oracle bug notice that was sent out very recently? And you know those encrypt-everything ransomware variants have been making the rounds... It's a gigantic Oracle installation, so either the storage array(s) went ka-blooie or their DB got trashed through other means, necessitating a full restore. If only they were running "in the Cloud(tm)", they could have just restored from S3 snapshots. *fnar, fnar.
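For the record, the fantasy version of that restore would look something like this; strictly speaking S3 has versioning rather than snapshots, so the closest real thing is an EBS snapshot (which lives in S3), recreated as a volume via boto3. Every ID and the region here is made up:

    # "Just restore from a snapshot" -- the daydream, sketched with boto3.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    vol = ec2.create_volume(
        SnapshotId="snap-0123456789abcdef0",  # hypothetical snapshot ID
        AvailabilityZone="us-east-1a",
        VolumeType="gp2",
    )
    print("Restored volume:", vol["VolumeId"])

Of course, that only helps if the snapshot predates the corruption, which is rather the point.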
Maybe I won't be so surprised when an Amazon Snowmobile is parked in the loading dock...