Paranoia is a good thing
The main rule when dealing with user input in one's application has always been to never trust said user. Expect the worst kind of mangled, hopelessly incorrect data. Ergo one sanitises incoming data and bails out early if something seems fishy. With third-party libraries and code it's no different. Even for one's own code and libraries, checking input data (whether it arrives as a function parameter or as the return value of some method call) has to be standard, not optional.
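As a minimal sketch of what bailing out early can look like (the parameter, limits, and error messages here are all invented for illustration), consider validating a port number at the boundary before anything else gets to touch it:

```java
public final class UserInput {
    // Validate untrusted input at the boundary and bail out early
    // with a specific error instead of letting bad data propagate.
    static int parsePort(String raw) {
        if (raw == null || raw.isBlank()) {
            throw new IllegalArgumentException("port must not be empty");
        }
        final int port;
        try {
            port = Integer.parseInt(raw.trim());
        } catch (NumberFormatException e) {
            throw new IllegalArgumentException("port is not a number: " + raw, e);
        }
        if (port < 1 || port > 65535) {
            throw new IllegalArgumentException("port out of range: " + port);
        }
        return port;
    }

    public static void main(String[] args) {
        System.out.println(parsePort("8080")); // prints 8080
        parsePort("a lot");                    // rejected immediately, with context
    }
}
```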
Clearly Facebook's library did not bother checking its input, which then cascaded into taking down the rest of the application with it. Of course, with JavaScript and other weakly, dynamically typed languages, static type validation is tossed out of the window entirely, with things seemingly working fine until the runtime hits a type conversion that is impossible and throws an exception. Even strongly typed languages like Kotlin, Swift, and Rust can only check the types the compiler knows about; data crossing an API or process boundary still has to be validated at runtime.
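The same failure mode can be provoked even in statically typed Java the moment type information is erased at a boundary. In this contrived sketch, the hypothetical fetchConfigValue stands in for any value deserialised from an untrusted source; the unchecked cast compiles without complaint and only blows up when it executes:

```java
public class RuntimeTypeDemo {
    // Hypothetical stand-in for a value deserialised from an untrusted
    // source: the caller expects an Integer, but a String arrives instead.
    static Object fetchConfigValue() {
        return "3000";
    }

    public static void main(String[] args) {
        Object value = fetchConfigValue();
        // Compiles fine; throws ClassCastException only when this line runs.
        Integer timeout = (Integer) value;
        System.out.println(timeout);
    }
}
```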
With a language like Java, where every object lives behind a reference that may be null, one technically has to validate every incoming reference parameter against null. Since nobody ever does this, NullPointerExceptions are still super-common in Java code. With dynamically typed languages like JavaScript and Python, the really fun bugs only surface when a stacktrace gets barfed at you (Python) or the app fails silently (JavaScript) while the code runs in production (because testing & staging is for losers).
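A minimal sketch of doing that validation anyway, using the standard library's Objects.requireNonNull so the failure happens immediately and with a useful message rather than deep inside some later call (the Greeter class is invented for illustration):

```java
import java.util.Objects;

public class Greeter {
    private final String name;

    // Fail fast: reject a null reference at the boundary with a clear
    // message instead of letting a bare NullPointerException surface later.
    Greeter(String name) {
        this.name = Objects.requireNonNull(name, "name must not be null");
    }

    String greet() {
        return "Hello, " + name;
    }

    public static void main(String[] args) {
        System.out.println(new Greeter("world").greet()); // fine
        new Greeter(null); // throws NullPointerException: name must not be null
    }
}
```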
Does anyone ever really trust code someone else wrote, or worse: code they wrote themselves?