Wait how long? "some algorithm" to do what?
If you'd waited until MD5 and SHA-1 were broken to decide what cryptographic hash to use, you'd have waited until SHA-2 was standardized in 2001 to do any asymmetric cryptography, HMACs, etc.
There are certain kinds of proofs of security that we can, in principle, find or create (depending on your stance on mathematical Platonism) for algorithms. We can say "under these assumptions, we can prove a lower bound on the amount of work needed to reverse this construction without the secret", for example. We can talk about Random Oracle proofs and the like.
We can't prove something is secure in an absolute sense, because 1) that would involve either proving a negative (there is no viable attack) or exhausting all possibilities; and 2) there's no such thing as "secure in an absolute sense". It's a nonsense concept.
And we use lots of cryptography which doesn't have particularly strong proofs. There's no proof of the hardness of factoring; there's just no published algorithm for general integer factoring that's better for large integers than GNFS. (There are special cases where other algorithms win, such as when the factors are relatively close to one another and Fermat's method applies.)
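To make that special case concrete, here's a minimal sketch (my own illustration, not anyone's production code) of Fermat's factorization method, which writes an odd N as a² − b² = (a − b)(a + b) and finishes almost instantly exactly when the two factors are close together:

```python
import math

def fermat_factor(n: int) -> tuple[int, int]:
    """Factor an odd composite n by searching for an a with a^2 - n a perfect square."""
    assert n % 2 == 1, "Fermat's method needs an odd n"
    a = math.isqrt(n)
    if a * a < n:
        a += 1  # start at ceil(sqrt(n))
    # Fast when the factors are close (few steps), but can take on the
    # order of sqrt(n) steps in the worst case -- hence "special case".
    while True:
        b2 = a * a - n
        b = math.isqrt(b2)
        if b * b == b2:
            return a - b, a + b
        a += 1

# 101 and 103 are adjacent primes, so this succeeds on the first step:
print(fermat_factor(101 * 103))  # -> (101, 103)
```

This is exactly why RSA key generation rejects primes that are too close together.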
That's why we have the multi-round NIST competition for PQC, which is now in its third round. And things continue to shake out; Rainbow was broken just a few weeks ago.
But we can say some useful things about PQC. Like, for example, that if there's an algorithm in complexity class BQP for solving the relevant lattice problems, then the polynomial hierarchy collapses, which would be a Pretty Big Deal and seems Rather Unlikely.
The oldest PQC schemes, McEliece and NTRU, look to be reasonably secure. They've received a lot of attention. The problem is they're expensive. So people come up with related schemes that use smaller keys or smaller signatures or are faster or whatever, and then other people try to break them. (Well, that's one problem. They also don't have some of the security results we'd be happy to see.)
Many of these schemes are variants on McEliece, which is conceptually pretty simple. You start with the generator matrix of an error-correcting code that has an efficient decoder. You disguise that matrix with a permutation P and an invertible linear transformation S, and publish the scrambled result; P and S become the private key. To encrypt, the sender encodes a message with the public matrix and deliberately injects some errors. If you know P and S, you can undo the scrambling and use the code's decoder to correct out the errors; without them, the public matrix looks like a random code, which is hard to decode.
I think it's a little early to be advertising baked-in PQC, since the NIST competition is still running. Even if we get a cryptographically relevant quantum computer in, say, the next few years, breaking RSA or ECC with decent key sizes will be expensive; attackers won't be breaking everything left and right. And applications that deal directly with cryptographic resources such as keys and signatures will probably need changes to handle the huge keys and signatures typical of these algorithms, so organizations will be slow to switch to PQC.