There are so many things to discuss here.
You appear to concentrate on input checks. That is extremely important, but it is not the only source of security problems.
You have developed a threat model and you check your code against it, which is more than most of us do. But have you had a third party review your threat model to see whether there are attack surfaces you had not thought of? Has your code been subject to a security audit?
If you use a compiled language, have you checked for Compiler Introduced Security Bugs (CISB)? Your compiler may optimise your security-related code away.
If you use an interpreted language, how do you ensure that the interpreter on the end user's system is secure?
Bruce Schneier said years ago “Security is a process, not a product.”
Could you provide links to your website so interested parties can look at your product to confirm that it is secure?
Here are links that you may find interesting.
Source of the Bruce Schneier quote: https://www.schneier.com/books/secrets-and-lies-pref/
Short USENIX talk on CISB: https://www.youtube.com/watch?v=ZLPwy9bnov8
Example of CISB in TrueCrypt, from "Security Now":
> We also talked about, and VeraCrypt had fixed and these auditors verified - and you'll remember this one, Leo, where - TrueCrypt is written in C. And the authors were using the memset function, which basically allows - memset is a C function or a library function which allows you to say "Set the following block of memory to this value." And typically it's zero. And so you want to zero out or null a block of sensitive data like the password, or like the master key or something. I mean, and so secure systems are often having to have transient, highly sensitive data, and they're often allocating it dynamically. So you say, "give me a buffer." You ask the operating system for a block of memory. And it says, okay, here's a pointer to the block of the size you requested. You're then able to use that until you free it, that is, you essentially tell the operating system, "I'm done using that block of memory, thank you very much."
> So what the programmers of TrueCrypt did was they were careful, because they don't know what the operating system is going to do with the memory that they give back to it, they were careful to zero it, to write zeroes across the buffer, then return it to the operating system so that memory that might have had something sensitive didn't just go floating around and may be available to somebody else. The C compiler, though, as one of its optimization strategies, is looking for things that don't do anything. And some clever programmer somewhere said, oh, look. This memory is being zeroed and then immediately returned to the operating system. Well, so there's no purpose to zeroing it. That must have just been a mistake. And so the C compiler optimized out.
> Leo: They didn't want to do that.
> Steve: Yeah, that serves no purpose. You've zeroed the memory and given it away.
> Leo: Isn't that a funny error. Wow. Too helpful.
> Steve: Yeah. So there is a secure zero memory function which doesn't have this problem. And so one of the early things that VeraCrypt did was to switch over wherever memset was being used to this secure zero memory function which doesn't risk being optimized to nothing by the compiler.
> And there were too many subtle TrueCrypt kernel driver problems fixed to mention that were fixed by VeraCrypt since it inherited TrueCrypt. And I won't go through them all. I mean, there were just a collection of little subtle things. But the takeaway is these guys, the auditors, really did a nice job. There's no way you could come away from this thinking, wow, this hasn't been scrutinized deeply.
https://www.grc.com/sn/sn-582.htm
The security audit of VeraCrypt from 2020: https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/Publications/Studies/Veracrypt/Veracrypt.pdf
xkcd on perceived versus real threat models, "Security": https://xkcd.com/538