The history of the term "bug" is itself interesting: it is popularly traced to actual insects getting into early computers and causing faults.
I suspect that, in theory, since all code is essentially an expression of logic, it should be possible to implement that logic perfectly and test the logic separately from the code. In practice, though, I think that time has passed: modern programs are simply too extensive. We are increasingly seeing the same in hardware, which is essentially software in silicon, for the same reasons.
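As a minimal sketch of what "testing the logic separately" could look like, consider a pure function kept apart from any I/O or hardware concerns so its behaviour can be checked exhaustively. The function and the property below are illustrative assumptions, not anyone's real API; the sketch also shows why the approach scales poorly, since it only works because the input domain is tiny.

```c
/* Sketch: isolate a piece of pure logic so it can be verified on its own.
 * The function and the property checked here are illustrative assumptions. */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Pure logic: add two bytes without ever wrapping around. */
static uint8_t saturating_add_u8(uint8_t a, uint8_t b)
{
    unsigned sum = (unsigned)a + (unsigned)b;
    return (sum > UINT8_MAX) ? UINT8_MAX : (uint8_t)sum;
}

int main(void)
{
    /* The domain is small enough to check every input pair:
     * the result is either the exact sum or the saturated maximum. */
    for (unsigned a = 0; a <= UINT8_MAX; a++) {
        for (unsigned b = 0; b <= UINT8_MAX; b++) {
            uint8_t r = saturating_add_u8((uint8_t)a, (uint8_t)b);
            assert(r == UINT8_MAX || r == a + b);
        }
    }
    puts("logic verified exhaustively");
    return 0;
}
```

For a 16-bit or 32-bit domain, let alone a whole program, this kind of exhaustive separate verification of the logic stops being feasible, which is the point above.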
Best practices such as static code analysis and extensive testing have significantly reduced many of the most common mistakes. But whole classes of errors remain common, as the recurring weakness lists (input validation, memory use, etc.) keep pointing out, and those errors are often less a matter of individual programming mistakes than an inevitable consequence of complexity.
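To make the error classes concrete, here is a hypothetical sketch of the sort of input-validation and memory-use defect those lists describe, alongside a bounded version; the names, buffer size, and API are assumptions made up for illustration.

```c
/* Sketch of a classic unchecked-input / buffer-overrun defect and a
 * validated alternative. All names and sizes here are hypothetical. */
#include <stdio.h>
#include <string.h>

#define NAME_MAX_LEN 16

/* Risky: trusts the caller's input length. strcpy() will write past the
 * 16-byte buffer if `input` is longer, corrupting adjacent memory
 * (the classic unchecked buffer copy that static analyzers flag). */
void copy_name_unchecked(char dst[NAME_MAX_LEN], const char *input)
{
    strcpy(dst, input);               /* no bounds check */
}

/* Safer: validate the input length before touching the buffer. */
int copy_name_checked(char dst[NAME_MAX_LEN], const char *input)
{
    size_t len = strnlen(input, NAME_MAX_LEN);
    if (len >= NAME_MAX_LEN)
        return -1;                    /* reject oversized input */
    memcpy(dst, input, len + 1);      /* length is known to fit, incl. NUL */
    return 0;
}

int main(void)
{
    char name[NAME_MAX_LEN];
    if (copy_name_checked(name, "short enough") == 0)
        printf("accepted: %s\n", name);
    if (copy_name_checked(name, "this string is definitely too long") != 0)
        puts("rejected oversized input");
    return 0;
}
```

A good static analyzer will flag the unchecked variant; the checked one survives review because the validation is explicit and local.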
And then there's the multimedia world in which we live, which means that, at some point, control of the hardware is handed over to the software. Not only does this increase complexity, it also significantly enlarges the attack surface.