Software/hardware paradigm is the problem
We have ignored this problem for too long. The problem is insufficient architectures coupled with low-level programming that targets the weaknesses in those architectures. C.A.R. Hoare knew in the early 1960s that software should be verified, and he built verification, such as array bounds checks, into Elliott ALGOL. Those checks were dynamic, and thus slowed processing down. Performance was critical in the 1960s, so the scientific programming/hardware community won out and left the checks out, especially software checks like those in Elliott ALGOL.
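To make that trade-off concrete, here is a minimal sketch in C (purely illustrative - Elliott ALGOL compiled such checks in automatically) of what a dynamic bounds check looks like and why it costs cycles: every single access pays for a comparison.

    #include <stdio.h>
    #include <stdlib.h>

    /* A dynamically checked array read: each access pays for a
       comparison, which is the runtime cost the 1960s scientific
       community refused to accept. */
    static int checked_get(const int *arr, size_t len, size_t i)
    {
        if (i >= len) {                 /* the dynamic bounds check */
            fprintf(stderr, "subscript %zu out of range\n", i);
            exit(EXIT_FAILURE);
        }
        return arr[i];
    }

    int main(void)
    {
        int data[4] = {10, 20, 30, 40};
        printf("%d\n", checked_get(data, 4, 2)); /* fine: prints 30 */
        printf("%d\n", checked_get(data, 4, 7)); /* trapped cleanly, not silent garbage */
        return 0;
    }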
However, Bob Barton at Burroughs decided in 1962 that such checks would be better done in hardware, for both speed and security. They still came at a small performance penalty, but they were an example of complete systems design, not just CPU design. Such checks are not merely software verification checks; in a multitasking environment they are critical security checks. Burroughs released the B5000 in 1963, and these machines live on in Unisys ClearPath MCP. The scientific community hated the B5000 because it spent cycles on built-in security checks. (Burroughs later came out with a scientific processor, the BSP, as a backend to the B5000 line - watch for this architecture in quantum computing.)
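As a rough software analogy (the names and struct here are mine, not Burroughs'), a B5000-style data descriptor carries the base address and length together, so the machine - not the programmer - validates every indexed access:

    #include <stdio.h>
    #include <stdlib.h>

    /* Loose analogy of a B5000 data descriptor: the bounds travel
       with the reference, and every access is validated. On the real
       machine this validation is wired into the CPU and raises an
       invalid-index interrupt; here it is just an if statement. */
    typedef struct {
        int    *base;    /* where the data lives */
        size_t  length;  /* its extent, inseparable from the pointer */
    } descriptor;

    static int load(descriptor d, size_t i)
    {
        if (i >= d.length) {
            fprintf(stderr, "invalid index interrupt: %zu\n", i);
            exit(EXIT_FAILURE);
        }
        return d.base[i];
    }

    int main(void)
    {
        int words[3] = {1, 2, 3};
        descriptor d = { words, 3 };
        printf("%d\n", load(d, 1)); /* ok */
        printf("%d\n", load(d, 9)); /* the hardware would trap this */
        return 0;
    }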
Then it was decided that we could statically check software with type checks. Programmers hated types ("why should we have training wheels?") - but that thinking rests on a completely false analogy.
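For contrast with the dynamic checks above, here is a hypothetical fragment that static typing kills before it ever runs - no run-time cycles are spent, because a conforming compiler must diagnose the mismatch (modern GCC and Clang reject it outright):

    /* Deliberately does not compile: passing an int* where a
       struct account* is required is a constraint violation the
       compiler catches before the program exists. */
    struct account { double balance; };

    double read_balance(const struct account *a) { return a->balance; }

    int main(void)
    {
        int not_an_account = 42;
        return (int) read_balance(&not_an_account); /* rejected at compile time */
    }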
Fast forward to 1969: Dennis Ritchie threw out most of the advances of ALGOL over FORTRAN, keeping only the better ALGOL-based syntax and block structure. C was built around low-level CPU instruction sets (the PDP-11; even the awful ++ operator came from B on the earlier PDP-7). That was a strength of C, but also its prime weakness. Yes, letting the programmer do anything appealed to programmers' egos (and it is also great to teach this level to programmers, though that would be the true equivalent of training wheels), but it has proven to be completely the wrong approach to non-scientific, everyday computing. End-user computing needs to be more secure than anything else; server computers, by contrast, are run by professionals with tight controls. (Linux is good there, but not appropriate for end-user systems - another, although related, topic.)
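To see what "letting the programmer do anything" means in practice, here is the canonical C defect. It typically compiles without complaint and silently tramples adjacent memory:

    #include <string.h>

    /* C accepts this without protest: strcpy writes far past the end
       of buf, corrupting whatever the compiler happened to place next
       to it. No bounds exist anywhere for anything to check. */
    int main(void)
    {
        char buf[8];
        strcpy(buf, "this string is much longer than eight bytes");
        return 0;
    }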
C's philosophy was 'trust the programmer'. In retrospect that was naive, because not all programmers have noble intentions, and even the well-intentioned make mistakes. Today it is at best a foolish philosophy, more likely a negligent one, and given the security problems it causes, it should be treated as criminally negligent. If engineers built such a sloppy bridge, they'd be gaoled.
We could build verification into the code generated by compilers, but even that is not good enough: we need to build verification checks into CPUs, as in the B5000. We have plenty of silicon on a chip to do it now. Programmable Logic Controller (PLC - the hardware that directly controls physical-world objects) designers are coming to realise this because of Stuxnet, but we now need to apply it to rational CPU design as well. Security experts and CPU designers need to study the B5000 architecture to understand the basis of what to do in the future. (The current release is downloadable from Unisys and runs on PCs.)
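Compiler-generated verification does already exist - GCC and Clang, for example, can insert the checks themselves via their sanitizers - which shows both its value and its cost, the cost the B5000 moved into hardware. A small sketch:

    /* Build with, e.g.:  cc -fsanitize=address,undefined demo.c -o demo
       The compiler then inserts bounds and overflow checks, and the
       bad read below aborts with a diagnostic instead of silently
       returning garbage. The checks cost cycles - the same trade-off
       the B5000 resolved by putting them in the CPU itself. */
    int main(void)
    {
        int a[4] = {0, 1, 2, 3};
        volatile int i = 4;  /* keep the index out of the compiler's sight */
        return a[i];         /* out-of-bounds read, caught by the sanitizer */
    }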
Of course, there are security flaws at higher levels of abstraction, but until we build a sufficiently strong foundation, everything above it remains vulnerable at the lowest levels.
Make no mistake: the big elephant in the room is low-level programming in languages like C, C++, and assembler. C, C++, and most CPU architectures must be replaced, and the sooner the better. Stop ignoring the elephant in the room.
Note: this is not a popular message. Like climate change, it is unpopular with many people, who will try almost anything (mostly bogus) to deny it. The problem is that they are having fun, and those bearing messages about security - or climate change (planetary security) - are unpopular party poopers.