This is quite an old concept.
Almost 30 years ago I worked on a compiler for dbC (https://ieeexplore.ieee.org/document/279474). A key feature of this language variation was arbitrary length variables. The initial motivation was SIMD, then quickly FPGAs.
Since the hardware structure has no inherent word size, a language that allowed exactly the desired precision resulted in code that was smaller and faster. And for a certain class of problem that relied on modulus, it resulted in significantly clearer code.
The problem was that for all other types of problems, allowing arbitrary precision was a huge distraction. Programmers micro-optimized the range, and then were bitten by bugs or unexpected behavior. A 32-bit variable is a huge waste when you are typically iterating to 100, and still a huge waste when you change that to 500, but you don't have to worry about a u_int8 (or the equivalent u_int7) biting you in the ass when the change is made.
Back then it wasn't a stupid idea. It was a research project that happened to produce a negative result. Today...