An FPGA is not a CPU
When generating code for a typical CPU, it is generally more efficient to work with native integer sizes rather than penny-pinching on word lengths: the CPU has to do extra work to extract values that do not sit on its favoured word boundaries. An FPGA, by contrast, has no optimum word size, and every bit you allocate consumes gate resources. So declaring a five-bit integer type can be more efficient than using a standard eight-bit type, because the larger type commits gate resources that are never needed in practice. This kind of economy rarely happens in normal coding for CPUs. Packing data into smaller sizes is an optimisation for disk storage, perhaps, and there may be some merit in packing if RAM usage is a concern, but in my experience there is always some work unpacking the data before it can be used.
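The unpacking cost mentioned above can be sketched in ordinary CPU-side code. This is a minimal illustration, not taken from the article: the field widths (five, two, and one bit) are arbitrary choices, and the point is simply that every access to a packed field costs a shift and a mask that a byte-aligned field would not need.

```python
# Pack three small fields into a single byte:
# a occupies 5 bits, b 2 bits, c 1 bit.
def pack(a: int, b: int, c: int) -> int:
    assert 0 <= a < 32 and 0 <= b < 4 and 0 <= c < 2
    return a | (b << 5) | (c << 7)

# Unpacking is not free: each field needs a shift and a mask,
# work the CPU would skip entirely for byte-aligned fields.
def unpack(byte: int) -> tuple[int, int, int]:
    return byte & 0x1F, (byte >> 5) & 0x3, (byte >> 7) & 0x1

# Round trip: three fields in one byte instead of three.
assert unpack(pack(19, 3, 1)) == (19, 3, 1)
```

On an FPGA the trade-off inverts: there is no "byte" to align to, so the five-bit field simply becomes five wires, and the shifts and masks cost nothing at all.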
The important thing, of course, is that appropriate type declarations tell the compiler what you want to do. The constraints imposed by your stated intent can then be checked at compile time, and optimised code generated within them.