strlen returns, essentially, a byte count until the null terminator.
For ASCII and other character sets where one byte maps to one character, this is the same as the character count.
The UTF8 encoding scheme of the Unicode standard, however, uses more than one byte to represent most characters. (The bottom 128 characters are identical to ASCII and take a single byte; characters from 0x80 up to 0x7FF require two bytes, characters from 0x800 up to 0xFFFF require three bytes, and any character over 0xFFFF requires four.)
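A quick sanity check of those byte counts, assuming a C11 compiler (where u8 string literals are guaranteed to be UTF8-encoded):

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    printf("%zu\n", strlen(u8"A"));          /* 1 byte:  0x41, plain ASCII    */
    printf("%zu\n", strlen(u8"\u00E9"));     /* 2 bytes: 0xE9, e with acute   */
    printf("%zu\n", strlen(u8"\u20AC"));     /* 3 bytes: 0x20AC, euro sign    */
    printf("%zu\n", strlen(u8"\U0001F600")); /* 4 bytes: 0x1F600, emoji face  */
    return 0;
}
```

Each line prints the byte count of what a user would see as a single character.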
The main emoji blocks sit above 0x1F300, so a base emoji character requires four bytes in UTF8. However, there's a Zero Width Joiner character (0x200D - three bytes in UTF8) that can be used to combine multiple emoji together to create a composite glyph.
So for the single emoji character "family: man, woman, girl, boy", you would encode it with the following Unicode characters: 0x1F468 0x200D 0x1F469 0x200D 0x1F467 0x200D 0x1F466, which when encoded as UTF8 (an exercise I leave for the reader) results in a 25-byte string (four 4-byte emoji plus three 3-byte joiners) - plus the null terminator.
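If you'd rather let the compiler do the exercise, here is a minimal sketch, again assuming C11 u8 literals:

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    /* 0x1F468 ZWJ 0x1F469 ZWJ 0x1F467 ZWJ 0x1F466 - man, woman, girl, boy */
    const char *family = u8"\U0001F468\u200D\U0001F469\u200D\U0001F467\u200D\U0001F466";
    printf("%zu\n", strlen(family)); /* prints 25 */
    return 0;
}
```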
So in my original dummy example, you could type a single Unicode emoji and the password system would accept it as sufficiently long.
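For concreteness, here is a sketch of the kind of naive check that example implies; `password_long_enough` is a hypothetical name, not anything from a real codebase:

```c
#include <stdbool.h>
#include <string.h>

/* Counts bytes, not characters - exactly the assumption that breaks. */
bool password_long_enough(const char *password) {
    return strlen(password) >= 8;
}
```

The family emoji above is one glyph on screen but 25 bytes in memory, so it sails through a minimum-of-eight check.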
A lot of operating systems use UTF8 as their standard encoding scheme these days. I wonder how much broken software out there assumes that one byte equals one character?