Cool... but
JPEG is a very simple format, with the exception of the headers (more accurately, the trailers), which are a nightmare. It's an early, simple DCT-based encoding built around 8x8 macroblocks. Implementing a cleanroom JPEG decoder is straightforward and easy to do cleanly.
JPEG's primary form of loss comes from quantization and "zig-zag truncation". The image is transformed from the spatial domain to the frequency domain, which is itself an essentially lossless operation; the frequency representation then lets you simply dump high-frequency detail from the macroblocks and represent the remaining coefficients in coarse steps rather than exact values of what is generally a Gaussian-like distribution.
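To make that concrete, here is a minimal numpy/scipy sketch of the idea: transform an 8x8 block with a 2D DCT, quantize the coefficients with a step table, and truncate the zig-zag tail. The block contents, the quantization table, and the `keep` cutoff are all illustrative assumptions, not values from the JPEG spec.

```python
import numpy as np
from scipy.fft import dctn, idctn

# Hypothetical 8x8 block of samples (level-shifted around 0, as JPEG does by
# subtracting 128 from 8-bit values): smooth structure plus a bit of noise.
rng = np.random.default_rng(0)
block = 40 * np.cos(np.arange(8)[:, None] * np.pi / 8) \
        + rng.integers(-20, 20, size=(8, 8))

# Spatial -> frequency domain: the 2D DCT-II. The transform itself is
# essentially lossless; the loss comes from what happens to the coefficients.
coeffs = dctn(block, norm="ortho")

# Quantization: divide by a step table and round. Bigger steps at higher
# frequencies discard more detail. (Illustrative table, not the Annex K one.)
qtable = 1 + 2 * (np.arange(8)[:, None] + np.arange(8)[None, :])
quantized = np.round(coeffs / qtable)

# "Zig-zag truncation": walk the coefficients from low to high frequency and
# drop the tail, i.e. zero everything past the first `keep` coefficients.
def zigzag_order(n=8):
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

keep = 20
for i, (r, c) in enumerate(zigzag_order()):
    if i >= keep:
        quantized[r, c] = 0

# Decoder side: dequantize and invert the DCT.
reconstructed = idctn(quantized * qtable, norm="ortho")
print("max abs reconstruction error:", np.abs(reconstructed - block).max())
```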
The "compression" aspect in the sense that after we're done just arbitrarily dumping data is based on quite old and inefficient entropy codings.
Most other DCT-based compressors could easily implement the feature you're talking about and achieve similar results. In fact, it would require no changes to the standards, and with only a little work you could run some mathematical magic to merge 8x8 macroblocks into larger macroblocks with negligible loss. All that's required to do what you're suggesting is an abortive decoder that stops in the frequency domain (i.e. pre-IDCT) and passes that data to the compressor of any other standard. The one requirement is that the decoders for those codecs make use of the standard (thanks to JPEG) DCT coefficients in the process.
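A rough sketch of the block-merging part is below, assuming you already have the dequantized 8x8 coefficient blocks from such an abortive decoder (libjpeg exposes exactly this via jpeg_read_coefficients). Because the DCT is linear and orthonormal, round-tripping through the spatial domain in floating point loses essentially nothing; a production transcoder would use a direct frequency-domain mapping, but this shows the "negligible loss" claim. Function and variable names are illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

def merge_blocks_2x2(c00, c01, c10, c11):
    """Merge four dequantized 8x8 DCT coefficient blocks into one 16x16
    DCT block. Layout: c00 top-left, c01 top-right, c10 bottom-left,
    c11 bottom-right. Done via a float-precision spatial round trip."""
    spatial = np.empty((16, 16))
    spatial[:8, :8] = idctn(c00, norm="ortho")
    spatial[:8, 8:] = idctn(c01, norm="ortho")
    spatial[8:, :8] = idctn(c10, norm="ortho")
    spatial[8:, 8:] = idctn(c11, norm="ortho")
    return dctn(spatial, norm="ortho")

# Toy check: four random coefficient blocks merge and round-trip with error
# on the order of floating-point precision.
rng = np.random.default_rng(1)
blocks = [rng.normal(size=(8, 8)) for _ in range(4)]
merged = merge_blocks_2x2(*blocks)
back = idctn(merged, norm="ortho")
print("max abs error vs. original pixels:",
      np.abs(back[:8, :8] - idctn(blocks[0], norm="ortho")).max())
```

The only real loss in such a transcode shows up when the target codec re-quantizes the coefficients; the coefficients themselves carry over intact.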
Here's the kicker... JPEG-XL is a monstrosity of a code nightmare. Implementing all the fantastic features that JPEG-XL's baseline requires means a tremendous amount of code complexity. A hand-written, sandboxed, safe implementation of such a decoder could be very time-consuming: where a good cleanroom JPEG decoder would require a few days to create, a JPEG-XL implementation could take a year. Also, without extensive mathematical analysis, I'm concerned the entropy coder could allow obscure malicious table jumps, which could be quite problematic in the security world.
Finally, there's the performance issue. JPEG-XL isn't nearly as bad as JPEG-2000 (which is a truly amazing standard, if impractical), but writing the large amounts of hand-tuned assembler needed to implement the standard well is going to take some time.
There just isn't a strong enough use case for JPEG XL to justify its existence outside of niche areas such as newspaper photo archives.