
At last!
...a scientific explanation for the appallingly low quality of my collection of free-to-web grumble.
Although I'm not sure HOW people would compress a movie?
For all the fuss over algorithms and machine learning, computation can't quite compete with people when it comes to lossy image compression, it is claimed. Comp-sci boffins from Stanford University and student interns from three San Francisco Bay Area high schools in the US devised a system to assess how code instructions …
'Although I'm not sure HOW people would compress a movie?' - you get loads of people to do it, each one frame at a time, and then have a human multiplexing system..... Oh, you wanted real-time GOPs and I, B and P frames?...... ah.
Joking aside - this reminds me that the first 'computers' were good, fast human mathematicians, if I've heard the story correctly.
They were not mathematicians--the mathematicians were in fact the 'programmers'. (Or the engineers were, I presume.) The 'programmer' was someone who broke a complex computation down for the 'computers'--who were generally high-school educated (when that counted for something). My understanding is that the programs were executed multiple times and the (intermediate) results cross-checked.
Based on the headline, I figured there was some sort of AI trying to squeeze down pics using whatever AI magic it had, and that this was then compared to human-written compression schemes like JPEG and the like.
But then the article starts going on about high school students editing photos of giraffes and I lost the plot.
Can anyone explain better?
Here's how it works: take an image, then have one person describe to another person what can be removed or reduced from the image without turning it into garbage. This works better than having a computer compress the image using an algorithm.
Just added a bit more to the piece to explain it.
C.
I went ahead and read the entire paper, and the whole thing still makes absolutely no sense.
They took a reference image and had a human manually recreate the reference image's composition using bits from other photos. They refer to this process as "human compression." This is not, under any definition I'm aware of, "compression."
Compression takes an image and reduces the number of bytes needed to store it; in the case of lossy compression, it does this by discarding some of the information in the original. Reconstructing the image, on the other hand, creates a new and separate image that bears a superficial resemblance to the original but does not contain any information from the original image.
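To put that distinction in concrete terms, here's a rough Python/Pillow sketch - the filename and quality setting are placeholders I've made up, not anything from the paper. A lossy encoder shrinks the byte count by discarding detail, but every byte of its output is still derived from the original pixels:

# Minimal sketch of ordinary lossy compression with Pillow.
# "giraffe.png" and quality=30 are arbitrary placeholders.
from PIL import Image
import io

original = Image.open("giraffe.png")

# Lossless baseline: PNG keeps every pixel value exactly.
png_buf = io.BytesIO()
original.save(png_buf, format="PNG")

# Lossy version: JPEG at a low quality setting quantises away fine detail
# to shrink the byte count.
jpg_buf = io.BytesIO()
original.convert("RGB").save(jpg_buf, format="JPEG", quality=30)

print("PNG bytes :", len(png_buf.getvalue()))
print("JPEG bytes:", len(jpg_buf.getvalue()))

The JPEG is smaller because information was thrown away, yet it is still unambiguously a compressed copy of the input - which is exactly what a cut-and-paste reconstruction isn't.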
What they appear to be getting at is that you might be able to "compress" images by working out a way to *describe* the image such that a computer program can assemble bits of *other* pre-existing images into something that kinda sorta maybe resembles the original - like some sort of automatic digital collage. Which even if possible is pointless - nobody wants to have images that sort of look like a thing, they want images that look exactly like the thing.
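For anyone curious what that 'automatic digital collage' might look like in code, here's a toy sketch of the general idea - to be clear, this is my own illustrative guess, not the researchers' method, and the filenames, block size and colour-matching rule are all made up. It rebuilds a reference image purely out of blocks cut from a different donor image:

# Toy "collage" reconstruction: rebuild reference.png using only 16x16
# blocks taken from donor.png, matched by average colour. Illustrative only.
import numpy as np
from PIL import Image

BLOCK = 16

ref = np.asarray(Image.open("reference.png").convert("RGB"), dtype=np.float32)
donor = np.asarray(Image.open("donor.png").convert("RGB"), dtype=np.float32)

# Crop both images to a whole number of blocks.
rh, rw = (ref.shape[0] // BLOCK) * BLOCK, (ref.shape[1] // BLOCK) * BLOCK
dh, dw = (donor.shape[0] // BLOCK) * BLOCK, (donor.shape[1] // BLOCK) * BLOCK
ref, donor = ref[:rh, :rw], donor[:dh, :dw]

# Build a library of donor blocks and their mean colours.
donor_blocks = [donor[y:y + BLOCK, x:x + BLOCK]
                for y in range(0, dh, BLOCK)
                for x in range(0, dw, BLOCK)]
donor_means = np.array([b.mean(axis=(0, 1)) for b in donor_blocks])

collage = np.zeros_like(ref)
for y in range(0, rh, BLOCK):
    for x in range(0, rw, BLOCK):
        target = ref[y:y + BLOCK, x:x + BLOCK].mean(axis=(0, 1))
        # Paste in the donor block whose average colour is closest.
        idx = np.argmin(((donor_means - target) ** 2).sum(axis=1))
        collage[y:y + BLOCK, x:x + BLOCK] = donor_blocks[idx]

Image.fromarray(collage.astype(np.uint8)).save("collage.png")

The output vaguely resembles the reference but contains none of its bytes, which is exactly why calling the process 'compression' grates.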
Ah well, thanks for the feedback. It's just research we thought people would find interesting. To be fair, we do say it is impractical - it's just an amusing way to 'compress' images.
I've added more background to the piece so it's more obvious to folk who don't read the paper or the code.
C.