Think of the HDD makers!
-40% size for similar image quality!?
Google has open sourced a new "lossy" image format known as WebP — pronounced "weppy" — claiming it can cut the size of current web images by almost 40 per cent. CNet revealed the format with a story late this morning, and Google soon followed with a blog post describing the technology, which has been released as a developer …
Baseline Jpeg2000 is explicitly royalty and licence free, but the patent holders haven't put their IP into the public domain.
This is not significantly different from GPL-ed software, which is explicitly available for use and modification, but the copyright remains with the authors precisely so that the agreement that permits free use can be backed up with legal force if necessary.
Talk of submarine patents is just FUD.
Jpeg2000 is the image format used by Second Life--it has a few advantages--but it suffers from some poor-quality code in open-source graphics codecs. One advantage is that you don't have to download the whole file to get a lower-resolution version of the image. But the image can end up as just the low-res version in one corner of the full-size bitmap.
Get smaller files with the same image quality, and the advantages for this sort of graphics-intensive net gaming are obvious. But there would be a huge amount of data to translate, in any existing game, and there's always some quality loss. There are alternatives to Second Life, currently using compatible software. But if this lives up to the hype, is the compatibility worth it?
The patent owners have thus far refused to put it in the public domain. My guess is that JPEG 2000 is the benchmark for Google's new format, which they might actually be using to encourage an opening up of the JPEG 2000 format.
* JPEG 2000 is particularly nice if you have text in your images as text suffers so heavily from artefacts in JPEG.
I Wiki'd JPEG2000 to see a comparison. I noticed there are potential lurking patent issues (doesn't everything these days? <sigh>) which could be an impediment. However, the thing that amused me was the example comparison image... which was exactly the same size as the standard JPEG.
The software I use for doing my JPEGs (PhotoImpact5 - old but reliable) allows you to play with the JPEG type (progressive/standard), the image colour coding (4:2:2 etc) and the quality. The size of my JPEGs depends upon what I'm prepared to sacrifice, and for some things (monochrome scans of letters) you can get away with quite a bit.
Perhaps this is why JPEG2000 didn't take off? Maybe it was additional complication without enough returns to make it worthwhile?
"It's no secret that Google is on a mission to make the web faster — in any way it can."
What a bunch of bastards!
"The faster the web, the more cash Google rakes in."
First, that's not strictly true. There's a point where faster loading hardly matters, because you can only read and see so much so quickly. At that point, increased compression or throughput mainly helps increase options for the web developers, not increase the number of served ads.
At any rate, so what? Google is 'raking in cash' - they're a business. In case you haven't brushed up on your economics lately, that's what businesses -do-. By gratuitously ripping on a company for making money while improving the net for everyone, users/customers or not, all you do is make yourself look like a petulant kid shoving over the chessboard. And you also reduce the credibility of any legitimate criticism of Google that shows up on El Reg, because the very fact that your articles are published in the form they are suggests an inherent bias.
El Reg? Biased? Who would have thought... Anyway, in regards to your first point. You're forgetting that Google is forking out tons of cash to pay off its bandwidth usage. If they can reduce the amount of bandwidth services such as Google Maps and Google Images use then they will be saving a pretty penny or two.
No, you still buy bandwidth, you just start to call it "dark fiber". You do stop worrying about the "Gb" and start thinking in terms of "strands" and sometimes "(Coarse/Fine) wave division multiplexing." You also start to spend lots of time worrying about "idiots with backhoes."
Mine's the one with the OTDR in the pocket.
A bunch of JPEGs, PNGs and GIFs were converted to the new format and, across the lot of them, they saw an average 39% saving. A meaningful test would have been to compare it to JPEG only, since otherwise an unknown proportion of the argument is the senseless "we switched from lossless to lossy and saved a lot of space, hence our lossy format is the best format".
Most JPGs already suck, especially at smaller sizes. Compressing them with ANOTHER layer of lossy compression doesn't seem like a good idea. At least not to an old fogey like me. But then, I think most of the video quality on youtube is unbearably bad compared to standard-def TV, so what do I know? Smeary, indistinct, grainy pictures - that's why we got broadband 10 years ago, innit...
"Most JPGs already suck, especially at smaller sizes. Compressing them with ANOTHER layer of lossy compression doesn't seem like a good idea"
Unless you're directly transcoding a JPEG then that's not how it happens.
Most of the video on YouTube is cack because it's shot through the tiny plastic lens of a £100 cameraphone/digicam at a lower resolution and framerate than standard def TV and with on-the-fly video/audio compression performed by a tiny processor. This probably goes a long way to explaining why it's not quite as good as standard def TV. The compression algorithms themselves are not necessarily at fault, as in this case they are very much limited by the amount of processing power available.
Video quality on YouTube is entirely due to the authoring and mastering processes used prior to upload.
I recently uploaded some footage taken with broadcast HD cameras, down-converted to DVD size, encoded using WebM and uploaded to YouTube. The quality is actually really impressive.
Smeary, indistinct, grainy pictures - that's camera phones and webcams, that is.
Plus, I think you miss the point. It isn't a case of compressing already-compressed JPEG images. It's about compressing newly authored images; that's the way you keep the quality.
To gain popularity, it will have to get into the camera market, and that will depend on how efficient the codec is. Jpeg is trivial to produce a low power hardware codec for, this being essential for small cameras, phones etc.
One quote in the article does make it sound like this would be an additional layer of compression for JPEG.
"Google decided to figure out if there was [sic] a way to further compress lossy images like JPEG to make them load faster"
I nearly came to the same conclusion when I first read it.
Actually, it *is* another layer of compression. From the blog...
"We expect that developers will achieve in practice even better file size reduction with WebP when starting from an uncompressed image."
I read your comment and agreed with it before I read the blog, so I'm as surprised as you are, but I suppose this makes sense in context. After all, in the majority of cases, web sites no longer have the uncompressed image, so "Can it squeeze my existing JPEGs?" is a fair question.
The blog also links to a gallery of comparison images: http://code.google.com/speed/webp/gallery.html. (No Lena, for copyright reasons apparently.)
my failure of comprehension is generally well understood, but perhaps you should learn the basics of reading before spewing your bilge.
From the article:
"Some engineers at Google decided to figure out if there was [sic] a way to further compress lossy images like JPEG to make them load faster,"
"Google has tested the format by re-encoding 1,000,000 existing web images, mostly JPEGs, GIFs, and PNGs, and it saw a 39 per cent reduction in average file size."
@Bilgepipe & James Hughes 1
I suspect he was referring to people using the command-line app mentioned in the article to convert existing JPGs to this new format; in which case the already-lossy pictures will lose even more detail.
i.e. think twice before converting your JPG pr0n collection to this new format, just to save space...
"Reading comprehension Fail. You're not compressing JPEGS - JPEG has nothing to do with this."
Oh really? Are you saying that all your current photos are in WebP and your camera(s)/mobile phone(s) produce photos in the WebP format? If not, then it may not be Pirate Dave who has been afflicted with a lack of comprehension.
There are two things with YouTube. Firstly, YouTube transcodes your input into its own format (another level of quality loss); plus it seems to transcode to a lowish (800kbit?) bitrate, so you can see on-screen that the colours are blotchy and flattened. I know, from having uploaded an 1800kbit XviD from a 2500kbit H.263 source recorded from HQ video using a Neuros OSD.
However, the truly terrible videos are from cheap cameras and mobile phones. I'm not sure there's an excuse as my small Agfa digital can do decent looking MJPEG video at something like 820x560 (shame the sound recording is awful). My Nokia 6230i, on the other hand, is just awful. I can barely tell the difference between its 3GP high quality and the low quality, and given the blocky mess that is the result, I'm not sure the word quality even factors into it.
In short, while YouTube introduces its own problems, most of the cack on YouTube looked like that before YouTube got its hands on the video!
FWIW, I think us older timers (who remember what a clear analogue picture looked like) will always be slightly disappointed. Yeah, it's cool that I can watch the video of my choice in realtime from places like YT and Vimeo. It's cool that I can store nearly 200 hours of video on DVD-Rs standing in a pile as tall as a single L750 tape (3h15m). It's cool that we can now have a billion channels with nothing worth watching on any of them. And it's cool that we can fill a ridiculously large screen with a picture with sufficient resolution that you don't see the individual pixels. The flip side? If you know what an MPEG macrocell artefact looks like, you'll see them EVERYWHERE. Bluray/HD suffers horribly from this, from the demos in supermarkets.
well, my point about youtube wasn't so much about the actual quality of the clips as it was that such a level of quality is apparently considered "acceptable" to a large part of the Internet. My kids watch videos on youtube that are barely discernable. It's like 20 years ago when we had those crappy Autodesk Animator flicks. Then we got Quicktime and MPEG videos, and things were much better. Now Youtube and Flash are lowering the standards again.
I think the level of "acceptable" is a trade-off between available bandwidth and how much you want to see the content. Don't forget that services such as YouTube are designed for in-situ viewing (which is why add-on software is required for downloading from such services). Because of this, YouTube needs to choose something of acceptable quality which isn't going to saturate your connection. I'm on a 2 mbit link and most things non-HD come through in real-time (my little netbook can do HD video, just not *H.264* in HD, too intensive).
But is this new? Remember in the good old days it was "acceptable" to use a VHS-C video camera to record a ciné screen (and many of them were fixed at a sync rate different to cinema projection leading to flickering) and, no, you didn't watch that tape. If you were lucky you saw a copy of the copy of that tape, which was so degraded it was barely discernable. What's new is that back then you needed to be friends with the video shop guy. Now it's open to anybody who is able to use a web browser and type the immortal words "cute kittens". :-)
Perhaps in the future, when we *all* have 100 mbit connections (I won't hold my breath!), we'll see a return to video of a better level of quality; but given that minority satellite channels are choosing lower bandwidth per channel in order to squeeze in more channels, I won't hold my breath for that either...
@heyrick: " If you know what an MPEG macrocell artefact looks like, you'll see them EVERYWHERE. Bluray/HD suffers horribly from this, from the demos in supermarkets."
Err... HD DVDs almost exclusively use Microsoft's VC-1 codec, and occasionally H.264; not a disc in MPEG-2.
Blu-ray is almost exclusively H.264 for movie encoding, it was only the really early discs that used MPEG-2 extensively.
So I'm having a bit of trouble working out why you're seeing MPEG-2 macrocell artefacts, when these formats use variable block sizes ... maybe you're seeing them EVERYWHERE when they don't really exist. ;)
Looking at the sample images, the ones where the new format sees the biggest improvements are those with large areas of solid colour or simple gradients (or close enough).
I guess that isn't too surprising since JPEG essentially treats each 16x16 pixel block independently, so there are easy wins for any format that takes a more high level view.
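For anyone curious what "treats each block independently" means, here's a toy sketch of the 2-D DCT that JPEG applies to each block on its own, with no knowledge of its neighbours (the naive O(N^4) version, purely illustrative; real encoders use fast factorisations, and the core transform runs on 8x8 blocks, with 16x16 arising from subsampled-chroma macroblocks):

```python
import math

def dct_2d_8x8(block):
    """Naive 2-D DCT-II over one 8x8 block. JPEG transforms each block
    independently, which is where blocking artefacts come from."""
    N = 8
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            cu = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
            cv = math.sqrt(1 / N) if v == 0 else math.sqrt(2 / N)
            out[u][v] = cu * cv * s
    return out

# A flat block concentrates all its energy in the DC coefficient,
# which is why flat colour and gentle gradients compress so well.
coeffs = dct_2d_8x8([[100.0] * 8 for _ in range(8)])
```

The easy wins the comment mentions fall out of this: a block that's all one colour reduces to a single coefficient, but the format still can't share anything between neighbouring blocks.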
"Google has tested the format by re-encoding 1,000,000 existing web images, mostly JPEGs, GIFs, and PNGs, and it saw a 39 per cent reduction in average file size."
Well it wouldn't surprise me if applying lossy compression to 24bit PNGs resulted in a reduction in file size. It would surprise me if it didn't.
How about you give us something unambiguous Google?
You should perhaps not assume that a short article such as this one contains all of the information that Google released, or indeed that anything not marked as a quote is actually a quote.
The information in the article seems to be a distillation of the information which Google has actually provided on their website on WebP (including a breakdown of how many PNG/JPEG/etc):
If you're using GIFs or PNGs, you're probably doing it because a) you want some form of transparency, and b) (for PNGs) you actually want *lossless* compression: for example fountain fades, or text that must be crisp, artifact-free and with specific colour rendition.
So using PNGs or GIFs *at* *all* renders the experiment null and void. What are the figures without those types? Oh, not so good, eh? I see...
Agreed. It'll be interesting to see this in independent hands.
The "mostly JPEGs, GIFs, and PNGs" is meaningless: re-encoding a JPEG is daft, GIFs and PNGs are solving a different problem (lossless), and "mostly" just undermines the whole thing.
Show us a JPEG and WebP of an actual RAW photo with filesize comparisons.
I'm not a naysayer, but their use of stats is awful.
Anyone else feel like we're back in the 90s, with the image format wars... fun times.
Solving a problem that didn't exist.
1. everyone is moving to faster connections all the time, so a few (porn) pictures downloading quicker (not that you would notice) is going to make no difference.
2. never have I thought "these graphics are taking an age to download".
3. Bandwidth is becoming bigger for video, so the savings in this new format are negligible overall. The internet is becoming a broadcast medium for video, so who cares that a webp image is going to load a fraction of a second faster?
Although I agree with you in that this is a solution to a non-existent problem for the majority of the world, it is a problem in Google's realm.
Perhaps they have realised that hoarding everything from everyone forever may not be as practical as originally thought, and that their data centers do have storage capacity limits.
You're right about video, but the other noticeable trend is the explosion in mobile browsing.
If you've browsed the net over GPRS, or even 3G, the thought "these graphics are taking an age to download" will be at the forefront of your mind. It looks like the mobile operators are unwilling or unable to increase bandwidth rapidly to meet demand, so smaller graphics could be a win in the short/medium term.
Mobile phone providers already do compress images on their connections.
Also, you can crank up jpeg compression to anything you want. Jpeg isn't a straight compression, there is a scale, and if you crank it to 1% you will end up with 1 or 2 very large pixels on the screen.
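On the "scale" point: in the common IJG/libjpeg implementation, the quality number isn't a compression ratio at all; it just scales the quantization tables, roughly as in this sketch (the base table is the example luminance table from the JPEG spec; clamping details beyond 1..255 omitted):

```python
# Example luminance quantization table from the JPEG spec (Annex K).
BASE_LUMA_QTABLE = [
    16, 11, 10, 16, 24, 40, 51, 61,
    12, 12, 14, 19, 26, 58, 60, 55,
    14, 13, 16, 24, 40, 57, 69, 56,
    14, 17, 22, 29, 51, 87, 80, 62,
    18, 22, 37, 56, 68, 109, 103, 77,
    24, 35, 55, 64, 81, 104, 113, 92,
    49, 64, 78, 87, 103, 121, 120, 101,
    72, 92, 95, 98, 112, 100, 103, 99,
]

def scale_qtable(base, quality):
    """IJG-style quality scaling: quality 1-100 becomes a percentage
    multiplier for the base table. Bigger quantization steps round more
    DCT coefficients to zero, hence smaller files and more artefacts."""
    quality = max(1, min(100, quality))
    scale = 5000 // quality if quality < 50 else 200 - quality * 2
    return [max(1, min(255, (q * scale + 50) // 100)) for q in base]
```

So quality 50 uses the spec table as-is, quality 100 quantizes everything by 1 (nearly lossless apart from subsampling and rounding), and cranking quality right down multiplies every step size until blocks collapse into flat squares.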
The poster above posed an interesting theory that this is about saving Google money in storage costs, so it's not exactly altruistic of Google to release this, when the prime beneficiary would be themselves.
As a photographer, the thought of being able to use a newer format that offers a reduction in file size when delivered to the customer seems great... but... it uses the YUV "colour space" and not a derivative of sRGB (or Adobe RGB). This means two things: firstly, conversion from a native format into a format not well suited for still photography; and secondly, storing in a format that favours brightness over colour, meaning many photos will look poorly coloured, especially at edges where there is a strong contrast between colours.
So for the casual punter, this will probably be ok, but for the discerning eye, I'll stick to JPG thanks!
"So for the casual punter, this will probably be ok, but for the discerning eye, I'll stick to JPG thanks!"
as a photographer with a discerning eye, i think i will stick to RAW and PSD files.
I only ever use jpegs if a customer requires it for a website, even then I will suggest a PNG may be better.....
Mine's the one with the pocket full of SDHC cards
@DavyBoy79: "So for the casual punter, this will probably be ok, but for the discerning eye, I'll stick to JPG thanks!"
So, you object to Google's new format because it uses the YUV colour space, and will use JPEG instead.
You know that the first step of JPEG compression is to convert RGB to YUV colour space, right? It's more efficient to compress the chroma (hue) and luminance (brightness) separately, as the chroma can be compressed much more than the luminance.
That's not a fail. In fact I'd say the fail is in your comprehension.
This is photographic compression we're talking about, not bitmap graphics. Of course "the rest of us" (whatever that means) will continue to use PNGs and GIFs... for the purposes PNG and GIF were designed for (i.e. not photos).
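To make the YUV point concrete, here's a sketch of the JFIF RGB-to-YCbCr conversion that JPEG performs before anything else, plus the 4:2:0 chroma subsampling that throws away three quarters of the colour samples while leaving luma untouched:

```python
def rgb_to_ycbcr(r, g, b):
    """JFIF full-range RGB -> YCbCr. Luma (Y) carries the detail our
    eyes are sensitive to; the chroma planes (Cb, Cr) can be subsampled
    aggressively with little visible loss."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def subsample_420(plane):
    """4:2:0 subsampling: average each 2x2 block of a chroma plane into
    one sample, quartering the number of chroma values stored."""
    h, w = len(plane), len(plane[0])
    return [[(plane[y][x] + plane[y][x + 1]
              + plane[y + 1][x] + plane[y + 1][x + 1]) / 4
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]
```

Greys land exactly on (Y, 128, 128), which is why the chroma planes of a typical photo are so smooth and so compressible.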
... so that people will stop believing its every action is "evil?" Reminds me of xkcd's "Password Reuse" strip.
So, it's done a bad thing by open-sourcing this format? Here's something it COULD have done to generate revenue: held the format closed source, or ensured an encoding license requirement or so, then used its ubiquity to allow the format to display in browsers, then ensured that only its ads are in the fast-loading format. Anyone else trying to use the "fast loading" format would have to pay Google a license to do so. What would you write in this case?
Spewing your vitriol at every Google development is neither fair nor reporting. Yes, it's trying to leverage its revenues; at least it is open about it. You don't like it, use Bing. Do you think they do not collect your data? Or hey, here's a thought: go Facebook. I'm SURE they are better than Google.
Well I could make a list. Just for a start
- Pay creators a reasonable share of the advertising revenue
- Introduce a take down system on Youtube and the like that doesn't put the onus for everything on the people they are ripping off
- stop trying to rip off creators by attempting to get as much as possible labelled as orphan works
- provide a proper, non-onerous opt-out for Street View, including a democratic vote for communities not to be involved at all
- stop coming up with BS about collecting data "by accident" and then not deleting it
I'm sure plenty more could be added... go for it folks.
> What exactly does Google have to do so that people will stop believing its every action is "evil?"
Stop telling people that they aren't evil for a start, stop trying to compete with everyone else and concentrate on their core stuff, stop pretending that their brand is bigger than it is, stop doing a book deal to prevent others from competing in the same market, stop copying everyone - and for heaven's sake will they stop trying to beat Facebook - as I (and the rest of the planet) aren't about to sign up to a new social network just to make Google richer - deal with it.
In fact, if they just stopped becoming the 21st century Microsoft, that would be a real bonus.
... at least not quickly.
Microsoft still has the lion's share of the browser market and, given their track record, they won't adopt this anytime soon, if at all.
JPEG as a lossy format is good enough for the job. Many years back, we thought we'd see the end of GIF images when PNG started to become prevalent, but they are still around on the web. It also took Microsoft until IE7 to support alpha transparency in PNG files!
I'd like to be wrong, but experience tells me I'm not.
When do we want it?
"In due course!"
PNGs are now replacing GIFs around the web (look at the comment icons on this site), but they will never replace the legacy GIFs. This isn't a problem, as you didn't have to pay royalties to read GIFs, only to write them. Actually, hasn't the patent already expired?
Same is likely to be true of a new lossy bitmap format. However, with CSS3 media queries that shouldn't be too much of a problem to implement. And if savings of 30% per photo are possible then uptake is likely to be pretty good. Yes, I know this means storing two versions of the same image, but disk space is less of a problem than bandwidth.
Yes it will.
Opera are on board.
Opera provide the browsers for the greatest number of mobile devices (Bing for the numbers, I can't be bothered to), the Wii, and a bunch of other web capable consumer devices.
It's particularly relevant to mobile and users with low bandwidth connections, say in the 'developing world', where (IIRC) a majority of internet connections are mobile (don't think smartphone, though, think Nokia candybar style phones).
I did wonder about Google's choice of format to improve. Had I guessed based on my own browsing, I would have picked PNG as the format to work on. As other commentators have said, new compression techniques have been found in the intervening years, and PNG didn't quite manage to replace GIF due to its lack of animation.
If it's true that Google Maps uses JPEGs, that obviously explains a lot. And I suppose sites like flickr and photobucket impose an increasing cost to cache these days. Anyone have more information on this?
PNG is overtaking GIF for lossless still pictures because it's not patent-encumbered and provides better compression (Deflate vs. LZW). Although newer lossless compressions have emerged (such as LZMA), most require more resources to implement (LZMA, for example, is memory-intensive) and are not recommended because some web browsers have resource limits.
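A purely illustrative comparison of the two schemes: a minimal GIF-style LZW next to zlib's Deflate on a flat-colour scanline. (Real GIF packs the codes into a variable-width bitstream, so code counts and byte counts aren't directly comparable; this just shows both exploit repetition, with Deflate generally doing it better.)

```python
import zlib

def lzw_compress(data: bytes) -> list:
    """Minimal GIF-style LZW: returns a list of dictionary codes.
    The dictionary grows as repeated sequences are seen, so long runs
    collapse into progressively longer phrases."""
    table = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = next_code
            next_code += 1
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

# A flat-colour scanline: the kind of data both formats handle well.
row = bytes([200]) * 1024
lzw_codes = lzw_compress(row)   # run lengths grow 1, 2, 3, ...
deflated = zlib.compress(row, 9)
```

Both shrink the 1024-byte run to a few dozen symbols; the LZW/Deflate gap widens on more realistic data, which (together with the patent history) is the commenter's point.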
As for animated PNGs, they're still arguing over the matter. It's currently between MNG and APNG, and it may take a while longer for the dust to settle. Plus there's the possibility of the whole thing being moot thanks to things like Flash and web video.
I work with image codecs an awful lot, and this is a really good thing IMHO - a patent-free alternative to JPEG2000.
JPEG also uses YUV under the hood - most (all) lossy compressors do, as it makes sense to allocate more bits to luma than chroma. It's web-focused, lossy and intended as the final step in the chain, so converting from sRGB to YUV is no biggie.
You're not going to see transparency in this sort of codec, because it's for photos and continuous tone images. Using JPEG for anything else is wrong, just wrong - use 8-bit PNG.
The glaring omission for me is that some useful chunks are missing. Embedding an ICC profile would have been nice, an option for grayscale or CMYK would have been great, and chunks for resolution and RDF/XML metadata would have been a no-brainer. Given how metadata-oriented Google is, these are pretty surprising omissions, especially as JPEG can already do some of this.
As for solving a problem that doesn't exist, duh. Try asking an ISP what a 10% reduction in bandwidth use would save them. If they'd added alternative colour spaces I'd also suggest asking a professional photographer how many GB of photos they have scattered around, but I think they may have missed that trick.
"Some engineers at Google decided to figure out if there was [sic] a way to further compress lossy images like JPEG to make them load faster".
I'm struggling to understand the relevance of "[sic]" in this sentence. It's normally used in a smart-arse way to flag up errors of spelling or grammar.
There's a split infinitive, but that comes after the "[sic]", and a split infinitive is generally regarded as a minor error of style at worst.
I suppose the "was" that precedes the "[sic]" should be subjunctive rather than indicative. You'd have to be a dedicated grammar Nazi to sic up on that, though.
... I'd point out that the use of [sic] simply means that the source material is quoted exactly and not paraphrased by the author. It is not usually taken to indicate a judgement by the author on the quality of the original prose: it's just intended to stop people piping up "You mean WERE" or "You split your infinitive".
Of course, I agree with you that it is acceptable to occasionally split infinitives and, if I was asked I'd also be relaxed about people not using subjunctive forms. But I do find it ironic that in an attempt to avoid comments about the finer points of style in a quoted section, the author has attracted a comment about the way they've quoted it!
...that I wasn't the only person whose first thought was to rush to the comments section to express confusion over this.
I would argue that [sic] isn't really appropriate for grammatical issues anyway, given that it of course stands for "spelling is correct". I'd argue that in the case of direct quotations surrounded by quotation marks, [sic] is only relevant to show that you've not introduced a typo yourself.
I'll be honest, if I'm quoting something with spelling errors or grammatical atrocities in it and the errors are themselves not relevant to the context of the conversation/piece, I usually just fix the errors, rather than behave like an arse.
Perhaps the original quotation did have a spelling error and it got removed when the author's spell checker was run against the piece :p
Are the bane of my life, spotting them in images where JPEG should never have been used, text heavy banners for example.
Then again, this isn't solving that. PNG already did that, but somehow people fail to understand that different image compression techniques are applicable to different types of images.
JPEG specifies the encoded data stream and how to decode it. Exactly what psychovisual model you use to reduce the data is up to you. The very best modern JPEG encoding today (Adobe Photoshop's "Save for Web" is pretty good) gets a far better ratio of real quality to byte size than the coding of 10 years ago.
Even if the new algorithm is better, that's not to say that a lot of the images on the web couldn't be made smaller and/or better by re-coding from the uncompressed source using a newer JPEG encoding algorithm.
In my experience of 3G, it's the compression steps applied by the mobile operator which SLOW DOWN the loading. That and the 100ms+ latency.
Hilarious. The whole web is just hanging off Opera's every decision so they can chase the decades-ahead-but-not-used-by-anybody-for-some-strange-reason "titan" of browsers.
It's not just MS, FF and Safari need to come onboard too.
Are Google getting support into their own code, or the shared base used by other browsers?
Safari uses WebKit, so they'll be among the first to implement WebP.
And as the article states, Mozilla is already collaborating on the project, so they're interested, too. Once the standard's nailed down, expect an update to include support for it (perhaps even Mozilla's helping to push the standard in time for the 4.0 release).
Comparing the size of two lossy compressed images is totally meaningless if you do not ensure that both have the same visual quality, or at least quote something like an SNR figure... hell, even compressing one using the other as source is totally meaningless.
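For anyone wanting to check such claims themselves, the usual crude objective measure is PSNR, which takes seconds to compute. A minimal sketch over flat pixel sequences (a stand-in for decoded image planes):

```python
import math

def psnr(original, compressed, max_val=255):
    """Peak signal-to-noise ratio between two same-sized pixel
    sequences, in dB. Higher is better; identical inputs give infinity.
    A crude proxy for visual quality, but far better than comparing
    file sizes alone."""
    if len(original) != len(compressed):
        raise ValueError("images must be the same size")
    mse = sum((a - b) ** 2 for a, b in zip(original, compressed)) / len(original)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_val ** 2 / mse)
```

A fair size comparison would hold PSNR (or a perceptual metric) constant across both codecs and then compare bytes, which is exactly what a bare "39 per cent smaller" figure doesn't do.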
Anyway, the ratio of 40% doesn't sound too far off. If you took out-of-the-box h.264 and applied all the tools applicable to still images, you'd probably end up with a similar figure compared to JPEG, at the same "computed visual quality". No surprise there... it's almost 20 years since JPEG was born (well, also almost 10 since h.264, but that's still a 10-year difference...).
Looking at JPEG2000, it also claims around that margin over JPEG. Frankly, I think it performs worse than JPEG at high quality. And while details are somehow better preserved at qualities you really don't want your eyes to suffer through, it looks like shit there. IOW, it is technically a worthy advance and may have some merit at the lowest quality levels, but no thanks.
Now, this WebP... it's derived from VP8. Aren't On2's video codecs based on... wait... wavelets, like, um, JPEG 2000? Regardless of whether that is actually true or just a misunderstanding of mine, On2 videos damn well look like they have exactly the same crappy degradation behaviour as wavelet-compressed images do. Maybe it is technically not exactly wavelet, but its failure mode looks a lot more like typical wavelet than typical block transform. It just doesn't look pleasant.
I hope "40%" just isn't enough. While h.264 got away with "50% off MPEG-2", basically because the industry needed some new video format to sell "HD", I hope Google doesn't yet have the power to force this on everyone. I mean, I perfectly recognize JPEG is old and by now the world and its dog could do "better" compression-wise, but just because Google says "40%" I don't think it's worth it. After all, JPEG 2000 faceplanted with wavelets; if it wasn't for YouTube/Flash crap-league videos, nobody would even know what a wavelet-encoded video looks like; h.264 isn't wavelet even though the tech existed at the time, and the next MPEG codec doesn't smell like it's based on wavelets either. To me, that hints at "wavelet compression isn't ready for humans to look at".
Sorry for the tirade. I like the wavelet concept technically (eg for computer image analysis like feature recognition), but my eyes don't like waveletted images at all.
Unless your camera is only taking pictures in 256 colors, you're losing a great deal of data in a GIF. It is primarily useful for commercial graphics and illustrations. Just because it encodes every pixel does not mean it's keeping all the data. PNG encoding itself is lossless, but you can still lose data, even though every pixel is kept, by optionally downcoding to a small number of color bits per pixel before saving.
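A quick sketch of the downcoding point: pack 24-bit RGB into a single 8-bit 3-3-2 value and back. Every pixel survives the round trip, but nearby colours collapse together, so data is lost even though the container format is "lossless":

```python
def quantize_332(r, g, b):
    """Downcode 24-bit RGB to one byte (3 bits red, 3 green, 2 blue)
    and expand it back. The reconstruction keeps every pixel but has
    thrown away the fine colour detail."""
    packed = (r >> 5) << 5 | (g >> 5) << 2 | (b >> 6)
    # Reconstruct by shifting the truncated channels back up.
    r2 = (packed >> 5) << 5
    g2 = ((packed >> 2) & 0b111) << 5
    b2 = (packed & 0b11) << 6
    return r2, g2, b2
```

Two visibly different greys can map to the identical byte, which is the data loss hiding behind "it keeps every pixel".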
So Microsoft comes up with an image format based on adding overlapped blocks to JPEG to get rid of blocks and shrink images at the same quality, and they are roundly castigated for attempting to embrace and extend to a proprietary format - even after releasing the spec, the reference encoder, and a royalty-free license. Google is praised for doing essentially the same thing, but with a much more complex encoder that will never be fit for most hardware users. Various image codecs have been built off of h.264 as well, but none have taken off, and all have so far been pale imitations of JPEG2000 at low data rates, and barely equivalent at higher. What makes Google think that its own will get any more traction outside of Google Images, when most sites will just use the bog-standard good-enough that everyone can read, instead of storing two or three copies just for a few "advanced" readers?
At least one h.264 image codec has the advantage that you can use it with flash, which world+dog already has.
The in-progress h.265 codec is the only one that can surpass it so far - at the cost of 15-minute encoding times for each image.
They made WDP royalty-free, yes, but they never allowed others to tinker with it. This is the difference-maker with Google's effort. They're going all the way and OPEN-SOURCING the entire works (and if they use the same license as WebM, it'll be based off the well-recognized BSD license). This means it's open season and anyone with good ideas can improve on it.