...what source though?
Unless I'm missing something, until recording sources are used that utilise that colour space, it won't look any different, will it?
Sharp has developed a full HD LCD panel that mixes the hue of each pixel from a palette of five colours rather than the usual three. The result, the company claimed, is the ability to render faithfully the colour space of the unaided human eye. Sharp's prototype display measures 60.5in (1.5m) in the diagonal and has a 1920 x …
Unless it's old film that's not been re-digitised, most new digitally shot images aren't going to have that information, are they?
Plus, if they gave it an LED backlight then the green spin is pointless; if they don't, I for one wouldn't be buying it, as that's the only reason to update my current LCD.
Shouldn't we all be moving to OLED screens shortly anyway - soon as they pull their finger out.
Surely the recording devices aren't missing out the colours that the LEDs can't display properly. Are you thinking that people film in just 3 colours or something? All the information we've been filming should display excellently on a TV that can show what's recorded; it will look 6 times better than Blu-ray.
Anyone that has seen the output of an 8-ink photoprinter compared to the standard 3 ink business inkjets can understand what the difference should be like. This is the same concept, although still with only 5 colours instead of 8. But even that should be a huge step up in colour rendition and colour depth.
If you do ANY photo or video editing on a monitor you will want one of these. And maybe a Sigma Foveon-sensor DSLR to shoot the source material with...
The logistics of implementing the new colour palette would require a ground-up redesign of all our video standards.
The 3 channels of RGB are currently hard coded into both our software and hardware, so adding capability for these 2 extra channels would be a truly monolithic task.
The only other alternative would be that the monitor takes a "guess" at the missing colours. Not ideal.
There is a glimmer of hope though in a niche market for this standard to hopefully take root.
Anyone who has studied the fundamental basics of graphic design (like I have, with emphasis on the basics) will know about the colour space chart, and the difference between the additive and subtractive (RGB vs CMY).
http://dx.sheridan.com/advisor/cmyk_color.html
This means that designing for print can be a real problem, since some of the colours you produce on screen will not be there in print and vice versa.
There's an option in Photoshop to highlight the colours in your RGB spectrum that won't come out in print, but the other way round isn't really possible.
This display could be very useful. Since Photoshop already has a good understanding of CMYK, modification of the code would be a relatively small step.
Professional graphic designers don't usually bat an eyelid at forking out huge sums of money for highly accurate display hardware, so if a specialist graphics card promised totally accurate colour reproduction...
Naturally, to begin with, the monitor and card would be a bundled affair, but by this stage the R&D has already been done, and tech like this has a habit of filtering down into consumer-level hardware over time.
The only sticking point from here is video connection standards. Shame this didn't come out BEFORE HDMI was standardised. Then again, since it is a digital standard...
I'm afraid I don't know enough about DVI or HDMI communication standards to hazard a guess on whether it could be adapted easily to support the two new channels.
People really don't understand colour spaces, do they?
CIE colour space contains all visible colours. RGB monitors allow you to represent a triangular subset of this space. RYGCB allows a pentagonal subspace and more colours. I've seen RYGCBM monitors and they have gorgeous purples; not sure why Sharp stop at 5 instead of 6.
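If you want to see what that triangular subset means in practice, here's a rough Python sketch. The R, G and B points are the real sRGB chromaticities; the Y and C points are placeholders I've made up, not Sharp's actual primaries. It just checks whether a given CIE xy chromaticity lands inside the three-primary triangle versus the five-primary polygon:

# Rough sketch: 3-primary vs hypothetical 5-primary gamut in CIE 1931 xy space.
# R, G, B are the sRGB primaries; Y and C are made-up placeholders.
def inside_convex_polygon(point, vertices):
    """True if point lies inside the convex polygon whose vertices are listed
    counter-clockwise (cross-product sign test on every edge)."""
    px, py = point
    for i in range(len(vertices)):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % len(vertices)]
        if (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) < 0:
            return False
    return True

R, G, B = (0.64, 0.33), (0.30, 0.60), (0.15, 0.06)  # sRGB primaries
Y, C = (0.45, 0.53), (0.17, 0.33)                   # placeholder yellow and cyan

rgb_gamut = [R, G, B]              # counter-clockwise
rygcb_gamut = [R, Y, G, C, B]      # counter-clockwise

deep_yellow = (0.46, 0.49)         # a saturated yellow chromaticity
print(inside_convex_polygon(deep_yellow, rgb_gamut))    # False
print(inside_convex_polygon(deep_yellow, rygcb_gamut))  # True

Nothing clever going on: a convex polygon with more, better-placed corners simply encloses more of the CIE horseshoe, which is all the extra primaries buy you.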
These colours do not exist in any current encoding, and no processing will recover them. We need new content recorded live from 5-channel cameras or generated by computer from 5-channel textures. In other words, everything from scratch.
Old film is also based on trinary colourspace and won't contain extra information for this.
And ...
8-colour inkjets are better than 3-colour because ink is crap compared to phosphor. 8-colour inkjets still use standard 3-channel source material (your JPGs from your 3-channel camera). They need the extra inks to get good reproduction of just that gamut. The new TV will need a 12-colour printer.
Finally, stripy skies are not caused by colour depth; 8 bits is fine. Compression introduces these artifacts as MPEG-2 cannot handle smooth gradients. AVC is better.
So, after all the efforts of the film crews and the post-production teams, who spend man-years and millions of dollars scrambling all the colours with all these artistic effects, this screen is going to unscramble it all and show us the picture as it should have looked in the beginning, right?
"are you thinking that people film in just 3 colours or something?"
Yes, most of us are thinking that. Probably because it is true: that is why RGB connections have three conductors, one each for Red, Green and Blue.
Most printing is done in CMY or CMYK, where again there are only three colours (plus black).
This is even true for 'old' celluloid film - there are several layers, each containing a different dye. Not sure which, or how many, but still a discrete number, not the infinite rainbow that some people like to pretend.
All that R&D and still the dopes who actually buy them will stick the colour and brightness up so high as to make Dracula look like Dale Winton.
Has anyone ever seen a TV set up correctly, other than their own? No, exactly.
Add to that the degradation caused by digital video sources and the tweaking that goes on every step of the way between the film editors, broadcasters and whatever else, and what's the point in having a laboratory-calibrated screen? It's like sterilising a glass before filling it with an Albanian's piss and swigging from it.
The answer is simple:
Normally we see green, blue and red, but to see the opposites (magenta, yellow, cyan) we must assume the absence of one of the primaries, which is why we need the white backlight. So all three-colour TVs are "fake" and trick our eyes, because you can only add colour - imagine mixing paint: you'll never make true black or true white. You don't need to change the source, you need to interpret it differently, and instead of "absence" logic you show an inverted colour.
Although, what about the tetrachromats (the people who see in four colours)? When will we have TVs which display 4 "real" colours (the extra one is "sort of orange") for them?
I've seen a 6 color projection system-- which as this LCD system promises, looks WAY better than standard RGB fare. It was able to effectively reproduce scenes illuminated by ultra-violet light, among other things. Sure, it'll require new cameras, and current transmission technologies won't do it, nor will film technologies, but once you see how great it looks you'll want all of those things to get with the program!
The human eye uses three types of cone to determine colors; this works, because the cones have overlapping color sensitivities. So a camera with three types of color sensor can obtain more color information than a three-color display can reproduce.
While it is true that backlighting requirements are reduced by a display with a better color space, that might be counterbalanced by the fact that now each color covers only 20% of the area of the display instead of 33%, especially if the display is to be able to show highly saturated colors.
There is a huge misconception about the manner in which human vision sees colour in most of the above posts. Popular idea has it that the eye sees three bands of colour - red, blue and green - and people imagine that the eye has three channels which respond to these colours. Problem is that the eye doesn't. It has a very large overlap in spectral sensitivities between the red, blue and green sensors. (It must, otherwise it would see a spectrum as only three discrete colours.) The overlap between the red and green is huge, and is really only a slight shift in the centre of sensitivity. (The brain also doesn't even get fed RGB, but luminance and a pair of difference channels based upon the RGB values.)
So, the popular idea of seeing colour off a monitor is that the eye's blue sensors see the light from the blue pixels, green the green, and red the red. Your camera sees RGB, these values eventually make it to your eye as RGB, and thence the brain reassembles the colour. But it doesn't work that way. Light from the green pixels is also picked up by both the red and blue sensors in the eye. If you want to make yellow, you send out red and green light from the screen. But you can't stop the eye's blue sensors from picking up some of that green light, and so the yellow is diluted. If there were real yellow light (i.e. spectrally pure) it would be of a longer wavelength than the green light you are trying to synthesize the perception of yellow with, and would not be picked up by the blue sensors as much, and thus the yellow would remain vibrant. This is the key point.
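If you want to see that dilution in numbers, here's a toy Python sketch. The Gaussian curves below are crude stand-ins for the real cone fundamentals (the peak wavelengths and widths are my own round numbers, not CIE data), but the effect survives: a red+green mix matched to give the same L and M cone responses as spectrally pure 580 nm yellow still tickles the S ("blue") cone far more than the pure yellow does.

# Toy demonstration of why a red+green mix that "looks yellow" still leaks
# into the blue (S) cone. The Gaussian sensitivities are rough stand-ins for
# the real cone fundamentals; peaks and widths are round numbers, not CIE data.
import numpy as np

def cone(peak_nm, width_nm):
    # Returns a Gaussian spectral-sensitivity function.
    return lambda wl: np.exp(-((wl - peak_nm) ** 2) / (2.0 * width_nm ** 2))

S, M, L = cone(445, 30), cone(545, 45), cone(565, 50)  # "blue", "green", "red"

pure_yellow = 580.0          # spectrally pure yellow, nm
green, red = 530.0, 620.0    # typical display primaries, nm

# Weights for the green+red mix that reproduce the SAME L and M cone
# responses as the pure yellow (a 2x2 linear solve).
A = np.array([[M(green), M(red)],
              [L(green), L(red)]])
b = np.array([M(pure_yellow), L(pure_yellow)])
w_green, w_red = np.linalg.solve(A, b)

print("S-cone response, pure 580 nm yellow:", S(pure_yellow))
print("S-cone response, matched R+G mix  :", w_green * S(green) + w_red * S(red))
# With these toy curves the mix excites the S cone orders of magnitude more;
# that is the "dilution" of the yellow described above.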
You don't need new camera sensors. Camera sensors are designed with a much more controlled set of spectral sensitivities, and they don't have anything like the overlap that the eye's sensors do. If the colour is representable in the CIE colour space (and its CIE coordinates can be computed from the RGB values), the five-colour display can work out what the right mix of the 5 colours is, and the gamut is extended.
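And for anyone wondering what "work out the right mix of the 5 colours" could look like in code, here's a hand-wavy sketch. The XYZ tristimulus values for the five primaries are invented placeholders (not Sharp's numbers); given a target XYZ inside the gamut, a non-negative least-squares solve finds drive levels for the five sub-pixels.

# Sketch of solving for a 5-primary mix that matches a target XYZ colour.
# The primaries' XYZ columns are invented placeholders, not Sharp's values.
import numpy as np
from scipy.optimize import nnls

# Columns: XYZ contribution of each primary at full drive (R, Y, G, C, B).
PRIMARIES = np.array([
    [0.49, 0.45, 0.25, 0.20, 0.18],   # X
    [0.27, 0.48, 0.62, 0.35, 0.07],   # Y
    [0.02, 0.03, 0.05, 0.45, 0.95],   # Z
])

# Pick a target that is definitely inside the gamut (a known mix), then
# pretend we only know its XYZ and recover non-negative drive levels.
target_xyz = PRIMARIES @ np.array([0.20, 0.30, 0.10, 0.25, 0.15])

drive, residual = nnls(PRIMARIES, target_xyz)
print("drive levels (R, Y, G, C, B):", np.round(drive, 3))
print("residual:", residual)   # ~0 means the target is matched exactly

With five unknowns and only three equations there are usually many valid mixes, so a real panel would presumably pick one according to power, brightness or panel-lifetime criteria rather than leaving it to a least-squares routine.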
The key to the whole thing is to realise: that the colour response of cameras and film is much better controlled than that of the eye; and that in order to get the information to the brain, you have to dodge the problem of the eye's colour-sensing mechanism. There isn't a simple one-to-one mapping from RGB in the camera to RGB in the eye. This display is a step towards managing that, and it does not need new camera technology.
It also doesn't require new 5-channel image formats. Oversimplifying (neglecting various weighting factors and gamma curves and the like), Y is min(R,G), C is min(G,B). If Y+C > G then both are reduced so that they sum to no more than G. Then R'=R-Y, G'=G-Y-C, and B'=B-C, and you display the colors as R',Y,G',C,B.
The end result is that you get a yellow which doesn't also partially trigger blue receptors (which real-world yellows don't, as they are NOT red+green), and a cyan which doesn't also partially trigger red (same reason).
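For the curious, here's a direct Python transcription of that recipe, with the same caveats (weighting factors and gamma ignored); the step that stops G' going negative is my own guess at what was intended, since the original wording is ambiguous.

# Naive RGB -> R', Y, G', C, B decomposition along the lines described above.
# Gamma and channel weighting are ignored; inputs are assumed to be in 0..1.
def rgb_to_rygcb(r, g, b):
    y = min(r, g)              # yellow carries what red and green share
    c = min(g, b)              # cyan carries what green and blue share
    if y + c > g:
        # Both borrow from green; scale them back so green can't go negative.
        scale = g / (y + c)
        y, c = y * scale, c * scale
    return (r - y, y, g - y - c, c, b - c)   # R', Y, G', C, B

print(rgb_to_rygcb(1.0, 1.0, 0.0))   # "fake" yellow -> (0, 1, 0, 0, 0): the pure yellow sub-pixel
print(rgb_to_rygcb(0.0, 1.0, 1.0))   # "fake" cyan   -> (0, 0, 0, 1, 0)
print(rgb_to_rygcb(0.6, 0.8, 0.3))   # here y + c would exceed g, so the scaling step kicks in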
@Francis Vaughan; excellent explanation. Correct me if I've misunderstood, but it seems one implied and logical - if initially counterintuitive - conclusion boils down to this:
On a purely theoretical level (even with a "perfect" setup), a set of R, G and B sensors using the same principles as human vision can sense - and possibly record - more colours than can subsequently be reproduced using just R, G and B lights.
Or put another way, RGB sensors can record certain colours (or sets of stimuli) that cannot be accurately reproduced using R, G and B lights alone.
(An explanation using your example of pure yellow, which stimulates the eye's cells - or an electronic sensor - with 50% red, 50% green and 0% blue. We can accurately *record* this response electronically, but not exactly *reproduce* it - for human viewing - for the reason you gave... the green component stimulates the eye's blue sensor and dilutes the saturation in a way that the original yellow didn't. Of course, if we had added a pure yellow light, we could reproduce it exactly... which brings us back to where we started. :-) ).
"The 3 channels of RGB are currently hard coded into both our software and hardware, so adding capability for these 2 extra channels would be a truly monolithic task."
Yes and no. There is an awful lot of software that uses RGB but software /intended for video processing/ almost certainly is already designed with the pixel format as one of its points of variability. Within the video world, there is Linear RGB and almost as many variations on YUV as there are permutations of those three letters. Then there are the colour spaces used in printing which aren't even three dimensional. So software developers who actually care about colour will have no trouble supporting this new format. It's the least of their problems!
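To make "pixel format as a point of variability" concrete, here's a minimal Python sketch. The format names and the RYGCB converter are hypothetical (the latter just reuses the min() trick discussed earlier in the thread); the point is that the pipeline doesn't care how many channels a format has, it just looks up a converter.

# Minimal sketch of a pipeline that treats the pixel format as pluggable.
# Format names and converters are hypothetical illustrations only.
def rgb_to_yuv(px):
    r, g, b = px
    y = 0.299 * r + 0.587 * g + 0.114 * b   # BT.601 luma weights
    return (y, b - y, r - y)                 # Y, U, V (unscaled)

def rgb_to_rygcb(px):
    r, g, b = px
    yel, cyn = min(r, g), min(g, b)
    if yel + cyn > g:                        # keep the derived green non-negative
        scale = g / (yel + cyn)
        yel, cyn = yel * scale, cyn * scale
    return (r - yel, yel, g - yel - cyn, cyn, b - cyn)

CONVERTERS = {                               # output format -> converter from RGB
    "YUV": rgb_to_yuv,
    "RYGCB": rgb_to_rygcb,
}

def convert_frame(frame, out_format):
    # Convert a list of RGB pixels to the requested output format.
    convert = CONVERTERS[out_format]         # supporting a new format = one new entry
    return [convert(px) for px in frame]

frame = [(1.0, 1.0, 0.0), (0.2, 0.5, 0.9)]
print(convert_frame(frame, "YUV"))
print(convert_frame(frame, "RYGCB"))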
Not everyone's eyes have the same sensitivity. Some single figure percentage of the population (female? I can't remember) actually have four different receptors. And then there is colour blindness in its many forms, again affecting a similarly sized fraction of the population (almost all male this time). And then there is just natural variation in all of us. The result is that people disagree about when colours are indistinguishable.
Give me a device where I can specify the exact emission spectrum (within the visible band) using perhaps a dozen control points, and I can probably give you a telly that everyone is happy with. I believe the boffins can do some very smart things with tunable LEDs these days so I expect that day will come. But not yet.
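To picture that "dozen control points" idea, here's a tiny sketch of the arithmetic (the hardware is entirely hypothetical, of course): you specify the desired emission level at twelve wavelengths and the rest of the visible band is filled in by interpolation, which is the sort of thing an array of tunable emitters could in principle be driven from.

# Hypothetical "spectrum from a dozen control points" interface.
# No such telly exists; this is just the interpolation arithmetic.
import numpy as np

control_wavelengths = np.linspace(380, 700, 12)             # nm, twelve control points
control_levels = np.array([0.0, 0.1, 0.3, 0.8, 1.0, 0.7,
                           0.4, 0.6, 0.9, 0.5, 0.2, 0.0])   # desired relative power

wavelengths = np.arange(380, 701, 1)                        # 1 nm steps across the visible band
spectrum = np.interp(wavelengths, control_wavelengths, control_levels)

print(len(control_levels), "control points ->", len(spectrum), "spectrum samples")
print("relative power at 550 nm:", round(float(np.interp(550, wavelengths, spectrum)), 3))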
To start with, the human eye does not actually see Red, Green and Blue.
The vast majority of humans see Blue, Yellow/Green and Yellow/Red - the latter two are extremely close to each other and very wide-band (low purity) receptors, while the blue is very narrow (high purity).
- This is partly why Blue LEDs look so bright at low power levels.
Some women have tetrachromacy, and have a fourth colour receptor as well.
At low light levels, there is also a 'brightness' receptor that has yet another curve - this is usually completely swamped, but becomes important at low brightnesses.
Sharp have done a lot of work in converting YUV and sRGB into their new five-colour system, and it will be very interesting to see the real result.
As initially stated, if you feed the display with RGB data it won't change much. The RGB colourspace is less than complete, but with an RGB feed it's left to the screen to convert the colours. And who is gonna tell it that one particular green+red spot was supposed to be "pure" yellow, whereas that other was really supposed to be green+red? That's what I thought.
So unless you have a 5-colour process from start to finish, you're just replacing gaps in the gamut with errors in the resampling. I don't see it being a plus: instead of a consistent bias to which the brain can adapt, you introduce random inconsistencies. I'd bet the first impression is "wow, impressive colours!" immediately followed by "but it doesn't look quite right for some reason". That's until you get 5-colour TV transmission* of course.
The 5-colour thing also makes each pixel 66% larger (probably not a showstopper in the long run, but I'd bet it's one of the reasons why their demo 1920 x 1080 display is so ginormously huge).
*or whichever source you fancy
If you don't believe that a consistently imperfect gamut is better than random errors in colour resampling, try this simple experiment: take a Technicolor movie and a crappy piece of TV show (let's say Baywatch, for the abundance of red and fleshy tones). Watch one for a few minutes. At first the colours look weird, then you get used to it and reconstruct the real colourspace from what your brain knows (Baywatch swimsuits are red, bananas are yellow, ...). Now switch to the other source. Ouch! It hurts, doesn't it? All these colours are waaaayyy off! But then, after a few minutes, it's all OK again! Switch back and forth a few times, and you'll understand why introducing random inconsistencies in colour rendition is way worse than having consistent errors.
That's why I don't think this new display technique will bring anything to the table unless there is a 5-colour feed to channel through it.
I've read all the comments, some of which are very helpful, and I can see how the introduction of Y and C emitters can get you some extra colours compared to a traditional RGB monitor. However, isn't it the case that all those extra colours are highly saturated colours that hardly ever occur in nature or in everyday life?
Meanwhile, the introduction of Y and C emitters doesn't help at all with displaying violet colours that RGB displays can't show and which really do occur, for example in flower petals ...
Some of what you say reflects what I originally thought.
(Disclaimer: I am not an expert in this area, and the following is based on reading and considering what has been said in this thread).
Bear in mind that if the camera's sensors work *exactly* like the human eye, then a recorded result of something like R,G,B (50%, 50%, 0%) could not have come from "fake" yellow (red + green) because the green light would have partially stimulated the blue sensor, leading to the blue reading being greater than zero.
Of course, things get more complicated (I would guess) if the electronic device's "red", "green" and "blue" sensors have (e.g.) peak sensitivities at slightly different frequencies, differing levels of overlap and/or different responses.
Or perhaps the camera sensors' characteristics have been slightly tweaked to allow/compensate for the fact that the results would be reproduced with red, green and blue lights, which - for reasons I gave above - cannot reproduce all colours that can be *sensed* or *recorded* with RGB sensors.
And what about entirely artificial pictures that were (e.g.) created from scratch inside a computer?
Bear in mind that all this talk of colour ultimately comes down to what the eye will perceive (or what we want it to perceive), and maintaining and reproducing that through one or more intermediate steps.
As I said, do *not* take the above as the words of an expert. But the fact I can get these issues from what one might think was a simple idea (the eye has three colour sensors, so we simply record and reproduce those three colours) shows that colour perception *is* - like others have said - quite complex.