I'm Confused...
...I thought this was supposed to be a mobile phone chip. Every single spec point seems to be better than my current desktop machine.
BTW who has got the eyes to do justice to a 4k display on a phone? Superman?
Yesterday, Qualcomm teased its Snapdragon 855 processor, which is aimed at next year's top-end 5G Android phones. Today, we've got hold of more details of its insides. Qualy reckons these specifications will set a high benchmark for next-gen devices from Apple and other gadget manufacturers using non-Snapdragon chipsets to …
I have a Sony Xperia XZ Premium with a 4K, ~800ppi, HDR screen. Sure, it's overkill for most cases. By default only select apps (generally the video/gallery ones) run at 4K; the rest stick to 1080p, so there's no noticeable battery impact. Professional photographers love the 4K for pixel peeping on the go. Just select Professional/sRGB mode.
Even at 1080p "effective" resolution, the 800ppi dot pitch means small details, like serifs on text, flourishes on graphics, or detailing in notification bar icons, look painted. The same goes for e-reader pages or text-heavy web content. It seems to cross the same uncanny valley we experienced when laser printers crossed ~600dpi and print "grain" just disappeared.
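For anyone who wants to sanity-check that impression, a rough back-of-the-envelope (assuming ~1 arc-minute of visual acuity and a ~30cm viewing distance, both ballpark figures of mine): one arc-minute at 30cm subtends about 300mm × tan(1°/60) ≈ 0.087mm, roughly 87µm, while an ~800ppi panel has a pixel pitch of 25.4mm ÷ 800 ≈ 32µm. The dots are two to three times finer than the eye can resolve at normal phone distance, which is much the same threshold laser printers crossed around 600dpi.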
tl;dr: Yes, mostly an indulgence, but a very satisfying one :)
No idea how reliable it is on battery consumption, but the latest Intel mobile chips are supposed to ramp up/down with workload: get the compute done quickly, then idle back down to a lower power state. If I've understood it right.
So can these chips do likewise? Idle/sleep cores, lower speed/power draw between user/wake calls?
So can these chips do likewise?
Because downvotes without explanation are pointless (no, I am not one of your downvoters) - yes, of course they can. Arm has been doing this kind of stuff for years, and so has Intel.
There is a difference, though - this chip seems to be a refinement of a refinement of the idea. Not only can individual cores have their clocks ramped up and down to meet demand, and be switched into near-zero-power hibernation modes on a whim; workloads can also be moved from the 'simpler', power-sipping cores to the 'complex' cores when a task is speed-critical, or when the scheduler calculates it would actually use less power overall to finish a task quickly on a very fast but hungry core than to take longer on a more frugal one.
Arm call this idea 'big.LITTLE' and I don't think (though I'm willing to be corrected) that Intel has anything quite equivalent - it would be like putting an Atom on the same chip as an i7 and deciding which one to use according to workload.
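If you want to watch this happening on an Arm phone or a Linux box, a minimal sketch along these lines will show it - assuming the usual cpufreq sysfs files are exposed (they are on most stock kernels, but that is an assumption, and the core count here is just a guess at a typical 4+4 layout):

#include <stdio.h>

int main(void) {
    char path[128];
    for (int cpu = 0; cpu < 8; cpu++) {   /* assumed 4 big + 4 LITTLE cores */
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_cur_freq", cpu);
        FILE *f = fopen(path, "r");
        if (!f) { printf("cpu%d: offline or no cpufreq\n", cpu); continue; }
        long khz = 0;
        if (fscanf(f, "%ld", &khz) == 1)
            printf("cpu%d: %ld MHz\n", cpu, khz / 1000);   /* current clock */
        fclose(f);
    }
    return 0;
}

Run it at idle and again under load and you should see the LITTLE cores ticking over at a few hundred MHz while the big cores sit parked or briefly spike to their top clock.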
M.
"The hardware can, therefore, record and render video and images with more than a billion shades of color thanks to the 30 bits-per-pixel range."
Scientists think most humans can only distinguish around 1 million colour shades, so offering a billion colours sounds like marketing hype to me. Even 24-bit colour, with its 16 million colours, has 15 million more shades than most people are able to differentiate between.
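The arithmetic behind both figures, for anyone who wants to check it (10 bits per RGB channel in the 30-bit case, 8 bits per channel for 24-bit):

#include <stdio.h>

int main(void) {
    long long shades30 = 1LL << 30;   /* 10 bits x 3 channels = 1,073,741,824 */
    long long shades24 = 1LL << 24;   /*  8 bits x 3 channels =    16,777,216 */
    printf("30-bit colour: %lld shades\n", shades30);
    printf("24-bit colour: %lld shades\n", shades24);
    return 0;
}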
"Each A76 has 128KB of L1 cache (64KB four-way instruction cache with four-cycle load-use latency, 64KB for data), 256 or 512KB of 1280-entry five-way L2 cache, and shares up to 4MB of L3."
I've often wondered why the L1 cache sizes on modern CPUs are so small. The ARM3 back in 1989 had a 4KB cache; I'd have expected more than a 32-fold increase in nearly thirty years. Later CPUs now have two extra levels of cache, and I understand a little bit about cache coherency. I'm sure there must be a good reason. Anyone know what it is?
How much slower is L2 and L3 I wonder?
The reason is physics. The further you get away from the actual computation cores, the bigger the latencies become. Therefore, you can assume that the small caches "run really fast".
What you observed is basically the dilemma of shrinking dies vs. increasing clock frequencies. ;-)
As there are no fixed/standardized sizes/clock freqs. for caches, your question is hard to answer. Nonetheless, here's a link with a few examples (AMD Ryzen and Intel Core i7):
https://www.techpowerup.com/231268/amds-ryzen-cache-analyzed-improvements-improveable-ccx-compromises?cp=1
[AMD's Ryzen Cache Analyzed - from 2017]
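To put rough numbers on "how much slower", the usual trick is a pointer-chase microbenchmark: walk a randomly ordered buffer sized to fit in L1, then L2, then L3, then DRAM, and time the dependent loads. A minimal sketch (the working-set sizes and iteration count are my own guesses, not anything Qualcomm publishes):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Average latency of one dependent load over a buffer of 'bytes' bytes,
   visited as a single random cycle so the prefetcher can't help. */
static double chase_ns(size_t bytes, long iters) {
    size_t n = bytes / sizeof(size_t);
    size_t *next = malloc(n * sizeof(size_t));
    for (size_t i = 0; i < n; i++) next[i] = i;
    for (size_t i = n - 1; i > 0; i--) {      /* Sattolo shuffle: one big cycle */
        size_t j = rand() % i;
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }
    volatile size_t p = 0;                    /* volatile keeps the loop honest */
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long k = 0; k < iters; k++) p = next[p];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    free(next);
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    return ns / iters;
}

int main(void) {
    /* Working sets picked to land inside typical L1, L2, L3 and DRAM */
    size_t sizes[] = { 32u << 10, 256u << 10, 4u << 20, 64u << 20 };
    for (int i = 0; i < 4; i++)
        printf("%6zu KB: %.1f ns per load\n",
               sizes[i] >> 10, chase_ns(sizes[i], 20000000L));
    return 0;
}

On a typical recent core you'd expect roughly 1ns in L1, a few ns in L2, 10-20ns in L3 and getting on for 100ns once you spill out to DRAM - which is why the L1 is kept small enough to answer in the four cycles the article quotes.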
I've often wondered why the L1 cache size on modern CPU's are so small.
To answer the size question, L1 cache is fast because it is effectively part of the processor, but that means it is built on the same die, and space taken up by RAM is space that cannot be used for computing functions. L1 cache is also a completely different beast to the DDR RAM used for main memory - it takes up more die area per bit, especially when you include the lookup tables. It's a trade-off and a case of diminishing returns.
At least, that's the way I have always thought of it. I can't find an image now, but I have a (possibly wrong) memory of seeing a micrograph of the ARM3, and the 4k cache took up as much die space as the whole of the rest of the processor.
M.
"This relies a library of pretty much every known material on Earth, and details on how light scatters when it hits their surfaces."
Clearly the Commentards are interested in the world's most important materials' reflection properties: beer and, the $Deity's personal favo(u)rite, red wine. I'd be happy to validate their models - please send a case of Chateau Lafite 1961.