But I really can't help thinking of the old Absolutely sketches about that Scottish town council every time I see "Sandy Bridge"
Intel took the wraps off its new Sandy Bridge microarchitecture Monday morning — now officially branded as the 2nd Generation Intel Core Processor — revealing a number of notable improvements over its current Nehalem-based processor line, including what the company claims are greatly improved on-chip integrated graphics.
"Users want video encoding to take place in seconds, not minutes," and the claim is that Sandy Bridge's video-transcoding capabilities will deliver that level of performance.
Actually, what users want is <b>x264</b>-style high-quality encoding: High Profile, Level 4.1, 16:9 HD at 1920×1080.
None of this GPU Baseline/Main-Profile, iPad 3:2, SD-and-lower crap as your main/best option in 2010/11. Will this internal video-transcoding ASIC give us x264 visual quality and flexibility, or more of the same GPU-grade quality at the same bit rate? Nvidia's CUDA-based Badaboom, AMD's Avivo video converter, and even Fixstars' Cell-assisted encoder for the PS3 were all crap: none of them could deliver x264 High Profile visual quality or its AVC/H.264 settings tweakability.
I keep hearing how bits of hardware will revolutionise the computer world, but it's largely untrue.
These are just minor iterations and speed increases. In effect it's comparable to the day they added an FPU to the processor.
The addition of an FPU made certain operations faster, but unless you were running a ray tracer you probably didn't notice. Processors were so slow back then (25 MHz) that a dedicated floating-point unit made sense. Processors are much faster now.
So will a GPU inside a CPU really make that much difference? Maybe it will be cheaper: a single fan for both the CPU and GPU. But what about multiple displays, or dual cards linked for performance?
I think the only revolutions are in computer form factor and software. The hardware used is largely secondary.
Obviously it depends on how good the on-chip GPU is.
However, an on-chip GPU is connected to the rest of the chip by "wires" millimeters at most in length. Speed-of-light latency: a few picoseconds. A GPU sitting across a PCI Express link is several centimeters away. Latency: at least twenty times worse. Bandwidth: much harder to maintain. Latency is the speed of light at work; it can't be finessed by any sort of engineering.
There's a biological analogue. Our eyes are as close to our brains as nature can arrange. Nerves are quite slow and bulky: there's a penalty for eyes on stalks, or for putting the brain in a safer location deep inside the torso. Which is why brains are perilously exposed on the end of necks: better visual bandwidth.
@Giles Jones, having an FPU still makes sense, or math operations would be pretty glacial -- which is why the CPU has it built in. I do agree with the main point, though: having a GPU built in is really not revolutionary compared to having a discrete one. It does the same thing, and integrated parts tend to be weak next to discrete ones (good enough for a lot of users, but nothing to get excited about).
@AC re: "non of this crap GPU baseline/main profile" etc.: The GPU is pretty general purpose. If the current encoders are crap, that's down to the programmers; it's not some inherent limitation of the GPU. Particularly hopeful for getting any arbitrary encoder running GPU-accelerated: GPGPU ("General-Purpose GPU") programming should allow almost unmodified code to be compiled for the GPU, meaning almost any encoder can be converted with little effort. (The "almost" part: currently the compiler doesn't auto-parallelize, so loops that can be split up to run efficiently on the GPU have to be flagged; but those flags are treated as C comments if you compile for the CPU instead, so the modified code will still compile and run on the CPU as well.)