Last night over dinner here in Taipei, Intel said that video encoding has always been a task best suited to the CPU, and that nothing will change that in the future.
We were talking about CUDA and how Nvidia’s claimed massive performance increases could change the paradigm for software developers, especially with industry heavyweights like Adobe already on board.
Intel’s representatives said that DivX uses the CPU to dynamically adjust bit allocation across the scene, ensuring that detailed areas of the picture get the bits they require.
“When you’re encoding on the CPU, the quality will be higher because we’re determining which parts of the scene need higher bit-rates applying to them,” said François Piednoel, senior performance analyst at Intel.
Piednoel claimed that the CUDA video encoder will likely deliver poor-quality encodes because it uses a brute-force method of splitting the scene up and treating every pixel the same. It’s interesting that Intel is taking this route, because one thing Nvidia hasn’t really talked about so far is video quality.
“The science of video encoding is about making smarter use of the bits and not brute force,” added Piednoel.
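To make the idea Piednoel is describing concrete, here’s a minimal sketch in C of content-adaptive bit allocation: measure how much detail each block of a frame contains and hand the busier blocks a lower quantiser, which means more bits. The block size, thresholds and function names are illustrative assumptions rather than anything taken from DivX or another shipping encoder, and real rate control also weighs motion, perceptual masking and an overall bit budget.

```c
#include <stdint.h>

#define BLOCK 16  /* 16x16 macroblocks, as in MPEG-style codecs */

/* Luma variance of one 16x16 block: a cheap proxy for visual detail. */
double block_variance(const uint8_t *luma, int stride, int bx, int by)
{
    const uint8_t *p = luma + (by * BLOCK) * stride + (bx * BLOCK);
    double sum = 0.0, sum_sq = 0.0;

    for (int y = 0; y < BLOCK; y++) {
        for (int x = 0; x < BLOCK; x++) {
            double v = p[y * stride + x];
            sum += v;
            sum_sq += v * v;
        }
    }

    double n = (double)(BLOCK * BLOCK);
    double mean = sum / n;
    return sum_sq / n - mean * mean;   /* E[X^2] - E[X]^2 */
}

/* Map detail to a quantiser: detailed blocks get a lower QP (more bits),
 * flat blocks are quantised harder. Thresholds are purely illustrative. */
int choose_qp(double variance, int base_qp)
{
    if (variance > 1000.0) return base_qp - 4;  /* very detailed */
    if (variance > 200.0)  return base_qp - 2;  /* moderately detailed */
    if (variance < 20.0)   return base_qp + 4;  /* nearly flat */
    return base_qp;
}
```

The uniform, per-pixel treatment Piednoel attributes to the CUDA encoder would amount to skipping the variance step entirely and giving every block the same quantiser.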
I asked Piednoel what will happen when Larrabee turns up, because that is, after all, a massively parallel processor. I thought it would be interesting to see whether Intel would change its tune once it had something with the raw processing power to deliver the kind of application performance being claimed for CUDA. Intel said that comparing the two is impossible, because the GPU doesn’t have full x86 cores. With CUDA, the argument goes, you can only write programs in C and C++, while x86 lets developers choose whatever programming language they prefer, which is obviously a massive boon to anyone who doesn’t code in C.
Intel claimed that not every developer understands C or C++. While that may be true to an extent, anyone who has done a Computer Science degree is likely to have learned C at some point; the first language I learned during my own Computer Science degree was, you guessed it, C. And after learning procedural programming in C, I applied that knowledge to writing procedural programs in other languages.
What’s more, Intel said that GPUs aren’t very good at branching and scheduling, and that the execution units themselves are suited only to graphics processing. This was something I disagreed with rather vocally: not only are today’s GPUs handling tens of thousands of threads at once, they’re also designed to cope with branching, since dynamic branching is part of the spec for both major graphics APIs, and that capability definitely isn’t limited to graphics workloads. From what I can make of this, Intel believes that the stream processing units in AMD’s and Nvidia’s latest graphics architectures are too dumb and not accurate enough to do anything other than push pixels, and I guess that’s why Intel is using fully functional x86 cores in its Larrabee architecture.
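For what it’s worth, branching in GPU code is perfectly ordinary. Here’s a minimal CUDA sketch, with a made-up kernel name and threshold purely for illustration, in which every thread takes a data-dependent path based on the pixel it owns:

```cuda
// Minimal CUDA kernel with per-thread, data-dependent branches.
__global__ void threshold_pixels(const unsigned char *luma, int *mask,
                                 int n, int threshold)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;             // bounds check: itself a branch

    if (luma[i] < threshold)        // each thread branches on its own data
        mask[i] = 1;                // pixel is below the threshold
    else
        mask[i] = 0;                // pixel is at or above the threshold
}
```

The genuine caveat is that threads within a warp which take different paths are executed one path at a time, so heavily divergent code does carry a performance cost, but that is a long way from GPUs being unable to branch at all.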
What do you make of all of this? Share your thoughts in the forums.