In a conference call about Intel technologies at the VLSI Symposia 2008 yesterday, Intel let loose a few more Nehalem details for us.
First off, the Nehalem processor family will cover everything from mobile to server, in much the same way the Core 2 architecture has done for the past few years.
Nehalem will have 25GB/sec of inter-socket bandwidth using 6.4GT/sec QPI links, which is currently "three times larger than our best competition today," said Rajesh Kumar, Director of Intel Circuit and Low Power Technologies. It'll also have 32GB/sec of memory bandwidth thanks to triple-channel DDR3 at 1,333MHz (at least on Bloomfield, which launches in Q4).
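For what it's worth, those headline numbers roughly check out on the back of an envelope. The short Python sketch below reproduces the 25GB/sec and 32GB/sec figures, assuming QPI's 16 data bits per direction and 64-bit-wide DDR3 channels (details not stated in the call itself):

```python
# Back-of-envelope check of the bandwidth figures Intel quoted.
# Assumed widths: QPI carries 16 data bits per direction, DDR3 channels are 64 bits wide.

QPI_TRANSFER_RATE = 6.4e9      # transfers per second (6.4 GT/s)
QPI_DATA_BITS = 16             # data bits per direction per link

# Each direction moves 16 bits per transfer; a full-duplex link doubles that.
qpi_one_way = QPI_TRANSFER_RATE * QPI_DATA_BITS / 8          # bytes/sec
qpi_two_way = 2 * qpi_one_way
print(f"QPI per direction:   {qpi_one_way / 1e9:.1f} GB/s")  # ~12.8 GB/s
print(f"QPI both directions: {qpi_two_way / 1e9:.1f} GB/s")  # ~25.6 GB/s

DDR3_TRANSFER_RATE = 1.333e9   # transfers per second (DDR3-1333)
CHANNEL_WIDTH_BITS = 64        # bits per memory channel
CHANNELS = 3                   # triple channel on Bloomfield

memory_bw = CHANNELS * DDR3_TRANSFER_RATE * CHANNEL_WIDTH_BITS / 8
print(f"Memory bandwidth:    {memory_bw / 1e9:.1f} GB/s")    # ~32.0 GB/s
```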
Nehalem has four enhanced cores, an "uncore" connecting the cores to the I/O, and a third-level cache. In addition, it includes:
- Configurable clocking
- Fast-lock, low-skew PLLs
- High reference clock frequencies
- An analogue supply tracking system
- Adaptive frequency clocking
- Low-jitter Intel QuickPath Interconnect
- Integrated memory controller clock generation
- Jitter-attenuating DLLs
The memory, processing cores and I/O are all completely decoupled in terms of voltage and frequency, so each can optimise its own operating point for performance and power. However, Intel was keen to point out that, unlike the competition's asynchronous design, these three domains are linked by synchronous interfaces to offer lower latencies and higher performance.
Nehalem's memory-to-cache latency, for example, will be "drastically smaller" than the competition's. The decoupling also brings the benefit of a modular design, where extra components can easily be dropped in because they are essentially self-contained.
Intel's EIST and C1E power-saving clock states will now work 56 percent faster in Nehalem, and the chip's frequency will also adapt to power supply voltage changes and vDroop. This should make a system even more stable, but we think it might push enthusiasts towards the motherboard and PSU combinations that best minimise this clock-down effect, especially if it shows up in performance figures.
Intel even dropped an interesting titbit: it had considered completely decoupling itself from rated frequencies because of the constant clock changes, but it found customers and retailers were very much against the move. Even though the CPU is constantly adjusting its clock speed internally, from the outside it appears to run at a fixed frequency because everything averages out. This continual variation will surely make our job of testing hardware reliably that much more difficult, though - how much so depends on the extent of the clock changes and the quality of the motherboard and power supply.
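To make the averaging point concrete, here's a trivial illustration (the sample durations and clock speeds are made-up numbers, not anything Intel disclosed) of how a time-weighted average turns a constantly shifting internal clock into an apparently steady external frequency:

```python
# Illustrative only: a CPU that briefly runs above and below its rated clock
# still looks like a fixed-frequency part when measured over a longer window.

# (duration in seconds, clock in GHz) over a short observation window - made-up values
samples = [
    (0.004, 3.20),   # brief burst above nominal
    (0.010, 2.93),   # nominal rated clock
    (0.006, 2.66),   # clocked down, e.g. during a supply droop
]

total_time = sum(duration for duration, _ in samples)
total_cycles = sum(duration * ghz for duration, ghz in samples)

# The time-weighted average is what an external frequency measurement would report.
print(f"Apparent clock: {total_cycles / total_time:.2f} GHz")
```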
Finally, Intel also mentions in its documents that the duty cycle adapts to transistor variation and lifetime stress. Does this mean that if your CPU isn't made as well as the next guy's, instead of dying outright it will reduce the time that part spends working? And does that translate into a lower core frequency over time?
In other words, after 12 months of overvolting and overclocking, might your CPU end up running at a lower speed than the one you bought it at, or have less cache available as the chip turns down the use of these tired transistors? Considering CPUs very rarely die these days, we can't see it being much of a problem - unless you put some silly voltages through it, that is. However, the long-term implications and resale value might be of concern for some end users. We will endeavour to find out the answers from Intel.
Discuss in the forums.