Intel gave us a sneak peek at its 'Tera-Scale' computing initiative this morning, ahead of the official start of the Intel Developer Forum tomorrow.
Justin Rattner, Intel's Chief Technology Officer, told assembled hacks that with teraflops of computing power and terabytes of data storage, the next 10 years would see major changes in the way we live, making personal computers more personal.
The major benefit of this new 'Tera-Era' will be the enabling of all kinds of new applications with more human-like characteristics, he said.
Rattner outlined the three major challenges to making the era of Tera draw closer:
- Silicon: Major research is being done to make silicon amenable to the multiple cores - possibly hundreds - that will run together. Processors will need configurable caches, better interfaces to the rest of the system and better power efficiency.
- Platform: The rest of the system needs to be built around this scale of computing. This will mean more radios for better network access, high speed input/output devices, memory that can handle multiple transactions simultaneously and virtualization technology to allow one physical computer to act as multiple systems.
- Software: Programmes being written will need to exploit this technology. This will require an analysis of computer workloads, tools to make programmes threaded automatically, as well as better compilers and libraries (a rough sketch of the sort of manual threading those tools would automate follows below).
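That threading burden currently falls on developers. As a purely hypothetical sketch, in modern C++ rather than anything Intel showed, of the manual work-splitting that automatic threading tools and libraries would take off programmers' hands, here's a fragment that spreads a simple summation across however many cores the machine reports:

```cpp
// Hypothetical sketch: hand-threading a summation across the available cores,
// the kind of chore that auto-threading compilers and libraries would do for you.
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const std::size_t n = 10000000;            // made-up workload size
    std::vector<double> data(n, 1.0);

    unsigned cores = std::thread::hardware_concurrency();
    if (cores == 0) cores = 4;                 // fall back if the count is unknown

    std::vector<double> partial(cores, 0.0);
    std::vector<std::thread> workers;

    // Give each thread its own slice of the data and its own partial result,
    // so the threads never need to lock anything while they run.
    for (unsigned t = 0; t < cores; ++t) {
        workers.emplace_back([&, t] {
            std::size_t begin = t * n / cores;
            std::size_t end   = (t + 1) * n / cores;
            partial[t] = std::accumulate(data.begin() + begin,
                                         data.begin() + end, 0.0);
        });
    }
    for (auto& w : workers) w.join();

    double total = std::accumulate(partial.begin(), partial.end(), 0.0);
    std::cout << "Sum across " << cores << " threads: " << total << "\n";
}
```

The per-thread partial results are the point: splitting the data so threads never touch the same memory is exactly the sort of analysis that Rattner says future tools will need to do automatically.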
"There are some substantial obstacles to achieving the performance we want," Rattner said. "The real challenge is how do you interconnect these chips, how do you feed the beast with enough memory bandwidth to make all this worthwhile, and then how do you programme such a thing."
Rattner said that memory transaction technology will be crucial to the future, and showed off a demo of how it could work. Currently, if a piece of data sits in memory, only one process can update it at any one time (a scenario familiar to many A-Level and Degree-Level Computing students), so any other process needing that data has to sit and wait its turn. With transaction technology, each process makes its changes and conflicts are sorted out as they arise, meaning simultaneous access works fine. Intel showed a demo in which eight processes all had to work on the same section of memory. With current technology, the calculation tasks took 10.4 seconds to complete; using a prototype of the transaction tech, they were done in 4.6 seconds, a fairly hefty performance benefit.
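Intel didn't detail how its prototype works, and hardware transactional memory isn't something you can call from ordinary code today, but the optimistic idea behind it, try the update, detect a clash, and retry rather than queue behind a lock, can be roughly illustrated with standard C++ atomics. The eight-worker shared counter below is our own toy example, not Intel's demo:

```cpp
// Rough illustration of the optimistic idea behind transactional memory,
// using a standard atomic compare-and-swap rather than any Intel prototype:
// each thread reads the shared value, computes its update, and commits only
// if nobody else changed the value in the meantime; on a conflict it retries
// instead of waiting behind a lock.
#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

std::atomic<long> shared_total{0};

void add_to_total(long amount) {
    long seen = shared_total.load();
    // Optimistic "transaction": retry until the update commits without conflict.
    while (!shared_total.compare_exchange_weak(seen, seen + amount)) {
        // 'seen' is refreshed with the latest value on failure; just try again.
    }
}

int main() {
    std::vector<std::thread> workers;
    for (int t = 0; t < 8; ++t) {              // eight workers, echoing the demo
        workers.emplace_back([] {
            for (int i = 0; i < 100000; ++i) add_to_total(1);
        });
    }
    for (auto& w : workers) w.join();
    std::cout << "Total: " << shared_total.load() << "\n";  // expect 800000
}
```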
Intel now has hundreds of researchers working on more than 80 projects related to the Tera-Scale initiative.
What will this new level of performance and storage give us? Well, apart from Unreal Tournament 2012 at 7000x3400 with 150x Anti-Aliasing, and capacity for an entire internet-worth of porn, it will enable entirely new computing scenarios. The example given in the Q&A session at the end of the talk was that cars could be fitted with machines boasting teraflops of processing power, able to drive for us while we kick back and have a snooze or read the paper. Rattner cited the Intel-powered Stanford vehicle in the DARPA challenge as an example of this technology in prototype.
We'll have plenty of coverage of IDF as the week progresses. In the meantime, drop us your thoughts over in the News Forum.