Q&A: Itanium – Succeeding Where Brute Force Alone Isn’t Enough

A cooler, more flexible, and more powerful Itanium design may yet succeed where brute force alone hasn’t been enough

Earlier this month, Intel Corp. delivered a server chipset for its Nocona processor, the first of its 32-bit chips to feature 64-bit extensions. For nearly a decade now, however, Itanium—first known by the code-name Merced, referred to for a time as IA-64, and subsequently rechristened the Itanium Processor Family—has been Intel’s go-to 64-bit chip.

We spoke with Jason Waxman, director of multiprocessor marketing for Intel's Enterprise Product Group, about Itanium’s future in the aftermath of 64-bit Nocona. Waxman points to a number of forthcoming enhancements in the next version of Itanium, code-named Montecito, including a dual-core design with aggressive multi-threading, as well as reliability, performance, virtualization, and power management features. Itanium volumes have been less-than-spectacular, Waxman concedes, but a cooler, more flexible, and more powerful Itanium design may yet succeed where brute force alone hasn’t been enough.

Itanium hasn’t exactly been flying off of OEMs’ shelves, so to speak. Could you talk about your expectations for Itanium 2 and its successor [Madison], and about whether or not you’re meeting these objectives?

We announced last year that we shipped over 100,000 Itanium 2 processors during the course of the year, and I think that was a pretty substantial milestone. We haven’t said where we’re at this year; we try to keep [these numbers] pretty confidential for the most part, but we’re seeing major growth in our first half over what we saw last year. It’s definitely a pretty fast-growing product line. What you’ll notice is that there are a lot of factors over the past year that have driven a substantial uptick in Itanium adoption.

Such as?

Microsoft last year introduced the first production version of Windows Server for Itanium, and that was a huge step forward in driving volume to the architecture. Another big step was that the price point came down substantially: Intel introduced a range of processor SKUs, and the net result was that price points were attractive and continued to drive volume. The third aspect was that we were seeing higher and higher levels of performance out of the box. Moving from the first Itanium 2 generation to the Madison [next revision of the Itanium 2] generation, we’ve had a platform that’s been compatible and allows for substantial upgradeability. What we’re looking at next is a platform called Montecito, which is a dual-core chip.

That [Montecito] is a multi-threaded design, too, right?

Montecito is going to be a dual-core CPU, and, yes, it will also be multithreaded, so it will support up to four threads per CPU: two cores, with two threads per core. If you look at our Itanium CPUs today, each has four megabytes of cache, but what we’ll see in the Montecito generation is 24 MB of cache. We’re focusing Itanium on high-end performance, high-end reliability, and high-end scale.

One new trend in processor design is reduced-power chips. Itanium has been a notorious offender in this regard, dissipating about 130 watts of power in normal operation. With all of the new real estate you’re packing into these dies, are you concerned about power requirements as well?

Part of the reason we’re able to put all of that stuff on a single die is that [Montecito is] on our 90 nm process. That’ll be the first Itanium on 90 nm, so in addition to being dual-core, multi-threaded, [and having] four times the cache, [Montecito will] also draw 20 percent less power than the product we have today. By the way, we have a low-voltage and low-power version of Itanium as well: our Deerfield [chip] today runs at 62 watts, versus [the standard Itanium 2’s] 130 watts.

Mainframe CMOS has long featured advanced data integrity, recovery, or availability logic, and to some extent—I’m thinking of what Fujitsu’s done with its own SPARC line, as well as IBM with Power5—we’ve seen this replicated in RISC chips as well. Is this something that Intel is thinking about for future iterations of Itanium?

What we’re designing Itanium to do is deliver on a couple of different fronts. It definitely has to deliver high-end performance, especially for large enterprise apps, as well as for technical and high-performance computing.

Performance isn’t enough. You have to be able to deliver reliability. The same customer that values the mainframe may not value it for performance, so we have to deliver on the reliability features. You also have to deliver on something like power management, because there are very large end users putting millions of dollars into their data centers just to manage cooling and power, and they’re asking us to help them with that. So with Itanium we’re trying to go beyond just the speed and the performance.

Okay, but can you point to anything concrete that you’re doing about this?

We’re working on a technology that we call Foxton, which is a way of accelerating the performance of the CPU. What we’ve found is that as long as we stay within a certain power envelope, we have the ability to ratchet up the frequency of the processor. So with Foxton, we’ve put something together that allows the CPU to raise its frequency in certain cases, when [the overall load is] still beneath the power requirements of the CPU. This gives us faster transaction processing for larger databases, for example.
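
To put that in concrete terms, the C sketch below models the kind of control loop Waxman describes: step the clock up while measured power stays under the part’s envelope, and back off otherwise. The power reading, frequency steps, and 130-watt budget are hypothetical stand-ins, not Intel’s actual Foxton design.

    /* Minimal sketch of an opportunistic frequency boost within a power
     * envelope. The numbers and the sensor read are hypothetical. */
    #include <stdio.h>

    #define POWER_ENVELOPE_W 130.0   /* rated power budget (hypothetical) */
    #define FREQ_STEP_MHZ     100    /* size of one frequency step (hypothetical) */
    #define FREQ_MIN_MHZ     1400
    #define FREQ_MAX_MHZ     1800

    /* Stand-in for an on-die power sensor. */
    static double read_power_watts(void) { return 112.0; }

    static int adjust_frequency(int current_mhz)
    {
        double power = read_power_watts();

        if (power < POWER_ENVELOPE_W && current_mhz + FREQ_STEP_MHZ <= FREQ_MAX_MHZ)
            return current_mhz + FREQ_STEP_MHZ;   /* headroom left: ratchet up */
        if (power >= POWER_ENVELOPE_W && current_mhz - FREQ_STEP_MHZ >= FREQ_MIN_MHZ)
            return current_mhz - FREQ_STEP_MHZ;   /* over budget: back off */
        return current_mhz;
    }

    int main(void)
    {
        int freq = FREQ_MIN_MHZ;
        for (int i = 0; i < 5; i++) {
            freq = adjust_frequency(freq);
            printf("cycle %d: %d MHz\n", i, freq);
        }
        return 0;
    }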

We’ve got something else we call Pellston, which is a cache reliability feature. Sometimes a portion of the cache in a processor may fail, and if that happens, you don’t want to bring the whole system down. So Pellston shuts off the small portion of cache that’s affected, and it does so only if there’s a bad portion of the cache. There’s something called Chipkill, where if one of the pieces of memory in a system fails, you shut that off and allow the system to keep on running. What Pellston does is the same thing, but with the cache.
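
As a rough illustration of that idea, the sketch below retires a single cache line once it reports repeated ECC errors, leaving the rest of the cache in service. The line count, error threshold, and data structures are illustrative assumptions, not the actual Itanium logic.

    /* Sketch of per-line cache disable on repeated ECC errors. */
    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_LINES        1024
    #define ERROR_THRESHOLD  2      /* errors tolerated before retiring a line */

    struct cache_line {
        int  ecc_errors;
        bool disabled;
    };

    static struct cache_line cache[NUM_LINES];

    /* Called when ECC detects an error in a given line. */
    static void report_ecc_error(int line)
    {
        if (cache[line].disabled)
            return;
        if (++cache[line].ecc_errors >= ERROR_THRESHOLD) {
            cache[line].disabled = true;   /* retire only the bad line */
            printf("line %d disabled after %d errors\n", line, cache[line].ecc_errors);
        }
    }

    int main(void)
    {
        report_ecc_error(42);
        report_ecc_error(42);   /* second error crosses the threshold */
        report_ecc_error(7);    /* single error: line stays in service */
        return 0;
    }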

I’ve heard of something else called Silvervale, which has something to do with virtualization. Are you able to disclose any details about that?

Right now, there are a number of customers who are starting to evaluate virtual machine technology to virtualize portions of their environment, and we think that’s definitely a trend that will continue into the future. What we’re looking to do with Silvervale technology is put special hooks into the processor that streamline the interaction between the virtual machine monitors and the hardware, which will accelerate performance.

You mentioned things you’re doing with power management. Could you disclose a little more about that?

We’re working on [a] demand-based switching technology that allows a processor to look at the workload coming in on a server. A large server probably isn’t cranking out as many transactions at 3:00 AM as it is at the peak of the day, though other servers require a higher level of transactions if they’re doing batch work overnight. When servers are not being fully utilized, the technology allows processor power to be ratcheted back. We’ve seen as much as a 25 percent power saving at a system level.
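
The policy Waxman outlines can be sketched as a simple utilization check that drops the processor into a lower-power state when load falls off and restores full speed when demand returns. The threshold and the two power states below are assumed for illustration; they are not Intel’s actual demand-based switching parameters.

    /* Sketch of demand-based switching: pick a power state from utilization. */
    #include <stdio.h>

    enum pstate { P_FULL_SPEED, P_REDUCED };

    static enum pstate pick_state(double utilization)
    {
        /* Hypothetical policy: ratchet back when load falls under 30 percent. */
        return (utilization < 0.30) ? P_REDUCED : P_FULL_SPEED;
    }

    int main(void)
    {
        double samples[] = { 0.85, 0.40, 0.10, 0.05, 0.75 };  /* e.g. a 3:00 AM dip */
        for (int i = 0; i < 5; i++) {
            enum pstate s = pick_state(samples[i]);
            printf("load %.0f%% -> %s\n", samples[i] * 100.0,
                   s == P_REDUCED ? "reduced power" : "full speed");
        }
        return 0;
    }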

Will all of these enhancements be able to do what Itanium’s raw performance thus far hasn’t? That is, drive adoption of the Itanium Processor Family architecture itself?

I think so. I know customers that are evaluating the move to Itanium now, not even waiting for Montecito. And if you look at a lot of the platforms out there, especially from some of the large Japanese vendors and companies like HP, they are doing exactly that: they are building mission-critical platforms based on Itanium. Those platforms are absolutely designed to migrate people off of the big RISC Unix and mainframe systems, allowing them to get some of the performance benefits of mainframes, with mainframe-like emulation technologies on Itanium.

Itanium continues to ramp into more and more segments, and part of our long-term strategy is to keep pushing Itanium into the mainstream. By 2007, we expect the Itanium processor platform to deliver up to two times the performance of [the 32-bit] Xeon at that time, so that transition is going to continue to drive Itanium volumes.

About the Author

Stephen Swoyer is a Nashville, TN-based freelance journalist who writes about technology.
