Networking Processor Requirements in Cloud Computing Environments

Service provisioning using cloud computing poses challenges for the data center. What’s needed is a multi-core, heterogeneous architecture.

By Nabil Damouny, Senior Director of Strategic Marketing, Netronome

Cloud computing has become a widespread phenomenon in enterprise and service-provider networks alike. Driven by the centralization of data center resources and the advent of virtualization, cloud computing is becoming the preferred way to provision services to both enterprise users and public subscribers. Private clouds deliver IT-related capabilities "as a service" using Internet technologies to multiple departments or sites.

Public clouds allow service providers to provision compute, storage, and networking resources to their subscribers, in what has become known as the "multi-tenant cloud" service architecture.

Such architecture enables scaling data center infrastructure resources to meet target performance and SLAs (service-level agreements) while minimizing the total cost of ownership (TCO).

However, many challenges must be addressed to extend the concept of service provisioning using cloud computing technology in the data center -- not the least of which is effective systemwide virtualization spanning CPU, memory, and I/O resources, along with stringent security measures and a low-latency infrastructure.

The Cloud Dictates New Requirements

There are many trends driving the requirements that cloud computing places on the network, including:

  • High-speed networking: For example, 10GbE (10 Gigabit Ethernet) has become the de facto standard in data centers, soon moving to 40/100GbE
  • Network security: As services provided over the cloud need to be secure, they are fueling the need for virtualized firewalls and intrusion detection / prevention systems (IDS/IPS)
  • Low latency: The network between the user and the private or public cloud (including content, applications, and services) must deliver an experience that makes the user's separation from those resources completely transparent
  • Inter-VM switching: Changing the ratio of applications to servers creates an important change in traffic patterns. In addition to the traditional north-south (client-server) traffic, east-west (VM-to-VM) traffic becomes important -- allowing, for example, traffic to traverse a security-firewall VM before it is presented to the application VM
  • Converged I/O: Storage and network I/O are being consolidated onto the same networking infrastructure

To meet these challenges, a networking processor (NP) is required to assist the x86 compute subsystem in meeting the demands of high-speed networking, in-line security, I/O virtualization, and inter-VM switching, all while maintaining low latency for the best QoE (quality of experience). The combined solution is a multi-core heterogeneous architecture: the x86 takes care of compute and application processing while the NP manages the virtualized, secure I/O subsystem.

Silicon Requirements for the NP: Addressing the Challenges for Cloud Computing

Processing Requirements: Programmable and Multi-Threading

To provide effective processing for cloud services, the NP needs a CMT (chip multi-threaded) architecture for efficient computation and to hide memory latency. In addition, its datapath needs to be programmable and must provide I/O connectivity to the x86 subsystem, backplane, external memory, and a TCAM, if needed. A key function of the NP is classifying packets into flows while handling millions of flows simultaneously.
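The flow-classification step described above can be sketched in software. The following toy Python model (field names and counters are illustrative, not from the article) shows the core idea: hash each packet's 5-tuple into a flow key and keep per-flow state.

```python
from collections import namedtuple

# Hypothetical 5-tuple flow key; the real NP would compute this in hardware.
FlowKey = namedtuple("FlowKey", "src_ip dst_ip src_port dst_port proto")

class FlowTable:
    """Toy hash-based flow table: classify each packet into a flow
    and keep per-flow state (here, just packet and byte counters)."""

    def __init__(self):
        self.flows = {}

    def classify(self, pkt):
        key = FlowKey(pkt["src_ip"], pkt["dst_ip"],
                      pkt["src_port"], pkt["dst_port"], pkt["proto"])
        state = self.flows.setdefault(key, {"packets": 0, "bytes": 0})
        state["packets"] += 1
        state["bytes"] += pkt["length"]
        return key
```

Packets sharing the same 5-tuple map to one table entry, which is what lets later stages (security, load balancing) make per-flow rather than per-packet decisions.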

Memory Requirements

To keep up with higher line rates, the NP memory subsystem must support key metrics such as packet rate and number of lookups per packet. As a result, the NP will need to support two to four ECC-protected DDR3 channels, plus external TCAM over a high-speed, pin-limited interface such as Interlaken Look-Aside (Interlaken LA).
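A back-of-envelope calculation shows why packet rate and lookups per packet drive the memory design. At 10GbE, minimum-size 64-byte frames plus the 20 bytes of on-wire overhead (preamble, start-of-frame delimiter, and inter-frame gap) yield roughly 14.88 million packets per second; every table lookup per packet multiplies the memory access rate accordingly. The 4-lookups-per-packet figure below is an illustrative assumption, not a number from the article.

```python
def min_frame_packet_rate(line_rate_bps, frame_bytes=64, overhead_bytes=20):
    """Worst-case packets/sec at a given line rate. Ethernet adds a
    7-byte preamble, 1-byte SFD, and 12-byte inter-frame gap (20 bytes
    total) on the wire beyond the 64-byte minimum frame."""
    return line_rate_bps / ((frame_bytes + overhead_bytes) * 8)

pps = min_frame_packet_rate(10e9)   # ~14.88 million packets/sec at 10GbE
lookups_per_sec = 4 * pps           # assuming, say, 4 lookups per packet
```

At 40/100GbE those figures scale linearly, which is why pin-limited interfaces like Interlaken LA matter: the lookup bandwidth must grow without a proportional growth in package pins.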

Network/Storage I/O, Peripherals, and x86 Interfaces

The NP will need the following interfaces to effectively support the network and storage I/O, peripheral components, and the x86 compute subsystem:

  • Multiple 10GbE interfaces to the network subsystem today, moving to 40/100GbE in the future
  • Interlaken interface for high-speed serial connectivity to external chips
  • PCIe gen2 interface (up to 8 lanes) to the x86 subsystem -- with each lane capable of 5 GT/s, and on-chip IOV-over-PCIe capability; the resulting system is a tightly coupled, virtualized architecture
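The bandwidth implied by the PCIe bullet above is easy to work out. PCIe gen2 uses 8b/10b encoding, so only 80 percent of the 5 GT/s raw rate carries data: 4 Gb/s per lane per direction, or 32 Gb/s for an x8 link. (Protocol overhead such as TLP headers reduces usable throughput further; this sketch counts only the encoding.)

```python
def pcie_gen2_throughput_gbps(lanes):
    """Per-direction data rate of a PCIe gen2 link.
    Gen2 runs 5 GT/s per lane with 8b/10b encoding, so 8 of every
    10 transferred bits are payload: 4 Gb/s per lane per direction."""
    raw_gtps = 5.0
    encoding_efficiency = 8 / 10
    return lanes * raw_gtps * encoding_efficiency

x8_gbps = pcie_gen2_throughput_gbps(8)  # 32.0 Gb/s each direction
```

An x8 gen2 link therefore comfortably carries multiple 10GbE ports' worth of traffic between the NP and the x86 subsystem, which is what makes the tightly coupled architecture practical.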

Support for Virtualization and Load Balancing

The NP will ideally need to support I/O virtualization (IOV), such as SR-IOV, over PCIe gen2. The PCI-SIG IOV workgroup developed these extensions to PCIe; SR-IOV enables one physical PCIe device to present multiple virtual functions. Each virtual function can then be assigned to a virtual machine, allowing one physical device to be shared by multiple CPU cores and VMs. In addition, a flow-based load-balancing feature provides an effective way to utilize the cores and VMs in the x86 subsystem.
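The flow-based load balancing mentioned above can be sketched as a hash over the flow identifier: every packet of a flow lands on the same VM (preserving per-flow packet order), while distinct flows spread across the available VMs and cores. CRC32 here is a stand-in for whatever hash the hardware would actually use.

```python
import zlib

def pick_vm(flow_key: bytes, num_vms: int) -> int:
    """Flow-based load balancing: hash the flow identifier so all
    packets of one flow go to the same VM (keeping them in order),
    while different flows are spread across VMs. CRC32 is only a
    stand-in for a hardware hash function."""
    return zlib.crc32(flow_key) % num_vms

# Illustrative flow identifier; format is hypothetical.
vm = pick_vm(b"10.0.0.1:1234->10.0.0.2:80/tcp", 8)
```

Combined with SR-IOV, the chosen VM index maps directly to a virtual function, so the NP can steer a flow's packets to the right VM without hypervisor involvement in the data path.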

Inline and Look-aside Security

As we focus on high-speed cloud computing, there is a need to perform inline and look-aside encryption/decryption at line rate. In this case, the NP processing architecture needs to support security protocols at many levels: MACsec (IEEE 802.1AE) secures Ethernet at layer 2 (the MAC layer); IPsec secures the network layer (L3); and SSL/TLS -- the Secure Sockets Layer and Transport Layer Security -- secure the session layer (L5).
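The inline/look-aside distinction can be modeled in a few lines: inline means the cipher runs in the packet-forwarding path itself, while look-aside means the packet is handed off to a separate crypto engine and collected when done. The cipher below is a deliberately insecure XOR stand-in (real hardware would run AES or similar); the queue modeling the crypto engine is likewise an assumption for illustration.

```python
import queue

def toy_cipher(payload: bytes, key: int = 0x5A) -> bytes:
    # Stand-in for a real cipher (e.g. AES-GCM); XOR is NOT secure.
    return bytes(b ^ key for b in payload)

def inline_path(pkt: bytes) -> bytes:
    """Inline model: encryption happens in the forwarding path,
    so the packet leaves this function already protected."""
    return toy_cipher(pkt)

def lookaside_path(pkt: bytes, crypto_q: queue.Queue) -> bytes:
    """Look-aside model: the packet is handed to a separate crypto
    engine (modeled here by a queue) and picked up again when done."""
    crypto_q.put(pkt)
    return toy_cipher(crypto_q.get())
```

Both paths produce the same protected packet; the difference is latency and pipelining -- inline keeps the packet moving at line rate, while look-aside frees the forwarding engine during the crypto operation at the cost of a round trip to the engine.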

Intelligent Low-Latency L2 Switching for Inter-VM Communication

The NP must integrate an intelligent L2 switching function to handle inter-VM communication among the tens of VMs in a virtualized data center environment. This creates the need for a high-performance, low-latency, intelligent switching mechanism between VMs in the same tier and between tiers of servers, thereby supporting the new requirement for secure east-west traffic.
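The core of such an L2 switching function is a learning switch: it records which virtual port each source MAC address appears on, forwards unicast frames directly once the destination is known, and floods frames for unknown destinations. A minimal sketch (the port/VM mapping is illustrative):

```python
class L2Switch:
    """Minimal learning switch for inter-VM traffic: learn which
    virtual port each source MAC lives behind, forward known unicast
    frames to exactly one port, and flood unknown destinations."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> virtual port

    def forward(self, src_mac, dst_mac, in_port):
        self.mac_table[src_mac] = in_port              # learn the source
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]           # known: one port
        # Unknown destination: flood to every port except the ingress.
        return [p for p in range(self.num_ports) if p != in_port]
```

In the NP, the same logic would be augmented with the flow table and security functions described earlier, so that east-west traffic can be steered through a firewall VM before reaching the application VM.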

Storage Convergence and Virtualization

The need for storage in the enterprise and data center has been increasing at an exponential rate. This trend has been driving storage platforms to be virtualized for better efficiency and resource sharing. In addition, it is putting stringent security requirements on stored data -- both "data at rest" and "data in flight." One way to address storage requirements is through FCoE (Fibre Channel over Ethernet). The NP must support such protocols, allowing storage area networks (SANs) and Ethernet data networks to converge into a unified network.

Enhanced Reliability with ECC Protection

The NP's logical blocks, internal busses, internal memories, and external interfaces must all be ECC-protected. This becomes increasingly important as process feature sizes continue to shrink, making devices more susceptible to soft errors from sources such as alpha particles.

Low Power

The combination of the NP and the multicore x86 needs to consume less power while executing the required workloads at line rate. In a nutshell, the combination should consume, at most, 75 percent of the power required by the multicore x86 doing the job on its own without the NP.

The Final Word

Cloud computing is expected to grow exponentially in both enterprise and service provider networks. There are many challenges, however, that need to be addressed to extend the concept of service provisioning using cloud computing technology in the data center. An NP is required to assist the x86 compute subsystem in meeting those challenges. The combined solution is a multi-core, heterogeneous architecture: the x86 takes care of compute and application processing while the NP manages the secure, virtualized I/O subsystem.

Nabil Damouny is the senior director of strategic marketing at Netronome. You can contact the author at