While FPGAs are often used to perform bridging functions for Ethernet and Gigabit Ethernet (GbE) links, they have not often been associated with low power consumption. Here's an exploration of how new mid-range FPGAs address this issue in an age of increasing demand for Ethernet and decreasing form factors.

In today’s increasingly connected world, demand is growing for Ethernet and additional Gigabit Ethernet (GbE) links in various industrial, communications, and data center applications. FPGAs are often used to perform the bridging functions for GbE interfaces because of their low design costs, high performance, time-to-market speed, reusability, and fast, flexible field upgrades.

They have not, until recently, been known for the low power consumption and ease of use that designers need if they are to employ a single FPGA to create today’s hybrid solutions incorporating many different 10G and 1G interfaces. This all changes with the latest iteration of mid-range FPGAs that provide multiple GbE ports in a single device and can implement power-efficient 1G interfaces without the need for transceivers, significantly reducing power consumption.

 

Purpose-Built for Power-Efficient GbE Interfaces

Traditional mid-range FPGAs supporting 10 Mbps, 100 Mbps, 1 Gbps and 10 Gbps speeds have helped drive demand for more connections in a single product. The challenge with these FPGAs has been that implementing 1G interfaces required transceivers, increasing both power consumption and package size. This is no longer the case with the advent of new mid-range FPGA devices that offer the more scalable option of using general-purpose input/outputs (GPIOs) for implementing multiple GbE interfaces. This is more power-efficient, and it also enables developers to reserve the transceivers for high-speed system implementations employing protocols like 10 Gbps Ethernet, CPRI, JESD204B and PCIe.

GPIOs are easy to use in today’s mid-range FPGA devices. They are supported by highly configurable receiver and driver circuitry behind each pin, and can dynamically adjust signal delays (including those associated with clock gearing ratios). They implement per-pin clock and data recovery (CDR) circuit functionality and support popular I/O standards and terminations.
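To give a flavor of what per-pin CDR does, here is a minimal sketch, not the PolarFire implementation: a blind-oversampling clock-and-data-recovery model in which each bit is sampled several times and a majority vote rejects samples corrupted near bit edges. All names and rates here are illustrative assumptions.

```python
def oversample(bits, rate=3):
    """Model the receiver sampling each transmitted bit `rate` times."""
    return [b for b in bits for _ in range(rate)]

def recover(samples, rate=3):
    """Recover one bit per `rate` samples by majority vote (illustrative)."""
    out = []
    for i in range(0, len(samples) - rate + 1, rate):
        window = samples[i:i + rate]
        out.append(1 if sum(window) * 2 > rate else 0)
    return out

tx = [1, 0, 1, 1, 0, 0, 1, 0]
rx = oversample(tx)
rx[2] ^= 1            # flip one sample near a bit edge to model jitter
assert recover(rx) == tx
```

A hardened CDR block tracks the incoming data's phase continuously rather than voting on fixed windows, but the sketch captures the idea of extracting clean data from an asynchronous serial stream without a dedicated recovered clock pin.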

 

System Implementation

Many key GbE interface functions can be implemented today by configuring a pair of differential GPIO output pins and a pair of differential GPIO input pins. These functions include the serializer, de-serializer and CDR, as well as the bit-slip functionality for symbol alignment. Hardened GPIO circuitry interfaces seamlessly with the physical coding sublayer (PCS), media access control (MAC), and higher layers implemented in the FPGA fabric, yielding a highly configurable GbE solution. The GPIOs support a wide range of I/O standards operating with supplies from 1.2 V to 3.3 V nominal, with speeds up to 1.066 Gbps for single-ended standards and 1.25 Gbps for differential standards.

The following high-level block diagrams show how the same FPGA device can be used to implement two different 1 GbE solutions, one over GPIOs and the other over transceivers.

 

Figure 1. 1 GbE Implementation over GPIO using the Microsemi PolarFire FPGA.

 

Figure 2. 1 GbE Implementation over Transceiver using the Microsemi PolarFire FPGA.

 

In the first example, system-on-chip (SoC) FPGA design software tools are used to implement the interface functionality over GPIOs. The FPGA’s Ethernet interface IP includes a core that combines a GPIO and a CDR; it is available in every GPIO bank lane of the device to provide clock and data recovery at 1 GbE data transfer rates. Each side of the device can have more than one of these cores sharing high-speed signals from a phase-locked loop (PLL) located at the corner of the FPGA fabric. The GPIO core is instantiated from the software suite’s catalog and then configured by selecting the data rate (in this case, 1250 Mbps). It is combined with a PLL core and MAC transmit and receive logic to complete the design. A snapshot of the GPIO core configurator is shown in the diagram below.

 

Figure 3. GPIO core GUI configurator.
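The 1250 Mbps setting chosen in the configurator implies the fabric-side clocking. A back-of-envelope sketch (illustrative arithmetic, not tool output): deserializing the serial lane into 10-bit 8b/10b code groups yields a 125 MHz word clock and 1000 Mbps of payload, i.e. the GbE data rate.

```python
serial_rate_mbps = 1250   # SGMII / 1000BASE-X line rate (8b/10b coded)
symbol_bits = 10          # 8b/10b code-group width
payload_bits = 8          # data bits per code group

word_clock_mhz = serial_rate_mbps / symbol_bits
payload_rate_mbps = serial_rate_mbps * payload_bits / symbol_bits

print(word_clock_mhz)     # 125.0 -> fabric-side word clock
print(payload_rate_mbps)  # 1000.0 -> GbE payload rate
```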

 

Power Comparison

There is no difference in fabric resource usage between instantiating the Ethernet interface IP for a GbE-over-GPIO implementation and instantiating the transceiver core, transceiver PLL, and reference clock for a transceiver-based implementation, but comparative power efficiency is a different story. GPIO CDRs consume less power than transceivers, reducing power consumption for applications using multiple GbE links. To compare the power numbers of the transceiver-based and GPIO-based implementations, we used the advance power numbers (initial estimates based on simulations) for the PolarFire MPF300T device in the FCG1152 package.

The following tables list the total power consumed by the different rails for one, eight, and 16 lanes of GPIO versus the same numbers of transceiver lanes.

 

Table 1: Power Comparison—1 Lane of Transceiver vs 1 Lane of GPIO

 

Table 2: Power Comparison—8 Lanes of Transceiver vs 8 Lanes of GPIO

 

Table 3: Power Comparison—16 Lanes of Transceiver vs 16 Lanes of GPIO

 

SGMII over GPIO Delivers Additional Advantages

The latest mid-range FPGAs also introduce the opportunity to support numerous 1 Gbps Ethernet links by implementing Serial Gigabit Media Independent Interface (SGMII) over GPIO.

In the past, designers could only use mid-range FPGAs to implement SGMII if they employed larger packages with additional transceivers. Often, they had to move to FPGAs with higher logic-element (LE) counts, increasing both power consumption and cost. With the latest mid-range FPGAs, though, it is easy to implement SGMII over GPIO, and fewer configuration blocks are required than for implementing SGMII with a transceiver. The GPIO-based implementation uses a PLL shared across multiple lanes and banks, while a transceiver requires a dedicated PLL, resulting in lower overall power with GPIOs.
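The shared-PLL versus dedicated-PLL difference explains why GPIO-based port counts scale with little incremental power. The sketch below models that scaling; the milliwatt figures are placeholders chosen for illustration, not PolarFire datasheet numbers.

```python
def gpio_power_mw(lanes, per_lane=25.0, shared_pll=40.0):
    """Shared-PLL model: one PLL serves all GPIO lanes (placeholder numbers)."""
    return shared_pll + lanes * per_lane

def xcvr_power_mw(lanes, per_lane=60.0, pll_per_lane=30.0):
    """Dedicated-PLL model: each transceiver lane carries its own PLL cost."""
    return lanes * (per_lane + pll_per_lane)

for n in (1, 8, 16):
    print(n, gpio_power_mw(n), xcvr_power_mw(n))
```

Whatever the actual per-lane figures, the structural point holds: the GPIO curve has a single fixed PLL term, so the gap widens as lanes are added.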

Looking at the resource comparison below, it is clear that more ports can be implemented with GPIOs than with transceivers. An additional advantage of using GPIOs is that the high-speed transceiver lanes can be reserved for other protocols such as 10 GbE, CPRI, Interlaken and PCIe.

 

Table 4: Resource Comparison

Data provided is for the Microsemi PolarFire FPGA.

 

FPGAs can be ideal solutions for packing more GbE interfaces into today’s smaller system footprints, as long as they can meet increasingly challenging power requirements. The latest mid-range FPGAs achieve this by offering the option of implementing this interface functionality over GPIOs using an IP core that combines a GPIO with a CDR.

Delivering multiple GbE ports in a single device without the need for transceivers, this approach significantly reduces power consumption while making it easier to implement a hybrid, high-performance solution with multiple 10G and 1G interfacing ports, and to scale port density with a very low incremental increase in total power. The approach is particularly attractive for designers of low-power small form-factor pluggable (SFP) modules, custom industrial switches, scalable L2/L3 switches and other systems, who can take advantage of the small form factors and the large number of inexpensive, low-power GPIOs available in today’s mid-range FPGA solutions.

 


Industry Articles are a form of content that allows industry partners to share useful news, messages, and technology with All About Circuits readers in a way editorial content is not well suited to. All Industry Articles are subject to strict editorial guidelines with the intention of offering readers useful news, technical expertise, or stories. The viewpoints and opinions expressed in Industry Articles are those of the partner and not necessarily those of All About Circuits or its writers.
