Twenty Years of PCI Express: The Past, Present, and Future of the Bus
As we approach the twentieth anniversary of PCI Express, here is a look back—and forward—at the reigning expansion slot.
The PCI Express (PCIe) bus was born in an era when the number of expansion slots in your PC was as important as CPU clock speed or the amount of system RAM. Since then, the PCIe bus has evolved from a set of slots for plug-in expansion cards to a high-speed interconnect topology.
The latest SSD (solid state drive) interface is electrically a PCIe 4.0, four-lane interface in the M.2 form factor. Image used courtesy of w:user:snickerdo via Wikimedia Commons (CC BY-SA 3.0)
The Start of the PCIe Bus: IBM and the 5150 PC
The PCIe bus has its roots in the IBM PC model 5150, introduced in 1981. Popular predecessors to the 5150, like the Apple II, used open standard buses or published their bus specifications for third-party expansion boards. This competitive pressure led IBM to open the 5150 bus and publish its specifications.
With IBM behind the platform, an entire industry grew up around designing and supplying expansion cards for the IBM PC bus. IBM’s second PC model, the PC/AT, widened the bus data path from 8 to 16 bits and maintained the open architecture. A number of companies used the bus in their clones of the PC, called PC compatibles. The broad use of the bus in the expansion board and PC clone industries led to it being formalized as the industry standard architecture (ISA). This was good for consumers and PC clone manufacturers, but it took control and licensing revenue away from IBM.
IBM Attempts to Regain Standards Control
In the late 1980s, new processors and faster speeds were rendering the ISA bus obsolete. IBM introduced its new proprietary Micro Channel bus in an attempt to address ISA's shortcomings. The company kept Micro Channel proprietary to profit from licensing fees charged to makers of PC compatibles. The PC industry, however, migrated instead to Intel's 32-bit peripheral component interconnect (PCI) bus, maintained by the PCI special interest group (PCI-SIG).
State-of-the-art PC-compatible i80486 motherboard in 1995, supporting both ISA (four black slots in the foreground) and PCI (three white slots center). Photo by Duane Benson
While the PCI bus, like the Micro Channel, was faster, it was an open industry-wide standard. PCI pioneered an architecture that connected motherboard built-in peripherals to the bus without an add-in card. With the prior ISA architecture, peripherals built into the motherboard often required custom, non-standard interface circuitry. The PCI bus provided an onboard peripheral interface that was electrically equivalent to plugging a board into a slot, allowing for easier onboard integration and software support.
The PCI Bus (Still) Fell Short
Though PCI offered higher performance than ISA, it carried over a number of shortcomings from the ISA topology. Like ISA, the PCI bus used a shared parallel data bus architecture. While PCI was a major step up in speed potential and signal integrity, it still required each peripheral to share resources and negotiate for exclusive bus access.
Graphics accelerator card manufacturers ran up against these limits sooner than manufacturers of other interface cards, which prompted the development of the accelerated graphics port (AGP). AGP was a superset of PCI that departed from bus sharing and delivered a direct path between the AGP card slot and the motherboard chipset.
Enter PCI Express
In 2003, the PCI-SIG responded to these challenges by introducing the PCI Express bus still in use today. PCIe supplanted all of the mainstream PC buses, including the AGP interface. PCI-SIG developed the standard between 2001 and 2003, and PCIe products began shipping in 2004.
The PCIe bus differed from the PCI bus in two significant ways. Instead of using a shared bus-master topology, it employed a point-to-point system that connects devices directly through a common host controller. It also moved from a wide, shared parallel data path to serial links built from unidirectional differential pairs.
Shared bus in PCI vs. serial point-to-point topology in PCIe. Image used courtesy of Matifqazi via Wikimedia Commons (CC BY-SA 4.0)
With the old PCI and ISA bus-master topology, only one peripheral at a time could access the bus. Each negotiated for master status as needed, waited until it could grab control, and then took action. Even with direct memory access (DMA), little could be done in parallel. These old topologies met the needs of slow applications back in the 1980s but were far from adequate for gaming, high-speed networking, or complex graphical interfaces emerging in the new millennium.
Why PCIe Was a Big Step Forward
PCIe is more than just a physical slot standard. The workhorse of the bus is the topology. PCIe is used to connect built-in peripherals, add-in cards for laptops and mini-PCs, and SSD storage. Mini PCIe uses the same topology, encoding, and specifications and is electrically compatible with regular PCIe. The now-common M.2 SSD interface also uses PCIe topology.
The serial data path for PCIe uses unidirectional, differential pairs to improve signal integrity. While these pairs need to be length matched for de-skewing, the two traces of each pair are far easier to route than the 8, 16, or 32 traces of a parallel bus.
De-skewing techniques for differential signal traces. Image used courtesy of Intel
High-speed parallel buses can also be handicapped by cross-talk: signal energy bleeding from one trace into a neighboring one. Cross-talk corrupts data and limits bandwidth. Differential paired signals cancel out most cross-talk and deliver a cleaner signal.
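To make the length-matching requirement concrete, here is a quick sketch of how an intra-pair trace-length mismatch translates into timing skew. The propagation delay figure used here is an assumed, typical value for FR-4 board material; the real number depends on the board stackup.

```python
# Illustrative intra-pair skew estimate for a differential pair.
# Assumes a typical FR-4 propagation delay of ~6.7 ps/mm (an assumed,
# board-dependent figure, not a value from the PCIe specification).

PROP_DELAY_PS_PER_MM = 6.7

def skew_ps(length_mismatch_mm: float) -> float:
    """Time skew between the two traces of a pair, in picoseconds."""
    return length_mismatch_mm * PROP_DELAY_PS_PER_MM

# A 0.5 mm mismatch between the P and N traces of a pair:
print(f"{skew_ps(0.5):.2f} ps")  # 3.35 ps
```

For scale, at PCIe 5.0's 32 GT/s signaling rate a unit interval is only about 31 ps, which is why even sub-millimeter mismatches have to be tuned out with the serpentine de-skew structures shown above.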
The Strengths of PCIe's Differential Pairs
Each PCIe lane consists of four traces: one unidirectional differential pair for each direction. PCIe slots can support from one to 16 lanes, and the group of lanes used to connect two PCIe devices is referred to as an interconnect or link. Modern graphics accelerators typically use 16-lane slots, with some occupying two slots' worth of space and requiring extra power connections.
Common PCIe motherboard slot configuration with a different number of lanes. Top to bottom: PCIe x4, PCIe x16, PCIe x1, PCIe x16, and a conventional PCI 32-bit slot for backward compatibility. Image used courtesy of D-Kuru via Wikimedia Commons (CC BY-SA 4.0)
The differential pair arrangement speeds up transmission and boosts reliability. In PCIe versions 1.0 and 2.0, data is transmitted in eight-bit words with two overhead bits, referred to as 8b/10b encoding. This means that 20% of the transmitted bits are overhead, not data. PCIe 3.0 moved to 128b/130b encoding, yielding a ratio of about 98.5% data to 1.5% overhead. From PCIe 1.0 through 5.0, the physical layer represented binary data with a non-return-to-zero (NRZ) signaling format.
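The overhead figures quoted above can be checked with a couple of lines of Python:

```python
# Overhead fraction for the two PCIe line encodings described above.

def overhead(data_bits: int, total_bits: int) -> float:
    """Fraction of transmitted bits that are encoding overhead."""
    return (total_bits - data_bits) / total_bits

print(f"8b/10b:    {overhead(8, 10):.1%}")     # 20.0%
print(f"128b/130b: {overhead(128, 130):.2%}")  # 1.54%
```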
PCIe 1.0 to 7.0: Doubling Up Transfer Speed
PCIe 1.0 delivered a raw transfer rate of 2.5 GT/s per lane, or roughly 250 MB/s of usable bandwidth after 8b/10b encoding, for a maximum of about 4 GB/s with a 16-lane interconnect. PCIe 2.0 doubled that rate with improvements in the protocol and silicon fabrication capability. PCIe 3.0 raised the raw rate to 8 GT/s per lane and, by moving from 8b/10b to 128b/130b encoding, nearly doubled the usable bandwidth again to roughly 1 GB/s per lane. Each new version since has doubled the data rate.
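The doubling pattern can be tabulated from the published per-lane transfer rates and the encoding efficiencies discussed earlier. This is a sketch of the arithmetic, not spec-exact throughput (it ignores packet and protocol overhead); GB/s figures are per direction.

```python
# Effective per-lane throughput from raw transfer rate and encoding
# efficiency, for the NRZ generations of PCIe.

GENERATIONS = {
    # version: (GT/s per lane, data bits, total bits in encoding)
    "1.0": (2.5, 8, 10),
    "2.0": (5.0, 8, 10),
    "3.0": (8.0, 128, 130),
    "4.0": (16.0, 128, 130),
    "5.0": (32.0, 128, 130),
}

def lane_gbps(gt_per_s: float, data_bits: int, total_bits: int) -> float:
    """Usable GB/s per lane: GT/s * encoding efficiency / 8 bits per byte."""
    return gt_per_s * (data_bits / total_bits) / 8

for ver, (rate, d, t) in GENERATIONS.items():
    per_lane = lane_gbps(rate, d, t)
    print(f"PCIe {ver}: {per_lane:.3f} GB/s per lane, {per_lane * 16:.1f} GB/s at x16")
```

Running this shows PCIe 1.0 at 0.25 GB/s per lane (4 GB/s at x16) and PCIe 3.0 at just under 1 GB/s per lane, matching the figures above.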
PCIe 6.0, introduced in 2022, brought significant changes in encoding and protocol, upping the raw transfer rate to 64 GT/s per lane. PCIe 6.0 changed from NRZ signaling to four-level pulse amplitude modulation (PAM4). PAM4 represents two bits in the same unit interval that carries one bit in NRZ, giving four signal levels instead of two. This effectively replaces each binary bit with a two-bit symbol. PAM4 has a much higher bit error rate and requires forward error correction as a result. At the time of writing, cards using this standard are not yet on the market.
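A toy encoder illustrates how PAM4 packs two bits into each symbol. The normalized amplitude values and the Gray-coded level mapping here are illustrative assumptions, not the exact levels defined by the PCIe 6.0 specification.

```python
# Toy PAM4 encoder: each two-bit value maps to one of four amplitude
# levels. Gray coding (adjacent levels differ by one bit) is assumed,
# since it limits a single level misread to a single bit error.

PAM4_LEVELS = {
    0b00: -1.0,
    0b01: -1 / 3,
    0b11: +1 / 3,
    0b10: +1.0,
}

def pam4_symbols(bits: list[int]) -> list[float]:
    """Encode an even-length bit stream as a list of PAM4 levels."""
    assert len(bits) % 2 == 0, "PAM4 consumes bits two at a time"
    return [PAM4_LEVELS[(bits[i] << 1) | bits[i + 1]]
            for i in range(0, len(bits), 2)]

# Eight bits become four symbols: half as many unit intervals as NRZ,
# which is where the bandwidth doubling comes from.
print(pam4_symbols([0, 0, 0, 1, 1, 1, 1, 0]))
```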
PCI-SIG expects PCIe 7.0, under development since June 2022, to be finalized in 2024. This standard promises to double the PCIe 6.0 data rate by fine-tuning channel parameters for improved power efficiency and decreased signal losses. PCIe 7.0 hardware is not expected to appear on the market until around 2027.
Are the Days of Expansion Slots Numbered?
The PCIe bus from 20 years ago is still recognizable in today's PC world, and a motherboard designer from 2004 would have little trouble identifying today's PCIe slots. However, this may not be the case 20 years down the line.
When PCIe was being developed, USB was still in its infancy. A wide variety of devices required slots in the computer. Many PCs of that era still needed add-in sound cards, modems, network cards, and wireless interfaces. For the typical user today, none of those applications require add-in cards. While gaming graphics accelerators, high-end video and sound processing equipment, and exotic or special-use products still use plug-in PCIe boards, most home and business PCs and laptops have all of that (and more) built-in or accessible via USB.
Most laptops and mini-PCs don’t use card slots today other than the M.2 SSD interface. While people still use PCIe to connect the various subsystems on the motherboard, the days of many expansion slots might already be numbered.