
What Is PAM4? A Practical Guide to Four-Level Modulation in High-Speed Data Systems


If you work around high-speed links, you have probably noticed that PAM4 keeps coming up. That is not because the industry likes new terms. It is because the usual ways of scaling bandwidth are getting harder to sustain.

For years, the standard playbook was simple: add more lanes, or push existing lanes faster. Both still work, but both come with a cost. More lanes mean more hardware, more routing pressure, and more overall system complexity. Higher signaling rates mean more channel loss and tighter margins. At some point, the question stops being “How do we get a bigger number?” and becomes “How do we get there without making the whole link harder to manage?”

That is where PAM4 becomes important.

At a high level, PAM4 improves bandwidth efficiency by allowing each symbol to carry twice as much information as NRZ. That is why it has become closely tied to PCIe 6.0, 400G/800G optics, and long-reach data center interconnects. At the same time, PAM4 is not a free upgrade. Smaller eye openings, higher sensitivity to noise, and a heavier dependence on equalization and FEC are all part of the deal.

 

PAM4 vs NRZ at a Glance

The core difference is straightforward.

NRZ uses two signal levels, so each symbol carries one bit. PAM4 uses four signal levels, so each symbol carries two bits. That means PAM4 can deliver twice the bit rate of NRZ at the same symbol rate. Or looking at it another way, if you want the same data rate, PAM4 only needs half the baud rate NRZ would require.
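The two-bits-per-symbol packing can be sketched in a few lines. This is a toy illustration, not production code; the Gray-coded mapping of bit pairs to the levels −3, −1, +1, +3 is a common convention assumed here, not something this article specifies.

```python
# Gray-coded mapping from bit pairs to the four PAM4 amplitude levels
# (a common convention; the exact mapping is an assumption, not from this article)
GRAY_MAP = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_symbols(bits):
    """Pack a bit stream two bits per symbol into PAM4 amplitude levels."""
    assert len(bits) % 2 == 0, "PAM4 consumes bits in pairs"
    return [GRAY_MAP[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

bits = [0, 0, 0, 1, 1, 1, 1, 0]
symbols = pam4_symbols(bits)
print(symbols)                    # [-3, -1, 1, 3]
print(len(bits) / len(symbols))   # 2.0 bits per symbol, versus 1.0 for NRZ
```

Eight bits become four symbols, which is exactly the "twice the bit rate at the same symbol rate" relationship described above.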

That sounds like a clean win, but it is not that simple.

  • NRZ is simpler and more forgiving.
  • PAM4 is more bandwidth-efficient.
  • NRZ is often the better fit where robustness and lower complexity matter most.
  • PAM4 becomes compelling when higher throughput per lane is the priority.

So the real question is not which one is more advanced. It is which one fits the actual system better.

 

How PAM4 Works


PAM4 matters because it changes the relationship between symbol rate and data rate.

In NRZ, one symbol maps to one bit. In PAM4, one symbol maps to two bits because there are four possible amplitude levels. That is what allows PAM4 to increase data throughput without forcing symbol rate up at the same pace.

This is one of the main reasons PAM4 has become so relevant in newer standards. It is not just a different signaling format. It is a more efficient use of the physical layer.

PCIe 6.0 is a good example. Gen6 uses PAM4 so each symbol carries two bits. That allows the per-lane data rate to double to 64 GT/s while the symbol rate stays at 32 GBd, the same symbol rate Gen5 uses for NRZ at 32 GT/s. Because baud rate and Nyquist frequency remain the same, channel-loss behavior can stay much closer to the previous generation than many people would expect from the raw rate increase alone.
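The back-of-envelope arithmetic behind that comparison looks like this (rates are the published per-lane figures; the point is that Nyquist frequency follows the baud rate, not the bit rate):

```python
# Per-lane comparison of PCIe Gen5 (NRZ) and Gen6 (PAM4)
baud_gbd = 32  # symbol rate shared by both generations

for name, bits_per_symbol in (("Gen5 NRZ", 1), ("Gen6 PAM4", 2)):
    rate = baud_gbd * bits_per_symbol
    nyquist = baud_gbd / 2  # first Nyquist frequency tracks baud, not bits
    print(f"{name}: {rate} GT/s per lane, Nyquist ~{nyquist} GHz")
```

Both generations face roughly the same 16 GHz Nyquist point, which is why the channel does not suddenly get twice as lossy when the data rate doubles.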

 

Benefits and Trade-Offs

The main benefit of PAM4 is bandwidth efficiency. It lets designers raise per-lane throughput without increasing lane count at the same rate, and without forcing NRZ signaling into a harsher loss environment. In optical systems, this can also help control infrastructure cost, since increasing single-lane throughput is often more practical than adding more channels.

But the trade-offs are just as real.

The biggest one is signal margin. PAM4 eye height is only about one-third that of NRZ. That means smaller vertical separation between levels and less tolerance for noise, jitter, and distortion. In plain terms, PAM4 gets you more throughput, but it also gives you less room for error.

That is why PAM4 is not universally better. It is better when bandwidth density matters enough to justify the extra complexity.

 

Practical Challenges: Noise, ISI, Equalization, and FEC

This is where PAM4 stops being a nice diagram and becomes a real engineering problem.

Once eye openings get smaller, the link becomes more exposed to jitter, channel loss, crosstalk, and inter-symbol interference. The channel has less margin, and receiver-side behavior matters more. Equalization becomes more important because the system has to recover enough eye opening to make reliable decisions.

Forward error correction is also a major part of practical PAM4 deployment. Because raw bit error rate is higher, PAM4 links commonly rely on FEC to bring overall BER back to an acceptable level. So PAM4 is not just a modulation choice. It usually comes with tighter SI requirements, more receiver complexity, and more link-management overhead.
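To make the FEC overhead concrete: RS(544, 514) Reed-Solomon coding (often called "KP4" FEC) is widely paired with PAM4 lanes in Ethernet standards. Assuming that code is in play, its cost and correction strength are easy to compute:

```python
# RS(544, 514) over GF(2^10): 544 ten-bit symbols per codeword, 514 of them data
n, k = 544, 514
code_rate = k / n               # fraction of the line rate carrying payload
t = (n - k) // 2                # correctable ten-bit symbols per codeword
overhead_pct = (n / k - 1) * 100
print(round(code_rate, 4), t, round(overhead_pct, 2))  # 0.9449 15 5.84
```

About 5.8% of the line rate is spent on parity to correct up to 15 symbol errors per codeword, which is what lets a relatively poor raw BER reach an acceptable post-FEC BER.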

That is the real engineering message: PAM4 improves efficiency, but it also raises the bar for the rest of the design.

 

Receiver Design Essentials

Receiver design becomes much more demanding once PAM4 enters the picture.

The key building blocks include:

  • Slicer thresholds: a PAM4 receiver has to distinguish among four levels, not two, which means three decision thresholds instead of one.
  • FFE and DFE: feed-forward and decision-feedback equalization compensate for channel loss and ISI.
  • CDR: clock and data recovery becomes more critical because PAM4 leaves less timing margin.

This is one reason PAM4 systems demand more simulation effort. Receiver behavior is not a minor implementation detail. It is part of the core link budget.

 

BER, SNR, and the Role of FEC

One clean way to think about PAM4 is that it trades signal-to-noise margin for bandwidth efficiency.

That trade shows up as a BER problem. Raw BER in PAM4 is typically worse than in comparable NRZ links because the receiver has a harder job. That is why FEC becomes part of the design strategy. Not as a patch, but as one layer in a larger system that also includes channel design, connector performance, insertion loss control, equalization, and receiver architecture.
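A toy Gaussian-noise model makes the raw-BER gap tangible. Assume the same peak swing and the same noise sigma for both formats, so NRZ's decision distance is three times PAM4's; the 3/2 factor approximates the higher error multiplicity of the inner PAM4 levels. These numbers are illustrative, not a link-budget calculation:

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

sigma = 0.2                     # assumed noise standard deviation
ber_nrz = q_func(1.0 / sigma)   # NRZ decision distance: the full half-swing
ser_pam4 = 1.5 * q_func((1.0 / 3) / sigma)  # PAM4 distance is one-third of it
print(f"NRZ ~{ber_nrz:.1e}, PAM4 ~{ser_pam4:.1e}")  # orders of magnitude apart
```

With identical noise, the error rate jumps by several orders of magnitude, which is exactly the gap that equalization and FEC together have to close.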

When PAM4 works well, it is usually because all of those layers are aligned.

 

Applications and Real-World Use Cases

PAM4 is already well past the research stage. It is in real systems now.

Major application areas include:

  • 400G and 800G optical communication
  • PCIe 6.0
  • QSFP-DD and OSFP long-reach links
  • 5G transport networks
  • metro and fixed networks
  • DCI/DCN environments involving switches and routers

A practical example is PCIe Gen6. A design team that needs much higher internal bandwidth can either push NRZ harder or move to PAM4. Staying with NRZ would likely mean much higher baud rate or more lanes, both of which increase pressure on layout, loss budget, and power. PAM4 changes that equation. It doubles throughput while keeping baud rate aligned with the previous channel class, but it also introduces stricter SI, equalization, and FEC requirements. That is exactly the kind of trade-off PAM4 is built for.
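The optical side follows the same lane arithmetic. Assuming the common 100G-per-lane PAM4 configuration at 53.125 GBd (the figures used in IEEE 802.3-style optics; treat them as illustrative here):

```python
# Lane math for 800G optics under an assumed 100G-per-lane PAM4 configuration
symbol_rate_gbd = 53.125
raw_lane_gbps = symbol_rate_gbd * 2  # two bits per symbol -> 106.25 Gb/s raw
lanes_800g_pam4 = 800 // 100         # eight 100G PAM4 lanes
lanes_800g_nrz = 800 // 50           # NRZ at the same baud: roughly twice as many
print(raw_lane_gbps, lanes_800g_pam4, lanes_800g_nrz)
```

Halving the lane count at a given baud rate is the "infrastructure cost" argument from earlier in concrete form: fewer lasers, fewer fibers or wires, less routing.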

 

Practical Guidance for System Design

For buyers, engineers, and product teams, the most useful question is not “Does this system use PAM4?” The better question is “Can this system support PAM4 well enough to make it worth using?”

That shifts attention to the right issues:

  • actual channel loss
  • equalization needs
  • FEC behavior
  • connector and cable quality
  • power and thermal implications
  • total system complexity

PAM4 is not a checkbox. It is a system-level decision.

 

Conclusion

PAM4 matters because it addresses a very real scaling problem.

By allowing each symbol to carry two bits, it improves per-lane throughput without forcing the same jump in symbol rate that NRZ would require. That is why it has become central to PCIe 6.0, 400G/800G optics, and other next-generation interconnects.

At the same time, PAM4 is not a free upgrade. Smaller eye openings, higher sensitivity to noise, more demanding receiver behavior, and a reliance on equalization and FEC all come with it.

The most practical way to frame it is this: PAM4 is powerful, increasingly necessary, and highly effective in the right applications, but it only delivers real value when the surrounding system is designed well.

 

Need support for high-speed interconnect projects?

Farsince supports high-speed cable and connectivity projects for data centers, AI infrastructure, optical links, and next-generation system design. Talk with our team about cable structure, connector selection, shielding, impedance control, and application-specific requirements.

Talk to Farsince

 

Author

Franck Yan
Founder | Farsince Connectivity Solutions

Franck Yan is the founder of Farsince and has more than 13 years of experience in the cable and connectivity industry, working closely with global customers on data center, industrial, and network connectivity solutions.
