If you work around high-speed links, you have probably noticed that PAM4 keeps coming up. That is not because the industry likes new terms. It is because the usual ways of scaling bandwidth are getting harder to sustain.
For years, the standard playbook was simple: add more lanes, or push existing lanes faster. Both still work, but both come with a cost. More lanes mean more hardware, more routing pressure, and more overall system complexity. Higher signaling rates mean more channel loss and tighter margins. At some point, the question stops being “How do we get a bigger number?” and becomes “How do we get there without making the whole link harder to manage?”
That is where PAM4 becomes important.
At a high level, PAM4 improves bandwidth efficiency by allowing each symbol to carry twice as much information as NRZ. That is why it has become closely tied to PCIe 6.0, 400G/800G optics, and long-reach data center interconnects. At the same time, PAM4 is not a free upgrade. Smaller eye openings, higher sensitivity to noise, and a heavier dependence on equalization and FEC are all part of the deal.
The core difference is straightforward.
NRZ uses two signal levels, so each symbol carries one bit. PAM4 uses four signal levels, so each symbol carries two bits. That means PAM4 can deliver twice the bit rate of NRZ at the same symbol rate. Put another way, for the same data rate, PAM4 needs only half the baud rate NRZ would require.
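The two-bits-per-symbol idea can be made concrete with a short sketch. The Gray-coded bit-pair mapping and the normalized levels below are illustrative assumptions for this example; real standards define their own encodings.

```python
# Minimal sketch: map bit pairs to four PAM4 amplitude levels and
# compare throughput at the same symbol rate. Gray coding keeps
# adjacent levels one bit apart, so a single level error corrupts
# only one bit.
GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    """Group a bit sequence into pairs and map each pair to a level."""
    assert len(bits) % 2 == 0
    return [GRAY_PAM4[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

symbol_rate_gbd = 32                     # symbols per second, in GBd
nrz_rate = symbol_rate_gbd * 1           # 1 bit per NRZ symbol
pam4_rate = symbol_rate_gbd * 2          # 2 bits per PAM4 symbol

print(pam4_encode([0, 0, 1, 0, 1, 1]))  # three symbols carry six bits
print(nrz_rate, pam4_rate)               # 32 vs 64 Gb/s at the same baud
```

Six bits become three symbols, and the same 32 GBd line rate carries twice the data.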
That sounds like a clean win, but it is not that simple.
So the real question is not which one is more advanced. It is which one fits the actual system better.
PAM4 matters because it changes the relationship between symbol rate and data rate.
In NRZ, one symbol maps to one bit. In PAM4, one symbol maps to two bits because there are four possible amplitude levels. That is what allows PAM4 to increase data throughput without forcing symbol rate up at the same pace.
This is one of the main reasons PAM4 has become so relevant in newer standards. It is not just a different signaling format. It is a more efficient use of the physical layer.
PCIe 6.0 is a good example. Gen6 uses PAM4 so each symbol carries two bits. That allows the per-lane data rate to double to 64 GT/s while the symbol rate stays at 32 GBd, matching Gen5's 32 GT/s NRZ signaling. Because baud rate and Nyquist frequency remain the same, channel-loss behavior can stay much closer to the previous generation than many people would expect from the raw rate increase alone.
The main benefit of PAM4 is bandwidth efficiency. It lets designers raise per-lane throughput without increasing lane count at the same rate, and without forcing NRZ signaling into a harsher loss environment. In optical systems, this can also help control infrastructure cost, since increasing single-lane throughput is often more practical than adding more channels.
But the trade-offs are just as real.
The biggest one is signal margin. Because four levels must share the same voltage swing, each PAM4 eye height is only about one-third that of NRZ. That means smaller vertical separation between levels and less tolerance for noise, jitter, and distortion. In plain terms, PAM4 gets you more throughput, but it also gives you less room for error.
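The one-third figure follows directly from fitting three stacked eyes into the same swing, and it can be stated as an amplitude penalty in dB. A quick check, assuming a normalized swing:

```python
# Eye-height comparison: three PAM4 eyes share the swing that one
# NRZ eye gets to itself, which is roughly a 9.5 dB amplitude penalty.
import math

swing = 1.0                  # assumed total swing (normalized)
nrz_eye = swing              # one eye spans the full swing
pam4_eye = swing / 3         # three stacked eyes share the same swing

print(pam4_eye / nrz_eye)    # ~0.333, i.e. one-third the eye height
print(20 * math.log10(3))    # ~9.54 dB amplitude penalty vs NRZ
```

That ~9.5 dB is the margin the rest of the link (equalization, FEC, channel design) has to win back.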
That is why PAM4 is not universally better. It is better when bandwidth density matters enough to justify the extra complexity.
This is where PAM4 stops being a nice diagram and becomes a real engineering problem.
Once eye openings get smaller, the link becomes more exposed to jitter, channel loss, crosstalk, and inter-symbol interference. The channel has less margin, and receiver-side behavior matters more. Equalization becomes more important because the system has to recover enough eye opening to make reliable decisions.
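The role of equalization can be illustrated with a toy model. The two-tap channel below and the one-tap decision-feedback equalizer (DFE) are hypothetical simplifications, not a model of any real link, but they show how ISI from a previous symbol closes the eye and how feeding decisions back reopens it:

```python
# Toy ISI sketch: a channel with one post-cursor tap smears each symbol
# into the next; a one-tap DFE subtracts that smearing back out using
# the previous decision.
import random

random.seed(0)
LEVELS = [-3, -1, 1, 3]
h = [1.0, 0.5]                       # main cursor + one post-cursor tap (assumed)

def slicer(x):
    """Decide the nearest PAM4 level."""
    return min(LEVELS, key=lambda lv: abs(x - lv))

tx = [random.choice(LEVELS) for _ in range(1000)]

# Received samples: current symbol plus ISI from the previous one.
rx = [tx[k] * h[0] + (tx[k - 1] if k else 0) * h[1] for k in range(len(tx))]

raw_errs = sum(slicer(r) != t for r, t in zip(rx, tx))

# DFE: subtract the ISI contribution of the previous *decision*.
prev, dfe_errs = 0, 0
for r, t in zip(rx, tx):
    d = slicer(r - prev * h[1])
    dfe_errs += d != t
    prev = d

print(raw_errs, dfe_errs)            # many raw errors, none after the DFE
```

With no noise in this toy model the DFE cancels the post-cursor exactly; in a real link, noise and residual ISI remain, which is why equalization only recovers part of the margin.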
Forward error correction is also a major part of practical PAM4 deployment. Because raw bit error rate is higher, PAM4 links commonly rely on FEC to bring overall BER back to an acceptable level. So PAM4 is not just a modulation choice. It usually comes with tighter SI requirements, more receiver complexity, and more link-management overhead.
That is the real engineering message: PAM4 improves efficiency, but it also raises the bar for the rest of the design.
Receiver design becomes much more demanding once PAM4 enters the picture.
The key building blocks include equalization stages such as CTLE and DFE to recover eye opening, slicers that must distinguish four amplitude levels instead of two, clock and data recovery that has to stay locked despite smaller transitions, and FEC decoding to bring the corrected error rate back to target.
This is one reason PAM4 systems demand more simulation effort. Receiver behavior is not a minor implementation detail. It is part of the core link budget.
One clean way to think about PAM4 is that it trades signal-to-noise margin for bandwidth efficiency.
That trade shows up as a BER problem. Raw BER in PAM4 is typically worse than in comparable NRZ links because the receiver has a harder job. That is why FEC becomes part of the design strategy. Not as a patch, but as one layer in a larger system that also includes channel design, connector performance, insertion loss control, equalization, and receiver architecture.
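The raw-BER gap can be demonstrated with a simple Monte Carlo sketch. The normalized levels, noise level, and sample count below are illustrative assumptions chosen to make the effect visible, not calibrated to any real link:

```python
# Monte Carlo sketch: same peak swing, same Gaussian noise, compare
# raw symbol error rates for NRZ vs PAM4 slicing.
import random

random.seed(1)
SIGMA = 0.4                          # assumed noise std dev (swing normalized to +/-1)
N = 20000

def run(levels):
    """Transmit random symbols through additive noise and count slicer errors."""
    errs = 0
    for _ in range(N):
        tx = random.choice(levels)
        rx = tx + random.gauss(0, SIGMA)
        if min(levels, key=lambda lv: abs(rx - lv)) != tx:
            errs += 1
    return errs / N

nrz_ser = run([-1, 1])               # eye half-opening = 1.0
pam4_ser = run([-1, -1/3, 1/3, 1])   # eye half-opening = 1/3

print(nrz_ser, pam4_ser)             # PAM4 raw error rate is far higher
```

At identical swing and noise, the PAM4 slicer makes far more raw errors, which is exactly the gap FEC is there to close.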
When PAM4 works well, it is usually because all of those layers are aligned.
PAM4 is already well past the research stage. It is in real systems now.
Major application areas include PCIe 6.0 and other next-generation chip-to-chip interconnects, 400G/800G Ethernet optics, and long-reach data center interconnects.
A practical example is PCIe Gen6. A design team that needs much higher internal bandwidth can either push NRZ harder or move to PAM4. Staying with NRZ would likely mean much higher baud rate or more lanes, both of which increase pressure on layout, loss budget, and power. PAM4 changes that equation. It doubles throughput while keeping baud rate aligned with the previous channel class, but it also introduces stricter SI, equalization, and FEC requirements. That is exactly the kind of trade-off PAM4 is built for.
For buyers, engineers, and product teams, the most useful question is not “Does this system use PAM4?” The better question is “Can this system support PAM4 well enough to make it worth using?”
That shifts attention to the right issues: channel design and insertion-loss budget, connector performance and impedance control, equalization and receiver capability, and FEC support with its associated link-management overhead.
PAM4 is not a checkbox. It is a system-level decision.
PAM4 matters because it addresses a very real scaling problem.
By allowing each symbol to carry two bits, it improves per-lane throughput without forcing the same jump in symbol rate that NRZ would require. That is why it has become central to PCIe 6.0, 400G/800G optics, and other next-generation interconnects.
At the same time, PAM4 is not a free upgrade. Smaller eye openings, higher sensitivity to noise, more demanding receiver behavior, and a reliance on equalization and FEC all come with it.
The most practical way to frame it is this: PAM4 is powerful, increasingly necessary, and highly effective in the right applications, but it only delivers real value when the surrounding system is designed well.
Need support for high-speed interconnect projects?
Farsince supports high-speed cable and connectivity projects for data centers, AI infrastructure, optical links, and next-generation system design. Talk with our team about cable structure, connector selection, shielding, impedance control, and application-specific requirements.
Talk to Farsince
Franck Yan
Founder | Farsince Connectivity Solutions
Franck Yan is the founder of Farsince and has more than 13 years of experience in the cable and connectivity industry, working closely with global customers on data center, industrial, and network connectivity solutions.