
SiP, MCP and DDR5 support faster speeds and higher power requirements.

Ed.: This is the sixth of an occasional series by the authors of the 2019 iNEMI Roadmap. This information is excerpted from the roadmap, available from iNEMI (inemi.org/2019-roadmap-overview).

New high-end computing system technologies becoming available for such applications as servers, telecom and the cloud must meet bandwidth, power, thermal and environmental challenges. Advanced packaging technologies that can drive integration and increase functionality, at acceptable cost and risk levels, will be key enablers for the sector.

Integration Trends
Advanced silicon integration technologies, such as through-silicon vias (TSVs), are enabling 2.5D silicon interposers and 3D chip-stacking, providing high-density interconnect and, therefore, high bandwidth capability between components. Memory modules have already started to use these technologies, and their use will continue to expand. More compute elements are also starting to use TSVs and novel packaging technologies to enable heterogeneous integration, combining compute “chiplets” with I/O and memory chiplets integrated either at the 2.5D substrate level or through 3D stacking.

Figure 1. High-end computing systems will leverage multichip packages for higher bandwidth.

System-in-package (SiP) and multichip package (MCP) technologies can optimize cost and provide more integration in a package. Integration of voltage regulation and silicon photonics with processor chips or bridge chips will increase. The growth in mobile systems has been the driver for development of these packaging architectures. High-end systems will also adopt packaging technologies such as SiP and MCP because they permit higher numbers of interconnect pins, more memory in a condensed space, and more cores without increasing the power envelope.

Optical interconnect will be used more broadly. First, transceivers and active optical cables (AOCs) will be adopted for in-frame communication, potentially replacing copper interconnect in backplanes or cables when the cost, power and bandwidth tradeoffs justify the switch to optical. Integrating optical devices into packaging to reduce trace length and, thus, power demand for high-bandwidth interfaces will demand advanced packaging and leverage SiP and package-on-package (PoP) for increasing integration at the package level.

The desire for higher levels of optical integration will favor adoption of silicon photonics. Silicon photonics, especially with vertical-cavity surface-emitting lasers (VCSELs), will lower cost and enable high-density, high-bandwidth interconnections over shorter rack-to-rack, or even within-rack, distances. System-level cost management, integration density, and power limit tradeoffs must be carefully considered as development of silicon photonics is pursued.

Electrical Considerations

Electrical interconnection will continue to dominate short-reach communication. Signaling standards could be extended beyond 50Gb/s per channel. Electrical connectors for printed circuit board and cable communication delivering low insertion loss, flat impedance profiles and minimal crosstalk will maximize the reach of copper interconnect at an acceptable bit error rate. How quickly higher speeds will be adopted depends on the ability to equalize the channels within the existing power envelope while channel cost-performance, measured in $/Gb/s, is reduced over time. Cost-performance is strongly impacted by bandwidth density, which can be expressed as channels × Gb/s per channel per unit area for a package on a PCB, or channels × Gb/s per channel per unit length for an edge-card interconnection. The ground pins required to shield signals and provide a continuous return path will increase the effective number of pins per channel. So, even when the channels per unit area or per unit length are constant, the number of pins may increase to effectively shield the signals.
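As a rough illustration of these figures of merit, the short Python sketch below computes areal and linear bandwidth density and the effective pin count once shield grounds are counted. The channel counts, speeds, areas and shielding ratio are hypothetical illustrations, not roadmap values.

# Minimal sketch of the bandwidth-density figures of merit described above.
# All channel counts, speeds, areas and shielding ratios are hypothetical.

def areal_bandwidth_density(channels, gbps_per_channel, area_mm2):
    """Gb/s per mm^2 for a package footprint on a PCB."""
    return channels * gbps_per_channel / area_mm2

def linear_bandwidth_density(channels, gbps_per_channel, edge_mm):
    """Gb/s per mm of card edge for an edge-card interconnection."""
    return channels * gbps_per_channel / edge_mm

channels, speed = 64, 50.0  # hypothetical: 64 channels at 50 Gb/s each
print(areal_bandwidth_density(channels, speed, area_mm2=2500.0))   # Gb/s per mm^2
print(linear_bandwidth_density(channels, speed, edge_mm=75.0))     # Gb/s per mm

# Ground pins that shield signals and carry the return current raise the
# effective pin count per channel even when channel density stays constant.
signal_pins = 2   # differential pair
ground_pins = 2   # hypothetical shielding ratio
print((signal_pins + ground_pins) * channels)  # total pins for the same 64 channels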

Reduced-dielectric-loss materials are increasingly used for high-speed electrical channels, and demand for those materials will increase as speeds above 50Gb/s per channel are adopted. However, low-loss/ultra-low-loss electrical channels also require attention to the processing and design of all the elements of packages and PCBs. Copper roughness, via stubs, antipad size and shape, and internal via and PTH design are all as important as the loss characteristics of the dielectric material. Coreless packages and thin laminates with improved via and PTH design will significantly reduce discontinuities in high-speed channels. The footprint at the electrical connector will require special design rules to avoid becoming a bandwidth limiter in package-to-board, -backplane or -cable interconnections. This footprint design includes via or PTH diameter, length and stub; antipad size and shape; routing escapes from the vias or PTHs; and land sizes.

Reference plane gaps, holes and interconnection to PTHs that create return path discontinuities are part of the channel design.
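The sketch below illustrates the budgeting argument behind this: total channel insertion loss is the sum of contributions from the laminate, copper roughness, vias and stubs, and the connector footprint, so improving the dielectric alone leaves much of the budget untouched. The individual dB figures are hypothetical placeholders, not measured or roadmap values.

# Rough loss-budget illustration: every element of the channel contributes,
# not just the laminate. The dB values below are hypothetical placeholders.
loss_budget_db = {
    "package breakout":        1.5,
    "PCB trace (dielectric)":  6.0,   # low-loss laminate
    "copper roughness":        2.0,
    "via and PTH stubs":       1.0,
    "connector footprint":     1.0,
}
total = sum(loss_budget_db.values())
laminate_share = loss_budget_db["PCB trace (dielectric)"] / total
print(f"Total insertion loss: {total:.1f} dB")
print(f"Laminate share: {laminate_share:.0%}")
# Roughly half the budget is spent outside the dielectric in this example,
# which is why stubs, roughness and connector footprints demand equal attention.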

To efficiently address these technology challenges, power efficiency must also continue to improve. The channel shielding requirements demand more layers and vias for the high-speed channel. Improving power efficiency demands lower-impedance power distribution to reduce I²R loss and lower inductance for faster regulation. This creates a trend toward more metal and toward placing regulation closer to the loads, competing with short-reach signaling and increased signal shielding for package and board resources. These trends also leverage the advanced packaging concepts of TSV, SiP and PoP and help drive the economics to adopt these technologies. In addition, increasing processor power, more memory channels, and higher-end PCIe endpoint cards such as FPGAs and discrete GPUs all lead to much higher power needs at the node and rack levels. 48V delivery to the node is being investigated to enable higher efficiency for higher-power racks, and higher-frequency voltage regulators (VRs) are being investigated to improve the power distribution network for the higher-powered compute elements. DDR5 memory has power management integrated circuits (PMICs)/VRs integrated on the DIMMs.
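A short sketch of the arithmetic behind the 48V investigation: for a fixed node power delivered over a fixed distribution resistance, I²R loss falls with the square of the bus voltage, so moving from 12V to 48V cuts distribution loss by a factor of 16. The power and resistance values below are hypothetical.

# Sketch of the I^2*R argument for 48V node distribution. For a fixed power and
# fixed distribution resistance, the current (and the squared-current loss)
# falls as the bus voltage rises. Power and resistance values are hypothetical.

def distribution_loss_w(power_w, bus_voltage_v, resistance_ohm):
    current_a = power_w / bus_voltage_v
    return current_a ** 2 * resistance_ohm

node_power = 1000.0        # W, hypothetical high-power node
path_resistance = 0.002    # ohm, hypothetical distribution path

for volts in (12.0, 48.0):
    loss = distribution_loss_w(node_power, volts, path_resistance)
    print(f"{volts:>4.0f} V bus: {loss:.2f} W lost in distribution")
# 48 V carries one quarter of the 12 V current, so I^2*R loss drops by 16x.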

High-End Systems Webinar

Join iNEMI on Jun. 17 to review highlights of the High-End Systems roadmap. This webinar, part of a series of roadmap webinars iNEMI is hosting, will share key trends and technology challenges identified in the chapter. For additional details and to register: https://community.inemi.org/ev_calendar_day.asp?date=6/17/2020&eventid=334

Kartik Ananth, senior principal engineer, Data Platforms Group, Intel, and Dale Becker, chief engineer, System Electrical Design for IBM, are chair and co-chair, respectively, of the iNEMI High-End Systems Product Emulator Group (PEG).
