SPRAD06B March 2022 – November 2024 AM620-Q1, AM623, AM625, AM625-Q1

 

  2.   Abstract
  3.   Trademarks
  4. 1 Overview
    1. 1.1 Board Designs Supported
    2. 1.2 General Board Layout Guidelines
    3. 1.3 PCB Stack-Up
    4. 1.4 Bypass Capacitors
      1. 1.4.1 Bulk Bypass Capacitors
      2. 1.4.2 High-Speed Bypass Capacitors
      3. 1.4.3 Return Current Bypass Capacitors
    5. 1.5 Velocity Compensation
  5. 2 DDR4 Board Design and Layout Guidance
    1. 2.1  DDR4 Introduction
    2. 2.2  DDR4 Device Implementations Supported
    3. 2.3  DDR4 Interface Schematics
      1. 2.3.1 DDR4 Implementation Using 16-Bit SDRAM Devices
      2. 2.3.2 DDR4 Implementation Using 8-Bit SDRAM Devices
    4. 2.4  Compatible JEDEC DDR4 Devices
    5. 2.5  Placement
    6. 2.6  DDR4 Keepout Region
    7. 2.7  DBI
    8. 2.8  VPP
    9. 2.9  Net Classes
    10. 2.10 DDR4 Signal Termination
    11. 2.11 VREF Routing
    12. 2.12 VTT
    13. 2.13 POD Interconnect
    14. 2.14 CK and ADDR_CTRL Topologies and Routing Guidance
    15. 2.15 Data Group Topologies and Routing Guidance
    16. 2.16 CK and ADDR_CTRL Routing Specification
      1. 2.16.1 CACLM - Clock Address Control Longest Manhattan Distance
      2. 2.16.2 CK and ADDR_CTRL Routing Limits
    17. 2.17 Data Group Routing Specification
      1. 2.17.1 DQLM - DQ Longest Manhattan Distance
      2. 2.17.2 Data Group Routing Limits
    18. 2.18 Bit Swapping
      1. 2.18.1 Data Bit Swapping
      2. 2.18.2 Address and Control Bit Swapping
  6. 3 LPDDR4 Board Design and Layout Guidance
    1. 3.1  LPDDR4 Introduction
    2. 3.2  LPDDR4 Device Implementations Supported
    3. 3.3  LPDDR4 Interface Schematics
    4. 3.4  Compatible JEDEC LPDDR4 Devices
    5. 3.5  Placement
    6. 3.6  LPDDR4 Keepout Region
    7. 3.7  LPDDR4 DBI
    8. 3.8  Net Classes
    9. 3.9  LPDDR4 Signal Termination
    10. 3.10 LPDDR4 VREF Routing
    11. 3.11 LPDDR4 VTT
    12. 3.12 CK0 and ADDR_CTRL Topologies
    13. 3.13 Data Group Topologies
    14. 3.14 CK0 and ADDR_CTRL Routing Specification
    15. 3.15 Data Group Routing Specification
    16. 3.16 Byte and Bit Swapping
  7. 4 LPDDR4 Board Design Simulations
    1. 4.1 Board Model Extraction
    2. 4.2 Board-Model Validation
    3. 4.3 S-Parameter Inspection
    4. 4.4 Time Domain Reflectometry (TDR) Analysis
    5. 4.5 System Level Simulation
      1. 4.5.1 Simulation Setup
      2. 4.5.2 Simulation Parameters
      3. 4.5.3 Simulation Targets
        1. 4.5.3.1 Eye Quality
        2. 4.5.3.2 Delay Report
        3. 4.5.3.3 Mask Report
    6. 4.6 Design Example
      1. 4.6.1 Stack-Up
      2. 4.6.2 Routing
      3. 4.6.3 Model Verification
      4. 4.6.4 Simulation Results
  8. 5 Appendix: AM62x ALW and AMC Package Delays
  9. 6 Revision History

DDR4 Introduction

DDR4 board designs are similar to DDR3 board designs. Fly-by routing is required just as it is with DDR3, and thus leveling is required. To achieve higher data rates with DDR4, there are several enhancements added to the interface specification that must be accommodated by both the SDRAM and the processor’s interface (PHY). The enhancements that affect the board interconnect and layout are listed below:

  • Addition of ACT_n pin – This pin provides signaling to allow the pins previously called Command pins (RAS_n, CAS_n and WE_n) to be used as additional address pins. These pins behave as row address pins when ACT_n is low and as command pins when ACT_n is high. This is valid only when CS_n is low.
  • Removal of one BA (Bank Address) pin and addition of 2 BG (Bank Group) pins – This adds flexibility with accesses similar to DDR3, but with 16 banks bundled in four bank groups of four banks each. This results in additional timing parameters, because back-to-back accesses within the same bank group are slower than back-to-back accesses to a different bank group. Successive accesses that alternate between bank groups are the fastest option.
  • Addition of PAR (Parity) and ALERT_n pins (use is optional) – The PAR pin supplies parity monitoring for the command and address pins using even parity from the controller to the SDRAM. ALERT_n is the indicator (open-drain output) from the SDRAMs that indicates when a parity error has been detected.
  • Change to POD termination – Pseudo-Open Drain (POD) output buffers are implemented rather than traditional SSTL push-pull outputs. This allows the data bit termination, ODT, to go to the I/O power rail, VDDQ, rather than to the mid-level voltage, VTT. Power consumption may be reduced, because only driving a bit low draws current.
  • Addition of DBI – Data bus invert (DBI) is a feature that allows the data bus to be inverted whenever more than half of the bits are zero. This feature may reduce active power and enhance the data signal integrity when coupled with POD termination.
  • Addition of a VPP power input – The VPP power supply (2.5 V) provides power to the internal word line logic. Supplying this higher voltage externally, rather than generating it internally, allows the SDRAM to reduce overall power consumption.
  • Separation of data VREF from address/control VREF – The data reference voltage, VREFDQ, is now internally generated both within the SDRAM and within the PHY. It can be programmed to various levels to provide the optimum sampling threshold. The optimum threshold varies based on the ODT impedance chosen, the drive strength, and the PCB track impedance. The address/control reference voltage, VREFCA, is a mid-level reference voltage, the same as it is on DDR3.
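Two of the features above, PAR and DBI, are simple enough to illustrate. The following Python sketch is not TI or JEDEC reference code; it assumes an 8-bit data byte and models only the decision logic described in the bullets: even parity computed across the command/address bits, and byte inversion when more than half of the bits are zero, so that fewer bits drive low against the VDDQ (POD) termination.

```python
def even_parity(bits):
    """Compute the PAR bit for a list of command/address bit values.

    Even parity: PAR is 1 when the count of 1s is odd, so the total
    number of 1s across the bits plus PAR is always even.
    """
    return sum(bits) % 2


def apply_dbi(byte_bits):
    """Model write-side Data Bus Inversion for one 8-bit byte.

    Invert the byte when more than half of its bits are 0, so fewer
    lines drive low against the POD (VDDQ) termination, which is
    where current is drawn. Returns (bits_on_the_bus, dbi_flag).
    """
    zeros = byte_bits.count(0)
    if zeros > len(byte_bits) // 2:
        # More than half the bits are 0: send the inverted byte
        # and assert the DBI flag so the receiver re-inverts it.
        return [1 - b for b in byte_bits], 1
    return list(byte_bits), 0
```

In hardware both functions are combinational logic in the controller PHY and the SDRAM; the sketch only shows the decision each side makes per transfer.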
Note:

These features may not be supported on all devices. Refer to the device data sheet and the DDR Subsystem (DDRSS) chapter in the AM62x Technical Reference Manual for lists of supported and unsupported features.