SPRAD14 April 2022 AM67, AM67A, AM68, AM68A, AM69, AM69A, DRA821U, DRA821U-Q1, DRA829J, DRA829J-Q1, DRA829V, DRA829V-Q1, TDA4AEN-Q1, TDA4AH-Q1, TDA4AL-Q1, TDA4AP-Q1, TDA4APE-Q1, TDA4VE-Q1, TDA4VEN-Q1, TDA4VH-Q1, TDA4VL-Q1, TDA4VM, TDA4VM-Q1, TDA4VP-Q1, TDA4VPE-Q1

 

  Trademarks
  1 Introduction
  2 Dual TDA4 System
    2.1 Dual TDA4x SoC System Diagram
    2.2 System Consideration and BOM Optimization
  3 Camera Connection
    3.1 Duplicate Front Camera Input to Two TDA4x SoCs
    3.2 Connect Front Camera to Only One TDA4x
  4 Boot Sequence Solution
    4.1 Boot Solution Based on Dual Flash
    4.2 Boot Solution Based on Single Flash
  5 Multi-SoC Demo Based on PCIe
  6 References

Connect Front Camera to Only One TDA4x

In this solution, the front camera and side-view cameras connect only to TDA4-A, which transfers its intermediate results to TDA4-B for further processing. TDA4-B then fuses the multiple results and outputs the final result, as shown in Figure 3-2.

Figure 3-2 Typical Camera Series Solution

The difference from the previous scenario is that the results of TDA4-A are transferred through the PCIe TX node. The deep-learning inference intermediate results are received by the PCIe RX node on TDA4-B and used as source data for further processing there. The surround-view camera results are fed into the fusion node together with the deep-learning results, and TDA4-B outputs the final result.