SPRADH2A February 2024 – November 2024
AM62A3, AM62A3-Q1, AM62A7, AM62A7-Q1, AM62P, AM62P-Q1, DS90UB953A-Q1, DS90UB960-Q1, TDES960, TSER953

 

  1. Abstract
  2. Trademarks
  3. 1 Introduction
  4. 2 Connecting Multiple CSI-2 Cameras to the SoC
     1. 2.1 CSI-2 Aggregator Using SerDes
     2. 2.2 CSI-2 Aggregator without Using SerDes
     3. 2.3 Supported Camera Data Throughput
  5. 3 Enabling Multiple Cameras in Software
     1. 3.1 Camera Subsystem Software Architecture
     2. 3.2 Image Pipeline Software Architecture
  6. 4 Reference Design
     1. 4.1 Supported Cameras
     2. 4.2 Setting up Four IMX219 Cameras
     3. 4.3 Configuring Cameras and CSI-2 RX Interface
     4. 4.4 Streaming from Four Cameras
        1. 4.4.1 Streaming Camera Data to Display
        2. 4.4.2 Streaming Camera Data through Ethernet
        3. 4.4.3 Storing Camera Data to Files
     5. 4.5 Multicamera Deep Learning Inference
        1. 4.5.1 Model Selection
        2. 4.5.2 Pipeline Setup
  7. 5 Performance Analysis
  8. 6 Summary
  9. 7 References
  10. 8 Revision History

Camera Subsystem Software Architecture

Figure 3-1 shows a high-level block diagram of the camera capture software in the AM62A/AM62P Linux SDK, corresponding to the hardware system shown in Figure 2-2.

Figure 3-1. High-Level Block Diagram of Camera Capture System Using SerDes

This software architecture enables the SoC to receive multiple camera streams through SerDes, as shown in Figure 2-2. The FPD-Link/V3-Link SerDes assigns a unique I2C address and a unique virtual channel to each camera. A device tree overlay must be created with the unique I2C address of each camera. The CSI-2 RX driver identifies each camera by its virtual channel number and creates a DMA context per camera stream; a video node is created for each DMA context. Data from each camera is then written to memory through DMA. User-space applications access each camera's data through its corresponding video node. Examples that use this software architecture are given in Chapter 4, Reference Design.
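As an illustration, a device tree overlay for one camera behind the deserializer pairs the sensor's remapped (alias) I2C address with the port routing toward the deserializer. The node names, I2C address, and endpoint labels below are hypothetical sketches; the actual values depend on the board design and the serializer alias configuration in the SDK:

```dts
/* Hypothetical overlay fragment: one IMX219 behind an FPD-Link
 * serializer/deserializer pair. The address 0x10 and the labels
 * cam0_i2c / deser_in0 are examples only. */
&cam0_i2c {
    imx219_0: sensor@10 {
        compatible = "sony,imx219";
        reg = <0x10>;   /* unique alias I2C address for this camera */
        port {
            imx219_0_out: endpoint {
                /* links this sensor to one deserializer input;
                 * its data arrives on a dedicated virtual channel */
                remote-endpoint = <&deser_in0>;
            };
        };
    };
};
```

Each additional camera gets its own overlay with a different alias address, so all four sensors can share the same physical I2C bus without address conflicts.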

Any sensor driver that is compliant with the V4L2 framework can be plugged into this architecture. Refer to [8] for how to integrate a new sensor driver into the Linux SDK.
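Because each camera stream surfaces as its own video node, a user-space application can discover the available capture devices by scanning /dev. A minimal sketch (the /dev/video* path is Linux-specific; on a board with four cameras this would typically list at least four nodes):

```python
import glob

def list_video_nodes(pattern="/dev/video*"):
    """Return the V4L2 video device nodes currently present, sorted."""
    return sorted(glob.glob(pattern))

if __name__ == "__main__":
    for node in list_video_nodes():
        print(node)
```

Tools such as `v4l2-ctl --list-devices` can then be used to map each node back to the camera it belongs to before building a capture pipeline on top of it.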