Compute Graph Framework SDK Reference  5.10
CGF Channel Sample

Description

The CGF Channel Standalone Sample demonstrates the usage of CGF Channel without the context of a CGF graphlet or application.

Running the Sample

The command line for the sample is:

./sample_cgf_dwchannel --prod=[0|1]
                   --downstreams=[1,4]
                   --cons=[1,4]
                   --ip=[IP Address]
                   --port=[Socket Port or ID]
                   --mode=[mailbox|reuse|[N]]
                   --type=[SOCKET|SHMEM_LOCAL|NVSCI]

where

--prod=[0|1]
    Whether to run the producer in this process or not.
    Ignored if type=NVSCI
    Default value: 1

--downstreams=[1,4]
    Number of downstream consumers of the producer.
    Ignored if type=NVSCI
    Default value: 1

--cons=[1,4]
    Number of consumers in this process.
    Ignored if type=NVSCI
    Default value: 1

--ip=[STR]
    IP Address of the source port.
    Ignored if type=NVSCI
    Default value: 127.0.0.1

--port=[INT]
    SOCKET Port Number or port ID under SHMEM_LOCAL.
    Ignored if type=NVSCI
    Default value: 40002

--mode=[mailbox|reuse|[N]]
    If the value is a number N, it sets the FIFO size of the channel.
    mailbox keeps only a single packet in the channel; it is overwritten when a new packet arrives.
    reuse builds on mailbox: the channel always retains the latest packet so it can be read again.
    Default value: 4

--type=[SOCKET|SHMEM_LOCAL|NVSCI]
    Socket channel, local shared memory channel, or nvsci channel.
    Default value: SOCKET

--prod-reaches=[STR]
    Colon-separated list of producer reaches (process|chip)
    For NVSCI mode only
    Default value: ""

--prod-stream-names=[STR]
    Colon-separated list of producer nvsciipc endpoints
    For NVSCI mode only
    Default value: ""

--cons-reaches=[STR]
    Colon-separated list of consumer reaches (process|chip)
    For NVSCI mode only
    Default value: ""

--cons-stream-names=[STR]
    Colon-separated list of consumer nvsciipc endpoints
    For NVSCI mode only
    Default value: ""

--dataType=[int|dwImage]
    The type of data to be transferred.
    Default value: "dwImage"

--frames=N
    The number of frames to run the sample.
    Default value: 128

--sync-mode=[none|p2c|c2p|both]
    The synchronization mode for exchanging buffers.
    none: all data buffers are exchanged synchronously.
    p2c: data buffers are written asynchronously with respect to the producer's send.
    c2p: data buffers are read asynchronously with respect to the consumer's read.
    both: both p2c and c2p synchronization.
    For NVSCI mode only

Examples

To run the default configuration (inter-process peer-to-peer over socket):

./sample_cgf_dwchannel

To run intra-process socket broadcast:

./sample_cgf_dwchannel --prod=1 --downstreams=2 --cons=2

To run intra-process nvscistream broadcast:

./sample_cgf_dwchannel --type=NVSCI --prod-stream-names=nvscisync_a_0:nvscisync_b_0 --prod-reaches=process:process --cons-stream-names=nvscisync_a_1:nvscisync_b_1 --cons-reaches=process:process

To run inter-process socket:

./sample_cgf_dwchannel --cons=1 --prod=0 --downstreams=0
./sample_cgf_dwchannel --prod=1 --downstreams=1 --cons=0

To run inter-process socket with custom data type:

./sample_cgf_dwchannel --cons=1 --prod=0 --downstreams=0 --dataType=custom
./sample_cgf_dwchannel --prod=1 --downstreams=1 --cons=0 --dataType=custom

To run inter-process nvscistream:

./sample_cgf_dwchannel --type=NVSCI --prod-stream-names=nvscisync_a_0 --prod-reaches=process
./sample_cgf_dwchannel --type=NVSCI --cons-stream-names=nvscisync_a_1 --cons-reaches=process

To run inter-process nvscistream with custom data type:

./sample_cgf_dwchannel --type=NVSCI --prod-stream-names=nvscisync_a_0 --prod-reaches=process --dataType=custom
./sample_cgf_dwchannel --type=NVSCI --cons-stream-names=nvscisync_a_1 --cons-reaches=process --dataType=custom

To run inter-process nvscistream with asynchronous writes:

./sample_cgf_dwchannel --type=NVSCI --prod-stream-names=nvscisync_a_0 --prod-reaches=process --sync-mode=p2c
./sample_cgf_dwchannel --type=NVSCI --cons-stream-names=nvscisync_a_1 --cons-reaches=process --sync-mode=p2c

To run intra-inter-process socket broadcast:

./sample_cgf_dwchannel --cons=1 --prod=0 --downstreams=0
./sample_cgf_dwchannel --prod=1 --downstreams=2 --cons=1

To run intra-inter-process nvscistream broadcast:

./sample_cgf_dwchannel --type=NVSCI --prod-stream-names=nvscisync_a_0:nvscisync_b_0 --prod-reaches=process:process
./sample_cgf_dwchannel --type=NVSCI --cons-stream-names=nvscisync_a_1:nvscisync_b_1 --cons-reaches=process:process

To run C2C socket peer-to-peer:

  • Tegra Consumer:
    ./sample_cgf_dwchannel --cons=1 --prod=0 --downstreams=0 --ip=${IP} --port=${ID}
    
  • Tegra Producer:
    ./sample_cgf_dwchannel --cons=0 --ip=${IP} --port=${ID}
    

To run C2C nvscistream PCIE peer-to-peer:

  • On Dual Firespray Tegra A:
    sudo ./sample_cgf_dwchannel --type=NVSCI --prod-stream-names=nvscic2c_pcie_s0_c6_1 --prod-reaches=chip
    
  • On Dual Firespray Tegra B:
    sudo ./sample_cgf_dwchannel --type=NVSCI --cons-stream-names=nvscic2c_pcie_s0_c5_1 --cons-reaches=chip
    

To run C2C nvscistream PCIE peer-to-peer with asynchronous writes:

  • On Dual Firespray Tegra A:
    sudo ./sample_cgf_dwchannel --type=NVSCI --prod-stream-names=nvscic2c_pcie_s0_c6_1 --prod-reaches=chip --sync-mode=p2c
    
  • On Dual Firespray Tegra B:
    sudo ./sample_cgf_dwchannel --type=NVSCI --cons-stream-names=nvscic2c_pcie_s0_c5_1 --cons-reaches=chip --sync-mode=p2c
    

To run C2C socket broadcast:

  • Tegra Consumer:
    ./sample_cgf_dwchannel --cons=1 --prod=0 --downstreams=0 --ip=${IP} --port=${ID}
    
  • Tegra Producer:
    ./sample_cgf_dwchannel --prod=1 --downstreams=2 --cons=1 --ip=${IP} --port=${ID}
    

To run C2C nvscistream PCIE broadcast:

  • On Dual Firespray Tegra A:
    sudo ./sample_cgf_dwchannel --type=NVSCI --prod-stream-names=nvscic2c_pcie_s0_c6_1:nvscic2c_pcie_s0_c6_2 --prod-reaches=chip:chip
    
  • On Dual Firespray Tegra B:
    sudo ./sample_cgf_dwchannel --type=NVSCI --cons-stream-names=nvscic2c_pcie_s0_c5_1:nvscic2c_pcie_s0_c5_2 --cons-reaches=chip:chip
    

To run hybrid inter-process, inter-chip nvscistream:

  • On Dual Firespray Tegra A:
    sudo ./sample_cgf_dwchannel --type=NVSCI --prod-stream-names=nvscisync_a_0:nvscic2c_pcie_s0_c6_1 --prod-reaches=process:chip
    sudo ./sample_cgf_dwchannel --type=NVSCI --cons-stream-names=nvscisync_a_1 --cons-reaches=process
    
  • On Dual Firespray Tegra B:
    sudo ./sample_cgf_dwchannel --type=NVSCI --cons-stream-names=nvscic2c_pcie_s0_c5_1 --cons-reaches=chip
    

Enlarge network buffer sizes if necessary

sudo sed -i '$ a net.core.wmem_max = 65011712' /etc/sysctl.conf
sudo sed -i '$ a net.core.rmem_max = 65011712' /etc/sysctl.conf
sudo sed -i '$ a net.core.rmem_default = 16777216' /etc/sysctl.conf
sudo sed -i '$ a net.ipv4.tcp_wmem = 65011712 65011712 65011712' /etc/sysctl.conf
sudo sed -i '$ a net.ipv4.tcp_rmem = 65011712 65011712 65011712' /etc/sysctl.conf
sudo sysctl -p
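After reloading with sysctl -p, the effective values can be read back directly from /proc as a quick sanity check (standard Linux kernel tunables; each key should now report the enlarged value):

```shell
# Read back the effective kernel settings from /proc/sys.
for key in net/core/wmem_max net/core/rmem_max net/core/rmem_default; do
    printf '%s = %s\n' "$key" "$(cat /proc/sys/$key)"
done
```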

Add nvsciipc endpoints if necessary

On Linux, nvsciipc endpoints must be listed in /etc/nvsciipc.cfg before they can be used. See the /etc/nvsciipc.cfg file to understand the format of endpoint entries. If the file is modified, the system must be rebooted before the changes take effect. For C2C use cases, the nvsciipc endpoints are configured in the device tree and are associated with a specific peer SoC on the platform. Please see the DRIVE OS release documentation for further information about how to configure the C2C endpoints.
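For orientation, an inter-process entry in /etc/nvsciipc.cfg pairs two endpoint names with backend parameters; the authoritative column definitions are in the DRIVE OS documentation. The entry below is purely illustrative (the endpoint names and the trailing frame count/size values are hypothetical):

    INTER_PROCESS   ipc_sample_0    ipc_sample_1    64      1536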

Understanding C2C nvsciipc endpoints

Each endpoint can only be used by the Tegra that owns the corresponding PCIe port for the endpoint. In default P3710 (Firespray) Dual Configuration, the nvsciipc endpoints are listed as follows:

INTER_CHIP      nvscic2c_pcie_s0_c[5-6]_[1-12)    0000

Endpoints with c5 are end-port (EP) endpoints on the PCIe bus. Endpoints with c6 are toot-port (RP) endpoints on the PCIe bus. The EP and RP namings are for HW node identification, and do not signify direction of allowed data flows. Each pair of connected endpoints may transfer data in either direction. c6 is connected to Tegra A while c5 is connected to Tegra B.