Timed VCPU Yield

On DriveOS, all Orin CPU cores are assigned to the Guest OS virtual machines. Other DriveOS services and low-priority virtual machines (such as the Update Service VM) share the CPUs assigned to the Application VM. To allow the low-priority virtual machines to run opportunistically, DriveOS provides the NvHvYieldVcpu() interface, which yields a specific VCPU to a lower-priority VM when that VM is assigned the same VCPU.

The Application VM VCPUs are expected to idle periodically, allowing the DRIVE Update Service to execute opportunistically.

The NvHvYieldVcpu() API yields a VCPU to low-priority virtual machines. If an interrupt for the yielded VCPU arrives in the Guest OS, the interrupt is allowed to be processed; once it is handled, the yield to the low-priority VM resumes. To avoid frequent switching, the system can be configured so that this VCPU receives a minimal number of interrupts during the yield. The API takes a timeout parameter, MAX_VCPU_YIELD_TIME, which bounds how long the VCPU is yielded to the low-priority VM. Internally, the API communicates with an entity in the Update VM called lowprio_yield_req. The API can return before the specified timeout expires if the low-priority VM finishes using the CPU earlier.

The API is exposed to DriveOS users so that they can invoke it from within their scheduler. The API parameters and description can be found in the header file packaged in the SDK at drive-linux/include/nvtegrahv_yield_vcpu.h. The device tree bindings document for the VCPU yield driver is available in the kernel source tree at nvidia/Documentation/devicetree/bindings/tegra_hv/tegra_hv_vcpu_yield.txt.