Video Layer

The video layer is the main video playback layer. It is described by the NvMediaVideoDesc structure.

  • pictureStructure, next, current, previous, and previous2 describe the picture structure and the video surfaces to be used.
  • srcRect determines which portion of the source video surface is used.
  • This portion of the source is scaled (zoomed) into dstRect.
  • dstRect determines the rectangle where the video is going to be rendered.
    • The position of this rectangle is relative to the destination surface.
    • The destination surface size is determined at NvMediaVideoMixer creation.
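The srcRect-to-dstRect scaling can be sketched as follows. This is an illustrative, self-contained model: the Rect type and its field names are stand-ins for the real NvMedia rectangle structure, whose layout may differ.

```c
#include <assert.h>

/* Illustrative stand-in for an NvMediaRect-style rectangle; the real
 * structure's field names may differ. */
typedef struct {
    short x0, y0;  /* top-left corner */
    short x1, y1;  /* bottom-right corner */
} Rect;

/* Horizontal and vertical zoom factors applied when the srcRect portion
 * of the source surface is scaled into dstRect. */
static void zoom_factors(const Rect *src, const Rect *dst,
                         double *sx, double *sy)
{
    *sx = (double)(dst->x1 - dst->x0) / (double)(src->x1 - src->x0);
    *sy = (double)(dst->y1 - dst->y0) / (double)(src->y1 - src->y0);
}
```

For example, a 720x480 srcRect rendered into a 1440x960 dstRect is zoomed by a factor of 2 in each dimension.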

Each NvMediaVideoSurface must contain an entire frame's worth of data, irrespective of whether an interlaced or progressive sequence is being decoded.

Depending on the exact encoding structure of the compressed video stream, the application may need to call NvMediaVideoDecoderRenderEx twice to fill a single NvMediaVideoSurface.

When the stream contains an encoded progressive frame, or a "frame coded" interlaced field pair, a single NvMediaVideoDecoderRenderEx call fills the entire surface. When the stream contains separately encoded interlaced fields, two NvMediaVideoDecoderRenderEx calls are required: one for the top field and one for the bottom field.

Note:

When NvMediaVideoDecoderRenderEx renders an interlaced field, this operation does not disturb the content of the other field in the surface.
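This field independence can be modeled with a toy frame buffer in which the top field occupies the even lines and the bottom field the odd lines, the usual interleaved layout; this is a sketch of the behavior, not SDK code.

```c
#include <assert.h>
#include <string.h>

#define WIDTH  8
#define HEIGHT 6

/* Toy model of a frame surface: the surface holds a full frame; the top
 * field occupies the even lines and the bottom field the odd lines.
 * Filling one field leaves the other field's lines intact, mirroring
 * the behavior of NvMediaVideoDecoderRenderEx for interlaced fields. */
static void fill_field(unsigned char frame[HEIGHT][WIDTH],
                       int bottom, unsigned char value)
{
    for (int y = bottom ? 1 : 0; y < HEIGHT; y += 2)
        memset(frame[y], value, WIDTH);
}
```

Two successive calls, one per field, thus fill the whole surface without either call disturbing the other field's data.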

The canonical usage is to call NvMediaVideoMixerRenderSurface once for each decoded field, in display order, to yield one post-processed frame for display. For each call to NvMediaVideoMixerRenderSurface, the field to be processed must be provided as the current parameter.

To enable operation of advanced deinterlacing algorithms and/or post-processing algorithms, some past and/or future surfaces must be provided as context. These are provided as the previous2, previous, and next parameters. The NvMediaVideoMixerRenderSurface pictureStructure parameter applies to current.
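The context selection can be sketched as a sliding window over the decoded fields in display order. The Surface and MixWindow types below are illustrative stand-ins (real code would pass NvMediaVideoSurface pointers); the parameter names follow this document.

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for NvMediaVideoSurface; in real code these would be
 * surface pointers obtained from the decoder. */
typedef struct { int id; } Surface;

/* Context window passed to NvMediaVideoMixerRenderSurface
 * (parameter names per this document). */
typedef struct {
    Surface *previous2, *previous, *current, *next;
} MixWindow;

/* For field i of an n-field sequence in display order, select the
 * surrounding surfaces; neighbors that fall outside the sequence
 * are NULL. */
static MixWindow select_window(Surface *fields, size_t n, size_t i)
{
    MixWindow w;
    w.previous2 = (i >= 2)    ? &fields[i - 2] : NULL;
    w.previous  = (i >= 1)    ? &fields[i - 1] : NULL;
    w.current   = &fields[i];
    w.next      = (i + 1 < n) ? &fields[i + 1] : NULL;
    return w;
}
```

Advancing i by one field slides the whole window forward, so each decoded field is seen once in each window position.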

The picture structure for the other surfaces is automatically derived from that of the current picture. The derivation algorithm is simple: the concatenated list past/current/future is assumed to have an alternating top/bottom pattern throughout. In other words, the concatenated list of past/current/future fields forms a window that slides through the sequence of decoded fields.
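The alternating-parity derivation can be expressed as a small helper. This is a hypothetical illustration of the rule (the derivation is internal to the mixer, and the enum values are stand-ins): a neighbor at an odd offset from current has the opposite field parity, and a neighbor at an even offset the same parity.

```c
#include <assert.h>

/* Stand-in picture-structure values for illustration. */
typedef enum { PIC_TOP_FIELD, PIC_BOTTOM_FIELD } PictureStructure;

/* Derive a neighbor's picture structure from that of current, given its
 * signed offset in the past/current/future list (previous2 = -2,
 * previous = -1, next = +1). Because the window alternates top/bottom
 * throughout, odd offsets flip the field parity and even offsets
 * preserve it. */
static PictureStructure neighbor_structure(PictureStructure current,
                                           int offset)
{
    if (offset % 2 == 0)           /* even offset: same parity */
        return current;
    return (current == PIC_TOP_FIELD) ? PIC_BOTTOM_FIELD
                                      : PIC_TOP_FIELD;
}
```

So if current is a top field, previous and next are treated as bottom fields, while previous2 is a top field.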