This is required by webmachinelearning/webnn#226.
To implement this pipeline, the semantic segmentation sample could probably execute the following steps (see the sketch after the list):
- Use mediacapture-transform to capture the `VideoFrame`.
- Import the `VideoFrame` into a WebGPU texture (`GPUExternalTexture`).
- Execute a WebGPU shader to do pre-processing and convert the `GPUExternalTexture` to a `GPUBuffer`.
- Compute the `MLGraph` with the converted `GPUBuffer`; the output is another `GPUBuffer`.
- Execute a WebGPU shader to do post-processing on the output `GPUBuffer` and render the result to a canvas.
- Create a `VideoFrame` from the `CanvasImageSource` and enqueue it to the mediacapture-transform API's controller.
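For concreteness, here is a minimal sketch of how those steps might be wired together. It assumes a `GPUDevice` (`device`), an `MLContext` created from that device (`mlContext`), a pre-built `segmentationGraph` (`MLGraph`), pre-created `preprocessPipeline` (compute) and `postprocessPipeline` (full-screen render) pipelines, a WebGPU-configured `OffscreenCanvas` (`canvas` / `canvasContext`), and a camera `cameraTrack` are all set up elsewhere; those names, the 224×224 tensor shapes, and especially the GPUBuffer-based `mlContext.compute()` call are assumptions, since the exact WebNN/WebGPU interop API is what webmachinelearning/webnn#226 is about.

```ts
const inputBuffer = device.createBuffer({
  size: 1 * 3 * 224 * 224 * 4,   // example NCHW float32 input tensor
  usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
});
const outputBuffer = device.createBuffer({
  size: 1 * 21 * 224 * 224 * 4,  // example per-class score tensor
  usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST,
});

const transformer = new TransformStream<VideoFrame, VideoFrame>({
  async transform(frame, controller) {
    // 1. Import the captured VideoFrame as a GPUExternalTexture.
    const externalTexture = device.importExternalTexture({ source: frame });

    // 2. Pre-process: sample the external texture, normalize, and write the
    //    tensor data into inputBuffer.
    const preEncoder = device.createCommandEncoder();
    const prePass = preEncoder.beginComputePass();
    prePass.setPipeline(preprocessPipeline);
    prePass.setBindGroup(0, device.createBindGroup({
      layout: preprocessPipeline.getBindGroupLayout(0),
      entries: [
        { binding: 0, resource: externalTexture },
        { binding: 1, resource: { buffer: inputBuffer } },
      ],
    }));
    prePass.dispatchWorkgroups(224 / 8, 224 / 8);
    prePass.end();
    device.queue.submit([preEncoder.finish()]);

    // 3. Compute the MLGraph directly on GPU buffers. This is the
    //    hypothetical interop call under discussion in webnn#226.
    await mlContext.compute(segmentationGraph,
      { input: inputBuffer }, { output: outputBuffer });

    // 4. Post-process: a full-screen pass that reads outputBuffer, colors
    //    the segmentation mask, and renders it into the canvas texture.
    const postEncoder = device.createCommandEncoder();
    const renderPass = postEncoder.beginRenderPass({
      colorAttachments: [{
        view: canvasContext.getCurrentTexture().createView(),
        loadOp: 'clear',
        storeOp: 'store',
      }],
    });
    renderPass.setPipeline(postprocessPipeline);
    renderPass.setBindGroup(0, device.createBindGroup({
      layout: postprocessPipeline.getBindGroupLayout(0),
      entries: [{ binding: 0, resource: { buffer: outputBuffer } }],
    }));
    renderPass.draw(3);            // full-screen triangle
    renderPass.end();
    device.queue.submit([postEncoder.finish()]);

    // 5. Wrap the rendered canvas in a new VideoFrame and enqueue it.
    controller.enqueue(new VideoFrame(canvas, { timestamp: frame.timestamp }));
    frame.close();
  },
});

// Wire the transform between mediacapture-transform's processor and generator.
const processor = new MediaStreamTrackProcessor({ track: cameraTrack });
const generator = new MediaStreamTrackGenerator({ kind: 'video' });
processor.readable.pipeThrough(transformer).pipeTo(generator.writable);
```

The key property this sketch tries to illustrate is that the frame data stays on the GPU end to end: pre-processing, graph compute, and post-processing all operate on `GPUBuffer`/texture resources, with no readback to an `ArrayBuffer` in between.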