
VideoEncoder using inputSurfaceView to send to RTMPClient #1702

@discovertalha

Description


Hello,

I have been trying to send a video feed directly from DeepAR, but DeepAR only allows one rendering mode at a time: either rendering to a SurfaceView (renderToSurfaceView) or off-screen rendering (offScreenRendering), which hands me processed frames that don't work very well for streaming.

My primary goal is to feed the Surface directly, since that will probably work best in my case, but the setInputSurface method that comes with the VideoEncoder doesn't work for me. I am working with version 2.2.6 of RootEncoder. Using a frame-by-frame approach I managed to more or less convert the frames and send them through the VideoEncoder and the GetVideoData interface, but the performance is abysmal and the frames get distorted because of the unusual way DeepAR lays out its frames. Is there any way to directly use the Surface that DeepAR is rendering its frames onto?
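
The kind of wiring I am hoping for is roughly the sketch below. It is written against plain MediaCodec rather than RootEncoder's VideoEncoder, just to show the idea; the DeepAR package/class name and its setRenderSurface(Surface, int, int) call are assumptions on my side about the DeepAR SDK, and the bitrate/fps numbers are placeholders.

import java.io.IOException;
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.view.Surface;
import ai.deepar.ar.DeepAR; // assumption about the DeepAR SDK package

private Surface startSurfaceEncoder(DeepAR deepAR, int width, int height) throws IOException {
    MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, width, height);
    format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
    format.setInteger(MediaFormat.KEY_BIT_RATE, 2_500_000);    // placeholder bitrate
    format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
    format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 2);

    MediaCodec encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
    encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);

    // Encoder-owned input Surface: this is what I would like to hand to DeepAR
    // instead of copying frames by hand.
    Surface encoderInputSurface = encoder.createInputSurface();
    encoder.start();

    // Assumption: DeepAR can be pointed at an arbitrary Surface via setRenderSurface().
    deepAR.setRenderSurface(encoderInputSurface, width, height);
    return encoderInputSurface;
}

If VideoEncoder could expose (or accept) an equivalent encoder-owned Surface, I could skip the frame-by-frame copy entirely.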

The current approach I have been experimenting with looks something like this:

public void frameAvailable(Image frame) {
    if (frame == null) {
        Log.e(TAG, "frameAvailable: Received null frame");
        return;
    }

    // renderer.renderImage(frame);
    ByteBuffer buffer = frame.getPlanes()[0].getBuffer();
    int[] byteArray = getArrayfromBytes(buffer);
    if (byteArray.length > 0) {
        // Convert the RGBA Image to a YUV buffer before handing it to the encoder.
        ByteBuffer yuvBuffer = YUVBufferExtractor.convertImageToYUV(frame);
        // byte[] yuvByteArr = YUVUtil.ARGBtoYUV420SemiPlanar(byteArray, frame.getWidth() + 48, frame.getHeight());
        byte[] yuvByteArr = yuvBuffer.array();
        Frame yuvFrame = new Frame(yuvByteArr, 0, frame.getWidth() * frame.getHeight());
        renderer.renderImageThroughFrame(yuvFrame, frame.getWidth(), frame.getHeight());
        videoEncoder.inputYUVData(yuvFrame);
    }
}
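
As a side note on the distortion: my best guess is that the RGBA plane's rowStride is larger than width * pixelStride, which is also where the hard-coded width + 48 in the commented-out line comes from. Below is a rough, stride-aware sketch that copies the plane into a tightly packed ARGB int[] (the format the commented-out YUVUtil.ARGBtoYUV420SemiPlanar call expects); the method name is just mine for illustration.

import java.nio.ByteBuffer;
import android.media.Image;

// Rough sketch: walk the RGBA_8888 plane using its real rowStride/pixelStride
// and produce a tightly packed ARGB int[]. My guess is that the distortion
// (and the width + 48 workaround) comes from ignoring the row padding.
private int[] imageToPackedArgb(Image frame) {
    Image.Plane plane = frame.getPlanes()[0];
    ByteBuffer src = plane.getBuffer();
    int width = frame.getWidth();
    int height = frame.getHeight();
    int pixelStride = plane.getPixelStride();   // 4 for RGBA_8888
    int rowStride = plane.getRowStride();       // usually width * 4 plus padding

    int[] argb = new int[width * height];
    for (int y = 0; y < height; y++) {
        int rowStart = y * rowStride;
        for (int x = 0; x < width; x++) {
            int offset = rowStart + x * pixelStride;
            int r = src.get(offset) & 0xFF;
            int g = src.get(offset + 1) & 0xFF;
            int b = src.get(offset + 2) & 0xFF;
            int a = src.get(offset + 3) & 0xFF;
            argb[y * width + x] = (a << 24) | (r << 16) | (g << 8) | b;
        }
    }
    return argb;
}

With that, the conversion could use the real width instead of width + 48, though it still leaves the cost of copying every frame on the CPU, which is why I would prefer the Surface route.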
