
Pipeline for ZED Camera on Tegra TX2 #58


Description

@D0CX4ND3R

Hi, I ran the pipeline on a Tegra TX2 with a ZED camera, but a problem occurred.
To make sure the ZED camera can be used, I modified the Application structure as follows:

  1. I added a new structure VideoSource to initialize the ZED camera.
struct VideoSource
{
        sl::Mat frame_zed;
        sl::Camera zed_camera;

        VideoSource()
        {
          sl::InitParameters init_params;
          init_params.camera_resolution = sl::RESOLUTION_HD720;
          init_params.depth_mode = sl::DEPTH_MODE_PERFORMANCE;
          init_params.coordinate_units = sl::UNIT_METER;
          init_params.camera_fps = 30;

          sl::ERROR_CODE err = zed_camera.open(init_params);
          if (err != sl::SUCCESS) {
                  std::cout << sl::toString(err) << std::endl;
                  zed_camera.close();
                  //return; // Quit if an error occurred
          }
          else
            std::cout << "ZED Camera created!!" << std::endl;
        }

        // Convert the ZED camera sl::Mat to an OpenCV cv::Mat.
        // Note: the returned cv::Mat shares memory with the input sl::Mat,
        // so the sl::Mat must stay alive while the cv::Mat is in use.
        virtual cv::Mat slMat2cvMat(sl::Mat &input)
        {
                // Mapping between sl::MAT_TYPE and OpenCV CV_* types
                int cv_type = -1;
                switch (input.getDataType())
                {
                        case sl::MAT_TYPE_32F_C1: cv_type = CV_32FC1; break;
                        case sl::MAT_TYPE_32F_C2: cv_type = CV_32FC2; break;
                        case sl::MAT_TYPE_32F_C3: cv_type = CV_32FC3; break;
                        case sl::MAT_TYPE_32F_C4: cv_type = CV_32FC4; break;
                        case sl::MAT_TYPE_8U_C1: cv_type = CV_8UC1; break;
                        case sl::MAT_TYPE_8U_C2: cv_type = CV_8UC2; break;
                        case sl::MAT_TYPE_8U_C3: cv_type = CV_8UC3; break;
                        case sl::MAT_TYPE_8U_C4: cv_type = CV_8UC4; break;
                        default: break;
                }
                if (cv_type == -1)
                {
                        return cv::Mat(); // unsupported type: return an empty Mat
                }
                return cv::Mat(input.getHeight(), input.getWidth(), cv_type, input.getPtr<sl::uchar1>(sl::MEM_CPU), input.getStepBytes(sl::MEM_CPU));
        }

        virtual void operator>>(cv::Mat &output)
        {
            // get an image from the ZED camera via the ZED SDK
            zed_camera.retrieveImage(frame_zed, sl::VIEW_LEFT);
            output = slMat2cvMat(frame_zed);
        }

        virtual int getWidth(){return zed_camera.getResolution().width;}
        virtual int getHeight(){return zed_camera.getResolution().height;}
};
  2. I modified the Application structure constructor:
    // clang-format off
    Application
    (
        const std::string &input,
        const std::string &model,
        float acfCalibration,
        int minWidth,
        bool window,
        float resolution
    ) : resolution(resolution)
    // clang-format on
    {
        // Create a video source:
        // 1) integer == index to device camera
        // 2) filename == supported video formats
        // 3) "/fullpath/Image_%03d.png" == list of stills
        // http://answers.opencv.org/answers/761/revisions/
        //video = create(input);
        //zed_camera = create();

        // create zed camera
        zed_source = std::make_shared<VideoSource>();

        //video = create(0);

        // Create an OpenGL context:
        cv::Size size(zed_source->getWidth(),zed_source->getHeight());
        //const auto size = getSize(*video);

        context = aglet::GLContext::create(aglet::GLContext::kAuto, window ? "acf" : "", size.width, size.height);

        // Create an object detector:
        detector = std::make_shared<acf::Detector>(model);
        detector->setDoNonMaximaSuppression(true);

        if (acfCalibration != 0.f)
        {
            acf::Detector::Modify dflt;
            dflt.cascThr = { "cascThr", -1.0 };
            dflt.cascCal = { "cascCal", acfCalibration };
            detector->acfModify(dflt);
        }

        // Create the asynchronous scheduler:
        pipeline = std::make_shared<acf::GPUDetectionPipeline>(detector, size, 5, 0, minWidth);

        // Instantiate an ogles_gpgpu display class that will draw to the
        // default texture (0) which will be managed by aglet (typically glfw)
        if (window && context->hasDisplay())
        {
            display = std::make_shared<ogles_gpgpu::Disp>();
            display->init(size.width, size.height, TEXTURE_FORMAT);
            display->setOutputRenderOrientation(ogles_gpgpu::RenderOrientationFlipped);
        }
    }
  3. The update function is modified accordingly:
cv::Mat frame;
(*zed_source)  >>  frame;
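For comparison, the usual ZED SDK 2.x capture pattern checks `grab()` before each `retrieveImage()` call; `retrieveImage()` only returns a fresh frame after a successful `grab()`. A minimal sketch of one per-frame step, assuming the VideoSource members shown above (this is an illustrative fragment, not the project's actual update function):

```cpp
// Hypothetical per-frame update for the VideoSource above (ZED SDK 2.x API).
// grab() advances the camera to the next frame; retrieveImage() then copies
// the left view into frame_zed, which slMat2cvMat() wraps without copying.
cv::Mat frame;
if (zed_source->zed_camera.grab() == sl::SUCCESS)
{
    zed_source->zed_camera.retrieveImage(zed_source->frame_zed, sl::VIEW_LEFT);
    frame = zed_source->slMat2cvMat(zed_source->frame_zed); // shares memory with frame_zed
}
```

Without the `grab()` call each frame, the retrieved image can remain empty or stale, which would match the all-black window described below.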

The program compiles successfully, but no image appears in the window, only a black frame.
Does the code have any mistakes?
Thank you for your help.

P.S. One more question: when I run the acf-detect project, I want to show the captured frames in real time, but the following OpenCV error occurs:

OpenCV(3.4.1) Error: Unspecified error (The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script) in cvShowImage, file /home/nvidia/.hunter/_Base/8fee57e/c3fbf9e/a0ab86d/Build/OpenCV/Source/modules/highgui/src/window.cpp, line 636
Exception: OpenCV(3.4.1) /home/nvidia/.hunter/_Base/8fee57e/c3fbf9e/a0ab86d/Build/OpenCV/Source/modules/highgui/src/window.cpp:636: error: (-2) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Carbon support. If you are on Ubuntu or Debian, install libgtk2.0-dev and pkg-config, then re-run cmake or configure script in function cvShowImage

How can I add these two libraries during the hunter build? Thank you.
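Since hunter builds OpenCV from source, the highgui backend depends on the CMake flags passed to that build rather than on any system OpenCV. One possible approach (a sketch, assuming the project already loads a custom hunter config file; the file path `cmake/Hunter/config.cmake` is an assumption) is to override the OpenCV package options:

```cmake
# Hypothetical cmake/Hunter/config.cmake entry.
# Assumes libgtk2.0-dev and pkg-config are installed on the system
# (sudo apt-get install libgtk2.0-dev pkg-config) before OpenCV is rebuilt.
hunter_config(OpenCV
    VERSION ${HUNTER_OpenCV_VERSION}
    CMAKE_ARGS
        WITH_GTK=ON   # enable the GTK+ highgui backend for cv::imshow
)
```

After changing the flags, the cached OpenCV build under `~/.hunter` would need to be rebuilt (e.g. by removing the corresponding build directory) for the new configuration to take effect.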
