This repository has been archived by the owner on Feb 25, 2021. It is now read-only.
Imagine a scenario with 24 TB of video data. I would like to extract descriptors from each image in a pre-processing stage that can be heavily parallelized via map-reduce across a cluster of networked GPU servers on AWS. We would then use the collated keyframe descriptors as input to the actual SLAM pipeline, thereby skipping video decoding entirely. So, fundamentally, how would we:

1. Extract keyframe landmarks from 2000+ 4K videos in parallel and save those descriptors to disk, so we can massively parallelize the processing of the video data? Are there any sequential dependencies, in terms of state management, in the descriptor-creation process?
2. Input the pre-processed keyframe descriptors into the SLAM pipeline, bypassing video decoding and passing the descriptors directly to the tracker and pose-estimation modules?
Goals:

- Map stage: descriptor extraction from 24 TB of 4K video data (processing this sequentially is not an option due to time and scale)
- Reduce stage: utilize the descriptor dataset to run the SLAM pipeline
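On the second question, the reduce side could replay stored descriptors to the tracker in frame order instead of decoding video. This is a sketch assuming the map stage wrote one `frame_NNNNNN.npz` file per frame (a hypothetical layout, not this project's format); the actual entry point for injecting descriptors into the tracker and pose-estimation modules depends on this library's API:

```python
# Reduce-stage sketch: replay saved keyframe descriptors in ascending
# frame order so the tracker consumes them without video decoding.
# ASSUMPTION: the "frame_NNNNNN.npz" file layout is illustrative only.
import glob
import os
import numpy as np

def replay_descriptors(desc_dir):
    """Yield (frame_index, keypoints, descriptors) sorted by frame index."""
    for path in sorted(glob.glob(os.path.join(desc_dir, "frame_*.npz"))):
        idx = int(os.path.basename(path)[len("frame_"):-len(".npz")])
        data = np.load(path)
        yield idx, data["keypoints"], data["descriptors"]
```

Note that this replay step is inherently sequential per video: tracking and pose estimation depend on the previous frame's state, so the parallelism lives in the map stage, not here.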