Hello,
I have been going through your repository and noticed that the reported inference times for both the CPM and Hourglass models are the same, as shown in the README.
I find it surprising that these two models, which have different architectures, show identical inference times. After reviewing the benchmark.py script, I see that it takes a model's weights as input, which could belong to either CPM or Hourglass. Since the same script is used for both models, I am curious whether the identical inference times come from testing only one of the models and reporting its result for both, or whether both models were indeed tested and happened to yield the same numbers.
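For reference, this is roughly how I would expect the two models to be timed independently. This is just a minimal sketch, assuming both models are PyTorch modules; the class names, weight files, and input shapes below are hypothetical and not taken from your repo:

```python
import time
import torch

def measure_inference_time(model, input_shape, runs=100, warmup=10):
    """Average the forward-pass time of a model over several runs."""
    model.eval()
    x = torch.randn(*input_shape)
    with torch.no_grad():
        for _ in range(warmup):   # warm-up passes, excluded from timing
            model(x)
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
        elapsed = time.perf_counter() - start
    return elapsed / runs  # average seconds per forward pass

# Hypothetical usage -- each model timed with its own weights:
# cpm = CPM(); cpm.load_state_dict(torch.load("cpm_weights.pth"))
# hg = Hourglass(); hg.load_state_dict(torch.load("hourglass_weights.pth"))
# print(measure_inference_time(cpm, (1, 3, 368, 368)))
# print(measure_inference_time(hg, (1, 3, 256, 256)))
```

If something along these lines was run separately for each set of weights, I would expect the two architectures to report different timings, which is why the identical numbers in the README caught my eye.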
Could you please provide some clarification on this? I appreciate your time and effort in creating this repository and look forward to your response.
Thank you!