Detect rotated faces #9
I used https://github.com/hybridgroup/gocv
Once the object rotation feature is implemented, this won't be an issue anymore.
With the new release of Pigo it's now possible to detect rotated faces, but with a small limitation: you have to provide a specific angle to match the faces against. Here is an example to detect the 4th face that was previously missed:

$ pigo -in ~/Desktop/49496258-66536f80-f8a0-11e8-965b-4bdfb7f14524.jpg -out ~/Desktop/output.jpg -cf data/facefinder -angle=0.5 -iou=0.2

This means that to detect all the faces in an image, the same command has to be run with the angle parameter ranging from 0.0 to 1.0, which will cost performance. Maybe in a future release I will focus on resolving this kind of limitation.
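To illustrate the angle sweep described above, here is a rough sketch (not code from this issue): it runs the detector once per angle and merges the results, assuming the pigo Go API shown in the project README (RunCascade taking a CascadeParams value plus an angle, ClusterDetections merging by IoU). Decoding the input image to grayscale pixels is left out.

```go
package main

import (
	"io/ioutil"
	"log"

	pigo "github.com/esimov/pigo/core"
)

// detectAllAngles sweeps the cascade over several rotation angles and merges
// the detections. Each extra angle runs the full cascade again, which is
// where the performance cost mentioned above comes from.
func detectAllAngles(pixels []uint8, rows, cols int) []pigo.Detection {
	cascade, err := ioutil.ReadFile("data/facefinder")
	if err != nil {
		log.Fatalf("cannot read the cascade file: %v", err)
	}

	p := pigo.NewPigo()
	classifier, err := p.Unpack(cascade)
	if err != nil {
		log.Fatalf("cannot unpack the cascade file: %v", err)
	}

	cParams := pigo.CascadeParams{
		MinSize:     20,
		MaxSize:     1000,
		ShiftFactor: 0.1,
		ScaleFactor: 1.1,
		ImageParams: pigo.ImageParams{
			Pixels: pixels, // grayscale pixel values of the input image
			Rows:   rows,
			Cols:   cols,
			Dim:    cols,
		},
	}

	var dets []pigo.Detection
	// Angle 0.0 means no rotation and 1.0 corresponds to a full 2*pi turn,
	// so stop before 1.0 to avoid repeating the unrotated pass.
	for angle := 0.0; angle < 1.0; angle += 0.25 {
		dets = append(dets, classifier.RunCascade(cParams, angle)...)
	}

	// Merge overlapping detections using an IoU threshold of 0.2,
	// the same value as the -iou flag in the CLI example above.
	return classifier.ClusterDetections(dets, 0.2)
}

func main() {
	// In a real program pixels/rows/cols would come from decoding the input
	// image to grayscale; they are omitted here to keep the sketch short.
	_ = detectAllAngles
}
```

The step of 0.25 is arbitrary: a finer step catches more in-between rotations at a proportionally higher cost, which is exactly the limitation described above.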
Hey Endre. Thanks for the library. I too am wondering about detecting faces that are in a profile view. I tested both eunseo.jpg and jimin3.jpg from this dataset https://github.com/Kagami/go-face-testdata without specifying an angle, but pigo didn't find a face.
Hey Nathan, that's true. I will try to investigate and find a workaround for this kind of limitation. Anyway, I'm not really sure the detector can detect 100% profile faces, but it can detect deviations from frontal face views.
Okay.
@esimov any update on this?
Not yet, right now I'm working on WASM support. Afterwards I can check this issue.
How is it going?
@esimov Any idea whether it will be possible to support profile or semi-profile face images?
Not yet!
I feel like this should be the top priority for now. Without rotation detection, performance on video streams is horrible even with the slightest movement of the head.
I have a few ideas in mind about how this issue could be resolved, but in the meantime any contribution is welcome ;).
I am wondering if we could just train a new cascade as explained here: https://github.com/nenadmarkus/pico/tree/master/gen/sample - using a dataset which includes faces in various degrees of profile/rotation? I am guessing there are some good datasets around at this point that include annotations for these variations of faces. This is also discussed here: https://en.wikipedia.org/wiki/Viola%E2%80%93Jones_object_detection_framework
The thing is, I don't really know if this kind of dataset is available on https://www.vision.caltech.edu/, which the training example refers to. The caltech_10k_webfaces dataset contains only frontal faces, not rotated ones. So in order to train the model with rotated faces we need a dataset of rotated faces, but also one in the format appropriate for the pico training tool.
So I wrote an email to Nenad, the creator of the Pico library and algorithm, and he responded to some of my questions regarding this. I was wondering if training it with more data that also includes faces in various degrees of profile might help improve detection when heads are turned?
I notice in your code in the caltechfaces.py file you seem to be converting the eye data into bounding boxes, and many datasets already come with bounding boxes. I am wondering if this would be fairly straightforward...
The question is: where did they obtain the training data from? Do you know, or did Nenad somehow mention, what kind of training data they used?
Why is this important to you?
There are lots of resources for this, here is a good place to start: https://www.face-rec.org/databases/ - in particular I think this might be a good dataset to use? http://www.cs.tau.ac.il/~wolf/ytfaces/
Because it's important to understand the requirements we need to satisfy to train a new cascade. In this case, as long as the bounding boxes in the dataset we find to train with maintain the same aspect ratio, we can more or less "plug" them into picolrn the same way Nenad did, even skipping a few steps in caltechfaces.py (because we don't have to generate our own bounding boxes).
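To make the "plug them in" idea a bit more concrete, here is a purely hypothetical sketch: the BBox/Sample types and the toSample helper are illustrative only (not part of pico or pigo), and the actual input format picolrn expects is not reproduced here. It only shows the geometric mapping from axis-aligned dataset bounding boxes to the centered row/column/size representation the pico detector works with.

```go
package main

import "fmt"

// BBox is an axis-aligned bounding box as found in many face datasets.
type BBox struct {
	X, Y, W, H float64 // top-left corner plus width and height, in pixels
}

// Sample is a center-based square region: row/col mark the face center
// and Size is the side length of the square.
type Sample struct {
	Row, Col, Size float64
}

// toSample converts a bounding box to a square, center-based sample.
// Taking the larger side keeps the whole face inside the square region.
func toSample(b BBox) Sample {
	size := b.W
	if b.H > size {
		size = b.H
	}
	return Sample{
		Row:  b.Y + b.H/2.0,
		Col:  b.X + b.W/2.0,
		Size: size,
	}
}

func main() {
	boxes := []BBox{{X: 120, Y: 80, W: 64, H: 72}}
	for _, b := range boxes {
		s := toSample(b)
		fmt.Printf("row=%.1f col=%.1f size=%.1f\n", s.Row, s.Col, s.Size)
	}
}
```

Whether the square side should be the larger box side, the smaller one, or a scaled value is exactly the kind of convention that has to match what caltechfaces.py does, so the new cascade stays consistent with the existing frontal-face one.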
Thanks for the links. BTW, some of them are broken or outdated. Now, since you mentioned that you have discussed this with Nenad and he told you that a few companies have already trained with frontal + profile faces, the obvious question is: did he by any chance obtain such a cascade file?
I haven't got any updates from you regarding my question. Do you know by any chance if such cascade files are available somewhere? Or should I contact Nenad personally?
No, they are not available as far as I understand, since these are companies and they cannot share their intellectual property like that. These are things you would have to train yourself, but luckily Nenad has really great documentation and information on how to do that; all of the ingredients are available, you just have to put the pieces together.
There should be 4 face rectangles in this image, so why does the -out file contain only 3 rectangles (with -iou 0.1 or -iou 0.2)?
(attached output image generated with -iou 0.1)
(attached output image generated with -iou 0.2)