Error when running code on custom dataset #19

Open
bharadwajdhornala opened this issue Aug 7, 2020 · 1 comment
bharadwajdhornala commented Aug 7, 2020

Hi @ankonzoid. Thanks for the code.

I am trying to run your code on my own dataset, which contains 500 random animal images of size 512 x 512.
The code abruptly stops with this error:

```
Reading train images from '.....\dataset\data2\train'...
Reading test images from '......\dataset\data2\test'...
Image shape = (512, 512, 3)
Loading VGG19 pre-trained model...
2020-08-07 23:02:22.275737: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
input_shape_model = (512, 512, 3)
output_shape_model = (16, 16, 512)
Applying image transformer to training images...
Applying image transformer to test images...
-> X_train.shape = (200, 512, 512, 3)
-> X_test.shape = (5, 512, 512, 3)
Inferencing embeddings using pre-trained model...
2020-08-07 23:02:59.016060: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 2147483648 exceeds 10% of system memory.
2020-08-07 23:03:45.605916: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 2147483648 exceeds 10% of system memory.
```

Please help!!

mojoee commented Aug 19, 2022

As the warning says, you are running out of memory. The images in your custom training dataset are larger than the ones provided in the repository, and your machine does not have enough RAM to handle that amount of data in one go on the CPU.
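For a rough sense of scale (my own back-of-the-envelope arithmetic, not something the log reports): VGG19 keeps its first block of feature maps at full input resolution, so 512 x 512 inputs get expensive fast:

```python
# Back-of-the-envelope memory estimate for VGG19 at 512x512 input (float32).
# Illustrative assumptions only, not measurements from the log above.
h, w = 512, 512
bytes_per_float = 4

input_image = h * w * 3 * bytes_per_float    # one input image: ~3 MiB
block1_acts = h * w * 64 * bytes_per_float   # first conv block output: ~64 MiB

print(f"input image:   {input_image / 2**20:.0f} MiB")   # 3 MiB
print(f"block1 output: {block1_acts / 2**20:.0f} MiB")   # 64 MiB

# The 2147483648-byte (2 GiB) allocation in the warning equals 32 such
# block1 activation maps, i.e. even a moderate batch already needs gigabytes.
```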

The solution that I see here is to use a GPU instead of the CPU, which requires a little coding effort and setting up your machine (installing CUDA) to handle the workload.
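A quick sanity check once CUDA is installed (standard TensorFlow 2.x calls; driver and version setup will vary by machine):

```python
import tensorflow as tf

# List the GPUs TensorFlow can see; an empty list means it will fall back to CPU.
print(tf.config.list_physical_devices('GPU'))

# Check whether this TensorFlow build was compiled with CUDA support at all.
print(tf.test.is_built_with_cuda())
```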

Another solution could be to decrease your image size, since then there is less data to handle. This might trade off accuracy, because downscaled images lose information.
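A minimal sketch of the downscaling idea, assuming images are loaded with Pillow (the directory layout and the 224 x 224 target, VGG19's native input size, are my assumptions, not the repository's code):

```python
from pathlib import Path
from PIL import Image

TARGET_SIZE = (224, 224)  # VGG19's native input size; adjust to taste

def load_resized(img_dir):
    """Load every .jpg in img_dir, downscaled to TARGET_SIZE as RGB."""
    images = []
    for path in sorted(Path(img_dir).glob("*.jpg")):
        img = Image.open(path).convert("RGB").resize(TARGET_SIZE, Image.LANCZOS)
        images.append(img)
    return images
```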
