Tagging @RMeli who has more experience with this here.
We are at the point where we are tracking quite substantial amounts of data (e.g. XTC files in the MD tutorials and ~100 × 1 MB PDB files in the homology modelling tutorial).
We really need a policy in place to make sure we stop tracking files that get removed, and to limit the addition of new large files (e.g. by encouraging tarballs/zip archives). A simple size check, like the sketch below, could help enforce this in CI.
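To make that concrete, here is a minimal sketch of a size check that could run in CI or as a pre-commit hook. The 5 MB per-file threshold and the skipped directories are assumptions, not agreed policy:

```python
#!/usr/bin/env python3
"""Flag files above a size threshold so large data doesn't sneak into the repo.

Sketch only: the 5 MB threshold and the skipped directories are assumptions,
not an agreed policy.
"""
from pathlib import Path
import sys

THRESHOLD_MB = 5      # assumed limit; adjust to whatever policy we agree on
SKIP_DIRS = {".git"}  # never inspect git internals

def oversized_files(root: Path):
    """Yield (path, size in MB) for every file under `root` above the threshold."""
    for path in root.rglob("*"):
        if path.is_file() and not SKIP_DIRS.intersection(path.parts):
            size_mb = path.stat().st_size / 1e6
            if size_mb > THRESHOLD_MB:
                yield path, size_mb

if __name__ == "__main__":
    offenders = list(oversized_files(Path(".")))
    for path, size_mb in offenders:
        print(f"{path}: {size_mb:.1f} MB exceeds {THRESHOLD_MB} MB")
    sys.exit(1 if offenders else 0)  # non-zero exit fails the CI job
```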
The issue of large files is essentially solved by git lfs, but unfortunately I haven't seen an easy way to set it up locally (on GitHub you have to pay for additional space; the free 1 GB of storage runs out very quickly).
If we can get all files under 100 MB we are OK. Otherwise, a good option would be to store the files on Zenodo and have a script to download them locally (see the sketch below).
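A minimal sketch of such a download script, assuming the data lives in a single Zenodo record; the record URL, file names, and destination directory below are hypothetical placeholders, not an existing deposit:

```python
#!/usr/bin/env python3
"""Download tutorial data files from a Zenodo record instead of tracking them in git.

Sketch only: RECORD_URL and FILES are hypothetical placeholders for whatever
deposit we end up creating.
"""
from pathlib import Path
from urllib.request import urlretrieve

RECORD_URL = "https://zenodo.org/record/XXXXXXX/files"  # placeholder record
FILES = ["trajectory.xtc", "templates.tar.gz"]          # placeholder file names
DEST = Path("data")

def fetch_all(dest: Path = DEST) -> None:
    """Fetch each file into `dest`, skipping files that are already present."""
    dest.mkdir(exist_ok=True)
    for name in FILES:
        target = dest / name
        if target.exists():
            print(f"{target} already present, skipping")
            continue
        print(f"Downloading {name} ...")
        urlretrieve(f"{RECORD_URL}/{name}", target)

if __name__ == "__main__":
    fetch_all()
```

The tutorials would then call this script (or an equivalent Makefile target) as a setup step, so the repository itself only tracks code and small inputs.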