Loading some SIA files, such as PA##.dbc, directly into a dataframe consumes roughly 20 GB of RAM.
Should we add support for something like pandas' `chunksize` parameter (as in `pandas.read_csv`) to handle this? If so, can you identify any caveats with this approach, @fccoelho?
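To make the idea concrete, here is a minimal sketch of the chunking pattern being proposed. It assumes the records arrive from some lazy iterator of dicts (for example a `dbfread.DBF` reader over a decompressed .dbc, which streams one record at a time); the synthetic generator below just stands in for that source, so only `chunksize` rows are materialized at once.

```python
from itertools import islice

import pandas as pd


def dataframe_chunks(records, chunksize=100_000):
    """Yield DataFrames built from successive slices of a record iterator.

    `records` can be any iterable of dict-like rows (e.g. a lazy DBF
    reader), so at most `chunksize` rows are held in memory at a time.
    """
    it = iter(records)
    while True:
        batch = list(islice(it, chunksize))
        if not batch:
            break
        yield pd.DataFrame(batch)


# Synthetic demo: 250 records in chunks of 100 -> chunks of 100, 100, 50 rows.
demo = ({"id": i, "value": i * 2} for i in range(250))
chunks = list(dataframe_chunks(demo, chunksize=100))
```

The caller would then process or persist each chunk instead of holding the whole file in memory.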
This is a real problem, @mohr023. If we iterate over the DBF records as we read, we would also need to iterate over them when saving the cachefile, and we could no longer return the full dataframe after downloading.
If you have a good idea for solving this, feel free to submit a pull request.
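One way to handle the cachefile side of this is to flush each chunk to disk as it arrives, appending to the cachefile instead of writing it in one shot. A minimal sketch, assuming the chunks come from some iterator (here small synthetic DataFrames stand in for chunks read from a DBC/DBF file):

```python
import os
import tempfile

import pandas as pd

# Hypothetical cachefile path; PySUS's real cache location differs.
cache = os.path.join(tempfile.mkdtemp(), "pa_cache.csv")

# Stand-in for chunks streamed from a DBF reader: three 100-row frames.
chunk_iter = (
    pd.DataFrame({"id": range(start, start + 100)}) for start in (0, 100, 200)
)

# Append each chunk as it arrives; write the header only for the first one,
# so the full dataframe never has to sit in memory at once.
for n, chunk in enumerate(chunk_iter):
    chunk.to_csv(cache, mode="a", header=(n == 0), index=False)

# Reading the cache back yields the concatenation of all chunks.
cached = pd.read_csv(cache)
```

The trade-off the comment above points at remains: with this pattern the download call can only return the cachefile path or a chunk iterator, not the assembled dataframe, unless the caller opts to read the cache back in afterwards.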