AssertionError: Not all signals have the same length #40
@laurelrr Great news! Can you share this openephys session with us? I'll take a closer look.
@weiglszonja You got it! I am attempting to transfer now. I created a new folder called hao_subject14 in the Tye_data_share drive with all the ephys data. Thanks
@laurelrr do you have any background information about this dataset in particular? There are gaps in this data, but they are not consistent across channels. @samuelgarcia suggested we should check how many files would have failed without ignoring the timestamp errors:

```python
import pandas as pd
from neo.rawio import OpenEphysRawIO

# Change this file path to the master excel file that was used for Hao's conversion
excel_file_path = "/Volumes/t7-ssd/Hao_NWB/session_config.xlsx"
config = pd.read_excel(excel_file_path)

# We only need the ecephys_folder_path column to collect the list of folders
openephys_folder_paths = config["ecephys_folder_path"].dropna().tolist()

affected_files = []
for folder_path in openephys_folder_paths:
    io = OpenEphysRawIO(dirname=folder_path, ignore_timestamps_errors=False)
    try:
        io.parse_header()
    except (ValueError, AssertionError):
        # Also catch AssertionError, since the failure in this issue
        # ("Not all signals have the same length") is raised as an AssertionError.
        affected_files.append(folder_path)

print(len(affected_files))
print(affected_files)
```
@weiglszonja I'm unfamiliar with the details of the dataset, but I can ask Hao if there is anything unusual about these subjects. I'm not entirely clear on what the error is -- are the different channels in the recording stopping at different times? I'll follow up with you and Hao over email.
@weiglszonja Is it possible that there is just interference getting picked up on some channels that cuts the recording?
Hi @laurelrr. How many files in total do you have (are the 10 files a small part)?
For this dataset (i.e., what went into our journal article) I have 32 files in total, but it seems only 10 are having this issue.
Hello, after rerunning the conversion I still see errors for three subjects (H14, H29, and H41); I've attached the logs. Let me know if you need anything else.
Well, at least it's nice to have some good news!
From the logs, it looks like yet another deviation from the expected structure; could you share one of those (or even all 3) remaining files over the data share? We'll send it back for another round of fixes on the neo side.
Thank you @laurelrr for letting us know. It looks like H14 and H41 have the same edge case, but H29 has a different error, and for that it would be really useful to have the data shared as well.
Sure, happy to share the data. I'll try later today through globus. Thanks guys!
Thank you @laurelrr for sharing the data.
So, I think you might already have subject 14 in the Tye_data_share folder. Please let me know if not. I have uploaded the other two subjects, and you should now have all the data for these three problematic subjects. I tried rerunning it as suggested above by @weiglszonja, but I get the same errors on all three subjects.
Thank you @laurelrr, I forwarded this issue to @samuelgarcia who is looking into it. I'll let you know once we know how to fix it.
We managed to replicate the bug for H29; when reading only the CH .continuous files we didn't see any error, but when adding all the AUX and ADC channels we were seeing negative sample numbers. @samuelgarcia is going to work on a fix, but until then I can provide a workaround for this subject: I managed to write the data without any error or modification to the script just by removing the AUX and ADC .continuous files from the folder. I would suggest trying this out and rerunning the conversion for H29 (a sketch of the idea follows below).
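For reference, a minimal sketch of that workaround; the folder path and the `_AUX`/`_ADC` name patterns are assumptions based on the standard open-ephys legacy naming, not taken from the thread:

```python
# Minimal sketch of the H29 workaround: move the non-neural channels aside
# so only the CH .continuous files remain in the session folder.
from pathlib import Path

session_folder = Path("/path/to/H29")  # placeholder: the H29 openephys folder
excluded_folder = session_folder.parent / f"{session_folder.name}_excluded"
excluded_folder.mkdir(exist_ok=True)

for continuous_file in session_folder.glob("*.continuous"):
    # Assumed naming convention: AUX and ADC channels contain "_AUX"/"_ADC".
    if "_AUX" in continuous_file.name or "_ADC" in continuous_file.name:
        continuous_file.rename(excluded_folder / continuous_file.name)
```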
For H41 and H14 the error is with chunking; unfortunately you'll have to edit the conversion script. You can modify tye-lab-to-nwb/src/tye_lab_to_nwb/neurotensin_valence/neurotensin_valence_convert_session.py at line 78 (commit 138fcf3). The suggested change at this line overrides the iterator options as:

```python
conversion_options = dict(Recording=dict(stub_test=stub_test, iterator_opts=dict(buffer_shape=(1024, 32), chunk_shape=(1024, 32))))
```

With this change I managed to write the data for H41, but unfortunately with this small chunk size the conversion will take considerably longer to finish. I would suggest letting it run overnight.
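For context, a hedged sketch of roughly how this override would sit in a NeuroConv-style conversion call; `converter`, `metadata`, and the output path are placeholders from the surrounding script, not taken from the thread:

```python
# Hedged sketch, assuming a NeuroConv-style NWBConverter as used in tye-lab-to-nwb.
stub_test = False  # placeholder; set as in the surrounding script
conversion_options = dict(
    Recording=dict(
        stub_test=stub_test,
        # Small fixed chunks sidestep the chunking error for H41/H14, at the cost of speed.
        iterator_opts=dict(buffer_shape=(1024, 32), chunk_shape=(1024, 32)),
    )
)
converter.run_conversion(          # `converter` is the session's NWBConverter instance
    nwbfile_path="H41.nwb",        # placeholder output path
    metadata=metadata,             # metadata dict built earlier in the script
    conversion_options=conversion_options,
)
```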
Hi, I am getting an error when I try to upload files to the DANDI Archive. Here is the error message:
Looks like this is the same as NeurodataWithoutBorders/pynwb#1843, seeking response from upstream |
Resolved in latest release of PyNWB |
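If it helps, a quick way to confirm the environment actually picked up the fixed release (the exact minimum version is not stated in the thread, so check it against the linked pynwb issue):

```python
# Print the installed PyNWB version; if it predates the release containing the
# fix, upgrade (e.g. `pip install --upgrade pynwb`) and retry `dandi upload`.
import pynwb

print(pynwb.__version__)
```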
The good news is that this is the last subject in my dataset... I've managed to convert all the other data to NWB.
Content of neurotensin_valence_convert_session_14.py: