
AssertionError: Not all signals have the same length #40

Closed
laurelrr opened this issue Oct 12, 2023 · 21 comments


@laurelrr

The good news is that this is the last subject in my dataset... I've managed to convert all the other data to nwb.

(tyenwbenv39) lkeyes@puma:~/Projects/GIT/tye-lab-to-nwb$ python src/tye_lab_to_nwb/neurotensin_valence/neurotensin_valence_convert_session_14.py
Source data is valid!
Continuous files do not have aligned timestamps; clipping to make them aligned.
Traceback (most recent call last):
  File "/nadata/snlkt/home/lkeyes/Projects/GIT/tye-lab-to-nwb/src/tye_lab_to_nwb/neurotensin_valence/neurotensin_valence_convert_session_14.py", line 265, in <module>
    session_to_nwb(
  File "/nadata/snlkt/home/lkeyes/Projects/GIT/tye-lab-to-nwb/src/tye_lab_to_nwb/neurotensin_valence/neurotensin_valence_convert_session_14.py", line 149, in session_to_nwb
    converter = NeurotensinValenceNWBConverter(source_data=source_data)
  File "/home/lkeyes/anaconda3/envs/tyenwbenv39/lib/python3.9/site-packages/neuroconv/nwbconverter.py", line 65, in __init__
    self.data_interface_objects = {
  File "/home/lkeyes/anaconda3/envs/tyenwbenv39/lib/python3.9/site-packages/neuroconv/nwbconverter.py", line 66, in <dictcomp>
    name: data_interface(**source_data[name])
  File "/home/lkeyes/anaconda3/envs/tyenwbenv39/lib/python3.9/site-packages/neuroconv/datainterfaces/ecephys/openephys/openephysdatainterface.py", line 45, in __new__
    return OpenEphysLegacyRecordingInterface(
  File "/home/lkeyes/anaconda3/envs/tyenwbenv39/lib/python3.9/site-packages/neuroconv/datainterfaces/ecephys/openephys/openephyslegacydatainterface.py", line 56, in __init__
    available_streams = self.get_stream_names(
  File "/home/lkeyes/anaconda3/envs/tyenwbenv39/lib/python3.9/site-packages/neuroconv/datainterfaces/ecephys/openephys/openephyslegacydatainterface.py", line 17, in get_stream_names
    stream_names, _ = OpenEphysLegacyRecordingExtractor.get_streams(
  File "/home/lkeyes/anaconda3/envs/tyenwbenv39/lib/python3.9/site-packages/spikeinterface/extractors/neoextractors/neobaseextractor.py", line 71, in get_streams
    neo_reader = cls.get_neo_io_reader(cls.NeoRawIOClass, **neo_kwargs)
  File "/home/lkeyes/anaconda3/envs/tyenwbenv39/lib/python3.9/site-packages/spikeinterface/extractors/neoextractors/neobaseextractor.py", line 64, in get_neo_io_reader
    neo_reader.parse_header()
  File "/home/lkeyes/anaconda3/envs/tyenwbenv39/lib/python3.9/site-packages/neo/rawio/baserawio.py", line 178, in parse_header
    self._parse_header()
  File "/home/lkeyes/anaconda3/envs/tyenwbenv39/lib/python3.9/site-packages/neo/rawio/openephysrawio.py", line 164, in _parse_header
    assert all(all_sigs_length[0] == e for e in all_sigs_length),\
AssertionError: Not all signals have the same length
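For anyone hitting this assertion: the check that fails lives in neo's `openephysrawio.py` and requires every `.continuous` file in the session folder to contain the same number of samples. A quick way to see which channel is the odd one out is to estimate each file's sample count from its size on disk. This is a hedged diagnostic sketch, not part of the converter; it assumes the standard legacy OpenEphys layout (a 1024-byte header followed by 2070-byte records, each holding 1024 int16 samples):

```python
from pathlib import Path

HEADER_BYTES = 1024        # legacy OpenEphys .continuous header size
RECORD_BYTES = 2070        # 8 B timestamp + 2 B sample count + 2 B rec. number + 2048 B samples + 10 B marker
SAMPLES_PER_RECORD = 1024

def estimated_samples(file_size: int) -> int:
    """Estimate the number of samples in a legacy .continuous file from its size."""
    return (file_size - HEADER_BYTES) // RECORD_BYTES * SAMPLES_PER_RECORD

def report_lengths(folder: str) -> dict:
    """Map each .continuous file in `folder` to its estimated sample count and flag mismatches."""
    lengths = {
        f.name: estimated_samples(f.stat().st_size)
        for f in sorted(Path(folder).glob("*.continuous"))
    }
    if len(set(lengths.values())) > 1:
        print("Mismatched signal lengths:")
        for name, n_samples in lengths.items():
            print(f"  {name}: {n_samples} samples")
    return lengths
```

Running `report_lengths` on the failing session folder should show which channel files are shorter or longer than the rest.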

Content of neurotensin_valence_convert_session_14.py:

"""Primary script to run to convert an entire session for of data using the NWBConverter."""
import traceback
from importlib.metadata import version
from pathlib import Path
from typing import Optional, Dict
from warnings import warn

from dateutil import parser

from neuroconv.utils import (
    load_dict_from_file,
    dict_deep_update,
    FilePathType,
    FolderPathType,
)
from nwbinspector import inspect_nwbfile
from nwbinspector.inspector_tools import save_report, format_messages
from packaging.version import Version

from tye_lab_to_nwb.neurotensin_valence import NeurotensinValenceNWBConverter


def session_to_nwb(
    nwbfile_path: FilePathType,
    ecephys_recording_folder_path: Optional[FolderPathType] = None,
    subject_metadata: Optional[Dict[str, str]] = None,
    plexon_file_path: Optional[FilePathType] = None,
    events_file_path: Optional[FilePathType] = None,
    pose_estimation_file_path: Optional[FilePathType] = None,
    pose_estimation_config_file_path: Optional[FilePathType] = None,
    pose_estimation_sampling_rate: Optional[float] = None,
    session_start_time: Optional[str] = None,
    original_video_file_path: Optional[FilePathType] = None,
    labeled_video_file_path: Optional[FilePathType] = None,
    confocal_images_oif_file_path: Optional[FilePathType] = None,
    confocal_images_composite_tif_file_path: Optional[FilePathType] = None,
    stub_test: bool = False,
):
    """
    Converts a single session to NWB.

    Parameters
    ----------
    nwbfile_path : FilePathType
        The file path to the NWB file that will be created.
    ecephys_recording_folder_path: FolderPathType
         The path that points to the folder where the OpenEphys (.continuous) files are located.
    subject_metadata: dict, optional
        The optional metadata for the experimental subject.
    plexon_file_path: FilePathType, optional
        The path that points to the Plexon (.plx) file that contains the spike times.
    events_file_path: FilePathType, optional
        The path that points to the .mat file that contains the event onset and offset times.
    pose_estimation_file_path: FilePathType, optional
        The path that points to the .csv file that contains the DLC output.
    pose_estimation_config_file_path: FilePathType, optional
        The path that points to the .pickle file that contains the DLC configuration settings.
    pose_estimation_sampling_rate: float, optional
        The sampling rate (Hz) of the behavior data; required when the DLC configuration (.pickle) file is not available.
    session_start_time: str, optional
        The start time of the session in YYYY-MM-DDTHH:MM:SS format; required when no ecephys recording is provided.
    original_video_file_path: FilePathType, optional
        The path that points to the original behavior movie that was used for pose estimation.
    labeled_video_file_path: FilePathType, optional
        The path that points to the labeled behavior movie.
    confocal_images_oif_file_path: FilePathType, optional
        The path that points to the Olympus Image File (.oif).
    confocal_images_composite_tif_file_path: FilePathType, optional
        The path that points to the TIF image that contains the confocal images aggregated over depth.
    stub_test: bool, optional
        For testing purposes; when stub_test=True, only a subset of the ecephys and plexon data is written.
        Default is to write the whole ecephys recording and plexon data to the file.
    """

    source_data = dict()
    conversion_options = dict()

    # Add Recording
    if ecephys_recording_folder_path:
        recording_source_data = dict(folder_path=str(ecephys_recording_folder_path), stream_name="Signals CH")
        if Version(version("neo")) > Version("0.12.0"):
            recording_source_data.update(ignore_timestamps_errors=True)

        source_data.update(dict(Recording=recording_source_data))
        conversion_options.update(dict(Recording=dict(stub_test=stub_test)))

    # Add Sorting (optional)
    if plexon_file_path:
        source_data.update(dict(Sorting=dict(file_path=str(plexon_file_path))))
        conversion_options.update(dict(Sorting=dict(stub_test=stub_test)))

    # Add Behavior (optional)
    # Add events
    if events_file_path:
        event_names_mapping = {
            0: "reward_stimulus_presentation",
            1: "phototagging",
            2: "shock_stimulus_presentation",
            3: "reward_delivery",
            4: "shock_relay",
            5: "port_entry",
            6: "neutral_stimulus_presentation",
        }
        read_kwargs = dict(event_names_mapping=event_names_mapping)
        events_source_data = dict(file_path=str(events_file_path), read_kwargs=read_kwargs)
        source_data.update(dict(Events=events_source_data))

        events_column_mappings = dict(onset="start_time", offset="stop_time")
        events_conversion_options = dict(column_name_mapping=events_column_mappings)
        conversion_options.update(
            dict(
                Events=events_conversion_options,
            )
        )

    # Add pose estimation (optional)
    pose_estimation_source_data = dict()
    pose_estimation_conversion_options = dict()
    if pose_estimation_file_path:
        pose_estimation_source_data.update(file_path=str(pose_estimation_file_path))
        if pose_estimation_config_file_path:
            pose_estimation_source_data.update(config_file_path=str(pose_estimation_config_file_path))
        elif pose_estimation_sampling_rate is not None:
            pose_estimation_conversion_options.update(rate=float(pose_estimation_sampling_rate))

        source_data.update(dict(PoseEstimation=pose_estimation_source_data))

    if original_video_file_path:
        pose_estimation_conversion_options.update(original_video_file_path=original_video_file_path)
        source_data.update(
            dict(
                OriginalVideo=dict(file_paths=[str(original_video_file_path)]),
            )
        )
    if labeled_video_file_path:
        pose_estimation_conversion_options.update(labeled_video_file_path=labeled_video_file_path)

    if pose_estimation_source_data:
        # The edges between the nodes (e.g. labeled body parts), defined as an array of index pairs.
        edges = [(0, 1), (0, 2), (2, 3), (1, 3), (5, 6), (5, 7), (5, 8), (5, 9)]
        pose_estimation_conversion_options.update(edges=edges)
        conversion_options.update(dict(PoseEstimation=pose_estimation_conversion_options))

    # Add confocal images
    images_source_data = dict()
    if confocal_images_oif_file_path:
        images_source_data.update(file_path=str(confocal_images_oif_file_path))
        if confocal_images_composite_tif_file_path:
            images_source_data.update(composite_tif_file_path=str(confocal_images_composite_tif_file_path))

        source_data.update(dict(Images=images_source_data))

    converter = NeurotensinValenceNWBConverter(source_data=source_data)

    # Add datetime to conversion
    metadata = converter.get_metadata()

    # Update default metadata with the editable in the corresponding yaml file
    editable_metadata_path = Path(__file__).parent / "metadata" / "general_metadata.yaml"
    editable_metadata = load_dict_from_file(editable_metadata_path)
    metadata = dict_deep_update(metadata, editable_metadata)

    if subject_metadata:
        metadata = dict_deep_update(metadata, dict(Subject=subject_metadata))

    if "session_id" not in metadata["NWBFile"]:
        if ecephys_recording_folder_path:
            ecephys_recording_folder_path = Path(ecephys_recording_folder_path)
            ecephys_folder_name = ecephys_recording_folder_path.name
            session_id = ecephys_folder_name.replace(" ", "").replace("_", "-")
        elif pose_estimation_file_path:
            session_id = Path(pose_estimation_file_path).name.replace(" ", "").replace("_", "-")
        else:
            session_id = Path(nwbfile_path).stem.replace(" ", "").replace("_", "-")

        metadata["NWBFile"].update(session_id=session_id)

    if "session_start_time" not in metadata["NWBFile"]:
        if session_start_time is None:
        raise ValueError(
            "When the ecephys recording is not specified, the start time of the session must be provided. "
            "Specify session_start_time in YYYY-MM-DDTHH:MM:SS format (e.g. 2023-08-21T15:30:00)."
        )
        session_start_time_dt = parser.parse(session_start_time)
        metadata["NWBFile"].update(session_start_time=session_start_time_dt)

    nwbfile_path = Path(nwbfile_path)
    try:
        # Run conversion
        converter.run_conversion(
            nwbfile_path=str(nwbfile_path), metadata=metadata, conversion_options=conversion_options
        )

        # Run inspection for nwbfile
        results = list(inspect_nwbfile(nwbfile_path=nwbfile_path))
        report_path = nwbfile_path.parent / f"{nwbfile_path.stem}_inspector_result.txt"
        save_report(
            report_file_path=report_path,
            formatted_messages=format_messages(
                results,
                levels=["importance", "file_path"],
            ),
        )
    except Exception as e:
        with open(f"{nwbfile_path.parent}/{nwbfile_path.stem}_error_log.txt", "w") as f:
            f.write(traceback.format_exc())
        warn(f"There was an error during the conversion of {nwbfile_path}. The full traceback: {e}")


if __name__ == "__main__":
    # Parameters for conversion
    # The path that points to the folder where the OpenEphys (.continuous) files are located.
    ecephys_folder_path = Path("/nadata/snlkt/data/hao/Neurotensin/ephys/recordings/forLaurel/14_2019-08-30_09-21-05_Disc4")

    # The path that points to the Plexon file (optional)
    # plexon_file_path = None
    plexon_file_path = Path("/nadata/snlkt/data/hao/Neurotensin/ephys/recordings/forLaurel/14_2019-08-30_09-21-05_Disc4/0014_20190825_Disc4.plx")

    # Parameters for events data (optional)
    # The file path that points to the events.mat file
    # events_mat_file_path = None
    events_mat_file_path = Path(
        "/nadata/snlkt/data/hao/Neurotensin/ephys/recordings/forLaurel/14_2019-08-30_09-21-05_Disc4/0014_20190825_Disc4_events.mat"
    )

    # Parameters for pose estimation data (optional)
    # The file path that points to the DLC output (.CSV file)
    # pose_estimation_file_path = None
    pose_estimation_file_path = (
        "/nadata/snlkt/data/hao/Neurotensin/DLC/DLCresults/14_DiscDLC_resnet50_Hao_MedPC_ephysFeb9shuffle1_800000.csv"
    )
    # The file path that points to the DLC configuration file (.pickle file), optional
    # pose_estimation_config_file_path = None
    pose_estimation_config_file_path = (
        "/snlkt/data/hao/Neurotensin/DLC/DLC_files_from_Aardvark/14_DiscDLC_resnet50_Hao_MedPC_ephysFeb9shuffle1_800000includingmetadata.pickle"
    )
    # If the pickle file is not available the sampling rate in units of Hz for the behavior data must be provided.
    pose_estimation_sampling_rate = None

    # For sessions where only the pose estimation data is available the start time of the session must be provided.
    # The session_start_time in YYYY-MM-DDTHH:MM:SS format (e.g. 2023-08-21T15:30:00).
    session_start_time = None

    # The file path that points to the behavior movie file, optional
    # original_video_file_path = None
    original_video_file_path = r"/nadata/snlkt/data/hao/Neurotensin/ephys/recordings/PVT-BLA_NT_KO_NACproj batch1/Discrimination day 4/video/14_20190830_Disc4.asf"
    # The file path that points to the labeled behavior movie file, optional
    # labeled_video_file_path = None
    labeled_video_file_path = "/nadata/snlkt/data/hao/Neurotensin/DLC/DLCresults/14_DiscDLC_resnet50_Hao_MedPC_ephysFeb9shuffle1_800000_labeled.mp4"

    # Parameters for histology images (optional)
    # The file path to the Olympus Image File (.oif)
    # confocal_images_oif_file_path = None
    confocal_images_oif_file_path = "/nadata/snlkt/data/hao/Neurotensin/Imaging/H9-14/H14/H14_slidePVT4_slice6_zstack_40x_PVT.oif"
    # The file path to the aggregated confocal images in TIF format.
    confocal_images_composite_tif_file_path = None
    #confocal_images_composite_tif_file_path = r"/snlkt/data/hao/Neurotensin/CellProfiler/NT CRISPR/channel merged/H13_MAX_Composite.tif"

    # The file path where the NWB file will be created.
    nwbfile_path = Path("/nadata/snlkt/data/hao/Neurotensin/NWB/nwbfiles/H14_Disc4.nwb")

    # For faster conversion, stub_test=True would only write a subset of ecephys and plexon data.
    # When running a full conversion, use stub_test=False.
    stub_test = False

    # subject metadata (optional)
    subject_metadata = dict(sex="M", age="P60D", genotype="Wild type", strain="C57BL/6J", subject_id="H14")

    session_to_nwb(
        nwbfile_path=nwbfile_path,
        ecephys_recording_folder_path=ecephys_folder_path,
        subject_metadata=subject_metadata,
        plexon_file_path=plexon_file_path,
        events_file_path=events_mat_file_path,
        pose_estimation_file_path=pose_estimation_file_path,
        pose_estimation_config_file_path=pose_estimation_config_file_path,
        pose_estimation_sampling_rate=pose_estimation_sampling_rate,
        session_start_time=session_start_time,
        original_video_file_path=original_video_file_path,
        labeled_video_file_path=labeled_video_file_path,
        confocal_images_oif_file_path=confocal_images_oif_file_path,
        confocal_images_composite_tif_file_path=confocal_images_composite_tif_file_path,
        stub_test=stub_test,
    )
@weiglszonja
Collaborator

@laurelrr Great news! Can you share this openephys session with us? I'll take a closer look.

@laurelrr
Author

@weiglszonja You got it! I am attempting to transfer now. I created a new folder called hao_subject14 in the Tye_data_share drive with all the ephys data. Thanks

@weiglszonja
Collaborator

Thank you @laurelrr for sharing this data. It looks like something we need to look into, so I opened an issue for this at neo. I'll let you know once I know more about this, or learn about a more immediate solution.

@weiglszonja
Collaborator

weiglszonja commented Oct 24, 2023

@laurelrr do you have any background information about this dataset in particular? There are gaps in this data but they are not consistent across channels.

@samuelgarcia suggested we should check how many files would have failed without using ignore_timestamps_errors=True flag. Can you run this code snippet and send us what is printed on the console?

import pandas as pd
from neo.rawio import OpenEphysRawIO

# Change this file path to the master Excel file that was used for Hao's conversion
excel_file_path = "/Volumes/t7-ssd/Hao_NWB/session_config.xlsx"
config = pd.read_excel(excel_file_path)
# We only need the ecephys_folder_path column to collect the list of folders
openephys_folder_paths = config["ecephys_folder_path"].dropna().tolist()

affected_files = []
for folder_path in openephys_folder_paths:
    io = OpenEphysRawIO(dirname=folder_path, ignore_timestamps_errors=False)
    try:
        io.parse_header()
    except (ValueError, AssertionError):  # timestamp gaps or unequal signal lengths
        affected_files.append(folder_path)

print(len(affected_files))
print(affected_files)

@laurelrr
Author

print(len(affected_files))
10
print(affected_files)
['/nadata/snlkt/data/hao/Neurotensin/ephys/recordings/forLaurel/1_2019-07-13_11-59-42_DiscS', '/nadata/snlkt/data/hao/Neurotensin/ephys/recordings/forLaurel/8_2019-07-14_18-11-14_DiscS', '/nadata/snlkt/data/hao/Neurotensin/ephys/recordings/forLaurel/11_2019-08-30_12-58-04_Disc4', '/nadata/snlkt/data/hao/Neurotensin/ephys/recordings/forLaurel/13_2019-08-30_09-21-02_Disc4', '/nadata/snlkt/data/hao/Neurotensin/ephys/recordings/forLaurel/14_2019-08-30_09-21-05_Disc4', '/nadata/snlkt/data/hao/Neurotensin/ephys/recordings/forLaurel/H24_2019-12-27_10-25-11_Disc3', '/nadata/snlkt/data/hao/Neurotensin/ephys/recordings/forLaurel/H29_2020-02-20_13-42-59_Disc4_20k', '/nadata/snlkt/data/hao/Neurotensin/ephys/recordings/forLaurel/H35_2020-04-30_09-57-13_Disc7_8,22,23', '/nadata/snlkt/data/hao/Neurotensin/ephys/recordings/forLaurel/H39_2020-04-29_13-33-40_Disc5', '/nadata/snlkt/data/hao/Neurotensin/ephys/recordings/forLaurel/H41_2020-04-29_16-55-26_Disc5_9']

@laurelrr
Author

@weiglszonja I'm unfamiliar with the details of the dataset, but I can ask Hao if there is anything unusual about these subjects. I'm not entirely clear on what the error is -- are the different channels in the recording stopping at different times? I'll follow up with you and Hao over email.

@laurelrr
Author

@weiglszonja Is it possible that there is just interference getting picked up on some channels that cuts the recording?

@samuelgarcia

Hi @laurelrr.
Removing channels in OpenEphysRawIO is not implemented; it could be done, but not right away.

How many files do you have in total (are the 10 files a small part)?

@laurelrr
Author

For this dataset (e.g., what went into our journal article) I have 32 files in total, but it seems only 10 are having this issue.
However, we do have supplementary data (not in the paper) that I planned to convert and share next.

@weiglszonja
Collaborator

Thank you for the info @laurelrr. I took this issue to the OpenEphys team (here) to ask what the solution could be.

@laurelrr
Author

Hello,
I went ahead with updating neo using the suggestion in Szonja's email from 1/31. I prepared a master config file for the 10 remaining subjects (listed above) and ran these steps:

conda activate tyenwbenv39
pip install -r src/tye_lab_to_nwb/neurotensin_valence/neurotensin_valence_requirements.txt
pip install git+https://github.com/NeuralEnsemble/python-neo.git@4158fbbe2f31339b8909bd3c0672e74b152424d2
python ~/Projects/GIT/tye-lab-to-nwb/src/tye_lab_to_nwb/neurotensin_valence/neurotensin_valence_convert_all_sessions.py

where the excel_file_path was updated to point to the newest config file.
Though 7 seem to have processed correctly, I received error logs for 3 of the subjects and am attaching them here.
H14_Disc4_error_log.txt
H29_Disc4_error_log.txt
H41_Disc5_error_log.txt

Let me know if you need anything else.

@CodyCBakerPhD
Member

Though 7 seem to have processed correctly,

Well, at least it's nice to have some good news!

I received error logs for 3 of the subjects and am attaching them here.

From the logs, it looks like yet another deviation from the expected structure; could you share one of those remaining files (or even all 3) over the data share, and we'll use it for another round of fixes on the neo side?

@weiglszonja
Collaborator

weiglszonja commented Feb 21, 2024

Thank you @laurelrr for letting us know. It looks like H14 and H41 hit the same edge case, but H29 has a different error, and for that one it would be really useful to have the data shared as well.

@samuelgarcia

@laurelrr
Author

Sure, happy to share the data. I'll try later today through globus. Thanks guys!

@weiglszonja
Collaborator

Thank you @laurelrr for sharing the data.
While we look further into this, can you try installing Neo from the master branch instead?

conda activate tyenwbenv39
pip install -r src/tye_lab_to_nwb/neurotensin_valence/neurotensin_valence_requirements.txt
pip install git+https://github.com/NeuralEnsemble/python-neo.git@master
python ~/Projects/GIT/tye-lab-to-nwb/src/tye_lab_to_nwb/neurotensin_valence/neurotensin_valence_convert_all_sessions.py

@laurelrr
Author

So, I think you might already have subject 14 in the Tye_data_share folder. Please let me know if not. I have uploaded the other two subjects, and you should now have all the data for these three problematic subjects.

I tried the run suggested above by @weiglszonja, but I get the same errors on all three subjects.

@weiglszonja
Collaborator

Thank you @laurelrr, I forwarded this issue to @samuelgarcia who is looking into it. I'll let you know once we know how to fix it.

@weiglszonja
Collaborator

weiglszonja commented Mar 13, 2024

We managed to replicate the bug for H29: when reading only the CH .continuous files we didn't see any error, but when adding the AUX and ADC channels we were seeing negative sample numbers. @samuelgarcia is going to work on a fix, but until then I can provide a workaround for this subject.

I managed to write the data without any error or modification to the script just by removing the AUX and ADC .continuous files from the folder. I suggest trying this out and rerunning the conversion for H29.
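If it helps to automate that workaround, here is a hedged sketch that moves the AUX and ADC `.continuous` files into a sibling folder before running the conversion. It assumes the usual legacy naming scheme (e.g. `100_AUX1.continuous`, `100_ADC1.continuous`); adjust the patterns if your files are named differently:

```python
import shutil
from pathlib import Path

def stash_aux_adc(folder: str, stash_name: str = "excluded_channels") -> list:
    """Move AUX/ADC .continuous files out of `folder` so neo only sees the CH channels."""
    session_folder = Path(folder)
    stash = session_folder.parent / stash_name
    stash.mkdir(exist_ok=True)
    moved = []
    for pattern in ("*_AUX*.continuous", "*_ADC*.continuous"):
        for f in sorted(session_folder.glob(pattern)):
            shutil.move(str(f), str(stash / f.name))  # keep the files; don't delete them
            moved.append(f.name)
    return moved
```

Moving (rather than deleting) keeps the auxiliary channels recoverable if they are needed later.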

For H41 and H14 the error is with chunking; unfortunately, you'll have to edit the neurotensin_valence_convert_session.py script to force a chunk size that works with this data.

You can modify the conversion_options for the OpenEphysLegacyRecordingInterface here:

conversion_options.update(dict(Recording=dict(stub_test=stub_test)))

The suggested change at this line overrides the iterator options:

conversion_options.update(dict(Recording=dict(stub_test=stub_test, iterator_opts=dict(buffer_shape=(1024, 32), chunk_shape=(1024, 32)))))

With this change I managed to write the data for H41, but unfortunately, with such a small chunk size the conversion will take considerably longer to finish. I would suggest letting it run overnight.
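For context on why the small chunk size slows things down: with `chunk_shape=(1024, 32)` and int16 samples, each chunk is only 64 KiB, so a long 32-channel recording turns into a very large number of small HDF5 writes. A quick back-of-the-envelope check (the 30 kHz rate and 2-hour duration below are made-up numbers for illustration, not taken from this dataset):

```python
import numpy as np

chunk_shape = (1024, 32)  # (samples, channels), as in the suggested iterator_opts
bytes_per_chunk = int(np.prod(chunk_shape)) * np.dtype("int16").itemsize
# 1024 * 32 * 2 bytes = 65,536 bytes = 64 KiB per chunk

sampling_rate = 30_000        # Hz, assumed for illustration
duration_s = 2 * 60 * 60      # 2 hours, assumed for illustration
total_samples = sampling_rate * duration_s
n_chunks = -(-total_samples // chunk_shape[0])  # ceiling division over the time axis
print(bytes_per_chunk, n_chunks)
```

Hundreds of thousands of tiny writes is the price of a chunk shape small enough to sidestep the chunking error, hence the overnight run.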

@laurelrr
Author

laurelrr commented Apr 9, 2024

Hi,
So it looks like that suggestion worked, and I am able to convert my last 3 subjects! Thanks for all the help.

However, I am getting an error regarding the time zone when I try to upload files to the DANDI Archive and run
nwbinspector ./ --config dandi --report-file-path DandiInspector.txt

Here is the error message:


/home/lkeyes/.local/lib/python3.9/site-packages/pynwb/file.py:471: UserWarning: Date is missing timezone information. Updating to local timezone.
  args_to_set['session_start_time'] = _add_missing_timezone(session_start_time)
Traceback (most recent call last):
  File "/home/lkeyes/.local/lib/python3.9/site-packages/hdmf/build/objectmapper.py", line 1258, in construct
    obj = self.__new_container__(cls, builder.source, parent, builder.attributes.get(self.__spec.id_key()),
  File "/home/lkeyes/.local/lib/python3.9/site-packages/hdmf/build/objectmapper.py", line 1271, in __new_container__
    obj.__init__(**kwargs)
  File "/home/lkeyes/.local/lib/python3.9/site-packages/hdmf/utils.py", line 664, in func_call
    return func(args[0], **pargs)
  File "/home/lkeyes/.local/lib/python3.9/site-packages/pynwb/file.py", line 479, in __init__
    raise ValueError("'timestamps_reference_time' must be a timezone-aware datetime object.")
ValueError: 'timestamps_reference_time' must be a timezone-aware datetime object.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/lkeyes/anaconda3/envs/dandi/bin/nwbinspector", line 8, in <module>
    sys.exit(inspect_all_cli())
  File "/home/lkeyes/.local/lib/python3.9/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/home/lkeyes/.local/lib/python3.9/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/home/lkeyes/.local/lib/python3.9/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/lkeyes/.local/lib/python3.9/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/home/lkeyes/anaconda3/envs/dandi/lib/python3.9/site-packages/nwbinspector/nwbinspector.py", line 280, in inspect_all_cli
    messages = list(
  File "/home/lkeyes/anaconda3/envs/dandi/lib/python3.9/site-packages/nwbinspector/nwbinspector.py", line 412, in inspect_all
    nwbfile = robust_s3_read(io.read)
  File "/home/lkeyes/anaconda3/envs/dandi/lib/python3.9/site-packages/nwbinspector/utils.py", line 174, in robust_s3_read
    raise exc
  File "/home/lkeyes/anaconda3/envs/dandi/lib/python3.9/site-packages/nwbinspector/utils.py", line 169, in robust_s3_read
    return command(*command_args, **command_kwargs)
  File "/home/lkeyes/.local/lib/python3.9/site-packages/hdmf/utils.py", line 664, in func_call
    return func(args[0], **pargs)
  File "/home/lkeyes/.local/lib/python3.9/site-packages/pynwb/__init__.py", line 304, in read
    file = super().read(**kwargs)
  File "/home/lkeyes/.local/lib/python3.9/site-packages/hdmf/backends/hdf5/h5tools.py", line 479, in read
    return super().read(**kwargs)
  File "/home/lkeyes/.local/lib/python3.9/site-packages/hdmf/utils.py", line 664, in func_call
    return func(args[0], **pargs)
  File "/home/lkeyes/.local/lib/python3.9/site-packages/hdmf/backends/io.py", line 60, in read
    container = self.__manager.construct(f_builder)
  File "/home/lkeyes/.local/lib/python3.9/site-packages/hdmf/utils.py", line 664, in func_call
    return func(args[0], **pargs)
  File "/home/lkeyes/.local/lib/python3.9/site-packages/hdmf/build/manager.py", line 284, in construct
    result = self.__type_map.construct(builder, self, None)
  File "/home/lkeyes/.local/lib/python3.9/site-packages/hdmf/utils.py", line 664, in func_call
    return func(args[0], **pargs)
  File "/home/lkeyes/.local/lib/python3.9/site-packages/hdmf/build/manager.py", line 795, in construct
    return obj_mapper.construct(builder, build_manager, parent)
  File "/home/lkeyes/.local/lib/python3.9/site-packages/hdmf/utils.py", line 664, in func_call
    return func(args[0], **pargs)
  File "/home/lkeyes/.local/lib/python3.9/site-packages/hdmf/build/objectmapper.py", line 1262, in construct
    raise ConstructError(builder, msg) from ex
hdmf.build.errors.ConstructError: (root GroupBuilder {'attributes': {'namespace': 'core', 'neurodata_type': 'NWBFile', 'nwb_version': '2.6.0', 'object_id': '77bcdb29-b99d-4027-8210-b428522e0114'}, ...})
[remainder of the GroupBuilder repr, a long listing of the COFAImages GrayscaleImage datasets, truncated]
{'description': 'The image intensity from PVT region at 3.795e-05 meters depth.', 'namespace': 'core', 'neurodata_type': 'GrayscaleImage', 'object_id': '67c6c529-3e31-4ce8-84b2-bacd2f0f370d'}, 'data': <Closed HDF5 dataset>}, 'GrayScaleImage3Depth7': root/acquisition/COFAImages/GrayScaleImage3Depth7 DatasetBuilder {'attributes': {'description': 'The image intensity from PVT region at 3.645e-05 meters depth.', 'namespace': 'core', 'neurodata_type': 'GrayscaleImage', 'object_id': '7783ecc1-9ff6-4d5f-9a11-4a769df38821'}, 'data': <Closed HDF5 dataset>}, 'GrayScaleImage4Composite': root/acquisition/COFAImages/GrayScaleImage4Composite DatasetBuilder {'attributes': {'description': 'The image intensity aggregated over depth from PVT region.', 'namespace': 'core', 'neurodata_type': 'GrayscaleImage', 'object_id': 'd97e92d8-fc9d-4edb-a768-14fd5e9181bd'}, 'data': <Closed HDF5 dataset>}, 'GrayScaleImage4Depth1': root/acquisition/COFAImages/GrayScaleImage4Depth1 DatasetBuilder {'attributes': {'description': 'The image intensity from PVT region at 4.545e-05 meters depth.', 'namespace': 'core', 'neurodata_type': 'GrayscaleImage', 'object_id': 'b48c8c2f-4bd1-408c-871d-1b3605596634'}, 'data': <Closed HDF5 dataset>}, 'GrayScaleImage4Depth2': root/acquisition/COFAImages/GrayScaleImage4Depth2 DatasetBuilder {'attributes': {'description': 'The image intensity from PVT region at 4.395e-05 meters depth.', 'namespace': 'core', 'neurodata_type': 'GrayscaleImage', 'object_id': 'f19853a8-9c39-431d-b839-9b83150c2c98'}, 'data': <Closed HDF5 dataset>}, 'GrayScaleImage4Depth3': root/acquisition/COFAImages/GrayScaleImage4Depth3 DatasetBuilder {'attributes': {'description': 'The image intensity from PVT region at 4.245e-05 meters depth.', 'namespace': 'core', 'neurodata_type': 'GrayscaleImage', 'object_id': 'bfc75488-8f4b-4a24-994f-69a6f7b69d17'}, 'data': <Closed HDF5 dataset>}, 'GrayScaleImage4Depth4': root/acquisition/COFAImages/GrayScaleImage4Depth4 DatasetBuilder {'attributes': {'description': 
'The image intensity from PVT region at 4.095e-05 meters depth.', 'namespace': 'core', 'neurodata_type': 'GrayscaleImage', 'object_id': '94c2f4cc-dd0d-4c41-a791-0b247481d079'}, 'data': <Closed HDF5 dataset>}, 'GrayScaleImage4Depth5': root/acquisition/COFAImages/GrayScaleImage4Depth5 DatasetBuilder {'attributes': {'description': 'The image intensity from PVT region at 3.945e-05 meters depth.', 'namespace': 'core', 'neurodata_type': 'GrayscaleImage', 'object_id': '64a702dc-1862-4774-9dab-aabaafbbca8a'}, 'data': <Closed HDF5 dataset>}, 'GrayScaleImage4Depth6': root/acquisition/COFAImages/GrayScaleImage4Depth6 DatasetBuilder {'attributes': {'description': 'The image intensity from PVT region at 3.795e-05 meters depth.', 'namespace': 'core', 'neurodata_type': 'GrayscaleImage', 'object_id': 'a19bd282-422b-4f99-8ac2-4f22f76dbfd8'}, 'data': <Closed HDF5 dataset>}, 'GrayScaleImage4Depth7': root/acquisition/COFAImages/GrayScaleImage4Depth7 DatasetBuilder {'attributes': {'description': 'The image intensity from PVT region at 3.645e-05 meters depth.', 'namespace': 'core', 'neurodata_type': 'GrayscaleImage', 'object_id': 'a30f753f-14fb-4147-a8d8-f8e2ac83e819'}, 'data': <Closed HDF5 dataset>}}, 'links': {}}, 'ElectricalSeries': root/acquisition/ElectricalSeries GroupBuilder {'attributes': {'comments': 'no comments', 'description': 'Acquisition traces for the ElectricalSeries.', 'namespace': 'core', 'neurodata_type': 'ElectricalSeries', 'object_id': '3e7de792-9ae3-49af-96f6-0bedbe0bc5c8'}, 'groups': {}, 'datasets': {'data': root/acquisition/ElectricalSeries/data DatasetBuilder {'attributes': {'conversion': 1.9499999999999999e-07, 'offset': 0.0, 'resolution': -1.0, 'unit': 'volts'}, 'data': <Closed HDF5 dataset>}, 'electrodes': root/acquisition/ElectricalSeries/electrodes DatasetBuilder {'attributes': {'description': 'electrode_table_region', 'namespace': 'hdmf-common', 'neurodata_type': 'DynamicTableRegion', 'object_id': '4d44a335-01d5-4483-9716-76328b6d2ed1', 'table': 
root/general/extracellular_ephys/electrodes GroupBuilder {'attributes': {'colnames': array(['location', 'group', 'group_name', 'channel_name', 'gain_to_uV',
       'offset_to_uV'], dtype=object), 'description': 'metadata about extracellular electrodes', 'namespace': 'hdmf-common', 'neurodata_type': 'DynamicTable', 'object_id': 'ab11ceb1-fa18-4547-b47e-03e3dff97c51'}, 'groups': {}, 'datasets': {'channel_name': root/general/extracellular_ephys/electrodes/channel_name DatasetBuilder {'attributes': {'description': 'no description', 'namespace': 'hdmf-common', 'neurodata_type': 'VectorData', 'object_id': '95e05f8b-9345-43d9-8550-a0b78a037b45'}, 'data': <StrDataset for Closed HDF5 dataset>}, 'gain_to_uV': root/general/extracellular_ephys/electrodes/gain_to_uV DatasetBuilder {'attributes': {'description': 'no description', 'namespace': 'hdmf-common', 'neurodata_type': 'VectorData', 'object_id': '466ae55e-2a4f-4ebf-bd8f-c962d08ac419'}, 'data': <Closed HDF5 dataset>}, 'group': root/general/extracellular_ephys/electrodes/group DatasetBuilder {'attributes': {'description': 'a reference to the ElectrodeGroup this electrode is a part of', 'namespace': 'hdmf-common', 'neurodata_type': 'VectorData', 'object_id': '14d56bf0-3d8a-4d3d-a86e-684268b60f44'}, 'data': <hdmf.backends.hdf5.h5_utils.BuilderH5ReferenceDataset object at 0x7f494f5b5e20>}, 'group_name': root/general/extracellular_ephys/electrodes/group_name DatasetBuilder {'attributes': {'description': 'the name of the ElectrodeGroup this electrode is a part of', 'namespace': 'hdmf-common', 'neurodata_type': 'VectorData', 'object_id': '70b6a4e3-264d-44c8-97bb-a2d7ee5c61e2'}, 'data': <StrDataset for Closed HDF5 dataset>}, 'id': root/general/extracellular_ephys/electrodes/id DatasetBuilder {'attributes': {'namespace': 'hdmf-common', 'neurodata_type': 'ElementIdentifiers', 'object_id': '3e4c5fb4-5f73-4f68-8985-08180553c0e3'}, 'data': <Closed HDF5 dataset>}, 'location': root/general/extracellular_ephys/electrodes/location DatasetBuilder {'attributes': {'description': 'the location of channel within the subject e.g. 
brain region', 'namespace': 'hdmf-common', 'neurodata_type': 'VectorData', 'object_id': '6ded306e-e180-4127-9d6b-1aea6ea09d80'}, 'data': <StrDataset for Closed HDF5 dataset>}, 'offset_to_uV': root/general/extracellular_ephys/electrodes/offset_to_uV DatasetBuilder {'attributes': {'description': 'no description', 'namespace': 'hdmf-common', 'neurodata_type': 'VectorData', 'object_id': 'dae52326-b714-4d18-8999-4b1cb49f3010'}, 'data': <Closed HDF5 dataset>}}, 'links': {}}}, 'data': <Closed HDF5 dataset>}, 'starting_time': root/acquisition/ElectricalSeries/starting_time DatasetBuilder {'attributes': {'rate': 30000.0, 'unit': 'seconds'}, 'data': 0.0}}, 'links': {}}, 'Video: H027Disc4': root/acquisition/Video: H027Disc4 GroupBuilder {'attributes': {'comments': 'no comments', 'description': 'Video recorded by camera.', 'namespace': 'core', 'neurodata_type': 'ImageSeries', 'object_id': 'b96b218c-7917-4993-835e-2affe1fc164e'}, 'groups': {}, 'datasets': {'data': root/acquisition/Video: H027Disc4/data DatasetBuilder {'attributes': {'conversion': 1.0, 'offset': 0.0, 'resolution': -1.0, 'unit': 'Frames'}, 'data': <Closed HDF5 dataset>}, 'external_file': root/acquisition/Video: H027Disc4/external_file DatasetBuilder {'attributes': {'starting_frame': array([0])}, 'data': <StrDataset for Closed HDF5 dataset>}, 'format': root/acquisition/Video: H027Disc4/format DatasetBuilder {'attributes': {}, 'data': 'external'}, 'starting_time': root/acquisition/Video: H027Disc4/starting_time DatasetBuilder {'attributes': {'rate': 30.0, 'unit': 'seconds'}, 'data': 0.0}}, 'links': {}}}, 'datasets': {}, 'links': {}}, 'analysis': root/analysis GroupBuilder {'attributes': {}, 'groups': {}, 'datasets': {}, 'links': {}}, 'general': root/general GroupBuilder {'attributes': {}, 'groups': {'devices': root/general/devices GroupBuilder {'attributes': {}, 'groups': {'DeviceEcephys': root/general/devices/DeviceEcephys GroupBuilder {'attributes': {'description': 'no description', 'namespace': 'core', 
'neurodata_type': 'Device', 'object_id': '42230dbf-b9a1-4355-9107-c263a429bdd7'}, 'groups': {}, 'datasets': {}, 'links': {}}}, 'datasets': {}, 'links': {}}, 'extracellular_ephys': root/general/extracellular_ephys GroupBuilder {'attributes': {}, 'groups': {'ElectrodeGroup': root/general/extracellular_ephys/ElectrodeGroup GroupBuilder {'attributes': {'description': 'no description', 'location': 'unknown', 'namespace': 'core', 'neurodata_type': 'ElectrodeGroup', 'object_id': '83428b89-faad-48bb-822e-487369424ea0'}, 'groups': {}, 'datasets': {}, 'links': {'device': root/general/extracellular_ephys/ElectrodeGroup/device LinkBuilder {'builder': root/general/devices/DeviceEcephys GroupBuilder {'attributes': {'description': 'no description', 'namespace': 'core', 'neurodata_type': 'Device', 'object_id': '42230dbf-b9a1-4355-9107-c263a429bdd7'}, 'groups': {}, 'datasets': {}, 'links': {}}}}}, 'electrodes': root/general/extracellular_ephys/electrodes GroupBuilder {'attributes': {'colnames': array(['location', 'group', 'group_name', 'channel_name', 'gain_to_uV',
       'offset_to_uV'], dtype=object), 'description': 'metadata about extracellular electrodes', 'namespace': 'hdmf-common', 'neurodata_type': 'DynamicTable', 'object_id': 'ab11ceb1-fa18-4547-b47e-03e3dff97c51'}, 'groups': {}, 'datasets': {'channel_name': root/general/extracellular_ephys/electrodes/channel_name DatasetBuilder {'attributes': {'description': 'no description', 'namespace': 'hdmf-common', 'neurodata_type': 'VectorData', 'object_id': '95e05f8b-9345-43d9-8550-a0b78a037b45'}, 'data': <StrDataset for Closed HDF5 dataset>}, 'gain_to_uV': root/general/extracellular_ephys/electrodes/gain_to_uV DatasetBuilder {'attributes': {'description': 'no description', 'namespace': 'hdmf-common', 'neurodata_type': 'VectorData', 'object_id': '466ae55e-2a4f-4ebf-bd8f-c962d08ac419'}, 'data': <Closed HDF5 dataset>}, 'group': root/general/extracellular_ephys/electrodes/group DatasetBuilder {'attributes': {'description': 'a reference to the ElectrodeGroup this electrode is a part of', 'namespace': 'hdmf-common', 'neurodata_type': 'VectorData', 'object_id': '14d56bf0-3d8a-4d3d-a86e-684268b60f44'}, 'data': <hdmf.backends.hdf5.h5_utils.BuilderH5ReferenceDataset object at 0x7f494f5b5e20>}, 'group_name': root/general/extracellular_ephys/electrodes/group_name DatasetBuilder {'attributes': {'description': 'the name of the ElectrodeGroup this electrode is a part of', 'namespace': 'hdmf-common', 'neurodata_type': 'VectorData', 'object_id': '70b6a4e3-264d-44c8-97bb-a2d7ee5c61e2'}, 'data': <StrDataset for Closed HDF5 dataset>}, 'id': root/general/extracellular_ephys/electrodes/id DatasetBuilder {'attributes': {'namespace': 'hdmf-common', 'neurodata_type': 'ElementIdentifiers', 'object_id': '3e4c5fb4-5f73-4f68-8985-08180553c0e3'}, 'data': <Closed HDF5 dataset>}, 'location': root/general/extracellular_ephys/electrodes/location DatasetBuilder {'attributes': {'description': 'the location of channel within the subject e.g. 
brain region', 'namespace': 'hdmf-common', 'neurodata_type': 'VectorData', 'object_id': '6ded306e-e180-4127-9d6b-1aea6ea09d80'}, 'data': <StrDataset for Closed HDF5 dataset>}, 'offset_to_uV': root/general/extracellular_ephys/electrodes/offset_to_uV DatasetBuilder {'attributes': {'description': 'no description', 'namespace': 'hdmf-common', 'neurodata_type': 'VectorData', 'object_id': 'dae52326-b714-4d18-8999-4b1cb49f3010'}, 'data': <Closed HDF5 dataset>}}, 'links': {}}}, 'datasets': {}, 'links': {}}, 'intracellular_ephys': root/general/intracellular_ephys GroupBuilder {'attributes': {}, 'groups': {}, 'datasets': {}, 'links': {}}, 'optogenetics': root/general/optogenetics GroupBuilder {'attributes': {}, 'groups': {}, 'datasets': {}, 'links': {}}, 'optophysiology': root/general/optophysiology GroupBuilder {'attributes': {}, 'groups': {}, 'datasets': {}, 'links': {}}, 'subject': root/general/subject GroupBuilder {'attributes': {'namespace': 'core', 'neurodata_type': 'Subject', 'object_id': '4708516f-c57f-43f1-9502-13f0a701e0c4'}, 'groups': {}, 'datasets': {'age': root/general/subject/age DatasetBuilder {'attributes': {'reference': 'birth'}, 'data': 'P109D'}, 'genotype': root/general/subject/genotype DatasetBuilder {'attributes': {}, 'data': 'Wild type'}, 'sex': root/general/subject/sex DatasetBuilder {'attributes': {}, 'data': 'M'}, 'species': root/general/subject/species DatasetBuilder {'attributes': {}, 'data': 'Mus musculus'}, 'strain': root/general/subject/strain DatasetBuilder {'attributes': {}, 'data': 'C57BL/6J'}, 'subject_id': root/general/subject/subject_id DatasetBuilder {'attributes': {}, 'data': 'H27'}}, 'links': {}}}, 'datasets': {'experiment_description': root/general/experiment_description DatasetBuilder {'attributes': {}, 'data': 'The ability to associate temporally segregated information and assign positive or\nnegative valence to environmental cues is paramount for survival. 
Studies have shown\nthat different projections from the basolateral amygdala (BLA) are potentiated\nfollowing reward or punishment learning1–7. However, we do not yet understand how\nvalence-specific information is routed to the BLA neurons with the appropriate\ndownstream projections, nor do we understand how to reconcile the sub-second\ntimescales of synaptic plasticity8–11 with the longer timescales separating the\npredictive cues from their outcomes. Here we demonstrate that neurotensin\n(NT)-expressing neurons in the paraventricular nucleus of the thalamus (PVT)\nprojecting to the BLA (PVT-BLA:NT) mediate valence assignment by exerting NT\nconcentration-dependent modulation in BLA during associative learning.\nWe found that optogenetic activation of the PVT-BLA:NT projection promotes reward\nlearning, whereas PVT-BLA projection-specific knockout of the NT gene (Nts) augments\npunishment learning. Using genetically encoded calcium and NT sensors, we further\nrevealed that both calcium dynamics within the PVT-BLA:NT projection and NT\nconcentrations in the BLA are enhanced after reward learning and reduced after\npunishment learning. Finally, we showed that CRISPR-mediated knockout of the Nts\ngene in the PVT-BLA pathway blunts BLA neural dynamics and attenuates the preference\nfor active behavioural strategies to reward and punishment predictive cues. 
In sum,\nwe have identified NT as a neuropeptide that signals valence in the BLA, and showed\nthat NT is a critical neuromodulator that orchestrates positive and negative valence\nassignment in amygdala neurons by extending valence-specific plasticity to\nbehaviourally relevant timescales.\n'}, 'experimenter': root/general/experimenter DatasetBuilder {'attributes': {}, 'data': <StrDataset for Closed HDF5 dataset>}, 'institution': root/general/institution DatasetBuilder {'attributes': {}, 'data': 'Salk Institute for Biological Studies'}, 'lab': root/general/lab DatasetBuilder {'attributes': {}, 'data': 'Tye'}, 'related_publications': root/general/related_publications DatasetBuilder {'attributes': {}, 'data': <StrDataset for Closed HDF5 dataset>}, 'session_id': root/general/session_id DatasetBuilder {'attributes': {}, 'data': 'H27-2020-02-20-10-07-01-Disc4-20k'}}, 'links': {}}, 'intervals': root/intervals GroupBuilder {'attributes': {}, 'groups': {'trials': root/intervals/trials GroupBuilder {'attributes': {'colnames': array(['start_time', 'stop_time', 'trial_type'], dtype=object), 'description': 'experimental trials generated from /nadata/snlkt/data/hao/Neurotensin/ephys/recordings/forLaurel/H27_2020-02-20_10-07-01_Disc4_20k/0027_20200221_20k_events.mat', 'namespace': 'core', 'neurodata_type': 'TimeIntervals', 'object_id': '113649cb-4234-4078-98cd-229ef3f14278'}, 'groups': {}, 'datasets': {'id': root/intervals/trials/id DatasetBuilder {'attributes': {'namespace': 'hdmf-common', 'neurodata_type': 'ElementIdentifiers', 'object_id': 'a2a87302-2361-4988-a658-8d826d7e29cf'}, 'data': <Closed HDF5 dataset>}, 'start_time': root/intervals/trials/start_time DatasetBuilder {'attributes': {'description': 'Start time of epoch, in seconds', 'namespace': 'hdmf-common', 'neurodata_type': 'VectorData', 'object_id': 'a7545b66-504d-42e8-a43b-30942cda8724'}, 'data': <Closed HDF5 dataset>}, 'stop_time': root/intervals/trials/stop_time DatasetBuilder {'attributes': {'description': 'Stop 
time of epoch, in seconds', 'namespace': 'hdmf-common', 'neurodata_type': 'VectorData', 'object_id': '185453eb-a6fb-4dfc-b91d-c39148d920ab'}, 'data': <Closed HDF5 dataset>}, 'trial_type': root/intervals/trials/trial_type DatasetBuilder {'attributes': {'description': 'trial_type', 'namespace': 'hdmf-common', 'neurodata_type': 'VectorData', 'object_id': '7407517d-c4f6-4f33-aa08-c772584ff81d'}, 'data': <StrDataset for Closed HDF5 dataset>}}, 'links': {}}}, 'datasets': {}, 'links': {}}, 'processing': root/processing GroupBuilder {'attributes': {}, 'groups': {'behavior': root/processing/behavior GroupBuilder {'attributes': {'description': 'Processed behavior data.', 'namespace': 'core', 'neurodata_type': 'ProcessingModule', 'object_id': 'e02d40bd-c30c-417e-9930-cf96a8a24739'}, 'groups': {'PoseEstimation': root/processing/behavior/PoseEstimation GroupBuilder {'attributes': {'namespace': 'ndx-pose', 'neurodata_type': 'PoseEstimation', 'object_id': 'b3f4e019-cdd4-4850-addc-d643811c00fd'}, 'groups': {'BottomLeftCameraPoseEstimationSeries': root/processing/behavior/PoseEstimation/BottomLeftCameraPoseEstimationSeries GroupBuilder {'attributes': {'comments': 'no comments', 'description': 'The pose estimation for the bottom left camera.', 'namespace': 'ndx-pose', 'neurodata_type': 'PoseEstimationSeries', 'object_id': 'a2f09947-52f4-451e-aeaa-6ff05855fb34'}, 'groups': {}, 'datasets': {'confidence': root/processing/behavior/PoseEstimation/BottomLeftCameraPoseEstimationSeries/confidence DatasetBuilder {'attributes': {'definition': 'The likelihood estimation from the algorithm.'}, 'data': <Closed HDF5 dataset>}, 'data': root/processing/behavior/PoseEstimation/BottomLeftCameraPoseEstimationSeries/data DatasetBuilder {'attributes': {'conversion': 1.0, 'offset': 0.0, 'resolution': -1.0, 'unit': 'px'}, 'data': <Closed HDF5 dataset>}, 'reference_frame': root/processing/behavior/PoseEstimation/BottomLeftCameraPoseEstimationSeries/reference_frame DatasetBuilder {'attributes': {}, 'data': 
'(0,0) corresponds to the top left corner of the cage.'}, 'starting_time': root/processing/behavior/PoseEstimation/BottomLeftCameraPoseEstimationSeries/starting_time DatasetBuilder {'attributes': {'rate': 15.0, 'unit': 'seconds'}, 'data': 0.0}}, 'links': {}}, 'BottomRightCameraPoseEstimationSeries': root/processing/behavior/PoseEstimation/BottomRightCameraPoseEstimationSeries GroupBuilder {'attributes': {'comments': 'no comments', 'description': 'The pose estimation for the bottom right camera.', 'namespace': 'ndx-pose', 'neurodata_type': 'PoseEstimationSeries', 'object_id': '44c35a5b-c567-4e6e-9b4e-d26929941879'}, 'groups': {}, 'datasets': {'confidence': root/processing/behavior/PoseEstimation/BottomRightCameraPoseEstimationSeries/confidence DatasetBuilder {'attributes': {'definition': 'The likelihood estimation from the algorithm.'}, 'data': <Closed HDF5 dataset>}, 'data': root/processing/behavior/PoseEstimation/BottomRightCameraPoseEstimationSeries/data DatasetBuilder {'attributes': {'conversion': 1.0, 'offset': 0.0, 'resolution': -1.0, 'unit': 'px'}, 'data': <Closed HDF5 dataset>}, 'reference_frame': root/processing/behavior/PoseEstimation/BottomRightCameraPoseEstimationSeries/reference_frame DatasetBuilder {'attributes': {}, 'data': '(0,0) corresponds to the top left corner of the cage.'}, 'starting_time': root/processing/behavior/PoseEstimation/BottomRightCameraPoseEstimationSeries/starting_time DatasetBuilder {'attributes': {'rate': 15.0, 'unit': 'seconds'}, 'data': 0.0}}, 'links': {}}, 'ElectrodePoseEstimationSeries': root/processing/behavior/PoseEstimation/ElectrodePoseEstimationSeries GroupBuilder {'attributes': {'comments': 'no comments', 'description': 'The pose estimation for the electrode.', 'namespace': 'ndx-pose', 'neurodata_type': 'PoseEstimationSeries', 'object_id': '46ab591b-70d6-42ce-96ea-47175a6efaa9'}, 'groups': {}, 'datasets': {'confidence': root/processing/behavior/PoseEstimation/ElectrodePoseEstimationSeries/confidence DatasetBuilder 
{'attributes': {'definition': 'The likelihood estimation from the algorithm.'}, 'data': <Closed HDF5 dataset>}, 'data': root/processing/behavior/PoseEstimation/ElectrodePoseEstimationSeries/data DatasetBuilder {'attributes': {'conversion': 1.0, 'offset': 0.0, 'resolution': -1.0, 'unit': 'px'}, 'data': <Closed HDF5 dataset>}, 'reference_frame': root/processing/behavior/PoseEstimation/ElectrodePoseEstimationSeries/reference_frame DatasetBuilder {'attributes': {}, 'data': '(0,0) corresponds to the top left corner of the cage.'}, 'starting_time': root/processing/behavior/PoseEstimation/ElectrodePoseEstimationSeries/starting_time DatasetBuilder {'attributes': {'rate': 15.0, 'unit': 'seconds'}, 'data': 0.0}}, 'links': {}}, 'HeadPoseEstimationSeries': root/processing/behavior/PoseEstimation/HeadPoseEstimationSeries GroupBuilder {'attributes': {'comments': 'no comments', 'description': 'The pose estimation for the head of the animal.', 'namespace': 'ndx-pose', 'neurodata_type': 'PoseEstimationSeries', 'object_id': 'cea28e3d-a61d-4c53-ae5f-bc104e6795a0'}, 'groups': {}, 'datasets': {'confidence': root/processing/behavior/PoseEstimation/HeadPoseEstimationSeries/confidence DatasetBuilder {'attributes': {'definition': 'The likelihood estimation from the algorithm.'}, 'data': <Closed HDF5 dataset>}, 'data': root/processing/behavior/PoseEstimation/HeadPoseEstimationSeries/data DatasetBuilder {'attributes': {'conversion': 1.0, 'offset': 0.0, 'resolution': -1.0, 'unit': 'px'}, 'data': <Closed HDF5 dataset>}, 'reference_frame': root/processing/behavior/PoseEstimation/HeadPoseEstimationSeries/reference_frame DatasetBuilder {'attributes': {}, 'data': '(0,0) corresponds to the top left corner of the cage.'}, 'starting_time': root/processing/behavior/PoseEstimation/HeadPoseEstimationSeries/starting_time DatasetBuilder {'attributes': {'rate': 15.0, 'unit': 'seconds'}, 'data': 0.0}}, 'links': {}}, 'LeftEarPoseEstimationSeries': 
root/processing/behavior/PoseEstimation/LeftEarPoseEstimationSeries GroupBuilder {'attributes': {'comments': 'no comments', 'description': 'The pose estimation for the left ear of the animal.', 'namespace': 'ndx-pose', 'neurodata_type': 'PoseEstimationSeries', 'object_id': 'acfc8acb-9db5-4377-b07a-7cebe4278b2b'}, 'groups': {}, 'datasets': {'confidence': root/processing/behavior/PoseEstimation/LeftEarPoseEstimationSeries/confidence DatasetBuilder {'attributes': {'definition': 'The likelihood estimation from the algorithm.'}, 'data': <Closed HDF5 dataset>}, 'data': root/processing/behavior/PoseEstimation/LeftEarPoseEstimationSeries/data DatasetBuilder {'attributes': {'conversion': 1.0, 'offset': 0.0, 'resolution': -1.0, 'unit': 'px'}, 'data': <Closed HDF5 dataset>}, 'reference_frame': root/processing/behavior/PoseEstimation/LeftEarPoseEstimationSeries/reference_frame DatasetBuilder {'attributes': {}, 'data': '(0,0) corresponds to the top left corner of the cage.'}, 'starting_time': root/processing/behavior/PoseEstimation/LeftEarPoseEstimationSeries/starting_time DatasetBuilder {'attributes': {'rate': 15.0, 'unit': 'seconds'}, 'data': 0.0}}, 'links': {}}, 'PortPoseEstimationSeries': root/processing/behavior/PoseEstimation/PortPoseEstimationSeries GroupBuilder {'attributes': {'comments': 'no comments', 'description': 'The pose estimation for the port.', 'namespace': 'ndx-pose', 'neurodata_type': 'PoseEstimationSeries', 'object_id': '657f3296-9bdb-4045-bc69-dbbb9e2dbc1d'}, 'groups': {}, 'datasets': {'confidence': root/processing/behavior/PoseEstimation/PortPoseEstimationSeries/confidence DatasetBuilder {'attributes': {'definition': 'The likelihood estimation from the algorithm.'}, 'data': <Closed HDF5 dataset>}, 'data': root/processing/behavior/PoseEstimation/PortPoseEstimationSeries/data DatasetBuilder {'attributes': {'conversion': 1.0, 'offset': 0.0, 'resolution': -1.0, 'unit': 'px'}, 'data': <Closed HDF5 dataset>}, 'reference_frame': 
[... repetitive GroupBuilder/DatasetBuilder output for the PortPoseEstimationSeries, RightEarPoseEstimationSeries, TailPoseEstimationSeries, TopLeftCameraPoseEstimationSeries, TopRightCameraPoseEstimationSeries, PoseEstimation datasets, stimulus, and units groups truncated ...]
[... file_create_date, identifier, and session_description datasets truncated ...] 'session_start_time': root/session_start_time DatasetBuilder {'attributes': {}, 'data': '2020-03-20'}, 'timestamps_reference_time': root/timestamps_reference_time DatasetBuilder {'attributes': {}, 'data': '2020-03-20'}}, 'links': {}}, "Could not construct NWBFile object due to: 'timestamps_reference_time' must be a timezone-aware datetime object.")

@CodyCBakerPhD

Looks like this is the same as NeurodataWithoutBorders/pynwb#1843; seeking a response from upstream.

@CodyCBakerPhD

Resolved in the latest release of PyNWB.
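In the meantime, the error above can also be avoided on the user side by making sure the `session_start_time` supplied in the conversion metadata is a timezone-aware `datetime` rather than a bare date string. A minimal sketch (the timezone and metadata keys below are illustrative assumptions, following the usual neuroconv `metadata["NWBFile"]` convention):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib in Python 3.9+

# Assumed timezone for the lab; replace with the actual recording timezone.
session_start_time = datetime(2020, 3, 20, tzinfo=ZoneInfo("America/Los_Angeles"))

# Metadata dict as typically passed to converter.run_conversion(metadata=...)
metadata = {"NWBFile": {"session_start_time": session_start_time}}

# A timezone-aware datetime has a non-None tzinfo attribute.
print(metadata["NWBFile"]["session_start_time"].tzinfo is not None)
```

Passing a date-only string such as `'2020-03-20'` is what triggers the "must be a timezone-aware datetime object" check when PyNWB reconstructs the `NWBFile`.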
