Exporting results after data is split into groups #45

Open
vncntprvst opened this issue May 14, 2025 · 5 comments

@vncntprvst
Contributor

I created a SortingAnalyzer object from the pipeline output.

from spikeinterface import load_sorting_analyzer

sorting_folder = session_folder / "processed_data/spike_sorting_output/"
postproc_folder = sorting_folder / "postprocessed/"
# Group 0 data
group0_path = postproc_folder / "experiment1_Record Node 101#Neuropix-PXI-100.ProbeA_recording1_group0.zarr"
# Load the postprocessed sorting data
sorting_analyzer = load_sorting_analyzer(group0_path)
print(sorting_analyzer)

SortingAnalyzer: 96 channels - 122 units - 1 segments - zarr - sparse
Loaded 13 extensions: correlograms, isi_histograms, noise_levels, principal_components, quality_metrics, random_spikes, spike_amplitudes, spike_locations, template_metrics, template_similarity, templates, unit_locations, waveforms

However, I'm having trouble exporting it. Loading the preprocessed recording fails:

import spikeinterface as si

preproc_rec_group0_path = sorting_folder / "preprocessed/experiment1_Record Node 101#Neuropix-PXI-100.ProbeA_recording1_group0.json"
raw_ephys_data_path = session_folder / "raw_ephys_data"
recording_preprocessed = si.load(
    preproc_rec_group0_path,
    base_folder=raw_ephys_data_path,
)
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
Cell In[7], line 3
      1 preproc_rec_group0_path = sorting_folder / "preprocessed/experiment1_Record Node 101#Neuropix-PXI-100.ProbeA_recording1_group0.json"
      2 raw_ephys_data_path = session_folder / "raw_ephys_data"
----> 3 recording_preprocessed = si.load(
      4    preproc_rec_group0_path,
      5    base_folder=raw_ephys_data_path
      6 )

File ~/.conda/envs/aind-ephys-pipeline/lib/python3.10/site-packages/spikeinterface/core/loading.py:100, in load(file_or_folder_or_dict, **kwargs)
     98     if object_type is None:
     99         raise ValueError(_error_msg.format(file_path=file_path))
--> 100     return _load_object_from_dict(d, object_type, base_folder=base_folder)
    102 elif is_local and file_path.is_dir():
    104     folder = file_path

File ~/.conda/envs/aind-ephys-pipeline/lib/python3.10/site-packages/spikeinterface/core/loading.py:175, in _load_object_from_dict(d, object_type, base_folder)
    173 def _load_object_from_dict(d, object_type, base_folder=None):
    174     if object_type in ("Recording", "Sorting", "Recording|Sorting"):
--> 175         return BaseExtractor.from_dict(d, base_folder=base_folder)
    177     elif object_type == "Templates":
    178         from spikeinterface.core import Templates

File ~/.conda/envs/aind-ephys-pipeline/lib/python3.10/site-packages/spikeinterface/core/base.py:571, in BaseExtractor.from_dict(dictionary, base_folder)
    569     assert base_folder is not None, "When  relative_paths=True, need to provide base_folder"
    570     dictionary = make_paths_absolute(dictionary, base_folder)
--> 571 extractor = _load_extractor_from_dict(dictionary)
    572 folder_metadata = dictionary.get("folder_metadata", None)
    573 if folder_metadata is not None:

File ~/.conda/envs/aind-ephys-pipeline/lib/python3.10/site-packages/spikeinterface/core/base.py:1108, in _load_extractor_from_dict(dic)
   1106 for name, value in dic["kwargs"].items():
   1107     if is_dict_extractor(value):
-> 1108         new_kwargs[name] = _load_extractor_from_dict(value)
   1109     elif isinstance(value, dict):
   1110         new_kwargs[name] = {k: transform_dict_to_extractor(v) for k, v in value.items()}

File ~/.conda/envs/aind-ephys-pipeline/lib/python3.10/site-packages/spikeinterface/core/base.py:1108, in _load_extractor_from_dict(dic)
   1106 for name, value in dic["kwargs"].items():
   1107     if is_dict_extractor(value):
-> 1108         new_kwargs[name] = _load_extractor_from_dict(value)
   1109     elif isinstance(value, dict):
   1110         new_kwargs[name] = {k: transform_dict_to_extractor(v) for k, v in value.items()}

    [... skipping similar frames: _load_extractor_from_dict at line 1108 (4 times)]

File ~/.conda/envs/aind-ephys-pipeline/lib/python3.10/site-packages/spikeinterface/core/base.py:1108, in _load_extractor_from_dict(dic)
   1106 for name, value in dic["kwargs"].items():
   1107     if is_dict_extractor(value):
-> 1108         new_kwargs[name] = _load_extractor_from_dict(value)
   1109     elif isinstance(value, dict):
   1110         new_kwargs[name] = {k: transform_dict_to_extractor(v) for k, v in value.items()}

File ~/.conda/envs/aind-ephys-pipeline/lib/python3.10/site-packages/spikeinterface/core/base.py:1127, in _load_extractor_from_dict(dic)
   1121     warnings.warn(
   1122         f"Versions are not the same. This might lead to compatibility errors. "
   1123         f"Using {class_name.split('.')[0]}=={dic['version']} is recommended"
   1124     )
   1126 # Initialize the extractor
-> 1127 extractor = extractor_class(**new_kwargs)
   1129 extractor._annotations.update(dic["annotations"])
   1130 for k, v in dic["properties"].items():

File ~/.conda/envs/aind-ephys-pipeline/lib/python3.10/site-packages/spikeinterface/extractors/neoextractors/openephys.py:158, in OpenEphysBinaryRecordingExtractor.__init__(self, folder_path, load_sync_channel, load_sync_timestamps, experiment_names, stream_id, stream_name, block_index, all_annotations)
    146 def __init__(
    147     self,
    148     folder_path,
   (...)
    155     all_annotations=False,
    156 ):
    157     neo_kwargs = self.map_to_neo_kwargs(folder_path, load_sync_channel, experiment_names)
--> 158     NeoBaseRecordingExtractor.__init__(
    159         self,
    160         stream_id=stream_id,
    161         stream_name=stream_name,
    162         block_index=block_index,
    163         all_annotations=all_annotations,
    164         **neo_kwargs,
    165     )
    166     # get streams to find correct probe
    167     stream_names, stream_ids = self.get_streams(folder_path, load_sync_channel, experiment_names)

File ~/.conda/envs/aind-ephys-pipeline/lib/python3.10/site-packages/spikeinterface/extractors/neoextractors/neobaseextractor.py:188, in NeoBaseRecordingExtractor.__init__(self, stream_id, stream_name, block_index, all_annotations, use_names_as_ids, **neo_kwargs)
    158 def __init__(
    159     self,
    160     stream_id: Optional[str] = None,
   (...)
    165     **neo_kwargs: Dict[str, Any],
    166 ) -> None:
    167     """
    168     Initialize a NeoBaseRecordingExtractor instance.
    169 
   (...)
    185 
    186     """
--> 188     _NeoBaseExtractor.__init__(self, block_index, **neo_kwargs)
    190     kwargs = dict(all_annotations=all_annotations)
    191     if block_index is not None:

File ~/.conda/envs/aind-ephys-pipeline/lib/python3.10/site-packages/spikeinterface/extractors/neoextractors/neobaseextractor.py:27, in _NeoBaseExtractor.__init__(self, block_index, **neo_kwargs)
     23 def __init__(self, block_index, **neo_kwargs):
     24 
     25     # Avoids double initiation of the neo reader if it was already done in the __init__ of the child class
     26     if not hasattr(self, "neo_reader"):
---> 27         self.neo_reader = self.get_neo_io_reader(self.NeoRawIOClass, **neo_kwargs)
     29     if self.neo_reader.block_count() > 1 and block_index is None:
     30         raise Exception(
     31             "This dataset is multi-block. Spikeinterface can load one block at a time. "
     32             "Use 'block_index' to select the block to be loaded."
     33         )

File ~/.conda/envs/aind-ephys-pipeline/lib/python3.10/site-packages/spikeinterface/extractors/neoextractors/neobaseextractor.py:66, in _NeoBaseExtractor.get_neo_io_reader(cls, raw_class, **neo_kwargs)
     64 neoIOclass = getattr(rawio_module, raw_class)
     65 neo_reader = neoIOclass(**neo_kwargs)
---> 66 neo_reader.parse_header()
     68 return neo_reader

File ~/.conda/envs/aind-ephys-pipeline/lib/python3.10/site-packages/neo/rawio/baserawio.py:211, in BaseRawIO.parse_header(self)
    197 """
    198 Parses the header of the file(s) to allow for faster computations
    199 for all other functions
    200 
    201 """
    202 # this must create
    203 # self.header['nb_block']
    204 # self.header['nb_segment']
   (...)
    208 # self.header['spike_channels']
    209 # self.header['event_channels']
--> 211 self._parse_header()
    212 self._check_stream_signal_channel_characteristics()
    213 self.is_header_parsed = True

File ~/.conda/envs/aind-ephys-pipeline/lib/python3.10/site-packages/neo/rawio/openephysbinaryrawio.py:85, in OpenEphysBinaryRawIO._parse_header(self)
     81 def _parse_header(self):
     82     folder_structure, all_streams, nb_block, nb_segment_per_block, possible_experiments = explore_folder(
     83         self.dirname, self.experiment_names
     84     )
---> 85     check_folder_consistency(folder_structure, possible_experiments)
     86     self.folder_structure = folder_structure
     88     # all streams are consistent across blocks and segments.
     89     # also checks that 'continuous' and 'events' folder are present

File ~/.conda/envs/aind-ephys-pipeline/lib/python3.10/site-packages/neo/rawio/openephysbinaryrawio.py:732, in check_folder_consistency(folder_structure, possible_experiment_names)
    730             if segment_stream_names is None:
    731                 segment_stream_names = stream_names
--> 732             assert segment_stream_names == stream_names, (
    733                 "Inconsistent continuous streams across segments! Streams for different "
    734                 "segments in the same experiment must be the same. Check your open ephys "
    735                 "folder."
    736             )
    738 # check that "continuous" streams across blocks (experiments)
    739 block_stream_names = None

AssertionError: Inconsistent continuous streams across segments! Streams for different segments in the same experiment must be the same. Check your open ephys folder.
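
For what it's worth, the assertion comes from neo's folder-consistency check, so calling explore_folder directly (the same call shown in the traceback) can reveal which continuous streams each segment reports. A diagnostic sketch, assuming neo receives the same folder the extractor would:

from neo.rawio.openephysbinaryrawio import explore_folder

# Same call as in OpenEphysBinaryRawIO._parse_header (see traceback above);
# the assertion fires when stream names differ between segments.
folder_structure, all_streams, nb_block, nb_segment_per_block, possible_experiments = explore_folder(
    str(raw_ephys_data_path), None  # experiment_names=None explores everything
)
print("blocks:", nb_block, "segments per block:", nb_segment_per_block)
print(all_streams)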

And the export function requires the recording:

from spikeinterface.exporters import export_to_phy

output_folder = session_folder / "phy_output"
export_to_phy(sorting_analyzer=sorting_analyzer, output_folder=output_folder)
/home/prevosto/.conda/envs/aind-ephys-pipeline/lib/python3.10/site-packages/spikeinterface/exporters/to_phy.py:150: UserWarning: Recording will not be copied since sorting_analyzer is recordingless.
  warnings.warn("Recording will not be copied since sorting_analyzer is recordingless.")

---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
Cell In[6], line 10
      8 output_folder = session_folder / "phy_output"
      9 # the export process is fast because everything is pre-computed
---> 10 export_to_phy(sorting_analyzer=sorting_analyzer, output_folder=output_folder)

File ~/.conda/envs/aind-ephys-pipeline/lib/python3.10/site-packages/spikeinterface/exporters/to_phy.py:230, in export_to_phy(sorting_analyzer, output_folder, compute_pc_features, compute_amplitudes, sparsity, copy_binary, remove_if_exists, template_mode, add_quality_metrics, add_template_metrics, additional_properties, dtype, verbose, use_relative_path, **job_kwargs)
    226     sorting_analyzer.compute("principal_components", n_components=5, mode="by_channel_local", **job_kwargs)
    228 pca_extension = sorting_analyzer.get_extension("principal_components")
--> 230 pca_extension.run_for_all_spikes(output_folder / "pc_features.npy", **job_kwargs)
    232 max_num_channels_pc = max(len(chan_inds) for chan_inds in used_sparsity.unit_id_to_channel_indices.values())
    233 pc_feature_ind = -np.ones((len(unit_ids), max_num_channels_pc), dtype="int64")

File ~/.conda/envs/aind-ephys-pipeline/lib/python3.10/site-packages/spikeinterface/postprocessing/principal_component.py:372, in ComputePrincipalComponents.run_for_all_spikes(self, file_path, verbose, **job_kwargs)
    370 sorting_analyzer = self.sorting_analyzer
    371 sorting = sorting_analyzer.sorting
--> 372 assert (
    373     sorting_analyzer.has_recording() or sorting_analyzer.has_temporary_recording()
    374 ), "To compute PCA projections for all spikes, the sorting analyzer needs the recording"
    375 recording = sorting_analyzer.recording
    377 # assert sorting.get_num_segments() == 1

AssertionError: To compute PCA projections for all spikes, the sorting analyzer needs the recording

Is there a way to create a SortingAnalyzer object from the separate groups that can still be associated with the recording data?
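
In principle the recording can be re-attached with set_temporary_recording once it loads, which is why the failing si.load above is the blocker. A sketch of that step, assuming the load succeeded:

# Re-attach the matching preprocessed recording so that export_to_phy
# can compute per-spike PC features (sketch; the load above is what fails).
recording_group0 = si.load(preproc_rec_group0_path, base_folder=raw_ephys_data_path)
sorting_analyzer.set_temporary_recording(recording_group0)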

@alejoe91
Collaborator

Is raw_ephys_data the parent folder of your Open Ephys session folder?

Otherwise, it seems it can't reload the Open Ephys folder. Did you move or modify any of the subfolders (except for adding the settings files)?

@vncntprvst
Contributor Author

Yes, raw_ephys_data is the parent folder of the Open Ephys session folder.
I haven't moved or modified subfolders.
Here's the current session data directory structure:

<session_folder path>
├── processed_data
│   ├── spike_sorting_output
│   └── tracking_output
├── raw_ephys_data
│   ├── data_description.json
│   ├── Record Node 101
│   ├── subject.json
│   └── workdir
├── raw_video_data
│   ├── Basler_acA640-750um__24441171__20250401_165928367.mp4
│   └── Basler_acA640-750um__24441215__20250401_165931525.mp4
└── task_data
    └── Bpod

I get the same issue if I do this:

preproc_path = Path(session_folder) / "processed_data/spike_sorting_output/preprocessed"
recording_paths = sorted(preproc_path.glob("experiment1_Record Node 101#*.json"))
base_folder = Path(session_folder) / "raw_ephys_data"

# Load all recordings
recordings = [si.load(path, base_folder=base_folder) for path in recording_paths]
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
Cell In[19], line 8
      6 # base_folder = sorting_folder 
      7 base_folder = Path(session_folder) / "raw_ephys_data"
----> 8 recordings = [si.load(path, base_folder=base_folder) for path in recording_paths]

[... same traceback as above, down to neo/rawio/openephysbinaryrawio.py:732, in check_folder_consistency ...]

AssertionError: Inconsistent continuous streams across segments! Streams for different segments in the same experiment must be the same. Check your open ephys folder.

@alejoe91
Collaborator

alejoe91 commented May 28, 2025

I think this could be related to #41; it's fixed by AllenNeuralDynamics/aind-ephys-results-collector#12 and is part of #46

@alejoe91
Collaborator

@vncntprvst can you retry this with the v1.0 pipeline?

@vncntprvst
Contributor Author

I'm still having some issues here.
Creating the recordings, combining them, and creating the SortingAnalyzers all work:

# %%
import spikeinterface as si
import os

# %%
data_path = "..."
preproc_folder = "processed_data/spike_sorting_output/preprocessed"
base_folder = ".../raw_ephys_data/"

group_files = [
    f"experiment1_Record Node 101#Neuropix-PXI-100.ProbeA_recording1_group{i}.json"
    for i in range(4)
]

# %%
# Load recordings
recordings = [
    si.load(os.path.join(data_path, preproc_folder, fname), base_folder=base_folder)
    for fname in group_files
]

# %%
# Combine recordings into a single recording object
combined_recording = si.aggregate_channels(recordings)

# %%
print(combined_recording)

# %%
# Create sorting analyzers
postprocessed_folder = "processed_data/spike_sorting_output/postprocessed"
group_zarr_files = [
    f"experiment1_Record Node 101#Neuropix-PXI-100.ProbeA_recording1_group{i}.zarr"
    for i in range(4)
]

sorting_analyzers = [
    si.load(os.path.join(data_path, postprocessed_folder, fname))
    for fname in group_zarr_files
]

But combining the SortingAnalyzers doesn't work:

merged_sorting = si.concatenate_sortings(sorting_analyzers)

---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
File .../scripts/export_to_Phy.py:3
      1 #%%
      2 # Combine sorting analyzers into a single sorting analyzer
----> 3 merged_sorting = si.concatenate_sortings(sorting_analyzers)

File ~/.conda/envs/aind-ephys-pipeline/lib/python3.10/site-packages/spikeinterface/core/segmentutils.py:362, in ConcatenateSegmentSorting.__init__(self, sorting_list, total_samples_list, ignore_times, sampling_frequency_max_diff)
    360 all_has_recording = all([sorting.has_recording() for sorting in sorting_list])
    361 if not all_has_recording:
--> 362     assert total_samples_list is not None, (
    363         "Some concatenated sortings don't have a registered recording. "
    364         "Call sorting.register_recording() or set `total_samples_list` kwarg."
    365     )
    366     assert len(total_samples_list) == len(
    367         sorting_list
    368     ), "`total_samples_list` should have the same number of elements as `sorting_list`"
    369     assert all(
    370         [s.get_num_segments() == 1 for s in sorting_list]
    371     ), "All sortings are expected to be monosegment."

AssertionError: Some concatenated sortings don't have a registered recording. Call sorting.register_recording() or set `total_samples_list` kwarg.
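
For reference, a sketch of what the assertion message asks for: pass total_samples_list when the sortings carry no registered recording. Note that concatenate_sortings joins segments in time; since these groups are channel splits of one recording, aggregate_units may be the more appropriate merge:

import spikeinterface as si

# Pull the plain sortings out of the analyzers (concatenate_sortings expects
# sorting objects), then satisfy the assertion with per-group sample counts,
# assuming recordings[i] corresponds to sorting_analyzers[i].
sortings = [analyzer.sorting for analyzer in sorting_analyzers]
total_samples = [rec.get_num_samples() for rec in recordings]
merged_sorting = si.concatenate_sortings(sortings, total_samples_list=total_samples)

# The groups are channel splits of the same recording, not time segments,
# so merging the unit sets may be what is actually wanted here:
merged_units = si.aggregate_units(sortings)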

Setting preprocessed recordings as temporary recordings also fails:

for sorting_analyzer, recording in zip(sorting_analyzers, recordings):
    sorting_analyzer.set_temporary_recording(recording)

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
File .../scripts/export_to_Phy.py:3
      1 # %%
      2 for sorting_analyzer, recording in zip(sorting_analyzers, recordings):
----> 3     sorting_analyzer.set_temporary_recording(recording)

File ~/.conda/envs/aind-ephys-pipeline/lib/python3.10/site-packages/spikeinterface/core/sortinganalyzer.py:743, in SortingAnalyzer.set_temporary_recording(self, recording, check_dtype)
    741     raise ValueError(exception_str)
    742 if not np.array_equal(recording.get_channel_locations(), self.get_channel_locations()):
--> 743     raise ValueError("Recording channel locations do not match.")
    744 if self._recording is not None:
    745     warnings.warn("SortingAnalyzer recording is already set. The current recording is temporarily replaced.")

ValueError: Recording channel locations do not match.

Here's a diagnostic on the SortingAnalyzers, if that helps:

import numpy as np

for idx, (analyzer, recording) in enumerate(zip(sorting_analyzers, recordings)):
    # Check if analyzer has a recording attached
    has_recording = hasattr(analyzer, "has_recording") and analyzer.has_recording()
    print(f"Analyzer {idx}: has_recording = {has_recording}")
    # If not, try to check channel locations
    try:
        analyzer_locs = analyzer.get_channel_locations()
        recording_locs = recording.get_channel_locations()
        match = np.array_equal(analyzer_locs, recording_locs)
        print(f"Analyzer {idx}: channel locations match = {match}")
        if not match:
            print(f"  Analyzer channel locations: {analyzer_locs}")
            print(f"  Recording channel locations: {recording_locs}")
    except Exception as e:
        print(f"Analyzer {idx}: Error comparing channel locations - {e}")

Analyzer 0: has_recording = False
Analyzer 0: channel locations match = True
Analyzer 1: has_recording = False
Analyzer 1: channel locations match = False
  Analyzer channel locations: [[250.   0.]
 [282.   0.]
 [250.  15.]
 [282.  15.]
 [250.  30.]
 [282.  30.]
 [250.  45.]
 [282.  45.]
 [250.  60.]
 [282.  60.]
 [250.  75.]
 [282.  75.]
 [250.  90.]
 [282.  90.]
 [250. 105.]
 [282. 105.]
 [250. 120.]
 [282. 120.]
 [250. 135.]
 [282. 135.]
 [250. 150.]
 [282. 150.]
 [250. 165.]
 [282. 165.]
 [250. 180.]
 [282. 180.]
 [282. 195.]
 [250. 210.]
 [282. 210.]
 [250. 225.]
 [282. 225.]
 [250. 240.]
 [282. 240.]
 [250. 255.]
 [282. 255.]
 [250. 270.]
 [282. 270.]
 [250. 285.]
 [282. 285.]
 [250. 300.]
 [282. 300.]
 [250. 315.]
 [282. 315.]
 [250. 330.]
 [282. 330.]
 [250. 345.]
 [282. 345.]
 [250. 360.]
 [282. 360.]
 [250. 375.]
 [282. 375.]
 [250. 390.]
 [282. 390.]
 [250. 405.]
 [282. 405.]
 [250. 420.]
 [282. 420.]
 [250. 435.]
 [282. 435.]
 [250. 450.]
 [282. 450.]
 [250. 465.]
 [282. 465.]
 [250. 480.]
 [282. 480.]
 [250. 495.]
 [282. 495.]
 [250. 510.]
 [282. 510.]
 [250. 525.]
 [282. 525.]
 [250. 540.]
 [282. 540.]
 [250. 555.]
 [282. 555.]
 [250. 570.]
 [282. 570.]
 [250. 585.]
 [282. 585.]
 [250. 600.]
 [282. 600.]
 [250. 615.]
 [282. 615.]
 [250. 630.]
 [282. 630.]
 [250. 645.]
 [282. 645.]
 [250. 660.]
 [282. 660.]
 [250. 675.]
 [282. 675.]
 [250. 690.]
 [282. 690.]
 [250. 705.]
 [282. 705.]]
  Recording channel locations: [[ 750.  720.]
 [ 782.  720.]
 [ 750.  735.]
 [ 782.  735.]
 [ 750.  750.]
 [ 782.  750.]
 [ 750.  765.]
 [ 782.  765.]
 [ 750.  780.]
 [ 782.  780.]
 [ 750.  795.]
 [ 782.  795.]
 [ 750.  810.]
 [ 782.  810.]
 [ 750.  825.]
 [ 782.  825.]
 [ 750.  840.]
 [ 782.  840.]
 [ 750.  855.]
 [ 782.  855.]
 [ 750.  870.]
 [ 782.  870.]
 [ 750.  885.]
 [ 782.  885.]
 [ 750.  900.]
 [ 782.  900.]
 [ 782.  915.]
 [ 750.  930.]
 [ 782.  930.]
 [ 750.  945.]
 [ 782.  945.]
 [ 750.  960.]
 [ 782.  960.]
 [ 750.  975.]
 [ 782.  975.]
 [ 750.  990.]
 [ 782.  990.]
 [ 750. 1005.]
 [ 782. 1005.]
 [ 750. 1020.]
 [ 782. 1020.]
 [ 750. 1035.]
 [ 782. 1035.]
 [ 750. 1050.]
 [ 782. 1050.]
 [ 750. 1065.]
 [ 782. 1065.]
 [ 750. 1080.]
 [ 782. 1080.]
 [ 750. 1095.]
 [ 782. 1095.]
 [ 750. 1110.]
 [ 782. 1110.]
 [ 750. 1125.]
 [ 782. 1125.]
 [ 750. 1140.]
 [ 782. 1140.]
 [ 750. 1155.]
 [ 782. 1155.]
 [ 750. 1170.]
 [ 782. 1170.]
 [ 750. 1185.]
 [ 782. 1185.]
 [ 750. 1200.]
 [ 782. 1200.]
 [ 750. 1215.]
 [ 782. 1215.]
 [ 750. 1230.]
 [ 782. 1230.]
 [ 750. 1245.]
 [ 782. 1245.]
 [ 750. 1260.]
 [ 782. 1260.]
 [ 750. 1275.]
 [ 782. 1275.]
 [ 750. 1290.]
 [ 782. 1290.]
 [ 750. 1305.]
 [ 782. 1305.]
 [ 750. 1320.]
 [ 782. 1320.]
 [ 750. 1335.]
 [ 782. 1335.]
 [ 750. 1350.]
 [ 782. 1350.]
 [ 750. 1365.]
 [ 782. 1365.]
 [ 750. 1380.]
 [ 782. 1380.]
 [ 750. 1395.]
 [ 782. 1395.]
 [ 750. 1410.]
 [ 782. 1410.]
 [ 750. 1425.]
 [ 782. 1425.]]
Analyzer 2: has_recording = False
Analyzer 2: channel locations match = False
  Analyzer channel locations: [[500.   0.]
 [532.   0.]
 [500.  15.]
 [532.  15.]
 [500.  30.]
 [532.  30.]
 [500.  45.]
 [532.  45.]
 [500.  60.]
 [532.  60.]
 [500.  75.]
 [532.  75.]
 [500.  90.]
 [532.  90.]
 [500. 105.]
 [532. 105.]
 [500. 120.]
 [532. 120.]
 [500. 135.]
 [532. 135.]
 [500. 150.]
 [532. 150.]
 [500. 165.]
 [532. 165.]
 [500. 180.]
 [532. 180.]
 [500. 195.]
 [532. 195.]
 [500. 210.]
 [532. 210.]
 [500. 225.]
 [532. 225.]
 [500. 240.]
 [532. 240.]
 [500. 255.]
 [532. 255.]
 [500. 270.]
 [532. 270.]
 [500. 285.]
 [532. 285.]
 [500. 300.]
 [532. 300.]
 [500. 315.]
 [532. 315.]
 [500. 330.]
 [532. 330.]
 [500. 345.]
 [532. 345.]
 [500. 360.]
 [532. 360.]
 [500. 375.]
 [532. 375.]
 [500. 390.]
 [532. 390.]
 [500. 405.]
 [532. 405.]
 [500. 420.]
 [532. 420.]
 [500. 435.]
 [532. 435.]
 [500. 450.]
 [532. 450.]
 [500. 465.]
 [532. 465.]
 [500. 480.]
 [532. 480.]
 [500. 495.]
 [532. 495.]
 [500. 510.]
 [532. 510.]
 [500. 525.]
 [532. 525.]
 [500. 540.]
 [532. 540.]
 [500. 555.]
 [532. 555.]
 [500. 570.]
 [532. 570.]
 [500. 585.]
 [532. 585.]
 [500. 600.]
 [532. 600.]
 [500. 615.]
 [532. 615.]
 [500. 630.]
 [532. 630.]
 [500. 645.]
 [532. 645.]
 [500. 660.]
 [532. 660.]
 [500. 675.]
 [532. 675.]
 [500. 690.]
 [532. 690.]
 [500. 705.]
 [532. 705.]]
  Recording channel locations: [[   0.  720.]
 [  32.  720.]
 [   0.  735.]
 [  32.  735.]
 [   0.  750.]
 [  32.  750.]
 [   0.  765.]
 [  32.  765.]
 [   0.  780.]
 [  32.  780.]
 [   0.  795.]
 [  32.  795.]
 [   0.  810.]
 [  32.  810.]
 [   0.  825.]
 [  32.  825.]
 [   0.  840.]
 [  32.  840.]
 [   0.  855.]
 [  32.  855.]
 [   0.  870.]
 [  32.  870.]
 [   0.  885.]
 [  32.  885.]
 [   0.  900.]
 [  32.  900.]
 [   0.  915.]
 [  32.  915.]
 [   0.  930.]
 [  32.  930.]
 [   0.  945.]
 [  32.  945.]
 [   0.  960.]
 [  32.  960.]
 [   0.  975.]
 [  32.  975.]
 [   0.  990.]
 [  32.  990.]
 [   0. 1005.]
 [  32. 1005.]
 [   0. 1020.]
 [  32. 1020.]
 [   0. 1035.]
 [  32. 1035.]
 [   0. 1050.]
 [  32. 1050.]
 [   0. 1065.]
 [  32. 1065.]
 [   0. 1080.]
 [  32. 1080.]
 [   0. 1095.]
 [  32. 1095.]
 [   0. 1110.]
 [  32. 1110.]
 [   0. 1125.]
 [  32. 1125.]
 [   0. 1140.]
 [  32. 1140.]
 [   0. 1155.]
 [  32. 1155.]
 [   0. 1170.]
 [  32. 1170.]
 [   0. 1185.]
 [  32. 1185.]
 [   0. 1200.]
 [  32. 1200.]
 [   0. 1215.]
 [  32. 1215.]
 [   0. 1230.]
 [  32. 1230.]
 [   0. 1245.]
 [  32. 1245.]
 [   0. 1260.]
 [  32. 1260.]
 [   0. 1275.]
 [  32. 1275.]
 [   0. 1290.]
 [  32. 1290.]
 [   0. 1305.]
 [  32. 1305.]
 [   0. 1320.]
 [  32. 1320.]
 [   0. 1335.]
 [  32. 1335.]
 [   0. 1350.]
 [  32. 1350.]
 [   0. 1365.]
 [  32. 1365.]
 [   0. 1380.]
 [  32. 1380.]
 [   0. 1395.]
 [  32. 1395.]
 [   0. 1410.]
 [  32. 1410.]
 [   0. 1425.]
 [  32. 1425.]]
Analyzer 3: has_recording = False
Analyzer 3: channel locations match = True
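
One pattern in this output: within each mismatched pair the recording locations are offset from the analyzer's by a constant shift ((+500, +720) for pair 1, (-500, +720) for pair 2), as if the analyzers store group-relative coordinates while the reloaded recordings carry probe-global ones. A quick check of that reading:

import numpy as np

# If the mismatch is a rigid shift, the element-wise difference between
# the two location arrays collapses to a single unique row.
for idx, (analyzer, recording) in enumerate(zip(sorting_analyzers, recordings)):
    a_locs = analyzer.get_channel_locations()
    r_locs = recording.get_channel_locations()
    if a_locs.shape != r_locs.shape:
        print(f"Analyzer {idx}: shape mismatch {a_locs.shape} vs {r_locs.shape}")
        continue
    print(f"Analyzer {idx}: unique offsets = {np.unique(r_locs - a_locs, axis=0)}")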

Any suggestions?
