
experiment comparisons #11

@jordandekraker

Description


As discussed in the grp-brainscores meeting, @rcruces and I had these comparisons to suggest (Raul, feel free to add anything I missed):

  • LamaReg (labelsonly)
  • LamaReg (robust - likely the new default)
  • ANTs default
  • ANTs robust (add high-res iterations only, even though the low-res initialization will still fail)
  • ANTs steelman (no low-res iterations, robust high-res iterations; note this may actually be worse for some low-resolution datasets) — see the antsRegistration sketch after this list
  • fMRIprep defaults (if different from above)
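
A minimal sketch of how the two non-default ANTs variants above could be expressed as raw antsRegistration calls, just to make the comparison concrete. The single SyN stage, metric, and iteration schedules here are placeholders (my assumptions), not the settings we would actually ship:

```python
# Sketch only: a single SyN stage per variant (a real call would add rigid +
# affine stages first). Schedules below are illustrative, not agreed defaults.
import subprocess

def ants_syn(fixed, moving, out_prefix, convergence, shrink, smooth):
    """Run one SyN stage with the given multi-resolution schedule."""
    cmd = [
        "antsRegistration",
        "--dimensionality", "3",
        "--output", out_prefix,
        "--initial-moving-transform", f"[{fixed},{moving},1]",  # center-of-mass init
        "--transform", "SyN[0.1,3,0]",
        "--metric", f"CC[{fixed},{moving},1,4]",
        "--convergence", convergence,
        "--shrink-factors", shrink,
        "--smoothing-sigmas", smooth,
    ]
    subprocess.run(cmd, check=True)

# "ANTs robust": keep the low-res levels but add extra high-res iterations.
ants_syn("fixed.nii.gz", "moving.nii.gz", "robust_",
         convergence="[100x70x50x50,1e-6,10]",
         shrink="8x4x2x1",
         smooth="3x2x1x0vox")

# "ANTs steelman": drop the low-res levels and iterate only at high res.
ants_syn("fixed.nii.gz", "moving.nii.gz", "steelman_",
         convergence="[70x50,1e-6,10]",
         shrink="2x1",
         smooth="1x0vox")
```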

You may still get additional reviewer requests, but I think this is solid coverage as a starting point.

Also, when it comes to test datasets, focus on generalizability rather than within-dataset precision. Maybe keep only 20-50 subjects per dataset (that should be enough for a robust estimate of registration success), but try to cover a few broad cases (a sampling sketch follows this list):

  • 7T (PNI)
  • 3T research-quality (MICs)
  • clinical quality (low res, maybe talk to @ella-sah about a good dataset to use)
  • possibly some HCP1200 (since it's so widely used)
  • possibly a disease case, like ADNI
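
A minimal sketch of drawing a fixed, reproducible subsample of subjects from each dataset; the paths, dataset labels, and the 30-subject cap are assumptions for illustration, not decided values:

```python
# Sketch: reproducibly pick a small subject subsample per dataset.
import random
from pathlib import Path

def sample_subjects(bids_dir, n=30, seed=0):
    """Return up to n subject IDs drawn reproducibly from a BIDS directory."""
    subjects = sorted(p.name for p in Path(bids_dir).glob("sub-*") if p.is_dir())
    random.Random(seed).shuffle(subjects)
    return subjects[:n]

for dataset in ["PNI", "MICs", "clinical", "HCP1200", "ADNI"]:
    picked = sample_subjects(f"/data/{dataset}", n=30)  # hypothetical paths
    print(dataset, len(picked), picked[:3])
```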
