Roughly, in 3-D, using MSE works well, especially in the setting (mean_init, MSE, all_loss).
However, using LNCC, the results are distinctly cursed, even when mean-initialized. Notably, this happens without any crazy intensity drift.
Look at this whacky patella:
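For reference, here is a minimal sketch of the two settings being compared: mean initialization of the atlas plus an MSE or LNCC similarity term. The function names are placeholders, and the LNCC here is a global-NCC stand-in rather than the windowed LNCC actually used, so treat it as illustrative only.

```python
import torch

def mean_initialize(images: torch.Tensor) -> torch.Tensor:
    # "mean_init": start the atlas as the voxelwise mean of the training
    # images, instead of random noise ("rand_init").
    return images.mean(dim=0, keepdim=True)

def mse_similarity(warped: torch.Tensor, atlas: torch.Tensor) -> torch.Tensor:
    # Mean-squared-error similarity between each warped image and the atlas.
    return ((warped - atlas) ** 2).mean()

def lncc_similarity(warped: torch.Tensor, atlas: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # Global normalized cross correlation as a stand-in for windowed LNCC,
    # returned as a loss (1 - NCC) so that lower is better.
    w = warped - warped.mean()
    a = atlas - atlas.mean()
    ncc = (w * a).mean() / (w.std() * a.std() + eps)
    return 1 - ncc
```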
Using the atlas generated by the mean-squares setting, we can compare, in terms of DICE, the performance of the ICON_atlas algorithm using the old, pregenerated atlas against the same algorithm using the new, ICON-generated atlas.
We get the following results:

| Atlas | DICE |
| --- | --- |
| No atlas (register directly) | 71.3 |
| Old atlas (currently in use in oai_analysis_2) | 70.6 |
| New atlas (ICON-generated) | 71.6 |
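As a reminder of the metric being reported, here is a minimal per-label DICE sketch; the tensor names and label handling are illustrative, not the evaluation code behind the table above.

```python
import torch

def dice_score(pred: torch.Tensor, target: torch.Tensor, label: int) -> float:
    # Dice overlap for one label: 2 * |A ∩ B| / (|A| + |B|).
    a = (pred == label)
    b = (target == label)
    intersection = (a & b).sum().item()
    return 2.0 * intersection / (a.sum().item() + b.sum().item() + 1e-8)
```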
In 2-D, the atlas generated by (rand_init, LNCC, all_loss) looks like this:
The atlas generated by (rand_init, LNCC, all_loss + 40 * squared mean pixel disp) looks like this:
The atlas generated by (mean_init, LNCC, all_loss + 900 * squared mean pixel disp) looks like this:
The atlas generated by (mean_init, LNCC, all_loss) looks like this:
I'm not sure what to make of this: making an atlas with LNCC seems hard, and the cursed wobbles seem to be a general phenomenon.
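The "squared mean pixel disp" term in the settings above penalizes the average (over images) of the displacement fields. A minimal sketch of that regularizer, assuming the per-image displacement fields are stacked into one tensor; the coefficient (40 or 900) is the weight quoted in the settings.

```python
import torch

def squared_mean_displacement_penalty(displacements: torch.Tensor) -> torch.Tensor:
    # displacements: [N, dim, H, W] in 2-D or [N, dim, D, H, W] in 3-D,
    # one displacement field per image being registered to the atlas.
    mean_disp = displacements.mean(dim=0)  # average field over the N images
    return (mean_disp ** 2).mean()         # mean squared magnitude per pixel

# e.g. the (mean_init, LNCC, all_loss + 900 * sq pix disp) setting would be
# loss = all_loss + 900 * squared_mean_displacement_penalty(displacements)
```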
Side question: is penalizing the mean Jacobian more powerful?
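One way to read that question: instead of penalizing the mean displacement itself, penalize the spatial derivative (Jacobian) of the mean displacement field, pushing the mean warp's Jacobian toward the identity. A hedged 2-D finite-difference sketch of that idea, purely illustrative and not anything implemented in the project; note that because differentiation is linear, the Jacobian of the mean displacement equals the mean of the per-image Jacobians, so the two readings coincide.

```python
import torch

def mean_jacobian_penalty(displacements: torch.Tensor) -> torch.Tensor:
    # displacements: [N, 2, H, W]; penalize the spatial gradient of the mean
    # displacement field, i.e. push the mean warp's Jacobian toward identity.
    mean_disp = displacements.mean(dim=0)              # [2, H, W]
    d_dy = mean_disp[:, 1:, :] - mean_disp[:, :-1, :]  # finite difference along H
    d_dx = mean_disp[:, :, 1:] - mean_disp[:, :, :-1]  # finite difference along W
    return (d_dy ** 2).mean() + (d_dx ** 2).mean()
```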
Finally, we can actually regularize LNCC with (mean_init, LNCC, all_loss + 900 * squared mean pixel disp, extra-long training), as demonstrated in this notebook.
Notebook for live investigation
Marc's math:
Then the variation with respect to u_i is:
And therefore the gradient is:
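A sketch of how that derivation plausibly goes, assuming the term being differentiated is the squared-mean-displacement penalty from the settings above, with u_i the displacement field of image i and \bar{u} = (1/N) sum_j u_j. This is my reconstruction under that assumption, not a transcription of Marc's equations.

```latex
% Assumed regularizer: squared mean displacement, integrated over the domain
% R(u_1, \dots, u_N) = \lambda \int_\Omega \| \bar{u}(x) \|^2 \, dx,
% \quad \bar{u} = \tfrac{1}{N} \sum_j u_j .

% Variation with respect to u_i in the direction \delta u:
\[
  \delta_{u_i} R[\delta u]
  = \frac{2\lambda}{N} \int_\Omega \big\langle \bar{u}(x), \, \delta u(x) \big\rangle \, dx ,
\]

% and therefore the (L^2) gradient with respect to u_i is
\[
  \nabla_{u_i} R = \frac{2\lambda}{N} \, \bar{u} .
\]
```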
discuss
74.91