So, I ran a test of the newly compiled version of `functionhighpassandvariancenormalize` at the scale of a fully concatenated HCA subject, and the result at full scale was frankly disappointing.
First, on a positive and confirmatory note, the memory usage of the "1st pass" `functionhighpassandvariancenormalize` calls on the individual runs was reduced by ~25%, exactly consistent with the testing in interpreted MATLAB mode reported above (https://github.com/Washington-University/HCPpipelines/pull/168#issuecomment-594908252).
But, for whatever reason, that memory reduction didn't scale up to the "2nd pass" `functionhighpassandvariancenormalize` call (the VN on the concatenated run), which only dropped by roughly 9%. (Incidentally, I don't see a "pre-change" memory value for the concatenated VN stage of the earlier simple 2-run MR-FIX test, so it's possible that my findings would have been borne out even for just a 2-run test. Might be worth running that test to compare.)
2nd pass VN, Pre-changes (~42 GB):
1101.01user 2110.48system 1:06:10elapsed 80%CPU (0avgtext+0avgdata 42412500maxresident)k

2nd pass VN, Post-changes (~38 GB):
1712.31user 4460.70system 1:46:04elapsed 96%CPU (0avgtext+0avgdata 38753080maxresident)k
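As a quick sanity check of those `maxresident` figures, here is a minimal Python sketch of the conversion (GNU time reports max resident set size in kilobytes; I'm treating 1 GB = 1e6 kB to match the rounded numbers quoted above, and the percent reduction is unit-independent either way):

```python
# Sanity check of the /usr/bin/time "maxresident" values quoted above.
# GNU time reports max RSS in kilobytes; 1 GB = 1e6 kB here, matching
# the rounded figures in this comment.

pre_kb = 42_412_500   # 2nd pass VN, pre-changes
post_kb = 38_753_080  # 2nd pass VN, post-changes

print(f"pre:  ~{pre_kb / 1e6:.1f} GB")                     # ~42.4 GB
print(f"post: ~{post_kb / 1e6:.1f} GB")                    # ~38.8 GB
print(f"reduction: ~{(1 - post_kb / pre_kb) * 100:.1f}%")  # ~8.6%, not ~25%
```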
On the full concatenated HCA data, the `fix` stage is taking 30 GB, so unfortunately we don't appear to have gotten the 2nd pass VN stage down to the same memory usage as `fix` (which we would have, had we achieved a 25% reduction on the 2nd pass VN stage).
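To make that last step explicit, a small sketch of the arithmetic (using the same rounded figures as above; the 30 GB `fix` number is from this test):

```python
# What a 25% reduction on the 2nd pass VN would have given, versus what
# we actually observed and what the fix stage uses.

pre_gb = 42.4   # 2nd pass VN, pre-changes (from the time output above)
post_gb = 38.8  # 2nd pass VN, post-changes
fix_gb = 30.0   # fix stage on the full concatenated HCA data

target_gb = pre_gb * 0.75  # the hoped-for 25% reduction
print(f"25% reduction target: ~{target_gb:.1f} GB")  # ~31.8 GB, near fix's 30 GB
print(f"actual post-change:   ~{post_gb:.1f} GB")    # still ~7 GB above that target
```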