I am having problems creating some datamats for my PLS analysis of event-related fMRI data. A few of my datamats end up being only a few kilobytes in size, while most of them are around 8 MB. I can't figure out why this is happening with only some of my subjects (the NIfTI files that I am loading in all seem to be in order). In the small datamats, it looks like the st_coords and st_datamat variables are empty. I'd appreciate any help you can provide!
Thanks.
Here is a link to my data. I've included my pre-processed NIfTI files and the datamats that I created for the two subjects that have been causing me problems. I re-created these datamats using the "consider all voxels as brain" option. I ran an analysis on these datamats, and it is clear from looking at the result that the brain is empty. Any help as to why this might be happening would be greatly appreciated. https://www.dropbox.com/sh/qh66ghbo59ayqjl/AABmSHIeXhx6EC0JbmadGC7ja
Thanks,
Sabrina
Some thoughts...
1) use 16-bit int images - the two you uploaded are 32-bit real. Check the pipeline for errors; MRI data is at best 16-bit, so there is no need to go to higher precision
2) something else is wrong with your processing - you should not have the bright band around the edge of the brain
3) the bright band is causing the problem. If you look at your result file, you can see a huge band of artifact - if you scale the LV to be 0/0 for Pos/Neg you can see a pattern, but it's likely garbage...
these issues should go away if you fix the nii files...
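On the 16-bit point above: here is a minimal numpy sketch of the down-cast. With real .nii files you would do this with nibabel or fslmaths and let the NIfTI scl_slope/scl_inter fields absorb the rescaling; the array below is just a toy stand-in for one 32-bit-real EPI frame.

```python
import numpy as np

def to_int16(vol_f32):
    """Rescale a float volume into the int16 range and cast down.

    MRI signal fits comfortably in 16 bits, so float32 storage
    doubles file size for no extra information.
    """
    vol = np.asarray(vol_f32, dtype=np.float64)
    vmax = np.abs(vol).max()
    if vmax == 0:
        return vol.astype(np.int16)
    scale = 32767.0 / vmax                 # map data into [-32767, 32767]
    return np.round(vol * scale).astype(np.int16)

# toy array standing in for one 32-bit-real EPI frame
frame = np.random.default_rng(0).normal(1000.0, 5.0, (4, 4, 3)).astype(np.float32)
frame_i16 = to_int16(frame)
```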
nancy
Hi Nancy (and PLS other people),
Thank you for taking a look. The bands appeared because we selected the "consider all voxels as brain" option when creating the datamats for these particular problem subjects. When we created their datamats normally (thresholded at 0.15), the datamats were empty (so we cannot run an analysis on them at all). This does not seem to be an error in our pre-processing, because we ran all the participants through the same preprocessing pipeline and are able to create normal datamats for most of the subjects.
I have now uploaded the data from two subjects for which the datamat files were not created properly ("bad files") and the data from two subjects with which we've had no issues ("good files"). We preprocessed the scans in the same way and used the same procedure to create the datamats for both (define brain region automatically, threshold 0.15, normalizing data with ref. scans), but you will notice that the datamats for the "bad files" are empty.
https://www.dropbox.com/s/1c6e8vh3741pukt/Updated%20PLS%20problem.zip
In comparing the NIfTI files, we have not been able to find any difference (the output of 3dinfo is exactly the same), so we can't figure out why the datamats are created for one but not the other. We must be missing something. If anyone could have a look and see what is causing the issue in the problem files, that would be very helpful.
Thanks,
Sabrina
Your data are completely corrupted and not usable.
1) the bright band I mentioned is seen when loading the nii file into a viewer (like mricron)
2) your pipeline is essentially giving all of your voxels the same value - the signal intensity is ~1000, with no variance to speak of
3) In your "bad" images, you have some random voxels with extremely high values - that is why all of the voxels go away when you threshold
4) if you are positive your processing pipeline is not causing the signal intensity range to change, then check your raw dicom images to make sure that your scanner is not acting up. I suspect it is not a scanner problem.
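For anyone hitting the same thing, a toy numpy illustration of point 3. I'm assuming here that the "define brain region automatically" option keeps voxels above threshold × the volume maximum (a hypothetical stand-in for the actual PLS rule); under that assumption, one rogue voxel inflates the cutoff until nothing else survives.

```python
import numpy as np

def auto_brain_mask(vol, threshold=0.15):
    """Keep voxels brighter than threshold * volume max.
    Assumed stand-in for PLS's automatic brain-region rule."""
    return vol > threshold * vol.max()

rng = np.random.default_rng(1)
vol = rng.normal(1000.0, 50.0, size=(10, 10, 10))   # plausible EPI intensities

good_mask = auto_brain_mask(vol)    # cutoff ~175: every voxel survives

bad = vol.copy()
bad[0, 0, 0] = 1e9                  # one corrupted, extremely bright voxel
bad_mask = auto_brain_mask(bad)     # cutoff jumps to 1.5e8: mask is empty
```

The same 0.15 threshold produces a full mask on the clean volume and retains only the corrupted voxel on the bad one, which matches the empty st_coords/st_datamat symptom.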
I've made a pdf showing you where the issues are, and what the signal intensity range should look like...
you can download it here:
https://www.dropbox.com/sh/m5foqok0qqr1m37/AACsX5EAQ0OcAgKl6BbJCMmaa
Good luck!!!
Nancy
It looks like it turned out to be a masking problem. When I applied a group-level MNI mask for the EPI images (instead of thresholding at 0.15 or considering all voxels as brain), the datamats were created just fine. Though my data did have a few voxels with extremely high values, it doesn't seem like the data were completely corrupted. Normalizing the data in our preprocessing pipeline does not appear to have been the problem either, since analyzing these images (after applying the mask) yielded reasonable results.
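For the archives, the fix in toy numpy form: with an explicit anatomical mask, the set of retained voxels is fixed in advance, so a handful of extreme values can no longer empty the datamat. Shapes and the mask below are made up purely for illustration.

```python
import numpy as np

def apply_group_mask(vol, mask):
    """Extract in-mask voxel values. Unlike a max-based intensity
    threshold, an explicit mask is immune to intensity outliers."""
    return vol[mask.astype(bool)]

rng = np.random.default_rng(2)
vol = rng.normal(1000.0, 50.0, size=(10, 10, 10))
vol[0, 0, 0] = 1e9                   # the same kind of rogue voxel

mask = np.zeros(vol.shape, dtype=bool)
mask[2:8, 2:8, 2:8] = True           # toy 'group-level MNI' brain mask

data = apply_group_mask(vol, mask)   # 216 voxels, outlier excluded
```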
However, since you've already taken a look at our data, I was hoping you could confirm just one last thing. We've reviewed the original papers and the website to ensure that our preprocessing steps are appropriate for a PLS analysis. Would you mind confirming that everything we've done is reasonable? Here is an ordered list of what has been done to our data before loading it into the datamats. Please let us know if any of this is incorrect:
- Generated physiological noise regressors
- Slice-time correction
- Motion corrected each subject to the first run of the session
- Created whole brain masks
- Demeaned each dataset
- Freesurfer-derived masks to generate white matter and CSF regressors per run
- Regressed headmotion, linear trend and white matter & CSF out of each run
- FWHM 6mm smoothing
- Linear registration to MNI space
- Analyzed in PLS using group level MNI space mask.
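In case it helps others reading along, the nuisance-regression step in our list (head motion, linear trend, WM/CSF) amounts to ordinary least squares per voxel. A self-contained numpy sketch with synthetic regressors; all names and sizes here are made up for illustration, not taken from our actual pipeline code.

```python
import numpy as np

def regress_out(ts, confounds):
    """Residualize one voxel time series against nuisance regressors
    (intercept + confounds) via ordinary least squares."""
    X = np.column_stack([np.ones(len(ts)), confounds])
    beta, *_ = np.linalg.lstsq(X, ts, rcond=None)
    return ts - X @ beta

n_tr = 100
t = np.arange(n_tr, dtype=float)
rng = np.random.default_rng(3)
motion = rng.normal(size=(n_tr, 6))       # 6 rigid-body motion parameters
trend = (t - t.mean())[:, None]           # linear drift regressor
wm_csf = rng.normal(size=(n_tr, 2))       # white-matter / CSF signals
confounds = np.hstack([motion, trend, wm_csf])

ts = 1000 + 0.5 * t + rng.normal(size=n_tr)   # drifting voxel time series
clean = regress_out(ts, confounds)            # drift and mean are removed
```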
Thank you so much for taking a look at this problem for us, Nancy! Gary, Joe and I really appreciate it.
- Sabrina
OK - Generated physiological noise regressors
OK (justify in the paper) - Slice-time correction
OK - Motion corrected each subject to the first run of the session
OK - Created whole brain masks
*Not necessary for PLS (justify in paper) - Demeaned each dataset
OK (justify in paper) - Freesurfer-derived masks to generate white matter and CSF regressors per run
OK (justify in paper) - Regressed headmotion, linear trend and white matter & CSF out of each run
OK - FWHM 6mm smoothing
OK - Linear registration to MNI space
:-D - Analyzed in PLS using group level MNI space mask.
*We use the onset TR to normalize the subsequent signal intensities, so we automatically account for drift in the signal - the option you selected is rather controversial - so if you keep it in, be prepared to justify in the paper.
The other options where I indicated "justify" have also had some pro/con discussion in the literature
also... it would seem that demeaning/detrending is redundant if you are already regressing out a linear drift?
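That redundancy is easy to check numerically: assuming the confound regression includes an intercept term (as OLS implementations typically do), demeaning the series beforehand changes nothing about the residuals. A small demonstration with a made-up drifting series:

```python
import numpy as np

n = 50
t = np.arange(n, dtype=float)
rng = np.random.default_rng(4)
ts = 500 + 2.0 * t + rng.normal(size=n)   # synthetic drifting voxel series

X = np.column_stack([np.ones(n), t])      # intercept + linear trend

def residualize(y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

plain = residualize(ts)                    # no prior demeaning
demeaned_first = residualize(ts - ts.mean())
# the intercept column absorbs any constant, so both residuals match
```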