Hi Conner
I am working on an Optical Coherence Tomography (OCT) project and we are considering using OME(big)TIFFs to store the data. As the current project is designed, Z indicates depth into the sample, and we scan with X as the fast axis and Y as the slow axis. Therefore we end up with ZX images stacked along the Y axis.
Thanks for starting the discussion. Your use case is actually similar to a few other imaging modalities where the data acquisition does not fit the conventional XY[other] order, e.g. OPT or lightsheet.
Is there a recommended way to store the data in a way that is congruent with our current definitions, or would we be better off redefining the axis definitions in our program such that depth is Y, fast axis is X, and slow axis is Z? I simply want to minimize the mental load for users so they don't have to transpose the dimensions in their head when analyzing the data.
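For concreteness, the second option (remapping the axes before writing) could be done in memory without copying pixel data. A minimal sketch with hypothetical dimension sizes, using NumPy:

```python
import numpy as np

# Hypothetical OCT volume as acquired: ZX frames stacked along Y,
# so the array axis order is (Y, Z, X) = (slow, depth, fast).
ny, nz, nx = 4, 8, 16
acquired = np.zeros((ny, nz, nx), dtype=np.uint16)

# Reorder to the conventional (Z, Y, X) plane-stack layout that most
# OME-TIFF writers and readers expect; moveaxis returns a view, so no
# pixel data is copied.
canonical = np.moveaxis(acquired, 0, 1)
print(canonical.shape)  # (8, 4, 16)
```

The writer would then emit the planes in the canonical order while the acquisition code keeps its native (Y, Z, X) indexing; only the names and sizes here are placeholders.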
An example of how such things can be represented with the current technology is this
IDR dataset. It is a lightsheet 5D volume (XYZ, time, and channels) with orthogonal projections in the XY, XZ, and YZ planes. Each projection is stored and represented as a separate 4D image, and the four images in the fileset have implicit spatial relationships between themselves.
It would be interesting to hear more about your pipeline and your users' expectations. Would these images be available via viewing tools? What type of downstream analysis are you performing?
Best,
Sebastien