My application, VisBio, is designed to fluidly visualize multidimensional images. To do so, it creates a low-resolution (typically 96x96) thumbnail of each image during data import. VisBio used to mash these thumbnails into one big FieldImpl called "thumbs" with a MathType such as:

  (bio_time -> (bio_slice -> ((ImageLine, ImageElement) -> value)))

where ((ImageLine, ImageElement) -> value) is the data's core image MathType, and bio_time and bio_slice are RealTypes representing additional dimensional axes (in this case, time points and focal planes, respectively).

However, with extremely large datasets, I realized this strategy was consuming too much memory during the VisAD display transform. For example, one of our datasets consists of 85 time points and 33 slices, for a total of 2805 thumbnails created upon data import. 85 x 33 x 96 x 96 x 4 bytes per pixel = ~99MB of thumbnail data, which is not really a problem on a modern computer. But once that 99MB of raw data was transformed in the display, it started eating upwards of 1GB of RAM, not to mention the slowness of the initial transform during the data display phase.

The upside of this strategy was that animation was very smooth (e.g., with mappings of bio_time -> Display.Animation and bio_slice -> Display.SelectValue). Unfortunately, the RAM requirement was too great.

Thus, I recently switched VisBio to a different strategy. It no longer maintains a master "thumbs" data object, but instead keeps an array of FlatFields that get linked into the displays on an as-needed basis. The result is that the memory requirement was drastically reduced, and VisBio can now effectively make good on its claim of being able to visualize any dataset, no matter how large.

The downside is that animation has become somewhat choppy. I realize there is a transform computation involved each time I call setData with the new thumbnail. However, part of the choppiness is due to Java garbage collection. When animating, the process is typically smooth for a few frames, followed by one choppy frame, then smooth again, and so on. While studying VisBio's memory monitor, I noticed that these choppy frames correspond to Java garbage collection operations. That is, the memory usage creeps up by about 20MB per frame, then drops back down to previous levels during the choppy frame.

My question is: can I fix this behavior, and if so, how? I realize that this problem may be beyond the control of VisAD's display model, but I thought I'd ask anyway to see if there are any thoughts or suggestions.
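For reference, here is a minimal sketch of the two strategies, not VisBio's actual code. It uses the RealType names quoted above; the frame counts, resolution, class name, and the pixel-filling step are placeholders, and the display/mapping setup is assumed from the mappings mentioned in the post.

  import visad.*;
  import visad.java3d.DisplayImplJ3D;

  public class ThumbSketch {
    public static void main(String[] args) throws Exception {
      // RealTypes named as in the post
      RealType time = RealType.getRealType("bio_time");
      RealType slice = RealType.getRealType("bio_slice");
      RealType line = RealType.getRealType("ImageLine");
      RealType element = RealType.getRealType("ImageElement");
      RealType value = RealType.getRealType("value");

      // core image MathType: ((ImageLine, ImageElement) -> value)
      RealTupleType imageDomain = new RealTupleType(line, element);
      FunctionType imageType = new FunctionType(imageDomain, value);

      int numTimes = 85, numSlices = 33, res = 96;

      DisplayImplJ3D display = new DisplayImplJ3D("display");
      display.addMap(new ScalarMap(element, Display.XAxis));
      display.addMap(new ScalarMap(line, Display.YAxis));
      display.addMap(new ScalarMap(value, Display.RGB));
      display.addMap(new ScalarMap(time, Display.Animation));
      display.addMap(new ScalarMap(slice, Display.SelectValue));

      // --- Old strategy: one master "thumbs" FieldImpl with MathType
      //     (bio_time -> (bio_slice -> ((ImageLine, ImageElement) -> value)))
      FunctionType sliceType = new FunctionType(slice, imageType);
      FunctionType thumbsType = new FunctionType(time, sliceType);
      Integer1DSet timeSet = new Integer1DSet(time, numTimes);
      Integer1DSet sliceSet = new Integer1DSet(slice, numSlices);
      Integer2DSet imageSet = new Integer2DSet(imageDomain, res, res);

      FieldImpl thumbs = new FieldImpl(thumbsType, timeSet);
      for (int t = 0; t < numTimes; t++) {
        FieldImpl perSlice = new FieldImpl(sliceType, sliceSet);
        for (int s = 0; s < numSlices; s++) {
          FlatField thumb = new FlatField(imageType, imageSet);
          // thumb.setSamples(...) would fill in real thumbnail pixels here
          perSlice.setSample(s, thumb);
        }
        thumbs.setSample(t, perSlice);
      }
      DataReferenceImpl thumbsRef = new DataReferenceImpl("thumbs");
      thumbsRef.setData(thumbs);
      // the entire ~99MB object goes through the display transform at once
      display.addReference(thumbsRef);

      // --- New strategy: an array of FlatFields, linked as needed ---
      // Only the current thumbnail is attached to the display, so far less
      // data is transformed, but every swap triggers a fresh transform.
      FlatField[] flatThumbs = new FlatField[numTimes * numSlices];
      // ... flatThumbs filled during import ...
      DataReferenceImpl currentRef = new DataReferenceImpl("current");
      display.addReference(currentRef);
      // when the animation or slider moves to (t, s):
      int t = 0, s = 0;
      currentRef.setData(flatThumbs[t * numSlices + s]);
    }
  }

In the second approach, each setData call is where the per-frame transform cost is paid and where the previous frame's transform output becomes garbage, which is the behavior described above.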
Secondly, as promised in the subject line, here is a NullPointerException I received while animating in VisBio:

  java.lang.NullPointerException
    at visad.ShadowType.addIndices(ShadowType.java:329)
    at visad.ShadowTupleType.<init>(ShadowTupleType.java:78)
    at visad.ShadowRealTupleType.<init>(ShadowRealTupleType.java:71)
    at visad.java3d.ShadowRealTupleTypeJ3D.<init>(ShadowRealTupleTypeJ3D.java:53)
    at visad.java3d.RendererJ3D.makeShadowRealTupleType(RendererJ3D.java:124)
    at visad.RealTupleType.buildShadowType(RealTupleType.java:501)
    at visad.java3d.ShadowFunctionOrSetTypeJ3D.<init>(ShadowFunctionOrSetTypeJ3D.java:66)
    at visad.java3d.ShadowSetTypeJ3D.<init>(ShadowSetTypeJ3D.java:43)
    at visad.java3d.RendererJ3D.makeShadowSetType(RendererJ3D.java:136)
    at visad.SetType.buildShadowType(SetType.java:121)
    at visad.DataDisplayLink.prepareData(DataDisplayLink.java:258)
    at visad.DataRenderer.prepareAction(DataRenderer.java:314)
    at visad.DisplayRenderer.prepareAction(DisplayRenderer.java:844)
    at visad.DisplayImpl.doAction(DisplayImpl.java:1644)
    at visad.ActionImpl.run(ActionImpl.java:353)
    at visad.util.ThreadPool$ThreadMinnow.run(ThreadPool.java:95)

Thanks,
-Curtis