Sensors and computer simulations generate enormous amounts of data that cannot be examined directly by humans. Visual display of information makes it possible to present large amounts of data in a very compact form. However, we now have so much data that a single image can no longer convey the information: the data consists of non-scalar values such as vectors or tensors; the data domain is high-dimensional, possibly including a time dimension; the resolution of the data is so high that only small parts can be viewed in detail; or certain relations exist among the data items that are described by structures other than functions on a contiguous spatial domain. But even if we had a one-to-one mapping between the data and the colors of all visible pixels, it is crucial to realize that the decisions made in the choice of this mapping are already part of the interpretation process, emphasizing certain structures and subduing others. This can have positive effects, uncovering otherwise inconceivable relations in the data, but it may also produce false evidence. In particular, the type of pre-interpretation performed in the course of the display cannot be easily specified, and the application scientists who interpret the data afterwards are often left with the uneasy feeling that some detail might have been lost or added in the process.
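The point that the data-to-color mapping itself interprets the data can be made concrete with a minimal sketch (the transfer functions below are illustrative, not from the text): the same scalar values pushed through two different mappings emphasize entirely different structures.

```python
# Two hypothetical transfer functions applied to the same scalar data.
# Which structures a viewer perceives depends on this choice alone.

def linear_gray(v, vmin=0.0, vmax=1.0):
    """Map a scalar linearly to a gray level in [0, 255]."""
    t = (v - vmin) / (vmax - vmin)
    t = min(max(t, 0.0), 1.0)          # clamp out-of-range values
    g = round(255 * t)
    return (g, g, g)

def threshold_red(v, threshold=0.5):
    """Binary map: values above the threshold become red, the rest black.
    All variation below the threshold is subdued entirely."""
    return (255, 0, 0) if v >= threshold else (0, 0, 0)

data = [0.1, 0.45, 0.55, 0.9]
gray_image = [linear_gray(v) for v in data]       # smooth gradient
binary_image = [threshold_red(v) for v in data]   # hard feature boundary
```

The linear map preserves all gradations but may hide a boundary in low-contrast grays; the threshold map makes one boundary unmissable while erasing everything else, which is exactly the kind of pre-interpretation the text describes.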
One method of dealing with this problem is the application of several different visualization methods to the same data set. The hope here is that, while each of the methods is imperfect and in danger of adding or omitting some information, their pros and cons differ, so that examining all the visuals offers a basis for a more trustworthy interpretation from the application point of view (Fig. 1). Another approach is to offer several views in the same image, enriched by additional context data (Fig. 2). In some cases we may also want to consciously apply a very radical simplification in the visualization process, relying on a single feature, in the hope that this will help us understand the main global effects rather than being confused by too much detail (Fig. 3). Visualization is an interactive process. By offering a few parameters that allow the user to emphasize various aspects of the data, we hope to eliminate the danger of misinterpretation. However, here too we cannot avoid a priori decisions about which parameters to offer, as the parameter space of visualization methods is itself so large that it cannot be explored thoroughly. One idea in this context is to analyze the data automatically and try to adapt the parameter controls to the data itself (Fig. 4).
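One simple instance of adapting a parameter control to the data itself can be sketched as follows (a hypothetical example, not the method shown in the figures): instead of asking the user for fixed color-scale bounds, derive robust bounds from data percentiles so that a few outliers do not wash out the interesting range.

```python
# Sketch: derive a color-scale range automatically from the data,
# using robust percentiles rather than the raw min/max.

def percentile(sorted_vals, p):
    """Nearest-rank percentile of an already sorted list (0 <= p <= 100)."""
    k = round(p / 100 * (len(sorted_vals) - 1))
    k = max(0, min(len(sorted_vals) - 1, k))
    return sorted_vals[k]

def auto_color_range(values, lo_pct=10, hi_pct=90):
    """Suggest (vmin, vmax) for a color scale from robust percentiles.
    With larger data sets, tighter cutoffs such as 2/98 would be typical."""
    s = sorted(values)
    return percentile(s, lo_pct), percentile(s, hi_pct)

values = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 100.0]  # one outlier
vmin, vmax = auto_color_range(values)  # excludes the outlier at 100.0
```

This is of course itself an a priori decision (which percentiles, which statistic), illustrating the text's caveat that the parameter space of such choices cannot be explored exhaustively.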
Fig. 2. The unstable invariant set (blue) of a knotted flow (a knotted trajectory in red) and their projections onto the walls.
Fig. 3. A direct visualization of the motion generated by a contrast agent in the blood flow, with color-encoded velocities, and a simplified view of the same process using just one color but fading the detected motion to allow a smoother transition.
Fig. 4. Multiscale visualization of a vector field. The multiscale consists of flow-aligned basis functions belonging to the hierarchy generated by an algebraic multigrid process operating on a flow-aligned tensor. Top: a fine level (a) and two coarse levels (b, c); below: different level-combination methods.