What is visualization?
This paper describes a hardware/software architecture for interactive scientific visualization of large data sets on scalable clusters. The critical enabling technology for this architecture is a scalable, high-performance packet-switched network. Per-stage latency measured in microseconds is a necessary condition for real-time interactivity measured in milliseconds on clusters with thousands of nodes. In addition, packet switching is the basis for overcoming the scaling and capability limitations of traditional graphics supercomputer architectures. Overcoming these limitations requires close coupling among rendering nodes to maintain interactive performance, together with a non-blocking multi-stage network topology that is easiest to implement with centralized switching. In contrast, loose coupling is viable for displays, which may be remotely located across a Grid or legacy networks. This paper discusses the general problem of distributed scientific visualization, proposes an architectural solution to this problem, and presents two different approaches to implementing this architecture. The architecture overcomes limitations of previous technology by supporting direct volume rendering on large tiled displays and other novel effects.
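The latency argument above can be made concrete with a back-of-envelope sketch. Assuming (as a simplification not spelled out in the text) that compositing proceeds through a binary tree, the number of pipeline stages grows only logarithmically with node count, so microsecond stages keep the total well inside a typical interactive frame budget:

```python
import math

def total_pipeline_latency_us(num_nodes, per_stage_latency_us):
    """Latency through a binary compositing tree: one stage per tree level."""
    stages = math.ceil(math.log2(num_nodes))
    return stages * per_stage_latency_us

# 4096 nodes at 10 us per stage: 12 stages, 120 us total,
# a small fraction of a 16.7 ms (60 Hz) frame budget.
budget_ms = 1000.0 / 60.0
latency_us = total_pipeline_latency_us(4096, 10.0)
print(latency_us, latency_us / 1000.0 < budget_ms)  # 120.0 True
```

If per-stage latency were instead measured in milliseconds, the same 12-stage pipeline would consume the entire frame budget, which is the sense in which microsecond stages are a necessary condition for interactivity.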
Current visualization environments are often separated from the simulation resources they support by a network or shared file system. This makes concurrent visual simulation impossible and often introduces an additional requirement for data reduction. This paper presents an alternative architecture in which visualization is distributed throughout a simulation cluster, so that data may be concurrently simulated and visualized with no data movement. Furthermore, the paper argues that this architecture is the endpoint of clearly identifiable trends in commodity graphics workstation technology and may eventually be implementable in software.
In order to reach a general audience this paper begins with a high level survey of requirements in visualizing scalable data sets on clusters. It introduces basic concepts of surface and volume rendering, graphics accelerators, and issues in scaling parallel rendering algorithms. It formulates a compositor based model of scalable parallel rendering and identifies two shortcomings of the prior art in compositing architectures. It demonstrates that a network-centric compositing architecture removes the shortcomings of the prior art.
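The compositor-based model mentioned above can be illustrated with a minimal sort-last sketch: each node renders its portion of the data, and per-pixel fragments from all nodes are depth-sorted and blended with the Porter-Duff "over" operator. The function names and fragment layout here are illustrative assumptions, not the paper's actual interfaces; colors are premultiplied RGBA floats in [0, 1]:

```python
def over(front, back):
    """Porter-Duff 'over': blend a front RGBA sample onto a back sample.
    Both samples use premultiplied alpha."""
    fr, fg, fb, fa = front
    br, bg, bb, ba = back
    k = 1.0 - fa
    return (fr + br * k, fg + bg * k, fb + bb * k, fa + ba * k)

def composite_pixel(fragments):
    """Composite one pixel from per-node fragments of (depth, rgba).
    Fragments are sorted back to front, then folded with 'over'."""
    result = (0.0, 0.0, 0.0, 0.0)
    for _, rgba in sorted(fragments, key=lambda f: -f[0]):  # far to near
        result = over(rgba, result)
    return result

# An opaque near fragment (red, depth 1) hides a far one (blue, depth 5).
pixel = composite_pixel([(5.0, (0.0, 0.0, 1.0, 1.0)),
                         (1.0, (1.0, 0.0, 0.0, 1.0))])
print(pixel)  # (1.0, 0.0, 0.0, 1.0)
```

A scalable compositor applies this same per-pixel fold across the images produced by all rendering nodes; the shortcomings the paper identifies concern where and how that fold is performed, not the blending arithmetic itself.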
Current trends toward closer integration of memory, network, and graphics resources are all driving toward making this architecture feasible on commodity platforms. The model presented here represents a design that is to be deployed in prototype form at the Pittsburgh Supercomputing Center, and is being developed with U.S. government assistance to satisfy requirements for extremely large scale visualization capabilities in the national security arena.