You might think of these forms of interaction with data graphics as including source code editing, scripting commands, graphical interfaces, direct manipulation, and direct touch. I’ve written examples beside each of these categories.
Let’s consider each of these.
Source code editing. On the implementation level, a visual representation is defined by source code. Only a few lines of code need to be edited to set the visualization view to where the nodes of interest are located. The altered code is compiled and run to see if the visual result is as expected. If not, the procedure is repeated until the user is satisfied. Changing code lines, re-compiling, and test-running the visualization is the least direct form of interaction, as it exhibits large conceptual, spatial, and temporal separation.
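To make this concrete, here is a minimal sketch in TypeScript of what such hard-coded view settings might look like; the names (View, render) and the numbers are purely illustrative and not taken from any particular system. To inspect different nodes, the constants are edited, the program is recompiled, and the visualization is run again.

    // Minimal sketch (all names and numbers are illustrative): the view of the
    // node-link diagram is fixed by constants in the source code.
    interface View { centerX: number; centerY: number; zoom: number; }

    // Edit these values, recompile, and run again to check whether the
    // nodes of interest are now in view.
    const VIEW: View = { centerX: 420, centerY: 310, zoom: 2.5 };

    function render(view: View): void {
      // placeholder for drawing the node-link diagram under the given view
      console.log(`rendering at (${view.centerX}, ${view.centerY}) with zoom ${view.zoom}`);
    }

    render(VIEW);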
Scripting commands. Alternatively, the visualization may offer a scripting interface allowing the user to enter commands to change the view. Once issued, the commands take effect immediately while the visualization is running. In this scenario, no separate compilation is necessary, which reduces the temporal separation. Still, the interaction is rather indirect, and several commands may be necessary before the view is as desired.
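A minimal sketch of such a scripting interface, using a hypothetical command syntax like "view <x> <y> <zoom>"; the commands are parsed and applied while the visualization keeps running, so no recompilation is needed.

    // Minimal sketch of a scripting interface with a hypothetical command
    // syntax: commands take effect immediately while the visualization runs.
    type View = { centerX: number; centerY: number; zoom: number };
    const view: View = { centerX: 0, centerY: 0, zoom: 1 };

    function render(v: View): void {
      console.log(`rendering at (${v.centerX}, ${v.centerY}) with zoom ${v.zoom}`);
    }

    function executeCommand(line: string): void {
      const parts = line.trim().split(/\s+/);
      if (parts[0] === "view" && parts.length === 4) {
        view.centerX = Number(parts[1]);
        view.centerY = Number(parts[2]);
        view.zoom = Number(parts[3]);
      } else if (parts[0] === "zoom" && parts.length === 2) {
        view.zoom = Number(parts[1]);
      } else {
        console.log(`unknown command: ${line}`);
        return;
      }
      render(view);  // no recompilation; feedback appears right away
    }

    // Several commands may still be needed before the view is right:
    executeCommand("view 420 310 2");
    executeCommand("zoom 2.5");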
Graphical interface. The field of view is displayed in a graphical interface alongside the visualization. Standard controls such as buttons and sliders allow the user to easily shift the view and control its zoom factor. Any changes are immediately reflected in the graphical interface and the visualization. Because the graphical interface represents the view status and at the same time serves to manipulate it, the conceptual gap is narrowed. Yet the interaction (with the controls) and the visual feedback (in the graph visualization) are still spatially separated. A small sketch of such a slider control follows after this passage.

Direct manipulation. The user zooms the view directly by drawing an elastic rectangle around the nodes to be inspected in detail. This is a rather simple press-drag-release operation when using the mouse. During the interaction, visual feedback constantly indicates the frame that will make up the new view once the mouse button is released. Any necessary fine-tuning can be done using the mouse or trackpad. In this scenario, the manipulation of the view takes place directly in the visualization. There is no longer a spatial separation between the interaction and the visual feedback. Or is there?
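Here is the minimal sketch of the graphical-interface scenario, assuming a browser context; render is a stand-in for whatever actually redraws the node-link diagram, and all names are illustrative.

    // Minimal sketch, assuming a browser context: a standard slider placed
    // beside the visualization shows and sets the zoom factor of the view.
    type View = { centerX: number; centerY: number; zoom: number };
    const view: View = { centerX: 0, centerY: 0, zoom: 1 };

    function render(v: View): void {
      console.log(`zoom is now ${v.zoom}`);  // placeholder for the real redraw
    }

    const zoomSlider = document.createElement("input");
    zoomSlider.type = "range";
    zoomSlider.min = "0.5";
    zoomSlider.max = "10";
    zoomSlider.step = "0.1";
    zoomSlider.value = String(view.zoom);
    zoomSlider.addEventListener("input", () => {
      view.zoom = Number(zoomSlider.value);  // the control reflects and changes the view
      render(view);                          // the visualization updates immediately
    });
    document.body.appendChild(zoomSlider);   // controls live beside the visualization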
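And here is a sketch of the rubber-band zoom just described, before we come back to that question. It assumes the diagram is drawn in an SVG element with id "diagram" whose initial viewBox matches its pixel size, so the rectangle framed by press-drag-release can simply become the new viewBox; these details are assumptions for illustration, not part of the scenario itself.

    // Minimal sketch of the rubber-band zoom: press-drag-release frames the
    // nodes of interest; on release, the framed rectangle becomes the new view.
    const svg = document.querySelector<SVGSVGElement>("#diagram")!;
    let dragStart: { x: number; y: number } | null = null;

    svg.addEventListener("mousedown", (e) => {
      dragStart = { x: e.offsetX, y: e.offsetY };
    });

    svg.addEventListener("mousemove", (e) => {
      if (dragStart) {
        // ...update the elastic rectangle here as constant visual feedback...
      }
    });

    svg.addEventListener("mouseup", (e) => {
      if (!dragStart) return;
      const x = Math.min(dragStart.x, e.offsetX);
      const y = Math.min(dragStart.y, e.offsetY);
      const w = Math.abs(e.offsetX - dragStart.x);
      const h = Math.abs(e.offsetY - dragStart.y);
      if (w > 0 && h > 0) {
        svg.setAttribute("viewBox", `${x} ${y} ${w} ${h}`);  // zoom to the drawn frame
      }
      dragStart = null;
    });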
Direct touch. Indeed there remains some degree of separation. The interaction is carried out with the mouse, whereas the feedback is shown on the screen. To obtain a yet higher degree of directness, the interaction can alternatively be carried out using touch input on the display. Now, the interaction takes place exactly where the visual feedback is shown. A truly direct way of zooming in on a node-link diagram.
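As a sketch of this last step, the same rubber-band zoom can be driven by touch on the display using pointer events, under the same assumptions as before (an SVG element with id "diagram" and a viewBox in pixel coordinates); only input that the browser reports as touch is handled here, so the interaction happens exactly where the feedback is shown.

    // Minimal sketch of the rubber-band zoom carried out with direct touch
    // instead of the mouse, using pointer events on the display itself.
    const diagram = document.querySelector<SVGSVGElement>("#diagram")!;
    diagram.style.touchAction = "none";  // keep the browser from panning/zooming the page
    let touchStart: { x: number; y: number } | null = null;

    diagram.addEventListener("pointerdown", (e) => {
      if (e.pointerType === "touch") {
        touchStart = { x: e.offsetX, y: e.offsetY };
      }
    });

    diagram.addEventListener("pointerup", (e) => {
      if (!touchStart || e.pointerType !== "touch") return;
      const x = Math.min(touchStart.x, e.offsetX);
      const y = Math.min(touchStart.y, e.offsetY);
      const w = Math.abs(e.offsetX - touchStart.x);
      const h = Math.abs(e.offsetY - touchStart.y);
      if (w > 0 && h > 0) {
        diagram.setAttribute("viewBox", `${x} ${y} ${w} ${h}`);  // zoom to the touched frame
      }
      touchStart = null;
    });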
So, to know what level of interaction is appropriate, what should we consider? Our audience and our purpose?