I attended the Computer Science and Institute of Cognitive Science colloquia on October 5th and 6th, respectively. Both talks were given by Ben Shneiderman of the University of Maryland. The first was titled The Thrill of Discovery: Information Visualization for High-Dimensional Spaces, and the second Creativity Support Tools: Accelerating Discovery & Innovation.

The first talk focused on software tools that allow users to visualize data in useful ways. Dr. Shneiderman concentrated on two areas of data visualization that he is personally involved with: treemapping techniques, and a graphical visualization tool called Spotfire. Treemaps are a method of visualizing tree-structured data in a two (or arguably three) dimensional way. Data points are represented as rectangles, and two dimensions of each point are encoded as rectangle area and color, respectively. The rectangles are then organized so that they fill a continuous space, and a further dimension of the data can be conveyed by each rectangle's position within that space (for example, similar items can be placed near each other based on their location in the tree, or positioned as a function of some other data attribute). One example is a treemap of the New York Stock Exchange available at www.smartmoney.com, in which stock loss/gain is represented on a spectrum from red to green, market cap is reflected as rectangle size, and rectangles are clustered by the industry of the security they represent.
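
To make the layout idea concrete, here is a minimal sketch of the classic slice-and-dice treemap layout in Python. This is only an illustration of the general technique, not the algorithm behind the smartmoney.com map (which uses a more sophisticated layout), and the stock figures are made up.

    # Minimal slice-and-dice treemap sketch (illustrative only).
    # Each item has a size, which determines its share of the area;
    # color would encode a second attribute and is omitted here.

    def slice_and_dice(items, x, y, w, h, vertical=True):
        """Lay out items as rectangles filling the region (x, y, w, h).

        items: list of (label, size) pairs; size sets the area share.
        Returns a list of (label, x, y, w, h) rectangles. For a real
        tree, recurse into each child's rectangle, alternating the
        orientation at each level.
        """
        total = sum(size for _, size in items)
        rects = []
        offset = 0.0
        for label, size in items:
            share = size / total
            if vertical:                  # stack rectangles top to bottom
                rh = h * share
                rects.append((label, x, y + offset, w, rh))
                offset += rh
            else:                         # stack rectangles left to right
                rw = w * share
                rects.append((label, x + offset, y, rw, h))
                offset += rw
        return rects

    # Example: hypothetical market caps laid out in a unit square.
    stocks = [("AAPL", 50), ("MSFT", 30), ("IBM", 20)]
    for rect in slice_and_dice(stocks, 0, 0, 1, 1):
        print(rect)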

Spotfire is a graphical package that grew out of the PhD work of one of Shneiderman's students. The tool allows users to visualize high-dimensional data in a number of ways. Shneiderman demoed several of the application's features but focused mainly on three: the first was the ability to cluster data based on similar attributes and visualize the result as a tree; the second was a tool that let users graphically search for patterns in time series data; and the third automatically created two-dimensional projections of the original data and then fit a linear or quadratic trend line to the result.
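
The third feature is easy to sketch. The talk did not say how Spotfire computes its projections, so I use PCA here as a stand-in assumption, and the data is random; this is my own illustration of the idea, not Spotfire's implementation.

    # Sketch: project high-dimensional data to 2D, then fit a
    # quadratic trend line to the projection.
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(size=(100, 5))      # 100 points in 5 dimensions

    # Project onto the first two principal components (PCA via SVD).
    centered = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt[:2].T            # shape (100, 2)

    # Fit a quadratic trend line y = a*x^2 + b*x + c to the projection;
    # deg=1 would give the linear trend line instead.
    x, y = proj[:, 0], proj[:, 1]
    a, b, c = np.polyfit(x, y, deg=2)
    print(f"trend: y = {a:.3f}x^2 + {b:.3f}x + {c:.3f}")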

I enjoyed the talk and can respect the need for better visualization tools for high-dimensional data. My one major criticism is that the talk was mostly devoted to showcasing the software that Shneiderman and his group developed. I would have liked some discussion of why they chose to implement the specific functionality that they did (suggestions from domain experts? personal ideas? previous research? a theory of human information processing?), as well as examples or testimonials of how the features have actually helped people in practice. I should also mention that I personally disagree with a couple of remarks Shneiderman made about related fields in computer science, in particular something along the lines of “why would anybody waste their time getting a computer to understand human speech... using a mouse and clicking on keys is more efficient,” though it is possible that I missed the point he was trying to make.

The Institute of Cognitive Science talk, the next day, focused mainly on different theories of creativity and on the need to develop methodology for determining the effectiveness of creativity aids (using Spotfire as a practical example). Shneiderman devoted most of the first hour to reviewing different theories of how people achieve creativity, ranging from extremely structured approaches (e.g. the engineering method) to totally unstructured ones (e.g. Eureka moments). He then addressed the difficulty of applying the scientific method to evaluate the effectiveness of tools such as Spotfire, and proposed a new method of evaluation that could complement the scientific method in certain domains. He called the new method Science 2.0 (a name I disliked) and explained that it would rely more on in-depth case studies, as opposed to repeatable laboratory experiments. I understand the need for such evaluation tools: given the cost in time and money of doing comprehensive evaluations of tools such as Spotfire (as well as of how they impact creativity in general), repeatable experiments are essentially impossible. Further, how do you know that certain findings generalize to the entire human race without running the experiment on everybody? Shneiderman says that he is still developing the idea of Science 2.0.

I personally think there are a few ways this idea could be formulated that might help the Science 2.0 concept gain acceptance alongside the other tools in a researcher's arsenal. First of all, the scientific method (Science 1.0, as Shneiderman calls it) should always be something we strive for; Science 2.0 is invoked only when Science 1.0 becomes impossible. Case studies should be performed like laboratory tests: there should always be control case studies, and the scientist should avoid influencing the outcome of the study in any way, such as by modifying the Spotfire program in response to user suggestions (obviously this method would only be used to evaluate something like Spotfire once it reached a certain level of development). In this framework, Science 2.0 can be viewed as a random sampling approach to Science 1.0: as the number of independent case studies grows, the probability that the case studies collectively point to the correct result approaches one.
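
To see why the sampling view gives the convergence I am claiming, here is a toy calculation of my own (not Shneiderman's): if each independent case study reaches the correct conclusion with probability p > 0.5, the probability that a majority of n studies is correct tends to one as n grows, a result known as Condorcet's jury theorem.

    # Probability that a majority of n independent case studies,
    # each correct with probability p, agree on the correct result.
    from math import comb

    def majority_correct(n, p):
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(n // 2 + 1, n + 1))

    # Even modestly reliable case studies (p = 0.6) become convincing
    # in aggregate: roughly 0.60, 0.68, 0.85, 0.98 for n below.
    for n in (1, 5, 25, 101):
        print(n, round(majority_correct(n, p=0.6), 4))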

In my opinion, one of the main drawbacks of the Science 2.0 idea is that it attempts to reformulate what many other fields have been doing for years into a framework that research scientists might find easier to swallow. For instance, engineers routinely perform case studies to evaluate a new design: they run enough tests to determine, with a certain confidence, that the design will work a certain percentage of the time. Marketing professionals often ask a target panel for their opinions of a new product before deciding to produce it, and they work out how many people they need to ask to get an adequate representation of the population. In both cases there are only degrees of belief, and each field determines how strongly it needs to believe in something before making a decision (e.g. that the bridge will not fall down, or that the product will be profitable). Though it cannot be applied in all situations, the attractive thing about the scientific method (Science 1.0) is that it does not have any of this ambiguity.
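
The marketing example rests on a standard sample-size calculation. The following is the textbook formula for estimating a proportion within a margin of error; it is my illustration of the point, not anything from the talk.

    # Sample size needed to estimate a proportion within margin of
    # error e at a given confidence level (textbook formula).
    def sample_size(z=1.96, p=0.5, e=0.05):
        """z: z-score for the confidence level (1.96 ~ 95%);
        p: expected proportion (0.5 is the worst case);
        e: desired margin of error."""
        return (z**2 * p * (1 - p)) / e**2

    print(round(sample_size()))   # ~384 respondents for +/-5% at 95%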
