I attended the Computer Science Colloquium on September 6th given by Margaret M. Burnett, titled "End-User Software Engineering: Surprise-Explain-Reward."

The talk primarily focused on a series of studies in which Dr. Burnett and her colleagues tested user performance on the task of correcting erroneous spreadsheet formulas. The specific spreadsheet program used was developed in Forms/3 and was equipped with tools to help users in their task. The primary tools at a user's disposal included the ability to validate or invalidate the output of a formula (via a check mark or an X, respectively), to set range limits on values, and to view a graphical flow chart displaying the dependencies between different cells. If an error was found (and the X used), the spreadsheet application shaded other cells in varying degrees of yellow, depending on the likelihood that they caused the error. If users wanted to validate their program, they could use the check mark to work through it: cells that had not been checked were outlined in red, those that had been validated were outlined in blue, and partially validated cells were outlined in purple. The basic philosophy behind the software is What You See Is What You Test, or WYSIWYT. The aim of WYSIWYT is to help users debug programs while unknowingly incorporating good software engineering practices (e.g. by showing them that errors exist in the first place and where those errors might originate).
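To make the shading idea concrete, here is a minimal sketch of how a WYSIWYT-style tool might score cells for fault likelihood from the user's check and X marks plus the cell dependency graph. This is my own illustration under simplified assumptions, not Dr. Burnett's actual algorithm; the names (`deps`, `marks`, `fault_likelihood`) and the scoring rule are hypothetical.

```python
from collections import defaultdict

# Cell dependencies: each cell maps to the cells its formula reads from.
deps = {
    "total":      ["price", "qty"],
    "tax":        ["total", "rate"],
    "grandtotal": ["total", "tax"],
}

# User judgments on displayed outputs: True = check mark (looks right),
# False = X mark (looks wrong). Unmarked cells are simply absent.
marks = {"grandtotal": False, "total": True}

def contributors(cell, deps, seen=None):
    """All cells whose values flow (transitively) into `cell`."""
    if seen is None:
        seen = set()
    for d in deps.get(cell, []):
        if d not in seen:
            seen.add(d)
            contributors(d, deps, seen)
    return seen

def fault_likelihood(deps, marks):
    """Score each cell by how many X-marked vs. check-marked cells it feeds."""
    failed = defaultdict(int)
    passed = defaultdict(int)
    for marked_cell, ok in marks.items():
        for c in contributors(marked_cell, deps) | {marked_cell}:
            if ok:
                passed[c] += 1
            else:
                failed[c] += 1
    # 0 = no suspicion, 1 = every marked consumer of this cell was X-marked.
    return {c: failed[c] / (failed[c] + passed[c])
            for c in set(failed) | set(passed)}

if __name__ == "__main__":
    for cell, score in sorted(fault_likelihood(deps, marks).items()):
        shade = "dark yellow" if score > 0.6 else "light yellow" if score > 0 else "none"
        print(f"{cell:>10}: suspicion {score:.2f} -> shade {shade}")
```

In this toy example, cells that feed only the X-marked grandtotal (tax, rate) come out more suspicious than cells the user has already vouched for via the check-marked total, which is roughly the intuition the yellow shading conveyed in the demo.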

The second focus of the talk (and the major focus of a second talk given by Dr. Burnett at the I.C.S. on the following day) was trends in how users behaved toward and interacted with the provided WYSIWYT functionality: in particular, the differences men and women showed in their willingness to use the tools provided by the program, their tendencies to invoke the program's 'help' features to learn how to use unfamiliar features, and correlations with a user's self-efficacy about their computer literacy. The last point, though, was only addressed by saying that men, in general, had higher computer self-efficacy than women. The research found that men were more willing to 'tinker' with the program tools than women (at least when the tools were simple to tinker with), while women were more likely to try editing the formulas in the spreadsheets directly. Dr. Burnett reasoned that the higher editing rates observed in women were the result of a lower desire to learn new techniques, which was in turn caused by a low degree of computer self-efficacy. She also reasoned that higher editing rates led to more new user errors being introduced, and thus a downward spiral of even lower self-efficacy ensued. She noted that both men and women performed nearly identically on the prescribed task of fixing the original errors, and that men and women scored similarly on proficiency tests. Therefore, the key to getting women to want to learn about new features seems to be raising their level of self-efficacy.

In order to address this issue, a help section was added to the program that allowed users to watch a video of somebody in a situation similar to their own receiving help from an expert. The videos were interactive and showed a spreadsheet being updated alongside the actual camera shot of a (female) user and a (male) expert. During the videos, women apparently switched their focus between the camera view and the spreadsheet, while men tended to look mainly at the spreadsheet. Dr. Burnett cited this as evidence that the women in the study were identifying with the user in the video, while the men were not. Dr. Burnett gave a few references explaining why women might respond this way, and also noted that this video setup (male expert and female user) was specifically chosen to elicit a high degree of improvement in female self-efficacy. Though the findings of this video experiment were interesting, I thought it was curious that Dr. Burnett did not repeat the experiment with a male in the user role, or at least address the possibility that men may have responded differently to a video showing a man, rather than a woman, needing and receiving help (i.e. if this setup was chosen to improve women's self-efficacy, then the fact that men were less interested in this particular setup should not be used as evidence that men are less interested in help videos in general).

Last modified 7 September 2007 at 4:27 pm by MOtte