The Virtual Bus Stop: Final Report

Participants: Jon Marbach, Jing Fang

Note: Although this is the final class report, the participants see this as an ongoing project, hence the present tense will be maintained when referring to the system. Also, the Problem Statement and Rationale are largely restatements from the original project proposal as these previous ideas are still current and applicable.

Problem Statement:

The Virtual Bus Stop is to be a prototype and simulation environment for an "intelligent" bus stop that will make using public transportation easier for everyone, especially for those with cognitive disabilities. It will be an immersive 3D environment that may be used to explore the problem domain by allowing potential users to get a feel for how such a bus stop might work and by capturing feedback from them. Simulations could be run to illustrate the shortcomings of current systems, showing the necessity of more advanced public transportation.

One potential part of the Virtual Bus Stop would be to allow users to make configuration changes to the bus stop such as placement and number of touch screens, proximity to road, or existence of other computing devices and displays. The rationale for these changes could be documented, creating a catalog of design decisions.

Through the use of a virtual hand, the user could interact with the display elements in the bus stop, possibly planning a route and being informed of a bus arrival time. Another possibility is to simulate the use of a handheld computing device to communicate with the buses and with the bus stop itself. A small piece of this vision is that a PDA could help identify the correct bus for a rider to board when three buses that appear to be identical arrive at the stop.

This project is a piece of the L3D group's Mobility for All project and is ultimately intended to be used to simulate and thereby evaluate the proposals of that project.

Rationale:

Immersive 3D visualization technologies have incredible potential benefit in many fields but, due to the high costs involved, have up to this point been utilized mainly in the defense and oil industries. We are very fortunate to have on campus an immersive visualization facility, the BP Center for Visualization, whose mission is in part to help expand visualization into other domains. This project is being developed for the center's 'IVE', a 12'w x 12'd x 10'h cube (three sides and a floor) onto which a 3D scene can be projected.

This project has been one of the first student research projects at the Visualization Center and has served as a foundation of the new center's relationship with the campus. This project has begun to lay groundwork in terms of software and operations for future projects at the center.

Course Relevance:

The importance of exploring the possibilities of an immersive 3D environment is that it may lead to new ways of designing, learning, and collaborating. The Virtual Bus Stop can potentially support design by allowing user modification, learning by capturing user feedback as users explore the environment, and collaboration by keeping a catalog of design changes that documents differing rationales. A distinct advantage of a virtual prototype over a physical one is that the computational elements of the bus stop can be simulated along with the physical ones, whereas a physical prototype represents only the latter.

The sheer scale of the IVE makes it very conducive to collaborative behavior. Although a few people can huddle around a single monitor, perhaps even viewing a stereographic 3D scene, having 144 sq ft of open space to move around in certainly provides a more spacious and comfortable alternative for as many as six collaborators.

As a learning environment, immersive 3D presents myriad possibilities, ranging from teaching chemistry to children, to helping train a cognitively disabled person to complete a task, to simulating what someone with autism sees and hears so that others can understand that person's point of view. In the case of the bus stop, the investigators from the Mobility for All project may be able to learn which of their ideas are most effective and which need further refinement, incrementally building domain knowledge.

Experience of Learning, Design, and Collaboration:

Since we are among the first student projects in the BP Center for Visualization, there were many things we did not know but needed to learn to continue the project. Our experience fits the typical model of learning on demand: whenever we encountered a problem, we had to find a way to solve it ourselves as quickly as possible, either by collaborating with other people at the center or by seeking information from other resources (books, the internet). A typical example was the problem of capturing the contents of a web browser window as a screen image for our 3D bus stop. After an intensive search through various resources, we were still unable to find a satisfactory solution. As we learned in the Independent Project on Collaboration, a great group has to extend its base of participants and gather as much information as possible from external resources; one example of such information management mentioned in class is Experts-Exchange. To reach out and find a solution to our problem, we posted our question on Experts-Exchange. Our experience also shows that the symmetry of ignorance can exist in every collaborative situation.


Vertical Integration:

The group aspect of this project has actually been a subtle side-goal not mentioned previously. Jing, a sophomore Computer Science undergraduate, and Jon, a Master's candidate, did not always work as a pair of programmers but more often took on informal learner and teacher roles (respectively), showing that "teacher" is a function of context rather than of the person. Known as vertical integration, this crossing of grade levels allows graduate students to share what they have learned with undergraduates, enriching the experience overall.



Technical Overview:

It should be said (again) that this project's goals span more than just a semester's work. The sections above describe the long-term goals; over a little more than two months we have taken some of the first steps toward achieving them, as the photographs on the Updates pages show. We have converted a CAD model to a format we can work with, taken photographs to be used as textures, used image editing tools to turn those photographs into textures for the models, used modeling tools to define how the textures are applied to the models, and developed code from scratch to read model files and texture files and to display elements of the scene via a hierarchical scene tree (commonly called a scene graph).

The user can navigate freely through the environment, which includes a bus stop, an animated bus, a ground plane, a 360-degree sky dome, a user-placed web browser, and a virtual hand that moves as the user moves the hand-held input device known as the wand. With this virtual hand, the user may reposition the virtual browser window, which is ultimately to become a virtual touch screen used to plan a bus trip itinerary. The user may execute gross navigation using the joystick located on the wand, but can also explore the scene locally simply by walking around the 12'x12' floor of the immersive projection cube.
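The joystick-driven "gross navigation" described above might look roughly like the following. The source does not give the project's actual navigation code or wand API, so the names (Viewpoint, navigate), the axis conventions, and the speed constants here are all assumptions; this is only a sketch of the technique:

```cpp
#include <cassert>
#include <cmath>

// Viewpoint state: a position on the ground plane plus a heading angle.
// (Assumed representation; heading 0 is taken to face the -z direction.)
struct Viewpoint {
    double x = 0, z = 0;   // position in metres
    double heading = 0;    // radians
};

// Advance the viewpoint from the wand joystick each frame: the x axis
// turns, the y axis moves forward/back along the current heading, and
// both are scaled by the frame time dt. Speeds are illustrative values.
void navigate(Viewpoint& vp, double joyX, double joyY, double dt) {
    const double turnSpeed = 1.5;  // rad/s at full deflection (assumed)
    const double moveSpeed = 2.0;  // m/s at full deflection (assumed)
    vp.heading += joyX * turnSpeed * dt;
    vp.x += std::sin(vp.heading) * joyY * moveSpeed * dt;
    vp.z -= std::cos(vp.heading) * joyY * moveSpeed * dt;
}
```

Applying the resulting viewpoint as a world transform each frame, on top of the head-tracked position, lets joystick travel and physical walking within the cube compose naturally.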

This enumeration of elements and tasks is only presented to give the reader some insight into the effort involved in such a project. When it all comes together as one elegant system, it is easy to forget the energy already invested and the past obstacles overcome that have resulted in newly constructed knowledge that allows you to move forward at an ever-faster pace: everything becomes elementary in hindsight.

Technical Details:

The following UML diagram shows the main classes of the application.



The scene graph is an n-ary tree consisting of a root SceneNode and n child SceneNodes, which in turn may have children. Each SceneNode may point to a RenderObject, which references model geometry and texture information. Although it is not apparent to the user, each object in the bus stop is an instance of RenderObject and a separate node in the scene graph, so that any element of the scene may be positioned independently. A scene graph hierarchy was chosen because it allows logical connections between elements and intuitive placement thereof: bus wheels are children of the bus object, so that when the bus is repositioned the wheels move along with it. Another classic example: if you rotate a person's upper arm, the lower arm and hand should move too. In the bus stop environment, all pieces of the stop itself are children of the main central metal post and awning, so that moving this object moves the entire bus stop; bench objects have no children, so they may be manipulated independently without changes propagating through the tree.
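A minimal sketch of such a scene graph, using plain translations in place of full transformation matrices for brevity. The SceneNode name comes from the report, but the members and methods shown are assumptions, not the project's actual code:

```cpp
#include <cassert>
#include <memory>
#include <utility>
#include <vector>

// A 3-D offset; the real system would use a full 4x4 transform.
struct Vec3 {
    double x = 0, y = 0, z = 0;
};

Vec3 operator+(const Vec3& a, const Vec3& b) {
    return {a.x + b.x, a.y + b.y, a.z + b.z};
}

// Each node stores its position relative to its parent, so moving a
// parent implicitly moves every descendant (wheels follow the bus).
struct SceneNode {
    Vec3 local;                    // offset from the parent node
    SceneNode* parent = nullptr;
    std::vector<std::unique_ptr<SceneNode>> children;

    SceneNode* addChild(Vec3 offset) {
        auto child = std::make_unique<SceneNode>();
        child->local = offset;
        child->parent = this;
        children.push_back(std::move(child));
        return children.back().get();
    }

    // World position = sum of local offsets up the chain to the root.
    Vec3 world() const {
        return parent ? parent->world() + local : local;
    }
};
```

With this structure, attaching a wheel to the bus and then repositioning the bus moves the wheel automatically, while a bench attached directly to the root is unaffected; this mirrors how new elements are added to the scene by attaching another SceneNode and positioning it.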

Evaluation of the system:

We feel that, given that we are one of the first student projects at the BP Center for Visualization, we have made sufficient progress. The system gives the user a very good idea of how realistic and effective an immersive 3D environment can be. Also, the design of the system is open enough that any 3D model data converted to our file format may be loaded and displayed relatively easily; new elements may be added to the scene by simply attaching another SceneNode to the scene graph and positioning it accordingly.

Potential further developments:

We would certainly be interested in seeing the project through toward its longer-term goals as described in the previous sections. A few shorter-range goals that we would like to accomplish soon are: continuous browser window capture (already in progress), user interaction with the virtual browser, collision detection, user modifiability of all scene elements, scripted animation, XML scene file support, and 3D positional audio.


References and Resources:

BP Center for Visualization

Mobility for All

Center for Lifelong Learning and Design