These are all the demos for ISMAR 2006:
Tentative Floorplan: demos_floorplan.pdf.
ARiSE is an innovative teaching aid based on Augmented Reality technology. It enables teachers to develop, with moderate effort, new teaching practices and curricula that bring scientific and cultural content to school classes in an easy-to-comprehend way. At the core of ARiSE is an Augmented Reality Teaching Platform named ARTP.
ARTP adapts existing augmented reality (AR) technology for museums to the needs of students in primary and secondary school classes. 3D presentations and user-friendly interaction techniques foster a better understanding of scientific and cultural content while strongly motivating students to participate and to work together in small groups. Students can interact jointly with virtual objects in a shared virtual space provided by an AR display system, performing learning by doing instead of learning by reading or listening. ARTP additionally enables teamwork, collaboration between classes in the same school, and even remote collaboration between schools in different countries, all in a learner-centered approach.
This project is funded by the European Union through the IST programme under FP6 with the contract number IST-027039.
Keywords: augmented reality, 3D graphical augmentation, object overlay, mobile augmented reality display system, visualization, 3D modeling, human computer interface, interaction devices, authoring tool, collaborative learning, AR for education, training, and group work.
This demo presents MARA, a sensor-based mobile augmented reality system. MARA implements hand-held, video see-through augmented reality on Nokia S60 mobile imaging devices equipped with additional sensors, shown in Figure 1. The system uses the sensors as follows: a GPS receiver provides position, accelerometers provide relative orientation, and a tilt-compensated magnetometer determines heading. The device's on-board camera is used for image acquisition and the on-board screen for rendering, including annotations. All annotation data and additional map images are downloaded from external services on the Internet via a cellular network connection. The system is based on a light-weight, portable standard platform and requires no additional devices beyond the sensors. The platform also has excellent network connectivity and great potential for multimodality.
Keywords: Sensor based, mobile augmented reality, mobile imaging device
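As a rough illustration of the tilt-compensated heading computation that MARA's sensor setup implies, the following sketch (our own, with assumed axis and sign conventions, not the MARA code) uses the accelerometer's gravity estimate to rotate the magnetometer reading back into the horizontal plane:

```python
import math

def tilt_compensated_heading(ax, ay, az, mx, my, mz):
    """Heading from accelerometer (gravity) and magnetometer readings.
    Axis and sign conventions are assumed for illustration and will
    differ between devices."""
    # Roll and pitch from the gravity vector measured by the accelerometer.
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, ay * math.sin(roll) + az * math.cos(roll))
    # Rotate the magnetic field vector back into the horizontal plane.
    xh = (mx * math.cos(pitch)
          + my * math.sin(pitch) * math.sin(roll)
          + mz * math.sin(pitch) * math.cos(roll))
    yh = my * math.cos(roll) - mz * math.sin(roll)
    # Heading in degrees, [0, 360), relative to magnetic north.
    return math.degrees(math.atan2(-yh, xh)) % 360.0
```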
This demonstration showcases a robust tracking technology for urban outdoor environments. Users will be able to operate a hand-held unit to see and interact with information registered to the surrounding environment. A simple location based game will be part of the demonstration to show the performance of the system. The system uses a novel edge-based tracker which dispenses with the conventional edge model, using instead a coarse textured 3D model. This yields several advantages: scale-based detail culling is automatic, appearance-based edge signatures can be used to improve matching and the models needed are more commonly available. Furthermore, a back store of reference frames with automatic frame selection jump-starts the edge-based tracker after dynamic occlusions or failures.
Keywords: outdoor augmented reality, model based tracking, sensor fusion
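The moving-edges style of search that such edge-based trackers typically perform can be sketched as follows; this is a generic illustration with hypothetical inputs, not the demo's implementation. Sample points on projected model edges each search along their image normal for the strongest gradient response:

```python
import cv2
import numpy as np

def search_edges_along_normals(gray, points, normals, search_range=10):
    """For each projected model edge sample point, search along its 2D
    normal for the strongest image gradient. `points` and `normals` are
    (N, 2) float arrays; names and inputs are illustrative."""
    # Image gradients via Sobel filters.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    h, w = gray.shape
    matches = []
    for (px, py), (nx, ny) in zip(points, normals):
        best, best_pt = 0.0, None
        for t in range(-search_range, search_range + 1):
            x = int(round(px + t * nx))
            y = int(round(py + t * ny))
            if 0 <= x < w and 0 <= y < h:
                # Gradient component along the normal direction.
                g = abs(gx[y, x] * nx + gy[y, x] * ny)
                if g > best:
                    best, best_pt = g, (x, y)
        matches.append(best_pt)
    return matches
```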
The demo shows the results of the camera pose estimation framework presented in the paper of the same title. The tracking approach does not depend on any preprocessed data of the target scene; only a polygonal model of one object in the scene is needed to initialize tracking. A line model is created by rendering the object from a given camera pose and is registered to the image gradient to find the initial pose. In the tracking phase, the camera is no longer restricted to the modeled part of the scene: the scene structure is recovered automatically during tracking. Point features are detected in the images and tracked from frame to frame using a brightness-invariant template matching algorithm. Several template patches are extracted from different levels of an image pyramid to make the 2D feature tracking robust to large changes in scale. Occlusion is detected already at the 2D feature tracking level. The features' 3D locations are roughly initialized by linear triangulation and then refined recursively over time within an Extended Kalman Filter framework. A quality manager handles the influence of each feature on the camera pose estimate. As structure and pose recovery are always performed under uncertainty, statistical methods for estimating and propagating uncertainty are consistently incorporated into both processes.
Keywords: marker-less tracking, real-time camera pose estimation, reconstruction
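Zero-mean normalized cross-correlation (ZNCC) is a standard score for brightness-invariant template matching of the kind described above; whether the authors use exactly this measure is our assumption. A minimal sketch:

```python
import numpy as np

def zncc(patch, candidate):
    """Zero-mean normalized cross-correlation between two equally sized
    image patches. Subtracting the mean and dividing by the norms makes
    the score invariant to affine brightness changes."""
    a = patch.astype(np.float64) - patch.mean()
    b = candidate.astype(np.float64) - candidate.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 1e-12 else 0.0
```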
Marker-based optical tracking systems are widely used in augmented reality, medical navigation, and industrial applications. We propose a model for predicting the target registration error (TRE) in such tracking systems by estimating the fiducial location error (FLE) from two-dimensional errors on the image plane. We have designed a set of experiments to estimate the actual parameters of the model for any given tracking system. We present the results of a study demonstrating the effect of different sources of error. The method is applied to real applications to show its usefulness for any kind of augmented reality system. We also present a set of tools for visualizing the expected accuracy at design time.
Keywords: Optical Tracking, Accuracy Estimation, Error Propagation, Error Prediction, Target Registration Error
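For context, the classic model of this FLE-to-TRE relationship is Fitzpatrick's formula, which predicts the expected TRE at a target point r from the FLE, the number of fiducials N, the target's distances d_k from the three principal axes of the fiducial configuration, and the fiducials' RMS distances f_k from those axes (we assume the demo builds on a model of this general form):

\[
\langle \mathrm{TRE}^2(\mathbf{r}) \rangle \;\approx\; \frac{\langle \mathrm{FLE}^2 \rangle}{N}\left(1 + \frac{1}{3}\sum_{k=1}^{3}\frac{d_k^2}{f_k^2}\right)
\]

Intuitively, the error shrinks with more fiducials and grows as the target moves away from the fiducial configuration.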
This AR system is an entertainment tool that allows users to watch a "virtual baseball game" on a real tabletop field model through a web camera attached to an LCD monitor. Virtual 3D players and a ball are overlaid onto the tabletop field model, on which multiple planar markers are distributed.
The game scenes are replayed from the input history data of an actual game. Current baseball websites provide details about the day's games, but such data (player and ball movement, etc.) is not easy for users to read, and a conventional scorebook is not much help for understanding the course of a game either. In contrast, users can easily follow the game by watching it replayed by our system's virtual 3D players.
For accurate overlay of the baseball game scenes onto the real world captured by the web camera, the camera's rotation and translation are estimated from multiple planar markers distributed at arbitrary positions and orientations in the real world. First, the geometric arrangement of the markers is estimated via multiple projective 3D spaces constructed from a few reference images, so users do not have to measure the markers in advance. Since the markers can be placed at any position and orientation, users can freely distribute them and watch the virtual baseball game from various viewpoints while moving over a wide area. Flexible marker arrangements also enable stable registration of the virtual objects.
This system provides a new style of watching and enjoying baseball.
Keywords: multiple planar markers, projective space, baseball
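Once the camera intrinsics K are known, the pose relative to a single planar marker can be recovered from the marker-to-image homography by a standard decomposition; the demo's multi-marker estimation via projective spaces is more elaborate, so this sketch only illustrates the underlying single-marker case:

```python
import numpy as np

def pose_from_marker_homography(H, K):
    """Recover rotation R and translation t from a homography H mapping
    a planar marker's coordinates (z = 0 plane) to the image, given
    intrinsics K. Sign conventions assume the marker is in front of
    the camera."""
    A = np.linalg.inv(K) @ H
    # Scale so the first two columns approximate unit rotation columns.
    lam = 1.0 / np.linalg.norm(A[:, 0])
    r1, r2, t = lam * A[:, 0], lam * A[:, 1], lam * A[:, 2]
    r3 = np.cross(r1, r2)
    R = np.column_stack([r1, r2, r3])
    # Re-orthonormalize via SVD to get the closest true rotation matrix.
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt, t
```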
We demonstrate a simple-to-use camera calibration system that can handle multiple cameras whose fields of view do not necessarily overlap. It estimates the geometry of the cameras, their photometric responses, and an environmental lighting map. The only manual intervention required is waving an arbitrarily textured planar pattern in front of the cameras. In other words, in a single operation, our system yields all the information sophisticated Augmented Reality applications need to draw virtual 3D objects at the right locations and then light them convincingly.
The user moves a calibration pattern in front of the cameras connected to a simple laptop running our software. We use a powerful computer vision technique to detect it in the images both robustly and in real-time. This provides homographies between the pattern and the images, which are then used to compute the intrinsic camera parameters. We also use them to compute the poses of the cameras with respect to each other. Finally, we rely on the intensity variations within the pattern and across the images to achieve photometric calibration.
The video streams are augmented in real-time and the light map is updated to reflect changes in the lighting environment.
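A minimal sketch of the homography-based (Zhang-style) geometric calibration loop, using OpenCV's checkerboard routines as a stand-in for the demo's arbitrarily textured pattern and omitting its photometric stages; pattern size and square length are assumed:

```python
import cv2
import numpy as np

pattern = (9, 6)          # inner corner grid of a printed checkerboard (assumed)
square = 0.025            # square edge length in meters (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points, size = [], [], None
cap = cv2.VideoCapture(0)
while len(obj_points) < 20:   # user waves the board in front of the camera
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
cap.release()

# Each detected view contributes one plane-to-image homography; OpenCV
# solves for the shared intrinsics K and a pose per view.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, size, None, None)
```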
The ARTag fiducial marker system can be used with 2D and 3D arrays of markers to create several different Augmented Reality (AR) interaction paradigms. Three of these are presented in this ISMAR'06 demo: a magic mirror system; a table-top system in which users look at a planar array of markers with PDAs and tablet PCs; and a prototype of the Webtag system from the ISMAR'06 poster paper. With the first, users hold and wear objects adorned with ARTag markers and then see an image of themselves with 3D augmented content in different themes (medieval, sci-fi, underwater, historical). With the second, visitors view a 3D animated scene on PDAs and tablet PCs by looking at a planar table-top marker array.
We demonstrate a prototype mobile augmented reality user interface for automated identification of botanical species in the field, developed for the Columbia University, University of Maryland, and Smithsonian Electronic Field Guide Project. Leaf images are captured with a head-worn digital video camera. A computer vision component developed by our colleagues finds the best set of matching species, and we present the results as virtual representations along the side of a clipboard next to the physical leaf sample. A tangible card morphs into a given plant when placed over it. Virtual representations can be inspected, and their semantics (such as SEM images and full plant images) can be changed through gestures and movement through spatial zones. Samples are matched with existing species or marked unknown for further study. The system is being tested by Smithsonian Institution botanists.
Keywords: electronic field guide, augmented reality, mobile computing, wearable computing, tangible user interface, computer vision, botanical species identification
We will demonstrate a novel method for measuring the position and orientation of physical "tags" on a tabletop display. The method uses a set of photo sensors and accelerometers embedded in each tag to observe fiducial marker patterns shown on the display and to predict the tag's upcoming position. Our proposed marker pattern makes the physical tags smaller and less obtrusive than previously proposed ones.
Keywords: tangible tabletop interface, position and orientation sensing, photo sensor, accelerometer, fiducial marker
We propose a system that uses Augmented Reality to help players learn the guitar. The system overlays visual aids onto the real guitar: a virtual hand model showing how to hold the strings, and lines marking the correct fretting positions. The player can play intuitively by aligning their own hand with the visual aids. The key requirement is accurate registration between the visual aids and the guitar, so the guitar's pose and position must be tracked. We propose a tracking method that combines a visual marker with natural features of the guitar. By using both marker and edge information, the system tracks the guitar stably, so the visual aids are consistently displayed at the proper positions and the player can use the system in a natural manner.
Keywords: support system for guitar playing, accurate registration
We present a demo running on UMPC and smartphone platforms that presents data extracted from a GIS database. The UMPC devices use Context Sensitive Magic Lenses, a technique accepted for publication at ISMAR, and also support an indoor version of the Going-Out robust model-based tracking, likewise accepted for publication at ISMAR. The smartphones support self-contained tracking via the ARToolkitPlus technology and additionally feature pixel-flow support for the fiducial tracking. All devices run a light-weight version of our Studierstube platform.
Keywords: UMPC, Smartphones, GIS, Magic Lenses, ARToolkitPlus, Robust Tracking
The Augmented Painting is an interactive tabletop augmented reality setup consisting of a computer, a webcam, and an observed area containing several background objects. A special black-and-white marker pattern for camera tracking is also placed in the observed scene. The user can interactively manipulate the placement of the camera and the objects, and can add or remove objects. An augmented version of the observed scene with added virtual graphical models is shown on the computer monitor. The virtual objects are rendered with correct spatial alignment relative to the tracking marker. Unlike in conventional augmented reality, however, the output image is a stylized version of the observed scene: both the real camera image and the virtual models are shown in the same artistic or illustrative style. The user can choose between three types of stylization: a cartoon-like style made up of colored patches and silhouettes; a painterly style of small brush strokes recreating pointillism; and a black-and-white technical illustration style. All of these stylized augmented video streams are rendered in real time, providing a truly interactive user experience. Because of the stylization, the virtual and real parts of the augmented image are difficult to distinguish. The resulting digital "still life", or "augmented painting", offers the observer a novel experience of an augmented environment and of digital art.
Keywords: Augmented reality, non-photorealistic rendering, artistic and illustrative stylization
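As an illustration of the cartoon-like style (colored patches plus silhouettes), a simple CPU approximation can be written with standard image filters; this is our stand-in, not the demo's real-time renderer, which stylizes real and virtual content alike:

```python
import cv2

def cartoonize(bgr):
    """Cartoon-like stylization of a video frame: flat color patches from
    edge-preserving smoothing, plus dark silhouette lines."""
    # Flatten colors while preserving strong edges.
    color = cv2.bilateralFilter(bgr, d=9, sigmaColor=75, sigmaSpace=75)
    # Extract silhouettes from the smoothed luminance channel.
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 7)
    edges = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                  cv2.THRESH_BINARY, blockSize=9, C=2)
    # Keep color only where no silhouette line was detected.
    return cv2.bitwise_and(color, color, mask=edges)
```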
Outdoor Augmented/Mixed Reality applications often require a model of the real environment, either for tracking or for interactive modification. While such models are often constructed manually with 3D modeling tools, no automated acquisition pipeline has been available for this purpose. In urban planning especially, the current state of existing buildings often needs to be available as a 3D model, and since manual model creation is a tedious and time-consuming task, automated 3D reconstruction is clearly needed. We therefore use a human scout, equipped with a camera, GPS, and a UMPC (ultra mobile PC), who delivers a sequence of outdoor images annotated with GPS tracking data. These images are transmitted to a command center where spectators can observe the position of the scout on a pre-calibrated ortho image. During image acquisition, the 3D model is created on the fly using sophisticated 3D reconstruction algorithms and geo-referenced images. The final 3D model can then be projected onto the pre-calibrated ortho map and used for further interaction and modification. Although the reconstruction framework is shown within this scenario, it can also serve any other application where 3D models of urban scenes are required (e.g. feature-based tracking). This demo aims to bridge the gap between computer vision (3D reconstruction in particular) and AR/MR by providing a framework for automated 3D model generation.
Keywords: urban planning, automated model acquisition, 3D reconstruction, scout surveillance
We demonstrate a first prototype of a new Hybrid Vision-Inertial smart sensor which improves on many aspects of the original InterSense InertiaCam™.
Keywords: Hybrid Optical-Inertial Trackers, Smart Cameras, Wide-area and Scalable Tracking, Auto-mapping, Natural Features Tracking.
This demonstration presents our latest mobile outdoor augmented reality system and the applications that currently run on it. Our previous backpack designs were simple collections of commercial components arranged on a frame, optimised for ease of modification rather than size. Our new design is built from many custom components as well as heavily modified commercial components to reduce its size, and can be worn with only a belt if desired. The new design uses technologies such as wireless links to reduce size and weight, and user interfaces that allow the system to be configured when no input devices are available. We will demonstrate applications such as our Tinmith mobile outdoor augmented reality system and our ARQuake game.
In the real world it is difficult to control water as you like, but virtual reality technology makes water controllable. This demonstration provides a mixed reality environment in which users can control virtual water as they wish. Users place markers that have a variety of functions; the markers create a virtual velocity field, and the water moves according to that field. Using these markers, users design their own fountains on the desk. To build this demonstration, we developed a high-speed particle-based fluid dynamics simulation and a rendering method for reflection and refraction with an efficient scheme for imaging the surroundings.
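A toy version of marker-driven particle advection might look like the following; the radial-source field, the names, and the explicit Euler step are illustrative stand-ins for the demo's richer marker functions and high-speed particle solver:

```python
import numpy as np

def step_particles(pos, markers, dt=0.01):
    """Advect 2D fluid particles through a velocity field induced by
    markers, each acting here as a simple radial source or sink."""
    vel = np.zeros_like(pos)
    for center, strength in markers:
        d = pos - center                         # marker-to-particle vectors
        r2 = (d * d).sum(axis=1, keepdims=True) + 1e-6
        vel += strength * d / r2                 # radial source/sink field
    vel[:, 1] -= 9.8 * dt                        # gravity pulls particles down
    return pos + vel * dt                        # explicit Euler step

# Example: 1000 particles, one source at the origin pushing water outward.
pos = np.random.rand(1000, 2)
markers = [(np.array([0.0, 0.0]), 0.5)]
pos = step_particles(pos, markers)
```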
Our demo is a mobile augmented reality system for outdoor annotation of the real world. To reduce user burden, we use aerial photographs in addition to the wearable system's usual data sources (position, orientation, camera and user input). This allows the user to accurately annotate 3D features from a single position by aligning features in both their first-person viewpoint and in the aerial view. At the same time, aerial photographs provide a rich set of features that can be automatically extracted to create best guesses of intended annotations with minimal user input. Thus, user interaction is often as simple as casting a ray from a first-person view, and then confirming the feature from the aerial view. We annotate three types of aerial photograph features - corners, edges, and regions - that are suitable for a wide variety of useful mobile augmented reality applications.
Keywords: wearable system, outdoor augmented reality, annotation, modeling, aerial photograph feature extraction, anywhere augmentation
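Casting a ray from the first-person view and intersecting it with the ground plane, so the hit point can be looked up in the aerial photograph, can be sketched as follows (the flat-ground assumption and all names are ours, not the authors' implementation):

```python
import numpy as np

def ray_ground_intersection(cam_pos, ray_dir, ground_z=0.0):
    """Intersect a camera ray with the horizontal plane z = ground_z,
    returning the world point to look up in the aerial photograph."""
    ray_dir = np.asarray(ray_dir, dtype=float)
    if abs(ray_dir[2]) < 1e-9:
        return None                 # ray parallel to the ground plane
    t = (ground_z - cam_pos[2]) / ray_dir[2]
    if t <= 0:
        return None                 # intersection behind the camera
    return cam_pos + t * ray_dir

# Example: camera 1.7 m above ground, looking slightly downward and north.
hit = ray_ground_intersection(np.array([0.0, 0.0, 1.7]),
                              np.array([0.0, 0.995, -0.1]))
```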
We present a method for stabilizing live video from a moving camera for the purpose of a tele-meeting, in which a participant with an AR view onto a shared canvas collaborates with a remote user. The AR view is established without markers and with no tracking equipment other than a head-worn camera. The remote user can annotate the stabilized video of the local user's view directly, in real time, on a tablet PC. Planar homographies between a reference frame and the following frames are maintained to stabilize the video. This new form of transient AR collaboration can be established easily, on a per-need basis, without complicated equipment or calibration requirements. We present several small demo applications with this setup: collaborative drawing on a whiteboard, annotating a paper document, and playing tic-tac-toe on a sheet of paper.
Keywords: viewpoint stabilization, collaborative augmented reality, marker-less tracking
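A sketch of homography-based stabilization against a reference frame, using ORB features and RANSAC as stand-ins for whatever correspondences the paper's marker-less tracker actually maintains:

```python
import cv2
import numpy as np

orb = cv2.ORB_create(1000)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def stabilize_to_reference(ref_gray, frame_gray):
    """Warp the current frame into the reference frame's view using the
    planar homography between them, so remote annotations stay registered
    to the shared canvas."""
    kp1, des1 = orb.detectAndCompute(ref_gray, None)
    kp2, des2 = orb.detectAndCompute(frame_gray, None)
    matches = bf.match(des1, des2)
    src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    # Robustly estimate the frame -> reference homography.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = ref_gray.shape
    return cv2.warpPerspective(frame_gray, H, (w, h))
```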
This demo presents a system that allows a user to navigate through a pre-captured panoramic environment with inserted virtual objects. Users navigate using a simple mouse-based interface and can identify their geographical location on an on-screen map. Virtual objects inserted into the scene can provide additional information as well as animated or interactive content.
Keywords: panorama, walkthrough, virtual objects
The evolving technology of computer auto-fabrication ("3D printing") now makes it possible to produce physical models of complex biological molecules and assemblies. We report on an application that demonstrates the use of auto-fabricated tangible models and augmented reality for research and education in molecular biology, and for enhancing the scientific environment for collaboration and exploration. We have adapted an augmented reality system to allow virtual 3D representations (generated by the Python Molecular Viewer) to be overlaid onto a tangible molecular model. Users can easily change the overlaid information, switching between different representations of the molecule, displays of molecular properties such as electrostatics, or dynamic information. The physical model provides a powerful, intuitive interface for manipulating the computer models, streamlining the connection between human intent, the physical model, and the computational activity.