Final Project for COMS W4172 3D User Interfaces (Highest Score)
An AR gallery showcasing media from the Making & Knowing Lab (an interdisciplinary research and pedagogical initiative in the Center for Science and Society at Columbia University).
I. Team Roles
Team: Davide Zhang, Xuanyuan Zhang, Ayer Chan, Jason Perez

Role in Team: Preliminary Research, Design Resources Collection, Virtual Gallery UX Design and Engineering, iOS Deployment and Optimization, Walk-Through Video

Concept Collage

II. Background and Opportunity
Utilizing the latest Vuforia technology, we (Team ARts&Crafts) have developed an AR application that showcases the discoveries uncovered by the Making and Knowing Lab (a prestigious research and pedagogical initiative in the Center for Science and Society at Columbia University). 

Making & Knowing AR Gallery is designed for all ages, as much of the information gathered by the lab has been made available to the public. This application focuses on the immersive, visual aspects of augmented reality – specifically those taught in COMS W4172 3D User Interfaces.

Our real hope for this project is to build a bridge between AR and the educational elements found in the research of the Making and Knowing Lab. This application has the potential to engage students and the general public at a deeper level: by combining an interactive photo/video gallery with virtually enhanced quizzes, we believe this app provides an initial framework for future AR applications that build on research done in university labs.

III. Design Details and Decisions
1. Start Scene Heuristics
The button design was derived from the interface used by Microsoft in their HoloLens device [0]. Buttons are represented by large, easy-to-read tiles that allow the user to understand the interface right from the beginning. The large surface area also lets users make their intended selections accurately without spending a long time navigating the interface. The button typeface was chosen for its legibility both up close and at a distance. Our overall objective for this project was to create a simple, aesthetically pleasing 3D user interface that would engage users and encourage them to move around and explore the virtual gallery.
2. Selection Tool
For selection, our team decided to use the Vuforia cylinder image target (provided by the class). This cylinder was ultimately the best tool available, as Vuforia can track it accurately at all positions and angles. For customization, we opted to model our selection wand on the shape and color of a simple No. 2 pencil for its educational aesthetic. As with a real pencil, the user points at and selects different objects in the application using the pencil “tip”. The pencil tip acts as a GameObject collider and triggers events that handle selection and manipulation. The figure below shows what the object looks like virtually.
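The report does not include the actual Unity trigger code, but the tip-collision idea can be illustrated with a minimal, language-neutral sketch. Everything here is an assumption for illustration: the names `PENCIL_TIP_RADIUS`, `tip_hits`, and `on_frame` are ours, and the spheres stand in for whatever colliders the app actually uses.

```python
import math

PENCIL_TIP_RADIUS = 0.01  # assumed tip collider radius, in meters

def tip_hits(tip_pos, target_pos, target_radius):
    """Sphere-sphere overlap test: does the pencil tip touch a selectable object?"""
    dx, dy, dz = (tip_pos[i] - target_pos[i] for i in range(3))
    return math.sqrt(dx * dx + dy * dy + dz * dz) <= PENCIL_TIP_RADIUS + target_radius

# Selection fires on the *enter* transition, like a trigger-enter event,
# so holding the tip inside a button does not re-fire it every frame.
was_inside = False

def on_frame(tip_pos, button_pos, button_radius, on_select):
    global was_inside
    inside = tip_hits(tip_pos, button_pos, button_radius)
    if inside and not was_inside:
        on_select()  # fire the selection event once, on entry
    was_inside = inside
```

The enter-transition check mirrors how trigger events behave in a game engine: the selection handler runs once when the tip first touches the button's collider.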
Selection Tool Heuristics
A number of factors went into the development of the pencil wand. The original idea stemmed from an early virtual reality experiment using a headset and wand controller [1], which proposed that users wield a virtual wand to select and manipulate objects at will. We extended this idea using techniques tried in later AR/VR experiments [2]. The end result is a wand designed to encourage the user to select objects in a particular orientation (cylinder image target upright in hand). Pencils are ubiquitous in classrooms, and we hope that students who download this app will find the intuition behind the pencil wand easy to grasp.
3. Selection Panel and Visualization of Earth
Once the user presses the virtual “Start” button, they enter the “country selection” phase. A series of button panels appears next to an Earth model, offering a choice of countries such as Greece, Egypt, and England. Once the user moves the wand toward a desired country, two buttons, “Confirm” and “Cancel”, appear. “Cancel” discards the user's selection, while “Confirm” confirms it and redirects the user to the virtual gallery for that country.

Whenever the user selects a country, an animation plays: the Earth model rotates so that the geographical location of the country faces the user's point of view. Country locations are annotated with pins that turn from red to yellow once selected from the button options. If “Cancel” is hit, the pin turns back to red and the Earth model slowly rotates back to its original orientation; once the rotation finishes, the country options reappear for the user to select. If “Confirm” is selected, the button panels and Earth model disappear from view and the virtual gallery begins initializing.
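The rotate-to-face-the-user animation can be sketched as simple angle math. This is a hedged illustration, not the app's code: the pin coordinates in `PINS` are rough illustrative values, and the real app presumably interpolates a quaternion rather than two Euler angles.

```python
# Hypothetical country pins (latitude, longitude in degrees) -- illustrative only.
PINS = {"Greece": (39.0, 22.0), "Egypt": (26.0, 30.0), "England": (52.5, -1.5)}

def target_rotation(country):
    """Yaw/pitch (degrees) that bring a pin to face the viewer: spin the globe
    by -longitude about its axis, then tilt it by -latitude."""
    lat, lon = PINS[country]
    return (-lon, -lat)  # (yaw, pitch)

def animate(current, target, speed_deg, dt):
    """Step each angle toward its target at a fixed angular speed (deg/s),
    snapping exactly onto the target on the final step."""
    out = []
    for cur, tgt in zip(current, target):
        step = speed_deg * dt
        if abs(tgt - cur) <= step:
            out.append(tgt)
        else:
            out.append(cur + step if tgt > cur else cur - step)
    return tuple(out)
```

Running `animate` once per frame produces the slow rotation described above; calling it with the original orientation as the target gives the “Cancel” rewind for free.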

Selection Panel Heuristics
As before, the country selection panel follows the same principles as the start menu: tiles are easy to read, have a large surface area, and are responsive to user selection. With country selection, however, we opted for a confirm/cancel sequence to help the user undo any erroneous selection [4]. This gives users the freedom to pick exactly which country they wish to learn more about, and the rotation of the Earth model helps users understand the country selection through visual indicator aids.
4. Virtual Gallery
Overview
The virtual gallery consists of media objects (images and videos) floating in space, aligned along the z-axis. From the user's perspective, the images and videos appear in front of them. The space is divided into three sections: videos on the left, images on the right, and a central path.
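The three-lane layout can be generated procedurally. The sketch below is a minimal illustration under assumed constants (`LANE_X`, `SPACING_Z`, `FIRST_Z` are our own placeholder values, not the app's actual dimensions).

```python
# Assumed layout constants -- the actual Unity values are not in the report.
LANE_X = 1.5      # lateral offset of each lane from the center path (m)
SPACING_Z = 2.0   # distance between consecutive exhibits along z (m)
FIRST_Z = 2.0     # z of the first exhibit in front of the start point

def layout(videos, images):
    """Place videos on the left lane and images on the right lane,
    spaced evenly along +z, leaving the center path clear."""
    positions = {}
    for i, name in enumerate(videos):
        positions[name] = (-LANE_X, 0.0, FIRST_Z + i * SPACING_Z)
    for i, name in enumerate(images):
        positions[name] = (+LANE_X, 0.0, FIRST_Z + i * SPACING_Z)
    return positions
```

Each exhibit would then be rotated to face the user's position rather than the center path, per the design decision below.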

We chose to have the images/videos face the user rather than the center path: although the latter resembles the angle of works hung on a wall, the former is truer to their nature as floating multimedia objects. One might argue that the wall-like angling better satisfies the “match between system and the real world” heuristic [6]. We argue that having the objects face the user satisfies this heuristic better, because a gallery viewing experience is characterized more by viewing works in front of you than off to the side.

Selection
The user can play and pause each video by clicking the 3D play/pause button located at the lower left of every video. Once the pencil tip hits the button, the button's icon changes accordingly to indicate play or pause.

The user can also read the blurbs located next to the images and videos, and jump to the next page by touching the 3D page up/down buttons below the blurbs. Finally, the user can touch the portal at the end of the gallery to enter the quiz, which is located back at the starting point.

Wayfinding
Wayfinding is achieved through a country-specific 3D miniature map [7] of the entire gallery that follows the pencil (wand) and can be toggled on or off via the 2D button “MAP” or “Hide Map.” Each video or image is represented by a white box. The user can touch the white box representing the image or video they would like to visit, and a path consisting of arrows leading to that image/video is created.

The touched box turns yellow to indicate that the user has selected that image/video. The path also features an animation effect to better aid the user and strengthen the wayfinding feature. Clicking another white box updates the path. It is worth noting that the user can invoke the miniature map at any time, as long as the pencil is in the camera's field of view.
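Generating the arrow path can be reduced to dropping evenly spaced markers between the user and the chosen exhibit. This is a simplified stand-in for whatever routing the app actually uses (a real route would presumably follow the center path around other exhibits); `ARROW_SPACING` is an assumed value.

```python
import math

ARROW_SPACING = 0.5  # assumed distance between consecutive path arrows (m)

def arrow_positions(start, goal):
    """Drop evenly spaced arrow markers along the straight line from the
    user's current position to the selected exhibit."""
    sx, sy, sz = start
    gx, gy, gz = goal
    dist = math.dist(start, goal)
    n = int(dist // ARROW_SPACING)  # how many arrows fit on the segment
    return [(sx + (gx - sx) * t, sy + (gy - sy) * t, sz + (gz - sz) * t)
            for t in (k * ARROW_SPACING / dist for k in range(1, n + 1))]
```

Re-running this whenever a different white box is touched gives the path-update behavior; animating the arrows (e.g. cycling their visibility in order) adds the motion cue described above.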

Mini Map
Path Animation 1
Path Animation 2
Path Animation 3
Travel
Travel is achieved by choreographing the user's paths within the gallery and the transition from the gallery to the quiz section. Specifically, when the user touches the portal at the far end of the gallery with the pencil, the quiz appears at the position where the user originally started. This directs the user back to the starting point, so the app does not require an extremely large physical space, and visiting all three galleries does not extend the space forward without bound. This concept of redirection is reminiscent of the research project on redirection by change blindness (E. Suma et al., VR 2011) [10], except that here the redirection is not hidden from the user.
Virtual Gallery Heuristics
The most prominent heuristic we reference here is Match between system and the real world, since the way the images and videos are displayed, and the way the user moves, are natural: the images and videos float in the air and have thickness to resemble painted canvases. In a sense, they are floating paintings and floating movie screens. We also considered the User Control and Freedom heuristic, because we implemented “back button” functionality in all our locations; in this specific case, the user can return to the country selection scene from the gallery to visit a different gallery. The miniature 3D map is a smaller visual representation of the entire gallery: the user can recognize a desired object by its position relative to the others, rather than relying on text labels, which references the Recognition rather than Recall heuristic. Finally, the minimalist dynamic path (with animation) and the abstracted box representations of images and videos reference the Aesthetic and minimalist design heuristic.
5. Quizzes
To enter the quiz section of the app, the user walks through a “portal” - a black doorway that acts as an AR portal mechanism. Walking through this doorway “teleports” the user and gives them a full view of the quiz area. The portal idea was drawn from a paper by R. Pausch and colleagues [5] on mapping virtual environments within constrained physical environments. Essentially, by “teleporting” the user into a different scene, we let them remain in the same physical place as before, which limits the space needed to transition from the gallery to the quiz scene.

Once the scene is initialized, the user finds themselves in front of a multiple-choice question that tests the knowledge they gained in the previous gallery.
Design: placing quizzes around users in virtual space
Quizzes are intentionally positioned in all four directions around the user: front, back, left, and right. This is designed to simulate a classroom environment and encourage users to treat it as a real quiz scenario. The idea was derived from the paper Virtual Environment Display System by S. Fisher, M. McGreevy, J. Humphries, and W. Robinett [9], which found through VR experiments that control panels and data windows work best when positioned in 3D space in view of a head-tracked user. We designed the quiz section to match these findings and thus create a more engaging, interactive 3D learning experience, which we believe significantly increases users' enjoyment of the quiz.
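The four-way placement is straightforward to compute. A minimal sketch, assuming panels sit at a fixed radius (`QUIZ_DISTANCE`, our own placeholder value) on the compass points around the user:

```python
import math

QUIZ_DISTANCE = 1.5  # assumed radius of the quiz "classroom" around the user (m)

def quiz_panel_positions(user_pos):
    """Place one quiz panel in front of, to the right of, behind, and to the
    left of the user, all at the same distance and eye height."""
    x, y, z = user_pos
    headings = {"front": 0, "right": 90, "back": 180, "left": 270}  # degrees
    return {name: (x + QUIZ_DISTANCE * math.sin(math.radians(h)),
                   y,
                   z + QUIZ_DISTANCE * math.cos(math.radians(h)))
            for name, h in headings.items()}
```

Each panel would additionally be rotated to face the user, so turning in place brings a new question into view, matching the head-tracked arrangement described above.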
Design: image decrease in scale after selection
When the user begins image selection, an image-plane manipulation technique temporarily scales down the entire image, so that it becomes small enough to control efficiently and does not occlude the user's view. This was inspired by the work on pointer-based object manipulation in the paper by M. Mine, F. Brooks, and C. Sequin [8]. Integrating grab-and-point selection enables quick, highly responsive object manipulation. We also tweaked the object release to execute only when the user has answered the question correctly: an incorrect answer choice does not trigger the release, so the user must move on to the next choice.
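The shrink-on-grab and release-on-correct-answer behavior amounts to a small state machine. A hedged sketch (the class name, `SELECTED_SCALE` value, and method names are ours, not the app's):

```python
SELECTED_SCALE = 0.4  # assumed shrink factor while an image is being manipulated

class QuizImage:
    """Shrinks while grabbed; restores full size only after a correct answer."""

    def __init__(self):
        self.scale = 1.0
        self.held = False

    def grab(self):
        """Pencil tip grabs the image: shrink it so it doesn't occlude the view."""
        self.held = True
        self.scale = SELECTED_SCALE

    def answer(self, correct):
        """Release the image only on a correct answer; return True if released."""
        if self.held and correct:
            self.held = False
            self.scale = 1.0
        return not self.held
```

In the real app the scale change would be animated over a few frames rather than applied instantly, but the gating logic is the same.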
Design: travel through selection of yellow arrows and rotation
The quiz section implements virtual travel techniques that let the user navigate between quizzes without moving around needlessly. This follows the classification of travel tasks discussed in the third lecture of the class.
Quiz Heuristics
In most cultures, doorways are understood to open onto new areas for those exploring an environment. The black doorway thus encourages the user to walk through and be transported to a different area. The same principles used in the start menu, country selection, and gallery also apply to the quiz section. The real strength of the quiz is that it gives the user an enjoyable, fun experience answering questions through direct manipulation of images. Fun, colorful materials further encourage participation when picking answers. The “Go Back to Start” button also gives users the option to restart the app if desired.
6. References
[0] S. Karthika et al. In International Journal of Computer Science and Mobile Computing Vol.6 Issue 2, pp. 41-50, 2017.

[1] D. Vickers. The Sorcerer’s Apprentice: Head-Mounted Display and Wand. Ph.D. Dissertation, University of Utah, 1972.

[2] L. Roberts. The Lincoln Wand. In proceedings of the AFIPS Conference, 1966.

[3] A. Olwal & S. Feiner. The flexible pointer - An interaction technique for selection in augmented and virtual reality. In UIST, 2003.

[4] K. Kiyokawa. A wide field-of-view head mounted display using hyperbolic half-silvered mirrors. In proceedings of ISMAR, 2007.

[5] R. Stoakley, M. Conway, and R. Pausch. Virtual Reality on a WIM: Interactive Worlds in Miniature. In proceedings of the ACM Conference on Human Factors in Computing Systems, 1995. 

[6] K. Herndon et al. Interactive Shadows. In proceedings of UIST, 1992.

[7] W. Robinett and R. Holloway. Implementation of Flying, Scaling, and Grabbing in Virtual Worlds. In proceedings of symposium on Interactive 3D graphics, pp. 189-192, 1992.

[8] M. Mine, F. Brooks, and C. Sequin. Moving objects in space: Exploiting proprioception in virtual environment interaction. In proceedings of ACM SIGGRAPH, 1997.

[9] S. Fisher, M. McGreevy, J. Humphries, and W. Robinett. Virtual environment display system. In Workshop on Interactive 3D Graphics, pp. 77-87, 1986.

[10] E. A. Suma, S. Clark, D. Krum, S. Finkelstein, M. Bolas, and Z. Warte. Leveraging change blindness for redirection in virtual environments. In proceedings of the IEEE Virtual Reality Conference (VR '11), pp. 159-166, 2011.